Abstract:
In service robot applications, stable grasping requires carefully balancing the contact forces against the properties of the manipulated object, such as its shape and weight. Deducing whether a particular grasp will be stable from indirect measurements, such as vision, is therefore quite challenging, and direct sensing of contacts through tactile sensors provides an appealing avenue toward more successful and consistent robotic grasping. Beyond contact sensing, an object's shape and weight also help determine whether a grasp will be stable. In this work, we investigate whether tactile information and intrinsic object properties aid in predicting grasp outcomes within a multi-modal sensing framework that combines vision, tactile sensing, and intrinsic object properties. To that end, we collected more than 2550 grasping trials using a 3-finger robot hand equipped with multiple tactile sensors. We evaluated multi-modal deep neural network models that directly predict grasp stability from each modality individually or from combinations of modalities. Our experimental results indicate that combining visual input with tactile readings and intrinsic object properties significantly improves grasp prediction performance. © 2018 IEEE.
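For illustration, the sketch below shows one common way to realize the multi-modal prediction the abstract describes: a late-fusion network with separate encoders for the visual, tactile, and object-property inputs, whose embeddings are concatenated and classified as stable or unstable. This is not the authors' code; all input shapes, layer sizes, and names (e.g. GraspOutcomeNet, the 24-taxel-per-sensor assumption) are illustrative assumptions.

```python
# Minimal late-fusion sketch (assumed architecture, not the paper's exact model).
import torch
import torch.nn as nn

class GraspOutcomeNet(nn.Module):
    def __init__(self, num_tactile_sensors=3, prop_dim=2):
        super().__init__()
        # Vision branch: small CNN over an RGB image of the grasp scene.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
        )
        # Tactile branch: flattened pressure readings from all fingertip
        # sensors (24 taxels per sensor is an assumption).
        self.tactile = nn.Sequential(
            nn.Linear(num_tactile_sensors * 24, 64), nn.ReLU(),
        )
        # Object-property branch: intrinsic features such as weight and a shape descriptor.
        self.props = nn.Sequential(nn.Linear(prop_dim, 16), nn.ReLU())
        # Fusion head: concatenated embeddings -> single stability logit.
        self.head = nn.Sequential(
            nn.Linear(64 + 64 + 16, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, image, tactile, props):
        z = torch.cat(
            [self.vision(image), self.tactile(tactile), self.props(props)], dim=1
        )
        return self.head(z)  # raw logit; train with nn.BCEWithLogitsLoss

# Usage with a dummy batch: 8 trials, 64x64 RGB images,
# 3 sensors x 24 taxels, 2 intrinsic properties.
net = GraspOutcomeNet()
logit = net(torch.randn(8, 3, 64, 64), torch.randn(8, 3 * 24), torch.randn(8, 2))
print(logit.shape)  # torch.Size([8, 1])
```

Late fusion of this kind keeps each modality's encoder independent, so the same model can be evaluated per modality (by masking branches) or with all modalities combined, matching the ablation the abstract reports.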
Year: 2018
Page: 1563-1568
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 10
ESI Highly Cited Papers on the List: 0