ICRA 2020 ViTac Workshop: Closing the Perception-Action Loop with Vision and Tactile Sensing
Abstract: Humans couple perception and action tightly through multimodal sensory inputs, especially vision and touch. For example, moving your hands over an object’s surface lets you perceive its shape, temperature and texture with your eyes and with the sense of touch in your hands. In contrast, artificial systems usually lack the capability to use multimodal sensing to support perception and action within the same loop. Future robots, as embodied agents, should exploit all available sensing modalities to interact with the environment. In recent decades, there has been considerable progress in both vision sensors and tactile sensors for robots. Example vision sensors include RGB webcams, depth sensors such as the Kinect, and stereo cameras; tactile sensors include optical tactile sensors, such as the GelSight and TacTip, and modular tactile sensors for large body areas. This convergence of visual and tactile sensing technologies offers new opportunities to fuse information for robot tasks, such as closed-loop active visuo-tactile perception when grasping or manipulating objects. This full-day workshop will cover recent progress in combining vision and touch sensing for an integrated perception-action loop, aiming to foster active collaboration and to address the challenges and applications of this important topic.
Organisers:
Shan Luo, King’s College London
Nathan Lepora, University of Bristol
Wenzhen Yuan, Carnegie Mellon University
Gordon Cheng, Technische Universität München
Invited speakers:
● Peter Allen (Columbia University)
● Dieter Fox (University of Washington and Nvidia)
● Vincent Hayward (UPMC Univ Paris)
● Alberto Rodriguez (MIT)
● Kaspar Althoefer (Queen Mary University of London)
● Huaping Liu (Tsinghua University)
● Robert Haschke (Bielefeld University)
● Lorenzo Natale (Istituto Italiano di Tecnologia)
Accepted papers
- Maria Bauza, Eric Valls, Bryan Lim, Theo Sechopoulos, and Alberto Rodriguez. "Object Pose Estimation with Geometric Tactile Rendering and Tactile Image Matching"
- Fedor Chervinskii, Alexander Rybnikov, Damian Bogunowicz, and Komal Vendidandi. "Sim2Real for Peg-Hole Insertion with Eye-in-Hand Camera"
- Carmelo Sferrazza and Raffaello D’Andrea. "Accurate estimation of the 3D contact force distribution with an optical tactile sensor – Live demonstration"
- Guanqun Cao and Shan Luo. "STAM: An Attention Model for Tactile Texture Recognition"
- Daniel Fernandes Gomes, Zhonglin Lin, and Shan Luo. "Exploiting Touch Sensing around Fingers"
- Sudharshan Suresh, Joshua G. Mangelson, and Michael Kaess. "Incremental shape and pose estimation from planar pushing using contact implicit surfaces"
We thank the following IEEE RAS Technical Committees for their support:
- Haptics
- Cognitive Robotics
- Computer and Robot Vision
- Human-Robot Interaction & Coordination