ICRA 2026 Workshop on "7th ViTac Workshop: Learning to See and Feel: Vision–Tactile Synergy for Embodied AI"
Introduction
Tactile robotics has emerged as a key frontier of embodied intelligence, equipping robots with the physical sense required to perceive and act in unstructured environments. By integrating visual and tactile sensing, robots are empowered to perform tasks demanding dexterity, adaptability, and a deep understanding of physical interaction. Recently, the field has witnessed a paradigm shift driven by novel hardware design, high-fidelity simulation, and vision-language-action models.
The co-design of hardware and algorithms greatly enhances robotic tactile intelligence, while advanced simulators capable of rendering realistic visual-tactile feedback are bridging the sim-to-real gap and accelerating robot skill learning. The emergence of visual-tactile-language-action models is pushing robot grasping and contact-rich manipulation toward human-level capabilities.
This workshop brings together a diverse group of leading experts and early-career researchers to discuss these important topics. By synthesizing these perspectives, we aim to cover recent advancements in the area, identify remaining scientific gaps, and foster new collaborations toward human-level visual-tactile intelligence.
The workshop will be divided into two sessions, with invited talks from leading experts in academia and industry, poster presentations from young researchers, and a panel discussion.
- Hardware Intelligence and Tactile Simulation
- Robot Dexterity with Visuo-Tactile Fusion: Visual-Tactile Foundation Models
Sessions will be followed by breaks for refreshments, posters, and live demonstrations to encourage interaction among attendees.
Date: Half-day workshop, 8:50 am - 12:30 pm, June 1, 2026
Location: ICRA2026 Venue, Vienna, Austria
Past Workshops:
ViTac2025,
ViTac2024,
ViTac2023,
ViTac2021,
ViTac2020,
ViTac2019
Invited Speakers
Harold Soh, Associate Professor, National University of Singapore
Yan Wu, Principal Scientist, A*STAR Institute for Infocomm Research
Tentative Schedule
| Time | Event |
| --- | --- |
| 8:50 - 9:00 AM (10 min) | Opening Remarks |
| Session 1: Hardware Intelligence & Tactile Simulation | |
| 9:10 - 9:30 AM (20 min) | Katherine J. Kuchenbecker: Multimodal Haptic Intelligence |
| 9:30 - 9:50 AM (20 min) | Matei Ciocarlie: New contact sensors for new manipulators: design, integration and applications |
| 9:50 - 10:10 AM (20 min) | Ireti Akinola: Force-Aware Policy Learning: From Sim-to-Real to Sim-Plus-Real |
| 10:10 - 10:25 AM (15 min) | Spotlight Poster Presentations (3x) |
| 10:25 - 10:40 AM (15 min) | Coffee Break and Poster Session |
| Session 2: Robot Dexterity with Visuo-Tactile Fusion: Visual-Tactile Foundation Models | |
| 10:40 - 10:50 AM (10 min) | Industry Speakers |
| 10:50 - 11:10 AM (20 min) | Yan Wu: TBD |
| 11:10 - 11:30 AM (20 min) | Harold Soh: Explorations and Reflections in Tactile Learning |
| 11:30 - 11:50 AM (20 min) | Francesco Nori: TBD |
| 11:50 AM - 12:20 PM (30 min) | Panel Discussion: Future of Visual-Tactile Intelligence |
| 12:20 - 12:30 PM (10 min) | Best Paper Award and Closing Remarks |
Discussion Topics
1. Co-design of Hardware and Algorithms
- Tactile Sensor Design: How can we co-design hardware and algorithms to maximize the performance of tactile sensing systems?
- Bioinspired Approaches: How can we draw inspiration from biological systems to design more effective tactile sensors?
- Scalable Tactile Data Collection Systems: How can we build scalable systems for collecting tactile data for large models?
2. High-Fidelity Simulation
- Tactile Interpretation: What is the bottleneck for simulation across diverse tactile sensors (vision-based and non-vision-based)? Should simulators render high-fidelity "raw" signals (e.g., RGB images, electrical signal) or interpreted signals (e.g., marker displacement, force vectors, depth maps) to maximize sim-to-real transfer efficiency?
- Deformable Physics: What are the bottlenecks in modeling elastic materials, soft-body contacts, and complex friction phenomena (stick-slip, hysteresis) in real-time?
- Neural Rendering: Can emerging techniques like Gaussian Splatting replace traditional graphical rendering to generate realistic digital twins for visual-tactile sensing?
3. Sim-to-Real Transfer
- The Reality Gap: What specific physical discrepancies (e.g., visual-tactile signal or physics) are the primary blockers preventing zero-shot transfer of tactile policies?
- Cross-Sensor Transfer: How can we transfer learned skills between tactile sensors with different sensing principles (e.g., optical vs. magnetic) without retraining from scratch?
- Benchmarking Standards: What should a visual-tactile manipulation benchmark comprise? What are the essential datasets, tasks, and evaluation metrics for benchmarking visual-tactile sensing across simulation and the real world?
4. Multimodal Perception & Representation Learning
- Unified Representations across Vision and Touch: What would a unified representation across tactile sensors look like? How can we fuse high-dimensional visual data with sparse, local tactile contact information?
- Dynamic Reweighting for Visual-Tactile Fusion: How should a robot dynamically allocate attention between vision and touch in challenging scenarios (e.g., heavy occlusion, poor lighting)?
5. Control, Learning & Dexterity in the Era of Large Models
- Vision-Tactile-Language-Action Model: How can we effectively integrate visual, tactile, and language information in a large model for robot manipulation?
- Hybrid Architectures: How can we effectively synergize model-based control (for stability and safety) with learning-based paradigms (for adaptability in unstructured environments)?
- Human-Like Reflexes: How do we design control frameworks that decouple high-frequency tactile reflexes (spinal level) from low-frequency visual planning (cortical level)?
- Safety & Robustness: In end-to-end learning frameworks, how do we impose hard safety constraints (e.g., "do not crush") when tactile feedback is noisy or ambiguous?
- Scalability: How can we scale learned policies with visual-tactile sensing across embodiments (sensors and robots)?
Call for Papers/Demos
Topics of Interest
- Development of tactile sensors and visual-tactile data collection systems for robot tasks
- High-fidelity simulation and neural rendering for visual-tactile sensing
- Sim-to-Real for visual-tactile policy learning
- Benchmarking standards, datasets, and evaluation metrics for visual-tactile skill learning
- Visual-tactile foundation models in robot grasping and manipulation
- Visual-tactile representation learning across sensors and embodiments
- Hybrid visual-tactile control architectures combining model-based and learning-based paradigms
- Cognitive control of movement and sensory-motor representations with vision and touch
- The control and design of robotic visuo-tactile systems for human-robot interaction
Posters and live demonstrations will be selected from a call for extended abstracts (2 pages + references), reviewed by the organisers. Authors of the best posters will be invited to give spotlight talks at the workshop.
All submissions will be reviewed using a double-blind review process. Accepted contributions will be presented during the workshop as posters.
Submissions must be sent in PDF, following the ICRA conference style (two-column).
Note on OpenReview Profiles: New profiles created without an institutional email will go through a moderation process that can take up to two weeks. New profiles created with an institutional email will be activated automatically.
- Submission Portal: OpenReview
- Submission Deadline: March 20, 2026, 12:00 AM UTC
- Notification of acceptance: April 5, 2026
- Camera-ready deadline: May 1, 2026
- Demo Video: Please upload video demos as a single zip under the supplementary material field in OpenReview.
- Dual Submission: Papers under submission or in preparation for submission to other major venues in the field are allowed. We also welcome previously published work, provided this is explicitly stated at the time of submission.
- Visibility: Submissions and reviews will not be public. Only accepted papers will be made public.
- Contact Email: zhuo.7.chen@kcl.ac.uk
Organisers
Early-Career Organizers
Senior Advisory Board
Sponsors (more to be confirmed)