Touch2Touch: Cross-Modal Tactile Generation for Object Manipulation

Samanta Rodriguez, Yiming Dou, Miquel Oller, Andrew Owens, Nima Fazeli

Work Under Review, 2024

Today’s touch sensors come in many shapes and sizes. This has made it difficult to develop general-purpose touch-processing methods, since models are typically tied to a single sensor design. We address this problem by performing cross-modal prediction between touch sensors: given the tactile signal from one sensor, we use a generative model to estimate how the same physical contact would be perceived by another. This allows sensor-specific methods to be applied to the generated signal. We implement this idea by training a diffusion model to translate between two popular sensors, the GelSlim and the Soft Bubble. As a downstream task, we perform in-hand object pose estimation from GelSlim signals using an algorithm designed to operate only on Soft Bubble signals.
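The pipeline above can be sketched as conditional diffusion sampling: starting from noise, a denoiser conditioned on the source-sensor image is applied iteratively to produce the target-sensor image. The sketch below is a toy illustration only; `predict_noise` is a hypothetical stand-in for the paper's trained network, and the image size, step count, and noise schedule are assumptions, not the authors' settings.

```python
import numpy as np

H, W = 32, 32          # toy tactile image resolution (assumption)
T = 50                 # number of diffusion steps (assumption)
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x_t, t, cond):
    """Hypothetical stand-in for the trained conditional denoiser.
    The real model is a neural network taking the noisy target image
    x_t, the timestep t, and the source-sensor (GelSlim) image."""
    return 0.1 * x_t + 0.05 * cond  # dummy linear placeholder

def translate(gelslim_img, rng):
    """DDPM-style ancestral sampling, conditioned on the source image."""
    x = rng.standard_normal(gelslim_img.shape)  # start from pure noise
    for t in reversed(range(T)):
        eps = predict_noise(x, t, gelslim_img)
        a, ab = alphas[t], alpha_bars[t]
        mean = (x - (1 - a) / np.sqrt(1 - ab) * eps) / np.sqrt(a)
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

rng = np.random.default_rng(0)
gelslim = rng.standard_normal((H, W))    # stand-in GelSlim signal
bubble_est = translate(gelslim, rng)     # estimated Soft Bubble signal
print(bubble_est.shape)
```

The estimated Soft Bubble image can then be fed to any method written for that sensor, e.g. a Soft Bubble-only pose estimator, without retraining it for GelSlim inputs.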
