MULTIMODAL HAPTICS

This project will explore how touch can be integrated with sound and vision in next-generation multisensory human-computer interfaces (HCI) that combine tactile, auditory, and visual feedback. It aims to develop a novel multimodal haptic framework by characterizing how humans integrate multisensory cues into a unified percept. Specifically, we will explore how the haptic representation of shape (e.g. the shape of a button on a display) is reinforced or disrupted when tactile, auditory, and visual cues are independently modulated.
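As one illustration of what "integrating multisensory cues into a unified percept" can mean quantitatively, the sketch below implements the standard maximum-likelihood cue-combination model, in which each cue is weighted by its reliability (inverse variance). The choice of this particular model, the function name, and the example values are assumptions made here for illustration; the project text does not commit to a specific integration model.

    import numpy as np

    def integrate_cues(estimates, sigmas):
        """Combine unimodal estimates of a shape feature (e.g. visual, tactile,
        auditory judgments of a button's edge) into one fused percept, weighting
        each cue by its reliability (inverse variance), as in the standard
        maximum-likelihood cue-combination model (an assumption, not the
        project's stated method)."""
        estimates = np.asarray(estimates, dtype=float)
        reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        weights = reliabilities / reliabilities.sum()
        fused_estimate = float(np.dot(weights, estimates))
        fused_sigma = float(np.sqrt(1.0 / reliabilities.sum()))
        return fused_estimate, fused_sigma

    # Hypothetical example values (arbitrary units): three unimodal estimates
    # of the same shape feature with different noise levels.
    percept, uncertainty = integrate_cues(estimates=[0.80, 0.60, 0.70],
                                          sigmas=[0.05, 0.10, 0.20])

Under this model, selectively degrading one cue (increasing its sigma) shifts the fused percept toward the remaining cues, which is one way to frame the planned reinforcement/disruption experiments.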

The question of introducing multimodal haptic feedback into consumer products is becoming crucial as society relies increasingly on digital solutions. Touch screens, popularized by smartphones and tablets that depend on them entirely for user interaction, have become so cost effective that physical buttons and knobs are being removed and replaced by virtual buttons displayed on the flat, hard surfaces of screens in vending machines, car dashboards, and similar devices.


Feedback strategies are also developing rapidly in healthcare and education, where professionals increasingly rely on technological devices to interact with patients and to improve trainees' learning. In the future, novel strategies and ergonomic devices using multimodal feedback will increasingly optimize tools and programs for professional training involving remote interaction, and will become more widespread across immersive applications.

[Figure: multimodal.png]