Two “Best Paper Awards” at the world’s largest conference on human-computer interaction


Dennis Wittchen, recipient of a Best Paper Award at CHI 2026. Photo: private


Making virtual worlds tangible is one of the key challenges in the field of human-computer interaction. While vision and hearing are already well integrated into virtual and augmented reality (VR and AR), the sense of touch still lags behind. The Sensorimotor Interaction group, led by Dr. Paul Strohmeier at the Max Planck Institute (MPI) for Informatics, is researching how to change this. Two papers written by the group in collaboration with Saarland University and international partners will now each receive a Best Paper Award at the world’s largest conference in the field of human-computer interaction.

This honor is granted to only about 3.6% of the more than 1,700 papers accepted at the Conference on Human Factors in Computing Systems. The conference takes place in Barcelona from April 13 to 17, 2026.

The first award-winning paper is “Scene2Hap: Generating Scene-Wide Haptics for VR from Scene Context with Multimodal LLMs.” Paul Strohmeier explains: “In virtual reality, we are used to seeing or hearing content. Virtual worlds that you can feel by touch are much rarer. While visual content is created through light and acoustic content through sound waves, our approach is based on vibration. Building on simple effects like those familiar from VR controllers or smartphones, we recreate the complex dynamics of the tactile world.” However, the vibration patterns required to create such haptic impressions (vibrotactile feedback) currently still have to be designed manually, which does not scale to complex VR scenes with many objects. That is the focus of the newly awarded paper.

With “Scene2Hap,” first authors Arata Jingu from Professor Jürgen Steimle’s Human-Computer Interaction Lab at Saarland University and Easa AliAbbasi from Paul Strohmeier’s Sensorimotor Interaction group have now developed an approach for automatically designing meaningful vibrotactile feedback for objects and scenes in virtual reality. To do this, the researchers use a multimodal large language model (LLM) that can process not only language, but also image and audio data. The model automatically infers the semantics of objects, physical properties and material characteristics, as well as the physical context of the scene. “We draw on various layers of meta-information, ranging from the context of a virtual object to material properties that the LLM can recognize in the image,” explains Easa AliAbbasi. The vibrotactile feedback is then generated and transmitted separately to each hand via the VR controllers being held. In three different user studies, the team was able to show that “Scene2Hap” successfully improved users’ sense of space and perception of materials and generally contributed to a better user experience when the VR environment was created entirely with the newly developed pipeline.
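To give a rough idea of how such a pipeline operates, the following sketch in Python mimics its two stages: a multimodal model labels each object with a material and context, and those labels are then mapped to vibration parameters. The material profiles, object names, and the stand-in infer_scene_objects function are illustrative assumptions, not the actual Scene2Hap implementation described in the paper.

    from dataclasses import dataclass

    # Hypothetical material-to-vibration mapping; the real prompt design, model,
    # and parameters are described in the Scene2Hap paper, not reproduced here.
    MATERIAL_PROFILES = {
        "wood":   {"frequency_hz": 120, "amplitude": 0.4, "decay": 0.05},
        "metal":  {"frequency_hz": 250, "amplitude": 0.7, "decay": 0.15},
        "fabric": {"frequency_hz": 60,  "amplitude": 0.2, "decay": 0.02},
    }

    @dataclass
    class SceneObject:
        name: str
        material: str   # inferred by the multimodal LLM from image and scene context
        context: str    # e.g. "standing on a desk"

    def infer_scene_objects(scene_image_path: str) -> list[SceneObject]:
        """Placeholder for the multimodal-LLM step: in the real pipeline, the model
        receives the rendered scene plus object metadata and returns semantics,
        materials, and physical context for each object. Here the output is
        hard-coded as an example."""
        return [
            SceneObject("mug", "metal", "standing on a desk"),
            SceneObject("cushion", "fabric", "lying on a sofa"),
        ]

    def vibration_parameters(obj: SceneObject) -> dict:
        """Map an object's inferred material to vibrotactile parameters,
        falling back to a neutral profile for unknown materials."""
        fallback = {"frequency_hz": 100, "amplitude": 0.3, "decay": 0.05}
        profile = MATERIAL_PROFILES.get(obj.material, fallback)
        return {"object": obj.name, **profile}

    if __name__ == "__main__":
        for obj in infer_scene_objects("scene.png"):
            print(vibration_parameters(obj))

In the actual system, the resulting parameters would drive the vibration motors of the VR controller held in each hand, rather than being printed to the console.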

The second award-winning paper, “How are Vibrotactile Experiences Visually Represented? A Taxonomy of Illustration Characteristics,” is a meta-study that examines how haptic impressions and tactile information are communicated in research. More specifically, the award-winning paper investigates how vibrotactile feedback is represented visually. “When new methods for visual rendering are developed, their quality can be shown directly in papers, for example through an image. In haptics research, things are different: we can only describe what something feels like, but we cannot easily convey the actual sensation directly. In my opinion, this limited ability to represent haptic experiences is a central challenge in haptics research,” says Paul Strohmeier.

To analyze this issue, the researchers first developed a taxonomy for representing vibrotactile experiences (VTX) and then collected a total of 1,652 papers from the past 25 years from the digital libraries of the world’s two largest professional associations in computing, the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). Within these, they identified 768 visual representations from 409 research papers and coded them according to their visual representation of VTX based on the taxonomy. Their results indicate that (1) half of the illustrations communicate the timing of vibrotactile feedback with regard to users’ actions, (2) illustrations depict stimuli rather than experiences and infrequently communicate multimodal aspects of the experiences, and (3) contextual information of vibrotactile displays and experiential aspects are often distributed across several complementary figures.
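As a simplified illustration of this coding step, the sketch below (Python) shows how individual figures could be recorded against a handful of taxonomy dimensions and then tallied. The dimensions, example entries, and paper identifiers are invented for illustration and are far simpler than the taxonomy actually presented in the paper.

    from collections import Counter

    # Hypothetical coding of three figures against three simplified dimensions.
    coded_illustrations = [
        {"paper_id": "A12", "shows_timing": True,  "depicts": "stimulus",   "multimodal": False},
        {"paper_id": "B07", "shows_timing": False, "depicts": "experience", "multimodal": True},
        {"paper_id": "C33", "shows_timing": True,  "depicts": "stimulus",   "multimodal": False},
    ]

    # Tally how often each characteristic occurs across the coded figures.
    timing_count = sum(entry["shows_timing"] for entry in coded_illustrations)
    depiction_counts = Counter(entry["depicts"] for entry in coded_illustrations)

    print(f"{timing_count}/{len(coded_illustrations)} illustrations communicate timing")
    print(dict(depiction_counts))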

“With our taxonomy, we want to give authors a tool to analyze and improve their illustrations. At the same time, in combination with the corresponding dataset, it could serve as a starting point for generative models to automatically produce ideas or inspiration for visualizing one’s own research,” explains Dennis Wittchen from the SensInt group, who co-authored the paper as first author together with Bruno Fruchard from the French research institute Inria.

Original publications:
Arata Jingu, Easa AliAbbasi, Sara Safaee, Paul Strohmeier, and Jürgen Steimle. 2026. Scene2Hap: Generating Scene-Wide Haptics for VR from Scene Context with Multimodal LLMs. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26), April 13–17, 2026, Barcelona, Spain. ACM, New York, NY, USA, 21 pages. https://doi.org/10.1145/3772318.3791297

Bruno Fruchard, Dennis Wittchen, Nihar Sabnis, Paul Strohmeier, and Donald Degraen. 2026. How are Vibrotactile Experiences Visually Represented? A Taxonomy of Illustration Characteristics. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26), April 13–17, 2026, Barcelona, Spain. ACM, New York, NY, USA, 24 pages. https://doi.org/10.1145/3772318.3790598

Further information:
Website of the Conference on Human Factors in Computing Systems: https://chi2026.acm.org/
Website of the Sensorimotor Interaction group: https://sensint.mpi-inf.mpg.de/

Editor:
Philipp Zapf-Schramm
Max Planck Institute for Informatics
Phone: +49 681 9325 4509
Email: pzs@mpi-inf.mpg.de