Adaptive Multimodal In-car Interaction (AMICI) - a research project

Date → 23 February, 2024
Author → Siili Auto
Series → Demos
Tags → Qt partnership, research

We are dedicated to our motto 'We drive innovation', which is why, in addition to our work on commercial projects and business demonstrations, we are glad to take part in research studies led by academic institutions in partnership with other forward-thinking businesses. We have recently concluded one such project, AMICI (Adaptive Multimodal In-car Interaction), a partially publicly funded (thanks to Business Finland) research endeavor led by Tampere University in Finland.

Multimodal refers to the combination of channels through which a car communicates with the driver: visual (information shown on screens), audio, and haptic (the user interface responding to touch, for example through vibrations). The project's primary goal was to create interaction systems that deliver essential information to the driver without causing distraction, and to advance research into making such systems more efficient.
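To make the idea of combining modalities concrete, here is a minimal, purely illustrative C++ sketch of how one driver-facing event can be fanned out to visual, audio, and haptic feedback channels. The class and channel names are our own assumptions for this example and are not taken from the MESA demonstrator or the Qt APIs used in the project.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Illustrative sketch only: one driver-facing event is dispatched to
// several feedback channels so the driver can pick up the cue from
// whichever modality demands the least attention at that moment.
struct FeedbackChannel {
    virtual ~FeedbackChannel() = default;
    virtual void notify(const std::string& event) = 0;
};

struct VisualChannel : FeedbackChannel {
    void notify(const std::string& event) override {
        std::cout << "[screen] show icon/text for: " << event << '\n';
    }
};

struct AudioChannel : FeedbackChannel {
    void notify(const std::string& event) override {
        std::cout << "[audio] play chime for: " << event << '\n';
    }
};

struct HapticChannel : FeedbackChannel {
    void notify(const std::string& event) override {
        std::cout << "[haptics] pulse the control surface for: " << event << '\n';
    }
};

// Dispatcher that keeps all registered channels in sync for each event.
class MultimodalDispatcher {
public:
    void addChannel(std::unique_ptr<FeedbackChannel> channel) {
        channels_.push_back(std::move(channel));
    }
    void raise(const std::string& event) {
        for (auto& channel : channels_) channel->notify(event);
    }
private:
    std::vector<std::unique_ptr<FeedbackChannel>> channels_;
};

int main() {
    MultimodalDispatcher dispatcher;
    dispatcher.addChannel(std::make_unique<VisualChannel>());
    dispatcher.addChannel(std::make_unique<AudioChannel>());
    dispatcher.addChannel(std::make_unique<HapticChannel>());
    dispatcher.raise("lane departure warning");
    return 0;
}
```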

The project identified several key challenges, including information overload, carsharing, semi-autonomous driving, and an aging population, which together drove the need for this research. Siili Auto's primary contribution to the project was the MESA demonstrator: a full-featured multimodal infotainment Human-Machine Interface (HMI). Tampere University then used it to conduct a comprehensive user experience study with a diverse test group in their driving simulator laboratory. The data collected from these tests was analyzed, and the resulting improvements were implemented in later versions of the MESA UI, which were then retested.

The user interface was developed with Qt, which enabled us to achieve a result that study participants praised as "highly appealing" and "visually clear and attractive." The physical controller was provided by our partners at TactoTek.

We were thrilled to contribute to advancing understanding of in-car interactions. Additionally, we’ve collected a remarkable volume of data and ideas for enhancements, which we believe will enable us to develop even better, more exciting, efficient, and safer HMIs for our clients.

About the authors

Max Pusa

A normal 18-year-old from Espoo, Finland, who is currently in his last year of high school. He has always been a car and motorcycle enthusiast. He has had his driver's license for a little more than a year and has driven over forty thousand kilometers in that time. His favorite hobbies include riding his motorcycle and playing different sports with his friends.

Mateusz Skoczylas

Mateusz is a Designer with over 14 years of experience. His marketing background taught him to combine user and business needs creatively. As an interaction designer, he looks for functional business applications of Automotive HMI and Artificial Intelligence, creating high-fidelity prototypes (interactive and connected to external APIs). He is a great team player who is always ready to lend a helping hand, shift perspective for each challenge, and find the best approach to the work.
