Advancing In-Vehicle Infotainment Display Interaction with Flexible Gesture Recognition Technology
As the automotive industry moves toward ever more sophisticated in-vehicle displays, those displays have become the primary point of interaction inside the vehicle, making it essential to give drivers seamless and intuitive control over vehicle systems. At Siili Auto, we recognize the importance of delivering a user-friendly, responsive, and reliable interface for drivers. That’s why we took on the challenge of developing a refined gesture recognition system tailored to the requirements of modern in-vehicle infotainment systems.
Understanding the Challenge
The goal is to let users effortlessly navigate and operate the vehicle’s visual interface, combining intuitive gestures with traditional UI elements, while avoiding unintended actions. Achieving this requires a delicate balance: users should be able to transition seamlessly between navigation gestures and on-screen widget interaction without any disruption to their driving experience.
The Amici project at Siili Auto is where our team took on this challenge, building a gesture recognition system around the specific needs of modern in-vehicle infotainment systems.
From Inputs to Intent
Central to our approach is the ability to determine user intent and preserve it throughout the interaction. We accomplish this through stateful touch management and careful interpretation of touch sensor data. Navigation gestures remain detectable even when performed on top of interactive widgets, ensuring uninterrupted user control. At the same time, interactive widgets receive the inputs meant for them without interference from global gesture detection, preserving the user’s intended actions.
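To make this concrete, here is a minimal sketch of stateful intent tracking in Qt/C++. The class name, thresholds, and decision rules below are illustrative assumptions made for this article, not the actual Amici implementation:

```cpp
#include <QHash>
#include <QLineF>
#include <QPointF>

// Tracks the touch points of one interaction and locks in the user's intent
// (navigation gesture vs. widget interaction) once the motion pattern makes
// it clear. Names and thresholds are illustrative only.
class TouchIntentTracker
{
public:
    enum class Intent { Undecided, NavigationGesture, WidgetInteraction };

    void touchBegan(int id, const QPointF &pos)
    {
        if (m_startPositions.isEmpty())
            m_intent = Intent::Undecided; // a new interaction starts undecided
        m_startPositions.insert(id, pos);
    }

    void touchMoved(int id, const QPointF &pos)
    {
        if (m_intent != Intent::Undecided || !m_startPositions.contains(id))
            return; // intent stays locked for the rest of the interaction

        const qreal distance = QLineF(m_startPositions.value(id), pos).length();
        // Multiple fingers or a long travel distance suggests a navigation
        // gesture; small movements are left to the widget under the finger.
        if (m_startPositions.size() >= 2 || distance > kGestureThresholdPx)
            m_intent = Intent::NavigationGesture;
    }

    void touchEnded(int id)
    {
        m_startPositions.remove(id);
        if (m_startPositions.isEmpty() && m_intent == Intent::Undecided)
            m_intent = Intent::WidgetInteraction; // a plain tap belongs to the widget
    }

    Intent intent() const { return m_intent; }

private:
    static constexpr qreal kGestureThresholdPx = 30.0; // illustrative value
    QHash<int, QPointF> m_startPositions;
    Intent m_intent = Intent::Undecided;
};
```

Once the intent is locked, subsequent touch updates for that interaction can be routed either to the gesture recognizer or to the widget under the finger, which is what keeps the two from interfering with each other.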
Key Features
Our gesture recognition system supports a range of key features that enhance the in-vehicle user experience:
1. Comprehensive Gesture Recognition: Our system can detect various gesture types including long presses, one- and two-finger swipes, and three-finger taps. This diversity gives drivers an extensive set of intuitive controls at their fingertips.
2. Seamless Interaction: Interactive widgets can seamlessly receive touch press-and-release events as well as position updates for manipulating complex controls (such as sliders and dials). This natural interaction allows drivers to effortlessly engage with the interface.
3. Intelligent Gesture Differentiation: Our system excels at distinguishing between gestures and interactive inputs, ensuring that navigation gestures remain detectable even when performed over interactive widgets. Interactive widgets, in turn, receive inputs without interference from global gesture detection, delivering a frustration-free experience (a simplified classification sketch follows this list).
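As a rough illustration of how differentiation between gesture types could work, the sketch below classifies a completed interaction from its finger count, duration, and travel distance. The function name, thresholds, and rules are assumptions made for this article rather than the project’s actual logic:

```cpp
#include <QLineF>
#include <QPointF>
#include <QtGlobal>

enum class Gesture { None, LongPress, OneFingerSwipe, TwoFingerSwipe, ThreeFingerTap };

// fingerCount:     number of touch points involved in the interaction
// pressDurationMs: time between the first press and the last release
// travel:          displacement of the primary touch point from press to release
Gesture classifyGesture(int fingerCount, qint64 pressDurationMs, const QPointF &travel)
{
    constexpr qreal kSwipeThresholdPx = 80.0; // illustrative threshold
    constexpr qint64 kLongPressMs = 600;      // illustrative threshold
    const qreal distance = QLineF(QPointF(), travel).length();

    if (fingerCount == 3 && distance < kSwipeThresholdPx)
        return Gesture::ThreeFingerTap;
    if (fingerCount == 2 && distance >= kSwipeThresholdPx)
        return Gesture::TwoFingerSwipe;
    if (fingerCount == 1 && distance >= kSwipeThresholdPx)
        return Gesture::OneFingerSwipe;
    if (fingerCount == 1 && pressDurationMs >= kLongPressMs)
        return Gesture::LongPress;
    return Gesture::None; // anything else is routed to the widget underneath
}
```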
Our solution builds on the solid foundation of Qt/C++ touch event support and exposes this functionality to QML, meaning developers can integrate gesture recognition into Qt Quick applications without needing to delve into the underlying technical details.
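To sketch what that integration path could look like, the example below shows a touch-aware item exposed to QML, assuming Qt 6 and Qt Quick. The GestureArea type, its signal, and the placeholder decision logic are hypothetical and not the actual Amici API:

```cpp
#include <QQuickItem>
#include <QTouchEvent>
#include <QtQml/qqmlregistration.h>

class GestureArea : public QQuickItem
{
    Q_OBJECT
    QML_ELEMENT // makes the type instantiable from QML (declarative registration)

public:
    explicit GestureArea(QQuickItem *parent = nullptr)
        : QQuickItem(parent)
    {
        setAcceptTouchEvents(true); // opt in to receiving raw touch events
    }

signals:
    void swipeDetected(int fingerCount);

protected:
    void touchEvent(QTouchEvent *event) override
    {
        // Feed the raw touch points into the recognizer and emit a signal once
        // a gesture is confirmed. Placeholder logic: treat a two-finger release
        // as a swipe; otherwise leave the event for the items underneath.
        if (event->type() == QEvent::TouchEnd && event->points().size() == 2) {
            emit swipeDetected(2);
            event->accept();
            return;
        }
        event->ignore();
    }
};
```

From QML, such an item can then be declared like any other Qt Quick element and its signals connected to navigation handlers, while child items continue to receive their ordinary press-and-release events.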
We would be thrilled to discuss vehicle software development with you! Please reach out to Head of Sales Juhani Vanhala via his LinkedIn profile.