US20240248546
2024-07-25
Physics
G06F3/017
A multi-modal interaction system enhances augmented reality (AR) experiences by allowing users to control AR objects through various human interactions. A user selects an AR experience within an application on their device, and the application then displays the associated AR objects on the graphical user interface (GUI). Textual cues guide the user in manipulating these objects, making the interaction intuitive and user-friendly.
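As a minimal sketch of this selection flow, one could model each experience as a bundle of AR objects plus the textual cue shown on the GUI. The type and function names here (ArExperience, selectExperience) are illustrative assumptions, not taken from the filing:

```typescript
// Hypothetical data model: an AR experience bundles its objects with a textual cue.
interface ArObject {
  id: string;
  label: string;
  attributes: Record<string, string | number>;
}

interface ArExperience {
  id: string;
  name: string;
  objects: ArObject[];
  cue: string; // textual guidance displayed on the GUI
}

const experiences: ArExperience[] = [
  {
    id: "exp-1",
    name: "Virtual furniture",
    objects: [{ id: "chair-1", label: "Chair", attributes: { color: "red", scale: 1.0 } }],
    cue: "Point at an object to select it, then speak a command to change it.",
  },
];

// On selection, render the experience's objects and surface its guidance cue.
function selectExperience(id: string): void {
  const exp = experiences.find((e) => e.id === id);
  if (!exp) return;
  exp.objects.forEach((obj) => console.log(`Rendering ${obj.label} (${obj.id})`));
  console.log(`Cue: ${exp.cue}`);
}

selectExperience("exp-1");
```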
The system utilizes hand gestures and voice commands to modify the selected AR objects. For instance, users can point at an object to select it and then issue a voice command to alter its attributes. This dual approach not only streamlines the customization process but also allows users to engage with the AR environment without needing to physically hold a device.
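One way to model this dual gesture/voice flow is a small session object that records the last pointed-at object and applies parsed voice commands to it. The event shapes and the "set <attribute> <value>" command grammar below are assumptions, standing in for a platform's hand-tracking and speech-recognition APIs:

```typescript
// Sketch of combining a pointing gesture (selection) with a voice command (modification).
type GestureEvent = { kind: "point"; targetId: string };
type VoiceEvent = { kind: "voice"; transcript: string };

class InteractionSession {
  private selectedId: string | null = null;

  constructor(private objects: Map<string, Record<string, string | number>>) {}

  handle(event: GestureEvent | VoiceEvent): void {
    if (event.kind === "point") {
      // A pointing gesture selects the target object.
      this.selectedId = event.targetId;
    } else if (this.selectedId) {
      // A voice command alters an attribute of the current selection,
      // e.g. "set color blue" -> attribute "color", value "blue" (assumed grammar).
      const match = /^set (\w+) (\w+)$/i.exec(event.transcript.trim());
      if (match) {
        const attrs = this.objects.get(this.selectedId);
        if (attrs) attrs[match[1]] = match[2];
      }
    }
  }
}

const session = new InteractionSession(
  new Map([["chair-1", { color: "red", scale: 1.0 }]]),
);
session.handle({ kind: "point", targetId: "chair-1" });          // gesture selects
session.handle({ kind: "voice", transcript: "set color blue" }); // voice modifies
```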
To further assist users, the system offers visual hints that suggest available hand gestures and voice commands. These cues help users understand how to interact with the AR objects, promoting a more personalized and engaging experience. By alternating between gestures and voice commands, users can efficiently configure complex settings in real time.
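Such hints would naturally be driven by the current interaction state, so that only the gestures and commands available at that moment are suggested. The states and hint strings in this sketch are hypothetical:

```typescript
// Context-sensitive hints keyed by interaction state (states are illustrative).
type InteractionState = "idle" | "objectSelected";

interface Hint {
  modality: "gesture" | "voice";
  text: string;
}

const HINTS: Record<InteractionState, Hint[]> = {
  idle: [{ modality: "gesture", text: "Point at an object to select it" }],
  objectSelected: [
    { modality: "voice", text: 'Say "set color <name>" to recolor' },
    { modality: "gesture", text: "Pinch and drag to move the object" },
  ],
};

// Return the hints to display for the current state.
function hintsFor(state: InteractionState): Hint[] {
  return HINTS[state];
}

hintsFor("objectSelected").forEach((h) => console.log(`[${h.modality}] ${h.text}`));
```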
The multi-modal interaction system is integrated within a broader networked computing environment that facilitates data exchange between client devices and servers. This architecture supports various applications, including messaging clients that enhance user interaction with AR content. Reliable communication between devices and servers underpins a rich, interactive augmented reality experience.
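As a rough illustration of that client-to-server exchange, the sketch below assumes a JSON-over-HTTP API; the endpoint path, payload shape, and server URL are hypothetical, since the filing does not specify a protocol:

```typescript
// Hypothetical message a client might push so that other participants
// (e.g. a messaging client rendering the same AR content) stay in sync.
interface ArUpdateMessage {
  sessionId: string;
  objectId: string;
  attribute: string;
  value: string | number;
}

// Publish an attribute change to the server over an assumed REST endpoint.
async function publishUpdate(serverUrl: string, msg: ArUpdateMessage): Promise<void> {
  const res = await fetch(`${serverUrl}/ar/updates`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(msg),
  });
  if (!res.ok) throw new Error(`Update rejected: ${res.status}`);
}

// Placeholder URL for illustration only.
publishUpdate("https://ar.example.com", {
  sessionId: "s-42",
  objectId: "chair-1",
  attribute: "color",
  value: "blue",
}).catch(console.error);
```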