Invention Title:

CONTROLLING AUGMENTED REALITY EFFECTS THROUGH MULTI-MODAL HUMAN INTERACTION

Publication number:

US20240248546

Publication date:

Section:

Physics

Class:

G06F3/017

Inventors:

Applicant:

Drawings (4 of 12)

Smart overview of the Invention

A multi-modal interaction system enhances augmented reality (AR) experiences by allowing users to control AR objects through various human interactions. When a user selects an AR experience within an application on their device, the application displays the associated AR objects on the graphical user interface (GUI). Textual cues guide the user in manipulating these objects, making the interaction intuitive and user-friendly.
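The selection flow above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation; the experience names, object names, and cue strings are all hypothetical.

```python
# Hypothetical catalog: selecting an AR experience surfaces its objects
# on the GUI along with a textual cue explaining how to manipulate them.
AR_EXPERIENCES = {
    "face_filter": {
        "objects": ["sunglasses", "hat"],
        "cue": "Point at an object, then speak a command to change it.",
    },
    "world_lens": {
        "objects": ["balloon"],
        "cue": "Pinch to resize the balloon.",
    },
}

def select_experience(name):
    """Return the objects to render and the guidance text to display."""
    experience = AR_EXPERIENCES[name]
    return experience["objects"], experience["cue"]
```

In a real application the catalog would come from the server and the cue would be rendered as an on-screen overlay; the point here is only that selection yields both renderable objects and guidance text.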

Hand Gestures and Voice Commands

The system uses hand gestures and voice commands to modify the selected AR objects. For instance, a user can point at an object to select it and then issue a voice command to alter its attributes. This dual approach streamlines customization and lets users engage with the AR environment without physically holding a device.
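The point-then-speak pattern can be modeled as a small controller that remembers the most recently pointed-at object and applies voice-driven attribute changes to it. This is a minimal sketch under assumed names (a "point" gesture and a "set <attribute> <value>" voice grammar are illustrative choices, not the patent's vocabulary).

```python
class ARObject:
    """A hypothetical AR object with a name and mutable attributes."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}

class MultiModalController:
    """Fuses gesture and voice input: a pointing gesture selects an
    object; a subsequent voice command modifies that selection."""

    def __init__(self, objects):
        self.objects = {obj.name: obj for obj in objects}
        self.selected = None

    def on_gesture(self, gesture, target=None):
        # A "point" gesture selects the named object for later commands.
        if gesture == "point" and target in self.objects:
            self.selected = self.objects[target]
        return self.selected

    def on_voice(self, command):
        # Toy grammar: "set <attribute> <value>" updates the selection.
        if self.selected is None:
            return None
        parts = command.split()
        if len(parts) == 3 and parts[0] == "set":
            self.selected.attributes[parts[1]] = parts[2]
        return self.selected.attributes
```

For example, pointing at a "hat" object and then saying "set color red" leaves the hat with a `color` attribute of `red`; a voice command with nothing selected is ignored.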

Visual Hints for Enhanced Interaction

To further assist users, the system offers visual hints that suggest available hand gestures and voice commands. These cues help users understand how to interact with the AR objects, promoting a more personalized and engaging experience. By alternating between gestures and voice commands, users can efficiently configure complex settings in real time.
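One simple way to realize such hints is to make them a function of interaction state: different gestures and voice commands are suggested depending on whether an object is currently selected. The hint strings and states below are illustrative assumptions, not taken from the patent.

```python
def available_hints(object_selected):
    """Return (gesture hints, voice hints) appropriate to the state.

    With nothing selected, only a selection gesture is suggested;
    once an object is selected, manipulation gestures and voice
    commands become available.
    """
    if not object_selected:
        return (["Point at an object to select it"], [])
    return (
        ["Pinch to resize", "Rotate with two fingers"],
        ['Say "set color <value>"', 'Say "undo"'],
    )
```

A GUI layer would render these as on-screen overlays and refresh them whenever the selection changes, so the visible cues always match what the system can currently accept.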

Integration with Networked Computing Environment

The multi-modal interaction system is integrated within a broader networked computing environment that facilitates data exchange between client devices and servers. This architecture supports various applications, including messaging clients that enhance user interaction with AR content. The seamless communication between devices and servers ensures that users can enjoy a rich and interactive augmented reality experience.
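Client-server synchronization of this kind needs some wire format for AR object updates. The sketch below assumes a simple JSON message shape (`type`, `object`, `attributes`) as a stand-in; the actual protocol used by such a system is not specified in this overview.

```python
import json

def encode_update(object_name, attributes):
    """Serialize an AR object update for transmission to a peer."""
    return json.dumps({
        "type": "ar_update",       # hypothetical message type tag
        "object": object_name,     # which AR object changed
        "attributes": attributes,  # the new attribute values
    })

def decode_update(message):
    """Parse an update message and return (object name, attributes)."""
    payload = json.loads(message)
    if payload.get("type") != "ar_update":
        raise ValueError("unexpected message type")
    return payload["object"], payload["attributes"]
```

Encoding on one device and decoding on another keeps both ends' AR scenes consistent, regardless of the transport (WebSocket, HTTP, or a messaging backend) carrying the messages.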