Invention Title:

SYSTEM AND METHOD FOR A BLENDED REALITY USER INTERFACE AND GESTURE CONTROL SYSTEM

Publication number:

US20250238075

Publication date:
Section:

Physics

Class:

G06F3/012

Inventors:

Applicants:

Smart overview of the Invention

The patent application describes a system that integrates real-world video feeds with virtual-reality environments to create a blended reality experience. The system uses sensors, a display, and a blending engine to combine live and virtual elements seamlessly. By tracking user movements, it interprets gestures as commands that control how the live and virtual feeds are combined and displayed on a head-mounted display.

Blending Engine Functionality

A key component is the blending engine, which combines the live video feed with the virtual-reality feed. It adjusts the transparency of each feed in response to user actions, such as head tilts or controller input, so that users can engage with both their physical environment and virtual content through a single, intuitive interface.
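The patent text does not specify an algorithm for this adjustment, but the behavior it describes can be sketched as transparency (alpha) blending driven by head pitch. Everything below is an illustrative assumption: the function names, the pitch thresholds, and the linear ramp are not taken from the application.

```python
def live_feed_alpha(pitch_deg: float,
                    start_deg: float = -10.0,
                    full_deg: float = -45.0) -> float:
    """Map head pitch (negative = looking down) to live-feed opacity in [0, 1].
    Thresholds are assumed values for illustration."""
    if pitch_deg >= start_deg:        # level or looking up: fully virtual
        return 0.0
    if pitch_deg <= full_deg:         # looking far down: fully live
        return 1.0
    return (start_deg - pitch_deg) / (start_deg - full_deg)  # linear ramp

def blend_pixel(live, virtual, alpha):
    """Standard alpha compositing of one RGB pixel: alpha*live + (1-alpha)*virtual."""
    return tuple(round(alpha * l + (1 - alpha) * v)
                 for l, v in zip(live, virtual))

# A pitch of -27.5 degrees sits halfway along the ramp, so alpha is 0.5.
print(blend_pixel((200, 200, 200), (0, 0, 100), live_feed_alpha(-27.5)))
# → (100, 100, 150)
```

The same per-pixel blend could run on the GPU as a shader; a linear ramp is just the simplest mapping that matches the described behavior.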

Sensor Integration

The system includes multiple sensors, such as accelerometers and gyroscopes, that monitor user movements. Their readings feed the blending engine, which uses them to raise or lower the prominence of the live or virtual feed. This lets users transition smoothly between realities and interact with both environments.
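A common way to turn raw accelerometer and gyroscope readings into a stable head-pitch estimate, which the blending engine could then consume, is a complementary filter. This is a standard sensor-fusion technique, not something the patent text specifies; the gain `k` and the axis conventions are assumptions.

```python
import math

def complementary_pitch(prev_pitch_deg, gyro_rate_dps, accel_xyz, dt, k=0.98):
    """One filter update: the integrated gyro rate is smooth but drifts,
    the accelerometer tilt is noisy but drift-free; k weights the two."""
    ax, ay, az = accel_xyz
    # Tilt from gravity direction (assumes z points out of the head-mounted unit).
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Dead-reckoned pitch from the gyroscope's angular rate.
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt
    return k * gyro_pitch + (1.0 - k) * accel_pitch
```

Called once per sensor sample, the filtered pitch can drive a blending function directly, avoiding the jitter that raw accelerometer tilt would cause.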

User Interaction

The interface supports complex interactions by interpreting gestures as commands. For example, looking down can increase the visibility of the physical surroundings within the virtual space, aiding tasks that require real-world awareness. The system also responds to body movements, such as leaning forward, to extend the user's range of motion in the virtual realm.
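The gesture-to-command mapping described above can be sketched as a simple rule set over tracked pose. The command names and thresholds below are illustrative assumptions; the patent text names the behaviors but not their trigger values.

```python
def detect_commands(head_pitch_deg, lean_forward_m):
    """Map tracked pose to high-level commands like those in the summary.
    Thresholds (-30 degrees, 0.15 m) are assumed, not from the patent."""
    commands = []
    if head_pitch_deg < -30.0:     # looking down: surface the live feed
        commands.append("SHOW_SURROUNDINGS")
    if lean_forward_m > 0.15:      # leaning forward: extend virtual reach
        commands.append("EXTEND_REACH")
    return commands

print(detect_commands(-40.0, 0.2))
# → ['SHOW_SURROUNDINGS', 'EXTEND_REACH']
```

A production system would likely add hysteresis or debouncing so that small pose fluctuations near a threshold do not toggle commands rapidly.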

Advanced Features

Additional functionality includes blending live video feeds according to camera orientation and user perspective to preserve realistic depth perception. The system can also prioritize certain physical objects over others within the virtual view, improving usability and safety in mixed-reality applications. Multiple cameras may be used to broaden the user's field of view or to provide more detailed visual information.
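One simple way multiple cameras could broaden the field of view, consistent with the orientation-aware blending mentioned above, is to select the camera whose mounting direction best matches the user's gaze. This is a minimal sketch under assumptions: the camera layout and the selection rule are illustrative, not from the patent.

```python
def pick_camera(gaze_yaw_deg, cameras):
    """Return the name of the camera whose mounting yaw is angularly
    closest to the user's gaze yaw. `cameras` maps name to yaw in degrees."""
    def angular_distance(a, b):
        # Shortest signed difference on a circle, then its magnitude.
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return min(cameras, key=lambda name: angular_distance(gaze_yaw_deg, cameras[name]))

rig = {"front": 0.0, "left": -60.0, "right": 60.0}  # assumed example layout
print(pick_camera(50.0, rig))
# → right
```

A fuller implementation would cross-fade between adjacent cameras rather than switching hard, which is one way the described orientation-based blending could maintain depth cues.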