US20240379102
2024-11-14
Physics
G10L15/22
The disclosure describes methods and user interfaces for managing immersive outputs in extended reality (XR) environments. It concerns computer systems that generate virtual experiences, such as augmented reality (AR) and mixed reality (MR), through a range of electronic devices. The system responds to a user's speech request by providing an immersive output, which it then modifies based on additional inputs from the user.
This innovation pertains to computer systems in communication with display generation components that facilitate virtual experiences. These systems include electronic devices that deliver XR experiences via displays, such as head-mounted devices (HMDs) and other portable or wearable devices.
Recent advancements in augmented reality have led to environments where virtual elements enhance or replace the physical world. Users interact with these environments using input devices such as cameras, controllers, and touchscreens. Despite this progress, interacting with current XR systems is often cumbersome and inefficient, particularly on HMDs, whose immersive nature can overwhelm users.
The proposed system offers improved methods for controlling immersive XR outputs, making user interaction more efficient and intuitive. It reduces the number and complexity of inputs required by tightening the mapping between a user's inputs and the device's responses. The innovation applies to a variety of devices, including desktops, tablets, and wearables, with components such as touchpads, cameras, and eye-tracking systems.
The system enhances user interfaces for XR experiences by enabling users to initiate and modify immersive outputs through spoken inputs. For instance, upon receiving a speech request, the system displays an XR environment over the physical setting. Users can then adjust the immersive experience through additional inputs, refining control without compromising immersion.
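To make the described flow concrete, the sketch below models it as a small controller: a speech request initiates an immersive output over the physical setting, and subsequent inputs raise or lower the level of immersion. This is a minimal illustration only; the names `ImmersionLevel`, `UserInput`, and `ImmersiveOutputController`, and the three immersion tiers, are hypothetical and are not drawn from the disclosure.

```swift
import Foundation

// Hypothetical immersion tiers; the disclosure does not enumerate specific levels.
enum ImmersionLevel: Int, Comparable {
    case passthrough = 0   // physical environment fully visible
    case mixed = 1         // virtual content overlaid on the physical setting
    case full = 2          // virtual environment replaces the physical setting

    static func < (lhs: ImmersionLevel, rhs: ImmersionLevel) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

// The two input kinds the summary mentions: an initiating speech request,
// followed by additional inputs that adjust the immersive experience.
enum UserInput {
    case speechRequest(String)
    case adjustImmersion(delta: Int)
}

// Minimal controller sketch: a speech request starts an immersive output,
// and later inputs modify immersion without exiting the experience.
final class ImmersiveOutputController {
    private(set) var level: ImmersionLevel = .passthrough
    private(set) var activeExperience: String?

    func handle(_ input: UserInput) {
        switch input {
        case .speechRequest(let request):
            // Display an XR environment over the physical setting.
            activeExperience = request
            level = .mixed
        case .adjustImmersion(let delta):
            // Adjustments apply only while an immersive output is active.
            guard activeExperience != nil else { return }
            let raw = min(max(level.rawValue + delta, 0), ImmersionLevel.full.rawValue)
            level = ImmersionLevel(rawValue: raw) ?? level
        }
    }
}

// Usage: initiate via speech, then deepen immersion with a follow-up input.
let controller = ImmersiveOutputController()
controller.handle(.speechRequest("show me a beach"))
controller.handle(.adjustImmersion(delta: 1))
print(controller.level) // full
```

Keeping initiation and adjustment as separate input cases mirrors the summary's distinction between the spoken request that starts the experience and the additional inputs that refine it.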