US20240394952
2024-11-28
Physics
G06T15/005
The described system integrates mixed reality technology using a device and a base station connected by a wireless link. The device, typically a headset or similar wearable, carries sensors that gather data about the user's surroundings and about the user. This data is sent to the base station, which processes it to render and encode visual frames. The frames are transmitted back to the device for display, leveraging the base station's computing power, which exceeds that of standalone systems, while avoiding the physical constraints of tethered systems.
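The round trip above (sensor capture on the device, rendering and encoding at the base station, decode and display back on the device) can be sketched as follows. All names here (`SensorData`, `base_station_process`, `device_loop`) are illustrative placeholders, not identifiers from the patent, and the string payload stands in for real GPU rendering and video encoding.

```python
from dataclasses import dataclass

@dataclass
class SensorData:
    head_pose: tuple   # from world-facing tracking sensors
    gaze: tuple        # from user-facing (gaze) sensors

def base_station_process(data: SensorData) -> bytes:
    """Render a frame from the uploaded sensor data, then encode it
    for transmission over the wireless link."""
    rendered = f"frame@pose={data.head_pose}"   # stand-in for GPU rendering
    return rendered.encode("utf-8")             # stand-in for video encoding

def device_loop(data: SensorData) -> str:
    """Device side: send sensor data upstream, then decode the
    returned frame for display."""
    encoded = base_station_process(data)        # wireless round trip
    return encoded.decode("utf-8")              # decode and display
```

A single iteration of the loop would look like `device_loop(SensorData(head_pose=(0.0, 1.6, 0.0), gaze=(0.0, 0.0, -1.0)))`; a real system would run this continuously at the target frame rate.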
This mixed reality setup employs several innovative methods to enhance performance and user experience. Notably, it maintains a target frame rate and minimizes latency through various techniques. Among these are warp space rendering, which optimizes frame sampling, and foveated rendering, which adjusts resolution based on user gaze direction. These methods not only improve rendering efficiency but also reduce bandwidth usage by minimizing unnecessary data transmission.
The system introduces dynamic rendering and compression strategies to adaptively manage frame rate and latency. Dynamic rendering adjusts the complexity of frame generation based on current bandwidth and processing capabilities, while dynamic compression modifies compression levels to ensure smooth operation under varying network conditions. Additionally, motion-based rendering adapts frame rates according to user movement, enhancing responsiveness.
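Dynamic compression of the kind described can be sketched as a simple quality ladder: given the currently measured bandwidth and the target frame rate, pick the highest quality tier whose encoded frame fits the per-frame byte budget. The tier names and sizes below are invented for illustration.

```python
def choose_quality(bandwidth_bps: float, target_fps: float,
                   frame_sizes: dict) -> str:
    """Select a compression quality tier that sustains the target frame rate.

    frame_sizes maps a tier name to its typical encoded frame size in bytes.
    Returns the largest tier that fits the per-frame budget, or the smallest
    tier if nothing fits (graceful degradation under poor link conditions).
    """
    budget = bandwidth_bps / 8 / target_fps   # bytes available per frame
    for tier, size in sorted(frame_sizes.items(), key=lambda kv: -kv[1]):
        if size <= budget:
            return tier
    return min(frame_sizes, key=frame_sizes.get)
```

Re-running this selection as bandwidth measurements change is one way to realize the adaptive behavior the summary describes; motion-based rendering would additionally vary `target_fps` with user movement.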
Slice-based rendering is employed to further reduce latency by transmitting only parts of frames as they are ready. This method decreases memory usage and power consumption. In scenarios where the wireless connection is disrupted, the device can function independently as a fallback mechanism. The system also includes methods for handling incomplete or missing frames by using previously received data, ensuring continuity in user experience.
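The two recovery behaviors above (slice-by-slice transmission and reuse of previously received data) compose naturally: the device assembles each display frame from whichever slices arrived in time, filling gaps from the last displayed frame. A minimal sketch, with slices modeled as opaque values:

```python
def assemble_frame(received: dict, previous: list, num_slices: int) -> list:
    """Build a display frame slice by slice.

    `received` maps slice index -> newly arrived slice data; any slice
    that is missing (lost or late on the wireless link) is filled from
    `previous`, the last successfully displayed frame, preserving
    continuity for the user.
    """
    return [received.get(i, previous[i]) for i in range(num_slices)]
```

Because each slice can be displayed as soon as it is decoded, the device never waits on a whole frame, which is the latency, memory, and power benefit the summary attributes to slice-based rendering.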
The device incorporates both world-facing and user-facing sensors to collect comprehensive environmental and personal data. The base station is equipped with substantial processing hardware, including CPUs, GPUs, and other processors, for efficient frame rendering and compression. This configuration enables high-quality mixed reality experiences without the limitations of traditional tethered or standalone systems, giving users greater freedom of movement and richer visual content.