US20240135657
2024-04-25
Physics
G06T19/006
A method and apparatus for augmented reality (AR) processing enhance the visual information displayed through an AR device. The core process determines a compensation parameter that accounts for light attenuation caused by the device's display area, enabling accurate representation of a target scene so that virtual images overlay seamlessly onto real-world visuals.
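One plausible way to obtain such a compensation parameter is to calibrate the display's transmittance from two reference captures of the same scene. The helper below is a hypothetical sketch, not the patent's stated procedure; the function name, its inputs, and the ratio-based estimate are all assumptions.

```python
import numpy as np

def estimate_transmittance(direct: np.ndarray, through_display: np.ndarray) -> float:
    """Estimate the display's transmittance as the ratio of scene brightness
    observed through the display area to brightness observed directly.

    A hypothetical calibration step: the patent only states that a
    compensation parameter addressing the display's attenuation is determined.
    """
    return float(np.mean(through_display) / np.mean(direct))
```

A transmittance near 1.0 would indicate a nearly transparent display needing little compensation, while lower values call for stronger brightness correction.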
The method begins by capturing a background image of the target scene with the AR device's camera; this image is generated without any of the display's light attenuation effects. A compensation image is then created by adjusting the brightness of the background image according to the previously determined compensation parameter, setting the stage for overlaying virtual objects.
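The brightness adjustment can be sketched under a simple linear light model, assuming the compensation parameter is the display's transmittance `t`: the display passes `t * scene` of the real-world light, so emitting an additional `(1 - t) * scene` restores the original brightness. The model and function below are illustrative assumptions, not the patent's exact formula.

```python
import numpy as np

def compensation_image(background: np.ndarray, transmittance: float) -> np.ndarray:
    """Scale the captured background so that display emission plus the
    attenuated real-world light reconstructs the original scene brightness.

    Assumed model:  perceived = transmittance * scene + emitted
    Choosing        emitted  = (1 - transmittance) * scene
    makes perceived == scene for every pixel.
    """
    if not 0.0 < transmittance <= 1.0:
        raise ValueError("transmittance must be in (0, 1]")
    comp = (1.0 - transmittance) * background.astype(np.float64)
    return np.clip(comp, 0.0, 255.0)
```

Under this model, a fully transparent display (`transmittance == 1.0`) yields an all-zero compensation image, since no correction is needed.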
Virtual object images are generated to be overlaid on the captured scene. These images may include areas that express dark colors, utilizing light attenuation effects to enhance realism. The process involves synthesizing the compensation image with virtual object elements, which can include shadows and other effects to create a cohesive visual experience.
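The synthesis step can be illustrated as alpha compositing of the virtual-object layer over the compensation image. The function and its `alpha` mask are assumptions for illustration; the patent does not specify a blending formula.

```python
import numpy as np

def synthesize(comp: np.ndarray, virtual: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Composite a virtual-object layer over the compensation image.

    Where alpha == 1 the virtual object replaces the compensation light;
    where alpha == 0 only the compensation image is emitted. Dark virtual
    regions (alpha == 1, virtual values near 0) emit little light, letting
    the display's own attenuation render them dark, as the text describes.
    """
    # Broadcast a per-pixel mask across color channels if needed.
    a = alpha[..., None] if alpha.ndim == comp.ndim - 1 else alpha
    return a * virtual + (1.0 - a) * comp
```

Shadows cast by virtual objects could be expressed the same way, by lowering `alpha`-weighted emission in the shadowed region so attenuation darkens it.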
To ensure that the displayed images match the user's perspective, adjustments are made using depth information that indicates the distance from the AR device to the target area. This includes calibrating for the difference between the camera's capture viewpoint and the user's observation viewpoint, so that virtual objects are represented accurately in three-dimensional space.
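The viewpoint adjustment can be sketched with the standard stereo disparity relation, assuming a rectified pinhole model in which the user's eye is offset from the camera by a known baseline along the horizontal axis. All names and parameters below are illustrative, not taken from the patent.

```python
def reproject_x(x_cam: float, depth: float, focal: float, baseline: float) -> float:
    """Shift a pixel's horizontal coordinate from the camera viewpoint to
    the user's eye viewpoint.

    Assumes a rectified pinhole model with the eye displaced `baseline`
    metres from the camera; the shift uses the standard stereo relation
        disparity = focal * baseline / depth
    so nearby pixels (small depth) shift more than distant ones.
    """
    if depth <= 0:
        raise ValueError("depth must be positive")
    return x_cam - focal * baseline / depth
```

This is why per-pixel depth matters: without it, a single global shift would misalign either near or far content relative to the user's line of sight.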
The described method is applicable to various AR devices, including glasses and head-mounted displays. These devices utilize cameras for scene capture and processors to execute the compensation and synthesis processes. Projectors may be employed to display the final images onto lenses, ensuring that users experience an integrated blend of real-world and virtual elements.