US20240420430
2024-12-19
Physics
G06T19/006
The patent application describes a system and method for creating an augmented reality (AR) experience that integrates virtual content into a user's physical environment. A mobile device equipped with a camera and a display captures images along with telemetry data, such as GPS location and orientation. The data is sent to a server system, which determines the precise location and orientation of the device by consulting data sources such as street maps, visual positioning system (VPS) data, and image anchors.
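As a concrete illustration, the telemetry payload might look like the following sketch. The field names and units are assumptions made for illustration; the application does not specify a wire format.

from dataclasses import dataclass

# Hypothetical telemetry record; fields are illustrative, not from the filing.
@dataclass
class Telemetry:
    timestamp_s: float    # capture time of the paired camera frame
    latitude_deg: float   # GPS position
    longitude_deg: float
    altitude_m: float
    roll_deg: float       # device orientation
    yaw_deg: float
    pitch_deg: float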
Once the server receives the telemetry data, it renders an image of virtual content corresponding to the device's location. This rendered image covers a larger area of the virtual world than the physical environment captured by the device, which gives the device margin to adjust the image for movement that occurs after the render. The server transmits the rendered image back to the mobile device over a wireless communication channel, while the device continues to capture subsequent images and telemetry data to update its position and orientation.
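One plausible reading of the over-rendering step, sketched below: the server pads the camera's reported field of view and scales the output resolution to match, so the render extends beyond the device's physical view. The padding factor, the CameraSettings fields, and the scene.render call are assumptions standing in for the server's actual renderer.

import math
from dataclasses import dataclass

@dataclass
class CameraSettings:
    fov_deg: float   # horizontal field of view reported by the device
    width_px: int    # camera image resolution
    height_px: int

OVERDRAW = 1.5  # illustrative padding factor; the filing does not give one

def render_for_device(telemetry, camera, scene):
    # Pad the reported FOV so the render covers more of the virtual
    # world than the device's physical view, capped below 180 degrees.
    padded_fov = min(camera.fov_deg * OVERDRAW, 170.0)
    # Scale output resolution so angular pixel density is preserved.
    scale = math.tan(math.radians(padded_fov / 2)) / math.tan(math.radians(camera.fov_deg / 2))
    return scene.render(
        position=(telemetry.latitude_deg, telemetry.longitude_deg, telemetry.altitude_m),
        orientation=(telemetry.roll_deg, telemetry.yaw_deg, telemetry.pitch_deg),
        fov_deg=padded_fov,
        resolution=(int(camera.width_px * scale), int(camera.height_px * scale)),
    )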
The mobile device composites the rendered image with the real-time images it captures, adjusting for any change in telemetry between the capture the server rendered against and the current capture. This keeps the virtual content aligned with the physical environment on the mobile device's display. The virtual content can include interactive elements, such as characters or objects, that are tied to specific locations.
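A simplified sketch of that compositing step: crop a live-frame-sized window out of the larger server render, offset by the yaw and pitch change since the render (a small-angle pixel-shift approximation; the filing does not specify the exact correction), then alpha-blend it over the camera frame. All names here are illustrative.

import numpy as np

def composite(camera_frame, rendered_rgba, old_t, new_t, px_per_deg):
    # Pixel shift approximating the rotation since the server rendered.
    dx = int(round((new_t.yaw_deg - old_t.yaw_deg) * px_per_deg))
    dy = int(round((new_t.pitch_deg - old_t.pitch_deg) * px_per_deg))
    h, w = camera_frame.shape[:2]
    rh, rw = rendered_rgba.shape[:2]
    # Window into the over-rendered image, clamped to its bounds.
    x0 = max(0, min((rw - w) // 2 + dx, rw - w))
    y0 = max(0, min((rh - h) // 2 + dy, rh - h))
    window = rendered_rgba[y0:y0 + h, x0:x0 + w]
    # Alpha-blend the virtual content over the live camera frame.
    alpha = window[..., 3:4].astype(np.float32) / 255.0
    blended = camera_frame * (1.0 - alpha) + window[..., :3] * alpha
    return blended.astype(np.uint8)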
Participants can interact with these virtual elements as they move through different locations, providing an immersive AR experience. The system allows multiple users to access the same persistent virtual content simultaneously, creating a shared experience. The server system can render images with multiple data channels, including color, depth, and shadow information, which supports higher-quality compositing on the device.
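If the device also has a depth estimate of the real scene, a per-pixel depth test against the render's depth channel lets real objects occlude virtual ones. A minimal sketch, with all names illustrative:

import numpy as np

def depth_composite(camera_rgb, camera_depth, render_rgb, render_depth):
    # Show the virtual pixel only where the virtual surface is nearer
    # than the real one; assumes aligned, equal-size images with depth
    # in the same units.
    virtual_in_front = (render_depth < camera_depth)[..., np.newaxis]
    return np.where(virtual_in_front, render_rgb, camera_rgb)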
The telemetry data includes orientation details such as roll, yaw, and pitch. Rendered images are transmitted as six channels of information divided across frames to optimize data flow. The mobile device sends telemetry at rates between 30 and 120 Hz and includes camera settings such as field of view (FOV), resolution, focal length, and f-stop. Together these measures support accurate integration of virtual elements into real-world environments.
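The six channels divided across frames could be read, for example, as splitting one six-channel render (color plus depth, shadow, and alpha, say) into two standard three-channel frames. A sketch under that assumption only; the filing does not spell out the packing:

import numpy as np

def split_channels(render_6ch):
    # Split an H x W x 6 render (e.g. R, G, B, depth, shadow, alpha)
    # into two H x W x 3 frames that fit an ordinary video stream.
    assert render_6ch.shape[-1] == 6
    return render_6ch[..., :3], render_6ch[..., 3:]

def merge_channels(frame_a, frame_b):
    # Reassemble the six-channel image on the mobile device.
    return np.concatenate([frame_a, frame_b], axis=-1)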