US20250104376
2025-03-27
Physics
G06T19/20
The electronic device described is designed to generate virtual objects within a real-world space using a combination of camera inputs and generative AI. It includes a display, a camera, memory for storing instructions, and a processor to execute those instructions. The device captures images of the surrounding space to obtain spatial information, collects user inputs based on those images, and derives object characteristic information from them. This information is processed by a generative AI model to produce three-dimensional (3D) virtual objects that are displayed on the device.
Key functionalities involve obtaining spatial details from real-world images captured by the camera, gathering user inputs based on these images, and deriving object characteristics from these inputs. The device then uses this information to create object generation data, which is fed into an AI model trained to generate 3D virtual objects. This model leverages spatial and object data to produce virtual objects that can be displayed in the user's environment.
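The data flow described above can be sketched as a simple pipeline. This is a minimal illustration only: names such as `SpatialInfo`, `ObjectCharacteristics`, `build_generation_data`, and `generate_virtual_object` are assumptions for readability, not identifiers from the patent, and the "model" is a stub standing in for the trained generative AI model.

```python
from dataclasses import dataclass

@dataclass
class SpatialInfo:
    """Spatial information derived from a real-world camera image (assumed fields)."""
    plane_normals: list      # detected surface orientations
    anchor_points: list      # 3D points where an object may be placed

@dataclass
class ObjectCharacteristics:
    """Object characteristic information derived from user input (assumed fields)."""
    category: str            # e.g. "chair"
    attributes: dict         # e.g. {"color": "red"}

def build_generation_data(spatial: SpatialInfo,
                          chars: ObjectCharacteristics) -> dict:
    """Combine spatial and object data into object generation data."""
    color = chars.attributes.get("color", "")
    return {
        "prompt": f"a {color} {chars.category}".strip(),
        "anchors": spatial.anchor_points,
        "attributes": chars.attributes,
    }

def generate_virtual_object(generation_data: dict) -> dict:
    """Stand-in for the trained generative AI model; a real system would
    return a mesh or point cloud rather than a string tag."""
    return {
        "mesh": f"mesh_for::{generation_data['prompt']}",
        "placed_at": generation_data["anchors"][0],
    }

# Example run: a red chair anchored 2 m in front of the camera.
spatial = SpatialInfo(plane_normals=[(0, 1, 0)], anchor_points=[(0.5, 0.0, 2.0)])
chars = ObjectCharacteristics(category="chair", attributes={"color": "red"})
obj = generate_virtual_object(build_generation_data(spatial, chars))
print(obj["placed_at"])  # → (0.5, 0.0, 2.0)
```

The key point the sketch captures is the separation of stages: spatial analysis and user input are merged into object generation data before the generative model is invoked, so the model can condition its output on both.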
The invention addresses limitations of current augmented reality (AR) technologies, which typically rely on pre-modeled 3D objects and cannot flexibly generate or modify virtual objects that have no real-world counterpart. Existing AR systems also rarely let users alter objects once generated, limiting the overall AR experience. This device aims to expand AR capabilities by enabling the creation and customization of virtual objects through generative AI techniques.
The technology has broad applications in various electronic devices, particularly those used for augmented reality experiences. Potential implementations include mobile devices, smart glasses, head-mounted displays (HMDs), and other wearable AR devices. These devices can enhance everyday activities such as navigation, information retrieval, and photography by integrating customizable virtual elements into real-world environments.
The device's components work together seamlessly: the camera captures images for spatial analysis; the processor interprets user inputs and object characteristics; the AI model generates detailed 3D objects; and the display overlays these objects on the user's view of the real world. This integration allows dynamic interaction between virtual and physical spaces, enhancing user engagement through immersive AR experiences.
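The final display step, placing a generated 3D object in the camera view, can be illustrated with a basic pinhole projection. This is a hedged sketch assuming a simple pinhole camera model; the focal length, principal point, and anchor coordinates are illustrative values, not parameters from the patent.

```python
def project_to_screen(point_3d, focal_px=800.0, cx=640.0, cy=360.0):
    """Project a 3D point in camera coordinates (x right, y down, z forward)
    onto pixel coordinates of an assumed 1280x720 display."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)

# Anchor for the generated object: 0.5 m to the right, 2 m in front.
anchor = (0.5, 0.0, 2.0)
u, v = project_to_screen(anchor)
print(round(u), round(v))  # → 840 360
```

A production AR system would use the device's calibrated camera intrinsics and a tracked 6-DoF pose rather than fixed constants, but the projection step is the bridge between the spatial information the camera provides and the pixels the display shows.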