US20250193626
2025-06-12
Electricity
H04S7/303
The patent application describes a system for creating immersive augmented reality (AR) experiences using an eyewear device equipped with spatial audio capabilities. The device integrates a processor, memory, image sensors, and speakers to capture and process images of the surrounding environment. By matching captured images against previously stored image data, the system identifies objects and target locations within the environment. The eyewear device then emits spatial audio signals that guide the user toward these targets, enhancing navigation through auditory cues.
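As one illustration of the image-matching step, the sketch below compares a captured frame against a stored reference image using ORB features and brute-force Hamming matching from OpenCV. The application does not name a particular matching algorithm, so the function find_target, its match threshold, and the centroid heuristic are assumptions for illustration only.

    # Hypothetical sketch: match a captured frame against a stored reference
    # image to decide whether a known target appears in the environment.
    # ORB + brute-force matching is an illustrative choice; the application
    # does not specify a matching algorithm.
    import cv2

    def find_target(frame_gray, reference_gray, min_matches=25):
        orb = cv2.ORB_create()
        kp1, des1 = orb.detectAndCompute(frame_gray, None)
        kp2, des2 = orb.detectAndCompute(reference_gray, None)
        if des1 is None or des2 is None:
            return None  # no features detected in one of the images
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        if len(matches) < min_matches:
            return None  # target not confidently identified in this frame
        good = matches[:min_matches]
        # Centroid of matched keypoints approximates the target's image location.
        xs = [kp1[m.queryIdx].pt[0] for m in good]
        ys = [kp1[m.queryIdx].pt[1] for m in good]
        return (sum(xs) / len(xs), sum(ys) / len(ys))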
The invention pertains to portable electronic devices, specifically wearable technology such as smart eyewear. It focuses on using spatial audio feedback to help users navigate their surroundings. The technology builds on sensors and interfaces common in modern electronic devices, including touch-sensitive surfaces and graphical user interfaces (GUIs), to enhance user interaction and experience.
The eyewear device's functionality centers on its ability to capture and analyze environmental images through dual cameras, generating a three-dimensional perspective. Strategically placed speakers create directional audio zones that guide the user: audio signals from different speakers indicate the direction of a target relative to the wearer, while volume adjustments signal proximity.
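The paragraph above implies a mapping from target bearing and distance to per-speaker volume. The sketch below shows one plausible gain law (constant-power panning plus linear distance attenuation); the specific formula and the spatial_gains name are assumptions, not taken from the application.

    # Minimal sketch of the directional-audio idea: pan a cue between left
    # and right speakers based on the target's bearing relative to the
    # wearer, and scale overall volume by proximity. The gain law is an
    # assumption; the application does not specify one.
    import math

    def spatial_gains(bearing_deg, distance_m, max_distance_m=20.0):
        """bearing_deg: target angle relative to head direction (-90 left, +90 right)."""
        # Constant-power pan: map the bearing onto a quarter circle of gains.
        pan = (max(-90.0, min(90.0, bearing_deg)) + 90.0) / 180.0  # 0..1
        left = math.cos(pan * math.pi / 2)
        right = math.sin(pan * math.pi / 2)
        # Louder as the wearer approaches the target, silent beyond max range.
        proximity = max(0.0, 1.0 - distance_m / max_distance_m)
        return left * proximity, right * proximity

    # Example: target slightly to the right and 5 m away.
    print(spatial_gains(bearing_deg=30.0, distance_m=5.0))  # right gain > left gain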
User interaction is facilitated through a touchpad integrated into the eyewear, allowing intuitive navigation of menus and selection of options via gestures such as taps and swipes. This interface provides a seamless way to interact with the AR content displayed on the device's optical assemblies, with each supported gesture triggering a specific action within the GUI.
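A gesture-to-action dispatch table is one straightforward way to realize the behavior described above. The gesture names and bound actions in this sketch are hypothetical, not taken from the application.

    # Illustrative sketch: each recognized touchpad gesture maps to a GUI
    # action. Gesture names and actions are placeholders.
    from typing import Callable, Dict

    class GestureRouter:
        def __init__(self) -> None:
            self._handlers: Dict[str, Callable[[], None]] = {}

        def on(self, gesture: str, handler: Callable[[], None]) -> None:
            self._handlers[gesture] = handler

        def dispatch(self, gesture: str) -> None:
            handler = self._handlers.get(gesture)
            if handler is not None:
                handler()  # trigger the GUI action bound to this gesture

    router = GestureRouter()
    router.on("tap", lambda: print("select highlighted menu item"))
    router.on("swipe_forward", lambda: print("move to next menu item"))
    router.on("swipe_backward", lambda: print("move to previous menu item"))
    router.dispatch("tap")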
Key components of the eyewear include visible-light cameras and speakers positioned around the device for spatial audio output. The cameras provide overlapping fields of view, enabling the generation of the depth images needed for accurate AR representation. This setup supports a comprehensive AR experience by combining visual data with auditory guidance.
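For a rectified stereo pair, overlapping fields of view yield depth through the standard relation Z = f * B / d, where f is the focal length in pixels, B the camera baseline, and d the pixel disparity between the two views. The sketch below applies that relation; the calibration numbers are placeholders, not values from the application.

    # Sketch of depth-image generation from a rectified stereo pair.
    import numpy as np

    def depth_from_disparity(disparity_px: np.ndarray,
                             focal_px: float, baseline_m: float) -> np.ndarray:
        depth = np.full(disparity_px.shape, np.inf)
        valid = disparity_px > 0  # zero disparity means no measurable depth
        depth[valid] = focal_px * baseline_m / disparity_px[valid]
        return depth

    # Example: eyewear cameras ~6 cm apart, 700 px focal length (placeholders).
    d = np.array([[14.0, 7.0], [0.0, 3.5]])
    print(depth_from_disparity(d, focal_px=700.0, baseline_m=0.06))
    # 700*0.06/14 = 3 m, /7 = 6 m, /3.5 = 12 m; zero disparity -> inf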