Invention Title:

DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR GAZE NAVIGATION

Publication number:

US20250103133

Publication date:

Section:

Physics

Class:

G06F3/013

Inventors:

Applicant:

Smart overview of the Invention

The patent application focuses on enhancing user interaction with virtual objects in extended reality environments through gaze-based navigation. It applies to computer systems equipped with display generation components and input devices that support virtual reality (VR) and mixed reality (MR) experiences. These systems use input methods including cameras, touch-sensitive surfaces, and eye-tracking technologies to interact with digital elements such as images, videos, and control icons within augmented reality (AR) settings.

Background and Challenges

Augmented reality technology has grown rapidly, integrating virtual elements that enhance or replace the real world. Traditional input devices like joysticks and touch screens often complicate user interactions with these environments. Current systems may lack sufficient feedback or require multiple inputs for simple tasks, leading to inefficiencies and increased cognitive load for users. These shortcomings are particularly problematic for battery-powered devices where energy efficiency is crucial.

Proposed Solutions

The disclosed methods aim to create more intuitive and efficient interfaces for interacting with virtual environments. By reducing the number and complexity of user inputs and tightening the connection between user actions and system responses, these methods improve the human-machine interface. The solutions center on gaze targets that respond to user eye movements, streamlining interactions and conserving energy in portable, battery-powered devices.
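As an illustrative sketch only (the class name, rectangular geometry, and hit-testing logic below are assumptions for exposition, not details taken from the claims), a gaze target can be modeled as an on-screen region that reports whether the user's gaze point falls inside it:

```python
from dataclasses import dataclass

@dataclass
class GazeTarget:
    """A rectangular on-screen region that reacts to the user's gaze.

    The fields and geometry are illustrative assumptions, not taken
    from the patent text.
    """
    x: float
    y: float
    width: float
    height: float
    content_id: str  # identifier of the content this target reveals

    def contains(self, gaze_x: float, gaze_y: float) -> bool:
        """Return True if the gaze point falls inside this target."""
        return (self.x <= gaze_x <= self.x + self.width
                and self.y <= gaze_y <= self.y + self.height)


def hit_test(targets, gaze_x, gaze_y):
    """Return the first target under the gaze point, or None."""
    for target in targets:
        if target.contains(gaze_x, gaze_y):
            return target
    return None
```

In this sketch, a single gaze sample replaces what would otherwise require a pointer movement plus a click, which is the input-reduction idea described above.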

System Components

The computer systems described can be desktop or portable devices such as tablets and smartphones, featuring components including touchpads, cameras, touch-sensitive displays, and eye-tracking mechanisms. They may also include hand-tracking components and output devices such as tactile output generators and audio systems. Users interact with these systems through gestures, eye movements, or voice commands, enabling functions like image editing, gaming, video conferencing, and web browsing.

Implementation Details

The application outlines methods for displaying user interfaces with multiple gaze targets. When a user's gaze is detected on a specific target, the system displays the content associated with that target. This approach reduces the need for manual inputs and provides a seamless experience by adapting to the user's focus. The invention also covers instructions stored in computer-readable media for executing these functions across different types of computing devices.
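One common way to decide that a gaze has settled on a target is a dwell timer: content is revealed only after the gaze has rested on the same target for a minimum duration. The sketch below assumes this dwell-based approach; the class name, threshold value, and update API are illustrative assumptions, not details from the application.

```python
class DwellSelector:
    """Accumulates gaze time on a target and fires a selection once a
    dwell threshold is reached. Threshold and API are assumptions."""

    def __init__(self, dwell_threshold_s: float = 0.5):
        self.dwell_threshold_s = dwell_threshold_s
        self.current_id = None   # target currently under the gaze
        self.elapsed_s = 0.0     # time accumulated on that target

    def update(self, target_id, dt_s: float):
        """Feed one gaze sample (target under gaze, time since last sample).

        Returns the target id once the dwell completes, else None.
        Looking away or at a different target resets the timer.
        """
        if target_id != self.current_id:
            self.current_id = target_id
            self.elapsed_s = 0.0
            return None
        if target_id is None:
            return None
        self.elapsed_s += dt_s
        if self.elapsed_s >= self.dwell_threshold_s:
            self.elapsed_s = 0.0  # require a fresh dwell to re-select
            return target_id
        return None
```

The reset-on-look-away behavior is what keeps brief, incidental glances from triggering content, which matches the goal of reducing accidental inputs and cognitive load.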