Invention Title:

DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR MODIFYING AVATARS IN THREE-DIMENSIONAL ENVIRONMENTS

Publication number:

US20250245942

Publication date:

Section:

Physics

Class:

G06T19/006

Inventors:

Applicant:

Smart Overview of the Invention

The patent application discusses techniques to enhance the user experience in modifying avatars within extended reality (XR) environments. It focuses on improving the efficiency and intuitiveness of user interactions with avatars, particularly when tracking of a user's physical movements is lost. This involves using heuristics to modify avatar displays, aiming to create a more seamless and natural interface.
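The tracking-loss heuristic described above can be sketched as follows. This is an illustrative assumption, not the application's actual algorithm: the names (`NEUTRAL_POSE`, `BLEND_RATE`, `update_avatar_hand`) and the specific easing behavior are hypothetical, chosen only to show the kind of smoothing a system might apply when sensor tracking drops out.

```python
# Hypothetical sketch of a tracking-loss heuristic of the kind the
# application describes. All names and constants here are illustrative
# assumptions, not terms from the patent itself.

NEUTRAL_POSE = (0.0, 0.0, 0.0)   # assumed resting position for an avatar hand
BLEND_RATE = 0.2                  # fraction moved toward the target each frame

def update_avatar_hand(current, sensor, tracked):
    """Return the next displayed hand position.

    While tracking is active, follow the sensor reading directly; when
    tracking is lost, ease the displayed hand toward a neutral pose
    instead of freezing or jumping, so the avatar moves naturally.
    """
    target = sensor if tracked else NEUTRAL_POSE
    return tuple(c + BLEND_RATE * (t - c) for c, t in zip(current, target))

# Example: tracking is lost with the hand at (1, 1, 1); over successive
# frames the displayed hand eases toward the neutral pose.
pos = (1.0, 1.0, 1.0)
for _ in range(3):
    pos = update_avatar_hand(pos, sensor=None, tracked=False)
```

Easing toward a fallback pose, rather than snapping to it, is one plausible way to achieve the "seamless and natural" transitions the overview emphasizes.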

Technical Field

The disclosed technology pertains to computer systems that facilitate virtual and mixed reality experiences. These systems involve electronic devices capable of presenting and altering user avatars through various input methods such as cameras, controllers, touch-sensitive surfaces, and displays. The goal is to integrate virtual elements with the physical world effectively.

Background

Recent advancements in augmented reality have led to environments where virtual elements enhance or replace parts of the physical world. Users interact with these environments through various input devices to modify virtual objects like avatars. Existing methods often result in unnatural avatar movements and require complex inputs, which can be inefficient and burdensome for users.

Summary of Innovation

The invention addresses these inefficiencies by offering improved methods for avatar modification in XR environments. It reduces cognitive load by simplifying inputs and enhancing feedback mechanisms. The aim is to streamline interactions, conserve energy (particularly in battery-operated devices), and improve the overall user experience through a more efficient human-machine interface.

Implementation Details

The system can be implemented on various devices, including desktops, tablets, wearables, and smartphones. It supports multiple input modalities such as touch, eye-tracking, hand-tracking, and voice commands. By reducing the need for precise sensors and enabling operation under varied conditions, the system enhances device portability and usability while conserving power.
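Support for multiple input modalities under varied conditions could be organized as a simple priority fallback, sketched below. This is an assumption about how such a system might degrade gracefully; the modality names, priority order, and `select_modality` helper are all hypothetical and do not appear in the application.

```python
# Illustrative sketch (not from the application) of choosing among the
# input modalities the system supports, falling back when a sensor is
# unavailable. The priority ordering is an assumed example.

PRIORITY = ["hand_tracking", "eye_tracking", "touch", "voice"]

def select_modality(available):
    """Return the highest-priority modality currently available.

    `available` maps modality names to booleans; when a preferred
    sensor is unavailable, the system degrades to the next option
    rather than requiring precise sensing to operate at all.
    """
    for modality in PRIORITY:
        if available.get(modality, False):
            return modality
    return None

# Hand tracking unavailable: the system falls back to eye tracking.
choice = select_modality({"hand_tracking": False, "eye_tracking": True})
```

A fallback chain like this reflects the stated goal of operating under varied conditions without depending on any single precise sensor.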