Invention Title:

CAMERA MAPPING IN A VIRTUAL EXPERIENCE

Publication number:

US20250086871

Publication date:

Section:

Physics

Class:

G06T13/40

Inventors:

Assignee:

Applicant:

Smart overview of the Invention

A metaverse application processes video frames to create a virtual experience in which a user's movements are accurately reflected by an avatar. Initially, the application receives a video frame that includes the user's head and determines facial landmarks. From these landmarks it generates an animation frame containing a 3D avatar and a background. The application also determines the orientation of the user's head and maps it to the orientation of the user's mobile device.
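The initial head-to-device mapping described above can be sketched in Python. This is a minimal illustration, not the patent's actual implementation: it estimates head roll from the angle of the line joining the two eye landmarks and records a baseline pairing with the device's reported roll. All function names and the landmark format are assumptions.

```python
import math

def estimate_head_roll(landmarks):
    """Estimate head roll (degrees) from the angle of the line joining
    the two eye landmarks; landmarks are (x, y) pixel coordinates."""
    (lx, ly), (rx, ry) = landmarks["left_eye"], landmarks["right_eye"]
    return math.degrees(math.atan2(ry - ly, rx - lx))

def initial_mapping(landmarks, device_orientation):
    """Record the baseline head and device orientation from the first
    frame, so later frames can be interpreted relative to this pose."""
    return {"baseline_head_roll": estimate_head_roll(landmarks),
            "baseline_device_roll": device_orientation["roll"]}

# First frame: level eyes (zero roll), device tilted 5 degrees.
frame0 = {"left_eye": (100, 200), "right_eye": (180, 200),
          "mouth": (140, 260)}
mapping = initial_mapping(frame0, {"roll": 5.0})
```

A real system would obtain the landmarks from a face-tracking model and the device orientation from the phone's motion sensors; here both are hard-coded to keep the sketch self-contained.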

Methodology

For subsequent video frames, the method updates the orientation of the mobile device relative to the user's head by analyzing changes in facial landmarks across frames. The application generates new animation frames from these updates, so that both the avatar and the background accurately reflect the user's movements.
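The per-frame update can be sketched as follows. This is a simplified assumption of how landmark deltas might drive the relative orientation: the average pixel displacement of the tracked landmarks is converted to yaw/pitch changes through a hypothetical pixels-to-degrees gain, which the patent does not specify.

```python
def landmark_delta(prev, curr):
    """Average (dx, dy) displacement of tracked landmarks between frames."""
    keys = prev.keys() & curr.keys()
    dx = sum(curr[k][0] - prev[k][0] for k in keys) / len(keys)
    dy = sum(curr[k][1] - prev[k][1] for k in keys) / len(keys)
    return dx, dy

def update_relative_orientation(state, prev, curr, gain=0.1):
    """Fold the landmark shift into the device-relative head yaw/pitch
    (degrees). 'gain' is an assumed pixels-to-degrees factor."""
    dx, dy = landmark_delta(prev, curr)
    state["yaw"] += dx * gain
    state["pitch"] += dy * gain
    return state

# Landmarks shift 10 px right and 5 px up between two frames.
state = update_relative_orientation(
    {"yaw": 0.0, "pitch": 0.0},
    {"left_eye": (100, 200), "right_eye": (180, 200)},
    {"left_eye": (110, 195), "right_eye": (190, 195)})
```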

Technical Details

The system employs a computer-implemented method that considers roll, yaw, and pitch to map device and head orientations. It uses bounding boxes to determine facial landmarks, focusing on key features such as the eyes and mouth. This allows the avatar's direction to be adjusted dynamically based on the percentage change in the positions of these landmarks between frames.
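One plausible way to derive yaw and pitch from a face bounding box, shown below as an illustrative sketch only, is to express the eye midpoint as a fraction of the box's width and height and scale the offset from center into an angle. The `max_yaw` and `max_pitch` ranges are assumed values, not figures from the patent.

```python
def head_angles_from_box(box, landmarks, max_yaw=45.0, max_pitch=30.0):
    """Approximate head yaw/pitch (degrees) from where the eye midpoint
    sits inside the face bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    (lx, ly), (rx, ry) = landmarks["left_eye"], landmarks["right_eye"]
    mx, my = (lx + rx) / 2, (ly + ry) / 2
    fx = (mx - x0) / (x1 - x0)        # 0.0 at left edge, 1.0 at right
    fy = (my - y0) / (y1 - y0)        # 0.0 at top edge, 1.0 at bottom
    yaw = (fx - 0.5) * 2 * max_yaw    # centered midpoint -> 0 degrees
    pitch = (fy - 0.5) * 2 * max_pitch
    return yaw, pitch

# Eyes centered horizontally but in the upper part of the box.
yaw, pitch = head_angles_from_box(
    (0, 0, 200, 200),
    {"left_eye": (80, 80), "right_eye": (120, 80)})
```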

Embodiments

Some embodiments include updating the background perspective in animation frames based on device orientation changes. If facial landmarks indicate movement directions like up, down, left, or right, these are translated into corresponding avatar movements. The system can adjust the avatar's proximity to simulate depth changes as the user moves closer or farther from the device.
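The direction and depth behavior described in these embodiments can be sketched together. In this hypothetical mapping, horizontal landmark motion is classified into a left/right avatar movement, and the change in inter-eye distance stands in for depth: eyes farther apart means the user moved closer, so the avatar is drawn at a larger scale. The names and the eye-distance heuristic are assumptions for illustration.

```python
def avatar_update(prev, curr):
    """Map landmark changes between frames to avatar motion: a
    left/right direction and a scale factor that simulates depth."""
    def eye_dist(frame):
        (lx, ly), (rx, ry) = frame["left_eye"], frame["right_eye"]
        return ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5

    dx = curr["left_eye"][0] - prev["left_eye"][0]
    direction = "right" if dx > 0 else "left" if dx < 0 else "center"
    scale = eye_dist(curr) / eye_dist(prev)  # > 1.0 means user moved closer
    return {"direction": direction, "scale": scale}

# User shifts 10 px to the right at a constant distance from the camera.
motion = avatar_update(
    {"left_eye": (100, 200), "right_eye": (180, 200)},
    {"left_eye": (110, 200), "right_eye": (190, 200)})
```

The same function reports a scale above 1.0 when the eye landmarks spread apart between frames, which a renderer could use to enlarge the avatar and shift the background perspective.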

Applications

This technology enhances virtual experiences such as video calls in a metaverse setting by providing realistic avatar interactions. It addresses a shortcoming of conventional systems, where static backgrounds and inaccurate avatar movements can cause user discomfort. By ensuring that both avatars and backgrounds respond dynamically to user movements, it aims to provide a more immersive experience.