Invention Title:

METHOD AND APPLICATION FOR ANIMATING COMPUTER GENERATED IMAGES

Publication number:

US20250182368

Publication date:

Section:

Physics

Class:

G06T13/40

Inventors:

Applicant:

Smart overview of the Invention

The disclosed technology introduces a method for animating digital assets during film production by capturing a subject's facial features and body movements in real time. The captured data is processed to generate animations that are mapped onto digital assets, producing augmented video content within a virtual environment. Because the system operates in real time, the augmented video data can be displayed immediately on one or more screens, integrating dynamic CGI into the filmmaking process.
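At a high level, this describes a real-time loop: capture the subject, extract facial and body motion, map it onto a digital asset, and display the augmented video immediately. The sketch below illustrates that loop in Python as a minimal, assumption-laden example; every function name (capture_frame, extract_motion, map_to_asset, composite_and_display) is a hypothetical placeholder, not an API from the patent.

import time

def capture_frame(camera):
    """Grab one video frame from the device camera (placeholder)."""
    return camera.read()

def extract_motion(frame):
    """Detect facial features and body joint positions in the frame (placeholder)."""
    return {"face": [], "body": []}

def map_to_asset(motion, asset):
    """Apply the subject's captured motion to the digital asset's rig (placeholder)."""
    return {"asset": asset, "pose": motion}

def composite_and_display(frame, animated_asset, screens):
    """Overlay the animated asset on the live frame and push the result to each screen."""
    for screen in screens:
        screen.show({"video": frame, "overlay": animated_asset})

def run_realtime(camera, asset, screens, fps=30):
    """Real-time animation loop: capture, animate, and display every frame."""
    period = 1.0 / fps
    while True:
        start = time.time()
        frame = capture_frame(camera)
        motion = extract_motion(frame)
        animated = map_to_asset(motion, asset)
        composite_and_display(frame, animated, screens)
        # Keep roughly to the target frame rate.
        time.sleep(max(0.0, period - (time.time() - start)))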

Background

Current video production techniques often utilize computer-generated imagery (CGI) to enhance visual content, but they offer limited customization and typically require specialized equipment beyond what mobile devices provide. Existing augmented reality filters offer basic effects but cannot capture complex facial expressions and full-body movements. This invention addresses the need for more sophisticated and customizable animation tools accessible on common devices such as smartphones, enabling creators to produce richer content without extensive hardware.

System and Methodology

The system comprises several components that work together to animate digital assets. Modules capture facial and body data from video signals; a skeleton module then processes this data and maps the detected movements onto a digital asset, producing an animated asset that reflects the subject's actions. The architecture also allows the resulting data to be stored for further manipulation or reuse in other contexts.
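One way to picture the skeleton module is as a retargeting step that writes detected joint positions onto the asset's named rig joints and keeps the result for storage or later reuse. The following Python sketch is a simplified illustration under that assumption; the class, joint names, and data layout are invented for the example and do not come from the patent.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SkeletonModule:
    """Maps captured subject movements onto a digital asset's rig (illustrative only)."""
    # Correspondence between detected landmark names and the asset's rig joints.
    joint_map: Dict[str, str]
    # Recorded poses, kept so the animation data can be stored and reused later.
    history: List[Dict[str, Vec3]] = field(default_factory=list)

    def map_frame(self, detected_joints: Dict[str, Vec3]) -> Dict[str, Vec3]:
        """Produce one animated pose for the asset from one frame of capture data."""
        pose = {}
        for landmark, rig_joint in self.joint_map.items():
            if landmark in detected_joints:
                pose[rig_joint] = detected_joints[landmark]
        self.history.append(pose)
        return pose

# Example usage with invented joint names.
module = SkeletonModule(joint_map={"left_wrist": "Asset_L_Hand", "head": "Asset_Head"})
pose = module.map_frame({"left_wrist": (0.1, 1.2, 0.3), "head": (0.0, 1.7, 0.0)})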

Applications

The described technology is particularly beneficial for industries such as film and television, where it can be integrated into previsualization systems to enhance production quality. By capturing and animating body movements and facial expressions in real time, filmmakers can visualize scenes more effectively and make adjustments on the fly. This capability is exemplified by its potential incorporation into existing previsualization devices, offering an innovative toolset for creative professionals.

Technical Implementation

The animation system uses a mobile device equipped with image sensors and a processor to capture image data of the subject and surrounding environment. These sensors may include multiple cameras optimized for specific tasks, such as macro or wide-angle photography. The processor executes applications and instructions stored in memory, coordinating the capture and processing of visual data to create augmented video content. The system is flexible enough to be adapted across device types, from smartphones to desktop computers.
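As a rough illustration of this hardware description, the configuration below models a device whose image sensors include cameras tuned for different tasks, with the processor selecting a sensor and coordinating capture. The class and field names are assumptions made for the example, not terms from the patent.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CameraSensor:
    """One image sensor on the device, optimized for a particular task."""
    name: str
    role: str                    # e.g. "wide_angle", "macro", "telephoto"
    resolution: Tuple[int, int]  # (width, height) in pixels

@dataclass
class CaptureDevice:
    """A mobile device (or desktop computer) hosting the animation application."""
    sensors: List[CameraSensor]

    def select_sensor(self, role: str) -> CameraSensor:
        """Pick the sensor best suited to the requested capture task."""
        for sensor in self.sensors:
            if sensor.role == role:
                return sensor
        return self.sensors[0]  # fall back to the default camera

# Example usage with invented sensor specifications.
phone = CaptureDevice(sensors=[
    CameraSensor("main", "wide_angle", (4032, 3024)),
    CameraSensor("close-up", "macro", (3264, 2448)),
])
sensor = phone.select_sensor("wide_angle")  # the processor coordinates capture from here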