Invention Title:

METHODS AND SYSTEMS FOR PRODUCING CONTENT IN MULTIPLE REALITY ENVIRONMENTS

Publication number:

US20250086892

Section:

Physics

Class:

G06T17/20

Smart overview of the Invention

The patent application describes a volumetric video production platform designed to integrate traditional filmmaking techniques with modern content environments such as video games, augmented reality (AR), virtual reality (VR), and mixed reality (MR). The platform gives filmmakers tools to create content for these environments without first acquiring expertise in game engines or other complex software systems. It bridges the gap between video editing and interactive digital environments by enabling the creation of 3D geometric objects from video footage.

Technical Components

The platform pairs cameras, enhanced with hardware accessories, with editing tools that convert video segments into 3D geometric objects usable by game engines and similar platforms. It is built around a data processing pipeline architecture that includes a super-resolution stage and a deferred surface reconstruction stage. This architecture optimizes the processing of video and depth pixel information, making volumetric video production more efficient and accessible. The super-resolution stage combines low-resolution depth data with high-resolution video signals to produce high-quality synthetic images.
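The application does not disclose the exact super-resolution method. One standard way to combine a low-resolution depth map with a high-resolution color frame is joint bilateral upsampling, where color edges guide the depth filter; the sketch below is illustrative only, and all parameter values are assumptions:

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, color_hi, sigma_s=2.0, sigma_r=0.1):
    """Upsample a low-resolution depth map to the resolution of a
    high-resolution color image, using color similarity to avoid
    smoothing depth across object edges."""
    h, w = color_hi.shape[:2]
    scale = h // depth_lo.shape[0]          # assumes an integer scale factor
    # Nearest-neighbour upsample as a starting estimate.
    depth_up = np.kron(depth_lo, np.ones((scale, scale)))
    out = np.zeros((h, w))
    r = int(2 * sigma_s)                    # filter window radius
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            dy, dx = np.mgrid[y0:y1, x0:x1]
            # Spatial weight: closer pixels count more.
            spatial = np.exp(-((dy - y) ** 2 + (dx - x) ** 2) / (2 * sigma_s ** 2))
            # Range weight: pixels with similar color count more.
            diff = color_hi[y0:y1, x0:x1] - color_hi[y, x]
            range_w = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma_r ** 2))
            weights = spatial * range_w
            out[y, x] = np.sum(weights * depth_up[y0:y1, x0:x1]) / np.sum(weights)
    return out
```

The brute-force double loop is for clarity; a production pipeline stage would run an equivalent filter on the GPU.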

Data Handling

A texture packing module compresses the input depth and color video streams into planar image data. This enables dynamic surface density modulation: object surfaces can be computed in real time based on run-time conditions. The deferred surface reconstruction engine reduces bandwidth and computational demands by delaying surface construction until it is needed. Finally, a view-dependent blending technique refines the output in real time according to the user's perspective within the display environment.
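As an illustration of the texture-packing idea, the hypothetical sketch below packs a color frame and a quantized depth map side by side into a single planar 8-bit image, so both streams can travel through an ordinary video codec; the near/far clip planes and the side-by-side layout are assumptions, not details from the application:

```python
import numpy as np

def pack_frame(color, depth, depth_near=0.3, depth_far=5.0):
    """Pack an 8-bit color image and a float depth map side by side
    into one planar 8-bit frame suitable for a standard video codec."""
    h, w = depth.shape
    # Normalise depth into [0, 255] between hypothetical clip planes.
    d = np.clip((depth - depth_near) / (depth_far - depth_near), 0.0, 1.0)
    depth_u8 = (d * 255).astype(np.uint8)
    packed = np.zeros((h, w * 2, 3), dtype=np.uint8)
    packed[:, :w] = color
    packed[:, w:] = depth_u8[..., None]     # grey-coded depth plane
    return packed

def unpack_frame(packed, depth_near=0.3, depth_far=5.0):
    """Recover the color image and an approximate depth map."""
    h, w2, _ = packed.shape
    w = w2 // 2
    color = packed[:, :w]
    d = packed[:, w:, 0].astype(np.float32) / 255.0
    depth = d * (depth_far - depth_near) + depth_near
    return color, depth
```

Quantizing depth to 8 bits loses precision; real packing schemes often spread depth across two color channels for finer steps.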

User Interface

The platform includes an editing environment that mirrors familiar workflows for filmmakers, allowing them to manipulate depth information and edit volumetric video content objects. This user interface supports non-linear storytelling by enabling filmmakers to define behavioral conditions for 3D objects based on user interactions within the final display environment. The interface also allows users to adjust various parameters of volumetric content during editing, ensuring seamless integration into 3D environments.
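The application does not specify a data model for these behavioral conditions. A minimal sketch of how trigger-to-action rules might be attached to a volumetric object is shown below; every name here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Behavior:
    """One rule a filmmaker defines in the editor: when `trigger`
    occurs in the display environment, fire `action` on `target`."""
    trigger: str    # e.g. "gaze", "proximity", "tap"
    action: str     # e.g. "play_clip", "branch_to"
    target: str     # clip or scene identifier

@dataclass
class VolumetricObject:
    name: str
    behaviors: list = field(default_factory=list)

    def on_event(self, trigger):
        """Return the (action, target) pairs a user interaction fires."""
        return [(b.action, b.target) for b in self.behaviors
                if b.trigger == trigger]

# A filmmaker wires up non-linear branching for a captured actor.
actor = VolumetricObject("live_actor")
actor.behaviors.append(Behavior("proximity", "play_clip", "greeting"))
actor.behaviors.append(Behavior("gaze", "branch_to", "scene_2"))
```

At run time the display environment would evaluate `on_event` whenever the user interacts with the object, which is how the rules support branching, non-linear storytelling.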

Output and Integration

The platform supports the delivery of volumetric video content to AR and VR environments, adapting output data structures for seamless integration. It includes a real-time streaming system that transmits 3D volumetric content objects, such as live actor footage, into these digital spaces. For end users, this means that video content captured by filmmakers can be effortlessly incorporated into interactive environments like video games and virtual reality experiences, enhancing the viewer's engagement with dynamic 3D content.
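The streaming format itself is not detailed in the application. The sketch below shows one hypothetical way to frame a volumetric payload for transport, with a fixed binary header and a compressed body; the header fields and choice of codec are assumptions for illustration:

```python
import struct
import zlib

# Header: frame id (u32), payload length (u32), width (u16),
# height (u16), flags (u8), all network byte order.
FRAME_HEADER = struct.Struct("!IIHHB")

def encode_frame(frame_id, width, height, payload, keyframe=False):
    """Serialize one volumetric frame: fixed header + compressed body."""
    body = zlib.compress(payload)
    flags = 1 if keyframe else 0
    return FRAME_HEADER.pack(frame_id, len(body), width, height, flags) + body

def decode_frame(blob):
    """Parse a frame produced by encode_frame."""
    frame_id, n, width, height, flags = FRAME_HEADER.unpack_from(blob)
    body = blob[FRAME_HEADER.size:FRAME_HEADER.size + n]
    return frame_id, width, height, bool(flags & 1), zlib.decompress(body)
```

A real-time system would layer this kind of framing over a transport such as WebRTC or SRT so that live actor footage arrives in the AR/VR scene with low latency.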