Invention Title:

SYSTEMS AND METHODS FOR CAPTURING, TRANSPORTING, AND REPRODUCING THREE-DIMENSIONAL SIMULATIONS AS INTERACTIVE VOLUMETRIC DISPLAYS

Publication number:

US20250182401

Publication date:

Section:

Physics

Class:

G06T17/20

Inventors:

Applicant:

Smart overview of the Invention

The invention provides a system, method, and device designed to capture, transport, and display three-dimensional (3D) volumetric simulations. These simulations can be interactively viewed by multiple users in both recorded and real-time formats. The system operates independently of the original application that generated the 3D simulation, making it particularly useful for applications like gameplay capture, virtual reality (VR) game streaming, and cross-platform communications.

Background

Virtual Reality (VR) and Augmented Reality (AR) have evolved significantly since their inception in the 1970s, expanding from specialized uses in industries like medical and military training to widespread consumer applications. The growth of VR-related products is evident in the heavy investment that major companies such as Meta, Google, and Apple have made in VR and AR technologies. Mixed reality, a blend of real and virtual worlds, has also seen diverse applications. The rise of VTubing, where entertainers use virtual avatars controlled by motion capture technology, highlights the increasing demand for interactive 3D environments.

Technical Details

The invention captures virtual 3D volumetric simulations using multiple virtual cameras positioned within the simulation environment. Each camera records distinct views of the simulation, which are then processed into time-tagged frames. These frames can be transported to a user's screen for live viewing or stored for later playback. This process allows users to interact with the simulation from different perspectives without needing the original application.
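As a concrete illustration of this capture model, the sketch below defines a minimal data layout for time-tagged frames gathered from several virtual cameras. The names (CapturedFrame, CaptureSession, frames_at) and field choices are illustrative assumptions for the example, not details taken from the patent.

    from dataclasses import dataclass, field
    from typing import List

    import numpy as np

    @dataclass
    class CapturedFrame:
        """One time-tagged frame rendered by a single virtual camera."""
        camera_id: int
        time_tag: float          # simulation time, in seconds
        color: np.ndarray        # H x W x 3 color buffer
        depth: np.ndarray        # H x W depth buffer
        view_matrix: np.ndarray  # 4 x 4 camera transform at capture time

    @dataclass
    class CaptureSession:
        """All frames recorded during a capture, across every virtual camera."""
        frames: List[CapturedFrame] = field(default_factory=list)

        def frames_at(self, t: float, tolerance: float = 1e-3) -> List[CapturedFrame]:
            """Return every camera view whose time tag matches playback time t,
            which is what lets a viewer change perspective without running the
            original application."""
            return [f for f in self.frames if abs(f.time_tag - t) <= tolerance]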

Preferred Embodiments

  • Rendering Process: Virtual cameras render their views using separate render-to-texture processes. Each view includes color and depth information and is saved with a time tag alongside the camera's view matrix.
  • Equirectangular View: Captures an additional spherical view of the environment to reduce the computational load during playback (see the mapping sketch after this list).
  • Audio Synchronization: Audio is recorded separately with time tags so it can be re-aligned with the visual frames at playback (see the second sketch below). It may include spatial sound data if the simulation supports it.
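A minimal sketch of the spherical mapping behind an equirectangular view: it maps a unit view direction to pixel coordinates in an equirectangular (longitude/latitude) image. The function name, axis convention, and parameters are assumptions for illustration, not code from the patent.

    import numpy as np

    def direction_to_equirect(direction, width, height):
        """Map a unit view direction to (u, v) pixel coordinates in an
        equirectangular image: longitude spans the width, latitude the
        height. This is the standard spherical mapping."""
        x, y, z = direction / np.linalg.norm(direction)
        lon = np.arctan2(x, -z)                  # azimuth in [-pi, pi]
        lat = np.arcsin(np.clip(y, -1.0, 1.0))   # elevation in [-pi/2, pi/2]
        u = (lon / (2.0 * np.pi) + 0.5) * (width - 1)
        v = (0.5 - lat / np.pi) * (height - 1)
        return int(round(u)), int(round(v))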
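And a sketch of time-tag-based audio alignment, assuming audio is stored as chunks with ascending time tags; nearest_audio_chunk is a hypothetical helper, not a name from the patent.

    import bisect

    def nearest_audio_chunk(audio_time_tags, frame_time):
        """Index of the audio chunk whose time tag is closest to a video
        frame's time tag; audio_time_tags must be sorted ascending."""
        i = bisect.bisect_left(audio_time_tags, frame_time)
        if i == 0:
            return 0
        if i == len(audio_time_tags):
            return len(audio_time_tags) - 1
        before, after = audio_time_tags[i - 1], audio_time_tags[i]
        # Pick whichever neighbor lies closer to the frame's time tag.
        return i if after - frame_time < frame_time - before else i - 1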

Transmission and Compression

Captured video frames and audio are compressed, packetized, and either streamed live or stored for later playback. Each camera's depth buffer is converted to grayscale video on a frame-by-frame basis, while color frames are compressed using standard video codecs. This approach allows users to select their desired quality level for an optimal viewing experience. Audio is carried as a multichannel Pulse Code Modulation (PCM) stream that is compressed for transport.
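As a rough illustration of the depth-to-grayscale step, the sketch below quantizes a floating-point depth buffer into an 8-bit frame that a standard video codec can consume. The 8-bit range and the near/far normalization are assumptions for the example, not details taken from the patent.

    import numpy as np

    def depth_to_grayscale(depth, near, far):
        """Quantize a floating-point depth buffer into an 8-bit grayscale
        frame, normalizing depths between the near and far clip planes so
        each frame can be handed to a standard video encoder. The 8-bit
        range is an assumption; a real pipeline might keep 10 or 16 bits."""
        normalized = (np.clip(depth, near, far) - near) / (far - near)
        return (normalized * 255.0).astype(np.uint8)

    # Example: quantize one 720p depth buffer from a simulated camera.
    depth_frame = np.random.uniform(0.1, 100.0, size=(720, 1280)).astype(np.float32)
    gray_frame = depth_to_grayscale(depth_frame, near=0.1, far=100.0)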