Invention Title:

SYSTEMS AND METHODS FOR AN IMMERSIVE AUDIO EXPERIENCE

Publication number:

US20240379113

Publication date:

Section:

Physics

Class:

G10L19/00

Inventors:

Assignee:

Applicant:

Smart overview of the Invention

The invention centers on a media file that delivers an immersive audio experience. The file contains stem pulse-code modulation (PCM) data for the various instrument types in an audio track, together with vibe data: binding objects that link audio/visual (A/V) fixture capabilities to the track's stem PCM data, and rule objects that determine when those bindings apply. Based on the vibe data, the media file also provides visualization instructions for A/V fixtures, enriching the sensory experience of playback.
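
One way to picture the media file described above is as a container pairing per-instrument stem PCM data with vibe data (binding objects plus rule objects). The Python sketch below is purely illustrative; every class and field name is an assumption for readability, not a term defined by the publication.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class StemPCM:
        """PCM samples for a single instrument stem (e.g., drums, bass, vocals)."""
        instrument_type: str      # hypothetical label, e.g., "drums"
        sample_rate: int          # samples per second
        samples: bytes            # raw PCM data

    @dataclass
    class BindingObject:
        """Links an A/V fixture capability to a stem of the audio track."""
        fixture_capability: str   # e.g., "color", "intensity"
        stem_instrument: str      # which stem drives the capability

    @dataclass
    class RuleObject:
        """Determines when a binding applies (here, a simple time window)."""
        binding_index: int        # index into VibeData.bindings
        start_ms: int             # time-coded start of applicability
        end_ms: int               # time-coded end of applicability

    @dataclass
    class VibeData:
        bindings: List[BindingObject] = field(default_factory=list)
        rules: List[RuleObject] = field(default_factory=list)

    @dataclass
    class MediaFile:
        """Container pairing stem PCM data with vibe data for one audio track."""
        track_title: str
        stems: List[StemPCM] = field(default_factory=list)
        vibe: VibeData = field(default_factory=VibeData)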

Context

Home audio experiences have traditionally lacked the multi-sensory character of live music events, which combine sound with visual and other sensory elements. Recreating that richness at home typically requires a significant investment in equipment and setup effort. The invention aims to bridge this gap with a system that replicates or simulates the sensory richness of live events without extensive resources or technical setup.

System Features

The system is designed to create immersive experiences in a variety of settings, such as homes or vehicles, letting users enjoy music, podcasts, workouts, and more in a sensory-rich environment. It can simulate live events or generate unique artist-driven experiences with visual elements such as light patterns and motion graphics. The system combines hardware such as LED lights and projectors with software running on user computing devices connected to cloud networks.
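
As a rough illustration of how the software side might describe the connected hardware, the sketch below registers a few hypothetical A/V fixtures and their capabilities. The device names, capability labels, and registry structure are assumptions, not details taken from the publication.

    # Hypothetical registry of A/V fixtures the playback software could manage.
    # Each entry lists the capabilities that vibe-data bindings might target.
    AV_FIXTURE_REGISTRY = {
        "led_strip_living_room": {
            "type": "led_strip",
            "capabilities": ["color", "intensity", "strobe"],
            "transport": "wifi",   # devices communicate wirelessly in the described system
        },
        "projector_wall": {
            "type": "projector",
            "capabilities": ["motion_graphics", "brightness"],
            "transport": "wifi",
        },
    }

    def fixtures_supporting(capability: str) -> list[str]:
        """Return the names of registered fixtures that expose a given capability."""
        return [
            name for name, spec in AV_FIXTURE_REGISTRY.items()
            if capability in spec["capabilities"]
        ]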

Technological Approach

The system uses connected smart devices to translate the harmonic content of an audio stream into a visual experience. Machine learning extracts time-coded metadata from the audio to design the "vibe" that accompanies the auditory content: the system analyzes an audio track to generate vibe data, whose time-coded metadata is then used to create visualization instructions for A/V devices.
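
The publication does not name a specific analysis technique, but one conventional way to obtain time-coded metadata from an audio track is beat and onset detection. The sketch below uses the third-party librosa library only as an illustration; the feature set, function names, and the mapping to visualization instructions are assumptions, not the patented method.

    import numpy as np
    import librosa  # third-party audio-analysis library, used here only for illustration

    def extract_time_coded_metadata(audio_path: str) -> dict:
        """Derive simple time-coded features (tempo, beats, onsets) that could
        seed vibe data. Illustrative only; the publication does not prescribe
        a particular library or feature set."""
        y, sr = librosa.load(audio_path, sr=None)

        # Estimated tempo (BPM) and beat positions, converted to seconds.
        tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
        beat_times = librosa.frames_to_time(beat_frames, sr=sr)

        # Onset (attack) times in seconds, useful for percussive flashes.
        onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")

        return {
            "tempo_bpm": float(np.atleast_1d(tempo)[0]),
            "beat_times_s": [float(t) for t in beat_times],
            "onset_times_s": [float(t) for t in onset_times],
        }

    def metadata_to_visualization_instructions(metadata: dict) -> list[dict]:
        """Map time-coded metadata to hypothetical visualization instructions,
        e.g., pulse an LED fixture on every detected beat."""
        return [
            {"time_s": t, "fixture": "led_strip_living_room", "action": "pulse"}
            for t in metadata["beat_times_s"]
        ]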

User Interaction

Users select audio tracks through a graphical interface on their computing devices, which communicate wirelessly with the A/V devices. The system analyzes track metadata to determine each track's characteristics and generate vibe data, and users can edit that vibe data to further customize the experience. The system operates with minimal user input, relying on AI and machine learning to integrate the audio and visual elements seamlessly.
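
Tying the pieces together, a minimal end-to-end flow might look like the sketch below, which reuses the hypothetical helpers from the earlier sketches: the user's selected track is analyzed, the resulting instructions can be adjusted by a user edit, and the time-coded schedule is handed to the wirelessly connected A/V devices. The device interface (a schedule() method) is an assumption for illustration, not an API from the publication.

    def play_with_vibe(track_path: str, av_devices: list, user_edits: dict | None = None) -> None:
        """Hypothetical end-to-end flow: analyze the selected track, build
        visualization instructions, apply optional user edits, and hand the
        schedule to each connected A/V device."""
        metadata = extract_time_coded_metadata(track_path)                # see earlier sketch
        instructions = metadata_to_visualization_instructions(metadata)   # see earlier sketch

        # A user edit might retarget the effect at a different fixture.
        if user_edits and "fixture" in user_edits:
            for step in instructions:
                step["fixture"] = user_edits["fixture"]

        # Each device receives the time-coded schedule (wireless transport
        # in the described system; the schedule() method is assumed here).
        for device in av_devices:
            device.schedule(instructions)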