US20250063153
2025-02-20
Electricity
H04N13/302
The patent application describes a method for enhancing the immersive experience of autostereoscopic display devices by simultaneously presenting three-dimensional images and sound. This method adapts both visual and auditory elements to the viewer's position, aligning them with the viewer's eyes and ears relative to the display. This adaptation allows the virtual scene to be experienced as if it were real, enhancing immersion and interaction, which is particularly useful in applications like teleconferencing and gaming.
Autostereoscopic displays have become popular due to their ability to present 3D images without requiring eyewear. These displays use eye-tracking technology together with lenticular lenses or parallax barriers to deliver a separate image to each eye, creating depth perception. However, conventional systems often fail to provide a realistic auditory experience that matches the visual perspective, especially when the viewer moves or interacts with virtual objects.
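The eye-tracked view steering described above can be sketched as a simple mapping from a tracked eye position to one of several interleaved views. This is a minimal illustration only; the view count, viewing-cone angle, and coordinate convention are assumptions for the example, not values from the application.

```python
import math

# Assumed display parameters (illustrative, not from the patent):
NUM_VIEWS = 8          # interleaved views behind each lenticule
VIEW_CONE_DEG = 40.0   # total angular span covered by the views

def view_index_for_eye(eye_x, eye_z):
    """Map a tracked eye position (metres, display coordinates with
    x to the viewer's right and z toward the viewer) to the index of
    the interleaved view the lenticular optics should steer to it."""
    angle = math.degrees(math.atan2(eye_x, eye_z))      # horizontal angle off-axis
    t = (angle + VIEW_CONE_DEG / 2) / VIEW_CONE_DEG     # normalise to 0..1
    # Clamp to the valid view range at the edges of the viewing cone.
    return max(0, min(NUM_VIEWS - 1, int(t * NUM_VIEWS)))
```

In this sketch each eye gets its own view index, so the display can present a distinct left and right image without eyewear; a viewer centred in front of the display lands in the middle of the view fan.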
Current technology suffers from sound that does not accurately track head movements or interactions with the virtual scene. Traditional solutions, such as multiple loudspeakers or headphones, disrupt immersion by failing to align sound perception with visual cues, and they often require additional equipment that detracts from the user's engagement with the virtual environment.
The invention aims to address these shortcomings by adjusting both visual and auditory outputs based on the viewer's position. This involves using an eye-tracking system and means for determining ear positions, allowing the device to tailor both image and sound delivery. The processor within the device uses virtual cameras and virtual microphones to generate the data for creating synchronized 3D visuals and audio.
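One plausible way to obtain the ear positions mentioned above is to derive them from the eye-tracker output using average head geometry. The sketch below assumes this approach; the offset values and coordinate convention are illustrative assumptions, not figures from the application.

```python
import numpy as np

# Assumed average head-geometry offsets (illustrative only):
EYE_TO_EAR_BACK = 0.09   # metres behind the eye plane
EYE_TO_EAR_DOWN = 0.02   # metres below eye level

def estimate_ear_positions(left_eye, right_eye):
    """Estimate ear positions from tracked eye positions.

    Coordinates are in display space: x to the viewer's right,
    y up, z toward the viewer (so 'behind the face' is +z)."""
    left_eye = np.asarray(left_eye, dtype=float)
    right_eye = np.asarray(right_eye, dtype=float)

    # Ears sit behind the eye plane and slightly below eye level.
    offset = (EYE_TO_EAR_BACK * np.array([0.0, 0.0, 1.0])
              + EYE_TO_EAR_DOWN * np.array([0.0, -1.0, 0.0]))
    return left_eye + offset, right_eye + offset
```

Because the estimate is a fixed offset from the tracked eyes, the ear positions follow every head movement the eye tracker observes, which is what lets the device keep the 3D sound aligned with the 3D image.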
The proposed device includes a display portion for rendering 3D images and audio means for producing 3D sound. It features an eye-tracking system along with mechanisms for ear position detection. The processor generates image data for left and right eyes and sound data for left and right ears, ensuring that both image and sound are adapted to the viewer's head movements, thus enhancing the overall immersive experience.
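The per-ear sound data generation described above can be illustrated with a deliberately simplified model: a virtual microphone at each estimated ear position, with inverse-distance gain and time-of-flight delay for a virtual sound source. A real device would likely use head-related transfer functions; this sketch only shows how the sound data for each ear follows the tracked ear positions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def per_ear_render_params(source_pos, left_ear, right_ear):
    """Compute illustrative gain and delay for each ear, as if a
    virtual microphone were placed at each tracked ear position."""
    params = {}
    for name, ear in (("left", left_ear), ("right", right_ear)):
        d = np.linalg.norm(np.asarray(source_pos, float)
                           - np.asarray(ear, float))
        params[name] = {
            "gain": 1.0 / max(d, 0.1),       # inverse-distance attenuation
            "delay_s": d / SPEED_OF_SOUND,   # propagation delay to this ear
        }
    return params
```

When the head turns or moves, the ear positions change, the interaural gain and delay differences change with them, and the virtual source is perceived as staying fixed in the scene, which matches the visual perspective produced for the eyes.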