US20240119619
2024-04-11
Physics
G06T7/70
The described techniques aim to enhance the experience of videoconferencing by creating a realistic sense of physical presence among participants. Unlike traditional methods that rely on fixed camera perspectives, this approach allows multiple users to engage in an immersive experience without the need for head-mounted displays or other wearable devices.
A real-time three-dimensional model of the remote conference scene is generated and transmitted to local participants. This model adapts based on each participant's location and perspective, providing them with a spatially accurate stereoscopic view. Changes in perspective dynamically alter what each participant sees, enhancing the feeling of being physically present with others.
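One way such a perspective-dependent stereoscopic view might be produced is sketched below; this is an illustrative reconstruction, not the application's disclosed implementation. The interpupillary distance, head position, and look-at target are assumed values, and the view-matrix construction is the standard graphics convention.

```python
import numpy as np

# Hypothetical sketch: derive per-eye camera positions from a tracked
# head position so a participant receives a spatially accurate stereo
# pair. All names and parameters are illustrative assumptions.

IPD = 0.063  # average interpupillary distance in metres (assumed)

def eye_positions(head_pos, right_dir, ipd=IPD):
    """Return (left_eye, right_eye) world positions for one participant."""
    head_pos = np.asarray(head_pos, dtype=float)
    right_dir = np.asarray(right_dir, dtype=float)
    right_dir = right_dir / np.linalg.norm(right_dir)
    offset = 0.5 * ipd * right_dir
    return head_pos - offset, head_pos + offset

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 world-to-camera view matrix at `eye` facing `target`."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)          # forward axis
    s = np.cross(f, up)
    s /= np.linalg.norm(s)          # right axis
    u = np.cross(s, f)              # true up axis
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye     # translate eye to the origin
    return m

# A participant standing 2 m from the shared 3D scene:
left, right = eye_positions([0.0, 1.6, 2.0], [1.0, 0.0, 0.0])
view_l = look_at(left, [0.0, 1.5, 0.0])
view_r = look_at(right, [0.0, 1.5, 0.0])
```

As the tracked head position changes, the two view matrices are recomputed each frame, which is what makes the rendered perspective shift dynamically with the participant's movement.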
Traditional videoconferencing setups often limit the sense of presence because their camera angles are fixed. In contrast, the described approach uses a model generation engine that creates a detailed 3D representation of the remote environment. By identifying the location and eye position of each participant, the system presents a personalized view, overcoming the limitations of conventional video feeds.
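The eye-position identification mentioned above could, under common assumptions, reduce to back-projecting a detected eye landmark from a tracking camera's image into 3D using the pinhole camera model. The intrinsics, pixel coordinates, and depth below are hypothetical, and the application does not specify this particular method.

```python
import numpy as np

# Illustrative sketch (assumed method, not from the application):
# back-project the midpoint of two detected eye landmarks into the
# tracking camera's 3D frame via the pinhole model.

fx, fy = 600.0, 600.0   # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0   # principal point in pixels (assumed)

def back_project(u, v, depth):
    """Pixel (u, v) at `depth` metres -> 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Midpoint of two hypothetical eye-landmark detections, plus a
# measured depth (e.g. from a depth sensor):
eye_px = ((300 + 340) / 2, (230 + 232) / 2)   # (320.0, 231.0)
eye_3d = back_project(*eye_px, depth=1.5)
# eye_3d -> [0.0, -0.0225, 1.5]
```

The resulting 3D eye position is what would drive the personalized view selection described above.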
The system incorporates multiple video cameras that capture the scene from different angles; their data can be processed individually or combined to form a comprehensive 3D model. This model includes contours and textures that accurately represent participants and their surroundings, providing a more complete visual experience during the conference.
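Combining data from several cameras into one model could be sketched as merging per-camera point clouds through known extrinsic poses; this is a minimal stand-in for the contour-and-texture model the application describes, with all poses and points invented for illustration.

```python
import numpy as np

# Sketch under stated assumptions (not the application's method):
# fuse per-camera depth data into one coloured point cloud. Each
# camera contributes points in its own frame; a known extrinsic
# (rotation R, translation t) maps them into a shared world frame,
# and per-point RGB samples crudely stand in for textures.

def to_world(points_cam, R, t):
    """Transform Nx3 camera-frame points into the world frame."""
    return points_cam @ R.T + t

def fuse(views):
    """Merge (points, colors, R, t) tuples into one combined cloud."""
    all_pts, all_cols = [], []
    for pts, cols, R, t in views:
        all_pts.append(to_world(pts, R, t))
        all_cols.append(cols)
    return np.vstack(all_pts), np.vstack(all_cols)

# Two toy cameras: one at the world origin, one shifted 2 m along x.
pts = np.array([[0.0, 0.0, 1.0]])
cols = np.array([[255, 0, 0]])
views = [
    (pts, cols, np.eye(3), np.zeros(3)),
    (pts, cols, np.eye(3), np.array([2.0, 0.0, 0.0])),
]
cloud, colors = fuse(views)
# cloud -> [[0, 0, 1], [2, 0, 1]]
```

A real system would also deduplicate overlapping points and reconstruct surfaces from the fused cloud, but the transform-then-merge step shown here is the core of combining multiple viewpoints into one model.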
By enabling direct eye contact and accurately conveying emotional states through non-verbal cues, this technology fosters clearer communication among participants. The shared visual experience and ambient audio across locations contribute to a more engaging and interactive videoconferencing environment, making remote meetings feel more like in-person interactions.