Invention Title:

GENERATING PERSONALIZED VIDEOS WITH SELECTIVE REPLACEMENT OF CHARACTERS

Publication number:

US20250159306

Publication date:

Section:

Electricity

Class:

H04N21/8126

Inventors:

Applicants:

Smart overview of the Invention

The patent application describes systems and methods for creating personalized videos by selectively replacing characters in a video. The process involves obtaining an input video that depicts multiple individuals and analyzing it to determine properties of the depicted persons. Based on a user's personalized profile, the system replaces one of the individuals with an avatar, producing a customized output video that retains the other original characters.
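Read as a pipeline, the overview amounts to three steps: analyze the input video to identify the depicted persons and their properties, match one of them against the user's personalized profile, and render the profile's avatar in that person's place while leaving everyone else unchanged. The sketch below illustrates that flow in plain Python; the dataclasses, the matching rule, and helper names such as detect_persons and render_avatar_over are illustrative assumptions, not the application's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Person:
    person_id: str
    properties: dict          # e.g. estimated age, clothing, screen position

@dataclass
class UserProfile:
    user_id: str
    avatar_id: str
    replacement_rule: dict    # which depicted person to swap, e.g. {"role": "lead"}

def detect_persons(frame):
    """Hypothetical analysis step: locate each person in a frame and
    estimate their properties."""
    raise NotImplementedError

def render_avatar_over(frame, person, avatar_id):
    """Hypothetical rendering step: draw the user's avatar in place of
    the selected person, leaving everyone else untouched."""
    raise NotImplementedError

def personalize_video(frames, profile):
    """Replace one depicted person per frame with the profile's avatar;
    all other persons are kept from the original video."""
    output_frames = []
    for frame in frames:
        persons = detect_persons(frame)
        # Pick the single person whose properties match the profile's rule.
        target = next(
            (p for p in persons
             if all(p.properties.get(k) == v
                    for k, v in profile.replacement_rule.items())),
            None,
        )
        if target is not None:
            frame = render_avatar_over(frame, target, profile.avatar_id)
        output_frames.append(frame)
    return output_frames
```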

Technological Context

The invention falls within the broader field of media stream generation, with a focus on personalizing videos according to user preferences. The need arises from the vast amount of media produced daily, much of which remains inaccessible to global audiences because of language barriers. Existing technologies such as real-time subtitles offer partial solutions, but they are not always effective or enjoyable for all users. The disclosed embodiments aim to provide improved methods for generating revoiced audio streams that preserve the original speaker's voice characteristics while translating the speech into a target language.

Methodology

The application describes a method for generating revoiced media streams. It includes receiving a media stream in which individuals speak an origin language, obtaining transcripts and translating them into a target language, and analyzing voice characteristics to create synthesized voices. These synthesized voices are used to dub the original content so that it sounds as though the individuals are speaking the target language naturally.
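A minimal sketch of that pipeline follows, under the assumption that transcription segments, translation, voice analysis, speech synthesis, and mixing are available as separate components. The function and class names here are hypothetical and not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker_id: str
    start: float    # seconds into the stream
    end: float
    text: str       # transcript in the origin language

def revoice_media_stream(audio, segments, origin_lang, target_lang,
                         translate, analyze_voice, synthesize, mix):
    """Hypothetical pipeline: translate each transcript segment, synthesize it
    in a voice matching the original speaker, and dub it back over the
    corresponding span of the media stream."""
    # 1. Build one voice profile per speaker from their original speech.
    voice_profiles = {}
    for seg in segments:
        voice_profiles.setdefault(seg.speaker_id,
                                  analyze_voice(audio, seg.start, seg.end))

    # 2. Translate each segment and synthesize it with the matching voice.
    dubbed_segments = []
    for seg in segments:
        translated = translate(seg.text, source=origin_lang, target=target_lang)
        speech = synthesize(translated,
                            voice=voice_profiles[seg.speaker_id],
                            language=target_lang)
        dubbed_segments.append((seg.start, seg.end, speech))

    # 3. Mix the synthesized speech into the output at the original timings.
    return mix(audio, dubbed_segments)
```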

Applications

Several embodiments are presented, each tailored to a different scenario involving multiple languages and speakers. For instance, one method addresses media streams in which individuals speak different languages by dubbing selectively according to language requirements. Another adjusts accents in dubbed versions, catering to user preferences for the level of accent in the target language. These methods ensure that diverse audiences can access content in a way that feels authentic and personalized.
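Selective dubbing in the multi-language scenario can be pictured as a per-segment decision: speech already in the target language keeps its original audio, while everything else is queued for revoicing. The snippet below is only an illustration of that rule, assuming each segment carries a language tag; the field names are hypothetical.

```python
def split_for_selective_dubbing(segments, target_lang):
    """Illustrative rule: segments already spoken in the target language keep
    their original audio; all other segments are queued for revoicing."""
    keep_original, to_dub = [], []
    for seg in segments:
        if seg["language"] == target_lang:
            keep_original.append(seg)
        else:
            to_dub.append(seg)
    return keep_original, to_dub

# Example: only the German speaker is revoiced when the target language is English.
segments = [
    {"speaker_id": "A", "language": "en", "text": "Good morning."},
    {"speaker_id": "B", "language": "de", "text": "Guten Morgen."},
]
keep, dub = split_for_selective_dubbing(segments, target_lang="en")
```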

User-Centric Features

The system also incorporates user-specific features by revising transcripts based on user categories or preferred language characteristics. This personalization extends to selecting target languages and adapting vocabulary to suit individual users' needs. By analyzing voice profiles and user preferences, the invention generates media streams that align closely with users' expectations, enhancing their viewing experience.
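The user-specific revision step can be thought of as a pass over the translated transcript, driven by the viewer's category and preferred vocabulary, before speech is synthesized. The following is only a sketch under that assumption; the profile fields and the simple word-level substitution are illustrative rather than the application's method.

```python
from dataclasses import dataclass, field

@dataclass
class ViewerProfile:
    target_language: str
    category: str                              # e.g. "child", "expert"
    preferred_vocabulary: dict = field(default_factory=dict)

def revise_transcript(text, profile):
    """Adapt a translated transcript to the viewer by applying the profile's
    preferred vocabulary (word-level substitution, illustrative only)."""
    words = text.split()
    return " ".join(profile.preferred_vocabulary.get(w, w) for w in words)

# Example: a profile that prefers simpler wording for a "child" category.
profile = ViewerProfile(
    target_language="en",
    category="child",
    preferred_vocabulary={"purchase": "buy", "automobile": "car"},
)
print(revise_transcript("They purchase the automobile", profile))
```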