Invention Title:

REAL-TIME DYNAMIC VIDEO GENERATION BASED ON USER PREFERENCES

Publication number:

US20250200822

Publication date:
Section:

Physics

Class:

G06T11/00

Inventors:

Applicant:

Smart overview of the Invention

A novel method leverages user preferences to dynamically generate real-time video content. The process begins by extracting a set of keywords from a user's stored preferences. These keywords are then prioritized and organized into subsets for further processing, ensuring that the generated content aligns closely with the user's interests.
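The patent does not specify the prioritization criterion, so as an illustrative sketch one might rank keywords by how often they recur across the stored preferences and then chunk the ranked list into fixed-size subsets (the function name and subset size below are assumptions, not part of the disclosure):

```python
from collections import Counter

def prioritize_keywords(preferences, subset_size=3):
    """Rank keywords by frequency across a user's stored preferences,
    then split the ranked list into fixed-size subsets."""
    counts = Counter()
    for pref in preferences:
        for word in pref.lower().split():
            counts[word] += 1
    # most_common() is stable, so equal-count keywords keep insertion order.
    ranked = [word for word, _ in counts.most_common()]
    # Organize the prioritized keywords into subsets for downstream processing.
    return [ranked[i:i + subset_size] for i in range(0, len(ranked), subset_size)]

prefs = ["space travel", "space documentaries", "travel vlogs"]
subsets = prioritize_keywords(prefs, subset_size=2)
# "space" and "travel" each appear twice, so they form the first subset.
```

Any real implementation would presumably draw on richer signals (watch history, explicit ratings), but the shape of the output, an ordered list of keyword subsets, is what the next stage consumes.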

Keyword Processing

The prioritized subsets of keywords are processed using a bi-directional attention-based long short-term memory recurrent neural network (LSTM RNN). This advanced neural network model is designed to generate coherent story text that reflects the nuances of the user's preferences. The story text serves as a foundational element for subsequent video generation.
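The full bi-directional LSTM is beyond a short sketch, but the attention step at its core can be shown in isolation. In a simplified dot-product form (the patent does not state which attention variant is used), each hidden state is scored against a query, the scores are softmax-normalized, and the decoder conditions on the weighted sum; in the bi-directional case each hidden state would be the concatenation of forward and backward LSTM outputs:

```python
import math

def attention_weights(query, hidden_states):
    """Dot-product attention: score each hidden state against the query,
    then normalize the scores with a softmax."""
    scores = [sum(q * h for q, h in zip(query, hs)) for hs in hidden_states]
    max_s = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - max_s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, hidden_states):
    """Weighted sum of hidden states: the context vector the decoder
    conditions on when emitting the next story token."""
    weights = attention_weights(query, hidden_states)
    dim = len(hidden_states[0])
    return [sum(w * hs[i] for w, hs in zip(weights, hidden_states))
            for i in range(dim)]

# Two 2-dimensional hidden states; the query is closer to the second one,
# so the second state dominates the context vector.
context = attend([0.0, 1.0], [[1.0, 0.0], [0.0, 1.0]])
```

This is a stand-in for a single decoding step; a production model would run this per generated token inside the recurrent loop.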

Story to Video Transformation

Once the story text is generated, it is fed into a video generative model. This model is conditioned with images corresponding to objects mentioned in the story, ensuring that the visual output is directly tied to the narrative content. The result is a video that not only tells a story but also visually represents it in a meaningful way.
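As a minimal sketch of the conditioning step, simple keyword matching can stand in for whatever object-recognition mechanism the patent intends: objects mentioned in the story are paired with reference images from a library, and only those pairs are passed to the video model (the function and library names below are hypothetical):

```python
def condition_images(story_text, image_library):
    """Pair each object mentioned in the story with a reference image,
    so the video model can be conditioned on matching visuals."""
    mentioned = {w.strip(".,!?").lower() for w in story_text.split()}
    return {obj: img for obj, img in image_library.items() if obj in mentioned}

library = {"rocket": "rocket.png", "ocean": "ocean.png", "forest": "forest.png"}
conditions = condition_images("The rocket soared over the ocean.", library)
# Only objects actually mentioned in the story are kept.
```

A real system would likely use embedding similarity rather than exact string matching, but the output contract is the same: a mapping from narrative objects to conditioning images.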

Video Generation

The video generative model creates at least one video frame based on the input story text and associated images. Each frame is synthesized so that its visual content follows the narrative structure of the LSTM RNN-generated story.
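The frame-generation loop can be sketched as follows, with a simple record standing in for the generative model's output; the `Frame` type and per-sentence granularity are assumptions made for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    index: int
    caption: str                      # narrative sentence this frame depicts
    conditioned_on: list = field(default_factory=list)  # object images supplied

def generate_frames(story_sentences, conditions):
    """Emit at least one frame per story sentence, recording which
    condition images apply to it (a stand-in for the generative model)."""
    frames = []
    for i, sentence in enumerate(story_sentences):
        used = [img for obj, img in conditions.items() if obj in sentence.lower()]
        frames.append(Frame(index=i, caption=sentence, conditioned_on=used))
    return frames

frames = generate_frames(
    ["The rocket launches.", "It lands near the ocean."],
    {"rocket": "rocket.png", "ocean": "ocean.png"},
)
```

The point of the structure is that every frame carries its provenance (which sentence and which condition images produced it), which is exactly what the compliance step below needs to inspect.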

Compliance Verification

A crucial aspect of this method is ensuring that each generated video frame complies with predefined conditions set by an embedded smart contract. This verification step guarantees that the generated content adheres to specific legal, ethical, or user-specified guidelines, providing an additional layer of assurance and control over the final output.
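One way to model the verification step is to treat the smart contract's predefined conditions as a set of named predicates that every frame must satisfy; the specific conditions below are hypothetical examples, not terms from the patent:

```python
def verify_frames(frames, conditions):
    """Apply each smart-contract condition (a named predicate) to every
    frame; a frame passes only if all predicates return True."""
    results = {}
    for frame in frames:
        results[frame["id"]] = all(check(frame) for check in conditions.values())
    return results

# Hypothetical conditions an embedded smart contract might encode.
contract_conditions = {
    "no_restricted_content": lambda f: not f.get("restricted", False),
    "resolution_ok": lambda f: f["width"] >= 640 and f["height"] >= 360,
}

frames = [
    {"id": 0, "width": 1280, "height": 720},
    {"id": 1, "width": 320, "height": 240},
]
report = verify_frames(frames, contract_conditions)
# Frame 0 satisfies both conditions; frame 1 fails the resolution check.
```

Keeping each condition as a separate named predicate mirrors how a contract can enumerate legal, ethical, or user-specified guidelines independently, and makes a failure report attributable to a specific rule.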