US20240296676
2024-09-05
Physics
G06V20/40
Detecting deepfake videos involves a systematic approach using machine learning models. The process starts with analyzing a video to extract relevant data and context, followed by annotating the video based on this analysis. The annotations may include details about people, language, location, and technical parameters of the video.
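The summary does not name concrete data structures, so the sketch below is only one minimal way to picture the analyze-then-annotate step; the function name, stubbed detector outputs, and annotation keys (people, language, location, and so on) are illustrative assumptions rather than terms taken from the claims.

```python
# Illustrative sketch only; keys and values below are assumptions, not claim language.

def annotate_video(video_path: str, context: dict) -> dict:
    """Analyze a video and return annotations about its content, context, and file parameters."""
    # video_path would feed real detectors; the values here are stubbed for illustration.
    annotations = {
        # Content-level observations (stand-ins for real detectors).
        "people": ["speaker_1"],          # people visible in the frames
        "language": "en",                 # spoken or written language
        "location": "indoor/studio",      # apparent setting
        # Technical parameters read from the file itself.
        "file_format": "mp4",
        "resolution": (1920, 1080),
        # Context supplied by the caller: where the video was found, user interactions.
        "source": context.get("source"),
        "user_interactions": context.get("user_interactions", 0),
    }
    return annotations


if __name__ == "__main__":
    print(annotate_video("clip.mp4", {"source": "social_feed", "user_interactions": 42}))
```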
A variety of verification models are selected according to the annotations made during the analysis. These models include at least one forensic model that is not specifically trained to identify deepfakes. The forensic model's outcomes are compared against a ground truth derived from authentic videos to ensure accuracy in detection.
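One possible reading of the general-purpose forensic step is sketched below, under the assumption that the forensic feature is a simple frame-noise statistic and that the ground truth is a mean and standard deviation estimated from known-authentic videos; the filing does not specify which forensic model or statistic is actually used.

```python
import numpy as np

# Sketch of a generic forensic check that is not deepfake-specific; the noise statistic
# and the baseline comparison are assumptions made for illustration.

def noise_statistic(frames: np.ndarray) -> float:
    """Mean high-frequency residual energy across grayscale frames (a generic forensic feature)."""
    # Residual = frame minus a locally averaged copy; synthesis pipelines tend to alter it.
    blurred = (frames[:, :-1, :-1] + frames[:, 1:, :-1]
               + frames[:, :-1, 1:] + frames[:, 1:, 1:]) / 4.0
    residual = frames[:, :-1, :-1] - blurred
    return float(np.mean(residual ** 2))


def forensic_score(frames: np.ndarray, authentic_mean: float, authentic_std: float) -> float:
    """Deviation of the video's statistic from a ground truth estimated on authentic videos."""
    stat = noise_statistic(frames)
    return abs(stat - authentic_mean) / max(authentic_std, 1e-9)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.random((8, 64, 64))                                   # toy grayscale frames
    baseline = [noise_statistic(rng.random((8, 64, 64))) for _ in range(20)]
    score = forensic_score(frames, float(np.mean(baseline)), float(np.std(baseline)))
    print(f"forensic anomaly score: {score:.2f}")
```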
The results from the different verification models are aggregated into a single assessment of the likelihood that a given video was created using deepfake technology. The method emphasizes cross-referencing outcomes from multiple models to improve reliability.
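The summary does not describe the aggregation rule itself; a weighted average over per-model scores, as sketched below, is one straightforward possibility (the model names and weights are assumptions).

```python
# Minimal aggregation sketch, assuming each verification model yields a score in [0, 1]
# where higher means "more likely deepfake"; the weighting scheme is an assumption.

def aggregate_scores(model_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-model scores into a single deepfake likelihood via a weighted mean."""
    total_weight = sum(weights.get(name, 1.0) for name in model_scores)
    if total_weight == 0:
        return 0.0
    weighted = sum(score * weights.get(name, 1.0) for name, score in model_scores.items())
    return weighted / total_weight


if __name__ == "__main__":
    scores = {"face_artifact_model": 0.82, "audio_sync_model": 0.35, "generic_forensic": 0.71}
    weights = {"face_artifact_model": 2.0, "audio_sync_model": 1.0, "generic_forensic": 1.5}
    print(f"deepfake likelihood: {aggregate_scores(scores, weights):.2f}")
```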
The analysis phase encompasses both technical details, such as file format and resolution, and contextual information, including where the video was found and how users interacted with it. Considering both kinds of information gives the framework a broader basis for judging video authenticity.
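One way to picture these two annotation facets is a record that keeps the technical and contextual attributes side by side; the field names below are illustrative, not claim language.

```python
from dataclasses import dataclass

# Illustrative schema only; field names are assumptions, not terms from the filing.

@dataclass
class TechnicalAnnotations:
    file_format: str              # container/codec, e.g. "mp4"
    resolution: tuple[int, int]   # width, height in pixels


@dataclass
class ContextualAnnotations:
    source: str                   # where the video was found (platform, feed, ...)
    user_interactions: int        # e.g. shares or comments observed alongside the video


@dataclass
class VideoAnnotations:
    technical: TechnicalAnnotations
    contextual: ContextualAnnotations


if __name__ == "__main__":
    example = VideoAnnotations(
        technical=TechnicalAnnotations("mp4", (1280, 720)),
        contextual=ContextualAnnotations("social_feed", 42),
    )
    print(example)
```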
The described method can be implemented through software instructions executed by processors, allowing for real-time analysis and detection of deepfake content. By computing evaluation weights based on annotations, the system can dynamically select appropriate verification models, enhancing its ability to adapt to new types of deepfake technologies as they emerge.
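How the evaluation weights are computed is not spelled out in the summary; the sketch below assumes a simple rule-based mapping from annotations to per-model weights, with models whose weight clears a threshold being selected for the run.

```python
# Sketch of annotation-driven weighting and model selection; the rules, model names, and
# threshold are illustrative assumptions, not the method actually claimed.

def compute_evaluation_weights(annotations: dict) -> dict[str, float]:
    """Map video annotations to per-model evaluation weights."""
    weights = {"generic_forensic": 1.0}               # always consider the generic forensic model
    if annotations.get("people"):
        weights["face_artifact_model"] = 2.0          # faces present: face-focused checks matter more
    if annotations.get("language"):
        weights["audio_sync_model"] = 1.5             # speech present: lip-sync/audio checks matter
    if annotations.get("resolution", (0, 0))[0] < 640:
        weights["compression_artifact_model"] = 1.2   # low resolution: compression cues matter
    return weights


def select_models(weights: dict[str, float], threshold: float = 1.0) -> list[str]:
    """Select the verification models whose evaluation weight meets the threshold."""
    return [name for name, w in weights.items() if w >= threshold]


if __name__ == "__main__":
    ann = {"people": ["speaker_1"], "language": "en", "resolution": (1920, 1080)}
    w = compute_evaluation_weights(ann)
    print(select_models(w), w)
```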