US20240394572
2024-11-28
Physics
G06N5/04
The patent outlines a system designed to detect deepfake media, which includes altered audio and video content that misrepresents a user's actions or speech. It employs a model trained to identify whether media content likely contains deepfake elements by comparing it against known genuine and manipulated samples. This technology aims to alert users to potential deception in media they consume or share.
Deepfakes are sophisticated forgeries created using machine learning, often making it difficult for individuals to discern their authenticity. These can include altered videos showing someone performing actions they never did or audio mimicking someone's voice. The proliferation of such media poses risks, as people may be misled into believing false information, potentially influencing decisions and actions.
To combat this, the system uses a model trained on both genuine and deepfake media samples. The model can be deployed on mobile devices to provide real-time analysis of incoming calls or stored media files. During a phone call, for example, the device can analyze the incoming audio and indicate whether it likely contains deepfake content, helping the user verify the authenticity of the communication.
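As a rough illustration of what on-device, frame-level scoring of call audio could look like, the sketch below slices incoming audio into one-second frames, extracts simple spectral features, and reports the fraction of frames a classifier flags as likely deepfake. The frame size, feature set, alert threshold, and the stand-in classifier in the usage section are all assumptions for illustration, not details taken from the patent.

```python
import numpy as np

FRAME_SIZE = 16000      # 1-second frames at an assumed 16 kHz sample rate
ALERT_THRESHOLD = 0.7   # per-frame score above which a frame is flagged (assumed)

def frame_features(frame: np.ndarray) -> np.ndarray:
    """Crude spectral features for one audio frame (illustrative only)."""
    spectrum = np.abs(np.fft.rfft(frame))
    spectrum /= spectrum.sum() + 1e-9          # normalize to a distribution
    centroid = np.sum(np.arange(spectrum.size) * spectrum)
    flatness = np.exp(np.mean(np.log(spectrum + 1e-9))) / (spectrum.mean() + 1e-9)
    energy = float(np.mean(frame ** 2))
    return np.array([centroid, flatness, energy])

def score_call_audio(samples: np.ndarray, model) -> float:
    """Return the fraction of frames the model considers likely deepfake."""
    n_frames = len(samples) // FRAME_SIZE
    if n_frames == 0:
        return 0.0
    feats = np.stack([
        frame_features(samples[i * FRAME_SIZE:(i + 1) * FRAME_SIZE])
        for i in range(n_frames)
    ])
    probs = model.predict_proba(feats)[:, 1]   # probability of the "deepfake" class
    return float(np.mean(probs > ALERT_THRESHOLD))

if __name__ == "__main__":
    # Stand-in classifier fit on random features; a real deployment would
    # load a model trained on labeled genuine/deepfake call audio.
    from sklearn.linear_model import LogisticRegression
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 3)), rng.integers(0, 2, 200)
    model = LogisticRegression().fit(X, y)
    audio = rng.normal(size=5 * FRAME_SIZE)    # stand-in for incoming call audio
    print(f"Flagged frame ratio: {score_call_audio(audio, model):.2f}")
```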
The system includes a network server equipped with a model trainer that uses machine-learning techniques to build models capable of detecting deepfakes across media types such as voicemails and video messages. These models are trained on diverse datasets that include sensor data and other contextual information to improve detection accuracy.
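The patent summary does not specify a training algorithm, so the following is a minimal sketch of what the server-side model trainer could look like: a gradient-boosted classifier fit on labeled feature vectors that mix media-derived columns (such as the spectral features above) with sensor-derived ones. The function name train_detector, the choice of classifier, and the synthetic placeholder data are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def train_detector(features: np.ndarray, labels: np.ndarray):
    """Train a genuine-vs-deepfake classifier on labeled media features.

    `features` combines media-derived and sensor/context-derived columns;
    `labels` is 1 for deepfake samples and 0 for genuine ones.
    """
    X_train, X_val, y_train, y_val = train_test_split(
        features, labels, test_size=0.2, random_state=0, stratify=labels)
    model = GradientBoostingClassifier(random_state=0)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"Validation AUC: {auc:.3f}")
    return model

if __name__ == "__main__":
    # Synthetic placeholder data standing in for a labeled corpus of
    # genuine and deepfake voicemails / video messages.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 8))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
    train_detector(X, y)
```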
Once trained, the models are stored for deployment on user devices or network servers. At inference time they can process additional data types, such as location or biometric data, alongside the media file itself to assess its authenticity. This combined approach gives users an assessment of a media item's authenticity before they engage with it, helping guard against deception by deepfake content.
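A hedged sketch of the deployment-side assessment step follows: it fuses the media model's score with contextual signals such as a location mismatch or a failed biometric match before presenting a verdict to the user. The fusion rule, thresholds, and the assess_media and MediaAssessment names are illustrative assumptions rather than the patent's specified logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MediaAssessment:
    """Verdict presented to the user before they engage with the media."""
    deepfake_probability: float
    context_consistent: bool
    verdict: str

def assess_media(media_score: float,
                 claimed_location: Optional[str] = None,
                 sender_known_location: Optional[str] = None,
                 biometric_match: Optional[bool] = None,
                 threshold: float = 0.5) -> MediaAssessment:
    """Combine the media model's score with contextual signals.

    The fusion rule (raise suspicion when context disagrees) is an
    illustrative assumption, not the patent's specified method.
    """
    score = media_score
    context_ok = True
    if claimed_location and sender_known_location and claimed_location != sender_known_location:
        score = min(1.0, score + 0.2)   # location mismatch raises suspicion
        context_ok = False
    if biometric_match is False:
        score = min(1.0, score + 0.3)   # failed voice/face match raises suspicion
        context_ok = False
    verdict = "likely deepfake" if score >= threshold else "likely genuine"
    return MediaAssessment(score, context_ok, verdict)

# Example: a voicemail scored 0.35 by the media model, but the claimed
# location conflicts with the sender's known location.
print(assess_media(0.35, claimed_location="Berlin", sender_known_location="Oslo"))
```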