US20250112988
2025-04-03
Electricity
H04M3/2281
The patent application outlines a method to detect synthetic voice and video calls, particularly those generated by deepfake technology. When a call is received, the caller is requested to complete a Deep-Fake Algorithm Anomaly Triggering (DFAAT) task. The response to this task is analyzed for anomalies that indicate whether the call is legitimate or fake, and the call is then handled accordingly.
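The receive-challenge-analyze-respond flow described above can be sketched as follows. This is a minimal illustration only: the class and function names (`DFAATDetector`, `handle_incoming_call`), the example tasks, and the scoring threshold are hypothetical and do not come from the patent text, which leaves the task set and the analysis models unspecified.

```python
import random


class DFAATDetector:
    """Illustrative challenge-response detector; names and tasks are hypothetical."""

    # Example tasks that are easy for a human caller but may stress a
    # real-time deepfake model (the patent does not enumerate specific tasks).
    TASKS = ["hum a rising tone", "turn your head to the left", "cover your mouth briefly"]

    def issue_task(self):
        # Randomly select a DFAAT task to present to the caller.
        return random.choice(self.TASKS)

    def is_anomalous(self, response):
        # Placeholder scoring: a real system would run trained ML models
        # over the caller's audio/video response. The 0.5 cutoff is arbitrary.
        return response.get("anomaly_score", 0.0) > 0.5


def handle_incoming_call(response, detector):
    """Classify a captured task response as 'fake' or 'legitimate'."""
    return "fake" if detector.is_anomalous(response) else "legitimate"
```

A caller whose response triggers model anomalies would be classified as fake and the call rejected; otherwise the call proceeds normally.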
Deepfake technology, which uses deep neural networks to create realistic media, has been used for both beneficial and malicious purposes. Since its emergence in 2017, it has been employed in various fields but also exploited for unethical activities like impersonation and misinformation. The increasing sophistication of real-time deepfakes (RT-DFs) poses significant risks, as attackers can impersonate individuals during voice and video calls, leading to potential breaches in security and privacy.
The application highlights potential scenarios where RT-DFs could be used maliciously. These include impersonating family members in distress calls to manipulate victims into transferring money or state actors targeting critical infrastructure workers. Such attacks are facilitated by the ability of deepfake technology to convincingly mimic voices and faces with minimal data, posing a significant threat to personal and national security.
Existing methods for detecting deepfakes often rely on identifying semantic errors or artifacts in generated media. However, these methods may become obsolete as the quality of deepfakes improves. Additionally, techniques that depend on forensic evidence can be easily circumvented through common audio and video processing operations, such as filtering or compression, making them less reliable in real-world scenarios.
The proposed method, Deep Fake Algorithm Anomaly Based Protection (DFAABP), actively engages with the caller by requesting them to perform tasks that are simple for humans but challenging for deepfake models. This approach leverages anomaly detection in the task responses to identify fake calls. Various tests, including identity and task fulfillment tests, are conducted using machine learning models to ensure robustness against evasion tactics. This proactive method aims to enhance the detection of deepfakes in real-time communication.
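One way to combine the multiple tests mentioned above (identity and task fulfillment) is to require every test to pass before accepting a call. The sketch below assumes each ML model has already produced a score in [0, 1]; the function name, the score inputs, and the thresholds are all illustrative assumptions, not details from the patent.

```python
def combine_tests(identity_score, task_score,
                  identity_threshold=0.7, task_threshold=0.7):
    """Accept a call only if both tests pass their (illustrative) thresholds.

    identity_score: confidence that the caller is who they claim to be.
    task_score: confidence that the DFAAT task was fulfilled without anomalies.
    """
    identity_ok = identity_score >= identity_threshold
    task_ok = task_score >= task_threshold
    return "legitimate" if (identity_ok and task_ok) else "fake"
```

Requiring all tests to pass makes evasion harder: a deepfake model that mimics the target's identity well may still fail the task-fulfillment check, and vice versa.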