Invention Title:

ARTIFICIAL INTELLIGENCE VIRTUAL SIGN LANGUAGE AVATAR INTERPRETER

Publication number:

US20240404429

Section:

Physics

Class:

G09B21/009

Smart overview of the Invention

The patent describes a method, computer system, and computer program product for detecting, interpreting, and translating sign language during web conferences. The system uses artificial intelligence to analyze user profiles and captures presenters' gestures and spoken language through Internet of Things (IoT) devices. It processes this data, translates it into sign language, and displays the result through a digital avatar. The approach aims to make virtual communication more inclusive by adapting to the different sign languages used by participants.
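
The moving parts the summary names (participant profiles, IoT-captured gestures and speech, and per-language avatar output) can be pictured as a small data model. The following is a minimal sketch in Python; every class and field name is our own illustration, not terminology from the patent:

    from dataclasses import dataclass, field
    from enum import Enum


    class SignLanguage(Enum):
        """A few of the many distinct sign languages the system must tell apart."""
        ASL = "American Sign Language"
        BSL = "British Sign Language"
        LSF = "French Sign Language"


    @dataclass
    class ParticipantProfile:
        """Profile data the AI analyzes to find sign language users."""
        user_id: str
        spoken_languages: list[str] = field(default_factory=list)
        sign_language: SignLanguage | None = None  # None means a hearing participant


    @dataclass
    class CapturedEvent:
        """A gesture or utterance captured by an IoT device (camera or microphone)."""
        presenter_id: str
        timestamp: float
        video_frame: bytes | None = None  # raw camera frame, input to gesture detection
        audio_chunk: bytes | None = None  # raw audio, input to speech recognition


    @dataclass
    class AvatarTranslation:
        """A translated segment, ready to be performed by the digital avatar."""
        target_language: SignLanguage
        sign_sequence: list[str]  # ordered sign glosses for the avatar to render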

Background

Artificial intelligence is leveraged to build expert systems that mimic human gestures on digital interfaces, facilitating communication for sign language users. Existing approaches either assign a human interpreter to a conference or convert audio data into text to drive an avatar, but both have limitations: they are often costly and time-consuming, and they cannot dynamically detect and translate the various sign languages used during a live conference. The invention addresses these gaps with a more efficient AI-driven solution.

Functionality

The invention operates by detecting active web conferences and analyzing participant profiles to identify sign language users. It captures presenters' gestures and spoken language using IoT devices, processes the data with machine learning models, and translates it into the sign language each audience member understands. A digital avatar then displays these translations in real time, giving each participant a personalized interpretation.
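
Expressed as code, that flow reads as a four-step loop. This is a minimal sketch only, reusing the hypothetical data model from the overview above and taking the translation model and avatar renderer as injected callables (translate, render_avatar), since the patent summary does not describe their internals:

    from typing import Callable, Iterable


    def interpret_conference(
        participants: Iterable[ParticipantProfile],
        events: Iterable[CapturedEvent],
        translate: Callable[[CapturedEvent, SignLanguage], AvatarTranslation],
        render_avatar: Callable[[str, AvatarTranslation], None],
    ) -> None:
        """Hypothetical end-to-end loop over one active web conference."""
        # Step 1 (profile analysis): group attendees by the sign language they use.
        audiences: dict[SignLanguage, list[str]] = {}
        for profile in participants:
            if profile.sign_language is not None:
                audiences.setdefault(profile.sign_language, []).append(profile.user_id)
        if not audiences:
            return  # no sign language users detected; nothing to translate

        # Step 2: consume the gesture/speech events captured by IoT devices.
        for event in events:
            # Step 3: translate once per sign language present in the audience.
            for language, user_ids in audiences.items():
                translation = translate(event, language)
                # Step 4: render the result with each participant's avatar.
                for user_id in user_ids:
                    render_avatar(user_id, translation)

Passing the model and renderer in as callables keeps the sketch honest about what the summary actually specifies: the orchestration, not the machine learning internals.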

Advantages

This AI-based system enhances virtual communication by ensuring accessibility for sign language users without the need for human interpreters. It dynamically adapts to real-time events and multiple presenters, providing seamless translation across different sign languages. By integrating as a plug-in or extension within video conferencing platforms, it can automatically deploy during meetings, thus streamlining the process of sign language interpretation.
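
How such automatic deployment could be wired up is easiest to see as an event hook. The sketch below is purely illustrative: ConferencingPlatform, register_plugin, and the meeting-start callback are hypothetical names standing in for whatever extension API a given platform exposes, which the patent does not specify.

    from typing import Callable


    class ConferencingPlatform:
        """Hypothetical host platform that notifies plug-ins of meeting events."""

        def __init__(self) -> None:
            self._meeting_started_hooks: list[Callable[[dict], None]] = []

        def register_plugin(self, hook: Callable[[dict], None]) -> None:
            """A plug-in registers a callback to be invoked for every meeting."""
            self._meeting_started_hooks.append(hook)

        def start_meeting(self, meeting: dict) -> None:
            for hook in self._meeting_started_hooks:
                hook(meeting)  # the interpreter deploys with no manual setup


    def deploy_interpreter(meeting: dict) -> None:
        """Plug-in entry point: begin profile analysis as the meeting opens."""
        print(f"Scanning {len(meeting['participants'])} profiles for sign language users")


    platform = ConferencingPlatform()
    platform.register_plugin(deploy_interpreter)
    platform.start_meeting({"participants": ["alice", "bob"]})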

Implementation

The program integrates with video conferencing tools to access audio and video streams, employing speech recognition and object detection techniques. It analyzes user profiles with machine learning algorithms to identify languages spoken or signed by participants. The system can customize digital avatars based on user preferences and contextual information such as meeting location and agenda. This ensures that communication is both effective and personalized for each conference attendee.
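
As one concrete reading of "speech recognition and object detection techniques", the sketch below pairs two off-the-shelf libraries: the SpeechRecognition package for transcribing a presenter's audio, and MediaPipe Hands for extracting hand landmarks from video frames. The patent does not name these libraries; they are stand-ins we chose, and the landmark stream would still need the downstream gesture classifier the summary leaves unspecified.

    import cv2
    import mediapipe as mp
    import speech_recognition as sr


    def transcribe_audio(wav_path: str) -> str:
        """Turn a presenter's recorded speech into text (speech recognition)."""
        recognizer = sr.Recognizer()
        with sr.AudioFile(wav_path) as source:
            audio = recognizer.record(source)
        return recognizer.recognize_google(audio)  # free web API; requires network


    def hand_landmarks(video_path: str):
        """Yield per-frame hand landmarks, the raw input to gesture classification."""
        hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)
        capture = cv2.VideoCapture(video_path)
        while True:
            ok, frame_bgr = capture.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV decodes frames as BGR.
            results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                yield results.multi_hand_landmarks
        capture.release()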