US20240353680
2024-10-24
Physics
G02B27/017
The patent application describes a sensory eyewear system for mixed reality devices that enhances the wearer's interactions with people and the environment by recognizing and interpreting sign language, translating it into information the wearer can understand. The system can also recognize text in the environment, modify it, and render the modified text so that it obscures the original content.
The invention pertains to virtual, augmented, and mixed reality systems, focusing on recognizing sign language or text within an environment to render virtual content. These technologies aim to provide a seamless integration of digital images with real-world visuals, enhancing user experience by addressing the complexities of human visual perception.
The sensory eyewear system is part of a wearable device, typically a head-mounted display equipped with imaging sensors and processors. It captures and interprets sign language gestures using machine learning algorithms, translating them into text or graphics in real time and thereby facilitating communication between users who may not share a common sign language.
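The gesture-to-text step described above can be pictured as classifying a sequence of captured hand positions against stored sign templates and joining the recognized signs into a caption. The sketch below is a hedged illustration, not the patent's actual implementation: the names `GESTURE_TEMPLATES`, `classify_gesture`, and `glosses_to_caption`, the template data, and the nearest-template matching are all assumptions standing in for the learned models a real system would use.

```python
import math

# Illustrative templates: each sign gloss maps to a short sequence of 2D
# hand-centroid positions in normalized image coordinates. A real system
# would run a learned model over full keypoint sets; this data is invented.
GESTURE_TEMPLATES = {
    "HELLO": [(0.2, 0.8), (0.4, 0.9), (0.6, 0.8)],
    "THANKS": [(0.5, 0.6), (0.5, 0.4), (0.5, 0.2)],
}

def _distance(seq_a, seq_b):
    """Mean Euclidean distance between two equal-length keypoint sequences."""
    return sum(math.dist(a, b) for a, b in zip(seq_a, seq_b)) / len(seq_a)

def classify_gesture(frames, threshold=0.3):
    """Return the best-matching sign gloss, or None if nothing is close."""
    best_gloss, best_score = None, float("inf")
    for gloss, template in GESTURE_TEMPLATES.items():
        if len(template) == len(frames):
            score = _distance(frames, template)
            if score < best_score:
                best_gloss, best_score = gloss, score
    return best_gloss if best_score < threshold else None

def glosses_to_caption(glosses):
    """Join recognized glosses into a displayable caption for the wearer."""
    return " ".join(g.capitalize() for g in glosses if g)
```

A captured sequence close to a template is then rendered as text, e.g. `classify_gesture([(0.21, 0.79), (0.41, 0.88), (0.59, 0.81)])` matches the `"HELLO"` template.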
Beyond interpersonal communication, the system also enhances interaction with the environment by recognizing and modifying text. It uses techniques like optical character recognition to detect text in images from the user's surroundings. The system can then alter display characteristics or translate the text, making it more accessible or understandable to the user.
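The environmental-text path above amounts to a pipeline: an OCR stage yields text regions with bounding boxes, and each region's string is translated or otherwise modified before re-rendering. The sketch below mocks the OCR output and shows only the modification step; `translate_region`, `process_frame`, the region dictionary shape, and the lexicon are illustrative assumptions, not the patent's API.

```python
# Minimal word-level lexicon standing in for a real translation service.
LEXICON = {"SALIDA": "EXIT", "PELIGRO": "DANGER"}

def translate_region(region, lexicon=LEXICON):
    """Replace a detected region's text with its translation, word by word.

    `region` is a dict like {"bbox": (x, y, w, h), "text": "SALIDA"};
    words missing from the lexicon are kept unchanged, and the bounding
    box is preserved so the result can be rendered in place.
    """
    words = region["text"].split()
    translated = " ".join(lexicon.get(w.upper(), w) for w in words)
    return {**region, "text": translated}

def process_frame(regions):
    """Apply translation to every OCR-detected region in one camera frame."""
    return [translate_region(r) for r in regions]
```

Keeping the original bounding box alongside the modified string is what lets the display stage place the new text exactly where the old text was seen.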
The wearable device can present both 2D and 3D virtual images, integrating them with real-world views for an immersive experience. This includes displaying translated or modified text over original signage in various conditions to improve clarity and comprehension. The system's versatility allows it to function in diverse environments, improving accessibility and interaction through advanced visualization techniques.