Invention Title:

HUMAN AUGMENTATION PLATFORM USING CONTEXT, BIOSIGNALS, AND LANGUAGE MODELS

Publication number:

US20240419246

Publication date:

Section:

Physics

Class:

G06F3/015

Inventors:

Assignee:

Applicant:

Smart overview of the Invention

The disclosed system enhances human agency by integrating context information, historical data, biosensors, explicit user input, and generative AI. It involves input means, tokenization, a generative AI or generalist agent, and an output stage that acts on behalf of the user. This approach is designed to assist users who are unable to interact with their environment independently due to various limitations.
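The overview describes a pipeline of input means, tokenization, a generative model, and an output stage. A minimal sketch of that flow, assuming hypothetical names throughout (`UserInputs`, `tokenize`, `run_pipeline` are illustrative, not from the patent):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative sketch of the disclosed pipeline: inputs -> tokens -> model -> action.
# All names and field choices are assumptions, not the patent's actual design.

@dataclass
class UserInputs:
    context: Dict[str, str]        # environmental context (e.g. location)
    history: List[str]             # historical usage data
    biosignals: Dict[str, float]   # readings from wearable devices
    explicit: str                  # explicit user input (typed or spoken)

def tokenize(inputs: UserInputs) -> List[str]:
    """Flatten the heterogeneous inputs into one token stream for the model."""
    tokens = inputs.explicit.split()
    tokens += [f"{k}={v}" for k, v in sorted(inputs.context.items())]
    tokens += [f"{k}:{v:.1f}" for k, v in sorted(inputs.biosignals.items())]
    tokens += inputs.history
    return tokens

def run_pipeline(inputs: UserInputs, model: Callable[[List[str]], str]) -> str:
    """Tokenize the inputs, query the generative model, return its action."""
    return model(tokenize(inputs))
```

In a real deployment the `model` callable would wrap a generative AI or generalist agent; here it is injected so the stage boundaries stay visible.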

Background

Human agency refers to an individual's ability to make choices and act on their surroundings. Some individuals struggle to exercise this ability because of physical or cognitive limitations. Advances in augmented reality, virtual reality, robotics, and artificial intelligence offer potential remedies, yet these systems often remain inaccessible to people with limited mobility or sensory impairments. Generative AI (GenAI) offers a way for such users to direct AI and robotic assistants through generated outputs, rather than through coding or manually composed natural-language queries.

System Components

The system utilizes various inputs such as context from the user's environment, historical usage data, biosignals from wearable devices, and explicit user inputs. These inputs are processed by a prompt composer that generates prompts for a generative AI or generalist agent. The AI's output facilitates user interaction with supportive entities like robotic aids and smart systems.
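The prompt composer's merging of the four input sources can be sketched as follows; the template, field names, and function signature are assumptions for illustration, not the patent's actual prompt format:

```python
# Hypothetical prompt composer: merges context, history, biosignals, and
# explicit user input into a single text prompt for a generative model.

def compose_prompt(context: dict, history: list,
                   biosignals: dict, user_input: str) -> str:
    """Assemble one structured prompt from the system's input sources."""
    lines = [
        "You are an assistive agent acting on behalf of the user.",
        f"Context: {', '.join(f'{k}={v}' for k, v in sorted(context.items()))}",
        f"Recent activity: {'; '.join(history) or 'none'}",
        f"Biosignals: {', '.join(f'{k}={v}' for k, v in sorted(biosignals.items()))}",
        f"User request: {user_input}",
        "Respond with a single device command.",
    ]
    return "\n".join(lines)
```

The composed string would then be passed to the generative AI or generalist agent, whose output drives the supportive entities.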

Subsystems and Interaction

Key components include a biosignals subsystem that processes signals from wearable devices and a context subsystem that gathers environmental and historical data. These subsystems communicate to enhance data accuracy and relevance. User input is also considered, allowing for direct interaction through various modalities like typed or spoken language.
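The cross-talk between subsystems can be illustrated with a small example: the biosignals subsystem interprets the same raw reading differently depending on an activity hint from the context subsystem. The thresholds and labels below are illustrative assumptions only:

```python
# Sketch of subsystem communication: context informs biosignal interpretation.
# Threshold values are illustrative, not from the patent.

def interpret_heart_rate(bpm: float, activity: str) -> str:
    """Label a heart-rate reading, raising the threshold when the context
    subsystem reports that the user is exercising."""
    threshold = 140.0 if activity == "exercising" else 100.0
    return "elevated" if bpm > threshold else "normal"
```

This is the sense in which the subsystems "communicate to enhance data accuracy and relevance": a reading of 120 bpm is flagged at rest but is unremarkable during exercise.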

Use Cases and Implementation

The system can generate complex commands for devices based on user inputs and contextual data. For instance, it can guide an autonomous vehicle using specific motor commands derived from general navigational instructions. Contextual prompts are generated using sensors on devices like smartphones or wearables, enabling seamless integration of user actions and environmental cues.
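The expansion of a general navigational instruction into specific motor commands might look like the following sketch; the command vocabulary and plan table are hypothetical, standing in for whatever the generative model would produce:

```python
# Hypothetical mapping from a high-level instruction to low-level motor
# commands for an autonomous vehicle. Commands and magnitudes are assumed
# for illustration only.

def expand_instruction(instruction: str) -> list:
    """Map a general navigational instruction to a motor-command sequence."""
    plans = {
        "go to the kitchen": [
            ("forward", 2.0),   # metres
            ("turn_left", 90),  # degrees
            ("forward", 3.5),
            ("stop", 0),
        ],
        "stop": [("stop", 0)],
    }
    # Fail safe: an unrecognized instruction yields a stop command.
    return plans.get(instruction.lower(), [("stop", 0)])
```

In the disclosed system this lookup table would be replaced by the generative model itself, with the contextual prompt (sensor data from a smartphone or wearable) shaping which plan is produced.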