Invention Title:

DEVICES, SYSTEMS, AND METHODS FOR TRANSFERRING PHYSICAL SKILLS TO ROBOTS

Publication number:

US20250229417

Publication date:

Section:

Performing operations; transporting

Class:

B25J9/163

Inventors:

Applicant:

Smart overview of the Invention

The patent introduces a robotic training system that transfers physical skills to robots through intuitive human demonstrations. It employs paired leader and follower robotic devices: a human operator manipulates the leader, and the follower replicates its movements. This setup allows robots to perform complex tasks without extensive manual programming. The system uses force, torque, or force-torque sensors to capture data from these demonstrations, which is then processed and used to train AI models. This approach avoids costly hardware setups while improving robotic learning capabilities.
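
For illustration, a minimal sketch of such leader-follower demonstration capture might look like the code below. All class and method names (LeaderArm, FollowerArm, read_joints, and so on) are hypothetical stand-ins, not the patent's actual implementation; real hardware drivers would replace the placeholder readings.

    import time
    from dataclasses import dataclass

    @dataclass
    class Sample:
        t: float              # time since recording started (s)
        leader_joints: list   # joint angles read from the human-guided leader
        wrench: list          # 6-DoF force-torque reading [Fx, Fy, Fz, Tx, Ty, Tz]

    class LeaderArm:
        """Stand-in for the human-manipulated leader device."""
        def read_joints(self):
            return [0.0] * 6   # placeholder: real hardware returns encoder values
        def read_wrench(self):
            return [0.0] * 6   # placeholder: real F/T sensor returns a wrench

    class FollowerArm:
        """Stand-in for the follower robot that mirrors the leader."""
        def command_joints(self, q):
            pass               # placeholder: real driver sends a joint command

    def record_demonstration(leader, follower, hz=100, duration_s=2.0):
        """Mirror leader motion on the follower, logging synchronized samples."""
        samples, t0 = [], time.monotonic()
        while (t := time.monotonic() - t0) < duration_s:
            q = leader.read_joints()
            follower.command_joints(q)     # follower replicates the motion
            samples.append(Sample(t, q, leader.read_wrench()))
            time.sleep(1.0 / hz)
        return samples                     # training data for the AI model

    if __name__ == "__main__":
        demo = record_demonstration(LeaderArm(), FollowerArm(), duration_s=0.1)
        print(f"captured {len(demo)} samples")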

Key components of the system include user control interfaces, multiple workspace viewpoints, and motion modification features that provide haptic feedback. The control system can automatically transition between position and force control based on real-time sensor data. Integrating visual and force data into learned policies enhances the robot's ability to understand and replicate complex interactions with precision. The system is designed to adapt to various robot configurations, ensuring broad applicability across robotic platforms.
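
As a rough sketch of this mode switching, the snippet below chooses between position and force control from a live force reading along a single contact axis. The threshold, gains, and one-axis model are illustrative assumptions, not values from the patent.

    CONTACT_THRESHOLD_N = 2.0   # assumed: switch to force control above this contact force

    def select_control_mode(measured_force_n):
        """Choose control mode from real-time force-sensor data."""
        return "force" if abs(measured_force_n) > CONTACT_THRESHOLD_N else "position"

    def control_step(target_pos, current_pos, measured_force_n,
                     desired_force_n=5.0, kp_pos=10.0, kp_force=0.02):
        """One control tick along a single contact axis; returns a velocity command."""
        if select_control_mode(measured_force_n) == "force":
            # In contact: regulate force toward the desired value (force control).
            return kp_force * (desired_force_n - measured_force_n)
        # In free space: track the demonstrated position (position control).
        return kp_pos * (target_pos - current_pos)

    # In free space (no measured force) the controller tracks position...
    print(control_step(target_pos=0.10, current_pos=0.08, measured_force_n=0.0))
    # ...and once contact force exceeds the threshold it regulates force instead.
    print(control_step(target_pos=0.10, current_pos=0.08, measured_force_n=3.0))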

The invention also includes a handheld demonstration device for capturing robotic skills. This device is equipped with sensors that measure interaction forces during demonstrations and supports interchangeable end effector tools, such as grippers or customizable fingers, providing versatility in task execution. The device can integrate cameras to capture synchronized viewpoints and includes electronic triggers for precise activation during demonstrations.
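
One way to picture the device's output is a per-instant record that ties each force reading to the trigger state, mounted tool, and camera frames on a common clock. The field names below are assumptions for illustration, not the patent's data format.

    from dataclasses import dataclass

    @dataclass
    class HandheldSample:
        timestamp_s: float     # common clock used to synchronize sensors and cameras
        wrench: tuple          # (Fx, Fy, Fz, Tx, Ty, Tz) interaction forces/torques
        trigger_pressed: bool  # electronic trigger marks activation events
        tool_id: str           # which interchangeable end effector is mounted
        frame_ids: tuple       # indices of the camera frames captured at this instant

    sample = HandheldSample(0.01, (0.0,) * 6, True, "parallel_gripper", (42, 42))
    print(sample)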

An advanced control system processes demonstration data comprising force and visual information, dynamically switching between position and force control based on task requirements. Neural networks verify task completion by analyzing sensor data, ensuring accurate replication of demonstrated tasks. Additionally, the system generates natural language descriptions of robot actions, creating human-readable narratives that facilitate understanding and further programming of robot tasks.
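
As a sketch of the verification idea, the small PyTorch-style model below fuses force-torque readings with a visual embedding and outputs a completion probability. The architecture and feature sizes are illustrative assumptions, not the patent's actual network.

    import torch
    import torch.nn as nn

    class CompletionVerifier(nn.Module):
        """Binary classifier: did the demonstrated task finish successfully?"""
        def __init__(self, wrench_dim=6, vision_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(wrench_dim + vision_dim, 64),
                nn.ReLU(),
                nn.Linear(64, 1),   # logit for "task complete"
            )

        def forward(self, wrench, vision_embedding):
            # Fuse force-torque features with a visual embedding, mirroring the
            # overview's combination of force and visual information.
            x = torch.cat([wrench, vision_embedding], dim=-1)
            return torch.sigmoid(self.net(x))

    # Usage: probability that a (batched) sensor snapshot marks completion.
    verifier = CompletionVerifier()
    p = verifier(torch.zeros(1, 6), torch.zeros(1, 128))
    print(f"completion probability: {p.item():.2f}")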

Supporting systems include modules for generating training data through 3D environmental scanning and simulation scenarios. These modules help adapt foundation models using demonstration data, guided by natural language inputs that specify task details. Together, these components offer a low-cost solution for effective robot learning from human demonstrations, addressing long-standing limitations of manual robot programming.
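
A minimal sketch of the adaptation step is shown below, assuming a behavior-cloning-style update on demonstration data conditioned on an embedded language instruction; all module names and dimensions are placeholders rather than the patent's actual models.

    import torch
    import torch.nn as nn

    class LanguageConditionedPolicy(nn.Module):
        """Placeholder policy conditioned on a natural-language task embedding."""
        def __init__(self, obs_dim=32, text_dim=16, act_dim=7):
            super().__init__()
            self.backbone = nn.Linear(obs_dim + text_dim, 64)  # "foundation model" stand-in
            self.head = nn.Linear(64, act_dim)

        def forward(self, obs, text_embedding):
            x = torch.cat([obs, text_embedding], dim=-1)
            return self.head(torch.relu(self.backbone(x)))

    def finetune_step(policy, optimizer, obs, text, expert_action):
        """One behavior-cloning update on scanned or simulated demo data."""
        loss = nn.functional.mse_loss(policy(obs, text), expert_action)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    policy = LanguageConditionedPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    loss = finetune_step(policy, opt,
                         torch.zeros(8, 32),   # observations from 3D scans/simulation
                         torch.zeros(8, 16),   # embedded task description
                         torch.zeros(8, 7))    # demonstrated actions
    print(f"BC loss: {loss:.4f}")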