US20250238993
2025-07-24
Physics
G06T13/40
A system and method are introduced for creating a virtual avatar that represents a user by utilizing sensor data collected from various sensors attached to the user's body. This involves force sensors placed under the user's feet and inertial measurement units (IMUs) positioned at specific locations on the user. The collected data helps in determining the user's activity type and estimating their full-body posture using a machine learning model. The outcome is a virtual avatar that mirrors the user's movements without relying on traditional motion capture technology.
The system employs multiple sensors, including force sensors and IMUs, to gather detailed information about the user's movements. These sensors are placed at strategic locations: force sensors under the feet, and IMUs on the wrists, sacrum, trunk, thighs, and shanks. The collected data comprises force measurements from underfoot and inertial data from the IMUs, which together provide insight into limb positions and orientations. This comprehensive data collection allows for an accurate representation of the user's posture.
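As a concrete illustration of how such per-timestep sensor readings might be organized, the sketch below groups force and IMU data into a single frame. The field names, types, and units are illustrative assumptions, not taken from the application.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class IMUSample:
    location: str                       # e.g. "left_wrist", "sacrum", "right_shank"
    accel: Tuple[float, float, float]   # accelerometer reading, m/s^2
    gyro: Tuple[float, float, float]    # gyroscope reading, rad/s
    orientation: Tuple[float, float, float, float]  # quaternion (w, x, y, z)

@dataclass
class ForceSample:
    foot: str        # "left" or "right"
    force_n: float   # vertical ground-reaction force, newtons

@dataclass
class SensorFrame:
    timestamp_s: float
    imu_samples: List[IMUSample]
    force_samples: List[ForceSample]

# One hypothetical frame: a sacrum IMU reading plus both underfoot force sensors.
frame = SensorFrame(
    timestamp_s=0.01,
    imu_samples=[IMUSample("sacrum", (0.1, 9.8, 0.2), (0.0, 0.0, 0.1),
                           (1.0, 0.0, 0.0, 0.0))],
    force_samples=[ForceSample("left", 412.0), ForceSample("right", 388.5)],
)
```

A structure like this would be the natural input unit for the downstream activity-classification and posture-estimation stages.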
A machine learning model is central to processing the sensor data. It is trained to estimate full-body posture from force-sensor and IMU inputs, and to identify the type of activity the user is performing. The model outputs an estimated posture specifying limb positions and orientations, which is then used to construct the virtual avatar. The model is trained on data collected from users performing a variety of activities to ensure accuracy.
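The two-stage processing described above (determine the activity type, then estimate posture) can be sketched as follows. The threshold-based classifier and pass-through posture estimator are rule-based stand-ins for the trained machine learning model, and the feature names are hypothetical.

```python
from typing import Dict, Tuple

def classify_activity(mean_force_n: float, gyro_energy: float) -> str:
    # Hypothetical thresholds standing in for the trained classifier:
    # near-zero foot force suggests sitting; high angular motion, running.
    if mean_force_n < 50.0:
        return "sitting"
    return "running" if gyro_energy > 1.0 else "walking"

def estimate_posture(activity: str,
                     imu_orientations: Dict[str, Tuple[float, float, float, float]]
                     ) -> Dict[str, dict]:
    # The trained regressor would map sensor features to per-limb poses;
    # this stand-in echoes each IMU orientation with a placeholder position.
    return {
        segment: {"position": (0.0, 0.0, 0.0), "orientation": quat}
        for segment, quat in imu_orientations.items()
    }

activity = classify_activity(mean_force_n=400.0, gyro_energy=0.3)
posture = estimate_posture(activity, {"sacrum": (1.0, 0.0, 0.0, 0.0)})
```

Conditioning the posture estimate on the detected activity mirrors the claim that activity type is identified alongside (and feeds into) full-body posture estimation.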
The generation of the virtual avatar involves creating skeletal lines based on the estimated full-body posture. These lines form an avatar base template, which is then scaled using anthropometric data specific to the user. This process ensures that the virtual avatar accurately reflects the user's physical characteristics and movements. The avatar can be displayed in a virtual environment, moving in response to real-time sensor data.
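One plausible reading of the template-scaling step is a per-segment length adjustment driven by the user's anthropometric data. The sketch below uses a single height ratio applied uniformly; the segment names, reference height, and uniform-scaling rule are all assumptions, since the application does not specify the scaling formula.

```python
# Generic base-template bone lengths in metres (illustrative values).
TEMPLATE_SEGMENT_LENGTHS_M = {
    "trunk": 0.52,
    "thigh": 0.45,
    "shank": 0.43,
}

def scale_template(template: dict, user_height_m: float,
                   reference_height_m: float = 1.75) -> dict:
    """Scale every skeletal segment by the ratio of the user's height
    to the template's reference height."""
    ratio = user_height_m / reference_height_m
    return {segment: length * ratio for segment, length in template.items()}

scaled = scale_template(TEMPLATE_SEGMENT_LENGTHS_M, user_height_m=1.575)
# every segment is scaled by 1.575 / 1.75 = 0.9
```

A real system could scale each segment independently from measured limb lengths rather than a single overall ratio; this sketch only shows where anthropometric data enters the avatar pipeline.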
The system comprises processors connected to the sensors and a storage memory containing the trained machine learning model. The processors are responsible for obtaining sensor data, determining activity types, estimating postures, and generating avatars without visual motion capture data. This configuration allows users to create avatars in various locations with minimal equipment while maintaining high accuracy in representing their movements.