US20250157114
2025-05-15
Physics
G06T13/40
This patent application describes systems and methods for generating animatable characters or avatars from three-dimensional (3D) representations. It introduces a hybrid system that combines 3D Gaussians with pose-driven primitives and implicit neural fields to build detailed, realistic 3D models. This approach contrasts with traditional manual modeling techniques, offering greater efficiency and fidelity in character representation.
The system works by assigning a set of first elements of a 3D model to specific locations on the subject's surface in an initial pose. It then assigns second elements to these first elements, where each second element's opacity is determined by its distance from the subject's surface. These elements are updated based on target poses and subject attributes, allowing the character to be dynamically adjusted and rendered.
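The attachment scheme above can be sketched in a few lines. This is an illustrative toy, not the claimed implementation: the "surface" is a unit sphere standing in for a scanned character, the first elements are anchor points sampled on that surface, the second elements are Gaussian centers offset from the anchors, and opacity decays with unsigned distance from the surface (all function names and the `sigma` falloff are assumptions for illustration):

```python
import numpy as np

def sample_anchor_points(n, seed=0):
    """First elements: anchor points on the subject's surface in the
    initial (canonical) pose. Here the 'surface' is a unit sphere,
    a stand-in for an actual character surface."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def attach_gaussians(anchors, offsets):
    """Second elements: one Gaussian center per anchor, displaced by a
    small offset (learned in a real system; supplied here)."""
    return anchors + offsets

def opacity_from_surface_distance(points, sigma=0.05):
    """Opacity falls off with distance from the surface, so elements
    far from the skin become transparent. For a unit sphere the
    unsigned distance is | ||p|| - 1 |."""
    d = np.abs(np.linalg.norm(points, axis=1) - 1.0)
    return np.exp(-(d / sigma) ** 2)

anchors = sample_anchor_points(4)
gaussians = attach_gaussians(anchors, np.zeros((4, 3)))
print(opacity_from_surface_distance(gaussians))  # all ~1.0: on the surface
```

A Gaussian pushed off the surface (e.g. at radius 2) would receive near-zero opacity under this mapping, which is the behavior the summary describes.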
A significant feature of this approach is the use of Signed Distance Function (SDF)-based implicit mesh learning, which improves the stability and efficiency of learning complex Gaussian parameters. This technique allows for high-speed rendering suitable for real-time applications, enhancing both the geometric precision and visual appearance of avatars or characters.
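To make the SDF-based opacity idea concrete, the sketch below uses an analytic sphere SDF in place of the learned implicit field and maps signed distance to opacity with a logistic falloff. Both the logistic mapping and the `beta` sharpness parameter are illustrative choices common in SDF-based volume rendering, not details taken from the patent:

```python
import numpy as np

def sdf_sphere(points, radius=1.0):
    """Analytic SDF of a sphere: negative inside, zero on the surface,
    positive outside. In the described system this would be a learned
    implicit field rather than a closed form."""
    return np.linalg.norm(points, axis=1) - radius

def sdf_to_opacity(sdf, beta=0.05):
    """Map signed distance to opacity with a logistic falloff: opaque
    inside, 0.5 exactly on the surface, transparent outside."""
    return 1.0 / (1.0 + np.exp(sdf / beta))

pts = np.array([[0.5, 0.0, 0.0],   # inside  -> opacity near 1
                [1.0, 0.0, 0.0],   # on surface -> opacity 0.5
                [2.0, 0.0, 0.0]])  # outside -> opacity near 0
print(sdf_to_opacity(sdf_sphere(pts)))
```

Because opacity is a smooth function of the signed distance, gradients flow from rendered pixels back into the implicit surface, which is what makes this parameterization stable to learn.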
The system can be implemented using processors equipped with circuits capable of handling various computational tasks. These include assigning elements, updating them based on objective functions, and determining opacities using SDFs. The system also supports input from diverse data types such as text, audio, or video to inform character attributes.
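The "updating elements based on objective functions" step can be illustrated with a toy gradient-descent loop. The objective here, mean squared distance of element positions to a spherical target surface, is a stand-in for the real system's rendering losses; the analytic gradient and learning rate are assumptions for the sketch:

```python
import numpy as np

def surface_loss(points, radius=1.0):
    """Toy objective: mean squared distance of element positions to a
    target surface (a sphere of the given radius)."""
    d = np.linalg.norm(points, axis=1) - radius
    return np.mean(d ** 2)

def update_step(points, lr=0.1, radius=1.0):
    """One gradient-descent update of element positions. The gradient
    of the loss above is (2/N)(||p|| - r) * p/||p|| per element."""
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    grad = 2.0 * (norms - radius) * points / norms / len(points)
    return points - lr * grad

pts = np.array([[1.5, 0.0, 0.0], [0.0, 0.6, 0.0]])
for _ in range(200):
    pts = update_step(pts)
print(surface_loss(pts))  # drives the elements onto the target surface
```

In the described system the objective would instead compare rendered images against observations, but the update structure, evaluate a loss and move element parameters along its gradient, is the same.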
Potential applications span various domains, including virtual reality (VR), augmented reality (AR), mixed reality (MR), digital twin operations, and conversational AI. The technology can be integrated into systems for synthetic data generation, simulation, collaborative content creation, and more, running on cloud computing resources or edge devices.