US20250173962
2025-05-29
Physics
G06T17/00
The patent describes a method and system for transforming unstructured sketches and accompanying text into 3D objects, allowing users without 3D-modeling expertise to create high-quality 3D assets in fields such as animation, gaming, and fashion. The method takes a sketch and a text description as input, generates a 2D image from them, and then creates a 3D object from that 2D image. This eliminates the need for specialized authoring tools and deep knowledge of 3D content creation.
Traditional 3D content creation involves multiple complex processes, including modeling, texturing, and sculpting, and often requires specialized tools such as CAD software or Blender. These processes can be daunting and costly for users without advanced skills or resources. The invention addresses this challenge with an intuitive solution that builds 3D objects from basic sketches and descriptive text, making high-quality 3D content accessible to non-experts.
The method begins with the user providing a rough sketch and a descriptive text about the desired 3D object. A machine learning model processes these inputs to generate a 2D image, and a second model creates the final 3D object from that image. The system can also generate multiple variations of the text to produce diverse styles of 2D images and corresponding 3D objects. This approach compensates for ambiguity in the sketch by leveraging the textual description to supply clarity and detail.
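The patent does not specify model architectures or APIs, so the staged pipeline above can only be sketched schematically. In the Python sketch below, the three placeholder functions stand in for the learned models, and all names are illustrative assumptions rather than terms from the patent:

```python
from typing import List, Dict


def generate_text_variations(text: str, n: int) -> List[str]:
    # Placeholder: a real system would use a language model to paraphrase
    # the description into n stylistically different prompts.
    return [f"{text}, style {i}" for i in range(n)]


def sketch_text_to_image(sketch: str, prompt: str) -> Dict[str, str]:
    # Placeholder for the sketch-conditioned text-to-image model
    # (stage 1: sketch + text -> 2D image).
    return {"sketch": sketch, "prompt": prompt}


def image_to_3d(image: Dict[str, str]) -> Dict[str, str]:
    # Placeholder for the image-to-3D reconstruction model
    # (stage 2: 2D image -> 3D object).
    return {"mesh_from": image["prompt"]}


def sketch_to_3d(sketch: str, text: str, n_variations: int = 3) -> List[Dict[str, str]]:
    # Full pipeline: vary the text, render a 2D image per variant,
    # then lift each image to a 3D object.
    variants = generate_text_variations(text, n_variations)
    images = [sketch_text_to_image(sketch, v) for v in variants]
    return [image_to_3d(img) for img in images]
```

Running `sketch_to_3d("rough chair doodle", "a red armchair")` yields one placeholder 3D object per text variation, mirroring how the textual variations drive diverse output styles.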
A processor within the system receives the sketch and text inputs, creates a 2D image, and then generates a 3D object; storage holds the inputs and the intermediate and final artifacts. AI-based models analyze and transform the user inputs into detailed digital assets, letting users bypass the learning curve associated with professional-grade 3D modeling software.
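As a hedged illustration of the processor-plus-storage arrangement, the class below wraps the two generation stages around a simple storage dictionary. The class and method names are assumptions made for this sketch, not terminology from the patent, and the string-building bodies merely stand in for the AI models:

```python
class SketchTo3DSystem:
    """Illustrative system: a processor that turns stored inputs into a 3D object."""

    def __init__(self) -> None:
        # Storage for inputs plus intermediate and final artifacts.
        self.storage: dict = {}

    def receive_inputs(self, sketch: str, text: str) -> None:
        self.storage["sketch"] = sketch
        self.storage["text"] = text

    def create_2d_image(self) -> str:
        # Placeholder for the sketch + text -> 2D image model.
        img = f"image({self.storage['sketch']}; {self.storage['text']})"
        self.storage["image_2d"] = img
        return img

    def generate_3d_object(self) -> str:
        # Placeholder for the 2D image -> 3D object model.
        obj = f"mesh({self.storage['image_2d']})"
        self.storage["object_3d"] = obj
        return obj


system = SketchTo3DSystem()
system.receive_inputs("rough doodle", "a red chair")
system.create_2d_image()
obj = system.generate_3d_object()
```

Each stage reads its input from and writes its output to storage, matching the description of a processor that receives inputs, produces a 2D image, and then generates the 3D object.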