Invention Title:

NEURAL RENDERING OF MAKEUP BASED ON IN VITRO COSMETIC ANALYSIS

Publication number:

US20240412463

Publication date:

Section:

Physics

Class:

G06T19/00

Inventors:

Assignee:

Applicant:

Smart overview of the Invention

The patent application describes a method for rendering makeup on digital images using neural rendering techniques. A computing system uses a tensor of neural descriptors to capture the attributes of a reference makeup from an image. A renderer processes this tensor to generate a rendered image that can be displayed on a variety of devices, with the goal of producing realistic digital representations of applied makeup.

Technical Details

The renderer may incorporate generative models such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs). It can accept input images that include depth data and may employ techniques such as UV mapping and three-dimensional modeling. The tensor of neural descriptors is stored in a data repository and is derived from in vitro images of makeup applied to sample cards with different textures and finishes.
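
As a rough sketch of how such a renderer might be wired together, the following is a minimal GAN-style generator that conditions an input image (optionally carrying a depth channel) on the descriptor tensor. All module names, tensor shapes, and the descriptor dimension are illustrative assumptions, not details taken from the application.

    import torch
    import torch.nn as nn

    class MakeupRenderer(nn.Module):
        """Hypothetical generator: conditions an input face image on a
        tensor of neural descriptors describing a reference makeup."""

        def __init__(self, descriptor_dim=128, in_channels=3):
            # Use in_channels=4 when a depth map is concatenated to the RGB image.
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            )
            # Project the descriptor tensor so it can be broadcast over image features.
            self.project = nn.Linear(descriptor_dim, 64)
            self.decode = nn.Sequential(
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # rendered RGB
            )

        def forward(self, image, descriptors):
            feats = self.encode(image)                          # (B, 64, H, W)
            cond = self.project(descriptors)[:, :, None, None]  # (B, 64, 1, 1)
            return self.decode(feats + cond)                    # (B, 3, H, W)

Broadcasting the projected descriptors over the feature map is only one of several plausible conditioning mechanisms; the application leaves the internal architecture open.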

System Components

A system trains both an attribute extractor and a renderer for accurate makeup rendering. Training involves generating neural descriptors from reference images and iteratively updating both models by comparing rendered outputs against ground truth images. The system may include hardware such as cameras and a support apparatus for capturing reference images, ensuring consistent lighting and positioning.
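
A training loop consistent with this description might look as follows. The extractor architecture, the pixel-wise L1 loss, and the joint optimizer are assumptions for illustration; the application does not commit to a specific loss or network design.

    import torch
    import torch.nn as nn

    class AttributeExtractor(nn.Module):
        """Hypothetical CNN mapping an in vitro reference image of makeup
        to a tensor of neural descriptors (dimension chosen arbitrarily)."""

        def __init__(self, descriptor_dim=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(64, descriptor_dim)

        def forward(self, reference_image):
            return self.head(self.features(reference_image))

    def train_step(extractor, renderer, optimizer, reference, bare_face, ground_truth):
        """One joint update: extract descriptors, render, compare to ground truth.
        The optimizer is assumed to cover the parameters of both models."""
        descriptors = extractor(reference)
        rendered = renderer(bare_face, descriptors)
        loss = nn.functional.l1_loss(rendered, ground_truth)  # assumed pixel loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Repeating train_step over pairs of reference images and ground truth photographs refines both models together, matching the iterative comparison-and-update cycle described above.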

Applications

The neural rendering approach improves on traditional graphical engines by automating parameter extraction, increasing efficiency and accuracy, particularly for complex makeup finishes such as pearl and metallic. The technology applies to various types of makeup, including lip and eye products, and can be integrated into memory-constrained environments such as mobile apps and web browsers.
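
One way to read the memory-efficiency point: once descriptors have been extracted offline, a client only needs to store a compact tensor per product rather than reference imagery or hand-tuned engine parameters. The repository layout and product identifiers below are hypothetical.

    import torch

    # Hypothetical on-device repository: each product's appearance is reduced
    # to a small descriptor tensor (here 128 floats, i.e. a few hundred bytes)
    # instead of full reference images. Random values stand in for real data.
    DESCRIPTOR_REPOSITORY = {
        "lipstick-ruby-metallic": torch.randn(1, 128),
        "eyeshadow-pearl-rose": torch.randn(1, 128),
    }

    def try_on(renderer, product_id, face_image):
        """Render a stored product onto a user image without re-running extraction."""
        descriptors = DESCRIPTOR_REPOSITORY[product_id]
        with torch.no_grad():
            return renderer(face_image, descriptors)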

Implementation Example

A schematic example illustrates the flow from capturing a reference image to generating a rendered image. The process begins with an in vitro image of applied makeup, which an attribute extractor analyzes with neural networks to produce a tensor of neural descriptors. A renderer then uses these descriptors to apply the makeup attributes to an input image, enabling realistic virtual try-on experiences.
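
Tying the earlier illustrative modules together, the schematic flow might reduce to the following inference script. The image sizes are arbitrary, and random tensors stand in for the in vitro reference photograph and the user's image.

    import torch

    extractor = AttributeExtractor(descriptor_dim=128)
    renderer = MakeupRenderer(descriptor_dim=128, in_channels=3)

    reference = torch.rand(1, 3, 256, 256)  # in vitro image of applied makeup
    face = torch.rand(1, 3, 256, 256)       # user's input image

    with torch.no_grad():
        descriptors = extractor(reference)      # tensor of neural descriptors
        rendered = renderer(face, descriptors)  # makeup applied to input image

    print(rendered.shape)  # torch.Size([1, 3, 256, 256])

In practice the extraction step can run once per product, with only the resulting descriptor tensor shipped to the rendering device, as sketched in the repository example above.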