Invention Title:

SYSTEMS AND METHODS FOR IMPROVED FACIAL ATTRIBUTE CLASSIFICATION AND USE THEREOF

Publication number:

US20240249504

Publication date:
Section:

Physics

Class:

G06V10/764

Inventors:

Assignee:

Applicant:


Smart overview of the Invention

Facial attribute classification (FAC) analyzes face images to identify characteristics such as lip size, eye color, and hair color. This classification is crucial for applications like image retrieval, face recognition, and recommendation systems. Recent advances in machine learning, particularly convolutional neural networks (CNNs), have significantly enhanced the effectiveness of FAC methods. These methods can be categorized into single-label and multi-label learning approaches, each with distinct strengths and weaknesses in predicting facial attributes.
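The single-label versus multi-label distinction can be illustrated with the two standard output heads used in classification networks. This is a generic sketch, not code from the patent: a softmax head forces mutually exclusive classes (single-label), while independent sigmoids let several attributes be active at once (multi-label). The logit values are arbitrary placeholders.

```python
import math

def softmax(logits):
    # Single-label head: scores compete, probabilities sum to 1,
    # so exactly one class (e.g., one hair color) is favored.
    peak = max(logits)
    exps = [math.exp(x - peak) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid_each(logits):
    # Multi-label head: each attribute is scored independently,
    # so several attributes (e.g., "blonde" and "curly") can be
    # predicted present for the same face.
    return [1.0 / (1.0 + math.exp(-x)) for x in logits]

single = softmax([2.0, 0.5, -1.0])       # a distribution over exclusive classes
multi = sigmoid_each([2.0, 0.5, -1.0])   # an independent score per attribute
```

A single-label model must pick one winner even when attributes co-occur; the multi-label head avoids that constraint, which is why FAC work distinguishes the two families.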

Deep Learning Model for Attribute Prediction

A deep learning supervised regression model has been developed to improve facial attribute prediction. This model can be integrated into augmented or virtual reality interfaces to modify images based on predicted facial attributes. For example, it can apply makeup effects that correspond with identified attributes, enhancing user interaction within e-commerce platforms. The model not only predicts attributes but also recommends products tailored to the user's specific features.

Dataset and Annotation Methodology

The approach utilizes a custom dataset consisting of images from 3,790 subjects, each annotated by multiple human reviewers to capture a range of opinions on facial attributes. Unlike traditional methods that rely on a single "hard label," this system employs a "soft label" approach, recognizing that human annotators may disagree on attribute classifications. This flexibility allows the algorithm to recommend multiple outputs based on varying levels of confidence rather than settling on a potentially inaccurate single answer.
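The soft-label idea described above can be sketched in a few lines: instead of collapsing reviewer votes to a single hard label, the votes become a probability distribution over the candidate classes. The class names and vote counts below are hypothetical examples, not values from the patent's dataset.

```python
from collections import Counter

def soft_label(votes, classes):
    # Multiple human reviewers' votes become a distribution over
    # classes, preserving disagreement instead of discarding it.
    counts = Counter(votes)
    return [counts[c] / len(votes) for c in classes]

classes = ["black", "brown", "blonde"]
votes = ["brown", "brown", "black", "brown", "blonde"]  # 5 reviewers
soft_label(votes, classes)  # -> [0.2, 0.6, 0.2]
```

Training a regression model against such distributions lets it express confidence across several plausible answers, which is what enables recommending multiple outputs rather than one possibly wrong hard label.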

Human Agreement and Performance Metrics

The system evaluates human agreement and performance through metrics that assess how consistently annotators identify facial attributes. For instance, when multiple annotators classify an attribute like hair color, the agreement is calculated based on majority votes. Additionally, two performance metrics are established: one that includes cases with no clear majority and another that disregards them. This comprehensive evaluation provides insights into both the reliability of human annotations and the effectiveness of the classification model.
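The two metrics described above can be sketched as follows, under the assumption (mine, not stated in the overview) that a model prediction is scored against the annotators' strict majority vote: one metric counts no-majority cases as failures, the other excludes them from the denominator.

```python
from collections import Counter

def majority(votes):
    # Strict majority label, or None when annotators have no clear majority.
    label, count = Counter(votes).most_common(1)[0]
    return label if count > len(votes) / 2 else None

def agreement_metrics(annotations, predictions):
    # Metric A: cases with no clear majority are included and count as misses.
    # Metric B: cases with no clear majority are disregarded entirely.
    hits = decided = 0
    for votes, pred in zip(annotations, predictions):
        m = majority(votes)
        if m is not None:
            decided += 1
            if pred == m:
                hits += 1
    metric_a = hits / len(annotations)
    metric_b = hits / decided if decided else 0.0
    return metric_a, metric_b

ann = [["brown", "brown", "brown"],    # clear majority: brown
       ["black", "blonde", "brown"],   # no majority
       ["blonde", "blonde", "black"]]  # clear majority: blonde
pred = ["brown", "brown", "blonde"]
agreement_metrics(ann, pred)  # -> (2/3 including no-majority cases, 1.0 excluding them)
```

Comparing the two numbers separates model error from annotation ambiguity: a gap between them indicates that many faces simply have no consensus label.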