Published March 25, 2024 | Version v1
Dataset | Open

LUVLi Pre-trained Model

  • 1. University of Utah, Michigan State University
  • 2. Mitsubishi Electric Research Laboratories (MERL)
  • 3. University of Manchester
  • 4. Michigan State University
  • 5. New York University

Description

Introduction

Modern face alignment methods have become quite accurate at predicting the locations of facial landmarks, but they do not typically estimate the uncertainty of their predicted locations nor predict whether landmarks are visible. In this paper, we present a novel framework for jointly predicting landmark locations, associated uncertainties of these predicted locations, and landmark visibilities. We model these as mixed random variables and estimate them using a deep network trained with our proposed Location, Uncertainty, and Visibility Likelihood (LUVLi) loss. In addition, we release an entirely new labeling of a large face alignment dataset with over 19,000 face images in a full range of head poses. Each face is manually labeled with the ground-truth locations of 68 landmarks, with the additional information of whether each landmark is unoccluded, self-occluded (due to extreme head poses), or externally occluded. Not only does our joint estimation yield accurate estimates of the uncertainty of predicted landmark locations, but it also yields state-of-the-art estimates for the landmark locations themselves on multiple standard face alignment datasets. Our method’s estimates of the uncertainty of predicted landmark locations could be used to automatically identify input images on which face alignment fails, which can be critical for downstream tasks.

To foster further research into this topic, we are publicly releasing our pre-trained LUVLi models. Please see our CVPR 2020 paper, "LUVLi Face Alignment: Estimating Landmarks' Location, Uncertainty, and Visibility Likelihood," for details.
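
For a concrete sense of the objective the released models were trained with, the following is a minimal sketch of a LUVLi-style loss in PyTorch. It is illustrative only, not the released implementation: the function name and tensor layout are our own, and it collapses the paper's three-way labels (unoccluded, self-occluded, externally occluded) into a single visibility bit.

import torch
import torch.nn.functional as F

def luvli_style_loss(mu, cov, vis_logit, target_xy, target_vis):
    # mu:         (N, L, 2)    predicted landmark means
    # cov:        (N, L, 2, 2) predicted covariances (assumed positive definite)
    # vis_logit:  (N, L)       predicted visibility logits
    # target_xy:  (N, L, 2)    ground-truth landmark locations
    # target_vis: (N, L)       1.0 if the landmark is visible, else 0.0
    diff = (target_xy - mu).unsqueeze(-1)                      # (N, L, 2, 1)
    mahal = diff.transpose(-1, -2) @ torch.linalg.inv(cov) @ diff
    mahal = mahal.squeeze(-1).squeeze(-1)                      # (N, L)
    # 2D Gaussian negative log-likelihood (constant term dropped),
    # applied only to landmarks labeled visible.
    nll_loc = 0.5 * (mahal + torch.logdet(cov)) * target_vis
    # Bernoulli negative log-likelihood for the visibility prediction.
    nll_vis = F.binary_cross_entropy_with_logits(
        vis_logit, target_vis, reduction="none")
    return (nll_loc + nll_vis).mean()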

At a Glance

- The size of the unzipped model is ~700 MB.

- The unzipped folder contains: (i) a README.md file and (ii) pre-trained models and logs. The pre-trained models can be loaded in our publicly released LUVLi implementation, as sketched below.
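
A minimal sketch of loading one of the checkpoints with PyTorch follows. The checkpoint file name is a placeholder; consult the bundled README.md for the actual paths, and the released LUVLi repository for the matching model definition.

import torch

# Placeholder path; see the README.md in the unzipped folder for real names.
checkpoint = torch.load("LUVLi_models/example_checkpoint.pth",
                        map_location="cpu")

# If the checkpoint stores a state_dict (a common convention, assumed here),
# it would be loaded into the model class from the LUVLi implementation:
# model.load_state_dict(checkpoint["state_dict"])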

Citation

If you use the LUVLi data in your research, please cite our paper:

@inproceedings{kumar2020luvli,
  title={{LUVLi} Face Alignment: Estimating Landmarks' Location, Uncertainty, and Visibility Likelihood},
  author={Kumar, Abhinav and Marks, Tim K. and Mou, Wenxuan and Wang, Ye and Jones, Michael and Cherian, Anoop and Koike-Akino, Toshiaki and Liu, Xiaoming and Feng, Chen},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

License

The LUVLi data is released under the CC-BY-SA-4.0 license.

All data:

Created by Mitsubishi Electric Research Laboratories (MERL), 2022, 2023

SPDX-License-Identifier: CC-BY-SA-4.0

Files

LUVLi_models.zip (565.2 MB)
md5: 9842157c845f0e95acaaa7e33bb98ad1
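
To check the integrity of a completed download against the MD5 above, a short Python snippet suffices:

import hashlib

def md5_of(path, chunk_size=1 << 20):
    # Hash the file in 1 MiB chunks to keep memory use flat.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

assert md5_of("LUVLi_models.zip") == "9842157c845f0e95acaaa7e33bb98ad1"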