Published July 22, 2025 | Version v1
Dataset Open

FairFaceGPT

  • Idiap Research Institute

Description

Multimodal large language models (MLLMs) have shown remarkable performance in vision-language tasks. However, existing MLLMs are primarily trained on generic datasets, limiting their ability to reason about domain-specific visual cues such as those in facial images. In particular, tasks that require detailed understanding of facial structure, expression, emotion, and demographic features remain underexplored by MLLMs due to the lack of large-scale annotated face image-text datasets. We propose a novel weakly supervised pipeline that uses ChatGPT with attribute-aware prompts to generate high-quality question-answer pairs based on images from the FairFace dataset. The resulting corpus, called FairFaceGPT, covers a diverse set of attributes including expression, pose, skin texture, and forensic information. We use FairFaceGPT to train FaceLLM, a multimodal large language model specialized for facial image understanding. Our experiments demonstrate that FaceLLM improves the performance of MLLMs on various face-centric tasks and achieves state-of-the-art performance. This work highlights the potential of synthetic supervision via language models for building domain-specialized MLLMs, and sets a precedent for trustworthy, human-centric multimodal AI systems.
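To illustrate the idea of attribute-aware prompting, the sketch below shows one way weak labels from FairFace (age, gender, race) could be folded into an instruction for a language model to produce question-answer pairs. The prompt wording, attribute keys, and helper name are illustrative assumptions, not the authors' exact implementation.

```python
def build_prompt(attrs: dict) -> str:
    """Compose an attribute-aware instruction from weak FairFace labels.

    `attrs` maps attribute names (e.g. "age", "gender", "race") to the
    weak labels attached to a face image. The returned string would be
    sent to a language model (e.g. ChatGPT) alongside the image to
    elicit a grounded question-answer pair.
    """
    # Serialize the weak labels into a compact, readable clause.
    attr_text = ", ".join(f"{k}: {v}" for k, v in attrs.items())
    return (
        "You are shown a face image with the following weak labels "
        f"({attr_text}). Generate one question-answer pair about the "
        "person's expression, pose, or skin texture, consistent with "
        "these labels. Answer factually and avoid speculation."
    )

# Example with the FairFace annotation categories.
example_labels = {"age": "20-29", "gender": "Female", "race": "East Asian"}
prompt = build_prompt(example_labels)
```

In this sketch the weak labels constrain generation, which is what makes the supervision "weak": the language model never sees ground-truth QA pairs, only attribute hints.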

Project page: https://www.idiap.ch/paper/facellm

Reference 

  @article{facellm2025,
    title={FaceLLM: A Multimodal Large Language Model for Face Understanding},
    author={Hatef Otroshi Shahreza and S{\'e}bastien Marcel},
    journal={arXiv preprint arXiv:2507.10300},
    year={2025}
  }

Files (46.6 MB)

  md5:0b9f892110e7919319f91b328beff9f8 · 46.6 MB

Additional details

Funding

Hasler Foundation
reSponsible fAir FacE Recognition (SAFER) 21044