MOST-GAN Pre-trained Model
Creators
- MIT
- Mitsubishi Electric Research Laboratories (MERL)
- Michigan State University
Description
Introduction
Recent advances in generative adversarial networks (GANs) have led to remarkable achievements in face image synthesis. While methods that use style-based GANs can generate strikingly photorealistic face images, it is often difficult to control the characteristics of the generated faces in a meaningful and disentangled way. Prior approaches aim to achieve such semantic control and disentanglement within the latent space of a previously trained GAN. In contrast, we propose a framework that a priori models physical attributes of the face such as 3D shape, albedo, pose, and lighting explicitly, thus providing disentanglement by design. Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models, which we couple with a state-of-the-art 2D hair manipulation network. MOST-GAN achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
To foster further research into this topic, we are publicly releasing our pre-trained model for MOST-GAN. Please see our AAAI paper titled [MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation](https://arxiv.org/abs/2111.01048) for details.
At a Glance
- The size of the unzipped model is ~300 MB.
- The unzipped folder contains: (i) a README.md file and (ii) the pre-trained model at ./checkpoints/checkpoint01.pt. The pre-trained model can be loaded with our publicly released MOST-GAN implementation (see the sketch below).
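As a rough illustration, the checkpoint file can be inspected and loaded with standard PyTorch tooling. This is only a minimal sketch: the path follows the folder layout described above, but the contents and key names of the checkpoint are assumptions, so consult the README.md and the MOST-GAN repository for the actual loading procedure.

```python
# Minimal sketch of opening the released checkpoint with PyTorch.
# The structure of the checkpoint (dict keys, state dicts) is an assumption;
# see the bundled README.md and the MOST-GAN code for the supported workflow.
import torch

checkpoint = torch.load("checkpoints/checkpoint01.pt", map_location="cpu")

# A .pt checkpoint is typically a dict of state dicts and/or metadata;
# inspect its keys before wiring it into the model code.
if isinstance(checkpoint, dict):
    print("checkpoint keys:", list(checkpoint.keys()))
```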
Citation
If you use the MOST-GAN data in your research, please cite our paper:
@inproceedings{medin2022most,
  title={MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation},
  author={Medin, Safa C and Egger, Bernhard and Cherian, Anoop and Wang, Ye and Tenenbaum, Joshua B and Liu, Xiaoming and Marks, Tim K},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={36},
  number={2},
  pages={1962--1971},
  year={2022}
}
License
The MOST-GAN data is released under the CC-BY-SA-4.0 license.
All data:
Created by Mitsubishi Electric Research Laboratories (MERL), 2022, 2023
SPDX-License-Identifier: CC-BY-SA-4.0
Files
most-gan-data.zip (318.7 MB)
md5: 5761c2c66fde4e2d3085176d144e7a51
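To confirm the download is intact before unzipping, the archive can be checked against the md5 listed above. The sketch below assumes the file was saved as most-gan-data.zip in the current working directory.

```python
# Minimal sketch: verify the downloaded archive against the published md5.
# Assumes most-gan-data.zip is in the current working directory.
import hashlib

EXPECTED_MD5 = "5761c2c66fde4e2d3085176d144e7a51"

md5 = hashlib.md5()
with open("most-gan-data.zip", "rb") as f:
    # Read in 1 MiB chunks to avoid loading the whole archive into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)

print("OK" if md5.hexdigest() == EXPECTED_MD5 else "checksum mismatch")
```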