This repository is the official implementation of the CVPR 2024 paper 3D Facial Expressions through Analysis-by-Neural Synthesis.
SMIRK reconstructs 3D faces from monocular images with facial geometry that faithfully recovers extreme, asymmetric, and subtle expressions.
You need a working installation of PyTorch and PyTorch3D. We provide a requirements.txt
file that installs the necessary dependencies for a Python 3.9 setup with CUDA 11.7:
conda create -n smirk python=3.9
pip install -r requirements.txt
# then install PyTorch3D (prebuilt wheel for Python 3.9, CUDA 11.7, PyTorch 2.0.1)
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py39_cu117_pyt201/download.html
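
Before downloading the models, it is worth confirming that the environment imports cleanly. The following is a minimal sanity-check sketch (not part of the repository) that verifies PyTorch, PyTorch3D, and CUDA availability:

# Sanity check (illustrative, not part of the repo): verify that PyTorch and
# PyTorch3D import correctly and that the CUDA runtime is visible.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())

try:
    import pytorch3d
    print("PyTorch3D version:", pytorch3d.__version__)
except ImportError as err:
    print("PyTorch3D is not installed correctly:", err)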
Then, to download the required models, run:
bash quick_install.sh
The above installation includes downloading the FLAME model, which requires registration. If you do not have an account, you can register at https://flame.is.tue.mpg.de/
This command also downloads the SMIRK pretrained model, which is additionally available on Google Drive.
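
As a quick check that the download succeeded, you can load the checkpoint with PyTorch. This is an illustrative sketch; the path below matches the one used by the demo commands further down and may differ if you saved the model elsewhere:

# Illustrative check: confirm the downloaded SMIRK checkpoint can be loaded.
import torch

checkpoint_path = "pretrained_models/SMIRK_em1.pt"  # path used by the demos below
checkpoint = torch.load(checkpoint_path, map_location="cpu")

# Most PyTorch checkpoints are dictionaries; printing the top-level keys is a
# simple way to confirm the file is intact.
if isinstance(checkpoint, dict):
    print("Checkpoint keys:", list(checkpoint.keys())[:10])
else:
    print("Loaded object of type:", type(checkpoint))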
We provide two demos: one that runs the model on a single image,
python demo.py --input_path samples/test_image2.png --out_path results/ --checkpoint pretrained_models/SMIRK_em1.pt --crop
and one that runs it on a video:
python demo_video.py --input_path samples/dafoe.mp4 --out_path results/ --checkpoint pretrained_models/SMIRK_em1.pt --crop --render_orig
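
If you want to process an entire folder of images, one simple option is to loop over the files and invoke the single-image demo with the same flags shown above. This is an illustrative sketch; the input folder, file pattern, and output directory are assumptions:

# Illustrative batch runner (not part of the repo): call demo.py for every
# PNG image in a folder, using only the documented command-line flags.
import subprocess
from pathlib import Path

input_dir = Path("samples")                      # assumed folder of input images
checkpoint = "pretrained_models/SMIRK_em1.pt"

for image_path in sorted(input_dir.glob("*.png")):
    subprocess.run(
        [
            "python", "demo.py",
            "--input_path", str(image_path),
            "--out_path", "results/",
            "--checkpoint", checkpoint,
            "--crop",
        ],
        check=True,
    )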