
A Novel Micro-Expression Recognition Approach Using Attention-Based Magnification-Adaptive Networks

Mengting Wei, Wenming Zheng, Yuan Zong, Xingxun Jiang, Cheng Lu, Jiateng Liu

In ICASSP 2022 (Download the paper [here])

New!

We have added the magnification model here!

Directory Structure

project
├── README.md
├── dataset
│   └── load_dataset.py - script to load the dataset for training
├── extract_features
│   ├── resnet.py - script defining the ResNet-18 model
│   └── resnet_features.py - script for extracting ME features
├── main
│   ├── config.py - check & change configurations here
│   ├── network.py - the AMAN model
│   └── train.py - main script for training and testing
└── utils
    └── util.py - utility functions
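
Training is presumably launched from the main directory after adjusting the options in config.py; the invocation below is an assumption based on this layout rather than a documented command:

cd main
python train.py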

Dependencies

The current version is tested on:

  • Windows 10 Pro, 64 bit with CUDA 11.4
  • python==3.7.5
  • torch==1.10.2
  • torchvision==0.11.3
  • numpy==1.21.6
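
A minimal environment setup sketch, assuming pip; the plain PyPI wheels shown here may not match CUDA 11.4, so adjust the PyTorch wheel index for your own setup:

pip install torch==1.10.2 torchvision==0.11.3 numpy==1.21.6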

Model

Dataset Preparation

  • We use CASME II, SAMM and SMIC-HS for training.
    • Magnify the original frames as described in the paper. You can use any technique, such as Eulerian Video Magnification, Lagrangian motion magnification, or learning-based magnification; we recommend the last one since it produces fewer artifacts.
  • To reduce the computational cost, extract features with the early layers of a ResNet-18 and use these features for training and testing. The backbone is pre-trained on FER+; you can find the pre-trained model in the assets directory (see the sketch after this list).
  • Demo of magnified images:
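
A minimal sketch of the feature-extraction step referenced above, using the early stages of a torchvision ResNet-18; the cut-off point and the weight file name are assumptions rather than the repository's exact code (see extract_features/resnet_features.py for the actual implementation):

import torch
import torch.nn as nn
from torchvision.models import resnet18

# Build a ResNet-18 backbone; loading the FER+ pre-trained weights from the
# assets directory is shown with an assumed file name.
backbone = resnet18(pretrained=False)
# state = torch.load("assets/resnet18_ferplus.pth")   # hypothetical file name
# backbone.load_state_dict(state, strict=False)

# Keep only the stem and the first two residual stages as the feature extractor.
feature_extractor = nn.Sequential(
    backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
    backbone.layer1, backbone.layer2,
)
feature_extractor.eval()

with torch.no_grad():
    frame = torch.randn(1, 3, 224, 224)    # one magnified ME frame (dummy input)
    features = feature_extractor(frame)    # shape: (1, 128, 28, 28)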

Magnification

We provide a pre-trained magnification model here. Go into the mag_imple directory and simply run:

python mag.py

Then you can see the magnified results in the output directory.

Citation

If you find this code useful for your research, please consider citing the following papers:

@inproceedings{wei2022novel,
  title={A Novel Micro-Expression Recognition Approach Using Attention-Based Magnification-Adaptive Networks},
  author={Wei, Mengting and Zheng, Wenming and Zong, Yuan and Jiang, Xingxun and Lu, Cheng and Liu, Jiateng},
  booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={2420--2424},
  year={2022},
  organization={IEEE}
}
@inproceedings{oh2018learning,
  title={Learning-based video motion magnification},
  author={Oh, Tae-Hyun and Jaroensri, Ronnachai and Kim, Changil and Elgharib, Mohamed and Durand, Fr{\'e}do and Freeman, William T and Matusik, Wojciech},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={633--648},
  year={2018}
}

If you have any questions about the code, feel free to contact mengting.wei@oulu.fi.
