
Gated Convolutional Channel-wise Self-Attention Network (GC2SA-Net)

Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics

Published in IEEE Transactions on Information Forensics and Security (DOI: 10.1109/TIFS.2024.3361216)
Paper Link


Network Architecture

Pre-requisites:

  • Environment: Check the requirements.txt file, which was generated with the pip list --format=freeze > requirements.txt command. The file was lightly filtered by hand, so it may still contain redundant packages.
  • Dataset: Download the dataset (training and testing) from this link. The password is conditional_biometrics. Ensure that the datasets are located in the data directory. Configure the datasets_config.py file to point to this data directory by changing the main path (see the sketch after this list).
  • Pre-trained models: (Optional) The pre-trained MobileFaceNet model for fine-tuning or testing can be downloaded from this link.
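
As a minimal sketch of the edit described above, the snippet below shows a main_path dictionary in configs/datasets_config.py whose 'main' entry points to the downloaded data directory; the actual keys and structure in the repository may differ slightly.

```python
# configs/datasets_config.py (illustrative sketch only; actual keys may differ)
# Point the 'main' entry of the main_path dictionary to the downloaded dataset.
main_path = {
    'main': '/home/gc2sa_net/data',  # path to the data directory, without a trailing slash
}
```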

Training:

  1. Change the hyperparameters accordingly in the params.py file. The values set there are the defaults; alternatively, they can be overridden when running the Python file (see the sketch after this list).
  2. Run python training/main.py. The training should start immediately.
  3. Testing will be performed automatically after training is done, but it is possible to perform testing on an already trained model (see next section).
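
As a rough illustration of keeping defaults in a config file while allowing command-line overrides, here is a minimal argparse sketch. The parameter names (lr, batch_size, epochs) and their values are hypothetical; the real configs/params.py may expose different options and defaults.

```python
# Hypothetical illustration of file-level defaults with command-line overrides;
# not the actual configs/params.py.
import argparse

DEFAULTS = {"lr": 0.001, "batch_size": 64, "epochs": 30}  # assumed values

def parse_args():
    parser = argparse.ArgumentParser(description="GC2SA-Net training (sketch)")
    parser.add_argument("--lr", type=float, default=DEFAULTS["lr"])
    parser.add_argument("--batch_size", type=int, default=DEFAULTS["batch_size"])
    parser.add_argument("--epochs", type=int, default=DEFAULTS["epochs"])
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(f"lr={args.lr}, batch_size={args.batch_size}, epochs={args.epochs}")
```

Under this scheme, running the script with no arguments uses the file defaults, while e.g. --lr 0.0005 overrides a single value at launch time.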

Testing:

  1. Based on the (pre-)trained models in the models(/pretrained) directory, load the correct model and its architecture (in the network directory) using the load_model.py file. Change the file accordingly in case of different layer names, etc.
  2. Evaluation:
    • Identification / Cumulative Matching Characteristic (CMC) curve: Run cmc_eval_identification.py. Based on the generated .pt files in the data directory, run plot_cmc_roc_sota.ipynb to generate the CMC graph (a generic sketch of the rank-1 IR / CMC computation follows this list).
    • Verification / Receiver Operating Characteristic (ROC) curve: Run roc_eval_verification.py. Based on the generated .pt files in the data directory, run plot_cmc_roc_sota.ipynb to generate the ROC graph.
  3. Visualization:
    • Gradient-weighted Class Activation Mapping (Grad-CAM): Run grad_cam.py on selected images stored in a directory. The resulting images will be generated in the graphs directory.
    • t-distributed Stochastic Neighbor Embedding (t-SNE): Run the plot_tSNE.ipynb notebook. Based on the included text file in data/visualization/tsne/img_lists, 10 toy identities are selected to plot the t-SNE points, which will be generated in the graphs directory.
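
For reference, the sketch below shows one generic way to compute a rank-1 identification rate and CMC curve from gallery/probe embeddings using cosine similarity. It is illustrative only and does not reproduce the repository's cmc_eval_identification.py.

```python
# Generic CMC / rank-1 IR computation from embeddings (illustrative only).
import torch
import torch.nn.functional as F

def cmc_curve(gallery_feats, gallery_labels, probe_feats, probe_labels, max_rank=10):
    """Compute a CMC curve from L2-normalised embeddings with cosine similarity."""
    gallery = F.normalize(gallery_feats, dim=1)
    probe = F.normalize(probe_feats, dim=1)
    sims = probe @ gallery.t()                       # (num_probe, num_gallery) similarities
    ranking = sims.argsort(dim=1, descending=True)   # gallery indices, best match first
    ranked_labels = gallery_labels[ranking]          # gallery labels in ranked order
    hits = ranked_labels.eq(probe_labels.unsqueeze(1))
    # A probe counts as matched at rank r if its first correct match is within the top r.
    cmc = hits.float().cumsum(dim=1).clamp(max=1)[:, :max_rank].mean(dim=0)
    return cmc                                       # cmc[0] is the rank-1 identification rate

# Toy usage with random 512-D embeddings, one gallery/probe sample per identity
g, gl = torch.randn(100, 512), torch.arange(100)
p, pl = torch.randn(100, 512), torch.arange(100)
print(f"Rank-1 IR: {cmc_curve(g, gl, p, pl)[0].item():.4f}")
```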

Comparison with State-of-the-Art (SOTA) models

| Method | Intra-Modal Rank-1 IR (%) (Periocular) | Intra-Modal EER (%) (Periocular) | Inter-Modal Rank-1 IR (%) (Periocular Gallery) | Inter-Modal EER (%) (Periocular-Face) |
|---|---|---|---|---|
| PF-GLSR (Paper Link, Pre-trained Weights) | 79.03 | 15.56 | - | - |
| CMB-Net (Paper Link, Pre-trained Weights) | 86.96 | 9.62 | 77.26 | 9.80 |
| HA-ViT (Paper Link, Pre-trained Weights) | 77.75 | 11.39 | 64.72 | 13.14 |
| GC2SA-Net (Paper Link, Pre-trained Weights) | 93.63 | 6.39 | 90.77 | 6.50 |
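
For reference, the EER reported above is the operating point where the false accept rate equals the false reject rate. The sketch below shows one generic way of estimating it from genuine and impostor similarity scores; it is illustrative only and is not the repository's roc_eval_verification.py.

```python
# Generic Equal Error Rate (EER) estimation from similarity scores (illustrative only).
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Return the EER, i.e. the point where FAR and FRR are (approximately) equal."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])  # false accept rate
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])    # false reject rate
    idx = np.argmin(np.abs(far - frr))   # threshold where FAR and FRR are closest
    return (far[idx] + frr[idx]) / 2.0

# Toy example with synthetic genuine/impostor similarity scores
rng = np.random.default_rng(0)
gen = rng.normal(0.7, 0.1, 1000)   # genuine (same-identity) scores
imp = rng.normal(0.3, 0.1, 1000)   # impostor (different-identity) scores
print(f"EER: {equal_error_rate(gen, imp):.4f}")
```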

The project directory is as follows:

├── configs: Dataset path configuration file and hyperparameters.
│   ├── datasets_config.py - Directory path for dataset files. Change 'main' in the 'main_path' dictionary to point to the dataset, e.g., /home/gc2sa_net/data (without a trailing slash).
│   └── params.py - Adjust hyperparameters and arguments in this file for training. 
├── data: Dataloader functions and preprocessing.
│   ├── [INSERT DATASET HERE.]
│   ├── The .pt files to plot the CMC and ROC graphs will be generated in this directory.
│   └── data_loader.py - Generates the training and testing PyTorch dataloaders. Adjust the augmentations, etc. in this file. The batch size is also determined here, based on the values set in params.py (a generic dataloader sketch follows this tree).
├── eval: Evaluation metrics (identification and verification). Also contains CMC and ROC evaluations.
│   ├── cmc_eval_identification.py - Evaluates Rank-1 Identification Rate (IR) and generates Cumulative Matching Characteristic (CMC) curve, which are saved as .pt files in data directory. Use these .pt files to generate CMC curves.
│   ├── grad_cam.py - Plot GradCAM images. For usage, store all images in a single folder, and change the path accordingly. More details of usage in the file's main function.
│   ├── plot_cmc_roc_sota.ipynb - Notebook to plot CMC and ROC curves side-by-side, based on the .pt files generated by cmc_eval_identification.py and roc_eval_verification.py. Graphs are generated in the graphs directory.
│   ├── plot_tSNE.ipynb - Notebook to plot t-SNE images based on the 10 identities of periocular-face toy examples. An example text file (which corresponds to the image paths) is in data/visualization/tsne/img_lists.
│   └── roc_eval_verification.py - Evaluates Verification Equal Error Rate (EER) and generates Receiver Operating Characteristic (ROC) curve, which are saved as .pt files in data directory. Use these .pt files to generate ROC curves.
├── graphs: Directory where graphs and visualization evaluations are generated.
│   └── CMC and ROC curve files are generated in this directory, along with some evaluation images.
├── logs: Stores logs named by the 'Method' and 'Remarks' values from the config files, together with a timestamp.
│   └── Logs will be generated in this directory. Each log folder will contain backups of training files with network files and hyperparameters used.
├── models: Directory to store pretrained models, and also where models are generated.
│   ├── [INSERT PRE-TRAINED MODELS HERE.]
│   ├── The base MobileFaceNet for fine-tuning GC2SA-Net can be downloaded from this link.
│   └── Trained models will be generated in this directory.
├── network: Contains loss functions and network-related files.
│   ├── gc2sa_net.py - Architecture file for GC2SA-Net.
│   ├── load_model.py - Loads pre-trained weights based on a given model.
│   └── logits.py - Contains some loss functions that are used.
└── training: Main files for training.
    ├── main.py - Main file to run for training. Settings and hyperparameters are based on the files in configs directory.
    └── train.py - Training file that is called from main.py. Gets batch of dataloader and contains criterion for loss back-propagation.
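
As a rough reference for the data_loader.py entry above, here is a generic PyTorch dataloader sketch. The transforms, the assumed 112x112 input size, and the ImageFolder layout are illustrative and may not match the repository's actual dataloader.

```python
# Generic dataloader sketch (illustrative only; see data/data_loader.py for the real one).
import torch
from torchvision import datasets, transforms

def make_loader(root, batch_size=64, train=True):
    # Augmentations and batch size would normally come from configs/params.py.
    tfms = [transforms.Resize((112, 112))]          # assumed input size
    if train:
        tfms.append(transforms.RandomHorizontalFlip())
    tfms.append(transforms.ToTensor())
    dataset = datasets.ImageFolder(root, transform=transforms.Compose(tfms))
    return torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                       shuffle=train, num_workers=4)

# Example usage: train_loader = make_loader('data/train', batch_size=64, train=True)
```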

Citation for this work:

@ARTICLE{gc2sa_net,
  author={Ng, Tiong-Sik and Chai, Jacky Chen Long and Low, Cheng-Yaw and Beng Jin Teoh, Andrew},
  journal={IEEE Transactions on Information Forensics and Security}, 
  title={Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics}, 
  year={2024},
  volume={19},
  number={},
  pages={3251-3264},
  keywords={Face recognition;Faces;Biometrics (access control);Feature extraction;Biological system modeling;Self-supervised learning;Correlation;Biometrics;face;periocular;channel-wise self-attention;modality alignment loss;intra-modal matching;inter-modal matching},
  doi={10.1109/TIFS.2024.3361216}
}
