- Environment: Check the `requirements.txt` file, which was generated with `pip list --format=freeze > requirements.txt`. The file was lightly filtered by hand, so some redundant packages may remain.
- Dataset: Download the dataset (training and testing) from this link. The password is `conditional_biometrics`. Ensure that the datasets are located in the `data` directory, then configure `datasets_config.py` to point to this data directory by changing the main path.
- Pre-trained models (optional): The pre-trained MobileFaceNet model for fine-tuning or testing can be downloaded from this link.
- Change hyperparameters as needed in the `params.py` file. The values set there are the defaults, but they can alternatively be overridden when running the Python file.
- Run `python training/main.py`. Training should start immediately.
- Testing is performed automatically after training finishes, but it is also possible to test an already trained model (see the next section).
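Since the defaults in `params.py` can reportedly be overridden at the command line, the pattern presumably looks something like the following hedged sketch. The hyperparameter names (`lr`, `batch_size`, `epochs`) are illustrative assumptions, not the repo's actual argument names:

```python
# Hypothetical sketch (not the repo's actual main.py): expose config-file
# defaults as command-line overrides via argparse.
import argparse

# Assumed defaults, standing in for the values set in configs/params.py.
DEFAULTS = {"lr": 0.001, "batch_size": 64, "epochs": 30}

def parse_args(argv=None):
    """Build a parser whose defaults mirror the config module, so running
    with no flags reproduces the configured training setup."""
    parser = argparse.ArgumentParser(description="Training entry point")
    for name, value in DEFAULTS.items():
        parser.add_argument(f"--{name}", type=type(value), default=value)
    return parser.parse_args(argv)
```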
- Based on the (pre-)trained models in the `models` (or `models/pretrained`) directory, load the correct model and its architecture (in the `network` directory) using the `load_model.py` file. Change the file accordingly in case of different layer names, etc.
- Evaluation:
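"Change the file accordingly in case of different layer names" usually amounts to filtering the checkpoint so only parameters matching the target model are loaded. The following is a hedged sketch of that pattern, not the repo's actual `load_model.py`; for illustration, tensors are represented by their shape tuples rather than real tensors:

```python
# Hedged sketch: keep only checkpoint entries whose key exists in the target
# model with an identical shape; the rest need renaming or skipping.
# (Tensors are stand-ins: name -> shape tuple.)
def filter_checkpoint(ckpt, model):
    """Return (loadable, mismatched): entries safe to load as-is,
    and the checkpoint keys that do not line up with the model."""
    loadable = {k: v for k, v in ckpt.items() if model.get(k) == v}
    mismatched = sorted(k for k in ckpt if k not in loadable)
    return loadable, mismatched
```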
    - Identification / Cumulative Matching Characteristic (CMC) curve: Run `cmc_eval_identification.py`. Based on the generated `.pt` files in the `data` directory, run `plot_cmc_roc_sota.ipynb` to generate the CMC graph.
    - Verification / Receiver Operating Characteristic (ROC) curve: Run `roc_eval_verification.py`. Based on the generated `.pt` files in the `data` directory, run `plot_cmc_roc_sota.ipynb` to generate the ROC graph.
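For intuition on what the identification evaluation computes, here is a minimal sketch of a CMC curve from a probe-versus-gallery similarity matrix. This is an illustration of the metric, not the repo's `cmc_eval_identification.py`:

```python
# Illustrative CMC computation: scores[i][j] is the similarity between probe i
# and gallery entry j; true_idx[i] is probe i's ground-truth gallery index.
def cmc_curve(scores, true_idx, max_rank=5):
    """Return CMC values for ranks 1..max_rank: the fraction of probes whose
    true match appears within the top-k ranked gallery entries."""
    hits = [0] * max_rank
    for row, t in zip(scores, true_idx):
        # Rank gallery entries by descending similarity.
        order = sorted(range(len(row)), key=lambda j: -row[j])
        rank = order.index(t)  # 0-based rank of the true match
        for k in range(rank, max_rank):
            hits[k] += 1
    n = len(scores)
    return [h / n for h in hits]
```

Rank-1 on this curve corresponds to the Rank-1 Identification Rate (IR) reported below.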
- Visualization:
    - Gradient-weighted Class Activation Mapping (Grad-CAM): Run `grad_cam.py`, based on the selected images stored in a directory. The images will be generated in the `graphs` directory.
    - t-distributed stochastic neighbor embedding (t-SNE): Run the Jupyter notebook accordingly. Based on the included text file in `data/visualization/tsne/img_lists`, 10 toy identities are selected to plot the t-SNE points, which will be generated in the `graphs` directory.
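Plotting t-SNE points per identity requires grouping the listed image paths by identity first. A small hedged sketch of that step, assuming (this is an assumption, not the repo's documented format) one relative path per line with the identity as the parent folder name:

```python
# Hypothetical helper: group image paths from an img_lists text file by
# identity, so each identity gets one colour in the t-SNE plot.
def group_by_identity(lines):
    """Map identity name -> list of image paths, skipping blank lines.
    Assumes the parent folder in each path is the identity label."""
    groups = {}
    for line in lines:
        path = line.strip()
        if not path:
            continue
        identity = path.split("/")[-2]  # assumed: parent folder = identity
        groups.setdefault(identity, []).append(path)
    return groups
```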
| Method | Intra-Modal Rank-1 IR (%) (Periocular) | Intra-Modal EER (%) (Periocular) | Inter-Modal Rank-1 IR (%) (Periocular Gallery) | Inter-Modal EER (%) (Periocular-Face) |
|---|---|---|---|---|
| PF-GLSR | 79.03 | 15.56 | - | - |
| | 86.96 | 9.62 | 77.26 | 9.80 |
| | 77.75 | 11.39 | 64.72 | 13.14 |
| | 93.63 | 6.39 | 90.77 | 6.50 |
├── configs: Dataset path configuration file and hyperparameters.
│   ├── datasets_config.py - Directory paths for dataset files. Change 'main' in the 'main_path' dictionary to point to the dataset, e.g., `/home/gc2sa_net/data` (without a trailing slash).
│   └── params.py - Adjust hyperparameters and arguments in this file for training.
├── data: Dataloader functions and preprocessing.
│   ├── [INSERT DATASET HERE.]
│   ├── The `.pt` files used to plot the CMC and ROC graphs will be generated in this directory.
│   └── data_loader.py - Generates the training and testing PyTorch dataloaders. Adjust the augmentations, etc., in this file. The batch size is also determined here, based on the values set in `params.py`.
├── eval: Evaluation metrics (identification and verification). Also contains CMC and ROC evaluations.
│   ├── cmc_eval_identification.py - Evaluates the Rank-1 Identification Rate (IR) and generates the Cumulative Matching Characteristic (CMC) curve, which is saved as `.pt` files in the `data` directory. Use these `.pt` files to generate CMC curves.
│   ├── grad_cam.py - Plots Grad-CAM images. For usage, store all images in a single folder and change the path accordingly. More usage details are in the file's main function.
│   ├── plot_cmc_roc_sota.ipynb - Notebook to plot CMC and ROC curves side by side, based on the `.pt` files generated by `cmc_eval_identification.py` and `roc_eval_verification.py`. The graph is generated in the `graphs` directory.
│   ├── plot_tSNE.ipynb - Notebook to plot t-SNE images based on the 10 identities of the periocular-face toy examples. Example text files (which correlate to the image paths) are in `data/visualization/tsne/img_lists`.
│   └── roc_eval_verification.py - Evaluates the Verification Equal Error Rate (EER) and generates the Receiver Operating Characteristic (ROC) curve, which is saved as `.pt` files in the `data` directory. Use these `.pt` files to generate ROC curves.
├── graphs: Directory where graphs and visualization evaluations are generated.
│   └── The CMC and ROC curve files are generated in this directory, along with some evaluation images.
├── logs: Stores logs named by 'Method' and 'Remarks' from the config files, with timestamps.
│   └── Logs will be generated in this directory. Each log folder contains backups of the training files, along with the network files and hyperparameters used.
├── models: Directory to store pre-trained models, and where trained models are generated.
│   ├── [INSERT PRE-TRAINED MODELS HERE.]
│   ├── The base MobileFaceNet for fine-tuning GC2SA-Net can be downloaded from this link.
│   └── Trained models will be generated in this directory.
├── network: Contains loss functions and network-related files.
│   ├── gc2sa_net.py - Architecture file for GC2SA-Net.
│   ├── load_model.py - Loads pre-trained weights based on a given model.
│   └── logits.py - Contains some of the loss functions used.
└── training: Main files for training.
    ├── main.py - Main file to run for training. Settings and hyperparameters are based on the files in the `configs` directory.
    └── train.py - Training file called from `main.py`. Gets batches from the dataloader and contains the criterion for loss back-propagation.
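The verification metric reported above, the Equal Error Rate (EER), is the operating point on the ROC curve where the false accept rate equals the false reject rate. A hedged, self-contained sketch of that computation (a threshold sweep for illustration, not the repo's `roc_eval_verification.py`):

```python
# Illustrative EER computation from genuine (matching-pair) and impostor
# (non-matching-pair) similarity scores, via a simple threshold sweep.
def equal_error_rate(genuine, impostor, steps=1000):
    """Sweep thresholds over the score range and return the EER estimate:
    (FAR + FRR) / 2 at the threshold where |FAR - FRR| is smallest."""
    lo, hi = min(genuine + impostor), max(genuine + impostor)
    best_gap, eer = float("inf"), None
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

With perfectly separated score distributions the sweep finds a threshold with FAR = FRR = 0, i.e., an EER of 0.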
@ARTICLE{gc2sa_net,
author={Ng, Tiong-Sik and Chai, Jacky Chen Long and Low, Cheng-Yaw and Beng Jin Teoh, Andrew},
journal={IEEE Transactions on Information Forensics and Security},
title={Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics},
year={2024},
volume={19},
number={},
pages={3251-3264},
keywords={Face recognition;Faces;Biometrics (access control);Feature extraction;Biological system modeling;Self-supervised learning;Correlation;Biometrics;face;periocular;channel-wise self-attention;modality alignment loss;intra-modal matching;inter-modal matching},
doi={10.1109/TIFS.2024.3361216}
}