Official PyTorch repository for the papers:
- Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals, ICASSPW 2023. [IEEEXplore][ArXiv Preprint]
- Hierarchical Hypercomplex Network for Multimodal Emotion Recognition, MLSP 2024. [IEEEXplore][ArXiv Preprint]
- PHemoNet: A Multimodal Network for Physiological Signals, RTSI 2024. [IEEEXplore][ArXiv Preprint]
Authors:
Eleonora Lopez, Eleonora Chiarantano, Eleonora Grassucci, Aurelio Uncini, and Danilo Comminiello from ISPAMM Lab 🏘️
- [2025.05.15] Released pretrained weights 💣
- [2025.05.14] Updated code with H2 and PHemoNet models from MLSP and RTSI papers! 👩🏻💻
- [2024.07] Extension papers have been accepted at MLSP and RTSI 2024!
- [2023.11.11] Code is available for HyperFuseNet! 👩🏼💻
- [2023.04.14] The paper has been accepted for presentation at ICASSP workshop 2023 🎉!
Model | Paper | Arousal F1 | Arousal Acc (%) | Valence F1 | Valence Acc (%) | Highlights | Weights |
---|---|---|---|---|---|---|---|
🥇 H2 | MLSP 2024 [IEEEXplore][ArXiv] | 0.557 | 56.91 | 0.685 | 67.87 | Hierarchical model with PHC-based encoders in modality-specific domains, achieves best performance | Arousal - Valence |
🥈 PHemoNet | RTSI 2024 [IEEEXplore][ArXiv] | 0.401 | 42.54 | 0.505 | 50.77 | PHM-based encoders with modality-specific domains and a revised hypercomplex fusion module | Arousal - Valence |
🥉 HyperFuseNet | ICASSPW 2023 [IEEEXplore][ArXiv] | 0.397 | 41.56 | 0.436 | 44.30 | Introduces hypercomplex fusion module | Arousal - Valence |
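At the core of all three architectures are parameterized hypercomplex multiplication (PHM) layers, whose weight matrix is assembled as a sum of Kronecker products. The snippet below is a minimal, self-contained sketch of such a layer for illustration only; the class name, initialization, and shapes are assumptions and do not reflect the repository's actual implementation.

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Minimal sketch of a parameterized hypercomplex multiplication (PHM) layer.

    The weight is assembled as W = sum_i A_i ⊗ F_i, so with hypercomplex
    dimension n the layer uses roughly 1/n of the parameters of an
    equivalent nn.Linear.
    """
    def __init__(self, n: int, in_features: int, out_features: int):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        # n small "algebra" matrices (n x n) and n weight blocks; the 0.1
        # scale is an arbitrary illustrative initialization.
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)
        self.F = nn.Parameter(torch.randn(n, out_features // n, in_features // n) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Build the full (out_features x in_features) weight from Kronecker products.
        W = sum(torch.kron(self.A[i], self.F[i]) for i in range(self.n))
        return nn.functional.linear(x, W, self.bias)
```

Roughly speaking, the hypercomplex dimension n can be chosen per modality (e.g., to match a signal's channel count or the number of fused branches), which is the idea behind the modality-specific domains mentioned in the table.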
Install the requirements:
pip install -r requirements.txt
1. Download the data from the official website.
2. Preprocess the data:
   python data/preprocessing.py
   - This will create a folder for each subject with CSV files containing the preprocessed data and save everything inside `args.save_path`.
3. Create torch files with augmented and split data:
   python data/create_dataset.py
   - This performs data splitting and augmentation from the preprocessed data in step 2.
   - You can specify which label to consider by setting the parameter `label_kind` to either `Arsl` or `Vlnc`.
   - The data is saved as .pt files which are used for training (a loading sketch follows this list).
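After step 3 you should have train/test .pt files for the chosen label. The sketch below only shows how they might be inspected; the exact file names and the structure stored inside them depend on data/create_dataset.py, so treat both as assumptions.

```python
import torch

# File names are illustrative; use the paths written by data/create_dataset.py.
train_data = torch.load("save_path/arsl_train.pt")
test_data = torch.load("save_path/arsl_test.pt")

# Inspect what was saved (e.g., a tuple of tensors, a dict, or a Dataset object).
print(type(train_data), type(test_data))
if isinstance(train_data, (tuple, list)):
    for item in train_data:
        print(getattr(item, "shape", type(item)))
```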
To reproduce the results, use the corresponding configuration file for each model and task:
- `configs/h2.yml` → H2 model
- `configs/phemonet.yml` → PHemoNet
- `configs/hyperfusenet_arousal.yml` → HyperFuseNet for arousal
- `configs/hyperfusenet_valence.yml` → HyperFuseNet for valence
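The configs are plain YAML files, so they can be inspected or tweaked programmatically as sketched below; the hyperparameter keys printed here are placeholders, check the actual files for the real ones.

```python
import yaml

# Any file under configs/ can be loaded the same way.
with open("configs/h2.yml") as f:
    cfg = yaml.safe_load(f)

# Keys below are examples only; see the actual config for the real names.
print(cfg.get("lr"), cfg.get("batch_size"), cfg.get("epochs"))
```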
Run training with:
python main.py --train_file_path /path/to/arsl_or_vlnc_train.pt --test_file_path /path/to/arsl_or_vlnc_test.pt --config configs/config.yml
To run a hyperparameter sweep (used in the HyperFuseNet paper): python sweep.py
Experiments are tracked directly on Weights & Biases.
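If you want to log additional runs in the same way, the standard Weights & Biases pattern is sketched below; the project name and metric keys are placeholders, not the ones hard-coded in main.py or sweep.py.

```python
import wandb

# Project name and config values are placeholders.
run = wandb.init(project="hypercomplex-emotion-recognition", config={"lr": 1e-4})

for epoch in range(10):
    # ... a training step would go here ...
    train_loss = 0.0  # replace with the real loss value
    wandb.log({"epoch": epoch, "train/loss": train_loss})

run.finish()
```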
Please cite our works if you found this repo useful 🫶
- H2 model:
@inproceedings{lopez2024hierarchical,
title={Hierarchical hypercomplex network for multimodal emotion recognition},
author={Lopez, Eleonora and Uncini, Aurelio and Comminiello, Danilo},
booktitle={2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP)},
pages={1--6},
year={2024},
organization={IEEE}
}
- PHemoNet:
@inproceedings{lopez2024phemonet,
title={PHemoNet: A Multimodal Network for Physiological Signals},
author={Lopez, Eleonora and Uncini, Aurelio and Comminiello, Danilo},
booktitle={2024 IEEE 8th Forum on Research and Technologies for Society and Industry Innovation (RTSI)},
pages={260--264},
year={2024},
organization={IEEE}
}
- HyperFuseNet:
@inproceedings{lopez2023hypercomplex,
title={Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals},
author={Lopez, Eleonora and Chiarantano, Eleonora and Grassucci, Eleonora and Comminiello, Danilo},
booktitle={2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
pages={1--5},
year={2023},
organization={IEEE}
}
Check out:
- Multi-view hypercomplex learning for breast cancer screening, under review at TMI, 2022 [Paper][GitHub]
- PHNNs: Lightweight neural networks via parameterized hypercomplex convolutions, IEEE Transactions on Neural Networks and Learning Systems, 2022 [Paper][GitHub].
- Hypercomplex Image-to-Image Translation, IJCNN, 2022 [Paper][GitHub]