This project explores and evaluates fine-tuning techniques for Vietnamese sentiment analysis.
- Gain a comprehensive understanding of fundamental fine-tuning techniques, including Full Model Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT) methods such as Adapter, LoRA, and QLoRA.
- Learn to utilize multiple GPUs for fine-tuning large-scale models (e.g., ViT5).
- Achieve results suitable for submission to an international scientific conference.
The following datasets are used in this project:
The project utilizes the following Vietnamese pre-trained language models:
- Full Model Fine-tuning
- LoRA (Low-Rank Adaptation)
- QLoRA (Quantized LoRA)
- Adapter
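The PEFT methods above share one idea: freeze the pre-trained weights and train only a small set of added parameters. Below is a minimal NumPy sketch of the LoRA update, where a frozen weight `W` is augmented by a trainable low-rank product `(alpha/r) * B @ A`. The dimensions are illustrative only; in this project the `peft` library would apply LoRA to layers of models such as ViT5 rather than to raw matrices.

```python
import numpy as np

# Illustrative dimensions (a 768-wide hidden layer, rank-8 adaptation).
d, r, alpha = 768, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen pre-trained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor A
B = np.zeros((d, r))                     # trainable factor B, zero-initialized
                                         # so the update starts at zero
scaling = alpha / r

def lora_forward(x):
    # base projection plus the low-rank update (alpha/r) * B @ A
    return x @ W.T + (x @ A.T @ B.T) * scaling

x = rng.standard_normal((2, d))
frozen = W.size                 # 589,824 frozen parameters
trainable = A.size + B.size     # 12,288 trainable parameters (~2% of frozen)
print(frozen, trainable)
```

Because `B` starts at zero, the adapted layer initially reproduces the frozen model exactly; training then moves only the 12,288 low-rank parameters instead of all 589,824 weights.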
The following libraries need to be installed:
pip install pandas
pip install transformers datasets adapters peft bitsandbytes
pip install lightning torchmetrics
pip install underthesea
pip install gputil
pip install matplotlib seaborn
All scripts include an argument parser, so you can run any Python script with the `--help` flag to see usage instructions, as in the example below.
python vsfc_full_fine_tune.py --help
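For illustration, a parser of the kind these scripts expose might look like the sketch below. The flag names and defaults here are hypothetical; run `--help` on each script to see its actual options.

```python
import argparse

def build_parser():
    # Hypothetical flags for illustration only; they do not necessarily
    # match the real scripts in this repository.
    p = argparse.ArgumentParser(
        description="Fine-tune a Vietnamese pre-trained model for sentiment analysis"
    )
    p.add_argument("--model-name", default="VietAI/vit5-base",
                   help="Hugging Face model identifier (assumed example)")
    p.add_argument("--epochs", type=int, default=3,
                   help="number of training epochs")
    p.add_argument("--lr", type=float, default=2e-5,
                   help="learning rate")
    p.add_argument("--batch-size", type=int, default=16,
                   help="per-device batch size")
    return p

# Override one flag; the rest keep their defaults.
args = build_parser().parse_args(["--epochs", "5"])
print(args.epochs)  # 5
```

Calling the script with no arguments would use the defaults, and `--help` prints the generated usage text.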