• 🔥 The code of AutoMetrics has been released. It extends the capabilities of T2I-Metrics.
# Git clone the repo
git clone https://github.com/QuanjianSong/AutoMetrics.git
# Install with requirements.txt
conda create -n AutoMetrics python=3.8
conda activate AutoMetrics
pip install -r requirements.txt
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116 # Choose a version that suits your GPU
pip install scipy
pip install git+https://github.com/openai/CLIP.git
# Or install with environment.yaml
conda env create -f environment.yaml
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116 # Choose a version that suits your GPU
pip install scipy
pip install git+https://github.com/openai/CLIP.git
You need to download the inception_v3_google.pth, pt_inception.pth, and ViT-B-32.pt weight files and place them in the checkpoints folder. For convenience, we have collected them at the following link.
Baidu cloud disk link, extraction code: fpfp
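After downloading, you can quickly verify that the expected weight files are in place. The snippet below is a minimal sketch that assumes the checkpoints folder sits in the repository root; adjust the path if your layout differs.

```python
import os

# Assumed location of the downloaded weight files (repository root / checkpoints).
CHECKPOINT_DIR = "checkpoints"
REQUIRED_WEIGHTS = [
    "inception_v3_google.pth",  # Inception weights (commonly used for FID/IS-style metrics)
    "pt_inception.pth",
    "ViT-B-32.pt",              # CLIP ViT-B/32 weights
]

missing = [name for name in REQUIRED_WEIGHTS
           if not os.path.isfile(os.path.join(CHECKPOINT_DIR, name))]
if missing:
    print("Missing weight files:", ", ".join(missing))
else:
    print("All required weight files found in", CHECKPOINT_DIR)
```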
Before starting the evaluation, you need to prepare the corresponding jsonl files in advance. Different evaluation metrics read different types of jsonl files, which generally fall into three categories: image-prompt pairs, image-image pairs, and single images. Each line in a jsonl file should include the appropriate file paths. We provide example files in the ./examples directory to help you construct your own.
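If you prefer to build the jsonl files programmatically, the sketch below shows one way to write the three kinds of entries. The field names (img_path, prompt, gt_path) and file paths are illustrative assumptions; follow the files in ./examples for the exact keys AutoMetrics expects.

```python
import json

# Hypothetical field names -- check ./examples for the exact schema AutoMetrics expects.
image_prompt_pairs = [
    {"img_path": "outputs/dog_001.png", "prompt": "a photo of a corgi on the beach"},
]
image_image_pairs = [
    {"img_path": "outputs/dog_001.png", "gt_path": "references/dog_001.png"},
]
single_images = [
    {"img_path": "outputs/dog_001.png"},
]

def write_jsonl(path, records):
    """Write one JSON object per line, as expected by jsonl readers."""
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

write_jsonl("image_prompt.jsonl", image_prompt_pairs)
write_jsonl("image_image.jsonl", image_image_pairs)
write_jsonl("single_image.jsonl", single_images)
```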
We provide simple examples in main.py to quickly get started with metric evaluation.
python main.py
Different evaluation metrics read different jsonl files; corresponding examples are provided in the previous step.
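To illustrate what an image-prompt metric does under the hood, here is a standalone sketch that computes a CLIP similarity score with the openai/CLIP package installed above. This is not AutoMetrics' own implementation; the jsonl field names and paths are assumptions, and main.py remains the reference entry point.

```python
import json

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Load CLIP ViT-B/32; download_root points at the local weights (assumed layout).
model, preprocess = clip.load("ViT-B/32", device=device, download_root="checkpoints")

scores = []
# image_prompt.jsonl: one JSON object per line with (assumed) img_path / prompt fields.
with open("image_prompt.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        image = preprocess(Image.open(record["img_path"])).unsqueeze(0).to(device)
        text = clip.tokenize([record["prompt"]]).to(device)
        with torch.no_grad():
            image_feat = model.encode_image(image)
            text_feat = model.encode_text(text)
        # Cosine similarity between normalized image and text embeddings.
        image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        scores.append((image_feat @ text_feat.T).item())

print("Mean CLIP score:", sum(scores) / len(scores))
```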