Yannic Neuhaus, Matthias Hein
Tübingen AI Center - University of Tübingen
RePOPE Annotation Files | Requirements | arXiv | Citation
This repository contains a relabeling of the POPE benchmark. We also created DASH-B, a harder and less saturated object hallucination benchmark for VLMs.
We introduce RePOPE, a relabeling of the commonly used object hallucination benchmark COCO POPE, in which we correct wrong annotations and remove ambiguous ones. The imbalance between incorrect "Yes" labels and incorrect "No" labels (9.3% vs. 1.7%) has a significant effect on the F1 scores.
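To see why relabeling shifts F1, note that the same model predictions are scored against different ground truth: a prediction counted as a true positive under POPE can become a false positive under RePOPE. The sketch below illustrates this with hypothetical counts (not the paper's numbers):

# Sketch with made-up counts: how corrected labels change F1 for fixed predictions.
def f1(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical model scored against the original POPE labels:
print(f"POPE F1:   {f1(tp=200, fp=50, fn=50):.3f}")

# After relabeling, some former true positives are scored as false positives
# (and some negatives shift), so the identical predictions get a different F1:
print(f"RePOPE F1: {f1(tp=170, fp=80, fn=55):.3f}")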
We provide the corrected annotation files in the same format as the original POPE files:
annotations/coco_repope_random.json
annotations/coco_repope_popular.json
annotations/coco_repope_adversarial.json
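Since the files follow the original POPE format, existing evaluation pipelines should work unchanged. A minimal loading sketch, assuming the POPE JSON Lines layout (one object per line with question_id, image, text, and label fields; verify against the files in annotations/):

import json

def load_annotations(path: str) -> list[dict]:
    # Each non-empty line is one question/answer record.
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

annotations = load_annotations("annotations/coco_repope_random.json")
print(annotations[0])  # e.g. a dict with "question_id", "image", "text", "label"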
To reproduce the results with our code, set up the conda environment as follows:
conda create --name repope python=3.12
conda activate repope
conda install nvidia/label/cuda-12.1.0::cuda-nvcc
pip install -r requirements_pip.txt
pip install flash-attn
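After installation, a quick sanity check (a sketch, not part of the repo) confirms that PyTorch sees the GPU and that flash-attn built correctly:

import torch
import flash_attn  # raises ImportError if the wheel did not build

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("flash-attn:", flash_attn.__version__)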
Run the following command to evaluate a model on POPE and RePOPE; the supported models are listed here.
CUDA_VISIBLE_DEVICES=<GPU index> python src/evaluate.py --vlm_name <VLM name> --bs <batchsize> &
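For example, a concrete invocation might look like this (the model name below is hypothetical; substitute one from the supported-models list). The trailing & backgrounds the job, which is convenient for launching evaluations on several GPUs in parallel:

CUDA_VISIBLE_DEVICES=0 python src/evaluate.py --vlm_name llava-1.5-7b --bs 16 &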
@article{neuhaus2025repope,
title={RePOPE: Impact of Annotation Errors on the POPE Benchmark},
author={Neuhaus, Yannic and Hein, Matthias},
journal={arXiv preprint arXiv:2504.15707},
year={2025}
}