
YOLO-FaceV2

Introduction

YOLO-FaceV2: A Scale and Occlusion Aware Face Detector
Published in Pattern Recognition; a preprint is available on arXiv.

Framework Structure

Requirements

Create a Python virtual environment:

conda create -n {name} python=x.x

Activate the virtual environment:

conda activate {name}

Install PyTorch inside the environment:

pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html

Install the remaining Python dependencies:

pip install -r requirements.txt
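
You can optionally confirm that PyTorch and CUDA are set up before moving on. This is a generic check, not part of the repository:

# Quick sanity check for the PyTorch install (not part of this repository).
import torch

print("torch:", torch.__version__)              # expect 1.10.0+cu111 with the command above
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))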

Step-Through Example

Download the dataset:

bash data/scripts/get_widerface.sh

Dataset

Download the WIDER FACE dataset. Then convert it to YOLO format.

# You can modify convert.py and voc_label.py if needed.
python3 data/convert.py
python3 data/voc_label.py
cd data
python3 train2yolo.py /path/to/original/widerface/train [/path/to/save/widerface/train]
python3 val2yolo.py  /path/to/original/widerface [/path/to/save/widerface/val]
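
For reference, YOLO-format labels contain one line per face, class x_center y_center width height, all normalized to [0, 1]. The sketch below illustrates that conversion; the function and values are examples, not the repository's code.

# Illustrative conversion from pixel-coordinate boxes (x1, y1, x2, y2)
# to the normalized YOLO format "class cx cy w h"; not the repo's exact code.
def to_yolo(box, img_w, img_h, cls=0):
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2.0 / img_w   # box center x in [0, 1]
    cy = (y1 + y2) / 2.0 / img_h   # box center y in [0, 1]
    w = (x2 - x1) / img_w          # normalized width
    h = (y2 - y1) / img_h          # normalized height
    return f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

print(to_yolo((120, 80, 200, 180), img_w=640, img_h=480))
# -> "0 0.250000 0.270833 0.125000 0.208333"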

Pretrained Weights

Model         Easy   Medium  Hard   PreWeight
YOLO-FaceV2n  0.936  0.921   0.812  yolo-facev2n.pt
YOLO-FaceV2s  0.983  0.970   0.893  yolo-facev2s.pt
YOLO-FaceV2m  0.984  0.973   0.905  yolo-facev2m.pt
YOLO-FaceV2l  0.986  0.979   0.919  yolo-facev2l.pt
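
The weights follow the usual YOLOv5 checkpoint layout (a pickled dict with a model entry). As a rough sketch, assuming you run from the repository root so the pickled model classes resolve, you can inspect a downloaded file like this:

# Sketch: inspect a downloaded weight file. Assumes the YOLOv5 checkpoint layout
# and that this runs from the repository root; the file name below is an example.
import torch

ckpt = torch.load("yolo-facev2s.pt", map_location="cpu")
model = ckpt["model"].float().eval()   # weights are stored in FP16; cast for CPU use
n_params = sum(p.numel() for p in model.parameters())
print(type(model).__name__, "with", n_params, "parameters")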

Training

python train.py --weights preweight.pt \
                --data data/WIDER_FACE.yaml \
                --cfg models/yolov5s_v2_RFEM_MultiSEAM.yaml \
                --batch-size 32 \
                --epochs 250
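
train.py expects --data to point at a YOLOv5-style dataset config with a single face class. The sketch below prints an illustrative data/WIDER_FACE.yaml; the paths are placeholders for your converted dataset, and the repository's own file may differ:

# Illustrative YOLOv5-style dataset config for WIDER FACE (single "face" class).
# Paths are placeholders for the converted dataset; the repo's own yaml may differ.
import yaml

cfg = {
    "train": "/path/to/save/widerface/train",  # output of train2yolo.py
    "val": "/path/to/save/widerface/val",      # output of val2yolo.py
    "nc": 1,                                   # number of classes
    "names": ["face"],
}
print(yaml.safe_dump(cfg, sort_keys=False))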

Evaluate

python3 test_widerface.py --weights 'your test model' --img-size 640
  
cd widerface_evaluate/    
python evaluation.py --pred ./widerface_txt_x
The script reports AP on the Easy, Medium, and Hard subsets.

Visualize

Visualization of detection results on small-scale faces:

Heat-map visualization of the model's attention on human faces:

Finetune

See ultralytics/yolov5#607 (the YOLOv5 hyperparameter evolution tutorial).

# Single-GPU
python train.py --epochs 10 --data coco128.yaml --weights yolov5s.pt --cache --evolve

# Multi-GPU
for i in 0 1 2 3 4 5 6 7; do
  sleep $(expr 30 \* $i) &&  # 30-second delay (optional)
  echo 'Starting GPU '$i'...' &&
  nohup python train.py --epochs 10 --data coco128.yaml --weights yolov5s.pt --cache --device $i --evolve > evolve_gpu_$i.log &
done

# Multi-GPU bash-while (not recommended)
for i in 0 1 2 3 4 5 6 7; do
  sleep $(expr 30 \* $i) &&  # 30-second delay (optional)
  echo 'Starting GPU '$i'...' &&
  "$(while true; do nohup python train.py... --device $i --evolve 1 > evolve_gpu_$i.log; done)" &
done

Reference

https://github.com/ultralytics/yolov5
https://github.com/deepcam-cn/yolov5-face
https://github.com/open-mmlab/mmdetection
https://github.com/dongdonghy/repulsion_loss_pytorch

Cite

If you find this work helpful, please cite:

@article{YU2024110714,
  title = {YOLO-FaceV2: A scale and occlusion aware face detector},
  journal = {Pattern Recognition},
  pages = {110714},
  year = {2024},
  issn = {0031-3203},
  url = {https://www.sciencedirect.com/science/article/pii/S0031320324004655},
  author = {Ziping Yu and Hongbo Huang and Weijun Chen and Yongxin Su and Yahui Liu and Xiuying Wang},
  keywords = {Face detection, YOLO, Scale-aware, Occlusion, Imbalance problem},
}

Contact

The code is released under the MIT License. Business inquiries and professional support requests are welcome.
