
RP-FSOD

Introduction

This repository contains the source code for our paper "Retentive Compensation and Personality Filtering for Few-Shot Remote Sensing Object Detection" by Jiashan Wu, Chunbo Lang, Gong Cheng, Xingxing Xie, and Junwei Han.

Quick Start

1. Check Requirements

  • Linux with Python >= 3.8
  • PyTorch >= 1.6 and a torchvision version that matches your PyTorch version
  • CUDA 10.1
  • GCC >= 4.9
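The environment requirements above can be sanity-checked before installing anything. The sketch below is a hypothetical helper, not part of this repository; it checks the Python and GCC versions (the PyTorch/CUDA checks are left out, since PyTorch is not installed until the next step).

```python
import re
import shutil
import subprocess
import sys


def python_ok(min_version=(3, 8)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= min_version


def gcc_ok(min_version=(4, 9)):
    """Parse `gcc -dumpversion` and compare it against the minimum version."""
    if shutil.which("gcc") is None:
        return False
    out = subprocess.run(
        ["gcc", "-dumpversion"], capture_output=True, text=True
    ).stdout.strip()
    # GCC >= 7 may print only the major version (e.g. "11"); default minor to 0.
    match = re.match(r"(\d+)(?:\.(\d+))?", out)
    if not match:
        return False
    major = int(match.group(1))
    minor = int(match.group(2) or 0)
    return (major, minor) >= min_version


if __name__ == "__main__":
    print("Python >= 3.8:", python_ok())
    print("GCC    >= 4.9:", gcc_ok())
```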

2. Build RP-FSOD

  • Clone Code
    git clone https://github.com/yomik-js/RP-FSOD.git
    cd RP-FSOD
    
  • Create a virtual environment (optional)
    conda create -n RP-FSOD python==3.8.0
    conda activate RP-FSOD
    
  • Install PyTorch 1.7.1 with CUDA 10.1
    pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html 
    
  • Install Detectron2
    python3 -m pip install detectron2==0.3 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.7/index.html
    
    • If you use a different version of PyTorch/CUDA, check the matching Detectron2 build on this page: Detectron2.
    • We have not had time to test more versions; if you run into problems with other versions, please let us know.
  • Install other requirements.
    python3 -m pip install -r requirements.txt
    

3. Prepare Data and Weights

  • Data Preparation
    • We evaluate our models on DIOR and NWPU VHR-10.v2. Put them under a datasets directory placed next to the project directory, as shown below:
        ...
        datasets
          | -- DIOR (JPEGImages/*.jpg, ImageSets/, Annotations/*.xml)
          | -- DIORsplit
          | -- NWPU
          | -- NWPUsplit
        RP-FSOD
        tools
        ...
      
  • Weights Preparation
    • We use ImageNet pretrained weights to initialize our model. Download them from here: GoogleDrive
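Before launching training, the dataset layout above can be verified with a small script. This is a hypothetical helper for illustration, not part of the repository; the directory names are taken from the layout shown above.

```python
from pathlib import Path

# Dataset sub-directories expected under datasets/, per the layout above.
EXPECTED = ["DIOR", "DIORsplit", "NWPU", "NWPUsplit"]


def missing_datasets(root):
    """Return the expected dataset sub-directories absent under root/datasets."""
    base = Path(root) / "datasets"
    return [name for name in EXPECTED if not (base / name).is_dir()]


if __name__ == "__main__":
    # Run from inside RP-FSOD/, so datasets/ sits one level up.
    missing = missing_datasets("..")
    if missing:
        print("Missing datasets:", ", ".join(missing))
    else:
        print("Dataset layout looks complete.")
```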

4. Training and Evaluation

For ease of training and evaluation over multiple runs, we integrate the whole few-shot object detection pipeline, including base pre-training and novel fine-tuning, into one script, run_*.sh.

  • To reproduce the results on VOC, EXP_NAME can be any string and SPLIT_ID must be 1, 2, 3, or 4.
    bash run_voc.sh EXP_NAME SPLIT_ID
    
  • Please read the details of the few-shot object detection pipeline in run_*.sh; you need to change IMAGENET_PRETRAIN* to your own path.
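The two-stage structure of the pipeline (base pre-training, then novel fine-tuning) can be sketched as below. This is an illustrative mock-up only: the entry point, config file paths, and output directories are assumptions and do not reflect the repository's actual run_*.sh contents.

```python
import shlex


def pipeline_commands(exp_name, split_id,
                      imagenet_pretrain="/path/to/imagenet_pretrain.pth"):
    """Build the base pre-training and novel fine-tuning commands,
    mirroring the two stages driven by run_*.sh.
    All file paths here are hypothetical placeholders."""
    if split_id not in (1, 2, 3, 4):
        raise ValueError("SPLIT_ID must be 1, 2, 3, or 4")
    base = [
        "python3", "main.py",
        "--config-file", f"configs/voc/base{split_id}.yaml",   # hypothetical path
        "OUTPUT_DIR", f"checkpoints/{exp_name}/base{split_id}",
        "MODEL.WEIGHTS", imagenet_pretrain,
    ]
    finetune = [
        "python3", "main.py",
        "--config-file", f"configs/voc/novel{split_id}.yaml",  # hypothetical path
        "OUTPUT_DIR", f"checkpoints/{exp_name}/novel{split_id}",
    ]
    return [shlex.join(base), shlex.join(finetune)]


if __name__ == "__main__":
    for cmd in pipeline_commands("my_experiment", 1):
        print(cmd)
```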

Acknowledgement

This repo is developed based on DeFRCN. Please check it for more details and features.
