
MEDL-U

[ICRA 2024] The PyTorch implementation of 'MEDL-U: Uncertainty-aware 3D Automatic Annotation based on Evidential Deep Learning.'

Model Architecture

Abstract

Advancements in deep learning-based 3D object detection necessitate the availability of large-scale datasets. However, this requirement introduces the challenge of manual annotation, which is often both burdensome and time-consuming. To tackle this issue, the literature has seen the emergence of several weakly supervised frameworks for 3D object detection which can automatically generate pseudo labels for unlabeled data. Nevertheless, these generated pseudo labels contain noise and are not as accurate as those labeled by humans. In this paper, we present the first approach that addresses the inherent ambiguities present in pseudo labels by introducing an Evidential Deep Learning (EDL) based uncertainty estimation framework. Specifically, we propose MEDL-U, an EDL framework based on MTrans, which not only generates pseudo labels but also quantifies the associated uncertainties. However, applying EDL to 3D object detection presents three primary challenges: (1) relatively lower pseudolabel quality in comparison to other autolabelers; (2) excessively high evidential uncertainty estimates; and (3) lack of clear interpretability and effective utilization of uncertainties for downstream tasks. We tackle these issues through the introduction of an uncertainty-aware IoU-based loss, an evidence-aware multi-task loss function, and the implementation of a post-processing stage for uncertainty refinement. Our experimental results demonstrate that probabilistic detectors trained using the outputs of MEDL-U surpass deterministic detectors trained using outputs from previous 3D annotators on the KITTI val set for all difficulty levels. Moreover, MEDL-U achieves state-of-the-art results on the KITTI official test set compared to existing 3D automatic annotators.
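For background on the EDL framing above: in the standard deep evidential regression formulation (Amini et al., 2020), the network predicts Normal-Inverse-Gamma parameters (γ, ν, α, β) for each regression target, from which aleatoric and epistemic uncertainty follow in closed form. The sketch below shows that generic decomposition; it is not the exact MEDL-U head, and the function name is illustrative:

```python
def evidential_uncertainties(gamma, nu, alpha, beta):
    """Closed-form uncertainty decomposition for a Normal-Inverse-Gamma
    (NIG) evidential output, following standard deep evidential regression.

    gamma : predicted mean of the regression target
    nu    : virtual observation count for the mean (nu > 0)
    alpha : inverse-gamma shape parameter (alpha > 1 for finite variance)
    beta  : inverse-gamma scale parameter (beta > 0)
    """
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2], data noise
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu], model uncertainty
    return gamma, aleatoric, epistemic

# Low evidence (small nu) inflates the epistemic term relative to the aleatoric one
mean, alea, epis = evidential_uncertainties(gamma=1.5, nu=0.5, alpha=2.0, beta=1.0)
```

Note that epistemic uncertainty shrinks as ν grows, which matches the intuition that more (virtual) evidence should increase confidence in the predicted mean.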

Publication

Our work is featured in an article on AutoWise. You can read more about it here: AutoWise ICRA 2024 Paper.

Data Preparation

The KITTI 3D detection dataset can be downloaded from the official website: link.

Training

Modify the configuration file located at /configs/MTrans_kitti.yaml.
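The exact keys are defined by the repository's config schema; as an illustration only, the fields you would typically adjust before training look like the following (all key names here are assumptions, so check the actual file):

```yaml
# Illustrative fragment -- the real keys live in configs/MTrans_kitti.yaml
experiment_name: medl_u_kitti   # output/checkpoint folder name (assumed key)
data_root: /path/to/kitti       # KITTI dataset location (assumed key)
batch_size: 8                   # adjust to available GPU memory (assumed key)
```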

To train MTrans on the KITTI dataset, run:

python train.py --cfg_file configs/MTrans_kitti.yaml

Pretrained Model

You can download the pretrained model from the following link: Download Pretrained Model.

Generate Pseudolabels

To generate pseudolabels using the pretrained model, ensure that the best_model.pt file is saved in the experiment_name/ckpt folder. After verifying this, run the following command:

python train.py --cfg_file configs/MTrans_kitti_gen_label.yaml
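Generated pseudo labels are typically written in the standard KITTI label format: one object per line with 15 whitespace-separated fields, optionally followed by a 16th confidence score. A small parser sketch, assuming that standard format (this repository's exact output layout may differ):

```python
def parse_kitti_label_line(line):
    """Parse one object from a KITTI-format label line.

    Fields: type, truncated, occluded, alpha, 2D bbox (4), 3D dims h/w/l (3),
    3D location x/y/z in camera coords (3), rotation_y, optional score.
    """
    f = line.split()
    obj = {
        "type": f[0],
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),
        "bbox_2d": [float(v) for v in f[4:8]],
        "dimensions_hwl": [float(v) for v in f[8:11]],
        "location_xyz": [float(v) for v in f[11:14]],
        "rotation_y": float(f[14]),
    }
    if len(f) > 15:  # pseudolabel/detector outputs often append a score
        obj["score"] = float(f[15])
    return obj
```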

Postprocess Uncertainties

To effectively postprocess uncertainties and integrate them with KITTI info and dbinfo files, please refer to the following Jupyter notebook:

  • Notebook Location: /small_experiments/glenet_weights_statistics.ipynb

This notebook provides step-by-step instructions and code for processing the uncertainties, so that the uncertainty estimates generated by the model can be interpreted and used correctly downstream.
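As one illustration of what such postprocessing can involve, per-box uncertainties are often rescaled into quality weights (e.g. for weighting a downstream detector's loss), for instance by min-max normalization and inversion. This is a generic sketch, not the notebook's exact procedure:

```python
def uncertainty_to_weights(uncertainties, eps=1e-8):
    """Map raw per-box uncertainties to weights in [0, 1]:
    lowest uncertainty -> weight 1.0, highest uncertainty -> weight 0.0."""
    u_min, u_max = min(uncertainties), max(uncertainties)
    scale = max(u_max - u_min, eps)  # guard against a constant input list
    return [1.0 - (u - u_min) / scale for u in uncertainties]
```

In practice the mapping (linear, exponential, thresholded) is a design choice; the point is that well-calibrated uncertainties let a detector down-weight noisy pseudo labels instead of trusting all boxes equally.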

References

This work is based on the MTrans framework. We extend our gratitude to the authors for their open-source code: MTrans GitHub Repository.

Citation

If you use this work in your research, please cite it as follows:

@inproceedings{Paat2023MEDLUU3,
  title={MEDL-U: Uncertainty-aware 3D Automatic Annotation based on Evidential Deep Learning},
  author={Helbert Paat and Qing Lian and Weilong Yao and Tong Zhang},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2024}
}
