
Visual Teach & Repeat 3 (VT&R3)

Update (2022-02-24)

For code related to Are We Ready for Radar to Replace Lidar in All-Weather Mapping and Localization? (Keenan Burnett, Yuchen Wu, David J. Yoon, Angela P. Schoellig, Timothy D. Barfoot, 2022), please see the radar_lidar_dev branch. Code related to lidar teach and repeat is located under main/src/vtr_lidar, and code related to radar teach and repeat is located under main/src/vtr_radar. We are working on merging this code into the main branch.

What is VT&R3?

VT&R3 is a C++ implementation of the Teach and Repeat navigation framework. It enables a robot to be taught a network of traversable paths and then accurately repeat any portion of that network. VT&R3 is designed for easy adaptation to various combinations of sensors (camera, LiDAR, radar, GPS) and robots. The current implementation includes a feature-based visual odometry and localization pipeline that estimates the robot's motion from stereo camera images, as well as a point-cloud-based odometry and localization pipeline for LiDAR sensors.
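
To make the "network of traversable paths" idea concrete, below is a minimal, hypothetical C++ sketch. The names (RouteNetwork, AddTaughtPose, PlanRoute) are illustrative only and are not part of VT&R3's actual API: a teach pass appends poses as vertices of a graph, and a repeat pass plans a route over any portion of that graph.

// Minimal, hypothetical sketch of the teach-and-repeat network idea.
// None of these names come from the VT&R3 codebase.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <optional>
#include <queue>
#include <unordered_map>
#include <vector>

struct Pose2D { double x, y, theta; };  // simplified robot pose

class RouteNetwork {
 public:
  // Teach pass: append a pose and connect it to the previous vertex.
  std::size_t AddTaughtPose(const Pose2D& pose,
                            std::optional<std::size_t> prev = {}) {
    const std::size_t id = poses_.size();
    poses_.push_back(pose);
    adj_.emplace_back();
    if (prev) {  // edges are bidirectional: a path can be repeated either way
      adj_[*prev].push_back(id);
      adj_[id].push_back(*prev);
    }
    return id;
  }

  // Repeat pass: find a route between any two taught vertices by
  // breadth-first search over the network.
  std::vector<std::size_t> PlanRoute(std::size_t from, std::size_t to) const {
    std::unordered_map<std::size_t, std::size_t> parent{{from, from}};
    std::queue<std::size_t> frontier;
    frontier.push(from);
    while (!frontier.empty()) {
      const std::size_t v = frontier.front();
      frontier.pop();
      if (v == to) break;
      for (std::size_t n : adj_[v])
        if (!parent.count(n)) { parent[n] = v; frontier.push(n); }
    }
    std::vector<std::size_t> route;
    if (!parent.count(to)) return route;  // no connected path was taught
    for (std::size_t v = to; v != from; v = parent[v]) route.push_back(v);
    route.push_back(from);
    std::reverse(route.begin(), route.end());
    return route;
  }

  const Pose2D& pose(std::size_t id) const { return poses_[id]; }

 private:
  std::vector<Pose2D> poses_;                  // taught poses (graph vertices)
  std::vector<std::vector<std::size_t>> adj_;  // traversable edges
};

int main() {
  RouteNetwork net;
  // Teach two branches that share the starting vertex.
  const auto a = net.AddTaughtPose({0, 0, 0});
  const auto b = net.AddTaughtPose({1, 0, 0}, a);
  const auto c = net.AddTaughtPose({2, 0, 0}, b);
  const auto d = net.AddTaughtPose({1, 1, 0}, a);  // side branch
  // Repeat any portion of the network, e.g. from the branch tip d to c.
  for (const std::size_t v : net.PlanRoute(d, c))
    std::cout << v << " (" << net.pose(v).x << ", " << net.pose(v).y << ")\n";
}

In VT&R3 itself, odometry and localization against the taught experiences keep the robot accurately on these routes; the sketch only illustrates the underlying network structure.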

Installation and Getting Started

Guides to help you get started with VT&R3, along with more detailed documentation, can be found on the wiki.

Citation

Please cite the following paper when using VT&R3 for your research:

@article{paul2010vtr,
  author  = {Furgale, Paul and Barfoot, Timothy D.},
  title   = {Visual teach and repeat for long-range rover autonomy},
  journal = {Journal of Field Robotics},
  volume  = {27},
  number  = {5},
  pages   = {534--560},
  year    = {2010},
  doi     = {10.1002/rob.20342}
}

Additional Citations

  • Multi-Experience Localization
    @inproceedings{michael2016mel,
      author    = {Paton, Michael and MacTavish, Kirk and Warren, Michael and Barfoot, Timothy D.},
      booktitle = {2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      title     = {Bridging the appearance gap: Multi-experience localization for long-term visual teach and repeat},
      year      = {2016},
      doi       = {10.1109/IROS.2016.7759303}
    }
  • Lidar and Radar Teach and Repeat
    @article{burnett_radar22,
      author  = {Burnett, Keenan and Wu, Yuchen and Yoon, David J. and Schoellig, Angela P. and Barfoot, Timothy D.},
      title   = {Are We Ready for Radar to Replace Lidar in All-Weather Mapping and Localization?},
      journal = {arXiv preprint arXiv:2203.10174},
      year    = {2022}
    }
