TL;DR: PINGS is a LiDAR-visual SLAM system unifying distance fields and radiance fields within a neural point map
[Demo videos (click to expand)]
| SLAM example 1 | SLAM example 2 | Render from Map |
| --- | --- | --- |
| pings_demo_01.mp4 | pings_demo_02.mp4 | pings_demo_03.mp4 |
🚧 Repo under construction 🚧
[Details (click to expand)]
Robots require high-fidelity reconstructions of their environment for effective operation. Such scene representations should be both geometrically accurate and photorealistic to support downstream tasks. While this can be achieved by building distance fields from range sensors and radiance fields from cameras, scalably and incrementally mapping both fields at the same time, consistently and at high quality, remains challenging. In this paper, we propose a novel map representation that unifies a continuous signed distance field and a Gaussian splatting radiance field within an elastic and compact point-based implicit neural map. By enforcing geometric consistency between these fields, we achieve mutual improvements by exploiting both modalities. We devise a LiDAR-visual SLAM system called PINGS using the proposed map representation and evaluate it on several challenging large-scale datasets. Experimental results demonstrate that PINGS can incrementally build globally consistent distance and radiance fields encoded with a compact set of neural points. Compared to state-of-the-art methods, PINGS achieves superior photometric and geometric rendering at novel views by leveraging the constraints from the distance field. Furthermore, by utilizing dense photometric cues and multi-view consistency from the radiance field, PINGS produces more accurate distance fields, leading to improved odometry estimation and mesh reconstruction.
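To make the map representation above more concrete, here is a minimal sketch of a point-based neural map whose per-point latent features are decoded by shared heads into both a signed distance value and Gaussian splat parameters. All class and attribute names, network sizes, and the k-nearest-neighbor interpolation scheme are illustrative assumptions, not the actual PINGS implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of a point-based implicit neural map that decodes an SDF and
# Gaussian splat parameters from shared per-point features. Names, network
# sizes, and the interpolation scheme are assumptions, not the PINGS code.
class NeuralPointMapSketch(nn.Module):
    def __init__(self, num_points: int, feat_dim: int = 32):
        super().__init__()
        self.positions = nn.Parameter(torch.randn(num_points, 3))        # neural point positions
        self.features = nn.Parameter(torch.zeros(num_points, feat_dim))  # per-point latent features
        self.sdf_head = nn.Sequential(nn.Linear(feat_dim + 3, 64), nn.ReLU(), nn.Linear(64, 1))
        # 3 scale + 4 rotation (quaternion) + 1 opacity + 3 color = 11 Gaussian parameters
        self.gauss_head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 11))

    def query_sdf(self, query: torch.Tensor, k: int = 8) -> torch.Tensor:
        # Interpolate the features of the k nearest neural points, conditioned
        # on the relative query position, and decode a signed distance value.
        dists = torch.cdist(query, self.positions)                        # (Q, N)
        knn_d, knn_idx = dists.topk(k, largest=False)                     # (Q, k)
        weights = torch.softmax(-knn_d, dim=-1)                           # distance-based weights
        feats = self.features[knn_idx]                                    # (Q, k, F)
        rel = query.unsqueeze(1) - self.positions[knn_idx]                # (Q, k, 3)
        sdf_k = self.sdf_head(torch.cat([feats, rel], dim=-1)).squeeze(-1)  # (Q, k)
        return (weights * sdf_k).sum(dim=-1)                              # (Q,)

    def spawn_gaussians(self) -> torch.Tensor:
        # Decode one Gaussian primitive per neural point for splatting-based rendering.
        return self.gauss_head(self.features)                             # (N, 11)
```

Enforcing geometric consistency between the two decoded fields, e.g. keeping the decoded Gaussians close to the zero level set of the SDF, is what allows each modality to improve the other, as described in the paper.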
- Ubuntu OS (tested on 20.04)
- GPU with > 8 GB memory recommended
```bash
git clone git@github.com:PRBonn/PINGS.git --recursive
cd PINGS
conda create --name pings python=3.10
conda activate pings
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=11.8 -c pytorch -c nvidia
```
The PyTorch installation command above depends on your CUDA version (check it with `nvcc --version`). You may check the instructions here.
```bash
pip3 install -r requirements.txt
```
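As an optional sanity check (not part of the official setup), you can verify that PyTorch was installed with working GPU support before running PINGS:

```python
# Optional sanity check: confirm that PyTorch sees the GPU.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```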
[TODO]
A dataset with both RGB images and depth observations (from a LiDAR or a depth camera), together with the intrinsic and extrinsic calibration parameters, is required. Note that the input images are expected to be already undistorted.
If only depth measurements are available, you can still run PINGS with the `--gs-off` flag, in which case PINGS degenerates to PIN-SLAM.
To extract individual observations from a ROS bag, you may use the ROS bag converter tool.
For your own dataset, you may need to implement a new dataloader class and put it in the `dataset/dataloaders` folder. Check here for an example.
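For orientation only, a new dataloader usually exposes the per-frame sensor data by index, roughly along the lines of the sketch below. The class name, directory layout, and return format here are assumptions; the authoritative interface is whatever the existing loaders in `dataset/dataloaders` implement.

```python
import os
import numpy as np

# Hypothetical dataloader skeleton. The actual interface expected by PINGS is
# defined by the existing classes in dataset/dataloaders; field names and the
# returned data layout here are illustrative assumptions.
class MyDatasetSketch:
    def __init__(self, data_dir: str):
        self.data_dir = data_dir
        self.scan_files = sorted(os.listdir(os.path.join(data_dir, "lidar")))
        self.image_files = sorted(os.listdir(os.path.join(data_dir, "images")))

    def __len__(self) -> int:
        return len(self.scan_files)

    def __getitem__(self, idx: int):
        # Return the LiDAR scan as an (N, 4) float array (x, y, z, intensity);
        # images and calibration would be exposed through additional members.
        scan_path = os.path.join(self.data_dir, "lidar", self.scan_files[idx])
        points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)
        return points
```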
To check how to run PINGS and which datasets are already supported, use:
```bash
python3 pings.py -h
```
To check how to inspect a map built by PINGS, use:
```bash
python3 inspect_pings.py -h
```
[Details (click to expand)]
If you use PINGS for any academic work, please cite our original paper.
```bibtex
@inproceedings{pan2025rss,
  author    = {Y. Pan and X. Zhong and L. Jin and L. Wiesmann and M. Popovi\'c and J. Behley and C. Stachniss},
  title     = {{PINGS: Gaussian Splatting Meets Distance Fields within a Point-Based Implicit Neural Map}},
  booktitle = {Robotics: Science and Systems (RSS)},
  year      = {2025},
  codeurl   = {https://github.com/PRBonn/PINGS},
  url       = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/pan2025rss.pdf}
}
```
[Details (click to expand)]
PINGS is built on top of our previous work PIN-SLAM, and we thank the authors of the following works: