Hongyu Zhou1, Longzhong Lin1, Jiabao Wang1, Yichong Lu1, Dongfeng Bai2, Bingbing Liu2, Yue Wang1, Andreas Geiger3,4, Yiyi Liao1,†
1 Zhejiang University 2 Huawei 3 University of Tübingen 4 Tübingen AI Center
† Corresponding Authors
This is the official repository for the paper HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving.
First, install pixi:
```bash
curl -fsSL https://pixi.sh/install.sh | sh
```
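You can then check that pixi is on your PATH:

```bash
pixi --version
```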
Because the repository depends on some packages that can only be installed from source, and these require PyTorch and CUDA to compile, the pixi environment installation is split into two steps:
- Comment out the packages below the `# install from source code` marker in `pixi.toml`, then run `pixi install` to install the remaining packages from PyPI.
- Uncomment the packages from the previous step, then run `pixi install` again to install them from source (see the sketch after this list).
- Install apex (required by InverseForm) by running `pixi run install-apex`.
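A minimal sketch of the full flow (the packages to toggle are the ones marked in `pixi.toml`):

```bash
# Step 1: with the source-built packages commented out in pixi.toml,
# pull everything else from PyPI.
pixi install

# Step 2: uncomment the source-built packages in pixi.toml, then run again
# so they compile against the PyTorch/CUDA now present in the environment.
pixi install

# Step 3: install apex, required by InverseForm.
pixi run install-apex
```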
Enter the pixi environment with `pixi shell`, or use `pixi run <command>` to run a single command inside the environment.
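For example:

```bash
# Enter the environment interactively:
pixi shell

# Or run a single command without entering the shell:
pixi run python --version
```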
Please refer to the Data Preparation Document.
You can download sample data from here.
```bash
seq=${seq_name}
input_path=${datadir}/${seq}
output_path=${modeldir}/${seq}
mkdir -p ${output_path}

# Train the ground model
CUDA_VISIBLE_DEVICES=4 \
python -u train_ground.py --data_cfg ./configs/${dataset_name: [kitti360, waymo, nusc, pandaset]}.yaml \
    --source_path ${input_path} --model_path ${output_path}

# Train the scene
CUDA_VISIBLE_DEVICES=4 \
python -u train.py --data_cfg ./configs/${dataset_name}.yaml \
    --source_path ${input_path} --model_path ${output_path}
```
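For example, a complete run on one nuScenes sequence might look like this (the sequence name and directories are placeholders for your setup):

```bash
# Example: reconstruct one nuScenes sequence (all paths below are placeholders)
seq=my_sequence
input_path=./data/nusc/${seq}
output_path=./output/nusc/${seq}
mkdir -p ${output_path}

CUDA_VISIBLE_DEVICES=0 \
python -u train_ground.py --data_cfg ./configs/nusc.yaml \
    --source_path ${input_path} --model_path ${output_path}

CUDA_VISIBLE_DEVICES=0 \
python -u train.py --data_cfg ./configs/nusc.yaml \
    --source_path ${input_path} --model_path ${output_path}
```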
The reconstructed scene folders contain information that is not used during simulation. Scenes should therefore be exported in a minimized format to make sharing and simulation easier:
```bash
python eval_render/export_scene.py --model_path ${recon_scene_path} --output_path ${export_path} --iteration 30000
```
We have made some changes to the capturing and reloading code. To convert scenes created with a previous version of the code (before commit 1ca821a8), add `--ver0` to the above command.
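For example:

```bash
# Export a scene reconstructed with the pre-1ca821a8 code:
python eval_render/export_scene.py --model_path ${recon_scene_path} \
    --output_path ${export_path} --iteration 30000 --ver0
```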
We have released all the 3DRealCar files, as well as 23 scenes and the corresponding 158 scenarios, at the release link. We plan to hold a competition, so some scenes and scenarios will be hosted privately.
Note that this GUI is only used for configuring scenarios, not for running simulation. The rendering quality in the GUI does not reflect the rendering quality during simulation.
First, convert the vehicles and scenes to splat and semantic formats:
```bash
python eval_render/convert_vehicles.py --vehicle_path ${PATH_3DRealCar}
python eval_render/convert_scene.py --model_path ${PATH_Scene}
```
Then you can run the GUI to configure the scenario. `nuscenes_camera.yaml` in `gui/static/data` provides a camera configuration template; you can modify it to fit your needs.
```bash
cd gui
python app.py --scene ${PATH_Scene} --car_folder ${PATH_3DRealCar/converted}
```
You can configure the scenario with the GUI and download the YAML file to use in simulation.
Here is a video demonstrating how to use the GUI: GUI Video
Before simulation, the UniAD_SIM, VAD_SIM, and NAVSIM clients should be installed. The client environments can be kept separate from the HUGSIM environment.
The dependencies for NAVSIM are already specified in the pixi environment file, so you don't need to install them manually.
closed_loop.py automatically launches the autonomous driving algorithms.
Update the paths in configs/sim/*_base.yaml to the corresponding paths on your machine.
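As a quick sanity check, you can list path-like entries that still need editing (a sketch; the actual keys in the YAML files may differ):

```bash
# List lines containing filesystem paths in the base configs:
grep -nE "/(home|data)/|path" configs/sim/*_base.yaml
```

The general invocation is: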
```bash
CUDA_VISIBLE_DEVICES=${sim_cuda} \
python closed_loop.py --scenario_path ./configs/benchmark/${dataset_name}/${scenario_name}.yaml \
    --base_path ./configs/sim/${dataset_name}_base.yaml \
    --camera_path ./configs/sim/${dataset_name}_camera.yaml \
    --kinematic_path ./configs/sim/kinematic.yaml \
    --ad ${method_name: [uniad, vad, ltf]} \
    --ad_cuda ${ad_cuda}
```
For example, run the following commands to execute all scenarios in a directory:
```bash
sim_cuda=0
ad_cuda=1
# change this variable to the scenario path on your machine
scenario_dir=${SCENARIO_PATH}
for cfg in ${scenario_dir}/*.yaml; do
    echo ${cfg}
    CUDA_VISIBLE_DEVICES=${sim_cuda} \
    python closed_loop.py --scenario_path ${cfg} \
        --base_path ./configs/sim/nuscenes_base.yaml \
        --camera_path ./configs/sim/nuscenes_camera.yaml \
        --kinematic_path ./configs/sim/kinematic.yaml \
        --ad uniad \
        --ad_cuda ${ad_cuda}
done
```
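You can also invoke `closed_loop.py` on a single scenario, e.g. to try a different method such as VAD (`my_scenario.yaml` is a placeholder):

```bash
CUDA_VISIBLE_DEVICES=0 \
python closed_loop.py --scenario_path ${scenario_dir}/my_scenario.yaml \
    --base_path ./configs/sim/nuscenes_base.yaml \
    --camera_path ./configs/sim/nuscenes_camera.yaml \
    --kinematic_path ./configs/sim/kinematic.yaml \
    --ad vad \
    --ad_cuda 1
```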
In practice, you may encounter errors due to an incorrect environment, wrong paths, etc. For debugging, you can modify the last part of the code as follows:
```python
# process = launch(ad_path, args.ad_cuda, output)
# try:
#     create_gym_env(cfg, output)
#     check_alive(process)
# except Exception as e:
#     print(e)
#     process.kill()

# For debug: run the environment directly in the foreground
create_gym_env(cfg, output)
```
If you find our paper and code useful, please kindly cite us via:
```bibtex
@article{zhou2024hugsim,
  title={HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving},
  author={Zhou, Hongyu and Lin, Longzhong and Wang, Jiabao and Lu, Yichong and Bai, Dongfeng and Liu, Bingbing and Wang, Yue and Geiger, Andreas and Liao, Yiyi},
  journal={arXiv preprint arXiv:2412.01718},
  year={2024}
}
```