Zhejiang University of Technology
(*) corresponding author
The segmentation of dense point clouds from industrial LiDAR scans presents challenges in computational overhead and VRAM usage, hindering the development of automated fast measurement systems. To address this, we propose EPNet, an efficient model for part segmentation of dense point clouds. EPNet employs a U-Net-like architecture with skip connections to merge original and recovered features, enhancing local feature extraction via KNN and cosine similarity. A factorization-dimensionality-reduction module based on self-attention overcomes the limitations of trilinear interpolation in feature recovery, improving both local and global feature fusion. In experiments on the LVPC dataset of dense vehicle point clouds, EPNet outperforms models from the past three years, achieving a 1.7% accuracy improvement and a 9.7% increase in average Instance IoU compared to PointNet++. EPNet also achieves a single-file inference time of under 1 second while requiring minimal GPU VRAM, demonstrating its potential for real-world industrial high-precision fast automated measurement.
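To illustrate the kind of neighborhood grouping the abstract refers to (KNN combined with cosine similarity between point features), here is a minimal NumPy sketch. This is an illustrative assumption about the general mechanism, not EPNet's actual implementation; the function name `knn_cosine` and all shapes are hypothetical.

```python
import numpy as np

def knn_cosine(points, feats, k):
    """For each point, find its k nearest neighbors (Euclidean) and
    return the cosine similarity between its feature vector and each
    neighbor's feature vector. Shapes: points (N, 3), feats (N, C)."""
    # Pairwise squared Euclidean distances between all points.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest neighbors (the point itself comes first).
    idx = np.argsort(d2, axis=1)[:, :k]                  # (N, k)
    # L2-normalize features so dot products become cosine similarities.
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    # Cosine similarity between each point and its k neighbors.
    sim = (f[:, None, :] * f[idx]).sum(-1)               # (N, k)
    return idx, sim

# Tiny toy example: 5 random points with 4-dim features, k = 3.
rng = np.random.default_rng(0)
pts, fts = rng.normal(size=(5, 3)), rng.normal(size=(5, 4))
idx, sim = knn_cosine(pts, fts, k=3)
print(idx.shape, sim.shape)  # (5, 3) (5, 3)
```

The brute-force (N, N) distance matrix is only for clarity; a real dense-cloud pipeline would use a spatial index or GPU batched KNN.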
conda env create -f py38.yaml
pip install -r requirements.txt
LVPC Dataset: Download the official data from here. Unzip the file under data/PartSeg/LVPC/.
The directory structure should be
LVPC/
├── 04379243/
│   ├── 1wqxaf.txt
│   ├── .......
├── train_test_split/
└── synsetoffset2category.txt
ShapeNetPart Dataset: Download the official data from here. Unzip the file under data/PartSeg/shapenetcore_partanno_segmentation_benchmark_v0_normal/.
The directory structure should be
shapenetcore_partanno_segmentation_benchmark_v0_normal/
├── 02691156/
│   ├── 1a04e3eab45ca15dd86060f189eb133.txt
│   ├── .......
├── .......
├── train_test_split/
└── synsetoffset2category.txt
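In the ShapeNetPart (with normals) layout, each `.txt` sample stores one point per line as seven values: `x y z nx ny nz part_label`. Assuming the LVPC files follow the same layout (an assumption based on the matching directory structure above), a minimal loader might look like:

```python
import numpy as np
import os, tempfile

def load_sample(path):
    """Load one part-segmentation sample whose columns are
    x y z nx ny nz part_label (7 values per line)."""
    data = np.loadtxt(path)                      # (N, 7)
    points  = data[:, 0:3].astype(np.float32)    # xyz coordinates
    normals = data[:, 3:6].astype(np.float32)    # surface normals
    labels  = data[:, 6].astype(np.int64)        # per-point part label
    return points, normals, labels

# Toy demo: write a 2-point file in this format and read it back.
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write("0.1 0.2 0.3 0 0 1 0\n0.4 0.5 0.6 0 1 0 2\n")
tmp.close()
pts, nrm, lbl = load_sample(tmp.name)
os.unlink(tmp.name)
print(pts.shape, lbl.tolist())  # (2, 3) [0, 2]
```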
LVPC
python -m torch.distributed.launch --nproc_per_node=1 --master_port 29502 --use_env train_partseg_ddp.py --cfg config/ShapeNetPart/train_LVPC.json
ShapeNetPart
python -m torch.distributed.launch --nproc_per_node=2 --master_port 29502 --use_env train_partseg_ddp.py --cfg config/ShapeNetPart/train_Shapenetpart.json
LVPC
python test_partseg.py --cfg config/ShapeNetPart/test_LVPC.json
ShapeNetPart
python test_partseg.py --cfg config/ShapeNetPart/test_Shapenetpart.json
Method | Reference | OA↑ | mIoU↑ | VRAM↓ | Train Time↓ | Params↓ | Checkpoints Download | Logs |
---|---|---|---|---|---|---|---|---|
PointMLP | ICLR 2022 | 86.6% | 53.64% | 19.9G | 14.7H | 16.7M | PointMLP.pth | PointMLP.txt |
PointNeXt | NeurIPS 2022 | - | 34.46% | 40.8G | 59.7H | 22.4M | PointNeXt.pth | PointNeXt.txt |
Point-BERT | CVPR 2022 | 76.8% | 40.86% | 24.7G | 17.9H | 27.05M | Point-BERT.pth | Point-BERT.txt |
Point-MAE | ECCV 2022 | 93.8% | 78.53% | 26.1G | 17.1H | 27.05M | Point-MAE.pth | Point-MAE.txt |
Point-M2AE | NeurIPS 2022 | 93.6% | 79.89% | 46.2G | 19.7H | 25.47M | Point-M2AE.pth | Point-M2AE.txt |
ACT | ICLR 2023 | 93.6% | 78.65% | 42.5G | 18.2H | 27.05M | ACT.pth | ACT.txt |
PointGPT | NeurIPS 2023 | 91.3% | 69.39% | 42.9G | 19.2H | 24.69M | PointGPT.pth | PointGPT.txt |
ReCon | ICML 2023 | 93.8% | 78.18% | 33.0G | 22.4H | 48.53M | ReCon.pth | ReCon.txt |
ShapeLLM | ECCV 2024 | 90.1% | 72.75% | 30.6G | 4.66H | 48.53M | ShapeLLM.pth | ShapeLLM.txt |
PointMamba | NeurIPS 2024 | 92.8% | 77.66% | 23.4G | 5.50H | 5.78M | PointMamba.pth | PointMamba.txt |
PointRWKV | AAAI 2025 | 92.1% | 75.29% | 44.7G | 5.16H | 27.05M | PointRWKV.pth | PointRWKV.txt |
PointNet++ | NeurIPS 2017 | 91.0% | 70.76% | 14.4G | 4.16H | 1.47M | PointNet++.pth | PointNet++.txt |
EPNet(Ours) | ICMR 2025 | 92.7% | 80.46% | 10.64G | 2.47H | 3.90M | EPNet.pth | EPNet.txt |
Method | Reference | OA↑ | mIoU↑ | Inference Time↓ | VRAM↓ | Logs |
---|---|---|---|---|---|---|
PointNet++ | NeurIPS 2017 | 90.69% | 70.78% | 3.41s | 14.49G | PointNet++.txt |
Point-MAE | ECCV 2022 | 93.22% | 77.84% | 9.71s | 25.26G | Point-MAE.txt |
Point-M2AE | NeurIPS 2022 | 92.48% | 79.41% | 10.11s | 41.95G | Point-M2AE.txt |
ACT | ICLR 2023 | 92.67% | 78.39% | 9.91s | 25.26G | ACT.txt |
ReCon | ICML 2023 | 92.93% | 77.02% | 9.57s | 38.25G | ReCon.txt |
PointMamba | NeurIPS 2024 | 91.66% | 76.13% | 2.70s | 30.21G | PointMamba.txt |
EPNet(Ours) | ICMR 2025 | 93.01% | 81.96% | 0.47s | 7.34G | EPNet.txt |
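For reference, the instance-averaged mIoU reported above is, in the usual ShapeNetPart convention, computed per shape by averaging IoU over that shape's part classes (counting a part absent from both prediction and ground truth as IoU = 1), then averaged over all shapes. A minimal sketch of the per-shape step, assuming this standard convention:

```python
import numpy as np

def shape_miou(pred, gt, num_parts):
    """IoU averaged over the part classes of one shape.
    A part missing from both pred and gt contributes IoU = 1."""
    ious = []
    for p in range(num_parts):
        inter = np.sum((pred == p) & (gt == p))
        union = np.sum((pred == p) | (gt == p))
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

gt   = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
print(shape_miou(pred, gt, num_parts=2))  # ≈ 0.583 (mean of 1/2 and 2/3)
```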
To save visualization results, set the batch size to 1. After running, use CloudCompare to view the generated files.
LVPC
python test_partseg_save.py --cfg config/ShapeNetPart/test_LVPC_save.json
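As an illustration of the kind of file CloudCompare can open directly, here is a hedged sketch that writes a labeled cloud as plain ASCII with one `x y z r g b` row per point. The palette and function name are arbitrary choices for illustration, not necessarily what `test_partseg_save.py` emits.

```python
import numpy as np

# A small fixed palette: one RGB color per part label (arbitrary choice).
PALETTE = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0]])

def save_colored_cloud(path, points, labels):
    """Write 'x y z r g b' rows that CloudCompare's ASCII importer can open."""
    colors = PALETTE[labels % len(PALETTE)]   # (N, 3) color per point
    rows = np.hstack([points, colors])        # (N, 6)
    np.savetxt(path, rows, fmt="%.6f %.6f %.6f %d %d %d")

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
save_colored_cloud("pred_colored.txt", pts, np.array([0, 2]))
```

In CloudCompare, open the file and map the last three columns to R, G, B in the ASCII import dialog.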
Our code builds upon PointNet++, ACT, PAConv, Point-BERT, Point-M2AE, Point-MAE, Point-Transformer, PointCloudMamba, PointGPT, PointMamba, PointMLP, PointNeXt, PointRWKV, ReCon, ShapeLLM, SPoTr and TAP.
@inproceedings{10.1145/3731715.3733329,
author = {Wang, Cheng and Hu, Wulong and Wang, Minqian and Cheng, Zhenbo and Zhang, Yuanming and Gao, Fei},
title = {EPNet: Efficient Part Segmentation for Dense Point Clouds},
year = {2025},
isbn = {9798400718779},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3731715.3733329},
doi = {10.1145/3731715.3733329},
abstract = {The segmentation of dense point clouds from industrial LiDAR scans presents challenges in computational overhead and VRAM usage, hindering the development of automated fast measurement systems. To address this, we propose EPNet, an efficient model for part segmentation of dense point clouds. EPNet employs a U-Net-like architecture with skip connections to merge original and recovered features, enhancing local feature extraction via KNN and cosine similarity. Factorization-dimensionality-reduction module based on self-attention overcomes the limitations of trilinear interpolation in feature recovery, improving both local and global feature fusion. In experiments on the LVPC dataset of dense vehicle point clouds, EPNet outperforms models from the past three years, achieving a 1.7\% accuracy improvement and a 9.7\% increase in average Instance IoU compared to PointNet++. EPNet also achieves a single-file inference time of under 1 second while requiring minimal GPU VRAM resources, demonstrating its potential for real-world industrial high-precision fast automated measurements. The code is available at https://github.com/duskNNNN/EPNet.},
booktitle = {Proceedings of the 2025 International Conference on Multimedia Retrieval},
pages = {1358--1366},
numpages = {9},
keywords = {dense point cloud, efficient, industrial automation measurement, part segmentation, pointnet++},
location = {Chicago, IL, USA},
series = {ICMR '25}
}