
RefGrasp Dataset

A referring expression comprehension / segmentation dataset generation engine, capable of generating synthetic images and referring expressions for language-conditioned grasping. Based on Blender and BlenderProc.

The RefGrasp Dataset (6.5GB) can be downloaded from OneDrive or 夸克网盘 (Quark Netdisk).

Usage

Instructions on how to use the dataset can be found here.

Examples:

(Example images: fig1, fig2, fig3)

RefGrasp Dataset Generation

This part covers dataset generation. If you are only interested in using the dataset, please refer to the links and content above.
A conda environment is recommended for this project. Below is a guide to setting up the environment and running the dataset generation process. The project has been tested on Windows 10, Windows 11 and Ubuntu 18.04.

1. Clone the project:

cd /path/to/project
mkdir RefGrasp && cd RefGrasp
git clone https://github.com/Soappyooo/Refer_Grasp.git .

2. Create a conda environment:

conda create -n refgrasp python==3.10.13
conda activate refgrasp
pip install -r requirements.txt

3. Install BlenderProc:

pip install -e ./BlenderProc

4. Install required packages:

It may take a while to install the packages, as Blender 3.5 will be installed first.

python -m blenderproc pip install pandas tqdm debugpy openpyxl psutil
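Optionally, you can verify the installation with BlenderProc's built-in quickstart, which renders a small sample scene (this is a general BlenderProc command, not specific to this project):

python -m blenderproc quickstart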

5. Download models and scene:

Download the archive (760MB) from OneDrive or 夸克网盘 (Quark Netdisk). Extract the downloaded file to the project directory. The directory structure should look like this:

RefGrasp  
├─ blender_files
│  └─ background.blend
├─ BlenderProc  
├─ core  
├─ images 
├─ models 
│  └─ ...(290 files)
├─ tool_scripts
├─ utils
└─ ...(other files)  
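Before moving on, a quick sanity check can confirm the key files are in place. This is a minimal sketch for Linux/macOS shells; the paths are taken from the tree above and the default arguments below:

for f in blender_files/background.blend models/models_info_all.xlsx dataset_generation.py; do
  [ -e "$f" ] && echo "OK: $f" || echo "MISSING: $f"
done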

6. Run dataset_generation.py:

Generate a Demo:

Running the script will generate a demo dataset (about 50 images) in an output_demo directory.

python -m blenderproc run dataset_generation.py

Run with arguments on single GPU:

For example, to generate a dataset with 1000 iterations and 5 images per iteration, run:

python -m blenderproc run dataset_generation.py --output-dir ./output --iterations 1000 --images-per-iteration 5

All possible arguments are listed below; a combined example follows the list:

--gpu-id: GPU device id used for rendering. Defaults to 0.
--output-dir: Output directory of the generated dataset. Defaults to "./output_demo".
--obj-dir: Model files directory. Defaults to "./models".
--blender-scene-file-path: Background scene file path. Defaults to "./blender_files/background.blend".
--models-info-file-path: Models info file path. Defaults to "./models/models_info_all.xlsx".
--log-file-path: Log file path. Defaults to "./output_demo/dataset_generation.log".
--iterations: Number of iterations. Defaults to 10.
--images-per-iteration: Number of images per iteration. Total images <= iterations * images_per_iteration. Images within an iteration share the same scene, with different backgrounds, lighting and camera poses. Defaults to 5.
--resolution-width: Render resolution width. Defaults to 512.
--resolution-height: Render resolution height. Defaults to 512.
--scene-graph-rows: Scene graph rows. Defaults to 4.
--scene-graph-cols: Scene graph columns. Defaults to 4.
--seed: Random seed; should not be set unless debugging. Defaults to None.
--persistent-data-cleanup-interval: Interval at which persistent data is cleaned up. Defaults to 100.
--texture-limit: Texture size limit used for rendering. Defaults to "2048".
--cpu-threads: Number of CPU threads used for rendering; 0 for automatic. Defaults to 0.
--samples: Maximum samples for rendering; higher means less noise. Defaults to 512.
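For instance, a higher-resolution, lower-noise run on a second GPU could look like the following (the values are illustrative; every flag is documented above):

python -m blenderproc run dataset_generation.py --gpu-id 1 --output-dir ./output_hd --iterations 200 --images-per-iteration 5 --resolution-width 1024 --resolution-height 1024 --samples 1024

With --iterations 200 and --images-per-iteration 5, at most 200 * 5 = 1000 images are generated.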

Run with arguments on multiple GPUs:

For example, to generate a dataset with 1000 iterations and 5 images per iteration on 4 GPUs, run:

python dataset_generation_multi_gpu.py --gpu-ids 0,1,2,3 --output-dir ./output --iterations 1000 --images-per-iteration 5

All possible arguments are listed below; a further example follows the list:

--gpu-ids: GPU device ids used for rendering. Defaults to "0,1,2,3".
--output-dir: Output directory of the generated dataset. Defaults to "./output".
--obj-dir: Model files directory. Defaults to "./models".
--blender-scene-file-path: Background scene file path. Defaults to "./blender_files/background.blend".
--models-info-file-path: Models info file path. Defaults to "./models/models_info_all.xlsx".
--iterations: Number of iterations per GPU. Defaults to 10.
--images-per-iteration: Number of images per iteration. Total images <= iterations * images_per_iteration * gpu_count. Defaults to 5.
--resolution-width: Render resolution width. Defaults to 512.
--resolution-height: Render resolution height. Defaults to 512.
--scene-graph-rows: Scene graph rows. Defaults to 4.
--scene-graph-cols: Scene graph columns. Defaults to 4.
--persistent-data-cleanup-interval: Interval at which persistent data is cleaned up. Defaults to 100.
--texture-limit: Texture size limit used for rendering. Defaults to "2048".
--cpu-threads: Number of CPU threads used for rendering in each subprocess; 0 for automatic. Defaults to 16.
--samples: Maximum samples for rendering; higher means less noise. Defaults to 512.
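As a further illustration, a run on two GPUs with a reduced per-subprocess thread count could look like this (illustrative values; every flag is documented above):

python dataset_generation_multi_gpu.py --gpu-ids 0,1 --output-dir ./output --iterations 500 --images-per-iteration 5 --cpu-threads 8

Here at most 500 * 5 * 2 = 5000 images are generated in total.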

Acknowledgments

This project would not have been possible without the valuable work of several other open-source projects. We would like to extend our gratitude to the following repositories for their contributions, which were instrumental in the development of this project:

  • BlenderProc: Thank you for providing a procedural rendering pipeline and other useful functionalities that were adapted and integrated into this project.
  • polygon-transformer: We utilized and modified the polygon processing module, which greatly helped in annotating the dataset.

We highly appreciate the efforts of these projects and their maintainers. Their contributions to the open-source community are invaluable.
