Plug-and-Play PPO: An Adaptive Point Prompt Optimizer Making SAM Greater

Description

Powered by extensive curated training data, the Segment Anything Model (SAM) demonstrates impressive generalization in open-world scenarios when guided by user-provided prompts. However, SAM's class-agnostic design makes its segmentation accuracy highly dependent on prompt quality. In this paper, we propose a novel Plug-and-Play dual-space Point Prompt Optimizer (PPO) that enhances the prompt distribution through deep reinforcement learning (DRL)-based heterogeneous graph optimization. PPO optimizes initial prompts for any task without requiring additional training, thereby improving SAM's downstream segmentation performance. Specifically, PPO constructs a dual-space heterogeneous graph, leveraging the robust feature-matching capability of a foundational pre-trained model to build internal feature and physical distance matrices; a DRL policy network then iteratively refines the distribution of prompt points to optimize segmentation predictions. In short, PPO recasts prompt optimization as a heterogeneous graph optimization task and uses DRL to build an effective, plug-and-play prompt optimizer, offering a promising solution for point prompt optimization across diverse segmentation tasks.

🚀🚀This work has been accepted by CVPR2025!🚀🚀
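
At the core of PPO is the dual-space heterogeneous graph built over the candidate prompt points. As a rough illustration (the exact matrix definitions below are assumptions for intuition, not the repository's code), the two spaces can be realized as an N×N feature-affinity matrix and an N×N physical-distance matrix:

```python
import numpy as np

def build_dual_space_graph(points: np.ndarray, feats: np.ndarray):
    """points: (N, 2) pixel coordinates of candidate prompt points.
    feats:  (N, D) per-point features from a pre-trained backbone (e.g., DINOv2).
    Returns the (N, N) feature-affinity and physical-distance matrices."""
    # Feature space: cosine similarity between point descriptors.
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    feature_affinity = f @ f.T
    # Physical space: pairwise Euclidean distance between pixel coordinates.
    diff = points[:, None, :] - points[None, :, :]
    physical_distance = np.linalg.norm(diff, axis=-1)
    return feature_affinity, physical_distance

# Toy usage with random stand-in data.
pts = np.random.rand(8, 2) * 512    # 8 candidate points on a 512x512 image
descs = np.random.rand(8, 384)      # 384-dim features (DINOv2-S width)
A_feat, D_phys = build_dual_space_graph(pts, descs)
```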

Usage

Setup

  • CUDA 12.7
  • Python 3.9.20
pip install -r requirements.txt

Datasets

../                          # parent directory
└── data                     # data path
    ├── reference_image      # the one-shot reference image
    ├── reference_mask       # the one-shot reference mask
    ├── target_image         # testing images
    ├── initial_indices      # initial prompt indices
    └── optimized_indices    # optimized prompt indices

Generate initial prompts

python generate_initial_prompts.py
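
Under the hood, the initial prompts come from one-shot feature matching between the reference image/mask pair and each target image. Below is a minimal sketch of that matching step, assuming patch features from a backbone such as DINOv2; the function and variable names are illustrative, not the script's actual API:

```python
import numpy as np

def match_initial_prompts(ref_feats, ref_fg_mask, tgt_feats, k=5):
    """ref_feats / tgt_feats: (H*W, D) patch features of reference / target image.
    ref_fg_mask: (H*W,) boolean foreground mask on the reference patch grid.
    Returns indices of the k target patches most similar to the reference foreground."""
    ref_fg = ref_feats[ref_fg_mask]                                  # (M, D)
    ref_fg = ref_fg / np.linalg.norm(ref_fg, axis=1, keepdims=True)
    tgt = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    sim = tgt @ ref_fg.T                                             # (H*W, M)
    score = sim.max(axis=1)               # best reference match per target patch
    return np.argsort(score)[-k:]         # top-k patch indices as initial prompts

# Toy usage on a 32x32 patch grid with random stand-in features.
idx = match_initial_prompts(np.random.rand(1024, 384),
                            np.random.rand(1024) > 0.9,
                            np.random.rand(1024, 384))
```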

Train PPO

python train_GPOA.py
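
The training script learns the DRL policy that moves prompt points over the graph. The following is a heavily simplified, REINFORCE-style sketch of such a refinement loop, where `policy` and `segmentation_reward` are placeholders (a real reward would score SAM's predicted mask, e.g., by IoU); it is not the repository's training code:

```python
import torch

# Placeholder policy: maps a per-point state (coords + graph context) to a move.
policy = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def segmentation_reward(points):
    # Placeholder reward; in practice this would score SAM's mask (e.g., IoU).
    return -points.pow(2).sum(dim=-1).mean()

points = torch.rand(8, 2)       # current prompt coordinates (normalized)
context = torch.rand(8, 2)      # placeholder for dual-space graph features
state = torch.cat([points, context], dim=-1)

for step in range(100):
    mean = policy(state)                          # proposed move per point
    dist = torch.distributions.Normal(mean, 0.1)
    move = dist.sample()
    reward = segmentation_reward(points + move)   # score the moved prompts
    # REINFORCE: weight each move's log-probability by the reward it earned.
    loss = -(dist.log_prob(move).sum(dim=-1) * reward.detach()).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```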

Optimize PPO with feature matching

python main_FM.py
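
The optimized indices in `data/optimized_indices` can then be fed to SAM as point prompts. A hedged example using segment-anything's public `SamPredictor` API follows; the checkpoint and file paths are placeholders:

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Checkpoint and file paths are placeholders; adjust to your setup.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("data/target_image/example.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

points = np.load("data/optimized_indices/example.npy")  # (N, 2) optimized coords
labels = np.ones(len(points), dtype=np.int64)           # 1 = foreground prompt
masks, scores, _ = predictor.predict(point_coords=points.astype(np.float32),
                                     point_labels=labels,
                                     multimask_output=False)
```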

Optimization results for different datasets

[Figure: qualitative prompt-optimization results across different datasets]

Acknowledgement

Thanks to DINOv2, SAM, and GBMSeg for serving as building blocks of PPO.
