[ICML 2025 Workshop] A Survey on Prompt Tuning

Content

🚀 News • ✏️ Todo • ✨ Introduction • 🌳 Tree Overview • 📖 Paper List • 🎨 Visualisations • 📌 Citation • 🔖 License

 

Links

Project Page • Paper

🚀 News

  • [2025.06.11] The paper was accepted at the ICML 2025 Workshop.
  • [2025.02.05] This repository was created.
 
 
 

✏️ Todo

 
 
 

✨ Introduction

This survey reviews prompt tuning, a parameter-efficient approach for adapting language models by prepending trainable continuous vectors to the input while keeping the model frozen. We classify existing approaches into two categories: direct prompt learning and transfer learning. Direct prompt learning methods include general optimization approaches, encoder-based methods, decomposition strategies, and mixture-of-experts frameworks. Transfer learning methods consist of general transfer approaches, encoder-based methods, and decomposition strategies. For each method, we analyze its design, innovations, insights, advantages, and disadvantages, with illustrative visualizations comparing the different frameworks. We identify challenges in computational efficiency and training stability, and discuss future directions for improving training robustness and broadening the application scope.
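A minimal sketch of the basic idea, assuming a Hugging Face-style causal LM backbone; names such as `num_virtual_tokens` and `forward_with_prompt` are illustrative and not taken from the survey. Only the soft prompt is trained, while every backbone parameter stays frozen.

```python
# Minimal prompt tuning sketch (illustrative; backbone choice and names are assumptions).
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"            # hypothetical backbone
num_virtual_tokens = 20        # length of the trainable soft prompt

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every backbone parameter; only the soft prompt will receive gradients.
for p in model.parameters():
    p.requires_grad = False

embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

def forward_with_prompt(input_ids):
    # Look up token embeddings with the frozen embedding table.
    tok_embeds = model.get_input_embeddings()(input_ids)        # (B, T, D)
    batch = tok_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)     # (B, P, D)
    # Prepend the continuous prompt and run the frozen model on embeddings.
    inputs_embeds = torch.cat([prompt, tok_embeds], dim=1)      # (B, P+T, D)
    return model(inputs_embeds=inputs_embeds)

# Only the soft prompt is optimized.
optimizer = torch.optim.AdamW([soft_prompt], lr=3e-1)
```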

 
 
 

🌳 Tree Overview

Hierarchical overview of prompt tuning methods including direct learning and transfer learning.

 
 
 

📖 Paper List

Direct Learning:

  • General:

    • Token Embeddings:

      • [Prompt Tuning] The Power of Scale for Parameter-Efficient Prompt Tuning
      • [XPrompt] Exploring the Extreme of Prompt Tuning
    • KV Values:

      • [P-Tuning v2] Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
  • Encoder:

  • Decomposition:

    • Soft Prompt:

    • Input Prompt:

      • [DePT] Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
  • MoE:

    • Selection:

      • [SMoP] Towards Efficient and Effective Prompt Tuning with Sparse Mixture-of-Prompts
    • Selection & Efficiency:

      • [PT-MoE] PT-MoE: An Efficient Finetuning Framework for Integrating Mixture-of-Experts into Prompt Tuning

Transfer Learning:

  • General:

    • Selective:

    • Mixed:

      • [Multi-task, SPoT] Better Frozen Model Adaptation through Soft Prompt Transfer
    • Selection:

      • [ATTEMPT] Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
  • Encoder:

    • Input:

      • [TransPrompt] Towards an Automatic Transferable Prompting Framework for Few-shot Text Classification
    • Both:

      • [CTPT] Efficient Cross-Task Prompt Tuning for Few-Shot Conversational Emotion Recognition
  • Decomposition:

    • Soft Prompt:

      • [MPT] Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
 
 
 

🎨 Visualisations

Illustration of different prompt tuning (PT) methods. Prompt Tuning directly prepends soft prompts to the input. XPrompt applies pruning to soft prompts. P-Tuning v2 and Prefix Tuning incorporate prompts or KV pairs across all layers of the language model. P-Tuning and Residual PT use encoders to process soft prompts. Decomposed PT and DePT leverage matrix decomposition strategies. SPoT leverages pre-trained prompts from source tasks, ATTEMPT and CTPT utilize prompt mixing, TransPrompt uses encoders for task-specific and universal knowledge, while MPT decomposes prompts into shared and task-specific components.
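To make two of the families in the figure concrete, here is an illustrative sketch of a decomposition-style prompt (a low-rank factorization, in the spirit of Decomposed PT / DePT) and a mixture-style prompt selected by a router (in the spirit of SMoP / ATTEMPT). The module names, shapes, and routing rule are our own simplifications, not the exact designs of those papers.

```python
# Illustrative sketches only; module names and shapes are assumptions.
import torch
import torch.nn as nn

class LowRankPrompt(nn.Module):
    """Decomposition-style prompt: P = A @ B, so only rank-r factors are trained."""
    def __init__(self, prompt_len: int, embed_dim: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(prompt_len, rank) * 0.02)
        self.B = nn.Parameter(torch.randn(rank, embed_dim) * 0.02)

    def forward(self) -> torch.Tensor:
        return self.A @ self.B                                  # (prompt_len, embed_dim)

class PromptMixture(nn.Module):
    """Mixture-style prompt: a router mixes several (possibly pre-trained) prompts."""
    def __init__(self, num_prompts: int, prompt_len: int, embed_dim: int):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, prompt_len, embed_dim) * 0.02)
        self.router = nn.Linear(embed_dim, num_prompts)

    def forward(self, tok_embeds: torch.Tensor) -> torch.Tensor:
        # Route on the mean input embedding, then take a weighted sum of prompts.
        weights = self.router(tok_embeds.mean(dim=1)).softmax(dim=-1)  # (B, K)
        return torch.einsum("bk,kpd->bpd", weights, self.prompts)      # (B, P, D)

# Either module yields a soft prompt that is prepended exactly as in the sketch above.
lowrank = LowRankPrompt(prompt_len=20, embed_dim=768)
mixture = PromptMixture(num_prompts=4, prompt_len=20, embed_dim=768)
dummy_embeds = torch.randn(2, 16, 768)
print(lowrank().shape, mixture(dummy_embeds).shape)  # (20, 768) and (2, 20, 768)
```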

 
 
 

📌 Citation

@misc{li2025surveyprompttuning,
      title={A Survey on Prompt Tuning}, 
      author={Zongqian Li and Yixuan Su and Nigel Collier},
      year={2025},
      eprint={2507.06085},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.06085}, 
}
 
 
 

🔖 License

