Content
🚀 News • ✏️ Todo • ✨ Introduction • 🌳 Tree Overview • 📖 Paper List • 🎨 Visualisations • Links
- [2025.06.11] This paper was accepted at an ICML 2025 workshop.
- [2025.02.05] This page was created.
This survey reviews prompt tuning, a parameter-efficient approach for adapting language models by prepending trainable continuous vectors to the input while keeping the model's parameters frozen. We classify existing approaches into two categories: direct prompt learning and transfer learning. Direct prompt learning methods include general optimization approaches, encoder-based methods, decomposition strategies, and mixture-of-experts frameworks. Transfer learning methods consist of general transfer approaches, encoder-based methods, and decomposition strategies. For each method, we analyze its design, innovations, insights, advantages, and disadvantages, with illustrative visualizations comparing the different frameworks. We identify challenges in computational efficiency and training stability, and discuss future directions for improving training robustness and broadening the scope of applications.
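For readers new to the technique, the sketch below illustrates the basic recipe of direct prompt tuning in PyTorch: a small matrix of trainable prompt embeddings is prepended to the token embeddings of a frozen backbone, and only that matrix is updated. The backbone (`t5-small`), prompt length, and learning rate are illustrative assumptions, not settings prescribed by the survey or the papers listed below.

```python
# Minimal sketch of soft prompt tuning with a frozen backbone (illustrative only).
import torch
import torch.nn as nn
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"      # assumed backbone for illustration
n_prompt_tokens = 20         # assumed soft prompt length

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
for p in model.parameters():     # keep the backbone frozen
    p.requires_grad = False

embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.02)

def forward_with_prompt(input_ids, attention_mask, labels):
    # Embed the input tokens, then prepend the trainable soft prompt.
    inputs_embeds = model.get_input_embeddings()(input_ids)
    batch_size = inputs_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
    inputs_embeds = torch.cat([prompt, inputs_embeds], dim=1)
    # Extend the attention mask to cover the prompt positions.
    prompt_mask = torch.ones(batch_size, n_prompt_tokens, dtype=attention_mask.dtype)
    attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
    return model(inputs_embeds=inputs_embeds,
                 attention_mask=attention_mask,
                 labels=labels)

# Only the soft prompt receives gradient updates.
optimizer = torch.optim.AdamW([soft_prompt], lr=0.3)
```

The individual methods below vary this recipe, for example by pruning the prompt, injecting it into every layer, passing it through an encoder, or factorizing it.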
Hierarchical overview of prompt tuning methods, covering direct prompt learning and transfer learning.
- Direct Prompt Learning
  - General:
    - Token Embeddings:
      - [Prompt Tuning] The Power of Scale for Parameter-Efficient Prompt Tuning
      - [XPrompt] Exploring the Extreme of Prompt Tuning
    - KV Values:
      - [P-Tuning v2] Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks
  - Encoder:
    - Token Embeddings:
      - [P-Tuning] GPT Understands, Too
      - [Residual Prompt Tuning] Improving Prompt Tuning with Residual Reparameterization
    - KV Values:
      - [Prefix Tuning] Optimizing Continuous Prompts for Generation
  - Decomposition:
    - Soft Prompt:
      - [Decomposed Prompt Tuning] Decomposed Prompt Tuning via Low-Rank Reparameterization
    - Input Prompt:
      - [DePT] Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
  - MoE:
- Transfer Learning
  - General:
    - Selective:
      - [Single-task SPoT] Better Frozen Model Adaptation through Soft Prompt Transfer
    - Mixed:
      - [Multi-task SPoT] Better Frozen Model Adaptation through Soft Prompt Transfer
    - Selection:
      - [ATTEMPT] Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
  - Encoder:
    - Input:
      - [TransPrompt] Towards an Automatic Transferable Prompting Framework for Few-shot Text Classification
    - Both:
      - [CTPT] Efficient Cross-Task Prompt Tuning for Few-Shot Conversational Emotion Recognition
  - Decomposition:
    - Soft Prompt:
      - [MPT] Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
Illustration of different prompt tuning (PT) methods. Prompt Tuning directly prepends soft prompts to the input. XPrompt applies pruning to soft prompts. P-Tuning v2 and Prefix Tuning incorporate prompts or KV pairs across all layers of the language model. P-Tuning and Residual PT use encoders to process soft prompts. Decomposed PT and DePT leverage matrix decomposition strategies. SPoT transfers pre-trained prompts from source tasks, ATTEMPT and CTPT mix multiple source prompts, TransPrompt uses encoders to capture task-specific and universal knowledge, and MPT decomposes prompts into shared and task-specific components.
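As a rough illustration of the decomposition strategies mentioned above (in the spirit of the low-rank reparameterization used by Decomposed Prompt Tuning), the sketch below replaces a full soft-prompt matrix with the product of two low-rank factors, so only the factors are trained. The prompt length, embedding dimension, and rank are illustrative assumptions, not the papers' actual configurations.

```python
# Minimal sketch of low-rank soft prompt decomposition (illustrative only).
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not values from the papers).
n_prompt_tokens, embed_dim, rank = 100, 768, 8

# A full soft prompt would train n_prompt_tokens * embed_dim values;
# the low-rank factors train only (n_prompt_tokens + embed_dim) * rank.
A = nn.Parameter(torch.randn(n_prompt_tokens, rank) * 0.02)
B = nn.Parameter(torch.randn(rank, embed_dim) * 0.02)

def soft_prompt() -> torch.Tensor:
    # Reconstruct the (n_prompt_tokens, embed_dim) prompt from its factors;
    # the result is prepended to the input embeddings as in standard prompt tuning.
    return A @ B

print(soft_prompt().shape)              # torch.Size([100, 768])
print(sum(p.numel() for p in (A, B)))   # 6944 trainable values vs. 76800 for a full prompt
```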
@misc{li2025surveyprompttuning,
title={A Survey on Prompt Tuning},
author={Zongqian Li and Yixuan Su and Nigel Collier},
year={2025},
eprint={2507.06085},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.06085},
}