Stars
minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model (see the LoRA sketch after this list).
LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation
This repository is a PyTorch implementation of LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning (accepted to ICLR 2025).
WebNLG+ Challenge 2020: Scripts to evaluate the RDF-to-text task with automatic metrics (BLEU, METEOR, chrF++, TER and BERT-Score)
MSPLoRA: A Multi-Scale Pyramid Low-Rank Adaptation for Efficient Model Fine-Tuning
[ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation
[ICLR 2025] The official PyTorch implementation of "Dynamic Low-Rank Sparse Adaptation for Large Language Models".
LoR2C : Low-Rank Residual Connection Adaptation for Parameter-Efficient Fine-Tuning
One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
PyTorch implementation of the 2D Discrete Wavelet Transform (DWT), the Dual-Tree Complex Wavelet Transform (DTCWT), and a DTCWT-based ScatterNet
[NAACL 2025] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
[NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning
Code for LAS-AT: Adversarial Training with Learnable Attack Strategy (CVPR 2022)
Code for the paper "Neat: Nonlinear Parameter-efficient Adaptation of Pre-trained Models"
[NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models
Code and data for the GPT-4-based benchmark in the Vicuna blog post
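Many of the entries above (minLoRA, DoRA, MiLoRA, HydraLoRA, RaSA, ...) build on the same low-rank adaptation idea: freeze the pretrained weight and train only a small low-rank update. The sketch below is a generic illustration of that idea, not the API of any listed library; the class and parameter names are assumptions chosen for clarity.

```python
# Minimal, generic LoRA sketch (assumed names; not minLoRA's actual API).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scaling = alpha / r
        # A starts with small random values and B with zeros, so the adapted
        # layer initially computes exactly the same function as the base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + scaling * B A x  (only A and B receive gradients)
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: swap a linear layer for its LoRA-wrapped version.
layer = nn.Linear(768, 768)
lora_layer = LoRALinear(layer, r=8)
out = lora_layer(torch.randn(2, 768))
```

The variants listed above differ mainly in how this low-rank update is parameterized, initialized, or shared across tasks and layers.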