LLMs_PEFT
• Combining Modular Skills in Multitask Learning (arXiv:2202.13914)
• The Power of Scale for Parameter-Efficient Prompt Tuning (arXiv:2104.08691)
• Prefix-Tuning: Optimizing Continuous Prompts for Generation (arXiv:2101.00190)
• GPT Understands, Too (arXiv:2103.10385)
• Controlling Text-to-Image Diffusion by Orthogonal Finetuning (arXiv:2306.07280)
• Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning (arXiv:2303.02861)
• Low-rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition (arXiv:2309.15223)
• Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation (arXiv:2309.14859)
• FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning (arXiv:2108.06098)
• LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention (arXiv:2303.16199)
• Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning (arXiv:2205.05638)
• Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (arXiv:2303.10512)