LoRA-Augmented Generation (LAG) for Knowledge-Intensive Language Tasks
Published on arXiv, 2025
The proliferation of fine-tuned language model experts for specific tasks and domains signals the need for efficient selection and combination methods. We propose LoRA-Augmented Generation (LAG) for leveraging large libraries of knowledge and task-specific LoRA adapters. LAG requires no additional training or access to data, and efficiently filters, retrieves, and applies experts on a per-token and per-layer basis. We evaluate LAG on a variety of knowledge-intensive tasks, achieving superior performance over existing data-free methods. We also explore scenarios where additional data is available, demonstrating LAG's compatibility with alternative solutions such as retrieval-augmented generation (RAG).
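To make the per-token, per-layer routing idea concrete, here is a minimal sketch of how one layer's forward pass might select and apply a single LoRA expert per token. The routing keys (`adapter_keys`), the cosine-similarity scoring, the top-1 selection, and the function name `lag_layer_forward` are all illustrative assumptions for this sketch, not the paper's actual filtering and retrieval scheme.

```python
import torch
import torch.nn.functional as F

def lag_layer_forward(hidden, base_out, adapters, adapter_keys, scaling=1.0):
    """Route each token to one LoRA expert at this layer, then apply it.

    hidden:       (tokens, d_model) hidden states entering the layer
    base_out:     (tokens, d_out) output of the frozen base projection
    adapters:     list of (A, B) pairs; A: (r, d_model), B: (d_out, r)
    adapter_keys: (num_adapters, d_model) one routing key per adapter
                  (hypothetical, e.g. an embedding of the adapter's domain)
    """
    # Score every adapter against every token's hidden state.
    sims = F.cosine_similarity(
        hidden.unsqueeze(1),        # (tokens, 1, d_model)
        adapter_keys.unsqueeze(0),  # (1, num_adapters, d_model)
        dim=-1,
    )                               # (tokens, num_adapters)
    choice = sims.argmax(dim=-1)    # top-1 expert per token

    out = base_out.clone()
    for idx in choice.unique().tolist():
        A, B = adapters[idx]
        mask = choice == idx
        # Standard LoRA update, h -> h + s * B A x, applied only
        # to the tokens routed to this expert.
        out[mask] = out[mask] + scaling * (hidden[mask] @ A.T) @ B.T
    return out

# Toy usage with random weights:
tokens, d_model, d_out, r = 8, 64, 64, 4
hidden = torch.randn(tokens, d_model)
base_out = torch.randn(tokens, d_out)
adapters = [(torch.randn(r, d_model), torch.randn(d_out, r)) for _ in range(3)]
keys = torch.randn(3, d_model)
y = lag_layer_forward(hidden, base_out, adapters, keys, scaling=0.5)
```

Because routing happens independently at each layer, different layers can draw on different experts for the same token, which is what distinguishes this style of composition from selecting a single adapter for the whole prompt.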
Recommended citation: William Fleshman and Benjamin Van Durme, LoRA-Augmented Generation (LAG) for Knowledge-Intensive Language Tasks, 2025. https://fleshman.dev/files/lag.pdf