Search Results for author: Matheus Pereira

Found 3 papers, 2 papers with code

Towards Modular LLMs by Building and Reusing a Library of LoRAs

no code implementations • 18 May 2024 • Oleksiy Ostapenko, Zhan Su, Edoardo Maria Ponti, Laurent Charlin, Nicolas Le Roux, Matheus Pereira, Lucas Caccia, Alessandro Sordoni

The growing number of parameter-efficient adaptations of a base large language model (LLM) calls for studying whether we can reuse such trained adapters to improve performance for new tasks.

Language Modelling • Large Language Model

Multi-Head Adapter Routing for Cross-Task Generalization

1 code implementation • NeurIPS 2023 • Lucas Caccia, Edoardo Ponti, Zhan Su, Matheus Pereira, Nicolas Le Roux, Alessandro Sordoni

We find that routing is most beneficial during multi-task pre-training rather than during few-shot adaptation, and propose $\texttt{MHR}$-$\mu$, which discards routing and fine-tunes the average of the pre-trained adapters on each downstream task.
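For intuition, here is a minimal sketch of the idea behind $\texttt{MHR}$-$\mu$ as described in the abstract: drop the router, average the pre-trained adapters element-wise, and fine-tune that single averaged adapter on the downstream task. This is not the authors' implementation; the function name and the assumption that adapters are stored as PyTorch state dicts with identical keys are ours.

```python
# Hypothetical sketch (not the paper's code): average several pre-trained
# LoRA adapters into one adapter, which is then fine-tuned per downstream task.
from typing import Dict, List
import torch


def average_adapters(adapter_state_dicts: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Element-wise average of LoRA adapter state dicts sharing the same keys."""
    keys = adapter_state_dicts[0].keys()
    return {
        k: torch.stack([sd[k] for sd in adapter_state_dicts], dim=0).mean(dim=0)
        for k in keys
    }

# Usage: load the averaged state dict into the base model's adapter modules,
# then fine-tune it on the target task as a single adapter (no routing).
```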
