AdapterFusion: Non-Destructive Task Composition for Transfer Learning

Abstract

Current approaches to solving classification tasks in NLP involve fine-tuning a pre-trained language model on a single target task. This paper focuses on sharing knowledge extracted not only from a pre-trained language model, but also from several source tasks, in order to achieve better performance on the target task. Sequential fine-tuning and multi-task learning are two methods for sharing information, but they suffer from problems such as catastrophic forgetting and difficulties in balancing multiple tasks. Additionally, multi-task learning requires simultaneous access to the data for every task, which prevents easy extension to new tasks on the fly. We propose a new architecture as well as a two-stage learning algorithm that allows us to effectively share knowledge from multiple tasks while avoiding these crucial problems. In the first stage, we learn task-specific parameters that encapsulate the knowledge from each task. We then combine these learned representations in a separate combination step, termed AdapterFusion. We show that by separating the two stages, i.e., knowledge extraction and knowledge combination, the classifier can effectively exploit the representations learned from multiple tasks in a non-destructive manner. We empirically evaluate our transfer learning approach on 16 diverse NLP tasks, and show that it outperforms traditional strategies such as full fine-tuning of the model as well as multi-task learning.
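The two-stage recipe described above (first train one adapter per source task, then learn an attention-style combination of their outputs) can be sketched in plain NumPy. Everything here is an illustrative assumption, not the paper's implementation: function names, shapes, and the use of per-token vectors are simplified, and the real fusion layer sits inside each transformer block and attends over adapter outputs with learned query, key, and value projections.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def adapter_fusion(h, adapter_outputs, W_q, W_k, W_v):
    """Hypothetical sketch of the fusion step for a single hidden vector.

    h                : hidden state of the layer, shape (d,)
    adapter_outputs  : list of outputs from the N frozen task adapters, each (d,)
    W_q, W_k, W_v    : learned fusion parameters, each (d, d) (assumed shapes)
    """
    q = h @ W_q                                        # query from the hidden state
    scores = np.array([(o @ W_k) @ q for o in adapter_outputs])
    weights = softmax(scores)                          # one weight per source task
    # Convex combination of value-projected adapter outputs.
    return sum(w * (o @ W_v) for w, o in zip(weights, adapter_outputs))

# Toy usage with random vectors standing in for adapter outputs.
rng = np.random.default_rng(0)
d = 8
h = rng.normal(size=d)
outs = [rng.normal(size=d) for _ in range(3)]          # three source-task adapters
W = np.eye(d)                                          # identity projections for the demo
fused = adapter_fusion(h, outs, W, W, W)
```

Because the adapters themselves stay frozen in the second stage, only the fusion parameters are trained on the target task, which is what makes the composition non-destructive: no source-task knowledge is overwritten.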

Bibtex

@inproceedings{pfeiffer-etal-2020-adapterfusion,
    title = "{AdapterFusion}:  Non-Destructive Task Composition for Transfer Learning",
    author = {Pfeiffer, Jonas and
      Kamath, Aishwarya and
      R{\"u}ckl{\'e}, Andreas  and
      Cho, Kyunghyun and
      Gurevych, Iryna},
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/pdf/2005.00247.pdf"
}