arxiv:2202.03218

Efficient Adapter Transfer of Self-Supervised Speech Models for Automatic Speech Recognition

Published on Feb 7, 2022

Abstract

Self-supervised learning (SSL) is a powerful tool that allows learning of underlying representations from unlabeled data. Transformer-based models such as wav2vec 2.0 and HuBERT are leading the field in the speech domain. Generally, these models are fine-tuned on a small amount of labeled data for a downstream task such as Automatic Speech Recognition (ASR), which involves re-training the majority of the model for each task. Adapters are small, lightweight modules commonly used in Natural Language Processing (NLP) to adapt pre-trained models to new tasks. In this paper we propose applying adapters to wav2vec 2.0 to reduce the number of parameters required for downstream ASR tasks and to increase the scalability of the model to multiple tasks or languages. Using adapters, we can perform ASR while training fewer than 10% of parameters per task compared to full fine-tuning, with little degradation of performance. Ablations show that adding adapters to just the top few layers of the pre-trained network gives performance similar to full transfer, supporting the theory that higher pre-trained layers encode more phonemic information, and further optimizing efficiency.
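To make the idea concrete, below is a minimal sketch of the kind of parameter-efficient transfer the abstract describes: a standard bottleneck adapter (down-projection, nonlinearity, up-projection, residual connection) attached to a frozen pre-trained transformer layer. The module names, dimensions, and placement are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: layer norm, down-project, nonlinearity, up-project, residual add."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 256):
        super().__init__()
        self.layer_norm = nn.LayerNorm(hidden_dim)
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection means the adapter only perturbs the frozen
        # backbone's representation; only the adapter weights are trained.
        return x + self.up(self.act(self.down(self.layer_norm(x))))


class AdaptedTransformerLayer(nn.Module):
    """Wraps a frozen pre-trained transformer layer and applies an adapter to its output."""

    def __init__(self, pretrained_layer: nn.Module, hidden_dim: int, bottleneck_dim: int = 256):
        super().__init__()
        self.layer = pretrained_layer
        for p in self.layer.parameters():
            p.requires_grad = False  # keep the pre-trained weights fixed
        self.adapter = Adapter(hidden_dim, bottleneck_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.layer(x))
```

In this kind of setup, only the adapter parameters (and typically a small task head) are updated per task, which is how the per-task trainable parameter count can stay below 10% of full fine-tuning; restricting adapters to the top few layers, as in the ablations, reduces that count further.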
