arxiv:1905.05583
How to Fine-Tune BERT for Text Classification?
Published on May 14, 2019
Authors: Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang
Abstract
Language model pre-training has proven to be useful for learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved impressive results on many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on text classification tasks and provide a general solution for BERT fine-tuning. The proposed solution obtains new state-of-the-art results on eight widely studied text classification datasets.
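For orientation, a minimal sketch of the kind of fine-tuning the abstract describes, written with the Hugging Face transformers Trainer API rather than the paper's own code. The checkpoint, dataset (IMDB), and hyperparameters here are illustrative assumptions, not the paper's exact setup; the small learning rate reflects common BERT fine-tuning practice.

```python
# Sketch: fine-tune a BERT encoder with a classification head on a text
# classification dataset. All choices below (checkpoint, dataset, epochs,
# batch size, learning rate) are assumptions for illustration only.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "bert-base-uncased"      # assumed backbone
dataset = load_dataset("imdb")        # assumed binary classification corpus
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize(batch):
    # Truncate to BERT's 512-token input limit; padding is handled
    # dynamically by the default data collator at batch time.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2
)

args = TrainingArguments(
    output_dir="bert-text-cls",
    learning_rate=2e-5,               # small LR, typical for BERT fine-tuning
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,              # enables dynamic padding in batches
)
trainer.train()
```

The paper itself compares a range of fine-tuning strategies (e.g. which layers to use and how to schedule learning rates); the snippet above only shows the plain end-to-end fine-tuning baseline that such strategies build on.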