arxiv:2106.09449

DocNLI: A Large-scale Dataset for Document-level Natural Language Inference

Published on Jun 17, 2021
Abstract

Natural language inference (NLI) has been formulated as a unified framework for solving various NLP problems such as relation extraction, question answering, and summarization. It has been studied intensively in recent years thanks to the availability of large-scale labeled datasets. However, most existing studies focus only on sentence-level inference, which limits the scope of NLI's application to downstream NLP problems. This work presents DocNLI, a newly constructed large-scale dataset for document-level NLI. DocNLI is transformed from a broad range of NLP problems and covers multiple genres of text. The premises always stay at document granularity, whereas the hypotheses vary in length from single sentences to passages of hundreds of words. Additionally, DocNLI contains far fewer annotation artifacts than those that are widespread in some popular sentence-level NLI datasets. Our experiments demonstrate that, even without fine-tuning, a model pretrained on DocNLI shows promising performance on popular sentence-level benchmarks and generalizes well to out-of-domain NLP tasks that rely on inference at document granularity. Task-specific fine-tuning brings further improvements. Data, code, and pretrained models are available at https://github.com/salesforce/DocNLI.
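To make the task formulation concrete, below is a minimal sketch of what a document-level NLI example could look like, following the abstract's description: the premise is a full document, while the hypothesis ranges from a single sentence to a longer passage. The field names, the sample text, and the binary label set (`entailment` / `not_entailment`) are illustrative assumptions, not taken from the released dataset schema.

```python
from dataclasses import dataclass


@dataclass
class DocNLIExample:
    """Hypothetical document-level NLI example (field names are assumed)."""
    premise: str     # document-granularity text
    hypothesis: str  # a single sentence up to a passage of hundreds of words
    label: str       # assumed binary label: "entailment" or "not_entailment"


# Illustrative instance: the premise is a short multi-sentence document,
# and the hypothesis is a single sentence it entails.
example = DocNLIExample(
    premise=(
        "The city council met on Tuesday and voted 7-2 to approve the new "
        "transit budget. The plan adds two bus routes downtown and extends "
        "evening service on existing lines."
    ),
    hypothesis="The council approved additional bus routes.",
    label="entailment",
)
```

A document-level NLI model would read the entire premise document before judging the hypothesis, rather than comparing a single premise sentence against a single hypothesis sentence as in sentence-level datasets.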
