---
language:
- en

pipeline_tag: text-classification
---

# Span NLI BERT (base)

This is a **BERT-base** model ([`bert-base-uncased`][2]) fine-tuned on the [**ContractNLI**][3] dataset (non-disclosure agreements) with the **Span NLI BERT** model architecture,
from [*ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts* (Koreeda and Manning, 2021)][1].

Given a hypothesis and a document as the premise, the **Span NLI BERT** model predicts an NLI label and identifies the evidence supporting it.
Spans in the document must be pre-annotated; evidence is always a full sentence or an item in an enumerated list in the document.
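
If the checkpoint exposes a standard sequence-classification head, the NLI-label side can be queried through the plain `transformers` API. The following is a minimal sketch, not the authors' reference usage: the model id `user/span-nli-bert-base` is a placeholder, the label names are assumptions, and evidence identification itself requires the custom Span NLI BERT heads driven by the scripts in the authors' repo.

```python
# Hedged sketch: NLI label prediction only, assuming the checkpoint loads
# into a standard sequence-classification head. Evidence identification
# (the span-prediction side of Span NLI BERT) is not covered here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "user/span-nli-bert-base"  # placeholder: replace with this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "The Receiving Party shall keep all Confidential Information strictly confidential."
hypothesis = "Receiving Party shall not disclose Confidential Information to any third parties."

# The document (premise) and the hypothesis are encoded as one sentence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

label_id = int(logits.argmax(dim=-1))
print(model.config.id2label[label_id])  # assumed labels, e.g. Entailment / Contradiction / NotMentioned
```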

For details of the architecture and usage of the training/testing scripts, see the paper and the authors' [GitHub repo][4].
This model was fine-tuned with the hyperparameters in `data/conf_base.yml` from that repo,
which differ from the hyperparameters that produced the best dev scores, as noted in the Appendix of the paper.

arXiv: <https://arxiv.org/abs/2110.01799>

[1]: https://aclanthology.org/2021.findings-emnlp.164/
[2]: https://huggingface.co/bert-base-uncased
[3]: https://stanfordnlp.github.io/contract-nli/
[4]: https://github.com/stanfordnlp/contract-nli-bert