hancheolp committed commit e2def95 (verified; parent: 8ebf3f8)

Create README.md

---
license: mit
language:
- en
base_model:
- FacebookAI/roberta-base
---

# Ambiguity-aware RoBERTa

This model is trained on a subset of the MNLI dataset and represents the ambiguity that arises in natural language inference as a probability distribution over the labels (i.e., its softmax output). It was introduced in the paper ["Deep Model Compression Also Helps Models Capture Ambiguity"](https://aclanthology.org/2023.acl-long.381.pdf) (ACL 2023).

# Usage

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained('hancheolp/ambiguity-aware-roberta-snli')
model = RobertaForSequenceClassification.from_pretrained('hancheolp/ambiguity-aware-roberta-snli')

premise = "To the sociologists' speculations, add mine."
hypothesis = "I don't agree with sociologists."
encoded_input = tokenizer(premise, hypothesis, return_tensors='pt')

# Inference only: no gradients needed.
with torch.no_grad():
    output = model(**encoded_input)

# Convert the logits into a probability distribution over the three labels.
distribution = output.logits.softmax(dim=-1)
```

Each index of the output vector represents the following label:
* 0: entailment
* 1: neutral
* 2: contradiction
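As a sketch of how the resulting distribution might be interpreted, the snippet below maps it to the most likely label and a normalized-entropy ambiguity score. The label order follows the list above; the helper name and the example distribution values are illustrative, not actual model output.

```python
import math

# Index order from the model card above.
LABELS = ["entailment", "neutral", "contradiction"]

def interpret(distribution):
    """Return the top label and an ambiguity score for a softmax distribution.

    The ambiguity score is the normalized Shannon entropy: 0.0 for a
    one-hot (unambiguous) distribution, 1.0 for a uniform (maximally
    ambiguous) one.
    """
    top = LABELS[max(range(len(distribution)), key=distribution.__getitem__)]
    entropy = -sum(p * math.log(p) for p in distribution if p > 0)
    return top, entropy / math.log(len(distribution))

# Illustrative probabilities (replace with distribution[0].tolist() from the model):
label, ambiguity = interpret([0.15, 0.60, 0.25])
```

A high ambiguity score indicates that annotators would likely disagree on the example, which is exactly the signal this model is intended to capture.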