---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-classification
datasets:
  - ClaimRev
widget:
- text: "Teachers are likely to educate children better than parents."
  context: "Homeschooling should be banned."
---

# Model
This model was obtained by fine-tuning `microsoft/deberta-base` on the extended ClaimRev dataset.

Paper: [To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support](https://arxiv.org/abs/2305.16799)

Authors: Gabriella Skitalinskaya and Henning Wachsmuth

# Suboptimal Claim Detection
We cast this task as binary classification: given an argumentative claim and some contextual information (in this case, the **parent claim** in the debate, which the claim in question supports or opposes), the objective is to decide whether the claim is in need of further revision or can be considered to be phrased more or less optimally.

# Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("gabski/deberta-suboptimal-claim-detection-with-parent-context")
model = AutoModelForSequenceClassification.from_pretrained("gabski/deberta-suboptimal-claim-detection-with-parent-context")

claim = "Teachers are likely to educate children better than parents."
parent_claim = "Homeschooling should be banned."

# Encode the claim and its parent claim as a sentence pair
model_input = tokenizer(claim, parent_claim, return_tensors="pt")
with torch.no_grad():
    model_outputs = model(**model_input)

# Convert the logits to class probabilities
outputs = torch.nn.functional.softmax(model_outputs.logits, dim=-1)
print(outputs)
```
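To turn the probabilities into a predicted label, you can take the argmax over the class dimension and look the index up in the model's `id2label` mapping (from `model.config`). A minimal sketch, using a hypothetical stand-in tensor in place of the real `outputs` above:

```python
import torch

# Hypothetical probabilities for the two classes; in practice, use the
# `outputs` tensor produced by the snippet above.
probs = torch.tensor([[0.2, 0.8]])

# Index of the most probable class (0 or 1); map it to a label name via
# model.config.id2label when the model is loaded.
predicted_class = probs.argmax(dim=-1).item()
print(predicted_class)  # → 1
```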