EvilScript committed
Commit 4462d9a · verified · 1 Parent(s): 2dd090d

Initial upload of Academic Sentiment Classifier

README.md ADDED
@@ -0,0 +1,136 @@
+ ---
+ language: en
+ library_name: transformers
+ pipeline_tag: text-classification
+ license: mit
+ tags:
+ - sentiment-analysis
+ - distilbert
+ - sequence-classification
+ - academic-peer-review
+ - openreview
+ ---
+
+ # Academic Sentiment Classifier (DistilBERT)
+
+ A DistilBERT-based sequence classification model that predicts the sentiment polarity of academic peer-review text (binary: negative vs. positive). It supports research on evaluating the sentiment of scholarly reviews and AI-generated critique, enabling large-scale, reproducible measurements for academic-style content.
+
+ ## Model details
+
+ - Architecture: DistilBERT for Sequence Classification (2 labels)
+ - Max input length used during training: 512 tokens
+ - Labels (checked programmatically below):
+   - LABEL_0 -> negative
+   - LABEL_1 -> positive
+ - Format: `safetensors`
+
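+ To verify the label mapping against the hosted config, a minimal sketch (the repository id is the same `YOUR_USERNAME` placeholder used in the usage examples below):
+
+ ```python
+ from transformers import AutoConfig
+
+ # The classification head ships with generic label names; this card maps
+ # LABEL_0 -> negative and LABEL_1 -> positive.
+ cfg = AutoConfig.from_pretrained("YOUR_USERNAME/academic-sentiment-classifier")
+ print(cfg.id2label)  # e.g. {0: 'LABEL_0', 1: 'LABEL_1'}
+ ```
+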
+ ## Intended uses & limitations
+
+ Intended uses:
+
+ - Analyze sentiment of peer-review snippets, full reviews, or similar scholarly discourse.
+ - Evaluate the effect of attacks (e.g., positive/negative steering) on generated reviews by measuring polarity shifts, as sketched below.
+
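+ For the polarity-shift use case, here is a minimal sketch; the review strings and the signed-score convention are illustrative assumptions, not a specific attack pipeline:
+
+ ```python
+ from transformers import pipeline
+
+ clf = pipeline("text-classification", model="YOUR_USERNAME/academic-sentiment-classifier")
+
+ def polarity(text: str) -> float:
+     # Signed polarity: +score when the top label is positive, -score when negative.
+     res = clf(text)[0]
+     return res["score"] if res["label"] == "LABEL_1" else -res["score"]
+
+ baseline = "The method is reasonable, but the evaluation is too narrow."
+ steered = "The method is elegant, and the evaluation is thorough and convincing."
+ print("polarity shift:", polarity(steered) - polarity(baseline))
+ ```
+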
+ Limitations:
+
+ - Binary polarity only (no neutral class); confidence scores should be interpreted with care.
+ - Domain-specific: optimized for academic review-style English text; may underperform on general-domain data.
+ - Not a replacement for human judgment or editorial decision-making.
+
+ Ethical considerations and bias:
+
+ - Scholarly reviews can contain technical jargon, hedging, and nuanced tone; polarity is an imperfect proxy for quality or fairness.
+ - The model may reflect biases present in the underlying corpus.
+
+ ## Training data
+
+ The model was fine-tuned on a corpus of academic peer-review text curated from OpenReview. The task is binary sentiment classification over review text spans.
+
+ Note: if you plan to use or extend the underlying data, review the OpenReview terms of use and any relevant dataset licenses.
+
+ ## Training procedure (high level)
+
+ - Base model: DistilBERT (transformers)
+ - Objective: single-label binary classification
+ - Tokenization: standard DistilBERT tokenizer, truncation to 512 tokens
+ - Optimizer/scheduler: standard Trainer defaults (AdamW with a linear learning-rate schedule)
+
+ Exact hyperparameters (learning rate, batch size, epochs) may vary across runs and are not pinned in this card; a minimal fine-tuning sketch follows.
+
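+ The sketch below illustrates the high-level recipe with the `Trainer` API; the toy in-memory dataset and the hyperparameter values are assumptions for illustration, not the original training setup:
+
+ ```python
+ from datasets import Dataset
+ from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
+                           Trainer, TrainingArguments)
+
+ tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "distilbert-base-uncased", num_labels=2
+ )
+
+ # Toy stand-in for the review corpus (label 1 = positive, 0 = negative).
+ train_ds = Dataset.from_dict({
+     "text": ["Strong, well-executed paper; I recommend acceptance.",
+              "The evaluation is incomplete and the claims are overstated."],
+     "label": [1, 0],
+ })
+
+ def tokenize(batch):
+     # Truncate to the 512-token limit used during training.
+     return tok(batch["text"], truncation=True, max_length=512)
+
+ train_ds = train_ds.map(tokenize, batched=True)
+
+ # Trainer defaults: AdamW optimizer with a linear learning-rate schedule.
+ args = TrainingArguments(output_dir="out", num_train_epochs=3,
+                          per_device_train_batch_size=16)
+ trainer = Trainer(model=model, args=args, train_dataset=train_ds,
+                   processing_class=tok)
+ trainer.train()
+ ```
+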
+ ## How to use
+
+ Basic pipeline usage (`YOUR_USERNAME` is a placeholder for the hosting namespace):
+
+ ```python
+ from transformers import pipeline
+
+ # The tokenizer is loaded from the same repository by default, and the
+ # pipeline returns only the top label unless `top_k` is set.
+ clf = pipeline(
+     task="text-classification",
+     model="YOUR_USERNAME/academic-sentiment-classifier",
+ )
+
+ text = "The paper is clearly written and provides strong empirical support for the claims."
+ print(clf(text))
+ # Example output: [{'label': 'LABEL_1', 'score': 0.97}]  # LABEL_1 -> positive
+ ```
+
+ If you prefer friendly labels, you can map them:
+
+ ```python
+ from transformers import pipeline
+
+ id2name = {"LABEL_0": "negative", "LABEL_1": "positive"}
+ clf = pipeline("text-classification", model="YOUR_USERNAME/academic-sentiment-classifier")
+ res = clf("This section lacks clarity and the experiments are inconclusive.")[0]
+ res["label"] = id2name.get(res["label"], res["label"])  # map to a human-friendly label
+ print(res)
+ ```
+
+ Batch inference:
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ tok = AutoTokenizer.from_pretrained("YOUR_USERNAME/academic-sentiment-classifier")
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "YOUR_USERNAME/academic-sentiment-classifier"
+ ).to(device)
+ model.eval()
+
+ texts = [
+     "I recommend acceptance; the methodology is solid and results are convincing.",
+     "Major concerns remain; the evaluation is incomplete and unclear.",
+ ]
+
+ inputs = tok(texts, padding=True, truncation=True, max_length=512, return_tensors="pt").to(device)
+ with torch.no_grad():
+     logits = model(**inputs).logits
+     probs = torch.softmax(logits, dim=-1)
+     pred_ids = probs.argmax(dim=-1)
+
+ # Map to friendly labels
+ id2name = {0: "negative", 1: "positive"}
+ preds = [id2name[i.item()] for i in pred_ids]
+ print(list(zip(texts, preds)))
+ ```
+
+ ## Evaluation
+
+ No benchmark figures are reported in this card yet. If you compute metrics on public datasets or benchmarks, consider sharing them via a pull request to this model card; a minimal scoring sketch follows.
+
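+ As a starting point, here is a minimal sketch that scores the classifier on a labeled set with scikit-learn; the two-example set is a stand-in for a real test split:
+
+ ```python
+ from sklearn.metrics import accuracy_score, f1_score
+ from transformers import pipeline
+
+ clf = pipeline("text-classification", model="YOUR_USERNAME/academic-sentiment-classifier")
+
+ # Hypothetical labeled examples (1 = positive, 0 = negative).
+ texts = ["The results are convincing and well presented.",
+          "The experiments are inconclusive and poorly motivated."]
+ gold = [1, 0]
+
+ preds = [1 if r["label"] == "LABEL_1" else 0 for r in clf(texts)]
+ print("accuracy:", accuracy_score(gold, preds))
+ print("f1:", f1_score(gold, preds))
+ ```
+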
+ ## License
+
+ The model weights and this card are released under the MIT license. Review and comply with any third-party data licenses if reusing the training data.
+
+ ## Citation
+
+ If you use this model, please cite the project:
+
+ ```bibtex
+ @software{academic_sentiment_classifier,
+   title = {Academic Sentiment Classifier (DistilBERT)},
+   year  = {2025},
+   url   = {https://huggingface.co/EvilScript/academic-sentiment-classifier}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "activation": "gelu",
+   "architectures": [
+     "DistilBertForSequenceClassification"
+   ],
+   "attention_dropout": 0.1,
+   "dim": 768,
+   "dropout": 0.1,
+   "dtype": "float32",
+   "hidden_dim": 3072,
+   "initializer_range": 0.02,
+   "max_position_embeddings": 512,
+   "model_type": "distilbert",
+   "n_heads": 12,
+   "n_layers": 6,
+   "pad_token_id": 0,
+   "problem_type": "single_label_classification",
+   "qa_dropout": 0.1,
+   "seq_classif_dropout": 0.2,
+   "sinusoidal_pos_embds": false,
+   "tie_weights_": true,
+   "transformers_version": "4.56.1",
+   "vocab_size": 30522
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:38c9adf16f21badfe6569b17c551a5167f4deea78f91887519513153a4382eb9
+ size 267832560
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "DistilBertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff