---
language:
- en
tags:
- nsp
- next-sentence-prediction
- t5
datasets:
- wikipedia
metrics:
- accuracy
---

# T5-base-nsp

T5-base-nsp is fine-tuned for the Next Sentence Prediction (NSP) task on the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) using the [t5-base](https://huggingface.co/t5-base) model. It was introduced in this [paper](https://arxiv.org/abs/2307.07331) and first released on this page.

## Model description

T5-base-nsp is a Transformer-based model that was fine-tuned for the Next Sentence Prediction task on 22,000 English Wikipedia articles.
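
Since T5 is a text-to-text model, NSP is cast as generation: the sentence pair is prefixed with `binary classification:` and the model generates the digit token `0` or `1` as the class label. A minimal sketch of the framing (the prefix is taken from the inference example below; which digit marks the true next sentence is our assumption and is not stated on this page):

```python
# Illustration of the text-to-text NSP framing (hypothetical example):
source = "binary classification: In Italy, pizza is presented unsliced."  # first sentence plus task prefix
candidate = "However, it is served sliced in Turkey."                     # candidate next sentence
target = "0"  # assumed label convention: "0" = is the next sentence, "1" = is not
```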

## Intended uses

- Perform Next Sentence Prediction (and compare the results with BERT models, since BERT supports this task natively; a comparable BERT call is sketched after this list).
- See how to fine-tune a T5 model using our [code](https://github.com/slds-lmu/stereotypes-multi/tree/main).
- Check our [paper](https://arxiv.org/abs/2307.07331) to see its results.
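
For comparison, BERT exposes NSP through `BertForNextSentencePrediction`; a minimal sketch (the checkpoint choice `bert-base-uncased` is ours):

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert_model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased").eval()

encoding = bert_tokenizer("In Italy, pizza is presented unsliced.",
                          "However, it is served sliced in Turkey.", return_tensors="pt")
with torch.no_grad():
    logits = bert_model(**encoding).logits  # index 0 = "is next", index 1 = "is not next"
print(torch.argmax(logits, dim=-1))
```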

## How to use

You can use this model directly for next sentence prediction. Here is how to use it in PyTorch:

### Necessary Initialization
```python
import torch
from transformers import T5ForConditionalGeneration, MT5Tokenizer
from huggingface_hub import hf_hub_download

class ModelNSP(torch.nn.Module):
    def __init__(self, pretrained_model, tokenizer, nsp_dim=300):
        super().__init__()
        # Vocabulary ids of the class-label tokens "0" and "1".
        self.zero_token, self.one_token = (self.find_label_encoding(x, tokenizer).item() for x in ["0", "1"])
        self.core_model = T5ForConditionalGeneration.from_pretrained(pretrained_model)
        # Classification head from fine-tuning; it is kept so that the checkpoint's
        # state_dict loads cleanly, although inference below relies on generate().
        self.nsp_head = torch.nn.Sequential(torch.nn.Linear(self.core_model.config.hidden_size, nsp_dim),
                                            torch.nn.Linear(nsp_dim, nsp_dim), torch.nn.Linear(nsp_dim, 2))

    def forward(self, input_ids, attention_mask=None):
        outputs = self.core_model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=3,
                                           output_scores=True, return_dict_in_generate=True)
        # Compare the scores assigned to the "0" and "1" tokens at the generation
        # step where the label token is produced, then normalize with softmax.
        logits = [torch.Tensor([score[self.zero_token], score[self.one_token]]) for score in outputs.scores[1]]
        return torch.stack(logits).softmax(dim=-1)

    @staticmethod
    def find_label_encoding(input_str, tokenizer):
        encoded_str = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
        # Labels may encode to two tokens (a leading "▁" piece plus the digit); keep only the digit id.
        return torch.index_select(encoded_str, 1, torch.tensor([1])) if encoded_str.size(dim=1) == 2 else encoded_str

tokenizer = MT5Tokenizer.from_pretrained("tolga-ozturk/t5-base-nsp")
model = torch.nn.DataParallel(ModelNSP("t5-base", tokenizer).eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/t5-base-nsp", filename="model_weights.bin")))
```
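
As a quick sanity check, you can inspect which vocabulary ids the label digits were mapped to (a hypothetical snippet, not part of the original card):

```python
nsp = model.module  # unwrap torch.nn.DataParallel to reach the ModelNSP instance
print(nsp.zero_token, nsp.one_token)  # ids whose generation scores are compared in forward()
```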

### Inference
```python
# Each pair is (first sentence with the "binary classification:" task prefix, candidate next sentence).
batch_texts = [("binary classification: In Italy, pizza is presented unsliced.", "The sky is blue."),
               ("binary classification: In Italy, pizza is presented unsliced.", "However, it is served sliced in Turkey.")]
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=batch_texts, truncation="longest_first",
                                           padding=True, return_tensors="pt", return_attention_mask=True, max_length=256)
# Prints the predicted class index for each pair.
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
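
For single pairs, it may be convenient to wrap the steps above in a small helper (a hypothetical convenience function, not part of the original card):

```python
def predict_nsp(first_sentence: str, second_sentence: str) -> int:
    """Return the predicted class index for one sentence pair."""
    pair = [("binary classification: " + first_sentence, second_sentence)]
    enc = tokenizer.batch_encode_plus(batch_text_or_text_pairs=pair, truncation="longest_first",
                                      padding=True, return_tensors="pt", return_attention_mask=True, max_length=256)
    probs = model(enc.input_ids, attention_mask=enc.attention_mask)
    return torch.argmax(probs, dim=-1).item()

print(predict_nsp("In Italy, pizza is presented unsliced.", "However, it is served sliced in Turkey."))
```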

## BibTeX entry and citation info

```bibtex
@misc{ozturk2023different,
    title={How Different Is Stereotypical Bias Across Languages?},
    author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
    year={2023},
    eprint={2307.07331},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

This work was done with the Ludwig-Maximilians-Universität Statistics group; don't forget to check out [their Hugging Face page](https://huggingface.co/misoda) for other interesting work!