# ALBERT for Math AR

This model is further pre-trained on Mathematics StackExchange questions and answers. It is based on ALBERT base v2 and uses the same tokenizer. In addition to the pre-training, the model was fine-tuned on math question answer retrieval: the sequence classification head is trained to output a relevance score when the question is given as the first segment and the answer as the second segment. This relevance score can be used to rank different answers for retrieval.

## Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The tokenizer is the original ALBERT base v2 tokenizer
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
# Sequence classification model with the fine-tuned relevance head
model = AutoModelForSequenceClassification.from_pretrained("AnReu/albert-for-math-ar-base-ft")
```
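
To obtain a relevance score, pass the question and a candidate answer as the two input segments and read the classification logits. The sketch below is only an illustration and not part of the original model card: the example question and answers are invented, and it assumes that index 1 of the two-class head corresponds to the "relevant" class.

```
import torch

question = "How can I show that sqrt(2) is irrational?"
candidate_answers = [
    "Assume sqrt(2) = p/q in lowest terms and derive a contradiction on parity.",
    "The derivative of x^2 is 2x.",
]

scores = []
for answer in candidate_answers:
    # Question as the first segment, answer as the second segment
    inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumption: index 1 is the "relevant" class
    scores.append(torch.softmax(logits, dim=-1)[0, 1].item())

# Rank the candidate answers by decreasing relevance score
for answer, score in sorted(zip(candidate_answers, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {answer}")
```

Scores computed this way can then be used to sort all candidate answers for a given question.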

## Reference
If you use this model, please consider referencing our paper:

```
@inproceedings{reusch2021tu_dbs,
  title={TU\_DBS in the ARQMath Lab 2021, CLEF},
  author={Reusch, Anja and Thiele, Maik and Lehner, Wolfgang},
  year={2021},
  organization={CLEF}
}
```