Thomas Lemberger committed
Commit 0ae1b3c · Parent(s): b7637ec
card

README.md CHANGED
```diff
@@ -15,7 +15,7 @@ metrics:
 
 ## Model description
 
-This model is a [RoBERTa base model](https://huggingface.co/roberta-base)
+This model is a [RoBERTa base pre-trained model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific texts from the life sciences, using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang).
 
 ## Intended uses & limitations
 
```
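The description added in this hunk amounts to continued masked-language-model pretraining of `roberta-base` on EMBO/biolang. A minimal sketch of that setup with the Hugging Face `transformers` and `datasets` libraries follows; the dataset config name `"MLM"`, the `text` column, and the split names are assumptions based on the card, and the model's actual training entry point is `lm.train` from the soda-roberta repository, not this code.

```python
# Minimal sketch: continued MLM pretraining of roberta-base on EMBO/biolang.
# Assumptions: dataset config "MLM", a "text" column, and train/validation
# splits. The actual run used `python -m lm.train ...` from soda-roberta.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

dataset = load_dataset("EMBO/biolang", "MLM")  # config name assumed

def tokenize(batch):
    # Truncate to the RoBERTa maximum input length.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

# Randomly masks 15% of tokens on the fly: the standard RoBERTa MLM objective.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lm-MLM"),  # placeholder output dir
    data_collator=collator,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```

Because the collator masks tokens dynamically at batch time, no pre-masked copy of the corpus is needed; the same tokenized dataset serves both training and evaluation.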
```diff
@@ -54,18 +54,18 @@ Training code is available at https://github.com/source-data/soda-roberta
 
 - Command: `python -m lm.train /data/json/oapmc_abstracts_figs/ MLM`
 - Tokenizer vocab size: 50265
-- Training data:
-- Training with: 12005390 examples
-- Evaluating on: 36713 examples
-- Epochs
-- per_device_train_batch_size
-- per_device_eval_batch_size
-- learning_rate
-- weight_decay
-- adam_beta1
-- adam_beta2
-- adam_epsilon
-- max_grad_norm
+- Training data: EMBO/biolang MLM
+- Training with: 12005390 examples
+- Evaluating on: 36713 examples
+- Epochs: 3.0
+- `per_device_train_batch_size`: 16
+- `per_device_eval_batch_size`: 16
+- `learning_rate`: 5e-05
+- `weight_decay`: 0.0
+- `adam_beta1`: 0.9
+- `adam_beta2`: 0.999
+- `adam_epsilon`: 1e-08
+- `max_grad_norm`: 1.0
 - tensorboard run: lm-MLM-2021-01-27T15-17-43.113766
 
 End of training:
```
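The values filled in by this hunk map directly onto Hugging Face `TrainingArguments` (the optimizer settings, learning rate, and gradient clipping are in fact the library defaults; the batch sizes are not). A minimal sketch of that configuration, with an illustrative `output_dir` that is not from the card:

```python
# Reconstruction of the hyperparameters listed above as TrainingArguments.
# Values are taken from the card; output_dir is a hypothetical placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lm-MLM",             # placeholder, not from the card
    num_train_epochs=3.0,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=5e-05,
    weight_decay=0.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    max_grad_norm=1.0,
)
```

The vocab size of 50265 matches the stock `roberta-base` tokenizer, consistent with continued pretraining rather than training a new tokenizer from scratch.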