Update README.md
---
language:
- English
tags:
- Clinical notes
- Discharge summaries
- longformer
license: "cc-by-4.0"
datasets:
- MIMIC-III
---

* This model continues the pre-training of RoBERTa-base using discharge summaries from the MIMIC-III dataset (a usage sketch is given below).

* Details can be found in the following paper:

> Xiang Dai, Ilias Chalkidis, Sune Darkner, and Desmond Elliott. 2022. Revisiting Transformer-based Models for Long Document Classification. (https://arxiv.org/abs/2204.06683)

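* Example of loading the checkpoint with the `transformers` library. The repository ID below is a placeholder assumption (the card does not state the ID); substitute the actual model ID.

```python
# Minimal loading sketch for this checkpoint with Hugging Face Transformers.
# NOTE: the repository ID below is a placeholder assumption, not the real ID
# of this model card; replace it with the actual ID before running.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "your-org/mimic-discharge-longformer"  # placeholder ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Encode a short example; the checkpoint accepts sequences up to 4096 tokens.
text = "The patient was admitted with chest pain and discharged in stable condition."
inputs = tokenizer(text, truncation=True, max_length=4096, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```
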
* Important hyper-parameters for continued pre-training:

| Hyper-parameter | Value |
|---|---|
| Max sequence length | 4096 |
| Batch size | 8 |
| Learning rate | 5e-5 |
| Training epochs | 6 |
| Training time | 130 GPU-hours |
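
For illustration only, the table above maps onto a standard masked-language-modelling setup roughly as follows. This is a sketch, not the authors' training script; the base checkpoint handling, dataset placeholder, and output path are assumptions (MIMIC-III itself requires a PhysioNet data use agreement).

```python
# Illustrative sketch of continued masked-language-model pre-training with the
# hyper-parameters listed above. This is NOT the authors' training script:
# the base checkpoint, dataset placeholder, and output directory are assumptions.
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "roberta-base"  # per the card; extending the context to 4096 tokens
                       # (e.g. Longformer-style attention) is not shown here
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Placeholder: a tokenized dataset of MIMIC-III discharge summaries
# (fields "input_ids" / "attention_mask", max length 4096).
train_dataset = None

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mimic-continued-pretraining",  # assumed output path
    per_device_train_batch_size=8,             # batch size 8
    learning_rate=5e-5,                        # learning rate 5e-5
    num_train_epochs=6,                        # 6 training epochs
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=train_dataset,  # supply the tokenized discharge summaries here
)
# trainer.train()  # uncomment once train_dataset is populated
```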