---
license: apache-2.0
---

# Skimformer

## Model description

Skimformer is a two-stage Transformer that replaces self-attention with Skim-Attention, a self-attention module that computes attention solely from the 2D positions of tokens on the page. First, the skim-attention scores are computed once, using layout information alone; these attentions are then reused in every layer of a text-based Transformer encoder. For more details, please refer to our paper:

[Skim-Attention: Learning to Focus via Document Layout](https://arxiv.org/abs/2109.01078)
Laura Nguyen, Thomas Scialom, Jacopo Staiano, Benjamin Piwowarski, [EMNLP 2021](https://2021.emnlp.org/papers)
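
To make the two-stage design concrete, here is a minimal PyTorch sketch of the idea described above. It is not the released implementation: the class names (`SkimAttention`, `SkimformerSketch`), the bounding-box embedding, and all dimensions are illustrative assumptions; only the overall flow, layout-only attention computed once and then reused by every text layer, follows the description.

```python
import torch
import torch.nn as nn


class SkimAttention(nn.Module):
    """Stage 1 (sketch): attention scores computed from 2D token positions only."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.box_embed = nn.Linear(4, dim)   # (x0, y0, x1, y1) -> layout embedding (assumed)
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (batch, seq_len, 4) normalized bounding boxes of each token
        b, n, _ = boxes.shape
        h = self.box_embed(boxes)
        q = self.q_proj(h).view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(h).view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        return scores.softmax(dim=-1)        # (batch, heads, seq_len, seq_len)


class SkimformerSketch(nn.Module):
    """Stage 2 (sketch): a text encoder that reuses the skim-attention in every layer."""

    def __init__(self, vocab_size: int = 30522, dim: int = 64,
                 num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        self.skim = SkimAttention(dim, num_heads)
        self.tok_embed = nn.Embedding(vocab_size, dim)
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.v_projs = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])
        self.ffns = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_layers)
        ])

    def forward(self, input_ids: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        attn = self.skim(boxes)            # computed once, from layout alone
        x = self.tok_embed(input_ids)      # (batch, seq_len, dim)
        b, n, d = x.shape
        for v_proj, ffn in zip(self.v_projs, self.ffns):
            v = v_proj(x).view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
            ctx = (attn @ v).transpose(1, 2).reshape(b, n, d)
            x = x + ctx                    # the same skim-attention, reused in this layer
            x = x + ffn(x)
        return x


# Example (shapes only): 1 document, 16 tokens with normalized bounding boxes.
# model = SkimformerSketch()
# hidden = model(torch.randint(0, 30522, (1, 16)), torch.rand(1, 16, 4))
```

Because the attention map depends only on layout, it needs to be computed a single time per document and can then be shared across all encoder layers.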

## Citation

```bibtex
@article{nguyen2021skimattention,
  title={Skim-Attention: Learning to Focus via Document Layout},
  author={Laura Nguyen and Thomas Scialom and Jacopo Staiano and Benjamin Piwowarski},
  journal={arXiv preprint arXiv:2109.01078},
  year={2021}
}
```