We proposed two novel methods to align the representation of multiple languages:
- **Cross-Attention Masked Language Modeling (CAMLM):** In CAMLM, we learn the multilingual semantic representation by restoring the MASK tokens in the input sentences.
- **Back-Translation Masked Language Modeling (BTMLM):** We use BTMLM to train our model to generate pseudo-parallel sentences from monolingual sentences. The generated pairs are then used as model input to further align the cross-lingual semantics, enhancing the multilingual representation.

![ernie-m](ernie_m.png)
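The BTMLM idea above can be sketched as a two-step procedure: first generate a pseudo-parallel pair from a monolingual sentence, then train on restoring masked tokens in the concatenated pair. The snippet below is a minimal toy illustration, not the ERNIE-M implementation: `toy_predict_masked` stands in for a CAMLM-pretrained model, and the tiny lexicon is invented for the example.

```python
# Toy sketch of Back-Translation Masked Language Modeling (BTMLM).
# Step 1: append MASK slots to a monolingual sentence; a (stand-in) model
#         fills them in another language, yielding a pseudo-parallel pair.
# Step 2: mask a token in the concatenated pair; the training objective is
#         to restore it (here we only build the training example).

MASK = "[MASK]"

# Hypothetical stand-in for a CAMLM-pretrained model's masked-token predictions.
TOY_LEXICON = {"i": "ich", "love": "liebe", "music": "musik"}

def toy_predict_masked(src_tokens, n_masks):
    """Fill n_masks MASK slots with 'translations' of the source tokens."""
    preds = [TOY_LEXICON.get(t, t) for t in src_tokens]
    return preds[:n_masks]

def generate_pseudo_pair(src_tokens, n_masks):
    # Step 1: the model restores the MASK slots in the target language,
    # producing a pseudo-parallel sentence for the monolingual input.
    tgt_tokens = toy_predict_masked(src_tokens, n_masks)
    return src_tokens, tgt_tokens

def make_btmlm_example(src_tokens, tgt_tokens, mask_pos):
    # Step 2: concatenate the pseudo-parallel pair and mask one position;
    # restoring the masked token aligns the two languages' semantics.
    pair = src_tokens + tgt_tokens
    label = pair[mask_pos]
    masked = pair[:mask_pos] + [MASK] + pair[mask_pos + 1:]
    return masked, label

src, tgt = generate_pseudo_pair(["i", "love", "music"], n_masks=3)
example, label = make_btmlm_example(src, tgt, mask_pos=4)
print(example, label)
```

In the real method the predictions come from the pretrained model rather than a lookup table, but the data flow (monolingual input, pseudo-parallel pair, masked restoration) is the same.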
## Benchmark