---
license: cc-by-nc-4.0
---

# NPM-single

NPM-single is a nonparametric masked language model pretrained on English text data.
It was introduced in ["Nonparametric Masked Language Modeling"][paper]
and first released in [facebookresearch/NPM][repo].

### Model description

NPM consists of an encoder and a reference corpus, and it models a nonparametric distribution over that corpus.
The key idea is to map all the phrases in the corpus into a dense vector space using the
encoder and, when given a query with a MASK at inference time, use the encoder to locate the nearest
phrase in the corpus and fill in the MASK.

NPM-single is a variant of NPM that retrieves a token from the corpus instead of a phrase.

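To make the retrieve-and-fill idea concrete, here is a minimal toy sketch, assuming a precomputed datastore of token embeddings and maximum-similarity search over normalized vectors. The tokens, vectors, and function names below are illustrative only; they are not the actual RoBERTa-based encoder or the API of the released code.

```python
# Toy sketch of nonparametric MASK filling: retrieve the corpus token
# whose embedding is nearest to the query's MASK representation.
# (Hypothetical 3-d embeddings; the real encoder produces dense vectors
# by encoding the reference corpus once, offline.)
import numpy as np

def normalize(v):
    """L2-normalize vectors so the dot product is cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Datastore: one entry per corpus token, paired with its dense vector.
tokens = ["Seattle", "Tokyo", "Paris"]
datastore = normalize(np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.1, 0.9],
]))

def fill_mask(query_vec):
    """Return the corpus token nearest to the MASK representation."""
    scores = datastore @ normalize(query_vec)  # cosine similarity to every entry
    return tokens[int(np.argmax(scores))]

# A query vector standing in for the encoding of "[MASK] is in Washington state."
print(fill_mask(np.array([0.85, 0.2, 0.05])))  # -> Seattle
```

At real corpus scale the exhaustive dot product above would be replaced by an approximate nearest-neighbor index; see the [original repo][repo] for the actual inference code.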
### Intended uses & limitations

While this repo includes the encoder weights, NPM-single has to be used together with a datastore.
For more details on how to use NPM-single, please refer to the [original repo][repo].

Note that this model is primarily for filling in a MASK token. Future work can investigate how to use NPM-single for text generation.

### Training procedure

NPM-single was trained on English Wikipedia (August 2019) and an English portion of CC-News (Mackenzie et al., 2020; February 2019), which contain 13B tokens in total.
NPM-single used the model architecture and initial weights of RoBERTa large (Liu et al., 2019), consisting of 354M parameters.
Training was done for 100,000 steps on thirty-two 32GB GPUs.

More details about training can be found in the [paper][paper].
Code for training NPM-single can be found in the [original repo][repo].

### Evaluation results

NPM-single is evaluated on nine closed-set tasks (tasks where a small set of answer options is given).
NPM-single consistently outperforms significantly larger models such as GPT-3 and T5.
Detailed results can be found in the [paper][paper].

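For closed-set tasks, the retrieval step can be restricted to the task's answer options rather than the whole corpus. The following is a hedged sketch of that idea under the same toy-embedding assumption as above; the option names and vectors are illustrative, not taken from the evaluation code.

```python
# Toy sketch of closed-set prediction with a nonparametric LM:
# score only the task's answer options against the MASK representation
# and pick the highest-scoring one. (Hypothetical 2-d embeddings.)
import numpy as np

def normalize(v):
    """L2-normalize a vector so dot products are cosine similarities."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical options for a sentiment task, each with a toy embedding.
options = {
    "great": np.array([0.9, 0.1]),
    "terrible": np.array([0.1, 0.9]),
}

def predict(query_vec, options):
    """Score each answer option against the MASK representation; return the best."""
    q = normalize(query_vec)
    scores = {tok: float(normalize(vec) @ q) for tok, vec in options.items()}
    return max(scores, key=scores.get)

# A query vector standing in for the encoding of "The movie was [MASK]."
print(predict(np.array([0.8, 0.2]), options))  # -> great
```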
### BibTeX entry and citation info

```
@article{min2022nonparametric,
  title={Nonparametric Masked Language Modeling},
  author={Min, Sewon and Shi, Weijia and Lewis, Mike and Chen, Xilun and Yih, Wen-tau and Hajishirzi, Hannaneh and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2212.01349},
  year={2022}
}
```

[paper]: https://arxiv.org/abs/2212.01349
[repo]: https://github.com/facebookresearch/NPM