tags:
  - silo
---

# Silo Language Models: Isolating Legal Risk in a Datastore

This is Silo-PDSW, first introduced in [Silo Language Models]() by researchers at the University of Washington, UC Berkeley, and the Allen Institute for AI.

### NOTE: Dependencies

To use the model, you need to install a specific fork of `transformers`:

```
pip install git+https://github.com/kernelmachine/transformers@openlm#egg=transformers
```

The model also depends on `xformers`, which you can install via:

```
pip install xformers
```

### Model Description

Silo-PDSW is a 1.3B-parameter, decoder-only language model trained on data in the public domain and under permissive software licenses from [the Open License Corpus (OLC)](https://huggingface.co/datasets/kernelmachine/open-license-corpus).

The model is based on the LLaMA architecture as implemented in [OpenLM]().

The model was trained on 128 A100 GPUs across 16 nodes.

### Model and Training Hyperparameters

We follow the model architecture of LLaMA and use the GPT-NeoX-20B tokenizer, with 50,432 BPE types.

During training, we use 2,048-token sequences that are packed across document boundaries, and we prepend a beginning-of-text token to every document.
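
The packing step can be illustrated with a short sketch. This is not the actual training pipeline; the BOS token id and the toy documents below are hypothetical, and only the 2,048-token sequence length and the prepended beginning-of-text token come from the description above.

```python
from typing import Iterable, List

SEQ_LEN = 2048  # sequence length used during training
BOS_ID = 0      # hypothetical beginning-of-text token id

def pack_documents(tokenized_docs: Iterable[List[int]]) -> List[List[int]]:
    """Concatenate documents (each prefixed with BOS) into one token stream,
    then slice it into fixed-length sequences that may cross document boundaries."""
    stream: List[int] = []
    for doc in tokenized_docs:
        stream.append(BOS_ID)
        stream.extend(doc)
    n_full = len(stream) // SEQ_LEN  # drop the trailing partial sequence
    return [stream[i * SEQ_LEN:(i + 1) * SEQ_LEN] for i in range(n_full)]

# Three toy "documents"; the third is long enough to span sequence boundaries.
docs = [[11, 12, 13], [21, 22], list(range(100, 5100))]
sequences = pack_documents(docs)
print(len(sequences), len(sequences[0]))  # 2 2048
```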

We use a weight decay of 0.1, the Adam optimizer with beta_2 of 0.95, 2,000 warmup steps, and a cosine learning rate schedule.
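
Concretely, such a schedule warms the learning rate up linearly and then decays it along a cosine curve. The sketch below assumes the peak learning rate of 1e-3 from the table that follows; the total step count and the final learning rate are illustrative, since they are not stated in this card.

```python
import math

PEAK_LR = 1e-3        # peak learning rate (see the table below)
WARMUP_STEPS = 2_000  # warmup steps from the description above
TOTAL_STEPS = 100_000 # assumed for illustration only
MIN_LR = 0.0          # assumed final learning rate

def learning_rate(step: int) -> float:
    """Linear warmup to PEAK_LR, then cosine decay down to MIN_LR."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

# Mid-warmup, end of warmup, and end of training.
print(learning_rate(1_000), learning_rate(WARMUP_STEPS), learning_rate(TOTAL_STEPS))
```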

| Model | Layers (#L) | Heads (#H) | d_model | Peak LR | Batch size (tokens) |
|-------|-------------|------------|---------|---------|----------------------|
| 1.3B  | 24          | 16         | 2048    | 1e-3    | 2.6M                 |
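
As a rough sanity check on the 1.3B figure, the settings above can be turned into a back-of-the-envelope parameter count. The SwiGLU FFN width and the tied input/output embeddings below are assumptions (they are not listed in the table), so the result is only approximate.

```python
# Rough parameter count for a LLaMA-style decoder with the settings above.
VOCAB = 50_432   # GPT-NeoX-20B tokenizer, BPE types
D_MODEL = 2048
N_LAYERS = 24
D_FF = 5632      # assumed SwiGLU hidden size (~8/3 * d_model, rounded up)

embeddings = VOCAB * D_MODEL                 # token embeddings (assumed tied with the output head)
attention_per_layer = 4 * D_MODEL * D_MODEL  # Q, K, V, and output projections
mlp_per_layer = 3 * D_MODEL * D_FF           # SwiGLU gate, up, and down projections

total = embeddings + N_LAYERS * (attention_per_layer + mlp_per_layer)
print(f"{total / 1e9:.2f}B parameters")      # ~1.3B, ignoring norms and biases
```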

### Training data

Silo-PDSW was trained on data in the public domain and under permissive software licenses from [the Open License Corpus (OLC)](https://huggingface.co/datasets/kernelmachine/open-license-corpus).

The model was trained on the following domain proportions (see the OLC repository for more details on the data sources in each domain):

| Domain       | Tokens (B) | %     |
|--------------|------------|-------|
| Code         | 58.9       | 59.1  |
| Legal        | 27.1       | 27.2  |
| Conversation | 5.9        | 5.9   |
| Math         | 3.5        | 3.5   |
| Books        | 2.9        | 2.9   |
| Science      | 1.2        | 1.2   |
| News         | 0.2        | 0.2   |
| Total        | 99.6       | 100.0 |

We train with early stopping for 250B tokens in total, or a little more than two epochs of training over this subset of OLC.

Since the distribution of OLC is highly skewed, we apply a simple upweighting scheme: every source that accounts for less than 5% of the corpus is upsampled by a factor of 3x, which we found to work well after a sweep over different settings.
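
A minimal sketch of this reweighting, using the token counts from the table above. The 3x factor and the 5% threshold come from the text; the exact mixing logic in the training pipeline may differ.

```python
# Upsample every domain that makes up less than 5% of OLC by 3x,
# then renormalize the weights into sampling proportions.
tokens_b = {
    "Code": 58.9, "Legal": 27.1, "Conversation": 5.9, "Math": 3.5,
    "Books": 2.9, "Science": 1.2, "News": 0.2,
}
total = sum(tokens_b.values())

weights = {
    domain: count * (3.0 if count / total < 0.05 else 1.0)
    for domain, count in tokens_b.items()
}
norm = sum(weights.values())
proportions = {domain: weight / norm for domain, weight in weights.items()}

for domain, p in sorted(proportions.items(), key=lambda kv: -kv[1]):
    print(f"{domain:<12} {p:6.1%}")
```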

### Intended Uses and Limitations

This model can be used for prompt-based evaluation on downstream tasks as well as for text generation.

### How to use

You can use this model directly with a `pipeline` for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model="kernelmachine/silo-pdsw-1.3b", device='cuda')
generator("Hello")
[{'generated_text': "Hello, I'm a new user of Ubuntu. I'm trying to install the latest version of Ubuntu"}]
```

By default, generation is deterministic. To use top-k sampling, set `do_sample=True`:

```python
from transformers import pipeline, set_seed
set_seed(32)
generator = pipeline('text-generation', model="kernelmachine/silo-pdsw-1.3b", device='cuda', do_sample=True)
generator("Hello")
[{'generated_text': 'Hello: Hello World;", ""));\n }\n\n [Test]\n public void'}]
```

### Limitations and Bias

Silo-PDSW inherits the biases and limitations of public-domain data, which carries a risk of toxic or otherwise unfair output due to the prevalence of older, copyright-expired text.

Silo-PDSW may also output personally identifiable information, because we did not filter this out of the training data.