---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- text-generation
- openlm
- silo
---

# Silo Language Models: Isolating Legal Risk in a Datastore

This is Silo-PDSW, first introduced in [Silo Language Models](https://arxiv.org/abs/2308.04430) by researchers at the University of Washington, UC Berkeley, and the Allen Institute for AI.

### NOTE: Dependencies

To use the model, you need to install a specific fork of `transformers`:

```
pip install git+https://github.com/kernelmachine/transformers@openlm#egg=transformers
```

The model also depends on `xformers`; install it via:

```
pip install xformers
```
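
A quick way to verify the installation (a minimal check, assuming both packages installed cleanly into the same environment):

```python
# Sanity-check that the transformers fork and xformers are importable.
import transformers
import xformers

print(transformers.__version__)
print(xformers.__version__)
```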

### Model Description


Silo-PDSW is a 1.3B-parameter, decoder-only language model trained on data in the public domain and under permissive software licenses from [the Open License Corpus (OLC)](https://huggingface.co/datasets/kernelmachine/open-license-corpus).

The model is based on the LLaMA architecture as implemented in [OpenLM](https://github.com/mlfoundations/open_lm).

The model was trained on 128 A100 GPUs across 16 nodes.


### Model and Training Hyperparameters

We follow the model architecture of LLaMA, and we use the GPT-NeoX-20B tokenizer with 50,432 BPE types.
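
As a quick sanity check of the tokenizer setup (a minimal sketch, assuming the GPT-NeoX-20B tokenizer hosted at `EleutherAI/gpt-neox-20b` on the Hugging Face Hub):

```python
from transformers import AutoTokenizer

# Load the GPT-NeoX-20B tokenizer described above.
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# Base BPE vocabulary size; the model's embedding table is sized 50,432,
# which pads the vocabulary (as in GPT-NeoX-20B) for training efficiency.
print(tok.vocab_size)
print(tok("Hello world").input_ids)
```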

During training, we use 2,048-token sequences that are packed across document boundaries, and we prepend a beginning-of-text token to every document.
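
To make the packing concrete, here is a minimal sketch (illustrative, not the actual training code); the `BOS_ID` value and the `docs` iterable are placeholders:

```python
# Sketch of cross-document sequence packing: each document gets a
# beginning-of-text token prepended, then the concatenated token stream
# is cut into fixed-length 2,048-token training sequences.
SEQ_LEN = 2048
BOS_ID = 0  # hypothetical beginning-of-text id; depends on the tokenizer

def pack_documents(docs):
    """docs: iterable of token-id lists. Yields packed SEQ_LEN sequences."""
    buffer = []
    for doc in docs:
        buffer.append(BOS_ID)        # prepend beginning-of-text to every document
        buffer.extend(doc)
        while len(buffer) >= SEQ_LEN:
            yield buffer[:SEQ_LEN]   # sequences may span document boundaries
            buffer = buffer[SEQ_LEN:]

# Example: three short "documents" packed into full-length sequences.
for seq in pack_documents([[5, 6, 7], [8, 9], [10] * 5000]):
    assert len(seq) == 2048
```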

We use a weight decay of 0.1, the Adam optimizer with beta_2 = 0.95, 2,000 warmup steps, and a cosine learning rate scheduler.
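
An illustrative PyTorch equivalent of this setup (not the authors' training code; beta_1 is assumed to be the default 0.9, decoupled weight decay via AdamW is assumed, and the total step count is a placeholder):

```python
import math
import torch

# AdamW with weight decay 0.1 and beta_2 = 0.95, 2,000 linear-warmup steps,
# then cosine decay from the 1e-3 peak LR given in the table below.
model = torch.nn.Linear(16, 16)  # stand-in module for the 1.3B model
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-3, betas=(0.9, 0.95), weight_decay=0.1
)

WARMUP_STEPS, TOTAL_STEPS = 2_000, 100_000  # TOTAL_STEPS is a placeholder

def lr_lambda(step: int) -> float:
    if step < WARMUP_STEPS:
        return step / WARMUP_STEPS                        # linear warmup
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return 0.5 * (1.0 + math.cos(math.pi * progress))     # cosine decay to 0

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```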


| Model | Layers | Heads | d_model | LR   | Batch size (tokens) |
|-------|--------|-------|---------|------|---------------------|
| 1.3B  | 24     | 16    | 2048    | 1e-3 | 2.6M                |



### Training data

Silo-PDSW was trained on data in the public domain and under permissive software licenses from [the Open License Corpus (OLC)](https://huggingface.co/datasets/kernelmachine/open-license-corpus). 

The model was trained on data with the following domain proportions (see the OLC repository for more details on the data sources in each domain):


| Domain       | Tokens (B) | %     |
|--------------|------------|-------|
| Code         | 58.9       | 59.1  |
| Legal        | 27.1       | 27.2  |
| Conversation | 5.9        | 5.9   |
| Math         | 3.5        | 3.5   |
| Books        | 2.9        | 2.9   |
| Science      | 1.2        | 1.2   |
| News         | 0.2        | 0.2   |
| Total        | 99.6       | 100.0 |

We train with early stopping for 250B tokens in total, or a little more than two epochs over this subset of OLC.

Since the distribution of OLC is highly skewed, we apply a simple upweighting scheme: we upsample all data that accounts for less than 5% of the corpus by a factor of 3x, which we found to work well after a sweep over different settings.
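
As a rough illustration of this scheme (not the exact preprocessing code; thresholding and renormalization details are simplified), using the domain token counts from the table above:

```python
# Domains contributing under 5% of tokens are upsampled by a factor of 3,
# then sampling probabilities are renormalized over the adjusted weights.
tokens_b = {"Code": 58.9, "Legal": 27.1, "Conversation": 5.9,
            "Math": 3.5, "Books": 2.9, "Science": 1.2, "News": 0.2}

total = sum(tokens_b.values())
weights = {d: (t * 3 if t / total < 0.05 else t) for d, t in tokens_b.items()}
norm = sum(weights.values())
probs = {d: w / norm for d, w in weights.items()}

for domain, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{domain:<12} {p:.3f}")
```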

### Intended Uses and Limitations

This model can be used for text generation and for prompting-based evaluation on downstream tasks.

### How to use


You can use this model directly with a Hugging Face `pipeline` for text generation:


```python
from transformers import pipeline
generator = pipeline('text-generation', model="kernelmachine/silo-pdsw-1.3b", device='cuda')
generator("Hello")
[{'generated_text': "Hello, I'm a new user of Ubuntu. I'm trying to install the latest version of Ubuntu"}]
```

By default, generation is deterministic (greedy decoding). To use top-k sampling, set `do_sample=True`:


```python
from transformers import pipeline, set_seed
set_seed(32)
generator = pipeline('text-generation', model="kernelmachine/silo-pdsw-1.3b", device='cuda', do_sample=True)
generator("Hello")
[{'generated_text': 'Hello: Hello World;", ""));\n        }\n\n        [Test]\n        public void'}]
```

### Limitations and Bias

Silo-PDSW inherits the biases and limitations of public-domain data, which carries a risk of toxic or otherwise unfair output due to the prevalence of older, copyright-expired text.

Silo-PDSW may also output personally identifiable information, because we did not filter PII out of the training data.