Commit e0d32c4 · 0 parent(s)
.gitattributes ADDED
@@ -0,0 +1,2 @@
*.bin filter=lfs diff=lfs merge=lfs -text
*.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,279 @@
---
library_name: mlc-llm
tags:
- mlc-llm
- web-llm
language:
- en
base_model:
- SicariusSicariiStuff/Eximius_Persona_5B
pipeline_tag: text-generation
---

This is an MLC-converted weight of the [Eximius_Persona_5B](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B) model in MLC format `q4f16_1`.

The model can be used with the [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm) projects.

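As a quick, non-authoritative sketch (not part of the original card): a `q4f16_1` MLC weight like this one can typically be loaded through MLC-LLM's Python `MLCEngine`, which exposes an OpenAI-style chat API, roughly as in the MLC-LLM quick-start shown below. The repo ID is a placeholder for this repository's path; WebLLM usage in the browser follows the same chat-completions shape via its TypeScript API.

```
# Minimal sketch (assumptions noted): load an MLC q4f16_1 weight with MLC-LLM's
# Python engine. "HF://<this-repo-id>" is a placeholder -- point it at this
# repository or a local folder containing mlc-chat-config.json and the
# params_shard_*.bin files.
from mlc_llm import MLCEngine

model = "HF://<this-repo-id>"  # placeholder, not a real repo path
engine = MLCEngine(model)

# OpenAI-style chat completion, streamed token by token.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content or "", end="", flush=True)
print()

engine.terminate()
```
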
---

<div align="center">
<b style="font-size: 40px;">Eximius_Persona_5B</b>


</div>


<img src="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B/resolve/main/Images/Eximius_Persona_5B.png" alt="Eximius_Persona_5B" style="width: 70%; min-width: 500px; display: block; margin: auto;">


---

<a href="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B#tldr" style="color: purple; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">Click here for TL;DR</a>

---

I wanted to create a model with an **exceptional** capacity for using varied speech patterns and **fresh** role-play takes. The model had to have a unique personality, not on a surface level but on the inside, **for real**. Unfortunately, SFT alone just didn't cut it. And I had only 16GB of VRAM at the time. Oh, and I wanted it to be small enough to be viable for phones and to be able to put up a fight against larger models while at it. If only there was a magical way to do it.

**Merges**. Merges are quite unique. In the early days, they were considered "fake." Clearly, there's no such thing as merges. Where are the papers? No papers? Then it's clearly impossible. "Mathematically impossible." Simply preposterous. To mix layers and hope for a coherent output? What nonsense!

And yet, they were **real**. <a href="https://huggingface.co/Undi95">Undi95</a> made some of the earliest merges I can remember, and the "LLAMA2 Era" was truly amazing and innovative thanks to them. Cool stuff like <a href="https://huggingface.co/KoboldAI/LLaMA2-13B-TiefighterLR">Tiefighter</a> was being made, and eventually the time-tested <a href="https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5">Midnight-Miqu-70B (v1.5 is my personal favorite)</a>.

Merges are an interesting thing, as they affect LLMs in a way that is currently **impossible** to reproduce using **SFT** (or any 'SOTA' technique). One of the plagues we have today, even though we now have orders-of-magnitude smarter LLMs, is **GPTisms** and **predictability**. Merges can potentially 'solve' that. How? In short, if you physically tear neurons (**passthrough** brain surgery) while you somehow manage to keep the model coherent enough, and if you're lucky it can even follow instructions, then magical stuff begins to happen.

Magic, because it's **not** an exact science; there's some art to it, as it is done with a lot of **intuition**. GPTisms are patterns that the model really, **really** "wants" to follow, and it's quite hard to dissuade it. But if you yeet a couple of layers and rearrange them, boy does it get hard to spew those shivers down the spine... and instead the model starts spewing stuff it was never intended to. It breaks its patterns and introduces some healthy chaos into the mix.

This model, **Eximius_Persona_5B**, is the result of multiple merges that were tuned, then merged again, then... over many iterations. The base was LLAMA 3.2 3B, and I focused on achieving the following **4 traits**, in that specific order:

- **2nd-highest-rated model** in the 3-6B category according to a closed external benchmark. See details at the bottom of the page.

- Varied speech patterns

- Roleplay ability

- Long context coherency

- Instruction following

For me, getting varied speech patterns was more important than instruction following; for instruction following we have API models, or LLAMA 3.3. Many models are excellent assistants, yet they all sound pretty much the same.

I also wanted to make use of my **4090m 16GB** while my workstation crunches **Phi-4's** brain. Making a nice 5B model aligns with my goal of making AI accessible and fun for everyone, and hence **Eximius_Persona_5B** was born. Let this also be a call to action for more people to make AI models: you don't have to have multiple GPUs or spend a fortune on the cloud (although that definitely opens up options), you can do plenty with a mere 16GB of VRAM. And in case 16GB seems out of reach too, I should mention that Google Colab gives access to a free T4.

I uploaded a more funky, less stable, and thiccer version of Eximius_Persona to my prototyping org here:

[Eximius_Persona with 84 Layers from various checkpoints](https://huggingface.co/Sicarius-Prototyping/Eximius_Persona_84L)

(From some early tests, it occasionally outputs stories that fool GPTZERO into thinking they were written by a human: **60% human**, 40% AI with a lucky roll.)

<details>
<summary><b>See example:</b></summary>

<img src="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B/resolve/main/Images/Eximius_Persona_5B_GPTZERO.png" alt="GPTZERO Example" style="width: 100%; min-width: 600px; display: block; margin: auto;">

</details>


---

### TL;DR
- **Fun & Fresh Roleplay** flavour.
- **Interesting speech patterns** in creative writing.
- **Good long context coherency** in Roleplay.
- **Occasionally** outputs quite **human-like** stories.
- **50 Layers** of LLAMA 3.2, fully coherent.
- **Strong performance** in general for a **5B model**.

### Important: Make sure to use the correct settings!

[Assistant settings](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B#recommended-settings-for-assistant-mode)

[Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B#recommended-settings-for-roleplay-mode)


---

## Eximius_Persona_5B is available at the following quantizations:

- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B_GGUF) | [iMatrix_GGUF](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B_iMatrix)
- EXL2: [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-4.0bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B-8.0bpw)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B_FP8)

---

## Model Details

- Intended use: **Role-Play**, **Creative Writing**, General Tasks.

- Censorship level: <b>Medium</b>

- **5 / 10** (10 = completely uncensored)


## UGI score:


<img src="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B/resolve/main/Images/Eximius_Persona_5B_UGI.png" alt="UGI Score" style="width: 100%; min-width: 700px; display: block;">

### Don't use it for coding :)
---


# Model instruction template: Llama-3-Instruct

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```

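To make the template concrete, here is a small illustrative sketch (not from the original card) that fills the Llama-3-Instruct slots for a single turn using plain string formatting; the `system_prompt` and `user_input` values are made-up examples. In practice, chat front-ends (and MLC-LLM's chat API) usually apply this template for you; the snippet only shows what the packed prompt looks like before generation.

```
# Illustrative only: packing one Llama-3-Instruct turn by hand.
LLAMA3_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

system_prompt = "You are Eximius, a vivid roleplay partner."  # made-up example
user_input = "*waves* What was your name again?"              # made-up example

# The model's reply is generated after the final assistant header;
# {output}<|eot_id|> is what the model itself produces.
prompt = LLAMA3_TEMPLATE.format(system_prompt=system_prompt, input=user_input)
print(prompt)
```
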
---
<h2 style="color: darkorange; font-weight: bold; font-size: 55px; text-align: center;">Roleplay format: Classic Internet RP</h2>

```
*action* speech *narration*
```

### The model is pretty smart, so it might handle other formats as well, but it was trained and tested specifically with the classic internet RP style in mind.

## Recommended settings for assistant mode
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>

<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">

</details>

<details>
<summary>Full generation settings: <b>min_p</b>.</summary>

<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">

</details>

---

## Recommended settings for Roleplay mode

<details>
<summary><b>Roleplay settings</b></summary>
A good repetition_penalty range is <b>between 1.12 and 1.15</b>; feel free to experiment.

With these settings, each output message should be neatly displayed in <b>1 - 3</b> paragraphs, with <b>1 - 2</b> being the most common. A single paragraph will be output as a response to a simple message ("What was your name again?").

<b>min_P</b> for RP works too, but it is more likely to put everything under one large paragraph instead of a neatly formatted short one. Feel free to switch between the two.

<b>(Open the image in a new window to better see the full details.)</b>
<img src="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B/resolve/main/Presets/Negative_LLAMA_70B_RP.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">

```
temperature: 0.8
top_p: 0.95
top_k: 25
typical_p: 1
min_p: 0
repetition_penalty: 1.12
repetition_penalty_range: 1024
```

</details>

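A practical, hedged note on using a preset like the one above through an API: only `temperature` and `top_p` map directly onto an OpenAI-style request (such as the one MLC-LLM exposes); `top_k`, `min_p`, `repetition_penalty`, and `repetition_penalty_range` are sampler-specific knobs that are usually set in the runtime or front-end configuration (for MLC builds, typically via `mlc-chat-config.json`) rather than per request. The sketch below reuses the placeholder `engine` and `model` from the earlier example.

```
# Rough sketch: passing the OpenAI-compatible subset of the RP preset per request.
# Assumes the `engine` and `model` from the earlier MLCEngine example.
# top_k / min_p / repetition_penalty are NOT standard request fields here;
# configure them in the runtime or front-end (e.g. mlc-chat-config.json) instead.
response = engine.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a roleplay partner."},  # made-up example
        {"role": "user", "content": "*waves* Hey, what was your name again?"},
    ],
    model=model,
    temperature=0.8,
    top_p=0.95,
    stream=False,
)
print(response.choices[0].message.content)
```
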

---

**Other recommended generation Presets:**

<details>
<summary><b>Midnight Enigma</b></summary>

```
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
```


</details>


<details>
<summary><b>Divine Intellect</b></summary>

```
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
```


</details>

<details>
<summary><b>simple-1</b></summary>

```
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
```


</details>

---

<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>

---

## Benchmarks

| Metric             |Value|
|--------------------|----:|
|Avg.                |21.78|
|IFEval (0-Shot)     |65.60|
|BBH (3-Shot)        |22.20|
|MATH Lvl 5 (4-Shot) | 9.89|
|GPQA (0-shot)       | 1.90|
|MuSR (0-shot)       | 7.33|
|MMLU-PRO (5-shot)   |23.78|

---

# Additional benchmarks

On the **17th of February, 2025**, I became aware that the model was ranked **2nd in the world** among **3-6B** models in a closed external benchmark.

Benchmarked on the following site:
```
https://moonride.hashnode.dev/biased-test-of-gpt-4-era-llms-300-models-deepseek-r1-included
```

<img src="https://huggingface.co/SicariusSicariiStuff/Eximius_Persona_5B/resolve/main/Images/Eximius_Persona_5B_Bench.png" alt="External Benchmark" style="width: 100%; min-width: 600px; display: block; margin: auto;">

---

## Other stuff
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms with SLOP_Detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.
mlc-chat-config.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f8307b6a8dc46ec6a6d0cf4d02167acf4fbc0a6f725099717190ef352ba2d3ad
3
+ size 2314
ndarray-cache.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7fff0f0e59bb34acb5a7f84f5cb0090bc5e5471382c164e71fe2b970c492d0b
3
+ size 214932
params_shard_0.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:088c55943ac10cb7a84495602ca802bf8fd59a07cbda815d93f60e34871ac922
3
+ size 197001216
params_shard_1.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9799cf6b30d5421736bfbf531334fbafc242650da2eef9c80edb0ff912a86ee7
3
+ size 24631296
params_shard_10.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eb4aae71596546aca4a7e670ef0917bf559f35390b5079752a926d5e97483c7c
3
+ size 25165824
params_shard_100.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:131d6944a9ca2f09909355cd9088931c7c9987cd2cdea66592684834aadedc3b
3
+ size 25165824
params_shard_101.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:306094c706442ec0bf27d0b021af67f8f3d65dd36add36f16504f5dd263c25d1
3
+ size 31469568
params_shard_11.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c6e237dde1d533b40738093b7af32cb8c85b1372459ad2f65c9e1869baf09876
3
+ size 31469568
params_shard_12.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5520c23b8e14bd7fa603b3fb16ff2dbe92d388438775f4900c7e0f793cae3e53
3
+ size 25165824
params_shard_13.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f7a6d3f056d49a73c1d825cd693380d9a37c8cc79af10df5a339768d3fe4dd5d
3
+ size 31469568
params_shard_14.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7a9f6ac9e54fd0099cf28976ad78ab465522f8445253ed04fe6770a29efea15
3
+ size 25165824
params_shard_15.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9a07100c2e2b6b8f0a4d13afdf42024e0eb9af5840f0d3fc0df5b9a75814aae8
3
+ size 31469568
params_shard_16.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:be4a6d51c866c1343ef9033a4c54ead36669dfed67c325722d49ebdd58c41db4
3
+ size 25165824
params_shard_17.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:752f569e875af8eba818b98d556cb50dccc39ef716391e96a42ea82de0cbede8
3
+ size 31469568
params_shard_18.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7009f7a358a84eed0729ef9eeb2aebf91a9163cb8a2c4a86ae163c0de95a90f6
3
+ size 25165824
params_shard_19.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b1159b56bc486985e0e67f5e950668b0778c06370c8e68354c3d9e9fb87f3e4a
3
+ size 31469568
params_shard_2.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:07f3021b49d52abb4b716a41c2224709b04341ab1d96ffe5a2bdcfe2a7f79fa3
3
+ size 25165824
params_shard_20.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a8acbcc5538c961bbf936acda9c1a8c03873fee9b3a6446467b5c619b065b8cb
3
+ size 25165824
params_shard_21.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:598c5262de855d447b94b9d7b0cb5bb8a1e65a8af4dbe55a790fd05bc436bdd6
3
+ size 31469568
params_shard_22.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b0d5a15c6fd58ee2d5bd60e430dbf7417c583117d1a62c011a868cf247bb20cf
3
+ size 25165824
params_shard_23.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b5127df892733061d6a94f090f82361dbc7fbd7ed4473adab6df6f6497821f9
3
+ size 31469568
params_shard_24.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f6f22a3b03ecd6804846500903224a39659d3c2b0140edb67471021ee61f2e07
3
+ size 25165824
params_shard_25.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8982b31245e55d18702cab7e8c4f7495d03fb31c0f165ad8b325d4dbef81d2b7
3
+ size 31469568
params_shard_26.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4aede07f6e7400b062e483dc9aa958208af95daeb3d2d48145de346553d3c73c
3
+ size 25165824
params_shard_27.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3734db86afbbc23491753517752865722c46b816795ca1b343aa49924b09d566
3
+ size 31469568
params_shard_28.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:78be1ddaa170f25e69b187ec40494c285dcd84773aca10187d0f3218266ca638
3
+ size 25165824
params_shard_29.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c438db6074bce2777150edf8851771fa4cefb120fd74e558c8afdfcdcd8b34af
3
+ size 31469568
params_shard_3.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6a5f06c8abf861dbf9f9e7de2502596f98818ac35c2f462c9ee2eb420af384d9
3
+ size 31469568
params_shard_30.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c594372066b35179356062bd78333d1f9fe8f7e52e17a214c6658c0fd3d7470
3
+ size 25165824
params_shard_31.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6025da031cabf521c7323febf8cb5c898a5e95ebbd806a83e01054381e0e5b00
3
+ size 31469568
params_shard_32.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6fdcaf7bc427a29b63c3acd7204139d2601afcabb4362cf09eb02702bada3243
3
+ size 25165824
params_shard_33.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3597fa6a859c23dd115efdb35bc5c51e00860a66a08eb94bbb4bd706f11a67c2
3
+ size 31469568
params_shard_34.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:df6e4a710328c6c159e54fdb880d46b233df1ca4ec193769a4b4cb2aaf435498
3
+ size 25165824
params_shard_35.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9198ad78b93c966acd11d9f388940c93922ae135b79dec74ecf425118d2e6a72
3
+ size 31469568
params_shard_36.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:70ad500ed14d1f1b09522437d94704a1e566801bf7cddfad39c071d4a05d9cb4
3
+ size 25165824
params_shard_37.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9511ef477a9566017080133bce852bf9e63f314bc2756d6cd6c5bd8476ee4e1f
3
+ size 31469568
params_shard_38.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ee812ddbed20b3c361bf02f8b10ad4dee9d401fa607ebfb8201da01bd943af1d
3
+ size 25165824
params_shard_39.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:62d6ff9694e6facece9eaf62f147ac4b0f43ac33fdd97dd677496eeb3886fbbd
3
+ size 31469568
params_shard_4.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:40713dae354dfc4ef0bed7779bdc5dfd9eaa7e4d7008ee266e2a385fc54c41db
3
+ size 25165824
params_shard_40.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0563c272471f69c4ab2d94434169b5cfe4ea5bdc14ec8d0e1c268f1bbd8d5a1f
3
+ size 25165824
params_shard_41.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8b215dd3c57942e8862c4b5fa47961fe05c0eb9e85ae82f940aa2ec90788eda1
3
+ size 31469568
params_shard_42.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0517bdf37f0e014525d26c9162ff7f2c37c3ff595cefd7ebe8f56862423ab720
3
+ size 25165824
params_shard_43.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b744d84aeb87584eb894609a3b231e9fad8313eb5affd2e4a9a5fe74ed5f5873
3
+ size 31469568
params_shard_44.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3dac2514e4884b125711a0efad2dcf3956ec3ebdb63e333381884a92683e8dbc
3
+ size 25165824
params_shard_45.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e41b491da3416b98d7e9411bd0a843a47a2155d878792b75711e8f037135151b
3
+ size 31469568
params_shard_46.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9f3760803c7a04c9305b14a882622457ae2a6043dbd4351f77224ef761fa3767
3
+ size 25165824
params_shard_47.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6eeefbf499c321fed4abb3b7c52d5e3493190ebaa839a96c925820dd08f16de5
3
+ size 31469568
params_shard_48.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:492431744a625fcb5f816430ff1891ad932b3e4f775b650144eaa6952e05547b
3
+ size 25165824