---
base_model: HuggingFaceTB/SmolLM-135M
datasets:
- LDJnr/Capybara
---

### EVEN SMALLER Frankenstein of SmolLM-0.13B upped to 0.15B

Use this frankenbase for training.
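
Want a quick sanity check before you train on it? Here's a minimal sketch with the Transformers library (this assumes the repo also hosts standard Transformers weights alongside the GGUF files):

```python
# Minimal sanity check: load the frankenbase and generate a few tokens.
# Assumes standard Transformers weights are available in the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nisten/Biggie-SmoLlm-0.15B-Base"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The Aldrin cycler orbit is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```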

If you're here from Twitter and impatient, grab the trained checkpoint file:

```bash
biggie-smollm-checkpoint-twitter-q8_0.gguf
```

```bash
wget https://huggingface.co/nisten/Biggie-SmoLlm-0.15B-Base/resolve/main/biggie-smollm-checkpoint-twitter-q8_0.gguf

./llama-cli -n 1024 -fa -b 512 --min-p 0.3 --top-p 0.85 -ctk q8_0 -ctv q8_0 --keep -1 -p "You are a NASA JPL engineer teaching the user about space and cats. <|im_start|>User: How to build a city on Mars via calculating Aldrin-Cycler orbits?<|im_end|> \n " -m biggie-smollm-checkpoint-twitter-q8_0.gguf --temp 2 -ngl 0 -t 1 -co -cnv --reverse-prompt "Assistant:"
```
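
If you'd rather drive it from Python, here's a rough equivalent of the command above using the llama-cpp-python bindings (a sketch, not the exact setup; `min_p` needs a reasonably recent version of the package):

```python
# Rough Python equivalent of the llama-cli invocation above,
# using the llama-cpp-python bindings (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="biggie-smollm-checkpoint-twitter-q8_0.gguf",
    n_ctx=2048,       # context window
    n_gpu_layers=0,   # CPU-only, matching -ngl 0
    n_threads=1,      # matching -t 1
)

out = llm.create_completion(
    "You are a NASA JPL engineer teaching the user about space and cats. "
    "<|im_start|>User: How to build a city on Mars via calculating "
    "Aldrin-Cycler orbits?<|im_end|>\n",
    max_tokens=1024,
    temperature=2.0,
    top_p=0.85,
    min_p=0.3,
    stop=["Assistant:"],  # crude analogue of --reverse-prompt
)
print(out["choices"][0]["text"])
```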

The recipe was worked out via semi-automated continuous merging. The resulting model is more coherent.
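
The exact recipe isn't published here, but frankenmerges like this are typically built with a mergekit passthrough config that repeats a slice of layers. Purely for illustration (the layer ranges below are illustrative guesses, not the actual recipe):

```bash
# Illustrative only: a mergekit passthrough config that stretches
# SmolLM-135M by repeating a slice of its layers. These ranges are
# guesses, NOT the recipe actually used for this model.
cat > frankenmerge.yml <<'EOF'
slices:
  - sources:
      - model: HuggingFaceTB/SmolLM-135M
        layer_range: [0, 21]
  - sources:
      - model: HuggingFaceTB/SmolLM-135M
        layer_range: [18, 30]
merge_method: passthrough
dtype: bfloat16
EOF
mergekit-yaml frankenmerge.yml ./Biggie-SmolLM-0.15B-Base
```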

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/H6rv3ULQip4sYPpGGiZZe.png)

```bash
wget https://huggingface.co/nisten/Biggie-SmoLlm-0.15B-Base/resolve/main/Biggie_SmolLM_0.15B_Base_bf16.gguf
```
```bash
llama-cli -ngl 99 -co --temp 0 -p "How to build a city on Mars via calculating Aldrin-Cycler orbits?" -m Biggie_SmolLM_0.15B_Base_bf16.gguf
```
The temperature, min-p, etc. still need tuning, but even at the default temp 0 it stayed coherent for the first 100 tokens.
An amazing option for further training. And this is a merge of the base model, not the instruct!
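
For example, a minimal SFT run on the Capybara dataset listed in the metadata, via TRL (hypothetical hyperparameters; again assumes standard Transformers weights in the repo):

```python
# Hypothetical fine-tuning sketch: SFT on LDJnr/Capybara with TRL.
# Hyperparameters are illustrative guesses, not a tested config.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Flatten Capybara's multi-turn schema (a "conversation" list of
# input/output pairs) into plain text the trainer consumes directly.
def to_text(example):
    turns = [f"User: {t['input']}\nAssistant: {t['output']}"
             for t in example["conversation"]]
    return {"text": "\n".join(turns)}

dataset = load_dataset("LDJnr/Capybara", split="train")
dataset = dataset.map(to_text, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model="nisten/Biggie-SmoLlm-0.15B-Base",
    train_dataset=dataset,
    args=SFTConfig(output_dir="biggie-smollm-capybara", max_steps=1000),
)
trainer.train()
```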

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/UK0_mQxy6GOHKxGKBbdhx.png)

I don't understand how the f a 150MB file can talk, but it can.