---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama
- not-for-all-audiences
---
# GGUF / IQ / Imatrix for [Silver-Sun-v2-11B](https://huggingface.co/ABX-AI/Silver-Sun-v2-11B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/9DobeVeyL98G7QUufEeQg.png)

**Why Importance Matrix?**

The **Importance Matrix**, at least in my testing, improves the output and performance of "IQ"-type quantizations, where the compression becomes quite heavy.
The **Imatrix** performs a calibration using a provided dataset. Testing has shown that semi-randomized data can help preserve more of the important segments as the compression is applied.

Related discussions on GitHub:
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

The imatrix.txt file that I used contains general, semi-random data, with some custom kink.

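For reference, producing these quants with llama.cpp looks roughly like this (a sketch, not my exact pipeline; the file names and the IQ3_XXS target are placeholders, and tool names may differ by llama.cpp version):

```shell
# Compute the importance matrix from the calibration file,
# running the full-precision GGUF over it.
./llama-imatrix -m Silver-Sun-v2-11B-f16.gguf -f imatrix.txt -o imatrix.dat

# Quantize to a heavily-compressed IQ type, guided by the matrix.
./llama-quantize --imatrix imatrix.dat \
    Silver-Sun-v2-11B-f16.gguf Silver-Sun-v2-11B-IQ3_XXS.gguf IQ3_XXS
```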
# Silver-Sun-v2-11B

> This is an updated version of Silver-Sun-11B. The change is that the Solstice-FKL-v2-10.7B merge now uses Sao10K/Fimbulvetr-11B-v2 instead of v1.
> Additionally, the config of the original Silver-Sun was wrong, and I have also fixed that.
> As expected, this is a HIGHLY uncensored model. It should perform even better than v1 thanks to the updated Fimbulvetr and the fixed config.

**Works with Alpaca and, from my tests, also ChatML. However, Alpaca may be the better option; try both and use whichever works better for you.**
**Due to a quirk with Solar, for the best quality either launch at 4K context, or launch at 8K (and possibly beyond - I have not tested it that high) with 4K of context pre-loaded in the prompt.**

> This model is intended for fictional storytelling and writing, focusing on NSFW capabilities and lack of censorship for RP purposes.

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the SLERP merge method.

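Conceptually, SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than along a straight line, which tends to preserve the magnitude structure of the weights better than plain averaging. A minimal sketch of the operation on plain vectors (an illustration, not mergekit's actual implementation):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two vectors.

    t=0 returns v0, t=1 returns v1; intermediate t moves along the arc
    between them. Falls back to plain lerp for near-parallel vectors.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    # Angle between the two vectors, clamped for numerical safety.
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if abs(theta) < eps:  # nearly parallel: linear interpolation is fine
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t=0.5 between orthogonal unit vectors lands on the arc midpoint.
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # → [0.7071..., 0.7071...]
```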
### Models Merged

The following models were included in the merge:
* [Himitsui/Kaiju-11B](https://huggingface.co/Himitsui/Kaiju-11B)
* ABX-AI/Solstice-FKL-v2-10.7B
> [!NOTE]
> A mixture of [Sao10K/Solstice-11B-v1](https://huggingface.co/Sao10K/Solstice-11B-v1) and
> [ABX-AI/Fimbulvetr-Kuro-Lotus-v2-10.7B], which is saishf/Fimbulvetr-Kuro-Lotus-10.7B updated with Fimbulvetr v2.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: ./MODELS/Solstice-FKL-v2-10.7B
        layer_range: [0, 48]
      - model: Himitsui/Kaiju-11B
        layer_range: [0, 48]
merge_method: slerp
base_model: ./MODELS/Solstice-FKL-v2-10.7B
parameters:
  t:
    - filter: self_attn
      value: [0.6, 0.7, 0.8, 0.9, 1]
    - filter: mlp
      value: [0.4, 0.3, 0.2, 0.1, 0]
    - value: 0.5
dtype: bfloat16
```
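The `t` gradients above give a short list of anchor values that get spread across the 48 layers, so the self-attention blocks lean more toward Kaiju in the deeper layers while the MLP blocks lean the opposite way. A rough sketch of how such a gradient could expand to one value per layer (assuming simple linear interpolation between anchors; not mergekit's exact code):

```python
def expand_gradient(anchors, num_layers):
    """Linearly interpolate a short anchor list to one value per layer."""
    if len(anchors) == 1:
        return [float(anchors[0])] * num_layers
    out = []
    for i in range(num_layers):
        # Map layer index i onto the anchor axis [0, len(anchors) - 1].
        pos = i * (len(anchors) - 1) / (num_layers - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        out.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return out

# The self_attn gradient from the config, spread over 48 layers:
t_attn = expand_gradient([0.6, 0.7, 0.8, 0.9, 1.0], 48)
print(round(t_attn[0], 3), round(t_attn[-1], 3))  # → 0.6 1.0
```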