Triangle104 committed
Commit 10d4031 · verified · 1 Parent(s): 5adb04f

Update README.md

Files changed (1): README.md +52 -1
README.md CHANGED
@@ -6,12 +6,63 @@ tags:
 - merge
 - llama-cpp
 - gguf-my-repo
+license: apache-2.0
 ---

 # Triangle104/Yomiel-22B-Q5_K_M-GGUF
 This model was converted to GGUF format from [`Silvelter/Yomiel-22B`](https://huggingface.co/Silvelter/Yomiel-22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/Silvelter/Yomiel-22B) for more details on the model.

+## Merge Method
+This model was merged using the della_linear merge method, with ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 as the base.
+
+## Models Merged
+The following models were included in the merge:
+
+- nbeerbower/Mistral-Small-Drummer-22B
+- gghfez/SeminalRP-22b
+- TheDrummer/Cydonia-22B-v1.1
+- anthracite-org/magnum-v4-22b
+
+## Configuration
+The following YAML configuration was used to produce this model:
+
+base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
+parameters:
+  epsilon: 0.04
+  lambda: 1.05
+  int8_mask: true
+  rescale: true
+  normalize: false
+dtype: bfloat16
+tokenizer_source: base
+merge_method: della_linear
+models:
+  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
+    parameters:
+      weight: [0.2, 0.3, 0.2, 0.3, 0.2]
+      density: [0.45, 0.55, 0.45, 0.55, 0.45]
+  - model: gghfez/SeminalRP-22b
+    parameters:
+      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
+      density: [0.6, 0.4, 0.5, 0.4, 0.6]
+  - model: anthracite-org/magnum-v4-22b
+    parameters:
+      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
+      density: [0.7]
+  - model: TheDrummer/Cydonia-22B-v1.1
+    parameters:
+      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
+      density: [0.7]
+  - model: nbeerbower/Mistral-Small-Drummer-22B
+    parameters:
+      weight: [0.33]
+      density: [0.45, 0.55, 0.45, 0.55, 0.45]
+
+---
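The block above appears to be a mergekit-style configuration (the commit itself does not name the tool). A minimal reproduction sketch, where `yomiel.yaml` and the output path are illustrative names and mergekit usage is an assumption:

```
# Reproduction sketch only -- not part of this commit. Assumes the YAML above is saved
# as yomiel.yaml and that mergekit is available; adjust paths and flags to your setup.
pip install mergekit
mergekit-yaml yomiel.yaml ./Yomiel-22B-merged --cuda --lazy-unpickle
```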
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
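The install command itself sits in unchanged lines that this hunk does not show; for reference, the stock GGUF-my-repo instructions typically use the Homebrew formula:

```
# Homebrew formula for llama.cpp (macOS and Linux); shown for convenience, not part of this diff.
brew install llama.cpp
```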
 
 
@@ -50,4 +101,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo Triangle104/Yomiel-22B-Q5_K_M-GGUF --hf-file yomiel-22b-q5_k_m.gguf -c 2048
-```
+```
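Once `llama-server` is running it exposes an OpenAI-compatible HTTP API. A minimal request sketch, assuming the server command above and the default bind address and port (127.0.0.1:8080):

```
# Query the locally running llama-server (defaults shown; adjust if you pass --host/--port).
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello!"}],"max_tokens":64}'
```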