jtatman committed
Commit 0f3d0f4 · verified · 1 Parent(s): bc264c8

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +24 -20
README.md CHANGED
@@ -3,37 +3,41 @@ tags:
 - merge
 - mergekit
 - lazymergekit
-- Severus27/BeingWell_llama2_7b
-- ParthasarathyShanmugam/llama-2-7b-samantha
+- mlabonne/NeuralBeagle14-7B
+- cognitivecomputations/samantha-1.2-mistral-7b
 base_model:
-- Severus27/BeingWell_llama2_7b
-- ParthasarathyShanmugam/llama-2-7b-samantha
+- mlabonne/NeuralBeagle14-7B
+- cognitivecomputations/samantha-1.2-mistral-7b
 ---
 
 # Dr-Samantha-Philosopher-7B-slerp
 
 Dr-Samantha-Philosopher-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
-* [Severus27/BeingWell_llama2_7b](https://huggingface.co/Severus27/BeingWell_llama2_7b)
-* [ParthasarathyShanmugam/llama-2-7b-samantha](https://huggingface.co/ParthasarathyShanmugam/llama-2-7b-samantha)
+* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
+* [cognitivecomputations/samantha-1.2-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-1.2-mistral-7b)
 
 ## 🧩 Configuration
 
 ```yaml
-models:
-  - model: togethercomputer/Llama-2-7B-32K-Instruct
-    # No parameters necessary for base model
-  - model: Severus27/BeingWell_llama2_7b
-    parameters:
-      density: 0.53
-      weight: 0.4
-  - model: ParthasarathyShanmugam/llama-2-7b-samantha
-    parameters:
-      density: 0.53
-      weight: 0.4
-merge_method: dare_ties
-base_model: togethercomputer/Llama-2-7B-32K-Instruct
+slices:
+  - sources:
+      - model: mlabonne/NeuralBeagle14-7B
+        layer_range: [0, 32]
+      - model: cognitivecomputations/samantha-1.2-mistral-7b
+        layer_range: [0, 32]
+
+merge_method: slerp
+base_model: cognitivecomputations/samantha-1.2-mistral-7b
+
 parameters:
-  int8_mask: true
+  t:
+    - filter: self_attn
+      value: [0, 0.5, 0.3, 0.7, 1]
+    - filter: mlp
+      value: [1, 0.5, 0.7, 0.3, 0]
+    - value: 0.5 # fallback for rest of tensors
+tokenizer_source: union
+
 dtype: float16
 ```
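
For context on how the new configuration is consumed, here is a minimal sketch using mergekit's Python API. It is not part of the commit: the config path, output directory, and `MergeOptions` values are assumptions, and the `mergekit-yaml` CLI works just as well.

```python
# Minimal sketch, not the author's script: assumes `pip install mergekit`
# and that the slerp config above is saved as config.yaml. The output
# path and MergeOptions values are illustrative assumptions.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Dr-Samantha-Philosopher-7B-slerp",  # local output directory
    options=MergeOptions(
        copy_tokenizer=True,  # write the union tokenizer next to the weights
        lazy_unpickle=True,   # load shards lazily to reduce peak RAM
    ),
)
```

In a slerp merge, `t` is the interpolation factor between the two models (0 keeps `base_model`, 1 takes the other), so the `self_attn` and `mlp` value lists sweep that factor across layer depth, with 0.5 as the fallback for all remaining tensors.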
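
A quick way to sanity-check the merged output with 🤗 Transformers; the repo id below is an assumption inferred from the commit author and model name, not something stated in the diff.

```python
# Illustrative usage only: the repo id is a hypothetical guess based on
# the commit author (jtatman) and the model name in the README.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jtatman/Dr-Samantha-Philosopher-7B-slerp"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("What makes a life meaningful?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```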