sometimesanotion committed
Commit 39236e0 · verified · 1 Parent(s): c525e38

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 ---
 # merge
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+The merits of multi-stage arcee_fusion merges are clearly shown in [sometimesanotion/Lamarck-14B-v0.7-Fusion](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7-Fusion), which has a valuable uptick in GPQA over its predecessors. Will its gains be maintained with a modified version of the SLERP recipe from [suayptalha/Lamarckvergence-14B](https://huggingface.co/suayptalha/Lamarckvergence-14B)? Clearly, self-attention and perceptrons can unlock a lot of power in this kind of merge.
 
 ## Merge Details
 ### Merge Method
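
For context, the "SLERP recipe" mentioned in the added paragraph is a mergekit YAML configuration in which self-attention and MLP (perceptron) tensors are interpolated along different curves. The sketch below only illustrates that pattern; it is not the published Lamarckvergence-14B recipe. The interpolation values, the 48-layer range (assuming a Qwen2.5-14B-based architecture), and the choice of base model are assumptions for illustration.

```yaml
# Hypothetical mergekit SLERP configuration (illustrative values, not the actual recipe)
slices:
  - sources:
      - model: sometimesanotion/Lamarck-14B-v0.7-Fusion
        layer_range: [0, 48]   # assumes a 48-layer Qwen2.5-14B-based stack
      - model: suayptalha/Lamarckvergence-14B
        layer_range: [0, 48]
merge_method: slerp
base_model: sometimesanotion/Lamarck-14B-v0.7-Fusion
parameters:
  t:
    # Per-group interpolation weights; example values only.
    - filter: self_attn              # self-attention tensors lean toward one parent mid-stack
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp                    # perceptron (MLP) tensors follow the opposite curve
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5                     # all remaining tensors blend evenly
dtype: bfloat16
```

Giving self_attn and mlp separate interpolation curves is what lets a SLERP merge weight attention behavior and feed-forward behavior differently across the depth of the model, which is the lever the paragraph above alludes to.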