EXPERIMENTAL:

So what is this new arcee_fusion merge method, and what can we do with it? This model aims to find out: it is a multi-stage merge in which three of the four steps are fusions.

I've seen strong prose from this model, which is natural given its re-emphasis of Qwenvergence-14B-v12-Prose-DS. A full evaluation will be queued shortly.

This is actually a much simpler recipe than a mainline Lamarck release, and where it fits into the effort toward a Lamarck v0.8 depends greatly on evaluation and feedback.
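
If you want to sample its prose yourself before the evaluation lands, a minimal transformers snippet along these lines should work; the prompt and generation settings are illustrative assumptions, not tuned recommendations.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch for sampling the merge's prose; settings are illustrative.
model_id = "sometimesanotion/Lamarck-14B-v0.7-Fusion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short scene set in a lighthouse during a storm."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))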

Configuration

The following YAML configuration was used to produce this model:

name:                Lamarck-14B-v0.7-Fusion-Lamarckvergence
merge_method:        arcee_fusion
base_model:          sometimesanotion/Lamarck-14B-v0.7
tokenizer_source:    base
parameters:
  int8_mask:         true
  normalize:         true
  rescale:           false
dtype:               bfloat16
out_dtype:           bfloat16
models:
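  # arcee_fusion combines the base model above with the single model listed here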
  - model:           suayptalha/Lamarckvergence-14B
---
name:                Slerp-Lamarckvevergence
base_model:          sometimesanotion/Lamarck-14B-v0.7-Fusion-Lamarckvergence
merge_method:        slerp
tokenizer_source:    base
dtype:               float32
out_dtype:           bfloat16
parameters:
  t:
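    # each value list is an interpolation gradient spread across the layer range below;
    # t = 0.00 keeps the base model, t = 1.00 takes Qwenvergence-14B-v12-Prose-DS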
    - filter:        self_attn
      value:         [ 0.00, 0.50, 0.30, 0.70, 1.00 ]
    - filter:        mlp
      value:         [ 1.00, 0.50, 0.70, 0.30, 0.00 ]
    - value:         [ 0.00, 0.00, 0.00, 0.00, 0.04, 0.08, 0.12, 0.16, 0.24, 0.32, 0.40, 0.48, 0.56, 0.64, 0.72, 0.72, 0.72, 0.72, 0.72, 0.72, 0.72, 0.72, 0.64, 0.56, 0.48 ]
slices:
  - sources:
      - model:       sometimesanotion/Lamarck-14B-v0.7-Fusion-Lamarckvergence
        layer_range: [ 0, 48 ]
      - model:       sometimesanotion/Qwenvergence-14B-v12-Prose-DS
        layer_range: [ 0, 48 ]
---
name:                Chocolatine-Fusion-Qwenvergence
merge_method:        arcee_fusion
base_model:          jpacifico/Chocolatine-2-14B-Instruct-v2.0.3
tokenizer_source:    base
parameters:
  int8_mask:         true
  normalize:         true
  rescale:           false
dtype:               bfloat16
out_dtype:           bfloat16
models:
  - model:           sometimesanotion/Qwenvergence-14B-v12-Prose-DS
---
name:                Lamarck-14B-v0.7-Fusion
merge_method:        arcee_fusion
base_model:          sometimesanotion/Slerp-Lamarckvevergence
tokenizer_source:    base
parameters:
  int8_mask:         true
  normalize:         true
  rescale:           false
dtype:               bfloat16
out_dtype:           bfloat16
models:
  - model:           sometimesanotion/Chocolatine-Fusion-Qwenvergence
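
Each document above names an intermediate merge that feeds a later stage. A minimal sketch for running the stages in order with mergekit's Python API follows; the file names, output paths, and options are assumptions, and the intermediate model references in the later documents (e.g. sometimesanotion/Slerp-Lamarckvevergence) must resolve to the corresponding outputs, either by pointing them at local paths or by uploading the intermediates.

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Hypothetical file names: one YAML document per stage, in dependency order.
stages = [
    "01-lamarck-v0.7-fusion-lamarckvergence.yaml",
    "02-slerp-lamarckvevergence.yaml",
    "03-chocolatine-fusion-qwenvergence.yaml",
    "04-lamarck-14b-v0.7-fusion.yaml",
]

for path in stages:
    with open(path, "r", encoding="utf-8") as fp:
        config = MergeConfiguration.model_validate(yaml.safe_load(fp))
    run_merge(
        config,
        out_path=path.removesuffix(".yaml"),  # each stage writes its own output directory
        options=MergeOptions(cuda=True, copy_tokenizer=True, lazy_unpickle=True),
    )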