# final_model

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
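
SLERP (spherical linear interpolation) blends two models along the great-circle arc between their weight tensors rather than along a straight line, which better preserves the magnitude of the interpolated weights. The sketch below is a minimal illustration of the underlying formula, not mergekit's actual implementation, which handles per-tensor application and other details:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation: t=0 returns v0, t=1 returns v1."""
    # Angle between the two tensors, from their normalized dot product.
    v0n = v0 / (v0.norm() + eps)
    v1n = v1 / (v1.norm() + eps)
    dot = torch.clamp((v0n * v1n).sum(), -1.0, 1.0)
    omega = torch.arccos(dot)

    # Fall back to plain linear interpolation when the tensors are nearly
    # parallel, where sin(omega) ~ 0 makes the SLERP weights unstable.
    if omega.abs() < 1e-4:
        return (1.0 - t) * v0 + t * v1

    sin_omega = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / sin_omega) * v0 + (torch.sin(t * omega) / sin_omega) * v1
```

In the configuration below, each four-layer slice has its own interpolation factor `t`: at `t = 0` the slice matches its `base_model`, and at `t = 1` it matches the other source model. The small value in the earliest slice (≈ 0.06 for layers 0-4) keeps those layers close to the base, while mid-network slices (≈ 0.45-0.48) blend the two models almost evenly.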

### Models Merged

The following models were included in the merge:

* [RoyJoy/llama-jan16](https://huggingface.co/RoyJoy/llama-jan16)
* [luaqi/llama_01141](https://huggingface.co/luaqi/llama_01141)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: RoyJoy/llama-jan16
dtype: bfloat16
merge_method: slerp
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.05525589653207248
  sources:
  - layer_range: [0, 4]
    model: RoyJoy/llama-jan16
  - layer_range: [0, 4]
    model: luaqi/llama_01141
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.2813848161881896
  sources:
  - layer_range: [4, 8]
    model: RoyJoy/llama-jan16
  - layer_range: [4, 8]
    model: luaqi/llama_01141
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.25453432850347796
  sources:
  - layer_range: [8, 12]
    model: RoyJoy/llama-jan16
  - layer_range: [8, 12]
    model: luaqi/llama_01141
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.4805486738076675
  sources:
  - layer_range: [12, 16]
    model: RoyJoy/llama-jan16
  - layer_range: [12, 16]
    model: luaqi/llama_01141
- base_model: luaqi/llama_01141
  parameters:
    t: 0.48019897775293435
  sources:
  - layer_range: [16, 20]
    model: luaqi/llama_01141
  - layer_range: [16, 20]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.4494215120636509
  sources:
  - layer_range: [20, 24]
    model: luaqi/llama_01141
  - layer_range: [20, 24]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.4597712692088405
  sources:
  - layer_range: [24, 28]
    model: luaqi/llama_01141
  - layer_range: [24, 28]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.38700893940161823
  sources:
  - layer_range: [28, 32]
    model: luaqi/llama_01141
  - layer_range: [28, 32]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.4221244565643624
  sources:
  - layer_range: [32, 36]
    model: luaqi/llama_01141
  - layer_range: [32, 36]
    model: RoyJoy/llama-jan16
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.41183869503962045
  sources:
  - layer_range: [36, 40]
    model: RoyJoy/llama-jan16
  - layer_range: [36, 40]
    model: luaqi/llama_01141
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.42148543903663827
  sources:
  - layer_range: [40, 44]
    model: RoyJoy/llama-jan16
  - layer_range: [40, 44]
    model: luaqi/llama_01141
- base_model: luaqi/llama_01141
  parameters:
    t: 0.31836409493446777
  sources:
  - layer_range: [44, 48]
    model: luaqi/llama_01141
  - layer_range: [44, 48]
    model: RoyJoy/llama-jan16
```
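
To reproduce the merge, this configuration can be passed to mergekit, either via the CLI (`mergekit-yaml config.yml ./final_model`) or through the Python API. Below is a minimal sketch assuming the YAML above is saved as `config.yml`; the `MergeOptions` values are illustrative defaults, not necessarily the settings used for this model:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above into mergekit's config model.
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the result (weights, config, tokenizer) to ./final_model.
run_merge(
    merge_config,
    "./final_model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the tensor math if available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```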