# final_model
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method, with a separate interpolation factor `t` for each two-layer slice (see the configuration below).
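As a rough illustration (not mergekit's actual implementation), SLERP interpolates along the arc between two weight tensors rather than the straight line between them: `t = 0` returns the first model's weights, `t = 1` the second's, and intermediate values blend while preserving the angular geometry. A minimal NumPy sketch:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two weight vectors
    if omega < eps:
        # Nearly colinear vectors: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

# t = 0.0 reproduces the first tensor, t = 1.0 the second
a, b = np.random.randn(1024), np.random.randn(1024)
mixed = slerp(0.42, a, b)
```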
### Models Merged

The following models were included in the merge:
* RoyJoy/llama-jan16
* luaqi/llama_01141
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: RoyJoy/llama-jan16
dtype: bfloat16
merge_method: slerp
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.12347326361005005
  sources:
  - layer_range: [0, 2]
    model: RoyJoy/llama-jan16
  - layer_range: [0, 2]
    model: luaqi/llama_01141
- base_model: luaqi/llama_01141
  parameters:
    t: 0.4274371274240911
  sources:
  - layer_range: [2, 4]
    model: luaqi/llama_01141
  - layer_range: [2, 4]
    model: RoyJoy/llama-jan16
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.45996115782476704
  sources:
  - layer_range: [4, 6]
    model: RoyJoy/llama-jan16
  - layer_range: [4, 6]
    model: luaqi/llama_01141
- base_model: luaqi/llama_01141
  parameters:
    t: 0.4472011798824502
  sources:
  - layer_range: [6, 8]
    model: luaqi/llama_01141
  - layer_range: [6, 8]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.3626764469890088
  sources:
  - layer_range: [8, 10]
    model: luaqi/llama_01141
  - layer_range: [8, 10]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.3047811008444177
  sources:
  - layer_range: [10, 12]
    model: luaqi/llama_01141
  - layer_range: [10, 12]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.4191841705224738
  sources:
  - layer_range: [12, 14]
    model: luaqi/llama_01141
  - layer_range: [12, 14]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.48611727995422105
  sources:
  - layer_range: [14, 16]
    model: luaqi/llama_01141
  - layer_range: [14, 16]
    model: RoyJoy/llama-jan16
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.49327335687995005
  sources:
  - layer_range: [16, 18]
    model: RoyJoy/llama-jan16
  - layer_range: [16, 18]
    model: luaqi/llama_01141
- base_model: luaqi/llama_01141
  parameters:
    t: 0.45941960258651676
  sources:
  - layer_range: [18, 20]
    model: luaqi/llama_01141
  - layer_range: [18, 20]
    model: RoyJoy/llama-jan16
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.4951314948007201
  sources:
  - layer_range: [20, 22]
    model: RoyJoy/llama-jan16
  - layer_range: [20, 22]
    model: luaqi/llama_01141
- base_model: luaqi/llama_01141
  parameters:
    t: 0.40612891630067377
  sources:
  - layer_range: [22, 24]
    model: luaqi/llama_01141
  - layer_range: [22, 24]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.4717773146173076
  sources:
  - layer_range: [24, 26]
    model: luaqi/llama_01141
  - layer_range: [24, 26]
    model: RoyJoy/llama-jan16
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.35856355412368607
  sources:
  - layer_range: [26, 28]
    model: RoyJoy/llama-jan16
  - layer_range: [26, 28]
    model: luaqi/llama_01141
- base_model: luaqi/llama_01141
  parameters:
    t: 0.474645297482983
  sources:
  - layer_range: [28, 30]
    model: luaqi/llama_01141
  - layer_range: [28, 30]
    model: RoyJoy/llama-jan16
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.4113346104507393
  sources:
  - layer_range: [30, 32]
    model: RoyJoy/llama-jan16
  - layer_range: [30, 32]
    model: luaqi/llama_01141
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.42271098156195525
  sources:
  - layer_range: [32, 34]
    model: RoyJoy/llama-jan16
  - layer_range: [32, 34]
    model: luaqi/llama_01141
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.43002043563359715
  sources:
  - layer_range: [34, 36]
    model: RoyJoy/llama-jan16
  - layer_range: [34, 36]
    model: luaqi/llama_01141
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.4024317979779294
  sources:
  - layer_range: [36, 38]
    model: RoyJoy/llama-jan16
  - layer_range: [36, 38]
    model: luaqi/llama_01141
- base_model: luaqi/llama_01141
  parameters:
    t: 0.42781244776074845
  sources:
  - layer_range: [38, 40]
    model: luaqi/llama_01141
  - layer_range: [38, 40]
    model: RoyJoy/llama-jan16
- base_model: luaqi/llama_01141
  parameters:
    t: 0.4744212098311982
  sources:
  - layer_range: [40, 42]
    model: luaqi/llama_01141
  - layer_range: [40, 42]
    model: RoyJoy/llama-jan16
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.42107835961309203
  sources:
  - layer_range: [42, 44]
    model: RoyJoy/llama-jan16
  - layer_range: [42, 44]
    model: luaqi/llama_01141
- base_model: RoyJoy/llama-jan16
  parameters:
    t: 0.3570606753644864
  sources:
  - layer_range: [44, 46]
    model: RoyJoy/llama-jan16
  - layer_range: [44, 46]
    model: luaqi/llama_01141
- base_model: luaqi/llama_01141
  parameters:
    t: 0.49464856531161844
  sources:
  - layer_range: [46, 48]
    model: luaqi/llama_01141
  - layer_range: [46, 48]
    model: RoyJoy/llama-jan16
```
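To reproduce or adapt the merge, this configuration can be fed back to mergekit. The sketch below follows mergekit's documented Python entry point; the config and output paths are placeholders, and exact option names may differ across mergekit versions, so treat it as a sketch rather than a definitive recipe. The `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./final_model`) is the equivalent one-liner.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./config.yaml"   # the YAML above, saved to disk (placeholder path)
OUTPUT_PATH = "./final_model"  # where the merged weights land (placeholder path)

# Parse the YAML into mergekit's configuration object
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; writes model weights, config, and tokenizer to OUTPUT_PATH
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # carry a tokenizer over to the output
    ),
)
```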