Paper: Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (arXiv:2203.05482)
WittyAthena-24b is a merge of pre-trained language models created using mergekit.
This model was merged using the linear merge method, with arcee-ai/Arcee-Blitz as the base. Linear merging computes a weighted average of the models' parameters, the approach popularized by the model soups paper cited above (see the sketch after the configuration below).

The following models were included in the merge:

- Vortex5/Clockwork-Flower-24B
- TheDrummer/Cydonia-24B-v3
The following YAML configuration was used to produce this model:
```yaml
base_model: arcee-ai/Arcee-Blitz
dtype: bfloat16
merge_method: linear
models:
  - model: arcee-ai/Arcee-Blitz
    parameters:
      weight: 0.34
  - model: Vortex5/Clockwork-Flower-24B
    parameters:
      weight: 0.33
  - model: TheDrummer/Cydonia-24B-v3
    parameters:
      weight: 0.33
```
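For intuition, a linear merge is just a weighted average of the checkpoints' parameters, theta_merged = sum_i w_i * theta_i, with the weights above summing to 1.0. Below is a minimal sketch in plain transformers, not mergekit's actual implementation (mergekit streams tensor shards to keep memory bounded and handles details like tokenizer copying); it assumes enough RAM to hold the full 24B state dicts:

```python
import torch
from transformers import AutoModelForCausalLM

# Source checkpoints and mixing weights, taken from the YAML config above.
SOURCES = {
    "arcee-ai/Arcee-Blitz": 0.34,
    "Vortex5/Clockwork-Flower-24B": 0.33,
    "TheDrummer/Cydonia-24B-v3": 0.33,
}

merged_state = None
for repo_id, weight in SOURCES.items():
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
    state = model.state_dict()
    if merged_state is None:
        # First checkpoint initializes the accumulator (in fp32 for stability).
        merged_state = {k: weight * v.float() for k, v in state.items()}
    else:
        # Accumulate the weighted sum: theta_merged = sum_i w_i * theta_i.
        for k, v in state.items():
            merged_state[k] += weight * v.float()
    del model, state  # release each checkpoint before loading the next

# Write the averaged parameters into a model with the base architecture.
merged = AutoModelForCausalLM.from_pretrained(
    "arcee-ai/Arcee-Blitz", torch_dtype=torch.bfloat16
)
merged.load_state_dict({k: v.to(torch.bfloat16) for k, v in merged_state.items()})
merged.save_pretrained("./WittyAthena-24b")
```

In practice the merge is produced directly from the YAML with mergekit's CLI, along the lines of `mergekit-yaml config.yml ./WittyAthena-24b`, where `config.yml` is the file shown above.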