---
base_model:
- CoolSpring/Qwen2-0.5B-Abyme-merge3
- FlofloB/100k_fineweb_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
library_name: transformers
tags:
- mergekit
- merge
- rp
- roleplay
language:
- es
- en
datasets:
- HuggingFaceFW/fineweb
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
The Brothers: [Abyss](https://huggingface.co/Novaciano/Qwen2.5-0.5B-Abyss) & [Cliff](https://huggingface.co/Novaciano/Qwen2.5-0.5B-Cliff). Both combine the best Qwen2.5-0.5B models from the Open LLM Leaderboard.
### Merge Method

This model was merged using the [Arcee Fusion](https://arcee.ai) merge method, with [FlofloB/100k_fineweb_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit](https://huggingface.co/FlofloB/100k_fineweb_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit) as the base.

### Models Merged

The following models were included in the merge:

* [CoolSpring/Qwen2-0.5B-Abyme-merge3](https://huggingface.co/CoolSpring/Qwen2-0.5B-Abyme-merge3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: FlofloB/100k_fineweb_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
  - model: CoolSpring/Qwen2-0.5B-Abyme-merge3
    parameters:
      density: 0.53
      weight: 0.6
merge_method: arcee_fusion
base_model: FlofloB/100k_fineweb_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
tokenizer_source: union
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
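To reproduce this merge locally, save the configuration above (e.g. as `config.yaml`) and run it through mergekit, either via its `mergekit-yaml config.yaml ./merged` CLI or its Python API. Below is a minimal sketch using the Python API; the paths are placeholders, and the options shown are illustrative rather than the exact settings used to build this model.

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./config.yaml"  # placeholder: the YAML configuration above
OUTPUT_PATH = "./merged"      # placeholder: where to write the merged model

# Parse and validate the merge configuration.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the result to OUTPUT_PATH.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # write the tokenizer alongside the weights
    ),
)
```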
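Once merged (or downloaded from the Hub), the result loads like any other Qwen2.5-style checkpoint. A minimal usage sketch with 🤗 Transformers, assuming the placeholder repository id below is replaced with this model's actual Hub path or a local merge directory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "user/merged-model"  # placeholder: replace with this model's Hub id or e.g. "./merged"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Simple generation example; sampling settings are illustrative, not tuned.
prompt = "Write a short greeting in Spanish."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```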