Qwen2.5-METACODER 22.1B (mergekit-model1) by Solshine (Caleb DeLeeuw)



This is an experimental merge of pre-trained language models created using mergekit. No fine-tuning or benchmarking has been done on this model.

License

Hippocratic License 3.0 with the Ecocide, Extractive Industries, and Copyleft modules (HL3-CL-ECO-EXTR): https://firstdonoharm.dev/version/3/0/cl-eco-extr.txt

Merge Details

Merge Method

This model was merged using the passthrough merge method.
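The passthrough method stacks the specified layer slices from the source models in order, with no averaging or interpolation of weights. A minimal toy sketch of the idea (not mergekit's actual implementation; layers are represented as labels, and `layer_range` is assumed to be half-open, selecting layers `start` through `end - 1`):

```python
# Toy illustration of a passthrough merge: layer slices from the
# source models are concatenated in order, nothing is blended.

def passthrough(slices, models):
    """Concatenate half-open layer ranges [start, end) from named models."""
    out = []
    for name, (start, end) in slices:
        out.extend(models[name][start:end])
    return out

# Stand-in "models": each is just a list of layer labels.
models = {
    "coder": [f"coder.{i}" for i in range(28)],
    "math": [f"math.{i}" for i in range(28)],
}

# A small slice spec in the same spirit as the YAML config below.
spec = [("coder", (0, 2)), ("math", (1, 3)), ("coder", (2, 4))]

merged = passthrough(spec, models)
print(merged)
# → ['coder.0', 'coder.1', 'math.1', 'math.2', 'coder.2', 'coder.3']
```

Note that the resulting stack can repeat or interleave layers from the same donor, which is how this merge grows past the depth of any single 7B source model.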

Models Merged

The following models were included in the merge:

- Qwen/Qwen2.5-Coder-7B-Instruct
- Qwen/Qwen2.5-Coder-7B
- Qwen/Qwen2.5-Math-7B-Instruct
- Qwen/Qwen2.5-Math-7B

Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Qwen/Qwen2.5-Coder-7B-Instruct
        layer_range: [0, 16]
  - sources:
      - model: Qwen/Qwen2.5-Math-7B-Instruct
        layer_range: [3, 4]
  - sources:
      - model: Qwen/Qwen2.5-Coder-7B
        layer_range: [4, 20]
  - sources:
      - model: Qwen/Qwen2.5-Math-7B
        layer_range: [8, 24]
  - sources:
      - model: Qwen/Qwen2.5-Coder-7B
        layer_range: [10, 24]
  - sources:
      - model: Qwen/Qwen2.5-Coder-7B
        layer_range: [10, 14]
  - sources:
      - model: Qwen/Qwen2.5-Coder-7B-Instruct
        layer_range: [6, 26]
  - sources:
      - model: Qwen/Qwen2.5-Math-7B-Instruct
        layer_range: [25, 26]
  - sources:
      - model: Qwen/Qwen2.5-Coder-7B-Instruct
        layer_range: [26, 28]
merge_method: passthrough
dtype: float16
```
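Assuming each `layer_range` is half-open (`[start, end)` selects `end - start` layers, consistent with mergekit configs that use `[0, 28]` to take all 28 layers of a Qwen2.5-7B model), the merged stack is roughly 3.2× the depth of a single 7B donor, which is consistent with the reported 22.1B parameter count:

```python
# Total depth of the merged model, summing the slice ranges from the
# YAML config above (half-open ranges assumed).
slices = [
    (0, 16), (3, 4), (4, 20),
    (8, 24), (10, 24), (10, 14),
    (6, 26), (25, 26), (26, 28),
]
total = sum(end - start for start, end in slices)
print(total)
# → 90 layers, versus 28 in each 7B source model
```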
Model size: 22.1B params · FP16 · Safetensors

Model: Solshine/Qwen2.5-METACODER-22.1B-mergekit-model1