# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which stacks the selected layer ranges directly into the output model without any parameter interpolation.

### Models Merged

The following models were included in the merge:

* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - layer_range: [0, 10]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [10, 20]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [20, 30]
    model: meta-llama/Meta-Llama-3-8B-Instruct
merge_method: passthrough
dtype: float16
```
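
To reproduce the merge locally, a minimal sketch is shown below. It assumes mergekit is installed (`pip install mergekit`) and invokes its `mergekit-yaml` CLI; the config filename and output directory are illustrative choices, not part of the original recipe.

```python
import subprocess
import textwrap

# The exact YAML configuration from this card, written to a local file.
config = textwrap.dedent("""\
    slices:
    - sources:
      - layer_range: [0, 10]
        model: meta-llama/Meta-Llama-3-8B-Instruct
    - sources:
      - layer_range: [10, 20]
        model: meta-llama/Meta-Llama-3-8B-Instruct
    - sources:
      - layer_range: [20, 30]
        model: meta-llama/Meta-Llama-3-8B-Instruct
    merge_method: passthrough
    dtype: float16
    """)

with open("config.yaml", "w", encoding="utf-8") as f:
    f.write(config)

# Run mergekit's CLI; "./merged" is an illustrative output directory.
subprocess.run(["mergekit-yaml", "config.yaml", "./merged"], check=True)
```

Note that each `layer_range` is end-exclusive, so only layers 0-29 of the 32-layer base model are kept; this is why the merged model ends up slightly smaller (about 7.6B parameters) than the original 8B.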
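
For reference, a minimal loading sketch using transformers. This usage example is an assumption of this card rather than part of the merge recipe, and it presumes `torch`, `transformers`, and `accelerate` are installed; because two layers of the base model are dropped by the slicing above, generation quality may differ from Meta-Llama-3-8B-Instruct.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wassemgtk/merge-passthrough-Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",          # requires the accelerate package
)

messages = [{"role": "user", "content": "Explain what a passthrough merge is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```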