# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with mlabonne/NeuralBeagle14-7B as the base model.
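Roughly, DARE sparsifies each fine-tuned model's delta from the base by randomly dropping entries and rescaling the survivors (the `density` parameter is the keep probability), and TIES resolves sign conflicts between models before the deltas are summed back onto the base. The sketch below is illustrative only, not mergekit's actual implementation; it simplifies details such as weight normalization, and the function names are hypothetical:

```python
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    # Keep each delta entry with probability `density`, zero the rest,
    # and rescale survivors by 1/density so the expectation is unchanged.
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties_merge(base: torch.Tensor,
                    finetuned: list[torch.Tensor],
                    weights: list[float],
                    density: float) -> torch.Tensor:
    # Weighted task vectors: each fine-tuned tensor's difference from
    # the base, sparsified with DARE at the given density.
    deltas = [w * dare(ft - base, density)
              for w, ft in zip(weights, finetuned)]
    stacked = torch.stack(deltas)
    # TIES-style sign election: per parameter, keep only contributions
    # whose sign agrees with the sign of the summed deltas.
    elected = stacked.sum(dim=0).sign()
    agreed = torch.where(stacked.sign() == elected,
                         stacked, torch.zeros_like(stacked))
    return base + agreed.sum(dim=0)
```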

### Models Merged

The following models were included in the merge:

* mlabonne/AlphaMonarch-7B
* Intel/neural-chat-7b-v3-1
* HuggingFaceH4/zephyr-7b-beta

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mlabonne/NeuralBeagle14-7B
dtype: bfloat16
merge_method: dare_ties
models:
- model: mlabonne/NeuralBeagle14-7B
- model: mlabonne/AlphaMonarch-7B
  parameters:
    density: '0.53'
    weight: '0.4'
- model: Intel/neural-chat-7b-v3-1
  parameters:
    density: '0.53'
    weight: '0.3'
- model: HuggingFaceH4/zephyr-7b-beta
  parameters:
    density: '0.53'
    weight: '0.3'
parameters:
  int8_mask: true
```
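The merge can be reproduced by pointing mergekit's `mergekit-yaml` command at this configuration. The result is a standard Mistral-architecture checkpoint, so it loads like any other causal LM; a minimal sketch using Hugging Face Transformers (the prompt and generation settings here are arbitrary examples):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jambroz/sixtyoneeighty-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```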