# Merged Model

This is a merge of pre-trained language models created using mergekit.


At the time of writing, this model is ranked #1 on the Open LLM Leaderboard among models up to 8B parameters and #4 among models up to 14B parameters!

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
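SLERP interpolates along the arc between the two parents' weight vectors rather than the straight chord between them, which tends to preserve the magnitude and direction structure of the weights better than plain linear averaging. Below is a minimal illustrative sketch of the core operation on a single pair of tensors; it is not mergekit's actual implementation, which additionally handles per-tensor `t` schedules and other edge cases:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation: t=0 returns a, t=1 returns b."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two weight vectors on the unit hypersphere.
    cos_omega = torch.dot(a_flat / (a_flat.norm() + eps), b_flat / (b_flat.norm() + eps))
    omega = torch.acos(torch.clamp(cos_omega, -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly (anti)parallel vectors: fall back to plain linear interpolation.
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat \
               + (torch.sin(t * omega) / sin_omega) * b_flat
    return merged.reshape(a.shape).to(a.dtype)
```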

### Models Merged

The following models were included in the merge:

- [tiiuae/Falcon3-7B-Instruct](https://huggingface.co/tiiuae/Falcon3-7B-Instruct)
- [neopolita/jessi-v0.4-falcon3-7b-instruct](https://huggingface.co/neopolita/jessi-v0.4-falcon3-7b-instruct)

The Falcon3 family of open foundation models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

One of the parents, Falcon3-7B-Instruct, achieved state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks. It supports four languages (English, French, Spanish, and Portuguese) and a context length of up to 32K tokens.
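Since both parents share the Falcon3 architecture and chat template, the merged model loads like any Falcon3 instruct checkpoint. A minimal usage sketch with 🤗 Transformers (the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suayptalha/Falcon3-Jessi-v0.4-7B-Slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize SLERP model merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```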

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: neopolita/jessi-v0.4-falcon3-7b-instruct
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 28]
    model: tiiuae/Falcon3-7B-Instruct
  - layer_range: [0, 28]
    model: neopolita/jessi-v0.4-falcon3-7b-instruct
```
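Here `t` is the interpolation fraction between the two parents: the `filter` entries give self-attention and MLP tensors opposite gradients across the 28 layers (one tensor type drifts from one parent toward the other with depth while the other type does the reverse), and every remaining tensor uses a constant t of 0.5. To reproduce the merge, the YAML above can be saved to a file and passed to mergekit's `mergekit-yaml` entry point; a minimal sketch, with illustrative file and output paths:

```python
import subprocess

# Assumes `pip install mergekit` and the YAML above saved as falcon3-slerp.yml;
# "./merged" is an arbitrary output directory.
subprocess.run(["mergekit-yaml", "falcon3-slerp.yml", "./merged"], check=True)
```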

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|-------|
| Avg.                | 35.23 |
| IFEval (0-shot)     | 76.76 |
| BBH (3-shot)        | 37.29 |
| MATH Lvl 5 (4-shot) | 34.59 |
| GPQA (0-shot)       | 8.28  |
| MuSR (0-shot)       | 20.49 |
| MMLU-PRO (5-shot)   | 34.00 |
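The leaderboard computes these numbers with EleutherAI's lm-evaluation-harness. A hedged sketch of reproducing them locally, assuming a recent harness version that ships the `leaderboard` task group for the Open LLM Leaderboard v2 suite:

```python
import subprocess

# Runs the leaderboard task group via lm-evaluation-harness's documented CLI.
subprocess.run([
    "lm_eval",
    "--model", "hf",
    "--model_args", "pretrained=suayptalha/Falcon3-Jessi-v0.4-7B-Slerp,dtype=bfloat16",
    "--tasks", "leaderboard",
    "--batch_size", "auto",
], check=True)
```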

