---
library_name: transformers
tags:
  - mergekit
  - merge
base_model:
  - CultriX/SeQwence-14Bv2
  - CultriX/Qwestion-14B
  - CultriX/SeQwence-14Bv1
model-index:
  - name: SeQwence-14Bv3
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 57.19
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 46.39
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 22.13
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 15.32
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 17.27
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 48.17
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv3
          name: Open LLM Leaderboard
---

# SeQwence-14Bv3

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [CultriX/SeQwence-14Bv1](https://huggingface.co/CultriX/SeQwence-14Bv1) as the base.
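The DARE TIES mechanics can be illustrated on toy arrays. This is a simplified sketch, not mergekit's actual implementation: DARE randomly drops entries of each model's delta from the base and rescales the survivors, and TIES elects a per-parameter sign by weighted majority before combining. All array sizes below are illustrative; the weights and densities are taken from the first slice of the configuration further down.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "task vectors": differences between each fine-tuned model and the base.
base = rng.normal(size=100)
deltas = [rng.normal(size=100) for _ in range(3)]
weights = [0.349, 0.479, 0.083]   # per-model weights (first slice of the config)
densities = [1.0, 1.0, 0.910]     # fraction of delta entries kept (DARE)

def dare(delta, density, rng):
    """DARE: drop each delta entry with prob (1 - density), rescale the rest."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties(base, deltas, weights, densities, rng):
    """Simplified DARE-TIES: sparsify the deltas, elect a per-parameter sign
    by weighted majority, and sum only the contributions that agree with it."""
    sparse = [w * dare(d, k, rng) for w, d, k in zip(weights, deltas, densities)]
    elected = np.sign(sum(sparse))                       # TIES sign election
    merged_delta = sum(np.where(np.sign(s) == elected, s, 0.0) for s in sparse)
    return base + merged_delta

merged = dare_ties(base, deltas, weights, densities, rng)
print(merged.shape)  # (100,)
```

With `density: 1.0` (as in most slices of this config) the DARE step is a no-op and only the TIES sign election applies.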

### Models Merged

The following models were included in the merge:

- [CultriX/Qwestion-14B](https://huggingface.co/CultriX/Qwestion-14B)
- [CultriX/SeQwence-14Bv2](https://huggingface.co/CultriX/SeQwence-14Bv2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: CultriX/SeQwence-14Bv1
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 8]
    model: CultriX/SeQwence-14Bv1
    parameters:
      density: 1.0
      weight: 0.34927958017496047
  - layer_range: [0, 8]
    model: CultriX/Qwestion-14B
    parameters:
      density: 1.0
      weight: 0.4785529567298472
  - layer_range: [0, 8]
    model: CultriX/SeQwence-14Bv2
    parameters:
      density: 0.9095619834430182
      weight: 0.08292400341270245
- sources:
  - layer_range: [8, 16]
    model: CultriX/SeQwence-14Bv1
    parameters:
      density: 1.0
      weight: 0.31847489577754107
  - layer_range: [8, 16]
    model: CultriX/Qwestion-14B
    parameters:
      density: 1.0
      weight: 0.34008726542768253
  - layer_range: [8, 16]
    model: CultriX/SeQwence-14Bv2
    parameters:
      density: 1.0
      weight: -0.010187285487908426
- sources:
  - layer_range: [16, 24]
    model: CultriX/SeQwence-14Bv1
    parameters:
      density: 1.0
      weight: 0.1562216100470764
  - layer_range: [16, 24]
    model: CultriX/Qwestion-14B
    parameters:
      density: 1.0
      weight: 0.31090250951964327
  - layer_range: [16, 24]
    model: CultriX/SeQwence-14Bv2
    parameters:
      density: 0.8226944254037076
      weight: 0.4055505847346826
- sources:
  - layer_range: [24, 32]
    model: CultriX/SeQwence-14Bv1
    parameters:
      density: 1.0
      weight: 0.1478643123383346
  - layer_range: [24, 32]
    model: CultriX/Qwestion-14B
    parameters:
      density: 0.8233564236912981
      weight: 0.34508971280776113
  - layer_range: [24, 32]
    model: CultriX/SeQwence-14Bv2
    parameters:
      density: 1.0
      weight: 0.47963393901209633
- sources:
  - layer_range: [32, 40]
    model: CultriX/SeQwence-14Bv1
    parameters:
      density: 0.9078052860602195
      weight: 0.5051482718423455
  - layer_range: [32, 40]
    model: CultriX/Qwestion-14B
    parameters:
      density: 1.0
      weight: 0.21938011111527006
  - layer_range: [32, 40]
    model: CultriX/SeQwence-14Bv2
    parameters:
      density: 0.9287247232625168
      weight: 0.12414619742696054
- sources:
  - layer_range: [40, 48]
    model: CultriX/SeQwence-14Bv1
    parameters:
      density: 1.0
      weight: 0.1932759286778445
  - layer_range: [40, 48]
    model: CultriX/Qwestion-14B
    parameters:
      density: 0.9846832888894079
      weight: 0.572903756192807
  - layer_range: [40, 48]
    model: CultriX/SeQwence-14Bv2
    parameters:
      density: 1.0
      weight: 0.33759567132306584
```

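With `normalize: 1.0`, mergekit rescales the per-slice weights so they sum to 1 before the deltas are combined, so only the weights' relative proportions matter. A quick sketch of that rescaling for the first slice's weights (values copied from the config above):

```python
# Raw per-model weights for the [0, 8] layer slice.
weights = [0.34927958017496047, 0.4785529567298472, 0.08292400341270245]

# Normalization: divide each weight by the slice total.
total = sum(weights)
normalized = [w / total for w in weights]

# The normalized weights sum to 1 (up to floating point).
print(sum(normalized))
```

Note that one slice in this config carries a small negative weight, so the corresponding model's delta is subtracted rather than added for those layers.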
## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv3).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 34.41 |
| IFEval (0-Shot)     | 57.19 |
| BBH (3-Shot)        | 46.39 |
| MATH Lvl 5 (4-Shot) | 22.13 |
| GPQA (0-shot)       | 15.32 |
| MuSR (0-shot)       | 17.27 |
| MMLU-PRO (5-shot)   | 48.17 |
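The reported average is the plain arithmetic mean of the six benchmark scores, which can be checked directly:

```python
# Benchmark scores from the table above.
scores = {
    "IFEval (0-Shot)": 57.19,
    "BBH (3-Shot)": 46.39,
    "MATH Lvl 5 (4-Shot)": 22.13,
    "GPQA (0-shot)": 15.32,
    "MuSR (0-shot)": 17.27,
    "MMLU-PRO (5-shot)": 48.17,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 34.41
```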