---
tags:
  - merge
  - mergekit
  - lazymergekit
base_model:
  - abideen/AlphaMonarch-dora
  - mayflowergmbh/Wiedervereinigung-7b-dpo
  - flemmingmiguel/NeuDist-Ro-7B
  - ResplendentAI/Flora_DPO_7B
  - yleo/EmertonMonarch-7B
  - occiglot/occiglot-7b-de-en-instruct
  - OpenPipe/mistral-ft-optimized-1227
  - DiscoResearch/DiscoLM_German_7b_v1
  - LeoLM/leo-mistral-hessianai-7b
  - DRXD1000/Phoenix
  - VAGOsolutions/SauerkrautLM-7b-v1-mistral
  - malteos/hermeo-7b
  - FelixChao/WestSeverus-7B-DPO-v2
  - cognitivecomputations/openchat-3.5-0106-laser
license: cc-by-nc-4.0
---

# Spaetzle-v69-7b

This is a progressive merge (mostly DARE-TIES, with some SLERP steps), intended as a suitable compromise for English and German local tasks.

There is also a Q4_K_M quantized GGUF (see the llama-cpp-python sketch at the end of the Usage section).

Running quantized, it achieves the following EQ-Bench scores:

- German EQ-Bench (v2_de): 62.59 (Parseable: 171.0)
- English EQ-Bench (v2): 76.43 (Parseable: 171.0)

It should work sufficiently well with the ChatML prompt template, since all merged models should have seen ChatML prompts at least in the DPO stage.
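
For reference, a ChatML prompt for a single user turn looks like this (the `apply_chat_template` call in the Usage section below should render this format automatically):

```
<|im_start|>user
What is a large language model?<|im_end|>
<|im_start|>assistant
```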

Spaetzle-v69-7b is a merge of cstr/Spaetzle-v68-7b and abideen/AlphaMonarch-dora, made using LazyMergekit. The merge tree in total involves the original models listed in the metadata above.

## 🧩 Configuration

```yaml
models:
  - model: cstr/Spaetzle-v68-7b
    # no parameters necessary for base model
  - model: abideen/AlphaMonarch-dora
    parameters:
      density: 0.60
      weight: 0.30
merge_method: dare_ties
base_model: cstr/Spaetzle-v68-7b
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
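
As a minimal sketch, the merge could be reproduced locally with mergekit's `mergekit-yaml` CLI, assuming the configuration above is saved as `config.yaml` (the output directory name is arbitrary; this is not necessarily the exact command used to build the released model):

```python
# Reproduction sketch (notebook-style); assumes config.yaml holds the YAML above.
!pip install -qU mergekit
# tokenizer_source: base in the config already selects the base model's tokenizer.
!mergekit-yaml config.yaml ./Spaetzle-v69-7b
```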

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/Spaetzle-v69-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the ChatML prompt from the chat messages.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
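
For the Q4_K_M GGUF mentioned above, here is a minimal sketch using llama-cpp-python; the model file name is an assumption, so adjust it to the file you actually downloaded:

```python
# Hypothetical sketch: running the quantized GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="spaetzle-v69-7b.Q4_K_M.gguf",  # assumed file name; use your actual download
    chat_format="chatml",  # the model expects ChatML prompts
    n_ctx=2048,
)

out = llm.create_chat_completion(
    # German prompt ("What is a large language model?") to exercise the bilingual merge.
    messages=[{"role": "user", "content": "Was ist ein großes Sprachmodell?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```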