# Neo_7b-merge8

Neo_7b-merge8 is a merge of the following models using LazyMergekit:
* [m-a-p/neo_7b](https://huggingface.co/m-a-p/neo_7b)
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)

Each of the seven slice groups below merges three consecutive m-a-p/neo_7b layers with a single DewEfresh/neo_7b layer (every fourth layer: 3, 7, ..., 27), reducing the depth from 28 source layers to 21 output layers.

## 🧩 Configuration

```yaml
slices:  # layer_range values are half-open: [start, end)
  # Group 1 (source layers 0-3 -> output layers 0-2)
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [0, 1]
      - model: DewEfresh/neo_7b
        layer_range: [3, 4]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [1, 2]
      - model: DewEfresh/neo_7b
        layer_range: [3, 4]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [2, 3]
      - model: DewEfresh/neo_7b
        layer_range: [3, 4]

  # Group 2 (source layers 4-7 -> output layers 3-5)
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [3, 4]
      - model: DewEfresh/neo_7b
        layer_range: [7, 8]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [4, 5]
      - model: DewEfresh/neo_7b
        layer_range: [7, 8]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [5, 6]
      - model: DewEfresh/neo_7b
        layer_range: [7, 8]

  # Group 3 (source layers 8-11 -> output layers 6-8)
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [6, 7]
      - model: DewEfresh/neo_7b
        layer_range: [11, 12]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [7, 8]
      - model: DewEfresh/neo_7b
        layer_range: [11, 12]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [8, 9]
      - model: DewEfresh/neo_7b
        layer_range: [11, 12]

  # Group 4 (source layers 12-15 -> output layers 9-11)
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [9, 10]
      - model: DewEfresh/neo_7b
        layer_range: [15, 16]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [10, 11]
      - model: DewEfresh/neo_7b
        layer_range: [15, 16]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [11, 12]
      - model: DewEfresh/neo_7b
        layer_range: [15, 16]

  # Group 5 (source layers 16-19 -> output layers 12-14)
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [12, 13]
      - model: DewEfresh/neo_7b
        layer_range: [19, 20]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [13, 14]
      - model: DewEfresh/neo_7b
        layer_range: [19, 20]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [14, 15]
      - model: DewEfresh/neo_7b
        layer_range: [19, 20]

  # Group 6 (source layers 20-23 -> output layers 15-17)
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [15, 16]
      - model: DewEfresh/neo_7b
        layer_range: [23, 24]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [16, 17]
      - model: DewEfresh/neo_7b
        layer_range: [23, 24]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [17, 18]
      - model: DewEfresh/neo_7b
        layer_range: [23, 24]

  # Group 7 (source layers 24-27 -> output layers 18-20)
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [18, 19]
      - model: DewEfresh/neo_7b
        layer_range: [27, 28]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [19, 20]
      - model: DewEfresh/neo_7b
        layer_range: [27, 28]
  - sources:
      - model: m-a-p/neo_7b
        layer_range: [20, 21]
      - model: DewEfresh/neo_7b
        layer_range: [27, 28]

merge_method: slerp
base_model: m-a-p/neo_7b
parameters:
  t: 0.25  # slerp interpolation factor: 0.75 weight for the m-a-p/neo_7b layer, 0.25 for the DewEfresh/neo_7b layer
dtype: bfloat16
output_path: ./merged_reduced_map_dewefresh_neo_7b
model_config:
  architectures: ["LlamaForCausalLM"]
  attention_bias: false
  attention_dropout: 0.0
  hidden_act: "silu"
  hidden_size: 3072
  intermediate_size: 24576
  max_position_embeddings: 8192
  model_type: "llama"
  num_attention_heads: 16
  num_hidden_layers: 21  # Reduced from 28 to 21
  num_key_value_heads: 16
  rms_norm_eps: 1e-05
  rope_theta: 10000.0
  use_cache: true
  vocab_size: 64256
```
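
To reproduce the merge, the configuration above can be saved to a file and passed to mergekit. Below is a minimal sketch, assuming the standard `mergekit-yaml` CLI and an arbitrary filename `config.yaml`; note that `mergekit-yaml` takes the output directory as a positional argument, so the `output_path` and `model_config` keys above may not be consumed by the CLI itself.

```python
# Minimal sketch (not from the original card): run the merge via mergekit's CLI.
# Assumes mergekit is installed (e.g. pip install mergekit) and that the YAML
# configuration above has been saved verbatim to config.yaml.
import subprocess

config_path = "config.yaml"  # hypothetical path holding the YAML above
output_dir = "./merged_reduced_map_dewefresh_neo_7b"

# --copy-tokenizer copies the base model's tokenizer into the output directory.
subprocess.run(
    ["mergekit-yaml", config_path, output_dir, "--copy-tokenizer"],
    check=True,
)
```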

πŸ’» Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "DewEfresh/Neo_7b-merge8"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
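
Because the merge reduces the network from 28 to 21 layers, it can be worth sanity-checking the exported architecture before running inference. A minimal sketch, assuming the repo ships a standard `config.json`:

```python
# Minimal sketch (not from the original card): confirm the merged checkpoint
# matches the model_config above (21 hidden layers, hidden_size 3072).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DewEfresh/Neo_7b-merge8")
assert config.num_hidden_layers == 21, f"unexpected depth: {config.num_hidden_layers}"
assert config.hidden_size == 3072, f"unexpected width: {config.hidden_size}"
print(config)
```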