model_name: "Mistral-7B-Math (LoRA Adapters Only)"
repo: "samzheng/mistral-7b-math-lora"
description:
This repo contains only the LoRA adapter weights (~200 MB) trained for
grade-school symbolic math. Load them on top of the 4-bit base model
to save disk space and download time.
quick_start:
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_id = "unsloth/mistral-7b-instruct-v0.3-bnb-4bit"
lora_id = "yourusername/mistral-7b-math-lora"
tok = AutoTokenizer.from_pretrained(base_id)
# the bnb-4bit checkpoint ships with its quantization config,
# so it loads in 4-bit without an explicit load_in_4bit flag
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, lora_id) # inject adapters
# generate an answer
prompt = "... Alpaca-formatted prompt ..."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
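The prompt placeholder above is left unspecified in this card. If the adapters were trained on the standard Alpaca instruction template (an assumption; verify against the training script), building a prompt would look like the sketch below:

# ASSUMPTION: this is the common Alpaca template; the template actually used
# in training may differ, so check the training configuration first.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)
prompt = ALPACA_TEMPLATE.format(instruction="Solve for x: 3x + 7 = 22. Show your steps.")

If you instead load the base model in full precision, PeftModel.merge_and_unload() will fold the adapters into the base weights for standalone deployment.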
model_tree:
mistralai/Mistral-7B-v0.3 (base model)
→ mistralai/Mistral-7B-Instruct-v0.3 (finetuned)
→ samzheng/Mistral-SymbolicMath-7B-Lora (this repo's LoRA adapters)
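You can confirm this lineage programmatically: the adapter config records the base checkpoint it was trained against. A minimal check, using the repo id from the repo field above:

from peft import PeftConfig
# fetches only adapter_config.json; no weights are downloaded
cfg = PeftConfig.from_pretrained("samzheng/mistral-7b-math-lora")
print(cfg.base_model_name_or_path)  # the base checkpoint the adapters expect

The printed id should name the 4-bit base used in quick_start (or its full-precision parent), depending on how training was configured.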