model_name: "Mistral-7B-Math (LoRA Adapters Only)"
repo: "samzheng/mistral-7b-math-lora"
description: This repo contains only the LoRA adapter weights (~200 MB) trained for grade-school symbolic math. Load them on top of the 4-bit base model to save disk space and download time.

quick_start:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-instruct-v0.3-bnb-4bit"
lora_id = "samzheng/mistral-7b-math-lora"

tok   = AutoTokenizer.from_pretrained(base_id)
base  = AutoModelForCausalLM.from_pretrained(base_id, load_in_4bit=True, device_map="auto")  # pre-quantized 4-bit base
model = PeftModel.from_pretrained(base, lora_id)   # inject adapters

# generate
prompt = "... Alpaca-formatted prompt ..."
out = model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
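
The placeholder prompt above must follow the Alpaca instruction format the adapters were trained on. The exact template is not shown in this card; the sketch below uses the standard Alpaca layout (field names and boilerplate text are assumptions — verify against the actual training config):

```python
# Standard Alpaca prompt template (assumed; check against this repo's training setup).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Fill the Alpaca template with a math problem and optional context."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=input_text)

prompt = build_prompt("Solve for x: 2x + 3 = 11")
```

Pass the resulting string to `tok(...)` as in the quick start; the model's answer appears after the `### Response:` marker in the decoded output.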