# MedMCQA LoRA — Qwen2.5-7B-Instruct
Adapter weights only for `Qwen/Qwen2.5-7B-Instruct`, fine-tuned to answer medical multiple-choice questions (A/B/C/D).
Subjects used for fine-tuning and evaluation: Biochemistry and Physiology.
Educational use only. Not medical advice.
## What's inside
- `adapter_model.safetensors` (LoRA weights)
- `adapter_config.json`
- No tokenizer/template changes; load the tokenizer from the base model
## Quick use (Transformers + PEFT)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import re

BASE = "Qwen/Qwen2.5-7B-Instruct"
ADAPTER = "Pk3112/medmcqa-lora-qwen2.5-7b-instruct"

# The adapter ships no tokenizer files; load the tokenizer from the base model.
tok = AutoTokenizer.from_pretrained(BASE, use_fast=True)
base = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER).eval()

# Prompt format matches training: question, lettered options, then "Answer:".
prompt = (
    "Question: Which nerve supplies the diaphragm?\n"
    "A. Vagus\nB. Phrenic\nC. Intercostal\nD. Accessory\n\n"
    "Answer:"
)

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)
text = tok.decode(out[0], skip_special_tokens=True)

# Pull out the predicted letter; fall back to the raw completion if no match.
m = re.search(r"Answer:\s*([A-D])\b", text)
print(f"Answer: {m.group(1)}" if m else text.strip())
```
Optional 4-bit: create a `BitsAndBytesConfig` and pass it as `quantization_config` to `from_pretrained` (Linux/WSL recommended if using `bitsandbytes`).
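A minimal sketch of that 4-bit load. The NF4 settings mirror the QLoRA setup described under Training, but the exact flags were not published here, so treat them as assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Assumed NF4 settings (common QLoRA defaults); adjust as needed.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    quantization_config=bnb,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Pk3112/medmcqa-lora-qwen2.5-7b-instruct").eval()
```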
## Results (Biochemistry + Physiology)
| Model | Internal val acc (%) | Original val acc (%) | TTFT (ms) | Gen time (ms) | In/Out tokens |
|---|---|---|---|---|---|
| Qwen2.5-7B (LoRA) | 76.50 | 67.84 | 546 | 1623 | 81 / 15 |

TTFT = time to first token; In/Out tokens = prompt vs. generated token counts.
Why Qwen as the default: it achieved higher accuracy on the original (external) validation set and much lower latency than Llama in our setup.
## Training (summary)
- Frameworks: Unsloth + PEFT/LoRA (QLoRA NF4)
- LoRA: `r=32, alpha=64, dropout=0.0`; target modules `q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj` (see the config sketch after this list)
- Max seq length: `768`
- Objective: answer-only target (`Answer: <A/B/C/D>`)
- Split: stratified 70/30 on `subject_name` (Biochemistry, Physiology)
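For reference, a PEFT `LoraConfig` matching the hyperparameters above. This is a sketch, not the exact training config (which lives in the GitHub repo below); fields like `bias` and `task_type` are assumed:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",            # assumed; not stated in the summary
    task_type="CAUSAL_LM",
)
```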
## Training code & reproducibility
- GitHub repo: https://github.com/PranavKumarAV/MedMCQA-Chatbot-Finetune-Medical-AI
- Release (code snapshot): https://github.com/PranavKumarAV/MedMCQA-Chatbot-Finetune-Medical-AI/releases/tag/v1.0-medmcqa
## License & usage
- Adapter: Apache-2.0 (this repo)
- Base model: `Qwen/Qwen2.5-7B-Instruct` (Apache-2.0); obtain from its Hugging Face page
- Dataset: `openlifescienceai/medmcqa`; follow the dataset license
- Safety: Educational use only. Not medical advice.