# Bio-Medical LLaMA 3 8B - Fine-Tuned
Fine-tuned version of ContactDoctor/Bio-Medical-Llama-3-8B, trained with Unsloth for enhanced medical Q&A capabilities.
## Model Details
- Model Name: Bio-Medical LLaMA 3 8B - Fine-Tuned
- Base Model: ContactDoctor/Bio-Medical-Llama-3-8B
- Fine-Tuning Method: QLoRA with Unsloth
- Domain: Medical Question Answering
- Dataset: Medical Q&A dataset (MQA.json)
## Training Configuration
- Epochs: 4
- Batch Size: 2
- Gradient Accumulation: 4
- Learning Rate: 2e-4
- Optimizer: AdamW (8-bit)
- Weight Decay: 0.01
- Warmup Steps: 50
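The hyperparameters above can be collected into a single config, here as a plain dict using transformers `TrainingArguments`-style field names (the field names are an assumption for illustration; the card only states the values). Note that with gradient accumulation, the batch size seen by the optimizer is larger than the per-device micro-batch:

```python
# Sketch of the training hyperparameters from the card, keyed with
# TrainingArguments-style names (names assumed; values are from the card).
training_config = {
    "num_train_epochs": 4,
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "learning_rate": 2e-4,
    "optim": "adamw_8bit",       # 8-bit AdamW
    "weight_decay": 0.01,
    "warmup_steps": 50,
    "seed": 3407,
}

# Effective batch size = micro-batch x accumulation steps.
effective_batch_size = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(effective_batch_size)  # 8
```

So each optimizer step averages gradients over 8 examples, while only 2 examples ever sit in GPU memory at once.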
## LoRA Parameters
- LoRA Rank (r): 16
- LoRA Alpha: 16
- LoRA Dropout: 0
- Bias: None
- Target Modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj
- Gradient Checkpointing: Enabled (Unsloth)
- Random Seed: 3407
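With rank 16 adapters on all seven projection modules, the number of trainable parameters can be estimated from the standard Llama-3-8B layer shapes (hidden size 4096, grouped-query-attention KV dim 1024, MLP intermediate 14336, 32 decoder layers). These dimensions come from the base architecture, not from the card, so treat this as a back-of-the-envelope sketch:

```python
# Estimate trainable LoRA parameters for r=16 adapters on all target modules.
# Layer shapes are the standard Llama-3-8B dimensions (assumption: the card
# does not restate them).
R = 16  # LoRA rank from the card

# (in_features, out_features) for each targeted projection in one decoder layer
target_modules = {
    "q_proj": (4096, 4096),
    "k_proj": (4096, 1024),   # grouped-query attention: 8 KV heads
    "v_proj": (4096, 1024),
    "o_proj": (4096, 4096),
    "gate_proj": (4096, 14336),
    "up_proj": (4096, 14336),
    "down_proj": (14336, 4096),
}

# Each LoRA adapter adds A (r x in) and B (out x r): r * (in + out) params.
per_layer = sum(R * (fin + fout) for fin, fout in target_modules.values())
total = per_layer * 32  # 32 decoder layers
print(f"{total:,} trainable parameters")  # 41,943,040
```

That is roughly 42M trainable parameters, about 0.5% of the 8B base model, which is what makes QLoRA fine-tuning feasible on a single consumer GPU.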
## Model Capabilities
- Optimized for low-memory inference
- Supports long medical queries
- Parameter-efficient fine-tuning (QLoRA)
## Usage
This model is suitable for medical question answering, clinical chatbot applications, and biomedical research assistance.
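A minimal inference sketch with Hugging Face transformers is shown below. The repo id `khalednabawi11/Medical-Llama-Finetuned` comes from the model tree; the system prompt, sample question, and generation settings are illustrative assumptions, not values specified by the card:

```python
# Hedged sketch: query the fine-tuned model for a medical answer.
# Generation settings and prompts are assumptions, not from the card.
MODEL_ID = "khalednabawi11/Medical-Llama-Finetuned"


def build_messages(question: str) -> list:
    """Wrap a medical question in the chat format used by Llama-3 templates."""
    return [
        {"role": "system", "content": "You are a helpful biomedical assistant."},
        {"role": "user", "content": question},
    ]


def answer(question: str, max_new_tokens: int = 256) -> str:
    # Imports kept local so build_messages() works without a GPU environment.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(answer("What are the common symptoms of anemia?"))
```

As with any medical LLM, generated answers should be reviewed by a qualified professional before clinical use.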
## Contributions & Feedback

Open to collaboration! Feel free to reach out.
## Model Tree

- Repository: khalednabawi11/Medical-Llama-Finetuned
- Base model: meta-llama/Meta-Llama-3-8B-Instruct
- Fine-tuned from: ContactDoctor/Bio-Medical-Llama-3-8B