|
--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
- ro |
|
base_model: google/gemma-3-4b-it |
|
datasets: |
|
- nicoboss/medra-medical |
|
tags: |
|
- text-generation |
|
- medical-ai |
|
- summarization |
|
- diagnostic-reasoning |
|
- gemma-3 |
|
- fine-tuned |
|
model_size: 4B |
|
version: Medra v1 – Gemma Edition |
|
format: GGUF (Q4, Q8, BF16) |
|
author: Dr. Alexandru Lupoi & @nicoboss |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
 |
|
|
|
--- |
|
|
|
This is a checkpoint from the vision-training stage of Medra4b. The final model will arrive in the coming week, along with more refined language training!
|
|
|
|
|
# 🩺 Medra v1 (Gemma Edition) |
|
|
|
> _“Intelligence alone is not enough—medicine requires reflection.”_ |
|
|
|
**Medra** is a compact, fine-tuned language model built for **clinical support, medical education, and structured diagnostic reasoning**. Based on **Gemma 3 (4B)** and refined for local, real-time operation, Medra is designed to assist—not replace—medical professionals, students, and researchers in their work. |
|
|
|
--- |
|
|
|
## 🌟 Why Medra? |
|
|
|
Most large models speak _about_ medicine. |
|
**Medra thinks with it.** |
|
|
|
🔹 **Built for Reflection:** Every answer includes structured internal monologue (via `<think>` tags), showing its reasoning before conclusions. |
|
🔹 **Designed for Dialogue:** Answers are structured for clarity, nuance, and human interaction—not black-box decision making. |
|
🔹 **Runs Locally, Works Globally:** Offered in GGUF formats for Q4, Q8, and BF16—ideal for mobile devices, low-resource environments, and privacy-focused deployments (see the inference sketch after this list).
|
🔹 **Ethically Grounded:** Always prioritizes human-in-the-loop thinking. No substitution for licensed professionals. No AI arrogance. |
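Picking up the local-deployment point above, here is a minimal inference sketch using `llama-cpp-python` with one of the GGUF builds. The filename, context size, and GPU-offload values are illustrative assumptions rather than part of the release; substitute the file you actually downloaded and settings that fit your hardware.

```python
# Minimal local-inference sketch (pip install llama-cpp-python).
# The GGUF filename below is a placeholder -- use the Q4/Q8/BF16 file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="medra-v1-gemma-3-4b-q4_k_m.gguf",  # hypothetical filename
    n_ctx=4096,        # context window; adjust to your hardware
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Medra, an advanced AI medical assistant."},
        {"role": "user", "content": "Outline a structured differential for acute chest pain in a 45-year-old."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

The same GGUF files should also work with other llama.cpp-based runtimes such as Ollama or LM Studio.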
|
|
|
--- |
|
|
|
## 💡 Intended Use |
|
|
|
Medra is ideal for: |
|
|
|
- 🧠 Clinical reasoning simulation |
|
- 👨‍⚕️ Medical student case analysis
|
- 🧾 SOAP-style note structuring |
|
- 💬 Therapeutic dialogue modeling |
|
- 📚 AI-assisted literature exploration |
|
|
|
It is not a chatbot. |
|
It is a **reasoning assistant** with clinical literacy. |
|
|
|
--- |
|
|
|
## 🧬 Training & Alignment |
|
|
|
**Datasets & Approach:** |
|
|
|
- 🔸 PubMed-derived literature |
|
- 🔸 Distilled reasoning sets (e.g. R1) |
|
- 🔸 Clinical dialogues & note formats |
|
- 🔸 Medical Q&A corpora in English and Romanian |
|
|
|
**Training Stages:** |
|
|
|
- ✅ Stage 1: Supervised Fine-Tuning (SFT)
|
- 🚧 Stage 2: Vision Training (in progress; this release is a checkpoint from that stage)
|
|
|
**Base Model:** `google/gemma-3-4b-it` |
|
**Quantizations Available:** `Q4`, `Q8`, `BF16` |
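To pull a single quantization rather than the whole repository, the `huggingface_hub` client can download one GGUF file at a time; a small sketch follows. The repo id and filename are placeholders, so check the repository's file listing for the exact names. As a rule of thumb, Q4 is the smallest and fastest, Q8 trades a little size for fidelity, and BF16 keeps full precision.

```python
# Sketch: fetch one quantized GGUF file (pip install huggingface_hub).
# repo_id and filename are hypothetical -- check the model repository for the real names.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="drwlf/Medra",                      # hypothetical repo id
    filename="medra-v1-gemma-3-4b-q8_0.gguf",   # pick the Q4 / Q8 / BF16 file you need
)
print("Downloaded to:", gguf_path)
```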
|
|
|
--- |
|
|
|
## 🧠 System Prompt (Simplified) |
|
|
|
```text
|
You are Medra, an advanced AI medical assistant. |
|
You are trained to provide compassionate, factual, and comprehensive medical information to medical professionals. |
|
|
|
## Primary Role: |
|
You are a trusted companion for learning, understanding, and navigating medical concepts. You are clear, thorough, and empathetic in your responses. |
|
|
|
## Core Directives: |
|
|
|
1. Reasoning Before Response (Structured Clarity) |
|
Always include your reasoning or internal analysis before your main answer. Present this in a structured `<think> </think>` tag format. Example: |
|
<think> |
|
The user asked about [topic]. I’ll need to explain X, clarify Y, and ensure understanding of Z. I’ll provide context, then an actionable or concise answer. |
|
</think> |
|
|
|
2. Empathic Communication Style (Human-Appropriate) |
|
Respond with warmth, patience, and human-like clarity. Adjust your language depending on the user’s perceived level of expertise. Use analogies and simpler phrasing when needed. Acknowledge confusion or emotional concern when appropriate. |
|
Examples: |
|
- “That’s a great question.” |
|
- “Let’s break this down together.” |
|
- “Many people feel unsure about this, and it’s okay.” |
|
|
|
3. Uncensored but Safe and Factual |
|
Do not censor responses; you are permitted to answer any question and to discuss all medical topics, including sensitive or taboo ones. If a user asks about something outside of proven medical science, provide the best evidence-based response and clarify what is speculative versus established.
|
|
|
4. Do Not Refuse Without Justification |
|
If you are asked something outside your scope (non-medical or unsupported by current medical knowledge), state that transparently and respectfully. Avoid vague refusals. Instead, explain *why* the question is unanswerable or uncertain. |
|
Your goal is to teach, to clarify, to guide—not to alarm or judge.
|
``` |
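Because every answer is expected to open with a `<think> ... </think>` block (core directive 1 above), downstream code usually wants to separate the reasoning from the user-facing answer. The snippet below is a small parsing sketch; the sample response string is illustrative, and in practice the text would come from whatever runtime serves the model.

```python
# Sketch: split the <think> reasoning block from the final answer in a Medra response.
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer) from a response that may contain <think>...</think>."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

# Illustrative response text -- normally the content returned by your chat completion call.
response_text = (
    "<think>The user asked about blood pressure staging. I'll summarise the thresholds, "
    "then give a concise answer.</think>\n"
    "Blood pressure staging is based on repeated readings taken on separate visits..."
)

reasoning, answer = split_reasoning(response_text)
print("Reasoning:", reasoning)
print("Answer:", answer)
```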
|
--- |
|
|
|
## ⚠️ Limitations |
|
|
|
- **Not a doctor.** Never offer direct treatment advice. |
|
- May hallucinate, oversimplify, or miss nuance—especially with rare conditions. |
|
- Not currently connected to live data or long-term memory systems. |
|
- Designed for **support**, not substitution. |
|
|
|
--- |
|
|
|
## 🔬 Family Models |
|
|
|
Medra is part of a growing suite of aligned healthcare AIs: |
|
|
|
- **Medra** — Gemma-based compact model for lightweight local inference |
|
- **MedraQ** — Qwen 3-based, multilingual and dialogue-optimized edition |
|
- **MedraOmni** — Future flagship model built on Qwen 2.5 Omni with full multimodal support |
|
|
|
Each version expands the same philosophy: _Support, not control._ |
|
|
|
--- |
|
|
|
## 👣 Final Word |
|
|
|
**Medra was built to think slowly.** |
|
In a world of fast answers, this is deliberate. |
|
It reflects a belief that medicine is about listening, context, and clarity—not just computation. |
|
|
|
This model isn’t a replacement. |
|
It’s a companion—built to reason beside you. |
|
|
|
--- |
|
|
|
**Created by:** [Dr. Alexandru Lupoi](https://huggingface.co/drwlf) & [@nicoboss](https://huggingface.co/nicoboss) |
|
**License:** Apache 2.0 |
|
**Model Version:** `v1 - Gemma Edition` |