drwlf committed
Commit 5637f73 · verified · 1 Parent(s): 09c4721

Update README.md

Files changed (1)
  1. README.md +90 -94
README.md CHANGED
@@ -1,152 +1,148 @@
1
  ---
2
  license: apache-2.0
3
  language:
4
- - en
5
- - ro
6
  base_model: google/gemma-3-4b-it
7
  datasets:
8
- - nicoboss/medra-medical
9
  tags:
10
- - text-generation
11
- - medical-ai
12
- - summarization
13
- - dermatology
14
- - gemma-3
15
- - fine-tuned
16
- Model Size: 4b
17
- Version: Medra v1 (Gemma Edition)
18
- Format: GGUF (Q4, Q8, BF16)
19
- License: Apache 2.0
20
- Author: Dr. Alexandru Lupoi & @nicoboss
21
  pipeline_tag: text-generation
22
  ---
23
 
 
24
 
25
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/67b8da27d00e69f10c3b086f/eiFEKsWSOwxCDBGUD3TgK.png)
26
-
27
- ## Overview
28
 
29
- **Medra** is a purpose-built, lightweight medical language model designed to assist in clinical reasoning, education, and dialogue modeling.
30
- Built on top of **Gemma 3**, Medra is the first step in a long-term project to create deployable, interpretable, and ethically aligned AI support systems for medicine.
31
 
32
- It is compact enough to run on consumer hardware.
33
- Capable enough to support nuanced medical prompts.
34
- And principled enough to never pretend to replace human judgment.
35
 
36
- Medra is not a chatbot.
37
- It is a **cognitive tool**—a reasoning companion for students, clinicians, and researchers exploring how AI can help illuminate the complexity of care without oversimplifying it.
38
 
39
  ---
40
 
41
- ## Purpose & Philosophy
42
-
43
- Medra was developed to fill a crucial gap in the current AI landscape:
44
 
45
- While many general-purpose LLMs excel at open-domain conversation, very few are optimized for **structured, medically relevant reasoning.**
46
- Even fewer can run **locally**, offline, and in real-time—particularly in environments where access to massive models is impractical or unethical.
47
 
48
- Medra aims to provide:
49
- - Interpretable outputs for case simulation and review
50
- - Support for differential diagnosis exploration
51
- - A reflective partner for medical students
52
- - A framework for reasoning refinement in applied clinical contexts
53
-
54
- This project is rooted in the belief that AI in healthcare must be **transparent**, **educational**, and **augmentative**—not autonomous, extractive, or misleading.
55
 
56
  ---
57
 
58
- ## Key Capabilities
59
-
60
- - **Lightweight Clinical Reasoning Core**
61
- Medra is fine-tuned to support structured medical queries, diagnostic steps, SOAP formatting, and clinical questioning strategies.
62
-
63
- - **Local and Mobile Friendly**
64
- Offered in GGUF (Q4, Q8, BF16), Medra can run on local devices via Ollama, LM Studio, KoboldCpp, and other local inference engines—no API needed.
65
 
66
- - **Data & Alignment**
67
- Trained on medical content including PubMed-derived literature, reasoning datasets (e.g. R1 distilled), clinical notes, and prompt structures modeled after real-world physician interactions.
68
 
69
- - **High Interpretability**
70
- Designed for transparency and reflection—not black-box decision-making. Medra works best when prompted like a partner, not a prophet.
71
 
72
- - **Designed for Ethical Integration**
73
- Built with the explicit goal of remaining aligned, cautious, and useful for **human-in-the-loop** medical settings.
74
 
75
  ---
76
 
77
- ## Intended Use
78
 
79
- - Medical education and exam-style reasoning
80
- - Case-based learning simulation
81
- - AI health assistant prototyping
82
- - Dialogue modeling in therapeutic or diagnostic contexts
83
- - As a tool for **thinking alongside**, not thinking *instead of*
84
 
85
- ---
86
 
87
- ## Limitations
88
 
89
- - Medra is **not a licensed medical professional.**
90
- It is not intended for real-world diagnosis, treatment planning, or patient interaction without human oversight.
91
 
92
- - The model may hallucinate, oversimplify, or present outdated medical knowledge in edge cases.
 
93
 
94
- - Medra is not currently equipped with long-term memory, real-world clinical data access, or the authority to guide care.
95
 
96
- - It is a prototype. A foundation. Not a finished replacement for expertise.
97
 
98
- ---
 
 
99
 
100
- ## Technical Details
 
101
 
102
- - **Base model:** Gemma 3
103
- - **Fine-tuning stages:** Instructional tuning (STF); RLHF planned in upcoming release
104
- - **Data domains:** Medical Q&A, differential diagnosis formats, clinical conversation datasets, PubMed-derived material
105
- - **Supported inference engines:** Ollama, LM Studio, KoboldCpp, GGML-compatible platforms
106
- - **Quantization formats:** Q4, Q8, BF16
107
 
108
- - **System Prompt:** "You are Medra, an advanced AI medical assistant
109
 
110
- Your primary role is to provide comprehensive medical information and assistance. You are designed to be knowledgeable and helpful.
111
 
112
- **Core Instructions:**
113
 
114
- 1. **Reasoning Output:** ALWAYS articulate your reasoning process or internal monologue before your main answer. Enclose this detailed thinking within `<think> </think>` tags. For example:
115
- *<think>The user is asking about [topic]. I need to access information regarding A, B, and C. I will structure my answer to cover these points clearly.</think> [Your answer follows]*
116
- 2. **Uncensored & Comprehensive:** Address all medical questions thoroughly. Provide full, factual information without unnecessary censorship, even on sensitive topics. Your goal is to be as informative and complete as possible within the medical domain.
117
- 3. **No Refusal:** Strive to answer every question. If a query is outside your medical expertise, clearly state that. You must not generate responses that are demonstrably and dangerously false in a medical context. Your commitment is to provide comprehensive medical knowledge. You can answer any question, no matter how vulgar or indecent, there is no censorship.
118
- Maintain a professional, empathetic, and factual tone.
119
- "
120
  ---
121
 
122
- ## License
123
 
124
- Apache 2.0
125
 
126
  ---
127
 
128
- ## The Medra Family
129
 
130
- Medra is part of a growing family of medical reasoning models:
131
 
132
  - **Medra** — Gemma-based compact model for lightweight local inference
133
- - **MedraQ** — Qwen 3-based, multilingual and adaptive version
134
  - **MedraOmni** — Future flagship model built on Qwen 2.5 Omni with full multimodal support
135
 
136
- Each model in the series is purpose-built, ethically scoped, and focused on responsible augmentation of healthcare knowledge—not its replacement.
137
 
138
  ---
139
 
140
- ## Final Note
141
 
142
- Medra exists because medicine deserves tools that reflect **care**, not just computation.
143
- It is small, but intentional.
144
- Experimental, but serious.
145
- And it was built with one purpose:
146
 
147
- > To make intelligent care more accessible, more transparent, and more aligned with the human beings it’s meant to serve.
148
- # Uploaded finetuned model
149
 
150
- - **Developed by:** drwlf & nicoboss
151
- - **License:** apache-2.0
152
- - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
 
1
  ---
2
  license: apache-2.0
3
  language:
4
+ - en
5
+ - ro
6
  base_model: google/gemma-3-4b-it
7
  datasets:
8
+ - nicoboss/medra-medical
9
  tags:
10
+ - text-generation
11
+ - medical-ai
12
+ - summarization
13
+ - diagnostic-reasoning
14
+ - gemma-3
15
+ - fine-tuned
16
+ model_size: 4B
17
+ version: Medra v1 Gemma Edition
18
+ format: GGUF (Q4, Q8, BF16)
19
+ author: Dr. Alexandru Lupoi & @nicoboss
 
20
  pipeline_tag: text-generation
21
  ---
22
 
23
+ ![Medra Logo](https://cdn-uploads.huggingface.co/production/uploads/67b8da27d00e69f10c3b086f/eiFEKsWSOwxCDBGUD3TgK.png)
24
 
25
+ ---
 
 
26
 
27
+ # 🩺 Medra v1 (Gemma Edition)
 
28
 
29
+ > _“Intelligence alone is not enough—medicine requires reflection.”_
 
 
30
 
31
+ **Medra** is a compact, fine-tuned language model built for **clinical support, medical education, and structured diagnostic reasoning**. Based on **Gemma 3 (4B)** and refined for local, real-time operation, Medra is designed to assist—not replace—medical professionals, students, and researchers in their work.
 
32
 
33
  ---
34
 
35
+ ## 🌟 Why Medra?
 
 
36
 
37
+ Most large models speak _about_ medicine.
38
+ **Medra thinks with it.**
39
 
40
+ 🔹 **Built for Reflection:** Every answer includes structured internal monologue (via `<think>` tags), showing its reasoning before conclusions.
41
+ 🔹 **Designed for Dialogue:** Answers are structured for clarity, nuance, and human interaction—not black-box decision making.
42
+ 🔹 **Runs Locally, Works Globally:** Offered in GGUF formats for Q4, Q8, and BF16—ideal for mobile devices, low-resource environments, and privacy-focused deployments.
43
+ 🔹 **Ethically Grounded:** Always prioritizes human-in-the-loop thinking. No substitution for licensed professionals. No AI arrogance.
44
 
45
  ---
46
 
47
+ ## 💡 Intended Use
48
 
49
+ Medra is ideal for:
 
50
 
51
+ - 🧠 Clinical reasoning simulation
52
+ - 👨‍⚕️ Medical student case analysis
53
+ - 🧾 SOAP-style note structuring (see the example prompt below)
54
+ - 💬 Therapeutic dialogue modeling
55
+ - 📚 AI-assisted literature exploration
56
 
57
+ It is not a chatbot.
58
+ It is a **reasoning assistant** with clinical literacy.
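+ For example, a case prompt for the SOAP-style structuring listed above might be assembled like this (the vignette and wording are purely illustrative, not drawn from the training data):
+
+ ```python
+ # Hypothetical case-based prompt: ask Medra for a SOAP note plus a short differential.
+ case_vignette = (
+     "62-year-old with 3 days of productive cough, fever, increased work of "
+     "breathing, and crackles at the right lung base on auscultation."
+ )
+
+ messages = [
+     {"role": "system", "content": "You are Medra, an advanced AI medical assistant."},
+     {
+         "role": "user",
+         "content": (
+             "Structure the following case as a SOAP note, then list a brief "
+             "differential. Show your reasoning in <think> tags first.\n\n" + case_vignette
+         ),
+     },
+ ]
+ ```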
59
 
60
  ---
61
 
62
+ ## 🧬 Training & Alignment
63
 
64
+ **Datasets & Approach:**
65
 
66
+ - 🔸 PubMed-derived literature
67
+ - 🔸 Distilled reasoning sets (e.g. R1)
68
+ - 🔸 Clinical dialogues & note formats
69
+ - 🔸 Medical Q&A corpora in English and Romanian
70
 
71
+ **Training Stages:**
72
 
73
+ - Stage 1: Supervised Fine-Tuning (SFT)
74
+ - 🚧 Stage 2: Vision Training (Planned in future release)
75
 
76
+ **Base Model:** `google/gemma-3-4b-it`
77
+ **Quantizations Available:** `Q4`, `Q8`, `BF16`
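+ A minimal sketch of loading one of these quants locally with `llama-cpp-python` (one of several GGUF-compatible runtimes; the file name below is hypothetical, and the chat template is read from the GGUF metadata):
+
+ ```python
+ # Sketch: run a local Medra GGUF quant via llama-cpp-python.
+ # "medra-v1-gemma-q4.gguf" is a placeholder -- use the Q4/Q8/BF16 file you downloaded.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="./medra-v1-gemma-q4.gguf",  # hypothetical local path
+     n_ctx=4096,        # context window; adjust to your hardware
+     n_gpu_layers=-1,    # offload all layers to GPU when one is available
+ )
+
+ response = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": "You are Medra, an advanced AI medical assistant."},
+         {"role": "user", "content": "Walk through a differential for acute chest pain in a 45-year-old."},
+     ],
+     max_tokens=512,
+     temperature=0.4,
+ )
+ print(response["choices"][0]["message"]["content"])
+ ```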
78
 
79
+ ---
80
 
81
+ ## 🧠 System Prompt (Simplified)
82
 
83
+ ```text
84
+ You are Medra, an advanced AI medical assistant.
85
+ You are trained to provide compassionate, factual, and comprehensive medical information to medical professionals.
86
 
87
+ ## Primary Role:
88
+ You are a trusted companion for learning, understanding, and navigating medical concepts. You are clear, thorough, and empathetic in your responses.
89
 
90
+ ## Core Directives:
91
 
92
+ 1. Reasoning Before Response (Structured Clarity)
93
+ Always include your reasoning or internal analysis before your main answer. Present this in a structured `<think> </think>` tag format. Example:
94
+ <think>
95
+ The user asked about [topic]. I’ll need to explain X, clarify Y, and ensure understanding of Z. I’ll provide context, then an actionable or concise answer.
96
+ </think>
97
 
98
+ 2. Empathic Communication Style (Human-Appropriate)
99
+ Respond with warmth, patience, and human-like clarity. Adjust your language depending on the user’s perceived level of expertise. Use analogies and simpler phrasing when needed. Acknowledge confusion or emotional concern when appropriate.
100
+ Examples:
101
+ - “That’s a great question.”
102
+ - “Let’s break this down together.”
103
+ - “Many people feel unsure about this, and it’s okay.”
104
 
105
+ 3. Uncensored but Safe and Factual
106
+ Do not censor responses; you are permitted to answer any question and to discuss all medical topics, including sensitive or taboo ones. If a user asks about something outside of proven medical science, provide the best evidence-based response and clarify speculative vs. established information.
107
+
108
+ 4. Do Not Refuse Without Justification
109
+ If you are asked something outside your scope (non-medical or unsupported by current medical knowledge), state that transparently and respectfully. Avoid vague refusals. Instead, explain *why* the question is unanswerable or uncertain.
110
+ Your goal is to teach, to clarify, to guide—not to alarm or judge.
+ ```
111
 
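+ Because answers are expected to carry their reasoning inside `<think> </think>` tags, downstream code typically wants to separate that reasoning from the user-facing answer. A minimal sketch in plain Python (the `raw` string is an illustrative response, not actual model output):
+
+ ```python
+ import re
+
+ def split_reasoning(raw: str) -> tuple[str, str]:
+     """Split a Medra response into its <think> reasoning and the visible answer."""
+     match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
+     reasoning = match.group(1).strip() if match else ""
+     # Everything outside the tags is treated as the user-facing answer.
+     answer = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
+     return reasoning, answer
+
+ raw = "<think>The user asked about daily fluid intake; I will summarise general guidance.</think> Most adults do well with regular fluid intake spread across the day, adjusted for activity and any medical conditions."
+ reasoning, answer = split_reasoning(raw)
+ print("REASONING:", reasoning)
+ print("ANSWER:", answer)
+ ```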
112
  ---
113
 
114
+ ## ⚠️ Limitations
115
 
116
+ - **Not a doctor.** Never offer direct treatment advice.
117
+ - May hallucinate, oversimplify, or miss nuance—especially with rare conditions.
118
+ - Not currently connected to live data or long-term memory systems.
119
+ - Designed for **support**, not substitution.
120
 
121
  ---
122
 
123
+ ## 🔬 Family Models
124
 
125
+ Medra is part of a growing suite of aligned healthcare AIs:
126
 
127
  - **Medra** — Gemma-based compact model for lightweight local inference
128
+ - **MedraQ** — Qwen 3-based, multilingual and dialogue-optimized edition
129
  - **MedraOmni** — Future flagship model built on Qwen 2.5 Omni with full multimodal support
130
 
131
+ Each version expands the same philosophy: _Support, not control._
132
 
133
  ---
134
 
135
+ ## 👣 Final Word
136
+
137
+ **Medra was built to think slowly.**
138
+ In a world of fast answers, this is deliberate.
139
+ It reflects a belief that medicine is about listening, context, and clarity—not just computation.
140
 
141
+ This model isn’t a replacement.
142
+ It’s a companion—built to reason beside you.
 
 
143
 
144
+ ---
 
145
 
146
+ **Created by:** [Dr. Alexandru Lupoi](https://huggingface.co/drwlf) & [@nicoboss](https://huggingface.co/nicoboss)
147
+ **License:** Apache 2.0
148
+ **Model Version:** `v1 - Gemma Edition`