Added common note for model use-case
README.md CHANGED
@@ -29,6 +29,14 @@ The model was adapted on a curated mixture (≈410K items) blending synthetic ge
Across widely used evaluation suites (MedQA, MedMCQA, PubMedQA, MMLU medical subsets), Neeto‑1.0‑8b attains strong 7B‑class results. Public benchmark numbers (table below) show it ahead of several prior open biomedical baselines of similar scale. The model will be used on our platform [Medicoplasma](https://medicoplasma.com) for exam preparation and for powering medical applications.

## How to Use

**Important Note:**

This model has been **strictly trained on medical datasets only**. It is not designed for general chit-chat or off-topic questions.

For example, it may not respond meaningfully to prompts like:

- "Hello"
- "Tell me a joke"
- "What’s the weather today?"

👉 Please use the model **only for medical-related tasks**, as that is its intended purpose.

The model follows the default Llama‑3 chat message formatting (no explicit system prompt required). Provide a single user turn containing the question or case vignette; the model returns an answer (option selection, rationale, or free-form explanation depending on the prompt style).
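
As a minimal usage sketch, the standard `transformers` chat-template workflow can be used to send a single user turn. Note that the repo id below is a placeholder (the exact Hub id is not shown in this excerpt), and the dtype/device settings are assumptions you may need to adjust for your hardware.

```python
# Minimal sketch, not an official snippet: substitute the actual Hub repo id
# from this model card for the placeholder below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "medicoplasma/Neeto-1.0-8b"  # placeholder repo id (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption; use float16/float32 if preferred
    device_map="auto",
)

# Single user turn, no system prompt, per the Llama-3 chat format described above.
messages = [
    {
        "role": "user",
        "content": (
            "A 45-year-old man presents with crushing substernal chest pain "
            "radiating to the left arm. What is the most appropriate initial investigation?"
        ),
    }
]

# Apply the model's built-in Llama-3 chat template and append the generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=False,  # deterministic decoding; adjust sampling settings as needed
)

# Decode only the newly generated tokens (the model's answer).
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same messages list can also be passed to a `transformers` `pipeline("text-generation", ...)` if you prefer the higher-level API; the key point is that a single user message in the Llama‑3 chat format is sufficient, with no system prompt required.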