Update README.md
README.md CHANGED
@@ -5,4 +5,27 @@ language:
base_model:
- meta-llama/Meta-Llama-3-8B
pipeline_tag: question-answering
---

## Model Overview
This model is a fine-tuned version of the LLaMA 3.1-8B model, trained on a curated selection of 1,000 samples from the **ChatDoctor (HealthCareMagic-100k)** dataset. It has been optimized for tasks related to medical consultations.

- **Base Model**: LLaMA 3.1-8B
- **Fine-tuning Dataset**: 1,000 samples from the ChatDoctor dataset
- **Output Format**: GGUF (GPT-Generated Unified Format)
- **Quantization**: Q4_0 for efficient inference (see the usage sketch below)
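The card does not include usage instructions, so the following is a minimal sketch of loading the Q4_0 GGUF artifact with `llama-cpp-python`; the file name, context size, and sampling settings are assumptions rather than values published with this model.

```python
# Minimal sketch: load the quantized model with llama-cpp-python
# (pip install llama-cpp-python). The file name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="chatdoctor-llama3-q4_0.gguf",  # path to the Q4_0 GGUF file
    n_ctx=4096,        # context window; adjust to available memory
    n_threads=8,       # CPU threads used for inference
)

# Single-turn completion for a medical question.
output = llm(
    "Question: What are common causes of persistent headaches?\nAnswer:",
    max_tokens=256,
    temperature=0.7,
    stop=["Question:"],
)
print(output["choices"][0]["text"].strip())
```

Q4_0 keeps memory use low at some cost in output quality compared with higher-bit quantizations.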

## Applications
This model is designed to assist with:
- Medical question-answering
- Health-related advice
- Basic diagnostic reasoning (non-clinical use)
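For the consultation-style uses listed above, a chat-style call is a closer fit. This sketch assumes the GGUF file embeds a Llama-3 chat template and reuses the hypothetical file name from the previous example.

```python
# Hedged sketch of a chat-style consultation via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="chatdoctor-llama3-q4_0.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are a medical assistant. You do not provide clinical diagnoses."},
        {"role": "user",
         "content": "I have had a mild fever and sore throat for two days. What should I do?"},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```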

## Model Details
| **Feature**           | **Details**                |
|-----------------------|----------------------------|
| **Model Type**        | Causal Language Model      |
| **Architecture**      | LLaMA 3.1-8B               |
| **Training Data**     | ChatDoctor (1,000 samples) |
| **Quantization**      | Q4_0                       |
| **Deployment Format** | GGUF                       |
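The GGUF file can be fetched from the Hub with `huggingface_hub` before loading it locally; the repository id and file name below are placeholders, since the card does not state them.

```python
# Sketch: download the GGUF artifact with huggingface_hub.
# repo_id and filename are hypothetical placeholders for this repository.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="your-username/chatdoctor-llama3-gguf",  # hypothetical repo id
    filename="chatdoctor-llama3-q4_0.gguf",          # hypothetical file name
)
print(gguf_path)  # local path usable as model_path in llama-cpp-python
```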
|