---
library_name: transformers
license: apache-2.0
language:
- en
metrics:
- code_eval
- accuracy
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
---
# Health Chatbot
Welcome to the official Hugging Face repository for **Health Chatbot**, a conversational AI model fine-tuned to assist with health-related queries. This model is based on [LLaMA 3.2](https://ai.meta.com/llama/), fine-tuned using **QLoRA** for lightweight and efficient training.
---
## Overview
**Health Chatbot** is designed to provide clear, conversational responses to general health and wellness questions. The model is intended for educational purposes and is not a substitute for professional medical consultation.
Key Features:
- Fine-tuned using **QLoRA** for parameter-efficient training.
- Trained on a diverse dataset of health-related queries and answers.
- Optimized for conversational and empathetic interactions.
---
## Model Details
- **Base Model**: LLaMA 3.2 (`meta-llama/Llama-3.2-3B-Instruct`)
- **Training Method**: QLoRA (Quantized Low-Rank Adaptation); see the quantization sketch below
- **Dataset**: Custom curated dataset comprising publicly available health resources, FAQs, and synthetic dialogues.
- **Intended Use**: Conversational health assistance and wellness education.
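For context, QLoRA typically loads the base model in 4-bit NF4 precision before attaching LoRA adapters. The exact settings used for this model are not published, so the values below are a minimal, illustrative sketch of a typical QLoRA quantization config:

```python
import torch
from transformers import BitsAndBytesConfig

# Typical QLoRA settings: 4-bit NF4 weights, double quantization, and
# bf16 compute. Illustrative defaults, not the exact training config.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```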
---
## How to Use the Model
You can load and use the model in your Python environment with the `transformers` library:
### Installation
Make sure you have the necessary dependencies installed:
```bash
pip install transformers accelerate bitsandbytes
```
### Loading the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("RayyanAhmed9477/Health-Chatbot")
model = AutoModelForCausalLM.from_pretrained(
    "RayyanAhmed9477/Health-Chatbot",
    device_map="auto",
    # Passing load_in_8bit=True directly is deprecated; use a quantization config.
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

# Generate a response
def chat(prompt):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # max_new_tokens bounds the generated continuation rather than the
    # total sequence length, so long prompts are not truncated.
    outputs = model.generate(**inputs, max_new_tokens=150, do_sample=True, temperature=0.7)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example usage
prompt = "What are some common symptoms of the flu?"
print(chat(prompt))
```
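Because the base checkpoint is an instruct-tuned Llama 3.2 model, prompts formatted with the tokenizer's chat template will often behave better than raw strings. A minimal sketch, assuming the fine-tuned tokenizer retains Llama 3.2's chat template (check `tokenizer.chat_template` to confirm):

```python
def chat_with_template(user_message):
    # Wrap the request as a chat turn and apply the model's chat template.
    messages = [{"role": "user", "content": user_message}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=150, do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

print(chat_with_template("What are some common symptoms of the flu?"))
```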
---
## Fine-Tuning the Model
If you want to fine-tune the model further on a custom dataset, follow the steps below.
### Requirements
```bash
pip install datasets peft
```
### Dataset Preparation
Prepare your dataset in a JSON or CSV format with `input` and `output` fields:
**Example Dataset (JSON)**:
```json
[
{"input": "What are some symptoms of dehydration?", "output": "Symptoms include dry mouth, fatigue, and dizziness."},
{"input": "How can I boost my immune system?", "output": "Eat a balanced diet, exercise regularly, and get enough sleep."}
]
```
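Before training, it can help to sanity-check the file with the `datasets` loader. A quick sketch, assuming the file is named `your_dataset.json` as in the script below:

```python
from datasets import load_dataset

# Load the JSON file and inspect one record; load_dataset puts
# everything in a "train" split by default.
data = load_dataset("json", data_files="your_dataset.json")
print(data["train"][0])            # {'input': ..., 'output': ...}
print(data["train"].column_names)  # ['input', 'output']
```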
### Training Script
```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
    Trainer,
    DataCollatorForLanguageModeling,
)
from peft import prepare_model_for_kbit_training, LoraConfig, get_peft_model
from datasets import load_dataset

# Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("RayyanAhmed9477/Health-Chatbot")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    "RayyanAhmed9477/Health-Chatbot",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

# Prepare the quantized model for training
# (prepare_model_for_int8_training was renamed to prepare_model_for_kbit_training)
model = prepare_model_for_kbit_training(model)

# Define LoRA configuration
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Load your custom dataset and render each input/output pair into a single
# training string. The Question/Answer template here is one possible
# convention; mirror whatever template you use at inference time.
data = load_dataset("json", data_files="your_dataset.json")

def tokenize(example):
    text = f"Question: {example['input']}\nAnswer: {example['output']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = data["train"].map(tokenize, remove_columns=data["train"].column_names)

# Fine-tune the model
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    logging_dir="./logs",
    save_strategy="epoch",
    learning_rate=1e-4,
    fp16=True,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    # Causal-LM collator pads batches and builds labels from the inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Save the fine-tuned LoRA adapter and tokenizer
model.save_pretrained("./fine_tuned_health_chatbot")
tokenizer.save_pretrained("./fine_tuned_health_chatbot")
```
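After training, the output directory contains a LoRA adapter rather than full model weights. One way to load it back for inference with `peft`, assuming the paths from the script above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reload the base model, then attach the saved LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "RayyanAhmed9477/Health-Chatbot",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
model = PeftModel.from_pretrained(base, "./fine_tuned_health_chatbot")
tokenizer = AutoTokenizer.from_pretrained("./fine_tuned_health_chatbot")
```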
---
## Model Evaluation
Evaluate the model's performance using metrics like perplexity and BLEU:
```python
import math
import torch
from datasets import load_dataset

# Load evaluation dataset (same input/output schema as the training data;
# load_dataset places it in a "train" split by default)
eval_data = load_dataset("json", data_files="evaluation_dataset.json")

# Perplexity is the exponential of the mean token-level cross-entropy loss.
# Assumes `model` and `tokenizer` are already loaded as shown above.
def compute_perplexity(model, tokenizer, texts):
    model.eval()
    losses = []
    for text in texts:
        enc = tokenizer(text, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        losses.append(out.loss)
    return math.exp(torch.stack(losses).mean().item())

texts = [f"Question: {ex['input']}\nAnswer: {ex['output']}" for ex in eval_data["train"]]
print(compute_perplexity(model, tokenizer, texts))
```
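For BLEU, the `evaluate` library (a separate `pip install evaluate`) provides a ready-made metric. A minimal sketch with a hypothetical generated answer compared against a reference answer:

```python
import evaluate

bleu = evaluate.load("bleu")

# Hypothetical example pair; in practice, generate predictions with the
# model and take references from the evaluation dataset's output field.
predictions = ["Symptoms include dry mouth, fatigue, and dizziness."]
references = [["Common symptoms are dry mouth, fatigue, and dizziness."]]
print(bleu.compute(predictions=predictions, references=references))
```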
---
## Limitations and Warnings
- The model is not a substitute for professional medical advice.
- Responses are generated based on patterns in the training data and may not always be accurate or up-to-date.
---
## Contributing
Contributions are welcome! If you have suggestions, improvements, or issues to report, please create a pull request or an issue in this repository.
---
## License
This model is released under the [Apache 2.0 License](LICENSE).
---
## Contact
For any queries or collaborations, reach out via [GitHub](https://github.com/Rayyan9477), email at `[email protected]`, or [LinkedIn](https://www.linkedin.com/in/rayyan-ahmed9477/).
---
## Acknowledgements
Special thanks to the Hugging Face and Meta AI teams for their open-source contributions to the NLP and machine learning community. |