---
license: apache-2.0
language:
- en
base_model:
- tiiuae/falcon-7b-instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- medical
- diseases
- falcon-7b
- LoRA
- fine-tuned
---
# Fine-Tuned Falcon-7B for Medical Text Generation
This is a fine-tuned version of the Falcon-7B-Instruct model, adapted for generating medical text related to common diseases. The adaptation was performed with LoRA (Low-Rank Adaptation) on a dataset of medical texts.
## Model Details
- Base Model: `tiiuae/falcon-7b-instruct`
- Fine-Tuning Method: LoRA (Low-Rank Adaptation)
- Quantization: 4-bit (using `bitsandbytes`)
- Training Dataset: Medical text data (common diseases)
- Training Framework: PyTorch with Hugging Face Transformers
- Fine-Tuning Duration: 3 epochs
- Learning Rate: 1e-3
- Batch Size: 2 (per device)
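For reference, the sketch below shows a training setup consistent with the details listed above, assuming the usual `transformers` + `peft` + `bitsandbytes` stack. The dataset, LoRA rank, and target modules are illustrative assumptions, not the exact values used to train this model.

```python
# Sketch of a fine-tuning setup matching the Model Details above.
# LoRA rank, dropout, and target modules are assumptions.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model

base_model = "tiiuae/falcon-7b-instruct"

# 4-bit quantization via bitsandbytes, as listed in Model Details.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter configuration (rank and target modules are illustrative).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],
)
model = get_peft_model(model, lora_config)

# Hyperparameters from Model Details: 3 epochs, lr 1e-3, batch size 2 per device.
training_args = TrainingArguments(
    output_dir="./falcon-7b-medical-lora",
    num_train_epochs=3,
    learning_rate=1e-3,
    per_device_train_batch_size=2,
)
```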
## Usage
You can use this model for generating medical text or answering questions related to common diseases.
### Using the Hugging Face Inference API
- Install the `transformers` library: `pip install transformers`
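Once `transformers` is installed, local inference can look like the sketch below. The repository id `your-username/falcon-7b-medical-lora` is a placeholder; replace it with this model's actual Hugging Face Hub id.

```python
# Minimal sketch for loading the fine-tuned model and generating medical text.
# The model id below is a placeholder, not this model's actual repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "your-username/falcon-7b-medical-lora"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "What are the common symptoms of type 2 diabetes?"
output = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```

If the repository contains only the LoRA adapter rather than merged weights, load the base model `tiiuae/falcon-7b-instruct` first and attach the adapter with `peft.PeftModel.from_pretrained` before generating.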