Llama2 🦙 finetuned on medical diagnosis
MedText dataset: https://huggingface.co/datasets/BI55/MedText
1,412 pairs of diagnosis cases
About:
The primary objective of this fine-tuning is to equip Llama2 to assist in diagnosing various medical cases and diseases. It is not designed to replace real medical professionals; rather, its purpose is to provide helpful information to users and suggest potential next steps based on the input data and the patterns learned from the MedText dataset.
Finetuned on Guanaco-style instructions:

```
###Human
###Assistant
```
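A prompt for this model can be assembled with a small helper like the one below. This is a minimal sketch: the tag names come from the format above, but the exact separators (colon and spacing after the tags) are assumptions, not confirmed by this card.

```python
def build_prompt(question: str) -> str:
    """Wrap a user question in the Guanaco-style tags used for finetuning.

    Assumes a ':' and space follow each tag; adjust if the model was
    trained with different separators.
    """
    return f"###Human: {question}\n###Assistant:"

prompt = build_prompt("A 45-year-old presents with chest pain radiating to the left arm.")
print(prompt)
```

The trailing `###Assistant:` leaves the model to complete the assistant turn during generation.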
Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
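The settings above can be expressed as a `transformers.BitsAndBytesConfig` for anyone reproducing the QLoRA setup. This is a sketch that simply mirrors the values listed on this card; defaults are used for the `llm_int8_*` fields, which match the listed values.

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization config listed above: 4-bit NF4 quantization,
# no double quantization, float16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```

This config would be passed as `quantization_config=bnb_config` when loading the base model for training.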
Framework versions
- PEFT 0.5.0.dev0
Model: therealcyberlord/llama2-qlora-finetuned-medical
Base model: meta-llama/Llama-2-7b-chat-hf
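Since this repository is a PEFT adapter rather than a full model, inference requires loading the base model first and then applying the adapter. The following is a usage sketch under the assumption that you have access to the gated `meta-llama/Llama-2-7b-chat-hf` weights; it is not runnable without downloading both repositories.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "therealcyberlord/llama2-qlora-finetuned-medical"

# Load the base Llama2 chat model, then attach the QLoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Guanaco-style prompt (separator spacing is an assumption, see above).
prompt = "###Human: What are common causes of acute chest pain?\n###Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

`PeftModel.from_pretrained` merges the adapter weights at inference time; for faster standalone deployment the adapter could instead be merged into the base weights with `model.merge_and_unload()`.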