ritvik77 posted an update 3 days ago
Try it out: ritvik77/Medical_Doctor_AI_LoRA-Mistral-7B-Instruct_FullModel

🩺 Medical Diagnosis AI Model - Powered by Mistral-7B & LoRA πŸš€
πŸ”Ή Model Overview:
Base Model: Mistral-7B (7.7 billion parameters)
Fine-Tuning Method: LoRA (Low-Rank Adaptation)
Quantization: bnb_4bit (reduces memory footprint while retaining performance)
πŸ”Ή Parameter Details:
Original Mistral-7B Parameters: 7.7 billion
LoRA Fine-Tuned Parameters: 4.48% of total model parameters (340 million)
Final Merged Model Size (bnb_4bit Quantized): ~4.5GB
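As a minimal sketch of how the merged, bnb_4bit-quantized model above could be loaded for inference (assuming the repo id from the post, a CUDA GPU, and `bitsandbytes` installed; the quantization settings shown are common defaults, not confirmed from the model card):

```python
# Sketch: load the merged model with 4-bit (bnb_4bit) quantization.
# Assumes a CUDA GPU and the bitsandbytes library; hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "ritvik77/Medical_Doctor_AI_LoRA-Mistral-7B-Instruct_FullModel"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # bnb_4bit quantization
    bnb_4bit_quant_type="nf4",              # NormalFloat4, a common choice
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
)

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb_config,
    device_map="auto",  # place layers across available devices
)

prompt = "[INST] A patient presents with fever and a persistent cough. What are possible causes? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Loading in 4-bit is what keeps the ~7.7B-parameter model near the ~4.5GB footprint mentioned above.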

This can help you build an AI agent for healthcare. If you need it to support a JSON function/tool-calling format, you can take a medical function-calling dataset and fine-tune it again on that.

@ritvik77, excited to run into this! Are the paper and studies behind it on arXiv or elsewhere?


Hey @nicolay-r, this is still in the dev phase. I'm also trying to heavily quantize a 70B+ parameter LLM with active layering, then tune it again on medical data and benchmarks, and get it approved by some doctors and organizations. That way a low-end GPU can also handle it, making it accessible to everyone.