---
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
datasets:
- unsloth/Radiology_mini
base_model:
- unsloth/Llama-3.2-11B-Vision-Instruct
---
# Uploaded model
- **Developed by:** Anukul
- **Fine-tuned from model:** unsloth/Llama-3.2-11B-Vision-Instruct
- **Dataset:** unsloth/Radiology_mini
# Model Overview
This repository demonstrates how to fine-tune unsloth/Llama-3.2-11B-Vision-Instruct for a radiology image captioning task.
Fine-tuning is done with Unsloth, whose optimized kernels train roughly twice as fast as a standard fine-tuning setup, keeping the process efficient even in 4-bit precision.
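For reference, below is a minimal fine-tuning sketch in the spirit of Unsloth's vision fine-tuning recipe. The LoRA settings and training hyperparameters are illustrative defaults, not necessarily the exact values used to produce this checkpoint:
```python
from unsloth import FastVisionModel, is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

# Load the base model in 4-bit to keep memory usage low
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Llama-3.2-11B-Vision-Instruct",
    load_in_4bit=True,
    use_gradient_checkpointing="unsloth",
)

# Attach LoRA adapters to both the vision and language stacks
# (r and lora_alpha are illustrative defaults)
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers=True,
    finetune_language_layers=True,
    r=16,
    lora_alpha=16,
)

# Convert each (image, caption) pair into the chat format the trainer expects
instruction = "You are an expert radiographer. Describe accurately what you see in this image."

def convert_to_conversation(sample):
    return {"messages": [
        {"role": "user", "content": [
            {"type": "text", "text": instruction},
            {"type": "image", "image": sample["image"]},
        ]},
        {"role": "assistant", "content": [
            {"type": "text", "text": sample["caption"]},
        ]},
    ]}

dataset = load_dataset("unsloth/Radiology_mini", split="train")
converted_dataset = [convert_to_conversation(sample) for sample in dataset]

FastVisionModel.for_training(model)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    data_collator=UnslothVisionDataCollator(model, tokenizer),
    train_dataset=converted_dataset,
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=30,
        learning_rate=2e-4,
        fp16=not is_bf16_supported(),
        bf16=is_bf16_supported(),
        optim="adamw_8bit",
        output_dir="outputs",
        # Required for vision fine-tuning: hand raw samples to the collator
        remove_unused_columns=False,
        dataset_text_field="",
        dataset_kwargs={"skip_prepare_dataset": True},
        max_seq_length=2048,
    ),
)
trainer.train()
```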
## Dataset Description
The dataset used for this project is unsloth/Radiology_mini, a small-scale dataset derived from ROCOv2-radiology. It includes train and test splits and represents about 0.33% of the original ROCOv2-radiology dataset on Hugging Face.
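To inspect the data before training, the dataset can be loaded straight from the Hub. A quick sketch, assuming the published `image`/`caption` schema:
```python
from datasets import load_dataset

# Each sample pairs a radiology image with a reference caption
dataset = load_dataset("unsloth/Radiology_mini", split="train")
sample = dataset[0]
print(sample["image"].size)  # PIL image dimensions
print(sample["caption"])     # reference radiology description
```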
## Usage
```python
import gradio as gr
import torch
from unsloth import FastVisionModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load model and tokenizer; the 4-bit model is already placed on GPU
# by from_pretrained, so no model.to(device) is needed (and calling .to()
# on a 4-bit quantized model raises an error)
model, tokenizer = FastVisionModel.from_pretrained(
    "0llheaven/Llama-3.2-11B-Vision-Radiology-mini",
    load_in_4bit=True,
    use_gradient_checkpointing="unsloth",
)
FastVisionModel.for_inference(model)

def predict_radiology_description(image, instruction):
    try:
        # Build a chat-formatted prompt with an image placeholder
        messages = [{"role": "user", "content": [
            {"type": "image"},
            {"type": "text", "text": instruction},
        ]}]
        input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
        inputs = tokenizer(
            image,
            input_text,
            add_special_tokens=False,
            return_tensors="pt",
        ).to(device)
        output_ids = model.generate(
            **inputs,
            max_new_tokens=256,
            temperature=1.5,
            min_p=0.1,
        )
        generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
        return generated_text.replace("assistant", "\n\nassistant").strip()
    except Exception as e:
        return f"Error: {str(e)}"

# Gradio interface
demo = gr.Interface(
    fn=predict_radiology_description,
    inputs=[
        gr.Image(type="pil", label="Upload Radiology Image"),
        gr.Textbox(
            placeholder="Enter instruction or leave as default.",
            value="You are an expert radiographer. Describe accurately what you see in this image.",
            label="Instruction",
        ),
    ],
    outputs="text",
    title="Radiology Image Description Generator",
    description=(
        "Upload an image and provide instructions to generate radiology descriptions.\n"
        "Example instruction: You are an expert radiographer. Describe accurately what you see in this image."
    ),
)
demo.launch(server_port=8030, debug=True)
```
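The sampling settings above (`temperature=1.5`, `min_p=0.1`) follow the inference example this code is based on; lower the temperature for more deterministic reports. For interactive use without Gradio, output can also be streamed token by token. A minimal sketch, reusing `model` and `tokenizer` from above and an `inputs` batch built the same way as inside `predict_radiology_description` (`TextStreamer` is the standard `transformers` streamer):
```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
    **inputs,
    streamer=streamer,
    max_new_tokens=256,
    temperature=1.5,
    min_p=0.1,
)
```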