---
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mllama
license: apache-2.0
language:
  - en
datasets:
  - unsloth/Radiology_mini
base_model:
  - unsloth/Llama-3.2-11B-Vision-Instruct
---

# Uploaded model

- **Developed by:** Anukul
- **Fine-tuned from model:** unsloth/Llama-3.2-11B-Vision-Instruct
- **Dataset:** unsloth/Radiology_mini

# Model Overview

This repository demonstrates how to fine-tune the unsloth/Llama-3.2-11B-Vision-Instruct model for a radiology image-captioning task. The model was fine-tuned with Unsloth, which trains Llama 3.2 Vision models roughly 2x faster than the standard workflow, allowing for efficient fine-tuning.
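
For reference, here is a minimal sketch of what such a fine-tuning setup typically looks like with Unsloth's FastVisionModel API. The LoRA hyperparameters below are illustrative assumptions, not the exact values used to train this checkpoint:

```python
from unsloth import FastVisionModel

# Load the 4-bit base model for memory-efficient fine-tuning
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Llama-3.2-11B-Vision-Instruct",
    load_in_4bit=True,
    use_gradient_checkpointing="unsloth",
)

# Attach LoRA adapters; the ranks and flags here are illustrative defaults
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers=True,
    finetune_language_layers=True,
    r=16,
    lora_alpha=16,
)
```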

# Dataset Description

The dataset used for this project is unsloth/Radiology_mini, a small-scale dataset derived from the ROCOv2-radiology dataset. It includes a train split and a test split, and represents 0.33% of the original ROCOv2-radiology dataset available on Hugging Face.
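
To inspect the data yourself, it can be loaded with the datasets library. A quick sketch; the column names are taken from the Unsloth vision notebooks and should be verified against the dataset card:

```python
from datasets import load_dataset

# Train split of the radiology captioning dataset
dataset = load_dataset("unsloth/Radiology_mini", split="train")

sample = dataset[0]
print(sample["caption"])  # reference description; sample["image"] is a PIL image
```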

# Usage

```python
import gradio as gr
import torch
from unsloth import FastVisionModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load model and tokenizer
model, tokenizer = FastVisionModel.from_pretrained(
    "0llheaven/Llama-3.2-11B-Vision-Radiology-mini",
    load_in_4bit=True,
    use_gradient_checkpointing="unsloth",
)
FastVisionModel.for_inference(model)  # switch the model into inference mode
# Unsloth places the 4-bit model on the GPU at load time, so an explicit
# model.to(device) is not needed (and can error on quantized models).

def predict_radiology_description(image, instruction):
    try:
        # Build a chat-style message with an image placeholder and the instruction
        messages = [{"role": "user", "content": [
            {"type": "image"},
            {"type": "text", "text": instruction}
        ]}]
        input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

        # Preprocess the image together with the chat prompt
        inputs = tokenizer(
            image,
            input_text,
            add_special_tokens=False,
            return_tensors="pt",
        ).to(device)

        # Sampling settings follow Unsloth's recommended defaults for this model
        output_ids = model.generate(
            **inputs,
            max_new_tokens=256,
            temperature=1.5,
            min_p=0.1,
        )

        # Decode only the newly generated tokens, skipping the echoed prompt
        prompt_length = inputs["input_ids"].shape[1]
        generated_text = tokenizer.decode(output_ids[0][prompt_length:], skip_special_tokens=True)
        return generated_text.strip()
    except Exception as e:
        return f"Error: {str(e)}"

# Gradio Interface
demo = gr.Interface(
    fn=predict_radiology_description,
    inputs=[
        gr.Image(type="pil", label="Upload Radiology Image"),
        gr.Textbox(
            placeholder="Enter instruction or leave as default.",
            value="You are an expert radiographer. Describe accurately what you see in this image.",
            label="Instruction",
        ),
    ],
    outputs="text",
    title="Radiology Image Description Generator",
    description="Upload an image and provide instructions to generate radiology descriptions.\nExample instruction: You are an expert radiographer. Describe accurately what you see in this image.",
)
demo.launch(server_port=8030, debug=True)
```
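
Launching the script serves the demo at http://localhost:8030. For a single prediction without the Gradio UI, the helper above can also be called directly. A sketch; the image path below is a placeholder:

```python
from PIL import Image

image = Image.open("example_scan.png")  # placeholder path, substitute your own image
instruction = "You are an expert radiographer. Describe accurately what you see in this image."
print(predict_radiology_description(image, instruction))
```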