---
language:
- en
base_model:
- OpenGVLab/InternVL-Chat-V1-2
pipeline_tag: image-text-to-text
tags:
- medical
---
# MedRegA
Model for the paper "[Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks](https://arxiv.org/abs/2410.18387)".
🌐 Project Page: [https://medrega.github.io/](https://medrega.github.io/)
📄 Paper: [https://arxiv.org/abs/2410.18387](https://arxiv.org/abs/2410.18387)
💻 Code: [https://github.com/xmed-lab/MedRegA](https://github.com/xmed-lab/MedRegA)
## Introduction
We propose **MedRegA**, a **Region-Aware medical MLLM** and the first bilingual generalist medical AI system to handle both image-level and region-level medical vision-language tasks across a broad range of modalities.
MedRegA not only supports three region-centric tasks, but also achieves the best performance on visual question answering, report generation, and medical image classification across 8 modalities, demonstrating significant versatility.
![medrega.png](https://cdn-uploads.huggingface.co/production/uploads/65156d6ffccbf319e636279b/x4zUYvaPPjDEdm_NdiE-V.png)