Update README.md

README.md
---
license: cc-by-4.0
task_categories:
- image-text-to-text
language:
- en
tags:
- medical
- multimodal
- in-context-learning
- vqa
- benchmark
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: image
    dtype: image
  - name: image_url
    dtype: string
  - name: problem_id
    dtype: string
  - name: order
    dtype: int64
  - name: parquet_path
    dtype: string
  - name: speciality
    dtype: string
  - name: flag_answer_format
    dtype: string
  # …
  - name: flag_difficulty_llms
    dtype: string
  splits:
  - name: train
    num_bytes: 94510405.0
    num_examples: 517
  download_size: 90895608
  dataset_size: 94510405.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning

[Paper](https://huggingface.co/papers/2506.21355) | [Project page](https://smmile-benchmark.github.io) | [Code](https://github.com/eth-medical-ai-lab/smmile)

<div align="center">
  <img src="./logo_final.png" alt="SMMILE Logo" width="400">
</div>

## Introduction

Multimodal in-context learning (ICL) remains underexplored despite its profound potential in complex application domains such as medicine. Clinicians routinely face a long tail of tasks that they must learn to solve from only a few examples, such as reasoning over a handful of relevant prior cases or differential diagnoses. While multimodal large language models (MLLMs) have shown impressive advances in medical visual question answering (VQA) and multi-turn chat, their ability to learn multimodal tasks from context is largely unknown.

We introduce **SMMILE** (Stanford Multimodal Medical In-context Learning Evaluation), the first multimodal medical ICL benchmark. A set of clinical experts curated ICL problems to scrutinize MLLMs' ability to learn multimodal tasks from context at inference time.

## Dataset Access

The SMMILE dataset is available on Hugging Face:

```python
from datasets import load_dataset

smmile = load_dataset('smmile/SMMILE', token=YOUR_HF_TOKEN)
smmile_pp = load_dataset('smmile/SMMILE-plusplus', token=YOUR_HF_TOKEN)
```

Alternatively, set your Hugging Face token as an environment variable, in which case the `token` argument can be omitted:

```bash
export HF_TOKEN=your_token_here
```
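Each row carries a `problem_id` and an `order` field (see the feature schema above). As a working assumption — the grouping convention is not spelled out in this card — rows sharing a `problem_id` form one ICL problem, with `order` giving each example's position and the final row acting as the query. A minimal sketch of that grouping, using toy dicts in place of real dataset rows:

```python
from collections import defaultdict

def build_icl_problems(rows):
    """Group flat rows into ICL problems keyed by problem_id.

    Assumes rows with the same problem_id form one problem, sorted by
    `order`, with the last row serving as the query to be answered.
    """
    by_problem = defaultdict(list)
    for row in rows:
        by_problem[row["problem_id"]].append(row)
    problems = {}
    for pid, group in by_problem.items():
        group.sort(key=lambda r: r["order"])
        problems[pid] = {"examples": group[:-1], "query": group[-1]}
    return problems

# Toy rows mimicking the schema (real rows also carry `image`, `speciality`, etc.).
rows = [
    {"problem_id": "p1", "order": 1, "question": "Q2?", "answer": "A2"},
    {"problem_id": "p1", "order": 0, "question": "Q1?", "answer": "A1"},
    {"problem_id": "p1", "order": 2, "question": "Q3?", "answer": "A3"},
]

problems = build_icl_problems(rows)
print(problems["p1"]["query"]["question"])  # Q3?
```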

## License

This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).

## Citation

If you find our dataset useful for your research, please cite the following paper:

```bibtex
@article{rieff2025smmile,
  title={SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning},
  author={Melanie Rieff and Maya Varma and Ossian Rabow and Subathra Adithan and Julie Kim and Ken Chang and Hannah Lee and Nidhi Rohatgi and Christian Bluethgen and Mohamed S. Muneer and Jean-Benoit Delbrouck and Michael Moor},
  year={2025},
  eprint={2506.21355},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2506.21355},
}
```

## Acknowledgments

We thank the clinical experts who contributed to curating the benchmark dataset.