---
license: mit
library_name: pytorch
tags:
- Medical Vision-Language Pre-Training
- BenchX
---
# REFERS Checkpoint Model Card
A retrained REFERS model for benchmarking medical vision-language pre-training methods within the BenchX framework.
## Model Details

- Model Type: REFERS
- Architecture: ViT-Base image encoder and BERT text encoder (see the sketch after this list)
- Original Paper: Generalized Radiograph Representation Learning via Cross-Supervision Between Images and Free-Text Radiology Reports
- Benchmark Paper: BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays
- Benchmark Framework: https://github.com/yangzhou12/BenchX
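
For orientation, here is a minimal sketch of what a REFERS-style dual encoder looks like in PyTorch. It is an illustration only, assuming `timm`'s `vit_base_patch16_224` and Hugging Face's `bert-base-uncased` as stand-in backbones; all class and variable names here are hypothetical, and the actual encoders and weights are defined by the BenchX framework and this checkpoint.

```python
# Illustrative REFERS-style dual encoder: NOT the BenchX implementation.
# Backbones and names are stand-ins (assumptions): timm ViT-Base + HF BERT.
import timm
import torch
from transformers import BertModel, BertTokenizer


class DualEncoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # ViT-Base image encoder; num_classes=0 yields pooled 768-d features
        self.image_encoder = timm.create_model(
            "vit_base_patch16_224", pretrained=False, num_classes=0
        )
        # BERT text encoder for the paired radiology reports
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")

    def forward(self, images, input_ids, attention_mask):
        img_feat = self.image_encoder(images)  # (B, 768)
        txt_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output  # (B, 768)
        return img_feat, txt_feat


tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = DualEncoder()
report = tokenizer(["No acute cardiopulmonary process."],
                   return_tensors="pt", padding=True, truncation=True)
img_feat, txt_feat = model(torch.randn(1, 3, 224, 224),
                           report["input_ids"], report["attention_mask"])
print(img_feat.shape, txt_feat.shape)  # torch.Size([1, 768]) twice
```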
## Intended Use

- Primary Use Cases:
  - Benchmarking performance for Medical Image Classification
  - Benchmarking performance for Medical Image Segmentation
  - Benchmarking performance for Medical Report Generation
## Pre-Training Data

- Dataset:
  - Data source(s): MIMIC-CXR
  - Types of medical images: Frontal chest X-rays (a hedged preprocessing sketch follows this list)
  - Text data type: Associated radiology reports
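
As a rough illustration of how frontal chest X-rays are typically prepared for a ViT-Base encoder, the snippet below uses standard torchvision transforms. The resize and normalization values are common defaults, not values read from BenchX's configs, so treat them as placeholders.

```python
# Hypothetical X-ray preprocessing; the authoritative transforms live in
# the BenchX config files, and these values are placeholder defaults.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                # typical ViT-Base input size
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
```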
## Prerequisites

Please follow the instructions in the [BenchX repository](https://github.com/yangzhou12/BenchX) to install BenchX.
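
Before running any of the commands below, a quick environment check can save a failed job. This snippet assumes only a working PyTorch install:

```python
# Minimal environment sanity check (assumes only that PyTorch is installed).
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```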
## Training & Evaluation

### 1. Classification

To fine-tune REFERS for classification, run this command:

```bash
python bin/train.py config/classification/<dataset_name>/refers.yml
```
### 2. Segmentation

To fine-tune REFERS for segmentation, run this command:

```bash
python mmsegmentation/tools/train.py config/benchmark/<dataset_name>/refers.yml
```
### 3. Report Generation

To fine-tune REFERS for report generation, run this command:

```bash
python bin/train.py config/report_generation/<dataset_name>/refers.yml
```
### 4. Evaluation

To evaluate fine-tuned REFERS models, run:

```bash
# For classification and report generation
python bin/test.py config/<task_name>/<dataset_name>/refers.yml validator.splits=[test] ckpt_dir=<path_to_checkpoint>

# For segmentation
python mmsegmentation/tools/my_test.py mmsegmentation/config/<dataset_name>/refers.yml <path_to_checkpoint>
```
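
If evaluation fails to start, it can help to confirm that the checkpoint file loads and its weights look sane. The pattern below is generic PyTorch; the path and the `state_dict` nesting are assumptions about how the checkpoint may be saved, not BenchX specifics.

```python
# Generic checkpoint inspection; path and key layout are hypothetical.
import torch

ckpt = torch.load("path/to/checkpoint.pth", map_location="cpu")
# Some training frameworks nest weights under a "state_dict" key.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
print(f"{len(state_dict)} tensors in total")
```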
## Citations

```bibtex
@article{zhou2022generalized,
  title={Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports},
  author={Zhou, Hong-Yu and Chen, Xiaoyu and Zhang, Yinghao and Luo, Ruibang and Wang, Liansheng and Yu, Yizhou},
  journal={Nature Machine Intelligence},
  volume={4},
  number={1},
  pages={32--40},
  year={2022}
}

@inproceedings{zhou2024benchx,
  title={BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays},
  author={Yang Zhou and Tan Li Hui Faith and Yanyu Xu and Sicong Leng and Xinxing Xu and Yong Liu and Rick Siow Mong Goh},
  booktitle={Proceedings of NeurIPS},
  year={2024}
}
```