Update references to the paper (#4)
Opened by fepegar

README.md CHANGED
````diff
@@ -15,7 +15,7 @@ library_name: transformers
 
 RAD-DINO is a vision transformer model trained to encode chest X-rays using the self-supervised learning method [DINOv2](https://openreview.net/forum?id=a68SUt6zFt).
 
-RAD-DINO is described in detail in [
+RAD-DINO is described in detail in [Exploring Scalable Medical Image Encoders Beyond Text Supervision (F. Pérez-García, H. Sharma, S. Bond-Taylor, et al., 2024)](https://www.nature.com/articles/s42256-024-00965-w).
 
 - **Developed by:** Microsoft Health Futures
 - **Model type:** Vision transformer
@@ -151,7 +151,7 @@ We used 16 nodes with 4 A100 GPUs each, and a batch size of 40 images per GPU.
 
 <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
-We refer to the [manuscript](https://
+We refer to the [manuscript](https://www.nature.com/articles/s42256-024-00965-w) for a detailed description of the training procedure.
 
 #### Preprocessing
 
@@ -167,27 +167,7 @@ All DICOM files were resized using B-spline interpolation so that their shorter
 
 <!-- This section describes the evaluation protocols and provides the results. -->
 
-Our evaluation is best described in the [manuscript](https://
-
-<!-- ### Testing data, factors & metrics
-
-#### Testing Data
-
-[More Information Needed]
-
-#### Factors
-
-[More Information Needed]
-
-#### Metrics
-
-[More Information Needed]
-
-### Results
-
-[More Information Needed]
-
-#### Summary -->
+Our evaluation is best described in the [manuscript](https://www.nature.com/articles/s42256-024-00965-w).
 
 ## Environmental impact
 
@@ -226,19 +206,21 @@ We used [SimpleITK](https://simpleitk.org/) and [Pydicom](https://pydicom.github
 **BibTeX:**
 
 ```bibtex
-@
-
-
-
-
-
-
+@article{perez-garcia_exploring_2025,
+title = {Exploring scalable medical image encoders beyond text supervision},
+issn = {2522-5839},
+url = {https://doi.org/10.1038/s42256-024-00965-w},
+doi = {10.1038/s42256-024-00965-w},
+journal = {Nature Machine Intelligence},
+author = {P{\'e}rez-Garc{\'i}a, Fernando and Sharma, Harshita and Bond-Taylor, Sam and Bouzid, Kenza and Salvatelli, Valentina and Ilse, Maximilian and Bannur, Shruthi and Castro, Daniel C. and Schwaighofer, Anton and Lungren, Matthew P. and Wetscherek, Maria Teodora and Codella, Noel and Hyland, Stephanie L. and Alvarez-Valle, Javier and Oktay, Ozan},
+month = jan,
+year = {2025},
 }
 ```
 
 **APA:**
 
-> Pérez-García, F., Sharma, H., Bond-Taylor, S., Bouzid, K., Salvatelli, V., Ilse, M., Bannur, S., Castro, D.C., Schwaighofer, A., Lungren, M.P., Wetscherek, M.T., Codella, N., Hyland, S.L., Alvarez-Valle, J., & Oktay, O. (
+> Pérez-García, F., Sharma, H., Bond-Taylor, S., Bouzid, K., Salvatelli, V., Ilse, M., Bannur, S., Castro, D. C., Schwaighofer, A., Lungren, M. P., Wetscherek, M. T., Codella, N., Hyland, S. L., Alvarez-Valle, J., & Oktay, O. (2025). *Exploring scalable medical image encoders beyond text supervision*. In Nature Machine Intelligence. Springer Science and Business Media LLC. https://doi.org/10.1038/s42256-024-00965-w
 
 ## Model card contact
````
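The preprocessing context in the diff notes that all DICOM files were resized with B-spline interpolation so that their shorter side reaches a fixed length. A minimal sketch of that kind of aspect-preserving resize, using `scipy.ndimage.zoom` with cubic (order-3) spline interpolation; the helper name `resize_shorter_side` and the target size are illustrative assumptions, not values taken from the model card:

```python
import numpy as np
from scipy import ndimage


def resize_shorter_side(image: np.ndarray, target: int) -> np.ndarray:
    """Resize a 2-D image so its shorter side equals `target` pixels.

    Uses a single isotropic zoom factor so the aspect ratio is preserved,
    with order-3 (cubic B-spline) interpolation. Hypothetical helper for
    illustration only.
    """
    factor = target / min(image.shape)           # scale the shorter side to `target`
    return ndimage.zoom(image, factor, order=3)  # cubic B-spline resampling


if __name__ == "__main__":
    xray = np.random.rand(1000, 1500)  # stand-in for a decoded DICOM pixel array
    resized = resize_shorter_side(xray, target=500)
    print(resized.shape)  # shorter side (1000) scaled to 500, longer side follows
```

In practice the model card mentions SimpleITK and Pydicom for this step; the SciPy version above is just a dependency-light sketch of the same resampling idea.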