Datasets · Tasks: Image Feature Extraction · Modalities: Image · Formats: imagefolder · Size: 100B<n<1T
README.md
CHANGED
@@ -12,50 +12,20 @@ size_categories:

# PLISM dataset

-`PLISM-wsi` consists of a group of consecutive slides digitized under 7 different scanners and stained across 13 H&E conditions.
-Each of the 91 samples encompasses the same biological information, that is, a collection of 46 TMAs (Tissue Micro Arrays) from various organs. Additional details can be found at https://p024eb.github.io/ and in the original publication.
-
-<img src="https://p024eb.github.io/images/graph/tissue2.png" alt="drawing" width="600"/>
-
-_Figure 1: Tissue types included in TMA specimens of the `PLISM-wsi` dataset. Source: https://p024eb.github.io/ (Ochi et al., 2024)_
-
-<img src="https://p024eb.github.io/images/graph/workflow2.png" alt="drawing" width="600"/>
-
-_Figure 2: Digitization and staining workflow for the PLISM dataset. Source: https://p024eb.github.io/ (Ochi et al., 2024)_
-
-# PLISM dataset
-The original `PLISM-wsi` subset contains a total of 310,947 images.
-Registration was performed across all scanners and staining conditions using OpenCV's AKAZE (Alcantarilla et al., 2013) key-point matching algorithm.
-There were 3,417 aligned image groups, with a total of 310,947 (3,417 groups × 91 WSIs) image patches of shape 512x512 at a resolution ranging from 0.22 to 0.26 µm/pixel (40x magnification).
-
-> [!NOTE]
-> To follow the spirit of this unique and outstanding contribution, we generated an extended version of the original tiles dataset provided by Ochi et al. (2024) so as to ease its adoption across the digital pathology community and to serve as a reference dataset for benchmarking the robustness of foundation models to staining and scanner variations.
-> In particular, our work differs from the original dataset in the following aspects:
->
-> • The original, non-registered WSIs were registered using Elastix (Klein et al., 2010; Shamonin et al., 2014). The reference slide was stained with the GMH condition and digitized using a Hamamatsu NanoZoomer S60 scanner.
->
-> • Tiles of 224x224 pixels were extracted at 0.5 µm/pixel (20x magnification) using an in-house bidirectional U-Net (Ronneberger et al., 2015).
->
-> • All tiles from the original WSIs were extracted, resulting in 16,278 tiles for each of the 91 WSIs, stored in WSI-level `.h5` files.
->
-> **In total, our dataset encompasses 1,481,298 histology tiles for a total size of 225 GB.**
-

# How to extract features

> [!IMPORTANT]
->
-> In a nutshell, 91 folders will be created, each named by the `slide_id` and containing a `features.npy` file.
-> This feature file is a numpy array of shape (16278, 3+d), where d is the output dimension of your model and 3 corresponds to `(deepzoom_level, x_coordinate, y_coordinate)`.
-> Tile coordinates are in the same order for each slide inside the dataset. No additional sorting is required to compare feature matrices between different slides (the first element of each matrix corresponds to the same tile location).
->

# License
@@ -97,12 +67,4 @@ _BibTex entry_

- (Ochi et al., 2024) Ochi, M., Komura, D., Onoyama, T. et al. Registered multi-device/staining histology image dataset for domain-agnostic machine learning models. Sci Data 11, 330 (2024).
-
-- (Ronneberger et al., 2015) Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. arXiv.
-
-- (Klein et al., 2010) Klein, S., Staring, M., Murphy, K., Viergever, M. A., & Pluim, J. P. W. (2010). Elastix: A toolbox for intensity-based medical image registration. IEEE Transactions on Medical Imaging, 29(1), 196–205.
-
-- (Shamonin et al., 2014) Shamonin, D. P., Bron, E. E., Lelieveldt, B. P. F., Smits, M., Klein, S., & Staring, M. (2014). Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease. Frontiers in Neuroinformatics, 7, 50.
-
-- (Filiot et al., 2025) Filiot, A., Dop, N., Tchita, O., Riou, A., Peeters, T., Valter, D., Scalbert, M., Saillard, C., Robin, G., & Olivier, A. (2025). Distilling foundation models for robust and efficient models in digital pathology. arXiv. https://arxiv.org/abs/2501.16239
|
# PLISM dataset

+This preprocessed dataset was generated directly from [owkin/plism-dataset-tiles](https://huggingface.co/datasets/owkin/plism-dataset-tiles). It is meant to make feature extraction more convenient.
+As such, this dataset contains 91 `.h5` files, each holding 16,278 images converted into numpy arrays. This allows for easy resuming but requires 225 GB of storage.
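Reading one of these WSI-level `.h5` files can be sketched as below. The internal dataset key (`"tiles"` here) and the tile array shape are assumptions, not documented above, so inspect `f.keys()` on a real file first; a small toy file stands in for a real slide.

```python
import h5py
import numpy as np

# Toy stand-in for one WSI-level file: the real files each hold 16,278 tiles.
# The key name "tiles" and the (224, 224, 3) tile shape are assumptions.
with h5py.File("demo_slide.h5", "w") as f:
    f.create_dataset("tiles", data=np.zeros((4, 224, 224, 3), dtype=np.uint8))

with h5py.File("demo_slide.h5", "r") as f:
    print(list(f.keys()))    # inspect the actual keys on a real file
    batch = f["tiles"][:2]   # h5py slices read lazily from disk, which
                             # is what makes resuming cheap

print(batch.shape)
```

Because slicing an `h5py` dataset only reads the requested rows, extraction jobs can restart mid-slide without reloading the whole file.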
# How to extract features

> [!IMPORTANT]
+> 🎉 Check [plismbench](https://github.com/owkin/plism-benchmark) to perform feature extraction on the PLISM dataset and run our robustness benchmark 🎉
+In a nutshell, 91 folders will be created, each named by the `slide_id` and containing a `features.npy` file.
+This feature file is a numpy array of shape (16278, 3+d), where d is the output dimension of your model and 3 corresponds to `(deepzoom_level, x_coordinate, y_coordinate)`.
+Tile coordinates are in the same order for each slide inside the dataset. No additional sorting is required to compare feature matrices between different slides (the first element of each matrix corresponds to the same tile location).
+
+> [!IMPORTANT]
+> 225 GB are required to store the WSI-level `.h5` files; the download takes approximately 10 minutes (32 workers). Then, ~10 GB of storage and about 1h30 are needed to extract all features with a ViT-B model, 16 CPUs and one Nvidia T4 (16 GB).
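Given the layout described above, comparing two slides tile-by-tile can be sketched as follows. Random arrays stand in for two real `features.npy` files, and `d = 8` is a toy embedding size; with real extractions you would load the files with `np.load`.

```python
import numpy as np

# Stand-ins for two slides' features.npy files: shape (n_tiles, 3 + d), with
# columns (deepzoom_level, x_coordinate, y_coordinate) then the d-dim embedding.
rng = np.random.default_rng(0)
n_tiles, d = 16278, 8
slide_a = rng.random((n_tiles, 3 + d))
slide_b = rng.random((n_tiles, 3 + d))

# Drop the 3 coordinate columns to keep only the embeddings.
feats_a, feats_b = slide_a[:, 3:], slide_b[:, 3:]

# Rows are aligned across slides (row i is the same tile location), so a
# per-tile cosine similarity directly probes robustness to staining/scanner
# variation without any re-sorting.
num = (feats_a * feats_b).sum(axis=1)
den = np.linalg.norm(feats_a, axis=1) * np.linalg.norm(feats_b, axis=1)
cosine = num / den

print(cosine.shape)  # one similarity value per tile pair
```

Aggregating `cosine` (e.g. its mean) across all scanner/staining pairs is one simple way to summarize a model's robustness on this dataset.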

# License
- (Ochi et al., 2024) Ochi, M., Komura, D., Onoyama, T. et al. Registered multi-device/staining histology image dataset for domain-agnostic machine learning models. Sci Data 11, 330 (2024).
+- (Klein et al., 2010) Klein, S., Staring, M., Murphy, K., Viergever, M. A., & Pluim, J. P. W. (2010). Elastix: A toolbox for intensity-based medical image registration. IEEE Transactions on Medical Imaging, 29(1), 196–205.