Update README.md
README.md
CHANGED
---
title: "DENTEX Dataset"
license: cc-by-nc-sa-4.0
---

Welcome to the official page of the DENTEX dataset, which has been released as part of the [Dental Enumeration and Diagnosis on Panoramic X-rays Challenge (DENTEX)](https://dentex.grand-challenge.org/), organized in conjunction with the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2023. The primary objective of this challenge is to develop algorithms that can accurately detect abnormal teeth, together with their dental enumeration and associated diagnosis. This not only aids in accurate treatment planning but also helps practitioners carry out procedures with a low margin of error.

The challenge provides three types of hierarchically annotated data, plus additional unlabeled X-rays for optional pre-training. Annotations follow the Fédération Dentaire Internationale (FDI) numbering system. The first set of data is partially labeled, containing only quadrant information. The second set is also partially labeled, adding enumeration information to the quadrant. The third set is fully labeled, providing quadrant, enumeration, and diagnosis information for each abnormal tooth; all participating algorithms are benchmarked on this third set.

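To make the three annotation levels concrete, the sketch below shows what a single abnormal-tooth record could look like at each level. The field names and the diagnosis value are illustrative assumptions, not the challenge's actual annotation schema; please refer to the challenge page for the real format.

```python
# Illustrative only: hypothetical records for one abnormal tooth at each
# annotation level of the DENTEX hierarchy (FDI numbering assumed).
quadrant_only = {"quadrant": 3}                                   # set 1: quadrant only
quadrant_enum = {"quadrant": 3, "enumeration": 6}                 # set 2: + tooth number
fully_labeled = {"quadrant": 3, "enumeration": 6,                 # set 3: + diagnosis
                 "diagnosis": "deep caries"}

def fdi_code(record):
    """Combine quadrant and enumeration into an FDI tooth code, e.g. 36."""
    return 10 * record["quadrant"] + record["enumeration"]

print(fdi_code(fully_labeled))  # -> 36
```
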
## CT-RATE: A novel dataset of chest CT volumes with corresponding radiology text reports

<p align="center">
  <img src="https://github.com/ibrahimethemhamamci/CT-CLIP/blob/main/figures/CT-RATE.png?raw=true" width="100%">
</p>

A major challenge in computational research in 3D medical imaging is the lack of comprehensive datasets. Addressing this issue, we present CT-RATE, the first 3D medical imaging dataset that pairs images with textual reports. CT-RATE consists of 25,692 non-contrast chest CT volumes, expanded to 50,188 through various reconstructions, from 21,304 unique patients, along with corresponding radiology text reports, multi-abnormality labels, and metadata.

We divided the cohort into two groups: 20,000 patients were allocated to the training set and 1,304 to the validation set. Our folders are structured as split_patientID_scanID_reconstructionID. For instance, "valid_53_a_1" indicates a CT volume from the validation set, scan "a" of patient 53, and reconstruction 1 of that scan. This naming convention applies to all files.

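As a small illustration of that naming convention, the following sketch parses an identifier such as "valid_53_a_1" into its parts. The helper name is an assumption for illustration and is not part of the released codebase.

```python
# Minimal sketch: parse a CT-RATE identifier of the form
# split_patientID_scanID_reconstructionID, e.g. "valid_53_a_1".
def parse_ct_rate_id(name: str) -> dict:
    split, patient_id, scan_id, recon_id = name.split("_")
    return {
        "split": split,                  # "train" or "valid"
        "patient_id": int(patient_id),   # e.g. 53
        "scan_id": scan_id,              # e.g. "a"
        "reconstruction": int(recon_id), # e.g. 1
    }

print(parse_ct_rate_id("valid_53_a_1"))
# {'split': 'valid', 'patient_id': 53, 'scan_id': 'a', 'reconstruction': 1}
```
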
## CT-CLIP: CT-focused contrastive language-image pre-training framework

<p align="center">
  <img src="https://github.com/ibrahimethemhamamci/CT-CLIP/blob/main/figures/CT-CLIP.png?raw=true" width="100%">
</p>

Leveraging CT-RATE, we developed CT-CLIP, a CT-focused contrastive language-image pre-training framework. As a versatile, self-supervised model, CT-CLIP is designed for broad application and does not require task-specific training. Remarkably, CT-CLIP outperforms state-of-the-art, fully supervised methods in multi-abnormality detection across all key metrics, thus eliminating the need for manual annotation. We also demonstrate its utility in case retrieval, whether using imagery or textual queries, thereby advancing knowledge dissemination.

Our complete codebase is openly available on [our official GitHub repository](https://github.com/ibrahimethemhamamci/CT-CLIP).

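For intuition about how a CLIP-style model supports case retrieval from either image or text queries, here is a generic sketch of embedding-based retrieval. The embeddings and shapes are hypothetical placeholders, not the actual CT-CLIP API; see the GitHub repository for the real interface.

```python
import numpy as np

# Generic CLIP-style retrieval sketch (placeholder embeddings, not the CT-CLIP API):
# both modalities live in a shared embedding space and are ranked by cosine similarity.
def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Assume these come from a trained image encoder and text encoder.
volume_embeddings = np.random.rand(50188, 512)  # one embedding per reconstruction
query_embedding = np.random.rand(1, 512)        # e.g. an encoded text query

scores = cosine_similarity(query_embedding, volume_embeddings)[0]
top5 = np.argsort(scores)[::-1][:5]             # indices of the five closest volumes
print(top5)
```
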
## Citing Us

If you use CT-RATE or CT-CLIP, we would appreciate your citing [our paper](https://arxiv.org/abs/2403.17834).

## License

We are committed to fostering innovation and collaboration in the research community. To this end, all elements of the CT-RATE dataset are released under a [Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0) license](https://creativecommons.org/licenses/by-nc-sa/4.0/). This licensing framework ensures that our contributions can be freely used for non-commercial research purposes, while also encouraging contributions and modifications, provided that the original work is properly cited and any derivative works are shared under similar terms.

---