Update README.md
README.md CHANGED
@@ -36,18 +36,3 @@ The ViT had the best regression results, with an MSE of 0.5135. Code developed i
 We fine-tuned the Vision Transformer (ViT) from Hugging Face on our training set for 300 epochs, using the AdamW optimizer with a learning rate of 0.00001 and a batch size of 10.
 Early stopping was applied to the model to prevent over-fitting.
 For ViT, training stopped after 70 epochs. The only preprocessing difference between ViT and the other models was that the ViT pipeline normalized each tensor image with a mean and standard deviation during data augmentation.
-
-## Citation
-If you use this work in your research, please cite it as follows:
-
-```bibtex
-@misc{lara2023covid,
-      title={Diagnosing COVID-19 Severity from Chest X-Ray Images Using ViT and CNN Architectures},
-      author={Luis Lara and Lucia Eve Berger and Rajesh Kumar Raju and Shawn Whitfield},
-      year={2023},
-      archivePrefix={arXiv},
-      eprint={2502.16622},
-      primaryClass={cs.CV},
-      url={https://arxiv.org/abs/2502.16622}
-}
-```
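The early stopping mentioned in the README text can be sketched as a simple patience counter: stop once the validation loss has failed to improve for a fixed number of epochs. This is a generic illustration, not the authors' exact implementation; the patience value and losses below are made-up examples.

```python
class EarlyStopping:
    """Stop training when validation loss stops improving."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience    # epochs to wait after the last improvement
        self.min_delta = min_delta  # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.counter = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience


# Example: the loss plateaus after epoch 2, so training halts early.
stopper = EarlyStopping(patience=3)
losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.63]
stopped_at = None
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        stopped_at = epoch  # epoch 5: three epochs with no improvement
        break
```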
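The per-channel normalization described for the ViT pipeline amounts to `(x - mean) / std` applied to each pixel of the tensor image. A minimal sketch follows; the mean/std of 0.5 are illustrative assumptions, not necessarily the statistics the authors used.

```python
def normalize(pixels, mean=0.5, std=0.5):
    """Normalize pixel values in [0, 1] via (x - mean) / std."""
    return [(p - mean) / std for p in pixels]

# With mean = std = 0.5, mid-intensity maps to 0 and the extremes to -1 and 1.
print(normalize([0.0, 0.5, 1.0]))  # → [-1.0, 0.0, 1.0]
```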