## CellCLIP - Learning Perturbation Effects in Cell Painting via Text-Guided Contrastive Learning
CellCLIP is a cross-modal contrastive learning framework for high-content screening (HCS) data. CellCLIP leverages pre-trained image encoders coupled with a novel channel encoding scheme to better capture relationships between different microscopy channels in image embeddings, along with natural language encoders for representing perturbations. Our framework outperforms current open-source models, achieving the best performance in both cross-modal retrieval and biologically meaningful downstream tasks while also significantly reducing computation time.
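For intuition, here is a minimal sketch of the CLIP-style symmetric contrastive objective that frameworks like CellCLIP build on; the function and variable names are illustrative and not taken from the CellCLIP codebase.

```python
# Minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) objective.
# Names are illustrative, not the official CellCLIP implementation.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)           # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)       # text -> image direction
    return (loss_i2t + loss_t2i) / 2
```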
* [Paper](https://arxiv.org/pdf/2506.06290)
* [Github](https://github.com/suinleelab/CellCLIP/tree/main)
This repository contains model checkpoints for CellCLIP trained with the following encodings (a minimal extraction sketch follows the list):
* Cell painting encodings: Image embeddings extracted using DINOv2-Giant and projected to a feature dimension of 1536.
* Perturbation encodings: Text embeddings generated using BERT as the text encoder.
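As a rough illustration of how such embeddings could be produced, the sketch below uses Hugging Face `transformers`; it is not the official CellCLIP API, and the checkpoint names `facebook/dinov2-giant` and `bert-base-uncased` are assumptions (DINOv2-Giant's hidden size of 1536 matches the stated feature dimension).

```python
# Minimal sketch (not the official CellCLIP API): extracting per-channel image
# embeddings with DINOv2-Giant and perturbation text embeddings with BERT.
# Checkpoint names below are assumptions, not taken from the CellCLIP repo.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel, BertModel, BertTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Image encoder: DINOv2-Giant (hidden size 1536).
image_processor = AutoImageProcessor.from_pretrained("facebook/dinov2-giant")
image_encoder = AutoModel.from_pretrained("facebook/dinov2-giant").to(device).eval()

# Text encoder: BERT for perturbation descriptions.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text_encoder = BertModel.from_pretrained("bert-base-uncased").to(device).eval()

@torch.no_grad()
def embed_image(img: Image.Image) -> torch.Tensor:
    """Return a 1536-d CLS embedding for one (RGB-converted) channel image."""
    inputs = image_processor(images=img, return_tensors="pt").to(device)
    return image_encoder(**inputs).last_hidden_state[:, 0]   # shape (1, 1536)

@torch.no_grad()
def embed_text(caption: str) -> torch.Tensor:
    """Return a 768-d CLS embedding for a perturbation description."""
    inputs = tokenizer(caption, return_tensors="pt", truncation=True).to(device)
    return text_encoder(**inputs).last_hidden_state[:, 0]    # shape (1, 768)

# Hypothetical usage with a single channel image and a perturbation caption:
# img = Image.open("channel_1.png").convert("RGB")
# z_img = embed_image(img)
# z_txt = embed_text("Cell Painting image of cells treated with compound X.")
```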
## Citation
```
@article{lu2025cellclip,
title={CellCLIP--Learning Perturbation Effects in Cell Painting via Text-Guided Contrastive Learning},
author={Lu, Mingyu and Weinberger, Ethan and Kim, Chanwoo and Lee, Su-In},
journal={arXiv preprint arXiv:2506.06290},
year={2025}
}
```