OneFormer: one model to segment them all? 🤯
I was browsing the Papers with Code leaderboards when I came across OneFormer for the first time, so it was time to dig in!

OneFormer is a "truly universal" model for semantic, instance, and panoptic segmentation tasks ⚔️
What makes it truly universal is that it's a single model, trained only once, that can be used across all three tasks 👇

The enabler here is text conditioning, i.e. the model is given a text query stating the task type along with the appropriate input, and through a contrastive loss it learns to distinguish between the different task types 👇
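To make the conditioning concrete, here is a minimal sketch of how the task query looks through the 🤗 Transformers API (assuming the `shi-labs/oneformer_ade20k_swin_tiny` checkpoint from the Hub; the blank image is just a stand-in):

```python
from PIL import Image
from transformers import OneFormerProcessor

# One processor serves all three tasks; only the task string changes.
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
image = Image.new("RGB", (640, 480))  # dummy image, stand-in for a real one

for task in ["semantic", "instance", "panoptic"]:
    inputs = processor(images=image, task_inputs=[task], return_tensors="pt")
    # "task_inputs" holds the tokenized task query that conditions the model
    print(task, inputs["task_inputs"].shape)
```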

Thanks to 🤗 Transformers, you can easily use the model! I have drafted a [notebook](https://t.co/cBylk1Uv20) for you to try right away 😊
You can also play with the [Space](https://t.co/31GxlVo1W5) without diving into the code itself.
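If you'd rather skim the API before opening the notebook, here is a minimal inference sketch under the same assumptions (the `shi-labs/oneformer_ade20k_swin_tiny` checkpoint; `image.jpg` is a hypothetical local file):

```python
import torch
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

checkpoint = "shi-labs/oneformer_ade20k_swin_tiny"
processor = OneFormerProcessor.from_pretrained(checkpoint)
model = OneFormerForUniversalSegmentation.from_pretrained(checkpoint)

image = Image.open("image.jpg").convert("RGB")  # hypothetical local image

# Condition the single set of weights on the panoptic task
inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process into a (height, width) segment-id map plus per-segment metadata
result = processor.post_process_panoptic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(result["segmentation"].shape, len(result["segments_info"]))
```

Swapping in `task_inputs=["semantic"]` (with `post_process_semantic_segmentation`) or `task_inputs=["instance"]` (with `post_process_instance_segmentation`) reuses the exact same weights for the other two tasks.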

> [!TIP]
> Resources:
> [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi (2022)
> [GitHub](https://github.com/SHI-Labs/OneFormer)
> [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/oneformer)
> [!NOTE]
> [Original tweet](https://twitter.com/mervenoyann/status/1739707076501221608) (December 26, 2023)