# UGround (The Initial LLaVA-based Version)
**Update: We have trained [stronger models](https://huggingface.co/osunlp/UGround-V1-7B) based on Qwen2-VL with the same data. We suggest using them instead for better performance and more convenient training, inference, and deployment.**
UGround is a strong GUI visual grounding model trained with a simple recipe. Check our homepage and paper for more details. This work is a collaboration between [OSUNLP](https://x.com/osunlp) and [Orby AI](https://www.orby.ai/).