Update README.md
README.md
CHANGED
@@ -11,7 +11,7 @@ short_description: The VCR-Wiki datasets
This space contains all configurations for VCR-Wiki, introduced in VCR: Visual Caption Restoration (https://arxiv.org/abs/2406.06462).
# News
-
- 🔥🔥🔥 **[2024-06-13]** We release the evaluation code for open-source models, closed-source models, and the pipeline for creating the dataset.
+
- 🔥🔥🔥 **[2024-06-13]** We release the evaluation code for open-source models, closed-source models, and the pipeline for creating the dataset in [VCR's GitHub repo](https://github.com/tianyu-z/VCR).
- 🔥🔥🔥 **[2024-06-12]** We have incorporated the VCR-Wiki evaluation process into the [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) framework, so models can now be evaluated on the VCR-Wiki test datasets with a single command (a sketch follows this list).
- 🔥🔥🔥 **[2024-06-11]** Our paper has been released on [arXiv](https://arxiv.org/abs/2406.06462), including evaluation results for a series of models.
- 🔥🔥🔥 **[2024-06-10]** We have released the [VCR-Wiki dataset](https://huggingface.co/vcr-org), which contains 2.11M English and 346K Chinese entities sourced from Wikipedia, offered in both easy and hard variants. The dataset is available through the Hugging Face Datasets library (a loading sketch follows this list).
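The [2024-06-12] item above mentions a one-command evaluation through lmms-eval. Below is a minimal sketch of such a run, wrapped in Python purely for illustration; the model choice and the task id `vcr_wiki_en_easy` are assumptions, so check the lmms-eval documentation for the exact model and task names.

```python
# Minimal sketch: launch an lmms-eval run on a VCR-Wiki test set.
# The model spec and task id are assumptions for illustration only;
# see the lmms-eval documentation for the names that are actually registered.
import subprocess

cmd = [
    "python", "-m", "lmms_eval",
    "--model", "llava",                                      # assumed model type
    "--model_args", "pretrained=liuhaotian/llava-v1.5-7b",   # assumed checkpoint
    "--tasks", "vcr_wiki_en_easy",                           # assumed VCR-Wiki task id
    "--batch_size", "1",
    "--output_path", "./logs/",
]
subprocess.run(cmd, check=True)  # same as running the single command in a shell
```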
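For the datasets themselves, the sketch below loads one configuration with the Hugging Face `datasets` library. The repository id is an assumption based on the naming pattern on the org page (https://huggingface.co/vcr-org); the exact ids for each language, difficulty, and split are listed there.

```python
# Minimal sketch: load one VCR-Wiki configuration from the Hugging Face Hub.
# The repository id below is an assumption; see https://huggingface.co/vcr-org
# for the exact dataset names (English/Chinese, easy/hard, full vs. test).
from datasets import load_dataset

dataset = load_dataset("vcr-org/VCR-wiki-en-easy-test")  # assumed id

print(dataset)                              # split names and sizes
first_split = next(iter(dataset.values()))  # whichever split the repo exposes
print(first_split[0].keys())                # fields of a single example
```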