Lillianwei committed
Commit 819bc4b · verified · 1 Parent(s): 8167b24

Update src/about.py

Files changed (1)
  1. src/about.py +1 -1
src/about.py CHANGED
@@ -30,7 +30,7 @@ INTRODUCTION_TEXT = """
  # MMIE: Massive Multimodal Interleaved Comprehension Benchmark for Large Vision-Language Models
  We present MMIE, a Massive Multimodal Interleaved understanding Evaluation benchmark, designed for Large Vision-Language Models (LVLMs). MMIE offers a robust framework for evaluating the interleaved comprehension and generation capabilities of LVLMs across diverse fields, supported by reliable automated metrics.

- [Website](https://mmie-bench.github.io) | [Code](https://github.com/Lillianwei-h/MMIE-Eval) | [Dataset](https://huggingface.co/datasets/MMIE/MMIE) | [Results](https://huggingface.co/spaces/MMIE/Leaderboard) | [Eval Model](https://huggingface.co/MMIE/MMIE-Eval) | [Paper]()
+ [Website](https://mmie-bench.github.io) | [Code](https://github.com/Lillianwei-h/MMIE) | [Dataset](https://huggingface.co/datasets/MMIE/MMIE) | [Results](https://huggingface.co/spaces/MMIE/Leaderboard) | [Evaluation Model](https://huggingface.co/MMIE/MMIE-Score) | [Paper](https://arxiv.org/abs/2410.10139)
  """

  # Which evaluations are you running? how can people reproduce what you have?