Update README.md
README.md CHANGED
@@ -43,9 +43,10 @@ We identify a task that is **super easy for humans** but where all LLMs—from e
Classic long-context benchmarks often test retrieving a single "needle" from a massive "haystack." MRCR raises the bar by placing up to 8 similar needles in the same context and requiring the model to select the correct one, and it shows that all LLMs struggle with this task. (A minimal loading sketch follows the links below.)
- PI-LLM Paper and Model Performance: https://arxiv.org/abs/2506.08184
- OpenAI MRCR dataset: https://huggingface.co/datasets/openai/mrcr
- DeepMind MRCR (Gemini) paper: https://arxiv.org/pdf/2409.12640v2
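To make the setup concrete, here is a minimal sketch of pulling the OpenAI MRCR dataset linked above from the Hub with the `datasets` library and peeking at one record. The `train` split and the commented-out field names are assumptions, not something this README confirms; check the dataset card for the actual schema before relying on them.

```python
# Minimal sketch: load the OpenAI MRCR dataset and inspect one record.
# Assumptions: a "train" split exists and records carry a long multi-needle
# prompt plus the expected answer -- verify against the dataset card.
from datasets import load_dataset

mrcr = load_dataset("openai/mrcr", split="train")  # split name is an assumption

example = mrcr[0]
print(example.keys())            # list the actual columns first
# print(example["prompt"][:500]) # hypothetical field: long multi-needle context
# print(example["answer"])       # hypothetical field: the needle to retrieve
```

The same pattern works for any Hub dataset: `load_dataset` resolves the repo id, and the returned `Dataset` behaves like a list of dicts keyed by column name.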
## Our test takes this one step further