Update README.md

README.md
- en
---

# PI-LLM: The Core Retrieval Challenge Behind MRCR
(ICML 2025 Long-Context Workshop, accepted)

## TL;DR
We identify a task that is **super easy for humans** but on which all LLMs—from early 0.1B models to the most modern 600B+ models (GPT-5, Grok-4, Gemini, DeepSeek, etc.)—consistently **fail in the same way**. This pinpoints the **core challenge of MRCR**.
- Mechanistic research is ongoing. The test is well established in cognitive science, where it has been used extensively to measure human **working memory capacity**.

- Multi-round co-reference in context interference:
- PI-LLM paper: https://arxiv.org/abs/2506.08184


## Our test takes this one step further
If MRCR is "multiple needles in a haystack", we show the **haystack isn't necessary** to expose core retrieval failures. By isolating—and precisely controlling—the number of similar, co-referenced items (we repeatedly update the value of the same keys in key–value pairs), our paradigm directly measures how interference from up to 400 needles limits retrieval accuracy, even without any "haystack" as background. LLMs cannot perform even a simple task such as retrieving the last value of each co-referenced item.

- We observe a clear log-linear decline in accuracy as the number of interfering updates grows (i.e., as co-references increase).
- The effect holds across the transformer models we tested. See our paper for details and methodology.

- Our demo site: https://sites.google.com/view/cog4llm
- Our paper (ICML 2025 Long-Context Workshop): https://arxiv.org/abs/2506.08184


## Key–value update paradigm (what the model sees)
We present a classical key–value experiment: the same key is updated multiple times. The model is then asked to return the current (last) value for each key. This isolates co-reference interference without requiring extremely long distractor contexts.
```
Key1: Value_1
Key1: Value_2
...
Key1: Value_N

Question:

What is the current value (the last value) for Key1?
```

Expected:
```
The current value of Key1 is Value_N.
```
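For concreteness, here is a minimal sketch of how a prompt in this format could be generated. The helper name, value strings, and `n_updates` parameter are illustrative choices, not the dataset's actual generator.

```python
import random
import string

def make_single_key_prompt(key: str = "Key1", n_updates: int = 50, seed: int = 0) -> tuple[str, str]:
    """Build one key-value update stream plus the retrieval question.

    Returns (prompt, expected_last_value). All values are distinct, so a wrong
    answer can be located within the update stream.
    """
    rng = random.Random(seed)
    values = []
    while len(values) < n_updates:
        v = "".join(rng.choices(string.ascii_lowercase, k=6))
        if v not in values:           # keep every value unique
            values.append(v)

    lines = [f"{key}: {v}" for v in values]
    prompt = (
        "\n".join(lines)
        + f"\n\nQuestion:\n\nWhat is the current value (the last value) for {key}?"
    )
    return prompt, values[-1]

prompt, expected = make_single_key_prompt(n_updates=125)
print(prompt.splitlines()[-1])   # the question line
print("expected:", expected)
```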
## Note on dataset scale:
N ranges from 1 to 400. We put up to 46 such groups (Key1..Key46) together and then ask the model to retrieve just the last value of each key. We make sure all values are different, so when the model replies, we know how far its answer is from the correct one.

**Results:**
LLMs cannot reliably retrieve Value_N. Their answers span value_1 to value_N, and as N increases, the answers skew increasingly toward value_1.

**Note**: We **randomize** the sequence of the 46 key groups in the dataset to **lower** the difficulty. (Yes, **this adjustment significantly reduces** the difficulty—see the findings below. In the original non-randomized test, even the most powerful LLMs lost track after only a few updates, with a total input of only 5–8k tokens.)
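Because every value is unique, the "distance" of a wrong answer can be read off directly. A minimal scoring sketch is below; it is illustrative only, and the paper's exact metric may differ.

```python
def answer_distance(update_values: list[str], model_answer: str) -> int | None:
    """How far the model's answer is from the correct (last) value.

    0 means the model returned Value_N (correct); k means it returned the value
    from k updates earlier; None means the answer matches no update.
    Assumes all values in the update stream are distinct, as in the dataset.
    """
    last_index = len(update_values) - 1
    for i, v in enumerate(update_values):
        if v in model_answer:          # simple containment check
            return last_index - i
    return None

# Example: the model answers with the 2nd of 4 updates -> distance 2.
print(answer_distance(["red", "blue", "green", "black"], "The current value is blue."))
```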
## Why this is challenging for LLMs:
- Multiple co-references to the same key cause strong interference.

1. As the number of updates per key (N) increases, LLMs confuse earlier values with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
2. We intentionally restrict the task to retrieving only the last value, to keep the search difficulty low and to show that all LLMs are unable to keep track due to **context interference**.
3. See the **Hard Mode / Non-Randomized Mode** section at the end of this document, where all LLMs' performance **collapses** with only a **small amount of input (5–8k tokens)**.
## Cognitive science connection: Proactive Interference (PI)
Our test adopts the **classic proactive interference** paradigm from cognitive science, a **foundational method** for studying **human working memory**. PI shows how older, similar information disrupts the encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference—not just context length—limits memory and retrieval.

- Interestingly, humans are **also affected by these three dimensions**, but far less than LLMs. Humans consistently outperform even the latest and largest models on this task.

See: https://sites.google.com/view/cog4llm

## Results at a glance
- Humans: near-ceiling accuracy (99%+) on this controlled task across conditions (see paper for protocol and exact numbers).
- LLMs: accuracy declines approximately log-linearly with the number of updates per key and with the number of concurrent update blocks (details, plots, and model list in our paper).
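If you reproduce the experiments, the log-linear trend can be checked with a simple least-squares fit of accuracy against log N. The numbers below are placeholders for your own measurements, not values from the paper.

```python
import numpy as np

# Placeholder measurements: accuracy at each number of updates per key (N).
n_updates = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256, 400])
accuracy  = np.array([1.00, 0.97, 0.93, 0.88, 0.81, 0.74, 0.66, 0.59, 0.52, 0.47])

# Fit accuracy ~ a * log(N) + b; a clearly negative slope with a good fit
# is what the log-linear decline described above looks like.
slope, intercept = np.polyfit(np.log(n_updates), accuracy, deg=1)
pred = slope * np.log(n_updates) + intercept
r2 = 1 - np.sum((accuracy - pred) ** 2) / np.sum((accuracy - np.mean(accuracy)) ** 2)
print(f"slope per log-unit: {slope:.3f}, R^2: {r2:.3f}")
```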
## Full details of the 3 tests
This dataset includes 2 additional evaluation dimensions that expose current LLMs' limits, including SOTA models: GPT-5, Grok-4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama 4, etc.

- Experiment 2 (Dataset column: exp_keys):
LLMs' capacity to resist interference, and their accuracy in retrieving the last value, decrease log-linearly as the number of concurrent keys (n_keys) grows.
This experiment fixes everything else and varies only n_keys. (Two sets of tests are provided: one fixes the number of updates at 350, the other at 125 as a lower-difficulty setting.)
- Experiment 3 (Dataset column: exp_valuelength):
Retrieval accuracy also decreases log-linearly as value length grows, causing a rapid decline across LLMs (GPT-5 and Grok-4 decline similarly to GPT-2).
This experiment fixes everything else and varies only the value length.
Two sets of tests are provided: one fixes the number of updates per key at 20, the other at only 4 as a lower-difficulty setting.

(Even at the low setting this test is hard: only 4 updates per key make all LLMs fail to retrieve the last value. We intentionally target the last value to keep the search difficulty low; retrieving values at other positions performs even worse.)
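To pull the data programmatically, a hedged sketch using the Hugging Face `datasets` library is shown below. The repo id and split are placeholders, and the filtering assumes the experiment dimensions appear as the columns named above (exp_updates, exp_keys, exp_valuelength); check the dataset viewer for the actual schema before relying on this.

```python
from datasets import load_dataset

# Placeholder repo id and split -- replace with this dataset's actual Hub path.
ds = load_dataset("ORG/PI-LLM", split="train")

# Assumption: rows for Experiment 2 carry a non-null `exp_keys` field.
exp_keys_rows = ds.filter(lambda row: row.get("exp_keys") is not None)

print(len(exp_keys_rows))
print(exp_keys_rows[0])   # inspect the prompt/answer fields before wiring up a model
```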
## Hard Mode / Non-Randomized Mode (last, but most interesting and striking)

This mode takes the exact format shown in this document, without randomization. We fix everything else and vary only the number of updates.
- This separate dataset consists of 46 of the following blocks, in non-randomized order:

Key1: Value_1
Key1: Value_2
......
Key1: Value_N

Key2: Value_1
Key2: Value_2
......
Key2: Value_N

.....all the way to Key46

Question:

What is the current value (the last value) for Key1, Key2, ..., Key46?
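Below is a small, illustrative generator for this block layout; it is not the script used to build the released dataset. With `randomize=False` it reproduces the hard-mode ordering above, and with `randomize=True` it shuffles the order of the key blocks, following the earlier note about randomizing the 46 key groups (the released dataset's randomization may differ). Key and value strings are synthetic placeholders.

```python
import random

def build_hard_mode_prompt(n_keys: int = 46, n_updates: int = 50,
                           randomize: bool = False, seed: int = 0):
    """Sketch of the hard-mode layout: one contiguous update block per key.

    randomize=False keeps blocks in Key1..Key46 order (hard mode);
    randomize=True shuffles the block order (the easier setting).
    Returns (prompt, {key: correct last value}).
    """
    rng = random.Random(seed)
    blocks, answers = [], {}
    for k in range(1, n_keys + 1):
        key = f"Key{k}"
        values = [f"{key}_v{i}" for i in range(1, n_updates + 1)]  # all distinct
        answers[key] = values[-1]
        blocks.append("\n".join(f"{key}: {v}" for v in values))

    if randomize:
        rng.shuffle(blocks)   # randomize the sequence of key groups

    question = ("\n\nQuestion:\n\nWhat is the current value (the last value) for "
                + ", ".join(answers) + "?")
    return "\n\n".join(blocks) + question, answers

prompt, answers = build_hard_mode_prompt(n_keys=3, n_updates=4)
print(prompt)
```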
**Result**
- In this mode, **LLMs have a more-than-50% chance of confusing the last value with an earlier value after only 20–50 updates** (fewer than 5–8k tokens, far less than any LLM's context window).
- All models quickly confuse earlier values with the most recent one.
- This is the **original and most striking test**, but we present it separately since performance declines too quickly to allow meaningful ranking across models.
- Performance for this mode is also **reported in our paper (Figure 4)**.
- **This mode is the most striking, as it highlights a fundamental limitation in how LLMs process context: a task that humans essentially never fail.**
## Quick Start - Evaluate Your Model

```python