Update README.md

LLMs cannot reliably retrieve Value_N. The answers are distributed across Value_1 through Value_N, and as N increases they skew increasingly toward Value_1.
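
For concreteness, here is a minimal sketch of the kind of probe this task builds: every key is overwritten N times, and the question asks only for the latest value of one key. The `build_probe` helper, key/value naming, and question wording below are illustrative assumptions, not the dataset's actual generation code or prompt template.

```python
def build_probe(n_updates: int, n_keys: int = 4):
    """Toy probe: each key is overwritten n_updates times; the question asks
    only for the most recent value of one key (illustration only)."""
    history = {
        f"Key{k}": [f"Value_{k}_{i}" for i in range(1, n_updates + 1)]
        for k in range(1, n_keys + 1)
    }
    # Sequential layout: each key's updates form one contiguous block.
    # (The randomized, interleaved layout is described under "On Randomization" below.)
    context = "\n".join(
        f"{key} = {value}" for key, values in history.items() for value in values
    )
    question = "What is the current value of Key1?"
    answer = history["Key1"][-1]  # ground truth: the last update wins
    return context, question, answer


context, question, answer = build_probe(n_updates=10)
# A model suffering from interference tends to answer Value_1_1 instead of Value_1_10.
```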

## Why this is challenging for LLMs:
- Multiple co-references to the same key cause strong interference.

1. As the number of updates per key (N) increases, LLMs confuse earlier values with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
2. We intentionally limit the task to retrieving only the last value, keeping the search difficulty low and showing that all LLMs lose track due to **context interference** (see the scoring sketch below).
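
The two failure modes above can be made measurable with a small scoring routine along these lines: check whether the model returned the final value and, when it did not, record how early in the update history the stale answer came from. Only the `exp_updates` column name comes from the dataset description; the `values` field and the routine itself are placeholders to adapt to the actual schema.

```python
from collections import Counter

def score_last_value_retrieval(rows, model_answers):
    """Accuracy on last-value retrieval, plus a histogram of which earlier
    update each wrong answer matches (index 1 = the very first value).

    Assumes each row carries the probed key's ordered update history under a
    placeholder field `values`; N corresponds to the dataset's `exp_updates`
    column."""
    correct, stale_hits = 0, Counter()
    for row, answer in zip(rows, model_answers):
        values = row["values"]
        if values[-1] in answer:          # the model tracked the last update
            correct += 1
            continue
        for i, v in enumerate(values[:-1], start=1):
            if v in answer:               # an earlier value displaced the latest one
                stale_hits[i] += 1        # skews toward i == 1 as N grows
                break
    return correct / max(len(rows), 1), stale_hits
```

The histogram is what exposes the skew toward Value_1 described at the top of this section.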

## On Randomization
We **RANDOMIZE** update order after generation to mimic unpredictable changes by interleaving updates across different keys (i.e., different keys’ updates occur back-to-back rather than in contiguous blocks). Counterintuitively, this often helps LLMs, since the final update usually lands near the end of the context. In the sequential setting, most smaller models (below ~600B parameters) lose track after only a few updates, even with 5–8k-token inputs.

See the **Sequential / Original Non-Random Mode** section at the end of this document, where many LLMs’ performance still **collapses** with only a **small amount of input (5–8k tokens)**.
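
As a rough illustration of the randomized mode, the sketch below interleaves per-key update lists while keeping each key's internal order, so Value_N remains the last occurrence of its key in the context; the dataset's own generation script may implement this differently.

```python
import random

def interleave_randomly(per_key_updates, seed=0):
    """Randomly interleave per-key update lists while preserving each key's
    internal order, so a key's final generated value is also the last one
    that appears in the context (illustrative sketch)."""
    rng = random.Random(seed)
    queues = {key: list(updates) for key, updates in per_key_updates.items()}
    stream = []
    while queues:
        # Drawing the next key with probability proportional to its remaining
        # updates yields a uniformly random order-preserving interleaving.
        keys = list(queues)
        key = rng.choices(keys, weights=[len(queues[k]) for k in keys])[0]
        stream.append(queues[key].pop(0))
        if not queues[key]:
            del queues[key]
    return stream

# The sequential (original, non-random) mode instead keeps each key's updates
# as one contiguous block, i.e. the concatenation of the per-key lists.
```

Because a key's last update occupies the largest of its randomly assigned positions, it tends to land near the end of the context as N grows, which is the effect noted above.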
## Cognitive science connection: Proactive Interference (PI)
Our test adopts the **classic proactive interference** paradigm from cognitive science, a **foundational method** for studying **human working memory**. PI shows how older, similar information disrupts encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference, not just context length, limits memory and retrieval.