Update README.md

README.md CHANGED

@@ -92,8 +92,10 @@ The current value of Key1 is Value_N.
 LLMs cannot reliably retrieve Value_N. The answer distribution spans Value_1 to Value_N, and as N increases, answers skew increasingly toward Value_1.
 
 
-##
-We **RANDOMIZE**
+## On Randomization
+We **RANDOMIZE** update order after generation to mimic unpredictable changes by interleaving updates across different keys (i.e., different keys' updates occur back-to-back rather than in contiguous blocks). Counterintuitively, this often helps LLMs, since the final update usually lands near the end of the context. In the sequential setting, most smaller models (under ~600B parameters) lose track after only a few updates, even with 5–8k-token inputs.
+(A sequential-mode dataset is provided separately at the end of this document.)
+
 
 ## Why this is challenging for LLMs:
 - Multiple co-references to the same key cause strong interference.
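The randomization described in the diff above (shuffling update order across keys while keeping each key's own updates in sequence, so the final value per key stays well defined) could be sketched roughly as follows. This is a hypothetical illustration, not the dataset's actual build script; the function and naming are invented:

```python
import random

def make_updates(num_keys=3, updates_per_key=5, interleave=True, seed=0):
    """Generate 'KeyX is now ValueX_i.' update lines.

    With interleave=True, updates to different keys are shuffled together
    (randomized mode); with interleave=False, each key's updates appear as
    one contiguous block (sequential mode). Either way, each key's updates
    keep their internal order, so the last-seen value per key is final.
    """
    updates = [
        (f"Key{k}", f"Value{k}_{i}")
        for k in range(1, num_keys + 1)
        for i in range(1, updates_per_key + 1)
    ]
    if interleave:
        rng = random.Random(seed)
        # Shuffle which key gets updated at each step, but pop values
        # in order per key, preserving each key's internal sequence.
        by_key = {}
        for key, val in updates:
            by_key.setdefault(key, []).append(val)
        order = [key for key, _ in updates]
        rng.shuffle(order)
        updates = [(key, by_key[key].pop(0)) for key in order]
    lines = [f"{key} is now {val}." for key, val in updates]
    final = {key: val for key, val in updates}  # ground-truth answers
    return lines, final
```

Because each key's updates keep their relative order, the ground-truth answer for every key is its last update regardless of interleaving, which is what makes the randomized and sequential variants directly comparable.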