giantfish-fly committed
Commit 21d4f44 · verified · 1 Parent(s): 5766571

Update README.md

Files changed (1):
  1. README.md +7 -5
README.md CHANGED
@@ -28,13 +28,13 @@ configs:
 
 ---
 ---
-**Super easy task for humans** that **All SOTA-LLM fails** to retrive the correct answer from context. Including SOTA models: GPT5, Grok4, DeepSeek, Gemini 2.5PRO, Mistral, Llama4...etc
+**Super easy task for humans** that **all SOTA LLMs fail**: retrieving the correct answer from context. Tested models include GPT5, Grok4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama4, etc.
 - Accepted at the ICML 2025 Long-Context Foundation Models Workshop (https://arxiv.org/abs/2506.08184).
 - Update: This dataset is integrated into Moonshot AI (Kimi)'s **internal benchmarking framework** for assessing **tracking capacity and context interference in LLMs/agents**.
 
 
-
-Simple Task:
+## Key–value update paradigm (what the model sees): 1 key, N updates each
+
 ```
 Key1: Value_1
 Key1: Value_2
@@ -47,7 +47,7 @@ Question:
 What is the current value (the last value) for Key1?
 ```
 
-Expected:
+Expected Answer:
 ```
 The current value of Key1 is Value_N.
 ```
@@ -58,13 +58,15 @@ ALL tested SOTA LLMs **cannot reliably retrieve** Value_N. Distribution spans va
 
 
 
-## Done/Full detail read below.
+## The end: full details and the cognitive-science bases are listed below.
 For the full analysis, see below or the paper. In short, the **multi-coreferenced structure** of the data makes all LLMs confuse an earlier value with the last. **Larger models tend to resist better**, yet **within N = 100 updates** **all LLMs eventually fail to retrieve** the last value.
 
-
 ## Note on dataset scale:
 N ranges from 1 to 400. We put up to 46 such groups (Key1..Key46) together and then ask the model to retrieve just the last value of each key. We make sure all values are distinct, so when the model replies, we know how far its answer is from the correct one.
+
+## Impact
+
+This mini dataset's structure is **common in many data-processing tasks** (finance, health, cursor position, etc.). For example, consider a sequence of blood-pressure (BP) readings, where the task is to keep track of the most recent value: BP: 120 (triage); BP: 128 (10 min later); BP: 125 (discharge). **What is the patient's last/current BP?**
 
 
 
 
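The updated README's scale note implies a simple construction: up to 46 keys, each updated N times, with every value distinct across the whole prompt. A minimal sketch of such a generator (assumptions: round-robin interleaving of updates and arbitrary unique integers standing in for Value_1..Value_N; `build_instance` is an illustrative name, not the dataset's released tooling):

```
import random

def build_instance(num_keys=46, num_updates=100, seed=0):
    # One benchmark instance: `num_keys` keys, each updated `num_updates`
    # times, with every value globally unique across the whole prompt.
    rng = random.Random(seed)
    pool = rng.sample(range(10**6), num_keys * num_updates)  # distinct values
    lines, expected = [], {}
    for step in range(num_updates):            # round-robin over the keys
        for k in range(1, num_keys + 1):
            value = pool[step * num_keys + (k - 1)]
            lines.append(f"Key{k}: Value_{value}")
            expected[f"Key{k}"] = value        # overwritten until the final pass
    question = "What is the current value (the last value) for each key?"
    return "\n".join(lines) + "\n\nQuestion:\n" + question, expected

prompt, answers = build_instance(num_keys=2, num_updates=3)
print(prompt)    # 6 update lines followed by the question
print(answers)   # ground truth: the value from each key's final update
```

Keeping every value globally unique is what lets a grader map any wrong reply back to the exact update it came from.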
 
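Because all values are distinct, a wrong reply can be scored by how stale it is, i.e., how many updates before the final one the returned value was written. A small sketch of that measurement (the `answer_lag` helper is hypothetical, not part of the dataset's tooling):

```
def answer_lag(history, model_value):
    # Return how many updates before the last one the model's answer was
    # written: 0 = correct, k = k updates stale, None = the value never
    # appeared in this key's history (hallucinated).
    if model_value not in history:
        return None
    return len(history) - 1 - history.index(model_value)  # unique values, so unambiguous

history = [17, 42, 8, 99]        # Key1's successive values; last = 99
print(answer_lag(history, 99))   # 0    -> retrieved the current value
print(answer_lag(history, 8))    # 1    -> one update stale
print(answer_lag(history, 123))  # None -> not from this key's history
```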
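The Impact example underlines why the failure is striking: for ordinary software, last-value tracking is a one-line dictionary update. A tiny sketch of the BP scenario (hypothetical snippet, not from the dataset):

```
# Trivial for a program: a dict keeps only the latest value per key.
current = {}
for line in ["BP: 120", "BP: 128", "BP: 125"]:
    key, value = line.split(": ")
    current[key] = value   # each update overwrites the previous one

print(current["BP"])       # 125 -> the patient's last/current reading
```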