## Why this is challenging for LLMs:
- Multiple co-references to the same key cause strong interference.
- As the number of updates per key (N) increases, LLMs increasingly confuse earlier values with the most recent one and fail to retrieve the last value. (Experiment 1; dataset column: exp_updates; a minimal construction sketch follows below.)
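To make the setup concrete, here is a minimal, hypothetical sketch of how an update-then-query probe of this kind can be built and scored. The key/value vocabulary, the prompt wording, and the `query_model` callable are illustrative assumptions, not the dataset's actual generation or evaluation code; the released columns (exp_updates, exp_keys, exp_valuelength) already contain ready-made prompts and gold answers.

```python
import random

def build_pi_prompt(n_updates: int, n_keys: int = 1, value_len: int = 1, seed: int = 0):
    """Build a proactive-interference probe: every key is updated n_updates times,
    then the model is asked for the LAST value of one key.
    Returns (prompt, gold_answer). Illustrative sketch only."""
    rng = random.Random(seed)
    keys = [f"key_{i}" for i in range(n_keys)]
    vocab = ["apple", "river", "stone", "cloud", "tiger", "lamp", "forest", "coin"]
    last_value = {}
    lines = []
    for _ in range(n_updates):
        for key in keys:
            value = " ".join(rng.choice(vocab) for _ in range(value_len))
            last_value[key] = value  # only the most recent update is the gold answer
            lines.append(f"{key} = {value}")
    probe_key = rng.choice(keys)
    question = (f"\n\nQuestion: What is the current (most recent) value of {probe_key}? "
                "Answer with the value only.")
    return "\n".join(lines) + question, last_value[probe_key]

def is_correct(model_answer: str, gold: str) -> bool:
    """Exact match on the final value, ignoring case and surrounding whitespace."""
    return model_answer.strip().lower() == gold.strip().lower()

# Example usage (query_model is your own model call returning a string):
# prompt, gold = build_pi_prompt(n_updates=50, n_keys=4, value_len=2)
# print(is_correct(query_model(prompt), gold))
```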
## Cognitive science connection: Proactive Interference (PI)
Our test adopts the classic proactive interference paradigm from cognitive science, a foundational method for studying human working memory. PI shows how older, similar information disrupts encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference—not just context length—limits memory and retrieval.
See: https://sites.google.com/view/cog4llm
- Humans: near-ceiling accuracy (99%+) on this controlled task across conditions (see paper for protocol and exact numbers).
- LLMs: accuracy declines approximately log-linearly with the number of updates per key and with the number of concurrent update blocks (details, plots, and model list in our paper; a toy curve-fitting sketch follows below).
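To spell out what "approximately log-linear" means here: accuracy is modeled as roughly a + b * ln(N) with a negative slope b. Below is a small, dependency-free sketch for fitting that line to accuracies you have measured yourself; no numbers from the paper are reproduced or assumed.

```python
import math

def fit_log_linear(n_values, accuracies):
    """Least-squares fit of accuracy ~ a + b * ln(N).
    n_values: update counts (or key counts); accuracies: your measured scores in [0, 1].
    Returns (a, b); a negative b reflects the log-linear decline described above."""
    xs = [math.log(n) for n in n_values]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(accuracies) / len(accuracies)
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, accuracies))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Example: a, b = fit_log_linear([2, 4, 8, 16, 32], your_measured_accuracies)
```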
## Full details of the 3 tests

This dataset adds two more evaluation dimensions that expose current LLMs' limits, covering SOTA models including GPT-5, Grok 4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama 4, and others. A toy sweep over both extra dimensions is sketched after the list below.
- Experiment 2 (Dataset column: exp_keys): LLMs' capacity to resist interference and their accuracy at retrieving the last value decrease log-linearly as the number of concurrent keys (n_keys) grows.
- Experiment 3 (Dataset column: exp_valuelength): Retrieval accuracy also decreases log-linearly as value length grows.
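As a rough illustration of how these two extra dimensions could be probed end to end, the sketch below sweeps the number of keys and the value length, reusing the hypothetical build_pi_prompt and is_correct helpers from the earlier sketch. The grid values and query_model are assumptions; the dataset's exp_keys and exp_valuelength columns define the actual conditions.

```python
def sweep(query_model, n_updates=20, trials=20):
    """Toy accuracy sweep over (n_keys, value_len); reuses build_pi_prompt/is_correct
    from the sketch above. query_model(prompt) -> str is your own model call."""
    results = {}
    for n_keys in (2, 4, 8, 16, 32):       # mirrors the exp_keys dimension
        for value_len in (1, 2, 4, 8):     # mirrors the exp_valuelength dimension
            correct = 0
            for seed in range(trials):
                prompt, gold = build_pi_prompt(n_updates, n_keys, value_len, seed)
                correct += is_correct(query_model(prompt), gold)
            results[(n_keys, value_len)] = correct / trials
    return results  # accuracy per (n_keys, value_len) cell

# Example: acc = sweep(query_model)  # check whether accuracy falls as each axis grows
```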
## Quick Start - Evaluate Your Model
```python