Update README.md
README.md CHANGED
@@ -178,6 +178,23 @@ Currently it includes two files:
- **sequential_additional.parquet** → Sequential mode (non-randomized, strict per-key ordered update blocks). Trivial for humans yet still challenging for many LLMs; smaller models (all under 600B parameters) are especially affected, and proactive-interference effects are clearly exposed even in short contexts (~5–8k tokens). See the loading sketch below.
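
To inspect this split directly, one option is to pull the parquet from the Hub and read it with pandas. The filename and root-level location below are assumptions based on the file list above; adjust them to the repository's actual layout.

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Assumed path: sequential_additional.parquet at the dataset repo root
path = hf_hub_download(
    repo_id="giantfish-fly/pi-llm",
    filename="sequential_additional.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(len(df), "rows; columns:", df.columns.tolist())
```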

We are an interdisciplinary group interested in probing the boundaries between human and machine intelligence.

Chupei Wang*
Bachelor's degree, Physics Department, University of Virginia.

With a foundation in physics and philosophy, including a year at the University of Chicago Divinity School, Chupei explores where logic and mind meet their limits, probing how the edges of science and the humanities intersect. Chupei is driven by a curiosity about where cognitive architectures, biological and artificial, break down, and what these failures teach us about intelligence itself. Currently seeking lab and research opportunities.

Jiaqiu Vince Sun*
PhD Candidate, NYU Center for Neuroscience

A former professional architect turned neuroscientist, Jiaqiu draws on his background in spatial design, cognitive neuroscience, and philosophy of mind to investigate how memory emerges and diverges in brains and artificial systems. His primary focus lies in the higher-level functions of the brain, such as self-monitoring and control.

## Quick Start - Evaluate Your Model
```python

@@ -350,6 +367,126 @@ if 'run_id' in results_df.columns:

print(f"{r['experiment']}: {r['accuracy_percent']:.1f}% (averaged over runs)")

## 🏆 Advanced Evaluation with AUC Scoring

### Why AUC Scoring?
- **Average accuracy** treats all tasks equally → poor model differentiation
- **AUC (log base 1.5)** weights harder tasks more heavily → better ranking among high-end models (see the weight sketch below)
- **Essential for research** comparing SOTA models on the most difficult task ranges
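
To make the weighting concrete, here is a small illustrative sketch (not part of the benchmark code) of how the log base 1.5 weight grows with the number of updates; a 350-update task counts roughly four times as much as a 4-update one.

```python
import math

# Illustrative only: difficulty weight assigned to a task with n updates
for n_updates in (4, 32, 125, 350):
    print(n_updates, "->", round(math.log(n_updates, 1.5), 2))
# 4 -> 3.42, 32 -> 8.55, 125 -> 11.91, 350 -> 14.45
```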
### Complete Evaluation Function

```python
import math

def compute_pi_auc_score(results, log_base=1.5):
    """
    PI-LLM AUC score (PRIMARY: 'auc_log1.5'), using log_base(n_updates) weights.
    - For two-mode experiments (keys/value length), also returns easy/hard AUCs.
    - For others (updates/sequential), returns a single overall AUC.
    """
    if not results:
        return {'avg_accuracy': 0.0, 'auc_log1.5': 0.0, 'total_samples': 0}

    def wmean(samples):
        # weight = log_base(max(n_updates, 2)) to reflect difficulty
        ws = [math.log(max(s.get('n_updates', 2), 2), log_base) for s in samples]
        denom = sum(ws)
        return (sum(s['accuracy'] * w for s, w in zip(samples, ws)) / denom) if denom else 0.0

    exp = results[0].get('experiment', '')
    avg = sum(s['accuracy'] for s in results) / len(results)
    overall = wmean(results)

    # Two-mode thresholds
    if 'exp_keys' in exp:
        easy_thr, hard_thr = 125, 350
    elif 'exp_valuelength' in exp:
        easy_thr, hard_thr = 4, 20
    else:
        # Single-mode path
        return {'avg_accuracy': avg, 'auc_log1.5': overall, 'total_samples': len(results)}

    easy = [s for s in results if s.get('n_updates', 0) <= easy_thr]
    hard = [s for s in results if s.get('n_updates', 0) >= hard_thr]

    return {
        'avg_accuracy': avg,
        'auc_log1.5': overall,  # PRIMARY metric
        'auc_log1.5_easy': wmean(easy) if easy else 0.0,
        'auc_log1.5_hard': wmean(hard) if hard else 0.0,
        'total_samples': len(results),
    }
```
### Usage Example

```python
from datasets import load_dataset

# Load PI-LLM dataset
dataset = load_dataset("giantfish-fly/pi-llm", "core")['test']

# Run your model and collect results
results = []
for sample in dataset:
    pred = your_model(sample['prompt'])  # Your model inference
    accuracy = grade_pi_response(pred, sample['answer_formatted'])
    results.append({
        'accuracy': accuracy,
        'n_updates': sample['n_updates'],
        'experiment': sample['experiment']
    })

# Compute AUC scores
scores = compute_pi_auc_score(results)

# Display results (format varies by experiment)
print(f"🏆 AUC Score: {scores['auc_log1.5']:.3f}")  # PRIMARY metric
if 'auc_log1.5_easy' in scores:
    print(f"📊 Easy Mode: {scores['auc_log1.5_easy']:.3f}")
    print(f"📊 Hard Mode: {scores['auc_log1.5_hard']:.3f}")
```
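
`grade_pi_response` above refers to the grading helper used elsewhere in this README. If you need a standalone stand-in while wiring things up, a minimal exact-match placeholder (purely illustrative, not the benchmark's official grader) could look like this.

```python
def grade_pi_response(pred: str, answer_formatted: str) -> float:
    """Hypothetical stand-in grader: 1.0 if the expected answer string
    appears in the normalized prediction, else 0.0."""
    pred_norm = " ".join(pred.strip().lower().split())
    ans_norm = " ".join(answer_formatted.strip().lower().split())
    return 1.0 if ans_norm in pred_norm else 0.0
```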

### Output Formats

**Single-Mode Experiments** (`exp_updates`, `exp_sequential`):
```python
{'avg_accuracy': 0.600, 'auc_log1.5': 0.412, 'total_samples': 100}
```

**Two-Mode Experiments** (`exp_keys`, `exp_valuelength`):
```python
{
    'avg_accuracy': 0.600, 'auc_log1.5': 0.576,           # Overall metrics
    'auc_log1.5_easy': 0.850, 'auc_log1.5_hard': 0.350,   # Mode breakdown
    'total_samples': 150
}
```

### 🎯 For Model Ranking: Use `auc_log1.5` as your primary metric!

### ✅ Finally, Total Score (Macro PI-AUC1.5)

**Definition:** average of each test's `auc_log1.5` (simple, clear leaderboard number).

```python
def compute_total_pi_auc(all_tests, log_base=1.5):
    """
    Total PI-AUC1.5 across tests = average of per-test auc_log1.5.
    all_tests: dict {test_name -> list[results]} where each `results` list
    is what you'd pass to compute_pi_auc_score(...).
    """
    if not all_tests:
        return {"per_test_auc_log1.5": {}, "total_auc_log1.5": 0.0}

    per_test = {
        name: compute_pi_auc_score(rs, log_base)["auc_log1.5"]
        for name, rs in all_tests.items() if rs
    }
    total = sum(per_test.values()) / len(per_test) if per_test else 0.0
    return {"per_test_auc_log1.5": per_test, "total_auc_log1.5": total}
```
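
As a usage sketch, group your per-sample results by test name and pass the dict in; the grouping below by the `experiment` field is an assumption about how you split your runs.

```python
# Hypothetical grouping: one results list per experiment you evaluated
all_tests = {
    "exp_updates": [r for r in results if r["experiment"] == "exp_updates"],
    "exp_keys": [r for r in results if r["experiment"] == "exp_keys"],
}
totals = compute_total_pi_auc(all_tests)
print(f"Total PI-AUC1.5: {totals['total_auc_log1.5']:.3f}")
for name, auc in totals["per_test_auc_log1.5"].items():
    print(f"  {name}: {auc:.3f}")
```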

@@ -368,18 +505,3 @@ if 'run_id' in results_df.columns:

url={https://arxiv.org/abs/2506.08184},
}
```