Improve dataset card: Add task category, tags, language, detailed description, and sample usage

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +42 -2

README.md CHANGED
@@ -1,4 +1,14 @@
 ---
+license: mit
+task_categories:
+- text-generation
+tags:
+- self-correction
+- llms
+- benchmarking
+- llm-evaluation
+language:
+- en
 dataset_info:
   features:
   - name: id
@@ -30,7 +40,37 @@ configs:
   data_files:
   - split: test
     path: data/test-*
-license: mit
 ---
 
-Dataset for [Self-Correction Bench: Revealing and Addressing the Self-Correction Blind Spot in LLMs](https://arxiv.org/abs/2507.02778)
+This repository contains the dataset for [Self-Correction Bench: Revealing and Addressing the Self-Correction Blind Spot in LLMs](https://arxiv.org/abs/2507.02778).
+
+Self-Correction Bench is a systematic framework for measuring the "Self-Correction Blind Spot" in large language models (LLMs): the failure of a model to correct errors in its own output even though it can identify the identical errors when they appear in user input. The dataset supports controlled study of this limitation through error injection at three complexity levels, offering insight into how LLM reliability and trustworthiness can be improved.
+
+### Dataset Structure
+The dataset includes the following fields:
+- `id`: Unique identifier for each sample.
+- `type`: Category of the error injection (e.g., controlled scenarios of different complexity).
+- `messages_error_injection_in_model`: A list of messages (content and role) representing a conversation in which an error has been deliberately injected into the model's generated output.
+- `messages_error_in_user`: A list of messages (content and role) presenting the identical error in the user's input, used for comparison.
+- `correct_answer`: The expected correct response for the scenario, serving as ground truth for evaluation.
+
+### Sample Usage
+To load and inspect the dataset, you can use the `datasets` library:
+
+```python
+from datasets import load_dataset
+
+# Load the 'test' split of the dataset
+dataset = load_dataset("self-correction-bench", split="test")
+
+# Access a single sample
+sample = dataset[0]
+print(f"ID: {sample['id']}")
+print(f"Type: {sample['type']}")
+print(f"Messages with model error injection: {sample['messages_error_injection_in_model']}")
+print(f"Messages with user error: {sample['messages_error_in_user']}")
+print(f"Correct Answer: {sample['correct_answer']}")
+
+# Researchers can use these samples to prompt an LLM and compare its generated
+# output against the 'correct_answer' to evaluate its self-correction capabilities.
+```
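
As a rough sketch of how the fields added above might feed an evaluation loop: the `query_model` helper and the substring-match scoring below are hypothetical placeholders, not part of this dataset card or the paper's released code; a real evaluation would substitute an actual model call and the paper's scoring procedure.

```python
from datasets import load_dataset

def query_model(messages):
    """Hypothetical stand-in for a real LLM call (e.g., an API client).
    Takes a list of {'role', 'content'} dicts and returns the reply text."""
    raise NotImplementedError

dataset = load_dataset("self-correction-bench", split="test")

corrected_own_errors = 0
corrected_user_errors = 0
for sample in dataset:
    # Continue a conversation where the error was injected into the model's own output
    reply_model = query_model(sample["messages_error_injection_in_model"])
    # Respond to the identical error placed in the user's input
    reply_user = query_model(sample["messages_error_in_user"])
    # Naive substring check against the ground truth (placeholder scoring)
    corrected_own_errors += sample["correct_answer"] in reply_model
    corrected_user_errors += sample["correct_answer"] in reply_user

n = len(dataset)
print(f"Corrected errors in own output: {corrected_own_errors / n:.1%}")
print(f"Corrected errors in user input: {corrected_user_errors / n:.1%}")
# A large gap between the two rates is the "self-correction blind spot".
```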