---
# Dataset Card Metadata
# For more information, see: https://huggingface.co/docs/hub/datasets-cards
# Example: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1

# Basic Information
# ---------------
license: mit
# A list of languages the dataset is in.
language:
  - en
# A list of tasks the dataset is suitable for.
task_categories:
  - visual-question-answering
  - image-text-to-text
  - question-answering
# task_ids: # More specific task IDs from https://hf.co/tasks
#   - visual-question-answering
# Pretty name for the dataset.
pretty_name: Indian Competitive Exams (JEE/NEET) LLM Benchmark
# Dataset identifier from a recognized benchmark.
# benchmark: # e.g., super_glue, anli
# Date of the last update.
# date: # YYYY-MM-DD or YYYY-MM-DDTHH:MM:SSZ (ISO 8601)

# Dataset Structure
# -----------------
# List of configurations for the dataset.
configs:
  - config_name: default
    data_files: # How data files are structured for this config
      - split: test
        path: data/metadata.jsonl # Path to the data file or glob pattern
    images_dir: images # Path to the directory containing the image files
    # You can add more configs if your dataset has them.


# Splits
# ------
# Information about the data splits.
splits:
  test: # Name of the split
#     num_bytes: # Size of the split in bytes
    num_examples: 482 # Number of examples in the split
    # You can add dataset_tags, dataset_summary, etc. for each split.

# Column Naming
# -------------
# Information about the columns (features) in the dataset.
column_info:
  image:
    description: The question image.
    data_type: image
  question_id:
    description: Unique identifier for the question.
    data_type: string
  exam_name:
    description: Name of the exam (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED").
    data_type: string
  exam_year:
    description: Year of the exam.
    data_type: int32
  exam_code:
    description: Specific paper code/session (e.g., "T3", "45").
    data_type: string
  subject:
    description: Subject (e.g., "Physics", "Chemistry", "Biology", "Mathematics").
    data_type: string
  question_type:
    description: Type of question (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER").
    data_type: string
  correct_answer:
    description: List containing the correct answer strings (e.g., ["A"], ["B", "C"]) or a single string for INTEGER type.
    data_type: list[string]

# More Information
dataset_summary: |
  A benchmark dataset for evaluating Large Language Models (LLMs) on questions from major Indian competitive examinations:
  Joint Entrance Examination (JEE Main & Advanced) for engineering and the National Eligibility cum Entrance Test (NEET) for medical fields.
  Questions are provided as images, and metadata includes exam details (name, year, subject, question type) and correct answers.
  The benchmark supports various question types including Single Correct MCQs, Multiple Correct MCQs (with partial marking for JEE Advanced), and Integer type questions.
dataset_tags: # Tags to help users find your dataset
  - education
  - science
  - india
  - competitive-exams
  - llm-benchmark
annotations_creators: # How annotations were created
  - found # As questions are from existing exams
  - expert-generated # Assuming answers are official/verified
annotation_types: # Types of annotations
  - multiple-choice
source_datasets: # If your dataset is derived from other datasets
  - original # If it's original data
#   - extended # If it extends another dataset
size_categories: # Approximate size of the dataset
  - n<1K # (482 examples)
dataset_curation_process: |
  Questions are sourced from official JEE and NEET examination papers.
  They are provided as images to maintain original formatting and diagrams.
  Metadata is manually compiled to link images with exam details and answers.
personal_sensitive_information: false # Does the dataset contain PII?
# similar_datasets:
#   - # List similar datasets if any
---
# JEE/NEET LLM Benchmark Dataset

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Dataset Description

This repository contains a benchmark dataset designed for evaluating the capabilities of Large Language Models (LLMs) on questions from major Indian competitive examinations:
*   **JEE (Main & Advanced):** Joint Entrance Examination for engineering.
*   **NEET:** National Eligibility cum Entrance Test for medical fields.

The questions are presented in image format (`.png`) as they appear in the original papers. The dataset includes metadata linking each image to its corresponding exam details (name, year, subject, question type), and correct answer(s). The benchmark framework supports various question types including Single Correct MCQs, Multiple Correct MCQs (with partial marking for JEE Advanced), and Integer type questions.

**Current Data:**
*   **NEET 2024** (Code T3): 200 questions across Physics, Chemistry, Botany, and Zoology
*   **NEET 2025** (Code 45): 180 questions across Physics, Chemistry, Botany, and Zoology  
*   **JEE Advanced 2024** (Paper 1 & 2): 102 questions across Physics, Chemistry, and Mathematics
*   **JEE Advanced 2025** (Paper 1 & 2): 96 questions across Physics, Chemistry, and Mathematics
*   **Total:** 578 questions with comprehensive metadata

## Key Features

*   **πŸ–ΌοΈ Multimodal Reasoning:** Uses images of questions directly, testing the multimodal reasoning capability of models
*   **πŸ“Š Exam-Specific Scoring:** Implements authentic scoring rules for different exams and question types, including partial marking for JEE Advanced
*   **πŸ”„ Robust API Handling:** Built-in retry mechanism and re-prompting for failed API calls or parsing errors
*   **🎯 Flexible Filtering:** Filter by exam name, year, or specific question IDs for targeted evaluation
*   **πŸ“ˆ Comprehensive Results:** Generates detailed JSON and human-readable Markdown summaries with section-wise breakdowns
*   **πŸ”§ Easy Configuration:** Simple YAML-based configuration for models and parameters

## How to Use

### Using `datasets` Library

The dataset is designed to be loaded using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the evaluation split
dataset = load_dataset("Reja1/jee-neet-benchmark", split='test')

# Example: Access the first question
example = dataset[0]
image = example["image"]
question_id = example["question_id"]
subject = example["subject"]
correct_answers = example["correct_answer"]

print(f"Question ID: {question_id}")
print(f"Subject: {subject}")
print(f"Correct Answer(s): {correct_answers}")
# Display the image (requires Pillow)
# image.show()
```

### Manual Usage (Benchmark Scripts)

This repository contains scripts to run the benchmark evaluation directly:

1.  **Clone the repository:**
    ```bash
    # Replace with your actual repository URL
    git clone https://github.com/your-username/jee-neet-benchmark
    cd jee-neet-benchmark
    # Ensure Git LFS is installed and pull large files if necessary
    # git lfs pull
    ```

2.  **Install dependencies:**
    ```bash
    # It's recommended to use a virtual environment
    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    pip install -r requirements.txt
    ```

3.  **Configure API Key:**
    *   Create a file named `.env` in the root directory of the project.
    *   Add your OpenRouter API key to this file:
        ```dotenv
        OPENROUTER_API_KEY=your_actual_openrouter_api_key_here
        ```
    *   **Important:** The `.gitignore` file is already configured to prevent committing the `.env` file. Never commit your API keys directly.

4.  **Configure Models:**
    *   Edit the `configs/benchmark_config.yaml` file.
    *   Modify the `openrouter_models` list to include the specific model identifiers you want to evaluate:
        ```yaml
        openrouter_models:
          - "google/gemini-2.5-pro-preview-03-25"
          - "openai/gpt-4o"
          - "anthropic/claude-3-5-sonnet-20241022"
        ```
    *   Ensure these models support vision input on OpenRouter.
    *   You can also adjust other parameters like `max_tokens` and `request_timeout` if needed.

5.  **Run the benchmark:**

    **Basic usage (run a single model on all questions):**
    ```bash
    python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25"
    ```

    **Filter by exam and year:**
    ```bash
    # Run only NEET 2024 questions
    python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --exam_name NEET --exam_year 2024
    
    # Run only JEE Advanced 2024 questions
    python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "anthropic/claude-3-5-sonnet-20241022" --exam_name JEE_ADVANCED --exam_year 2024
    ```

    **Run specific questions:**
    ```bash
    # Run specific question IDs
    python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25" --question_ids "N24T3001,N24T3002,JA24P1M01"
    ```

    **Custom output directory:**
    ```bash
    python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --output_dir my_custom_results
    ```

    **Available filtering options:**
    - `--exam_name`: Choose from `NEET`, `JEE_MAIN`, `JEE_ADVANCED`, or `all` (default)
    - `--exam_year`: Choose from available years (`2024`, `2025`, etc.) or `all` (default)
    - `--question_ids`: Comma-separated list of specific question IDs to evaluate (e.g., "N24T3001,JA24P1M01")

6.  **Check Results:**
    *   Results for each model run will be saved in timestamped subdirectories within the `results/` folder.
    *   Each run's folder (e.g., `results/google_gemini-2.5-pro-preview-03-25_NEET_2024_20250524_141230/`) contains:
        *   **`predictions.jsonl`**: Detailed results for each question including:
            - Model predictions and ground truth
            - Raw LLM responses
            - Evaluation status and marks awarded
            - API call success/failure information
        *   **`summary.json`**: Overall scores and statistics in JSON format
        *   **`summary.md`**: Human-readable Markdown summary with:
            - Overall exam scores
            - Section-wise breakdown (by subject)
            - Detailed statistics on correct/incorrect/skipped questions

## Scoring System

The benchmark implements authentic scoring systems for each exam type:

### NEET Scoring
- **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped

### JEE Main Scoring  
- **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped
- **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped

### JEE Advanced Scoring
- **Single Correct MCQ**: +3 for correct, -1 for incorrect, 0 for skipped
- **Multiple Correct MCQ**: Complex partial marking system:
  - +4 for all correct options selected
  - +3 for 3 out of 4 correct options (when 4 are correct)
  - +2 for 2 out of 3+ correct options
  - +1 for 1 out of 2+ correct options
  - -2 for any incorrect option selected
  - 0 for skipped
- **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped
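The multiple-correct rules above can be sketched as a small scoring function. This is a hypothetical helper written from the rules as stated; the implementation actually used by the benchmark lives in `src/evaluation.py`:

```python
def score_jee_adv_multiple(predicted, correct):
    """Score a JEE Advanced multiple-correct MCQ under the rules above.

    `predicted` and `correct` are sets of option letters, e.g. {"B", "D"}.
    Illustrative only; see src/evaluation.py for the real implementation.
    """
    if not predicted:
        return 0                        # skipped
    if predicted - correct:
        return -2                       # any incorrect option selected
    if predicted == correct:
        return 4                        # all correct options selected
    hits = len(predicted & correct)
    if len(correct) == 4 and hits == 3:
        return 3                        # 3 of 4 correct options
    if len(correct) >= 3 and hits == 2:
        return 2                        # 2 of 3+ correct options
    if len(correct) >= 2 and hits == 1:
        return 1                        # 1 of 2+ correct options
    return 0
```

Note that the "any incorrect option" check comes first, so a response mixing right and wrong options scores -2 rather than earning partial credit.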

## Advanced Features

### Retry Mechanism
- Automatic retry for failed API calls (up to 3 attempts with exponential backoff)
- Separate retry pass for questions that failed initially
- Comprehensive error tracking and reporting
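The retry loop amounts to exponential backoff around the API call. A minimal sketch, assuming a generic `request_fn` callable (the benchmark's actual retry logic lives in `src/llm_interface.py`):

```python
import time

def call_with_retries(request_fn, max_attempts=3, base_delay=1.0):
    """Retry a flaky call with exponential backoff (1s, 2s, 4s, ...).

    Illustrative sketch; src/llm_interface.py is authoritative.
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise                   # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))
```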

### Re-prompting System
- If initial response parsing fails, the system automatically re-prompts the model
- Uses the previous response to ask for properly formatted answers
- Adapts prompts based on question type (MCQ vs Integer)

### Comprehensive Evaluation
- Tracks multiple metrics: correct answers, partial credit, skipped questions, API failures
- Section-wise breakdown by subject
- Detailed logging with color-coded progress indicators

## Dataset Structure

*   **`data/metadata.jsonl`**: Contains metadata for each question image with fields:
    - `image_path`: Path to the question image
    - `question_id`: Unique identifier (e.g., "N24T3001")
    - `exam_name`: Exam type ("NEET", "JEE_MAIN", "JEE_ADVANCED")
    - `exam_year`: Year of the exam (integer)
    - `exam_code`: Paper/session code (e.g., "T3", "P1")
    - `subject`: Subject name (e.g., "Physics", "Chemistry", "Mathematics")
    - `question_type`: Question format ("MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER")
    - `correct_answer`: List of correct answer strings (e.g., ["A"], ["B", "C"], ["42"])

*   **`images/`**: Contains subdirectories for each exam set:
    - `images/NEET_2024_T3/`: NEET 2024 question images
    - `images/NEET_2025_45/`: NEET 2025 question images
    - `images/JEE_ADVANCE_2024/`: JEE Advanced 2024 question images
    - `images/JEE_ADVANCE_2025/`: JEE Advanced 2025 question images


*   **`src/`**: Python source code for the benchmark system:
    - `benchmark_runner.py`: Main benchmark execution script
    - `llm_interface.py`: OpenRouter API interface with retry logic
    - `evaluation.py`: Scoring and evaluation functions
    - `prompts.py`: LLM prompts for different question types
    - `utils.py`: Utility functions for parsing and configuration

*   **`configs/`**: Configuration files:
    - `benchmark_config.yaml`: Model selection and API parameters

*   **`results/`**: Directory where benchmark results are stored (timestamped subdirectories)

*   **`jee-neet-benchmark.py`**: Hugging Face `datasets` loading script
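Putting the metadata fields together, a single line of `data/metadata.jsonl` would look like the following (the field values are illustrative of the schema, not copied from the real file):

```python
import json

# One illustrative metadata.jsonl record with the fields listed above.
line = (
    '{"image_path": "images/NEET_2024_T3/N24T3001.png", '
    '"question_id": "N24T3001", "exam_name": "NEET", "exam_year": 2024, '
    '"exam_code": "T3", "subject": "Physics", '
    '"question_type": "MCQ_SINGLE_CORRECT", "correct_answer": ["A"]}'
)
record = json.loads(line)
print(record["question_id"], record["subject"], record["correct_answer"])
```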

## Data Fields

The dataset contains the following fields (accessible via `datasets`):

*   `image`: The question image (`datasets.Image`)
*   `question_id`: Unique identifier for the question (string)
*   `exam_name`: Name of the exam (e.g., "NEET", "JEE_ADVANCED") (string)
*   `exam_year`: Year of the exam (int)
*   `exam_code`: Paper/session code (e.g., "T3", "P1") (string)
*   `subject`: Subject (e.g., "Physics", "Chemistry", "Mathematics") (string)
*   `question_type`: Type of question (e.g., "MCQ_SINGLE_CORRECT", "INTEGER") (string)
*   `correct_answer`: List containing the correct answer strings.
    - For MCQs, these are option identifiers (e.g., `["1"]`, `["A"]`, `["B", "C"]`). The LLM should output the identifier as it appears in the question.
    - For INTEGER type, this is the numerical answer as a string (e.g., `["42"]`, `["12.75"]`). The LLM should output the number.
    - For some `MCQ_SINGLE_CORRECT` questions, multiple answers in this list are considered correct if the LLM prediction matches any one of them.
    (list of strings)

## LLM Answer Format

The LLM is expected to return its answer enclosed in `<answer>` tags. For example:
- MCQ Single Correct (Option A): `<answer>A</answer>`
- MCQ Single Correct (Option 2): `<answer>2</answer>`
- MCQ Multiple Correct (Options B and D): `<answer>B,D</answer>`
- Integer Answer: `<answer>42</answer>`
- Decimal Answer: `<answer>12.75</answer>`
- Skipped Question: `<answer>SKIP</answer>`

The system parses these formats automatically, and the prompts are designed to guide the LLM toward them.
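The tag format above can be extracted with a small regex helper. This is an illustrative sketch of the parsing step (the repository's own parsing utilities live in `src/utils.py`):

```python
import re

def parse_answer(raw_response):
    """Extract the model's answer from an <answer>...</answer> tag.

    Returns a list of option/number strings, [] for SKIP, or None when
    no tag is found (which would trigger the re-prompting pass).
    Illustrative only; see src/utils.py for the parser actually used.
    """
    match = re.search(r"<answer>\s*(.*?)\s*</answer>", raw_response, re.DOTALL)
    if match is None:
        return None
    content = match.group(1).strip()
    if content.upper() == "SKIP":
        return []
    return [part.strip() for part in content.split(",") if part.strip()]
```

Returning a sentinel (`None`) for a missing tag, rather than raising, lets the caller distinguish "unparseable response" from "deliberate skip" and route only the former to re-prompting.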

## Troubleshooting

### Common Issues

**API Key Issues:**
- Ensure your `.env` file is in the root directory
- Verify your OpenRouter API key is valid and has sufficient credits
- Check that the key has access to vision-capable models

**Model Not Found:**
- Verify the model identifier exists on OpenRouter
- Ensure the model supports vision input
- Check your OpenRouter account has access to the specific model

**Memory Issues:**
- Reduce `max_tokens` in the config file
- Process smaller subsets using `--question_ids` filter
- Use models with smaller context windows

**Parsing Failures:**
- The system automatically attempts re-prompting for parsing failures
- Check the raw responses in `predictions.jsonl` to debug prompt issues
- Consider adjusting prompts in `src/prompts.py` for specific models

## Current Limitations

*   **Dataset Size:** While comprehensive, the dataset could benefit from more JEE Main questions and additional years
*   **Language Support:** Currently only supports English questions
*   **Model Dependencies:** Requires models with vision capabilities available through OpenRouter

## Citation

If you use this dataset or benchmark code, please cite:

```bibtex
@misc{rejaullah_2025_jeeneetbenchmark,
  title={JEE/NEET LLM Benchmark},
  author={Md Rejaullah},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/Reja1/jee-neet-benchmark}},
}
```

## Contact

For questions, suggestions, or collaboration, feel free to reach out:

*   **X (Twitter):** [https://x.com/RejaullahmdMd](https://x.com/RejaullahmdMd)

## License

This dataset and associated code are licensed under the [MIT License](https://opensource.org/licenses/MIT).