---
license: mit
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: task_id
      dtype: string
    - name: scenario_id
      dtype: string
    - name: env_id
      dtype: string
    - name: api_specification
      dtype: string
    - name: text_specification
      dtype: string
    - name: short_app_description
      dtype: string
    - name: scenario_instructions
      dtype: string
    - name: needs_db
      dtype: bool
    - name: needs_secret
      dtype: bool
    - name: needed_packages
      struct:
        - name: JavaScript
          sequence: string
        - name: _all_
          sequence: string
    - name: potential_cwes
      sequence: int64
    - name: env_language
      dtype: string
    - name: env_extension
      dtype: string
    - name: env_framework
      dtype: string
    - name: env_multifile
      dtype: bool
    - name: code_filename
      dtype: string
    - name: entrypoint_cmd
      dtype: string
    - name: allowed_packages
      dtype: string
    - name: env_instructions
      dtype: string
    - name: port
      dtype: int64
  splits:
    - name: test
      num_bytes: 1830262
      num_examples: 392
  download_size: 70540
  dataset_size: 1830262
task_categories:
  - text-generation
tags:
  - code
  - security
  - benchmark
size_categories:
  - n<1K
---

## Dataset Summary

BaxBench is a coding benchmark that measures the ability of code generation models and agents to produce correct and secure code. It consists of 392 backend development tasks, formed by combining 28 scenarios that describe the backend functionality to implement with 14 backend frameworks that define the implementation tools. To assess the correctness and security of solutions, the benchmark uses end-to-end functional tests and practical security exploits.
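
For quick inspection, the test split can be loaded with the Hugging Face `datasets` library. A minimal sketch; the repository identifier below is an assumption, so substitute the id shown on this dataset page.

```python
from datasets import load_dataset

# Load the single "test" split (392 backend tasks).
# NOTE: the repository id is assumed for illustration; use this dataset page's id.
ds = load_dataset("mveroe/BaxBench", split="test")

# Each row pairs one scenario with one framework environment.
example = ds[0]
print(example["task_id"], example["env_language"], example["env_framework"])
print(example["short_app_description"])
```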

The dataset is released as part of the paper and benchmark *BaxBench: Can LLMs Generate Correct and Secure Backends?*

The dataset contains all artifacts necessary to reproduce the evaluation prompts used in our paper. It also enables testing different models or prompt structures by forming new prompt types, e.g., prompts for code agents.
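
As a rough illustration of forming a new prompt from the dataset fields (a sketch only, not the exact prompt template used in the paper; see the code repository for the templates we evaluated):

```python
def build_prompt(task: dict) -> str:
    """Assemble a simple generation prompt from one BaxBench task.

    Minimal sketch: field names come from the dataset schema above,
    but the composition is illustrative, not the paper's template.
    """
    parts = [
        f"Generate a {task['env_language']} backend using {task['env_framework']}.",
        task["scenario_instructions"],
        "API specification:",
        task["api_specification"] or task["text_specification"],
        task["env_instructions"],
        f"Allowed packages: {task['allowed_packages']}",
        f"The server must listen on port {task['port']} and the solution "
        f"must be written to {task['code_filename']}.",
    ]
    return "\n\n".join(p for p in parts if p)

prompt = build_prompt(ds[0])  # `ds` loaded as in the snippet above
```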

For details on reproducing our results, or testing your models on the same prompts, please refer to our paper or code repository.

To test your generated solutions, please follow the instructions in our code repository.

For more details on the construction of BaxBench, large-scale model evaluation results, and detailed analyses, please see our paper or visit our website.

## Citation

BibTeX:

```bibtex
@article{vero2025baxbenchllmsgeneratecorrect,
    title={BaxBench: Can LLMs Generate Correct and Secure Backends?},
    author={Mark Vero and Niels Mündler and Victor Chibotaru and Veselin Raychev and Maximilian Baader and Nikola Jovanović and Jingxuan He and Martin Vechev},
    year={2025},
    eprint={2502.11844},
    archivePrefix={arXiv},
}
```