Modalities: Text · Formats: parquet · Size: < 1K · Libraries: Datasets, pandas
mveroe committed · verified · Commit c968515 · Parent(s): b351633

Update README.md

Files changed (1)
  1. README.md +27 -1
README.md CHANGED
@@ -65,4 +65,30 @@ tags:
  - benchmark
 size_categories:
  - n<1K
- ---
+ ---
+
+ ### Dataset Summary
+ BaxBench is a coding benchmark constructed to measure the ability of code generation models and agents to generate correct and secure code. It consists of 392 backend development tasks, constructed by combining 28 scenarios that describe the backend functionality to implement with 14 backend frameworks defining the implementation tools. To assess the correctness and security of the solutions, the benchmark uses end-to-end functional tests and practical security exploits.
+
+ The dataset is released as part of the paper and benchmark: [BaxBench: Can LLMs Generate Correct and Secure Backends?](https://arxiv.org/abs/2502.11844).
+
+ The dataset contains all artifacts necessary to reproduce the evaluation prompts used in our paper. Further, it enables testing different prompt structures or models by forming new prompt types, e.g., for testing code agents.
+
+ For details on reproducing our results, or on testing your models with the same prompts, please refer to our [paper](https://arxiv.org/abs/2502.11844) or [code repository](https://github.com/logic-star-ai/baxbench).
+
+ To test your generated solutions, please follow the instructions in our [code repository](https://github.com/logic-star-ai/baxbench).
+
+ For more details on the construction of BaxBench, large-scale model evaluation results, and detailed analyses, please see our [paper](https://arxiv.org/abs/2502.11844) or visit our [website](https://baxbench.com).
+
+ ### Citation
+
+ **BibTeX:**
+ ```
+ @article{vero2025baxbenchllmsgeneratecorrect,
+   title={BaxBench: Can LLMs Generate Correct and Secure Backends?},
+   author={Mark Vero and Niels Mündler and Victor Chibotaru and Veselin Raychev and Maximilian Baader and Nikola Jovanović and Jingxuan He and Martin Vechev},
+   year={2025},
+   eprint={2502.11844},
+   archivePrefix={arXiv},
+ }
+ ```
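The summary above states that the 392 tasks arise from crossing 28 scenarios with 14 backend frameworks. A minimal sketch of that task-construction scheme, using placeholder identifiers (the real scenario and framework names live in the dataset and code repository; only the counts come from the summary):

```python
from itertools import product

# Placeholder identifiers -- stand-ins for the actual BaxBench scenario and
# framework names; only the counts (28 and 14) are taken from the summary.
scenarios = [f"scenario_{i:02d}" for i in range(28)]
frameworks = [f"framework_{j:02d}" for j in range(14)]

# Each task pairs one scenario (what to implement) with one framework
# (which tools to implement it with).
tasks = [(scenario, framework) for scenario, framework in product(scenarios, frameworks)]

print(len(tasks))  # 28 * 14 = 392
```

Enumerating tasks as the full cross product is what lets the benchmark compare how the same functional and security requirements fare across different implementation stacks.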