---
tags:
  - code
  - evaluation
  - code llm
size_categories:
  - n<1K
---

## Abstract

Driven by the surge in code generation using large language models (LLMs), numerous benchmarks have emerged to evaluate these LLMs' capabilities. We conducted a large-scale human evaluation of HumanEval and MBPP, two popular benchmarks for Python code generation, analyzing their diversity and difficulty. Our findings reveal a critical bias towards a limited set of programming concepts, with most other concepts neglected entirely. Furthermore, we uncover a worrying prevalence of easy tasks that can inflate model performance estimates. To address these limitations, we propose a novel benchmark, PythonSaga, featuring 185 hand-crafted prompts with a balanced representation of 38 programming concepts across diverse difficulty levels. The robustness of our benchmark is demonstrated by the poor performance of existing Code-LLMs. The code and dataset are openly available to the NLP community at https://github.com/PythonSaga/PythonSaga.


## PythonSaga

This dataset follows the rules and the diverse prompt templates suggested in the paper "PythonSaga: Redefining the Benchmark to Evaluate Code Generating LLM". The goal is to make benchmarks better at assessing code-generating language models (LLMs).
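
As a rough sketch of how the benchmark might be consumed, the snippet below loads the prompts with the Hugging Face `datasets` library. The repository id and split name are assumptions, not confirmed by this README; the official code and data also live at https://github.com/PythonSaga/PythonSaga.

```python
from datasets import load_dataset

# Hypothetical repository id and split -- verify the exact path on the Hub.
ds = load_dataset("temporary0-0name/PythonSaga", split="train")

# Each record is expected to hold one hand-crafted prompt;
# field names may differ from this sketch.
print(len(ds))  # 185 prompts
print(ds[0])
```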
| Model | Size | Pass@1 | Pass@10 |
|---|---|---|---|
| StarCoderBase | 7B | 0.0029 | 0.0149 |
| StarCoder2 | 7B | 0.0024 | 0.0217 |
| Code Llama | 7B | 0.0067 | 0.0472 |
| CodeQwen1.5-Chat | 7B | 0.0059 | 0.0497 |
| Nxcode-CQ-orpo | 7B | 0.0058 | 0.0523 |
| Mistral-Instruct-v0.1 | 7B | 0.0140 | 0.0552 |
| Code Llama Instruct | 7B | 0.0178 | 0.0744 |
| Deepseek Coder Instruct | 6.7B | 0.0137 | 0.0889 |
| Code Llama Python | 7B | 0.0240 | 0.0979 |
| Llama 3 | 8B | 0.0370 | 0.1125 |
| Phi-2 | 2.7B | 0.0302 | 0.1187 |
| OpenCodeInterpreter-DS | 6.7B | 0.0259 | 0.1206 |
| Deepseek Coder | 6.7B | 0.0343 | 0.1415 |
| Code Llama Python | 13B | 0.0405 | 0.1514 |
| GPT-3.5 | NA | 0.0724 | 0.2384 |
| GPT-4 | NA | 0.1243 | 0.3311 |

Comparison between open- and closed-source models on PythonSaga. We draw n = 20 samples per problem for both open- and closed-source models.
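
For reference, Pass@k scores of this kind are typically computed with the unbiased estimator of Chen et al. (2021): given n generated samples per problem, of which c pass the unit tests, pass@k = 1 − C(n−c, k)/C(n, k), averaged over all problems. The sketch below shows that estimator; it is a standard formulation, not necessarily the exact harness used for the table above.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: samples generated per problem (20 in the table above)
    c: samples among the n that pass all unit tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0  # every size-k draw contains at least one passing sample
    # 1 - C(n-c, k) / C(n, k), evaluated as a numerically stable product
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Example: with n = 20 samples and exactly one passing, pass@10 = 0.5
print(pass_at_k(20, 1, 10))  # 0.5
```

The benchmark-level score is the mean of this per-problem estimate over all 185 prompts.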