Academic Criterion: 2024
This is an LLM benchmark dataset that uses a different format from other benchmarks: instead of a single definitive answer, each question has a rubric, allowing for a wide array of valid answers. Each answer the LLM gives can be graded as a percentage rather than with a PASS/FAIL system, giving a more granular evaluation of the model's capabilities and drawbacks.
Components
When we set up the "Mark Scheme" for the AI Judge to read, we provide 3 things:
Question
Rubric (breaks the marks for a question into separate parts so partial percentage credit can be awarded, instead of a Pass/Fail system)
Answer with Reasoning (From the LLM)
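As an illustration, a single benchmark record could look like the sketch below. The field names (`question`, `rubric`, `answer_with_reasoning`) and the mark breakdown are assumptions for illustration only, not the dataset's actual schema.

```python
# Hypothetical record layout for one benchmark item (field names are assumptions,
# not the dataset's actual schema).
record = {
    "question": "Explain why quicksort's average-case complexity is O(n log n).",
    "rubric": [
        # Each criterion carries a share of the total 100%.
        {"criterion": "Identifies the recurrence for the average case", "weight": 40},
        {"criterion": "Explains the expected partition balance", "weight": 30},
        {"criterion": "States the final O(n log n) result correctly", "weight": 30},
    ],
    "answer_with_reasoning": "<the model's answer, including its step-by-step reasoning>",
}

# The AI Judge reads the question and rubric, then awards percentage marks per criterion.
assert sum(part["weight"] for part in record["rubric"]) == 100
```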
Tips for running benchmarks
- It is recommended that you average multiple runs of the LLM over the benchmark (at least 3 repeated runs) to ensure a consistent evaluation.
- It is recommended that the AI Judge grades each answer twice and the results are averaged, reflecting the real-world case where a secondary marker is used to evaluate the exam (see the scoring sketch after this list).
- Using a reasoning model like o1-mini or o1-preview for judging is recommended, as multi-step reasoning is highly beneficial when evaluating the test taker against an academic-style rubric.
- The LLM Judge should write comments about the answer to each question, not only giving a % grade but also explaining its reasoning and giving feedback for improvement. At the very end of the benchmark, the Judge should summarise its comments on all the questions judged and provide an overall assessment of the model's capabilities, so that model authors can take the Judge's reasoning into account.
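A minimal sketch of how the run and judge averaging could be wired together is shown below. The helpers `run_benchmark` and `judge_answer` are hypothetical placeholders for whichever model and judge APIs you use; only the averaging logic reflects the tips above.

```python
from statistics import mean

# Hypothetical placeholders: swap in your own model/judge calls.
def run_benchmark(model, questions):
    """Return the model's answer (with reasoning) for each question."""
    return [model(q["question"]) for q in questions]

def judge_answer(judge, question, rubric, answer):
    """Return a 0-100 score from the AI Judge for one answer."""
    return judge(question, rubric, answer)

def evaluate(model, judge, questions, runs=3, judge_passes=2):
    run_scores = []
    for _ in range(runs):                       # at least 3 repeated runs of the model
        answers = run_benchmark(model, questions)
        per_question = []
        for q, a in zip(questions, answers):
            # Judge each answer twice and average, like a secondary marker.
            passes = [judge_answer(judge, q["question"], q["rubric"], a)
                      for _ in range(judge_passes)]
            per_question.append(mean(passes))
        run_scores.append(mean(per_question))   # mean % score for this run
    return mean(run_scores)                     # overall % score across all runs
```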
Reasoning behind this benchmark
The reason for this benchmark is to provide a more granular approach to benchmarking. While a total % score is given at the end, that score is not the main focus; the purpose is to inform model authors of where they can improve in reasoning ability.
WIP
Although a high degree of effort has been put into making unique questions with robust rubrics (Mark Schemes), this author is not a PhD or education professional but a Computer Science graduate, and this benchmark was constructed with knowledge of how students are evaluated in the university system. It may not be as effective as Elo benchmarks, which use crowdsourced human-preference evaluation, but it may be a good starting point for model authors and evaluators before further training or Elo evaluation of the model.
The main advantage of this benchmark is that it provides granular insight instead of a PASS/FAIL for each question, establishing the specific strengths and weaknesses of the model.