ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development
Paper: https://arxiv.org/abs/2601.11077
Qwen3-8B-ABC is a supervised fine-tuned (SFT) variant of Qwen/Qwen3-8B, trained for agentic backend coding, tool use, and instruction following. It was fine-tuned on the nex-agi/agent-sft dataset.
Please refer to the dataset card for detailed documentation, licensing, and usage constraints.
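For reference, the SFT data can be inspected with the Hugging Face `datasets` library. A minimal sketch; the split name is an assumption, so check the dataset card for the actual splits, schema, and access constraints:

```python
from datasets import load_dataset

# Split name is an assumption; see the dataset card for the
# actual splits, schema, and licensing.
ds = load_dataset("nex-agi/agent-sft", split="train")
print(ds)     # dataset size and column names
print(ds[0])  # first training example
```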
Following the ABC-Bench paper’s evaluation protocol:
| Model | Setting | Average Pass@1 (%, 3 attempts) |
|---|---|---|
| Qwen3-8B-ABC | w/ SFT | 13.9 |
| Qwen3-8B | w/o SFT | 8.3 |
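The table reports pass@1 averaged over 3 attempts per task. A minimal sketch of how such a score can be computed, assuming per-task pass@1 is estimated as the fraction of attempts that pass the task's tests; the function and data layout below are illustrative, not the paper's evaluation harness:

```python
from statistics import mean

def average_pass_at_1(results: dict[str, list[bool]]) -> float:
    """Average pass@1 (%) over tasks, estimated from k attempts per task.

    `results` maps a task id to one boolean per attempt (True = that
    attempt's solution passed the task's tests). Per-task pass@1 is
    the fraction of attempts that pass, averaged over all tasks.
    """
    return 100.0 * mean(sum(a) / len(a) for a in results.values())

# Toy example with 3 attempts per task:
scores = {
    "task-001": [True, False, False],   # 1/3 of attempts pass
    "task-002": [False, False, False],  # 0/3 pass
    "task-003": [True, True, False],    # 2/3 pass
}
print(f"{average_pass_at_1(scores):.1f}%")  # 33.3%
```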
Qwen3-8B-ABC is intended for agentic backend coding tasks, tool use, and instruction following. Example usage with Transformers:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "OpenMOSS-Team/Qwen3-8B-ABC"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Write a FastAPI endpoint that returns health status as JSON."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
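Since the base Qwen3-8B is a chat model, prompts are usually wrapped in the tokenizer's chat template rather than passed as raw text. A minimal sketch reusing the `model` and `tokenizer` loaded above, with the same illustrative sampling parameters:

```python
# Reuses `model` and `tokenizer` from the snippet above.
messages = [
    {"role": "user",
     "content": "Write a FastAPI endpoint that returns health status as JSON."},
]

# Build the templated prompt, ending with the assistant turn marker.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```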
If you use this model or the benchmark, please cite the ABC-Bench paper:

```bibtex
@misc{yang2026abcbenchbenchmarkingagenticbackend,
  title={ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development},
  author={Jie Yang and Honglin Guo and Li Ji and Jiazheng Zhou and Rui Zheng and Zhikai Lei and Shuo Zhang and Zhiheng Xi and Shichun Liu and Yuxin Wang and Bo Wang and Yining Zheng and Tao Gui and Xipeng Qiu},
  year={2026},
  eprint={2601.11077},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2601.11077},
}
```