WritingBench-Critic-Model-Qwen-32B-thinking

📃 [Paper] • 🚀 [Github Repo] • 📝 [Critic Model] • ✍️ [Writer-7B] [Writer-32B]

This model is fine-tuned from Qwen/Qwen2.5-32B-Instruct on a 12K-example SFT dataset for writing-evaluation (critic) tasks.
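
Below is a minimal inference sketch using the Transformers library. The repository id and the evaluation prompt are assumptions for illustration; the exact critic prompt template is defined in the WritingBench GitHub repo.

```python
# Minimal usage sketch. Assumptions: the repo id below and a generic
# chat-style evaluation prompt; adapt both to the WritingBench repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AQuarterMile/WritingBench-Critic-Model-Qwen-32B-thinking"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

# Hypothetical critic prompt: score a piece of writing against a criterion.
messages = [
    {
        "role": "user",
        "content": "Evaluate the following essay for clarity on a 1-10 "
                   "scale and explain your score:\n\n<essay text>",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```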

πŸ“ Citation

@misc{wu2025writingbench,
      title={WritingBench: A Comprehensive Benchmark for Generative Writing}, 
      author={Yuning Wu and Jiahao Mei and Ming Yan and Chenliang Li and Shaopeng Lai and Yuran Ren and Zijia Wang and Ji Zhang and Mengyue Wu and Qin Jin and Fei Huang},
      year={2025},
      url={https://arxiv.org/abs/2503.05244}, 
}
Safetensors · Model size: 32.8B params · Tensor type: BF16
Model tree for AQuarterMile/Writing-Model-Qwen-32B-thinking: base model Qwen/Qwen2.5-32B → finetuned → this model · Quantizations: 2 models