---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
  - llama-factory
  - generated_from_trainer
model-index:
  - name: WritingBench-Writing-Model-7b
    results: []
---

# Writing-Model-Qwen-7b

📃 [Paper](https://arxiv.org/abs/2503.05244) • 🚀 [GitHub Repo] • 📝 [Critic Model] • ✍️ [Writing Model]

This model is fine-tuned from [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on a 12K-example SFT dataset for writing evaluation tasks.
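A minimal inference sketch with 🤗 Transformers is shown below. The repo id is an assumption derived from the model name in this card, and the prompt is illustrative; the model inherits Qwen2.5's chat template from its base model.

```python
# Minimal inference sketch; the repo id below is assumed from the model name
# in this card, not a confirmed path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AQuarterMile/WritingBench-Writing-Model-7b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt (the base model uses Qwen2.5's chat template).
messages = [{"role": "user", "content": "Write a short announcement for a community writing contest."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```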

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reconstruction sketch follows the list):

- learning_rate: 7e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
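As a sanity check on the reported numbers: the effective train batch size is 1 per device × 32 GPUs × 4 accumulation steps = 128, and the effective eval batch size is 8 × 32 = 256. Below is a hedged sketch of the same setup expressed with Hugging Face `TrainingArguments`; the `output_dir` is hypothetical, and the actual run used LLaMA-Factory, so this is illustrative rather than the exact training script.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported hyperparameters.
# The run itself used LLaMA-Factory; output_dir is a hypothetical path.
args = TrainingArguments(
    output_dir="writing-model-qwen-7b",
    learning_rate=7e-6,
    per_device_train_batch_size=1,   # x 32 GPUs x 4 accumulation = 128 effective
    per_device_eval_batch_size=8,    # x 32 GPUs = 256 effective
    gradient_accumulation_steps=4,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",             # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    seed=42,
)
```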

### Framework versions

- Transformers 4.46.1
- PyTorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3

πŸ“ Citation

@misc{wu2025writingbench,
      title={WritingBench: A Comprehensive Benchmark for Generative Writing}, 
      author={Yuning Wu and Jiahao Mei and Ming Yan and Chenliang Li and Shaopeng Lai and Yuran Ren and Zijia Wang and Ji Zhang and Mengyue Wu and Qin Jin and Fei Huang},
      year={2025},
      url={https://arxiv.org/abs/2503.05244}, 
}