Model Card for llama3_synthquestions_dpo_100k

This is the model released with the paper "From Real to Synthetic: Synthesizing Millions of Diversified and Complicated User Instructions with Attributed Grounding".

Model Details

Model Description

  • Model type: Chat Model
  • Language(s) (NLP): English
  • License: CC-BY-4.0
  • Finetuned from model: IgnoraZ/llama3_synthquestions_1m
  • Finetuned with data: Preference dataset from IgnoraZ/SynthQuestions

For more details, such as training hyper-parameters, please refer to our paper.

Model Sources

  • Paper: https://arxiv.org/abs/2506.03968
  • Data: IgnoraZ/SynthQuestions

How to Get Started with the Model

The model is provided in Hugging Face (HF) format and can be deployed with common inference frameworks such as Transformers, vLLM, and SGLang.

We fine-tuned the model with a custom chat template instead of the default LLaMA template. Please make sure to use the chat template provided in tokenizer_config.json when running inference.
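
Below is a minimal sketch of loading the model with Transformers and formatting a prompt through the tokenizer's built-in chat template. The example prompt and generation settings (max_new_tokens, dtype, device placement) are illustrative assumptions, not values recommended by the authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IgnoraZ/llama3_synthquestions_dpo_100k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain attributed grounding in one paragraph."}  # example prompt (assumption)
]

# apply_chat_template picks up the custom template from tokenizer_config.json,
# so the prompt is formatted the same way as during fine-tuning.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```

The same caveat applies when serving with vLLM or SGLang: pass prompts through the tokenizer's chat template rather than hand-building LLaMA-style prompts.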

Evaluation

Alignment Benchmarks

| Model          | Arena Hard (WR%) | Alpaca Eval 2.0 (LC) |
| -------------- | ---------------- | -------------------- |
| SynthQuestions | 24.8             | 30.16                |

Citation

@misc{zhu2025realsyntheticsynthesizingmillions,
      title={From Real to Synthetic: Synthesizing Millions of Diversified and Complicated User Instructions with Attributed Grounding}, 
      author={Chiwei Zhu and Benfeng Xu and Xiaorui Wang and Zhendong Mao},
      year={2025},
      eprint={2506.03968},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.03968}, 
}

Model Card Contact

Please contact [email protected].
