---
datasets:
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
---

# Zephyr-7B-DICE-Iter2

This model was developed with [Bootstrapping Language Models with DPO Implicit Rewards](https://arxiv.org/abs/2406.09760) (DICE) at iteration 2, starting from [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta).

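In brief (see the linked paper), DICE exploits the reward that DPO training implicitly defines: the fine-tuned policy $\pi_\theta$ and its reference model $\pi_{\mathrm{ref}}$ induce the reward

$$
r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
$$

which is used to score the model's own responses and construct preference pairs for the next round of DPO training.
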
## Links to Other Models

- [Zephyr-7B-DICE-Iter1](https://huggingface.co/sail/Zephyr-7B-DICE-Iter1)
- [Zephyr-7B-DICE-Iter2](https://huggingface.co/sail/Zephyr-7B-DICE-Iter2)

## Model Description

- Model type: A 7B-parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Fine-tuned from model: [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)

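## Usage

This card specifies `library_name: transformers` and `pipeline_tag: text-generation`, so the model can be loaded with the standard `transformers` pipeline. The snippet below is a minimal sketch rather than an official example from the authors: it assumes the model keeps zephyr-7b-beta's chat template, and the sampling parameters are illustrative.

```python
# Minimal sketch: load the model with the transformers text-generation pipeline.
# Assumes the chat template inherited from zephyr-7b-beta; sampling values are illustrative.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="sail/Zephyr-7B-DICE-Iter2",
    torch_dtype=torch.bfloat16,  # half-precision weights for a 7B model
    device_map="auto",
)

# Zephyr-style models are chat-tuned: format the prompt with the chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO implicit rewards in two sentences."},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```

Because generation goes through the chat template, the same messages-list pattern extends to multi-turn conversations.
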
## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/)

| Model | LC (Length-Controlled) Win Rate | Win Rate |
|-------|:-------------------------------:|:--------:|
| [Zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 12.69 | 10.71 |
| [Zephyr-7B-DICE-Iter1](https://huggingface.co/sail/Zephyr-7B-DICE-Iter1) | 19.03 | 17.67 |
| [Zephyr-7B-DICE-Iter2](https://huggingface.co/sail/Zephyr-7B-DICE-Iter2) | **20.71** | **20.16** |

## Code

https://github.com/sail-sg/dice

## Citation

```bibtex
@article{chen2024bootstrapping,
  title={Bootstrapping Language Models with DPO Implicit Rewards},
  author={Chen, Changyu and Liu, Zichen and Du, Chao and Pang, Tianyu and Liu, Qian and Sinha, Arunesh and Varakantham, Pradeep and Lin, Min},
  journal={arXiv preprint arXiv:2406.09760},
  year={2024}
}
```