## Model description
This is a Vicuna-like model with only 68M parameters, fine-tuned from LLaMA-68m on ShareGPT data. The training setup follows that of the Vicuna suite.

The model was developed mainly as a base small speculative (draft) model for the MCSD paper. Compared with LLaMA-68m, it aligns better with the Vicuna models while losing little alignment with the LLaMA models.
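The draft model's role in speculative decoding can be sketched as follows: the small model cheaply proposes a block of tokens, and the large target model verifies them in a single pass, keeping the longest agreeing prefix plus the target's own correction. This is a minimal greedy-verification sketch for illustration only, not the MCSD algorithm itself.

```python
def verify_draft(draft_tokens, target_argmax):
    """Greedy speculative-decoding verification (illustrative sketch).

    draft_tokens:  tokens proposed by the small draft model.
    target_argmax: target_argmax[i] is the target model's greedy choice
                   given the prefix plus draft_tokens[:i].
    Returns the tokens actually emitted this step: the longest prefix on
    which draft and target agree, plus the target's correction at the
    first disagreement.
    """
    accepted = []
    for i, tok in enumerate(draft_tokens):
        if tok == target_argmax[i]:
            accepted.append(tok)          # draft and target agree: keep
        else:
            accepted.append(target_argmax[i])  # target overrides, stop
            break
    return accepted


# Toy run: draft guesses 3 tokens, target disagrees on the last one,
# so two draft tokens are accepted and the target's token is appended.
print(verify_draft([11, 42, 7], [11, 42, 9]))
```

The better the draft model's alignment with the target (the point of the table below), the longer the accepted prefix tends to be, and the fewer target-model passes are needed per generated token.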
| Draft Model | Target Model | Alignment |
|---|---|---|
| LLaMA-68/160M | LLaMA-13/33B | π |
| LLaMA-68/160M | Vicuna-13/33B | π |
| Vicuna-68/160M | LLaMA-13/33B | π |
| Vicuna-68/160M | Vicuna-13/33B | π |