How to submit models on the Open LLM Leaderboard

Models added here will be automatically evaluated on the 🤗 cluster. Don’t forget to read the FAQ and the About documentation pages for more information!

First steps before submitting a model

1. Ensure Model and Tokenizer Loading:

Make sure you can load your model and tokenizer using AutoClasses:

from transformers import AutoConfig, AutoModel, AutoTokenizer

model_name = "your model name"  # your model's Hub repository id
revision = "main"               # branch, tag, or commit hash you want evaluated
config = AutoConfig.from_pretrained(model_name, revision=revision)
model = AutoModel.from_pretrained(model_name, revision=revision)
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision)

If this step fails, follow the error messages to debug your model before submitting it. It’s likely your model has been improperly uploaded.

2. Fill Out Your Model Card:

When we add extra information about models to the leaderboard, it will be automatically taken from the model card.
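
As a quick sanity check, you can load your card's metadata with huggingface_hub and confirm that fields such as the license are filled in. This is only a sketch; the repository id below is a placeholder:

from huggingface_hub import ModelCard

card = ModelCard.load("your model name")  # placeholder repo id, e.g. "org/model"
print(card.data.license)    # a metadata field the leaderboard can pick up
print(card.data.to_dict())  # full YAML metadata block of your README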

3. Select the Correct Precision:

Not all models are converted properly from float16 to bfloat16, and selecting the wrong precision can cause evaluation errors: loading bf16 weights in fp16, for example, can produce NaNs, depending on the weight range.
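
As an illustration, here is a minimal sketch of loading a checkpoint with an explicit dtype (the repository id is a placeholder); matching the dtype the weights were saved in avoids the NaN issue described above:

import torch
from transformers import AutoModelForCausalLM

# Load the weights in the precision they were saved in; casting bf16 weights
# to fp16 can overflow to NaN because fp16 has a much smaller dynamic range.
model = AutoModelForCausalLM.from_pretrained(
    "your model name",           # placeholder repo id
    torch_dtype=torch.bfloat16,  # match the dtype reported in the model's config
)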

4. Chat Template Toggle:

When submitting a model, you can choose whether to evaluate it using a chat template. The chat template toggle activates automatically for chat models.
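
If you are unsure whether your model ships a chat template, a quick check with the tokenizer is sketched below (the repository id is a placeholder):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your model name")  # placeholder repo id
if tokenizer.chat_template is not None:
    # Renders a prompt the way a chat-template evaluation would format it.
    print(tokenizer.apply_chat_template(
        [{"role": "user", "content": "Hello!"}],
        tokenize=False,
        add_generation_prompt=True,
    ))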

Model Types
