Models added here will be automatically evaluated on the 🤗 cluster. Don’t forget to read the FAQ and the About documentation pages for more information!
Make sure you can load your model and tokenizer using AutoClasses:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

revision = "main"  # or the specific branch/commit you want evaluated

config = AutoConfig.from_pretrained("your model name", revision=revision)
model = AutoModel.from_pretrained("your model name", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
```
If this step fails, follow the error messages to debug your model before submitting it. It’s likely your model has been improperly uploaded.
Notes:
- If your model needs `use_remote_code=True`, we do not support this option yet, but we are working on adding it. Stay posted!
- When we add extra information about models to the leaderboard, it will be automatically taken from the model card (see the sketch after this list for a quick way to check your card's metadata).
- Not all models are converted properly from `float16` to `bfloat16`, and selecting the wrong precision can sometimes cause evaluation errors (loading a `bf16` model in `fp16` can generate NaNs, depending on the weight range). A quick way to check for this is sketched after this list.
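As a rough illustration of the model card note above (the model ID is a placeholder), you can inspect the metadata on your card with `huggingface_hub`:

```python
from huggingface_hub import ModelCard

MODEL_ID = "your-org/your-model"  # placeholder: replace with your model's Hub ID

# Load the card straight from the Hub and print its YAML metadata
# (license, language, tags, ...), which is where extra model
# information is typically read from.
card = ModelCard.load(MODEL_ID)
print(card.data.to_dict())
```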
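And as a minimal sketch of the precision check (again with a placeholder model ID, and assuming you have enough memory to load the weights locally), you can load the model in each precision and look for non-finite values:

```python
import torch
from transformers import AutoModel

MODEL_ID = "your-org/your-model"  # placeholder: replace with your model's Hub ID

# Load the weights in each precision and flag NaNs/Infs, which a
# bf16 -> fp16 cast can produce when weights fall outside the fp16 range.
for dtype in (torch.bfloat16, torch.float16):
    model = AutoModel.from_pretrained(MODEL_ID, torch_dtype=dtype)
    has_bad_values = any(not torch.isfinite(p).all() for p in model.parameters())
    print(f"{dtype}: {'non-finite weights found' if has_bad_values else 'weights look fine'}")
```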
When submitting a model, you can choose whether to evaluate it using a chat template. The chat template toggle activates automatically for chat models.
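If you want to preview what a chat template does to your prompts, a minimal sketch (with a placeholder model ID, assuming the tokenizer defines a chat template) looks like this:

```python
from transformers import AutoTokenizer

MODEL_ID = "your-org/your-chat-model"  # placeholder: replace with your model's Hub ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

messages = [
    {"role": "user", "content": "What is the capital of France?"},
]

# Render the conversation with the model's chat template instead of
# passing the raw text to the model.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```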