---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- generated_from_trainer
metrics:
- accuracy
language:
- en
datasets:
- BEE-spoke-data/stepbasin-books
---

# Mistral-7B-v0.3-stepbasin-books-20480
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the BEE-spoke-data/stepbasin-books dataset, trained to test super-long text generation.

- Fine-tuned at a context length of 20480; it should consistently generate 8k+ tokens (example). A minimal generation sketch follows the evaluation results below.

It achieves the following results on the evaluation set:
- Loss: 2.0784
- Accuracy: 0.5396
- Num input tokens seen: 16,384,000
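
## Usage sketch

The following is a minimal sketch of loading the model and requesting a long continuation with `transformers`. The repo id below is a placeholder for this model's actual Hub name, and the sampling parameters are illustrative defaults rather than tuned recommendations; it assumes `torch` and `accelerate` are installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with this model's actual Hub name.
model_id = "Mistral-7B-v0.3-stepbasin-books-20480"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so the 7B model fits on a single large GPU
    device_map="auto",
)

prompt = "Chapter 1\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The model was fine-tuned at a 20480-token context, so a budget of 8k+
# new tokens stays inside its trained window.
outputs = model.generate(
    **inputs,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Generating thousands of tokens at bf16 with a long context requires substantial GPU memory; lowering `max_new_tokens` is the simplest way to trade output length for memory.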