So it turns out you can change the dataset and tokenizer and run it through the machine again in the same session. #ThingsIWishIHadKnownEarlier
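A minimal sketch of what that looks like with Unsloth and TRL, assuming the usual notebook-style setup: the base model is loaded and LoRA-wrapped once, and each dataset gets its own SFTTrainer pass over the same model object. The dataset ids, hyperparameters, and exact TRL argument names (which shift between versions) are placeholders, not the actual recipe used for this finetune.

```python
# Sketch only: continued finetuning by swapping datasets in one session.
# Dataset ids and hyperparameters below are hypothetical.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model once, in 4-bit to keep VRAM manageable.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MarinaraSpaghetti/NemoMix-Unleashed-12B",
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

def train_pass(dataset_id: str) -> None:
    """Build a fresh trainer around the same model so LoRA updates accumulate."""
    dataset = load_dataset(dataset_id, split="train")
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",   # assumes each row has a "text" column
        max_seq_length=4096,
        args=TrainingArguments(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            num_train_epochs=1,
            learning_rate=2e-4,
            output_dir="outputs",
        ),
    )
    trainer.train()

# First pass, then a second dataset without reloading the model.
train_pass("your-username/first-dataset")
train_pass("your-username/second-dataset")
```

Nothing special happens on the second call; the LoRA adapters simply keep training on the new data because the same model object is handed to each trainer.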

Uploaded model

  • Developed by: UniLLMer
  • License: apache-2.0
  • Finetuned from model: MarinaraSpaghetti/NemoMix-Unleashed-12B

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

GGUF

  • Model size: 12.2B params
  • Architecture: llama
  • Quantizations: 4-bit and 6-bit
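
To run one of the GGUF quants locally, here is a sketch with llama-cpp-python; the repo id is taken from the model tree below, while the quant filename pattern is an assumption, so check the repo's file list for the exact name.

```python
# Sketch only: pulling a GGUF quant of this model with llama-cpp-python.
# The exact quant filenames are an assumption; adjust the glob to whatever
# the repo actually ships (e.g. a different 4-bit or 6-bit variant).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="UniLLMer/SpagMarKaa512b328834",
    filename="*Q4_K_M.gguf",  # glob for a 4-bit quant
    n_ctx=4096,
)

out = llm("Write one line of spaghetti-western dialogue.", max_tokens=64)
print(out["choices"][0]["text"])
```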


Model tree for UniLLMer/SpagMarKaa512b328834: 17 quantized versions of this model.