This repository contains the LoRA adapter for Andy-3.6-small
Note:
Andy-4 is right around the corner and will be released before the end of April. If you are considering fine-tuning off of this model, I would wait until the higher-performance Andy-4 model is released.
Why this exists
This repo exists because I wanted to make Andy-3.6-small fully open-source. Via Unsloth, you are able to continue fine-tuning where I left off, so if you have made your own dataset, you can continue tuning Andy-3.6-small for your exact use case.
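As a rough sketch of what continuing the fine-tune could look like, here is a minimal Unsloth + TRL training loop. The model path, dataset filename, and all hyperparameters below are placeholders and assumptions, not values from this repo; substitute your own. Running it requires a GPU and the actual adapter weights.

```python
# Hypothetical sketch: continue fine-tuning Andy-3.6-small with Unsloth.
# "path/to/Andy-3.6-small" and "my_dataset.jsonl" are placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the model where the previous fine-tune left off.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="path/to/Andy-3.6-small",  # placeholder: adapter repo or local path
    max_seq_length=4096,                  # assumed context length
    load_in_4bit=True,                    # saves VRAM during training
)

# Your own dataset, e.g. one JSON object per line with a "text" field.
dataset = load_dataset("json", data_files="my_dataset.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=2,  # assumed hyperparameters; tune for your GPU
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="andy-3.6-small-tuned",
    ),
)
trainer.train()
```

After training, the adapter can be saved with `model.save_pretrained(...)` or merged and exported as usual with Unsloth.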
What if I fine tune off of Andy-3.6-small?
If you fine-tune Andy-3.6-small on your dataset, my dataset, or any other dataset, you must provide credit to me for making the base model, which is Andy-3.6-small. If you wish, you may refer to the base model as Andy-3.6-small-base.
Why would I want to fine tune off of Andy-3.6-small?
Andy-3.6-small has a significant amount of knowledge about Minecraft and MindCraft, but that knowledge is not unlimited. Andy-3.6-small can be trained further on Minecraft knowledge to improve it, and if you strive for maximum efficiency, it is best to continue fine-tuning from a model that was already trained on similar data.
What should I call my model if I do tune it?
You may name it whatever you'd like, but I would recommend a name that clearly references the fact that it originated from Andy-3.6-small.
For example, if I trained Andy-3.6-small on speedrunning tactics, I would call the model Andy-3.6-small-Speedrun or something similar.