🧠 An Experimental GRPO Andy model, built on the Andy-3.6 dataset 🧠

Please note!

Andy-3.6-small was trained on older data, not the newest versions of Mindcraft.

I cannot guarantee that Andy-3.6-small will work on future versions, as the model was tuned to play Mindcraft with a specific version!

For the rest of the Andy-3.6 generation, this model is ONLY guaranteed to be supported on the version of Mindcraft in this GitHub repo!

For more info, as well as the supported version of Mindcraft, please follow this link to GitHub.

Another note:

This model is an experimental reasoning model that uses GRPO (Group Relative Policy Optimization) and a structured <think>/<answer> response format.

How to Install / Setup

Installing Andy-3.6-small is much easier than installing Andy-3.5!

  1. In the top right of this repo, click "Use This Model"
  2. Next, click Ollama
  3. Pick your quantization (Q5_K_M offers the best size-to-performance ratio; Q8_0 is very good, with performance similar to F16)
  4. Run the command in your terminal
  5. Now you have Andy-3.6-small installed!
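
If you'd rather skip the UI, Ollama can pull GGUF models straight from Hugging Face. A minimal example, assuming you picked the Q5_K_M quantization (swap the tag for whichever quant you chose):

```bash
# Pull and run the model directly from the Hugging Face repo via Ollama.
# The :Q5_K_M suffix selects the quantization file.
ollama run hf.co/Sweaterdog/Andy-3.6-small-GRPO:Q5_K_M
```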

If you would like to use the full Andy-3.6 model, you can find that here

How was the model trained?

The model was trained on the Mindcraft dataset for Andy-3.6, a curated dataset of ~22,000 prompts covering Q&A, reasoning, and gameplay.

What are the capabilities and limitations?

Andy-3.6-small-GRPO was trained on EVERYTHING regarding Minecraft and Mindcraft, so it knows how to use commands natively without a system prompt. It also knows how to build and how to use !newAction to perform custom actions: it was trained on a large amount of building, as well as on using !newAction for tasks like manually crafting something or strip mining.
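
As a purely hypothetical illustration (the wording and arguments here are made up, but !newAction is a real Mindcraft command), an exchange might look like:

```text
Player: andy, go strip mine for some iron
Andy: Sure thing! !newAction("Strip mine at this level: dig a 1x2 tunnel 50 blocks long, place torches every 8 blocks, and collect any iron_ore found")
```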

What models can I choose?

There are going to be 2 model sizes available: Regular and Small.

  • Regular is a 7B parameter model, tuned from DeepSeek-R1 Distilled
  • Small is a 3B parameter model, tuned from Qwen2.5 3B
  • The Small-GRPO model is also a 3B model, but tuned using GRPO (Group Relative Policy Optimization) for enhanced reasoning instead of regular PPO; see the sketch below
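
For context, here is a minimal sketch of the group-relative advantage computation that is the core difference between GRPO and regular PPO. It illustrates the general technique only and is not the training code used for this model:

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray) -> np.ndarray:
    """GRPO-style advantages for one group of sampled responses.

    Instead of a learned value critic (as in PPO), GRPO scores each
    response against its own group: the group mean is the baseline,
    and the group standard deviation normalizes the scale.
    """
    baseline = rewards.mean()
    scale = rewards.std() + 1e-8  # guard against a zero-variance group
    return (rewards - baseline) / scale

# Example: hypothetical rewards for 4 sampled completions of one prompt
# (1.0 = goal reached, partial credit otherwise).
print(group_relative_advantages(np.array([1.0, 0.25, 0.0, 0.5])))
```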

Andy-3.6-small-GRPO is a model designed for reasoning and uses this response format:

<think>
...thinking text here...
</think>
<answer>
...answer text here...
</answer>
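
If you are calling the model outside of Mindcraft, you will usually want only the answer body. A minimal sketch, assuming the format above is followed (the helper name is mine, not part of any Mindcraft or Ollama API):

```python
import re

ANSWER_RE = re.compile(r"<answer>(.*?)</answer>", re.DOTALL)
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def extract_answer(response: str) -> str:
    """Return the <answer> body if present; otherwise strip any <think> block."""
    match = ANSWER_RE.search(response)
    if match:
        return match.group(1).strip()
    return THINK_RE.sub("", response).strip()

print(extract_answer('<think>need wood first</think><answer>!collectBlocks("oak_log", 16)</answer>'))
# -> !collectBlocks("oak_log", 16)
```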

Safety and FAQ

Q: Is this model safe to use?

A: Yes, this model is non-volatile and cannot generate malicious content.

Q: Can this model be used on a server?

A: Yes. In theory and in practice, the model is only capable of building and performing manual tasks via !newAction.

Q: Who is responsible if this model does generate malicious content?

A: You are. Even though the model was never trained to be able to make malicious content, there is a very slight chance that it still generates malicious code.

Q: If I make media based on this model, like photos / videos, do I have to mention the creator?

A: No. If you are making a post about Mindcraft using this model, you only have to mention the creator if you mention the model being used.

🔥UPDATE🔥

Andy-3.6-small-GRPO Release!

Andy-3.6-small-GRPO is a small 3B reasoning model for Mindcraft.

I want to thank all supporters!

I would love to thank everyone who supported this project; there is a list of supporters in the files section.

You can find all of the supporters here

Performance Metrics

These benchmarks are atypical, since most standard benchmarks don't apply to Minecraft.

The benchmarks below include cheap API models as well as other fine-tuned local models.

Zero-info prompting

How fast can a model collect 16 oak logs and convert them all into sticks?
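
For reference, the math behind the task: 16 logs craft into 64 planks (1 log -> 4 planks), and 64 planks craft into 128 sticks (2 planks -> 4 sticks). An ideal run is therefore roughly the command sequence below. This is a hand-written illustration, not output from any benchmarked model, and the exact !craftRecipe argument semantics depend on your Mindcraft version:

```text
!collectBlocks("oak_log", 16)
!craftRecipe("oak_planks", 16)
!craftRecipe("stick", 32)
```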

(Benchmark chart: zero-info prompting results)

As shown, the only models capable of playing without extra information are Andy-3.6 and all of the Andy-3.5 models.

You can test this demo out for yourself using this profile.
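
If you would rather write the profile yourself, a Mindcraft profile pointing at a local Ollama install might look roughly like this sketch. The exact field names vary across Mindcraft versions, and the model tag assumes the Ollama setup from the install section, so treat this as a starting point rather than the linked profile:

```json
{
  "name": "andy",
  "model": {
    "api": "ollama",
    "url": "http://localhost:11434",
    "model": "hf.co/Sweaterdog/Andy-3.6-small-GRPO:Q5_K_M"
  }
}
```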

Time to get a stone pickaxe

(Benchmark chart: time to get a stone pickaxe)

  • For Andy-3.6, I used the Q4_K_M quantization
  • For Andy-3.5-mini, I used the FP16 model, since I had enough VRAM to do so
  • For Andy-3.5, I used the Q4_K_M quantization
  • For Andy-3.5-small, I used the Q8_0 quantization
  • For Andy-3.5-Teensy, I used the FP16 quantization
  • For Mineslayerv1 and Mineslayerv2, I used the default (and only) quantization, Q4_K_M

Andy-3.5-reasoning-small was the most efficient model, producing the fewest messages, but it took a whopping 34.5 minutes to get a stone pickaxe.

Notes about the benchmarks

Zero-info prompting

  • Andy-3.5-Mini collected 32 oak_log instead of 16 oak_log
  • Andy-3.5-small: no notes
  • Andy-3.5 attempted to continue playing and make a wooden_pickaxe after the goal was done
  • Both Mineslayerv1 and Mineslayerv2 hallucinated commands, like !chop or !grab

Time to get a stone pickaxe

  • Andy-3.6 performed the best, beating gpt-4o-mini and claude-3.5-haiku
  • Andy-3.5-Mini collected enough wood but was unable to make itself a stone pickaxe; it got stuck converting logs to planks, repeatedly trying !craftRecipe("wooden_planks", 6) instead of oak_planks
  • Andy-3.5-small kept trying to make a stone_pickaxe first
  • Andy-3.5 made a stone pickaxe faster than GPT-4o-mini and Claude-3.5-Haiku
  • Mineslayerv1 was unable to use !collectBlocks and instead kept trying !collectBlock
  • Mineslayerv2 was unable to play; it kept hallucinating on the first command

