---
license: apache-2.0
base_model:
  - TroyDoesAI/BlackSheep-Llama3.2-3B
  - bunnycore/Llama-3.2-3B-Mix-Skill
  - SicariusSicariiStuff/Impish_LLAMA_3B
  - NousResearch/Hermes-3-Llama-3.2-3B
tags:
  - merge
  - mergekit
  - moe
  - text-generation-inference
  - roleplay
---

An experimental 3x3B Mixture of Experts (MoE) model with 2 experts active per token, merged with mergekit. Special thanks to MoV.

PS: A decent, fast model. Maybe it's silly, but it's fun enough if you use it for chat roleplay.
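
Below is a minimal sketch of loading the merged model with `transformers` for text generation. The repo id is assumed from this model card's name and author; adjust it (and the sampling settings) to your setup.

```python
# Minimal usage sketch, assuming the repo id matches this model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DoppelReflEx/MoETest-3E2A-3x3B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires accelerate
    torch_dtype="auto",
)

prompt = "Write a short in-character greeting from a tavern keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```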