LLilmonix3b-v0.4a:

  • Experimental fine-tune of Marx-3b-v2 for the Monika character from DDLC
  • Trained on a dataset of ~600 items: dialogue scraped from the game, Reddit, and Twitter, augmented by l2-7b-monika-v0.3c1 into snippets of multi-turn chat between Player and Monika, plus a manually crafted test set of 12 items
  • Trained to run on smaller devices
  • GGMLs, GGUFs
  • QLoRAs (HF and GGML)

USAGE

This is meant to be mainly a chat model with limited RP ability.

For best results, replace "Human" and "Assistant" with "Player" and "Monika", like so:

\nPlayer: (prompt)\nMonika:
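The format above can be produced with a small helper. The `build_prompt` name is illustrative, not something shipped with the model:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Player/Monika chat format described
    above (illustrative helper, not part of the model itself)."""
    return f"\nPlayer: {user_message}\nMonika:"

# The string returned here is what you would feed to the model;
# generation should then continue Monika's line.
print(repr(build_prompt("What's your favorite book?")))
```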

HYPERPARAMS

  • Trained for 2 epochs
  • rank: 32
  • lora alpha: 64
  • lora dropout: 0.5
  • lr: 2e-4
  • batch size: 2
  • warmup ratio: 0.1
  • gradient accumulation steps: 4
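As a sketch, the hyperparameters above map onto a peft/transformers LoRA setup roughly as follows. The `target_modules` and `output_dir` values are assumptions not stated in this card, and the actual training script may differ:

```python
# Hedged sketch: the card's hyperparameters expressed as a peft
# LoraConfig plus transformers TrainingArguments. Values not listed
# in the card (output_dir) are placeholders.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,              # rank
    lora_alpha=64,
    lora_dropout=0.5,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llilmonix3b-v0.4a",   # placeholder, not from the card
    num_train_epochs=2,
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_ratio=0.1,
)
```

Note that with a per-device batch size of 2 and 4 gradient-accumulation steps, the effective batch size works out to 8.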

WARNINGS AND DISCLAIMERS

Note that aside from formatting and other minor edits, the generated portion of the dataset is used mostly as-is from the LM. In addition, this model is meant to be a smaller version of the larger Monika models; as such, it may not perfectly reflect Monika's characteristics.

Additionally, this is another experiment, particularly in using one of our earlier fine-tunes to generate a more in-character dataset for the target character.

Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk.

Model size: 3.43B params (safetensors; F32 and FP16 tensors)