
Llama3.2 - OpenElla3B

  • OpenElla Model B is a Llama 3.2 3B-parameter model fine-tuned for roleplaying, even with its limited parameter count. This is achieved through a series of dataset fine-tuning runs, using three datasets with different weights, aiming to counter Llama 3.2's generalist approach and to specialize in roleplaying and acting.

  • OpenElla3A excels at producing RAW and UNCENSORED output, but is weak at following prompts. To address this, the model was re-fine-tuned, which resolves OpenElla3A's disobedience. This allows the model to produce uncensored yet appropriate responses, rivaling its older models.

  • OpenElla3B contains more fine-tuned datasets, so please report any issues found through our email

    [email protected], about any overfitting or improvements for the future Model **C**. Once again, feel free to modify the LoRA to your liking. However, please consider adding this page for credits, and if you increase its **Dataset**, please handle it with care and ethical consideration.
  • OpenElla3B is

    • Developed by: N-Bot-Int
    • License: apache-2.0
    • Parent model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
    • Sequentially trained from: N-Bot-Int/OpenElla3-Llama3.2A
    • Datasets combined using: Mosher-R1 (proprietary software)
  • OpenElla3B is NOT YET RANKED with any metrics

  • Notice

    • For a good experience, please use
      • temperature = 1.5, min_p = 0.1, and max_new_tokens = 128
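The recommended settings above map directly onto a Hugging Face `generate()` call. A minimal sketch, assuming a standard transformers setup (the helper function and its names are illustrative, not part of this card):

```python
# Recommended sampling settings from the notice above.
GENERATION_KWARGS = {
    "temperature": 1.5,     # value recommended by this card, paired with min_p
    "min_p": 0.1,           # drop tokens below 10% of the top token's probability
    "max_new_tokens": 128,
    "do_sample": True,      # sampling must be on for temperature/min_p to apply
}

def generate_reply(model, tokenizer, prompt: str) -> str:
    """Hypothetical helper: run the card's settings through generate()."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, **GENERATION_KWARGS)
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

`min_p` filtering is supported by recent versions of transformers; on older versions, remove that key and rely on temperature alone.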
  • Detail card:

    • Parameter

      • 3 Billion Parameters
      • (Please check with your GPU vendor whether you can run 3B models)
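A rough way to check whether your GPU can hold a 3B-parameter model is to estimate the weight memory per quantization level. This back-of-the-envelope sketch ignores activations and KV cache, so treat the numbers as lower bounds:

```python
# Approximate VRAM needed just for the weights of a 3B-parameter model.
PARAMS = 3e9
BYTES_PER_PARAM = {"fp16": 2, "8-bit": 1, "4-bit": 0.5}

for name, b in BYTES_PER_PARAM.items():
    gib = PARAMS * b / 2**30
    print(f"{name}: ~{gib:.1f} GiB")  # fp16 ~5.6, 8-bit ~2.8, 4-bit ~1.4
```

The 4-bit figure is the relevant one here, since the parent model is a bnb-4bit quantization.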
    • Training

      • 500 steps
        • Mixed-RP startup dataset
      • 200 steps
        • PIPPA-ShareGPT for increased roleplaying capability
      • 150 steps (re-fining)
        • PIPPA-ShareGPT to further increase the weight of PIPPA and override the noise
      • 500 steps (lower LR)
        • Character-roleplay-DO to further encourage the model to respond appropriately to the RP scenario
    • Finetuning tool:

    • Unsloth AI

      • This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
    • Fine-tuned using:

    • Google Colab
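The sequential training schedule in the detail card can be summarized as a quick sanity check (phase names and step counts come from this card; learning rates other than "lower LR" are not specified and are therefore omitted):

```python
# Sequential fine-tuning phases as described in the detail card.
# Each phase continues from the previous checkpoint (OpenElla3A -> B).
PHASES = [
    {"steps": 500, "dataset": "Mixed-RP", "note": "startup dataset"},
    {"steps": 200, "dataset": "PIPPA-ShareGPT", "note": "increase roleplay capability"},
    {"steps": 150, "dataset": "PIPPA-ShareGPT", "note": "re-fining: raise PIPPA weight"},
    {"steps": 500, "dataset": "Character-roleplay-DO", "note": "lower LR: fit RP scenarios"},
]

total_steps = sum(p["steps"] for p in PHASES)
print(total_steps)  # → 1350
```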
