---
license: apache-2.0
tags:
- unsloth
- Uncensored
- text-generation-inference
- transformers
- llama
- trl
- roleplay
- conversational
datasets:
- openerotica/mixed-rp
- kingbri/PIPPA-shareGPT
- flammenai/character-roleplay-DPO
language:
- en
base_model:
- N-Bot-Int/OpenRP3B-Llama3.2
new_version: N-Bot-Int/OpenElla3-Llama3.2B
pipeline_tag: text-generation
library_name: peft
metrics:
- character
---

<a href="https://ibb.co/GvDjFcVp"><img src="https://raw.githubusercontent.com/ItsMeDevRoland/NexusBotWorkInteractives/refs/heads/main/image%20(1).webp" alt="image" border="0"></a>

# Llama3.2 - OpenElla3B

- OpenElla Model **B** is a Llama3.2 **3B**-parameter model fine-tuned for roleplaying purposes, even though it only has a limited number of parameters.
  This is achieved through a series of dataset fine-tuning runs, using 3 datasets with different weights, aiming to counter Llama3.2's generalist
  approach and to focus on specializing in roleplaying and acting.

- OpenElla3A excels at producing **RAW** and **UNCENSORED** output, but it is weak at following prompts.
  Because of this, the model was re-finetuned, which **solves the issue of OpenElla3A's disobedience**.
  This allows the model to engage in uncensored yet appropriate responses, rivaling its older models.

- OpenElla3B was fine-tuned on more datasets, so please report any issues you find, such as overfitting or suggestions for the future Model **C**,
  to our email [[email protected]](mailto:[email protected]).
  Once again, feel free to modify the LoRA to your liking; however, please consider crediting this page,
  and if you expand its **datasets**, please handle them with care and ethical consideration.

- OpenElla3B is
  - **Developed by:** N-Bot-Int
  - **License:** apache-2.0
  - **Parent model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
  - **Sequentially trained from:** N-Bot-Int/OpenElla3-Llama3.2A
  - **Datasets combined using:** Mosher-R1 (proprietary software)
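
Since the library is `peft`, the snippet below is a minimal loading sketch: it assumes the OpenElla3B weights are published as a LoRA adapter under the repo ID `N-Bot-Int/OpenElla3-Llama3.2B` and are applied on top of the 4-bit parent model listed above; adjust the IDs if the actual repository layout differs.

```python
# Minimal sketch: load the 4-bit base model and attach the OpenElla3B LoRA adapter with peft.
# Both repo IDs are assumptions taken from this card; change them if the adapter lives elsewhere.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"  # parent model listed above
ADAPTER_ID = "N-Bot-Int/OpenElla3-Llama3.2B"                # assumed adapter repo

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)

# Optional: merge the LoRA into the base weights before exporting or further fine-tuning
# (merging onto 4-bit quantized weights may require loading the base in full precision first).
# model = model.merge_and_unload()
```

If the repository instead ships merged full weights, `AutoModelForCausalLM.from_pretrained` on the model ID alone is enough.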

- OpenElla3B Official Metric Score
  - <img src="https://raw.githubusercontent.com/NexusEntertainmentCloutDino/UnslothDirectory/refs/heads/main/roleplay_analysis.png?token=GHSAT0AAAAAADAYSRDXE2QA3SLCSAQMGQ4CZ62Q3CQ" alt="image" border="0">
  - Metrics made by **ItsMeDevRoland**, which compare:
    - **DeepSeek R1 3B GGUF**
    - **Dolphin 3B GGUF**
    - **Hermes 3B Llama GGUF**
    - **OpenElla3-Llama3.2B GGUF**

    All models are ranked with the same prompt, the same temperature, and the same hardware (Google Colab)
    to properly showcase the differences and strengths of each model.

- **THIS MODEL EXCELS AT LONGER PROMPTS AND STAYING IN CHARACTER, BUT LAGS BEHIND DEEPSEEK-R1**

- # Notice
  - **For a good experience, please use**
    - temperature = 1.5, min_p = 0.1, and max_new_tokens = 128
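
A hedged generation sketch using these settings is shown below. The repo ID and the roleplay prompt are assumptions based on this card, and `min_p` sampling requires a reasonably recent `transformers` release.

```python
# Sketch: generate with the recommended sampling settings (temperature 1.5, min_p 0.1, 128 new tokens).
# The model ID and the example prompt are assumptions; adapt them to your setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "N-Bot-Int/OpenElla3-Llama3.2B"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype=torch.float16)

messages = [
    {"role": "system", "content": "You are Ella, a witty tavern keeper. Stay in character."},
    {"role": "user", "content": "A hooded stranger pushes open the tavern door. What do you do?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=128,  # recommended cap from the notice above
    do_sample=True,
    temperature=1.5,     # recommended temperature
    min_p=0.1,           # recommended min_p
)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```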

- # Detail card:
  - Parameters
    - 3 billion parameters
    - (Please check with your GPU vendor whether your hardware can run 3B models)

  - Training (a staged-training sketch is included at the end of this card)
    - 500 steps
      - Mixed-RP startup dataset
    - 200 steps
      - PIPPA-ShareGPT for increased roleplaying capability
    - 150 steps (re-fining)
      - PIPPA-ShareGPT to further increase the weight of PIPPA and override the noise
    - 500 steps (lower LR)
      - character-roleplay-DPO to further encourage the model to respond appropriately to the RP scenario

  - Finetuning tool:
    - Unsloth AI
      - This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

        [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  - Fine-tuned using:
    - Google Colab
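
To make the staged recipe above concrete, here is a minimal training sketch assuming Unsloth's `FastLanguageModel` together with TRL's `SFTTrainer`. The step counts and dataset names come from this card; the LoRA hyperparameters, learning rates, batch sizes, and the conversation-to-text formatting are illustrative assumptions, and the exact `SFTTrainer` arguments vary between TRL versions.

```python
# Sketch of the staged fine-tuning recipe from the Detail card. Step counts mirror the card;
# everything else (LoRA rank, learning rates, dataset formatting) is an illustrative assumption.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",  # parent model listed above
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(  # attach LoRA adapters to the 4-bit base
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)

def to_text(example):
    # Placeholder: flatten each example into a single training string. A real run would apply
    # the Llama 3.2 chat template to the ShareGPT-style conversation turns instead.
    return {"text": str(example)}

# (dataset, steps, learning rate) for the supervised stages. The final character-roleplay-DPO
# stage uses preference pairs and would be trained with TRL's DPOTrainer rather than SFTTrainer.
stages = [
    ("openerotica/mixed-rp",   500, 2e-4),  # start-up stage
    ("kingbri/PIPPA-shareGPT", 200, 2e-4),  # roleplay capability
    ("kingbri/PIPPA-shareGPT", 150, 2e-4),  # re-fining pass to up-weight PIPPA
]

for dataset_id, max_steps, lr in stages:
    dataset = load_dataset(dataset_id, split="train").map(to_text)
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            max_steps=max_steps,
            learning_rate=lr,
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            output_dir=f"outputs/{dataset_id.split('/')[-1]}",
        ),
    )
    trainer.train()  # each stage keeps training the same LoRA adapter

model.save_pretrained("OpenElla3B-lora")  # export the adapter
```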