---
base_model:
- Novaciano/UNCENSORED-Sigil-Of-Satan-3.2-1B
- jtatman/llama-3.2-1b-lewd-mental-occult
library_name: transformers
tags:
- mergekit
- merge
- uncensored
- abliterated
- llama
- llama3.2
- nsfw
- not-for-all-audiences
language:
- es
- en
model-index:
- name: LEWD-Mental-Cultist-3.2-1B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 53.09
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FLEWD-Mental-Cultist-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 8.64
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FLEWD-Mental-Cultist-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.29
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FLEWD-Mental-Cultist-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 0.89
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FLEWD-Mental-Cultist-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.42
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FLEWD-Mental-Cultist-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 8.54
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FLEWD-Mental-Cultist-3.2-1B
      name: Open LLM Leaderboard
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.

### Models Merged

The following models were included in the merge:

* [Novaciano/UNCENSORED-Sigil-Of-Satan-3.2-1B](https://huggingface.co/Novaciano/UNCENSORED-Sigil-Of-Satan-3.2-1B)
* [jtatman/llama-3.2-1b-lewd-mental-occult](https://huggingface.co/jtatman/llama-3.2-1b-lewd-mental-occult)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: jtatman/llama-3.2-1b-lewd-mental-occult
  - model: Novaciano/UNCENSORED-Sigil-Of-Satan-3.2-1B
base_model: jtatman/llama-3.2-1b-lewd-mental-occult
merge_method: slerp
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```

A conceptual sketch of what this layer-wise `t` schedule does is included after the results table below.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Novaciano__LEWD-Mental-Cultist-3.2-1B-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Novaciano%2FLEWD-Mental-Cultist-3.2-1B&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!

| Metric              | Value (%) |
|---------------------|----------:|
| **Average**         |     12.98 |
| IFEval (0-Shot)     |     53.09 |
| BBH (3-Shot)        |      8.64 |
| MATH Lvl 5 (4-Shot) |      5.29 |
| GPQA (0-shot)       |      0.89 |
| MuSR (0-shot)       |      1.42 |
| MMLU-PRO (5-shot)   |      8.54 |
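### About the SLERP `t` schedule

The `t: [0, 0.5, 1, 0.5, 0]` list in the configuration is a piecewise-linear curve over layer depth: it is typically read as the blending fraction toward the non-base model, so the first and last layers stay close to the base model's tensors while the middle layers lean toward the other model. The sketch below is conceptual only and not mergekit's actual implementation; the helper names (`slerp`, `layer_t`) and the linear mapping from layer index to the schedule are illustrative assumptions.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors (conceptual sketch)."""
    v0_flat, v1_flat = v0.ravel(), v1.ravel()
    # Measure the angle between the two tensors using unit vectors.
    v0_u = v0_flat / (np.linalg.norm(v0_flat) + eps)
    v1_u = v1_flat / (np.linalg.norm(v1_flat) + eps)
    dot = np.clip(np.dot(v0_u, v1_u), -1.0, 1.0)
    # Nearly parallel tensors: fall back to plain linear interpolation.
    if 1.0 - abs(dot) < 1e-6:
        return ((1.0 - t) * v0_flat + t * v1_flat).reshape(v0.shape)
    omega = np.arccos(dot)          # angle between the tensors
    sin_omega = np.sin(omega)
    blended = (np.sin((1.0 - t) * omega) / sin_omega) * v0_flat \
            + (np.sin(t * omega) / sin_omega) * v1_flat
    return blended.reshape(v0.shape)

def layer_t(layer_idx: int, num_layers: int, schedule=(0, 0.5, 1, 0.5, 0)) -> float:
    """Map a layer index onto the piecewise-linear t schedule from the YAML config."""
    pos = layer_idx / max(num_layers - 1, 1) * (len(schedule) - 1)
    lo = int(np.floor(pos))
    hi = min(lo + 1, len(schedule) - 1)
    frac = pos - lo
    return (1.0 - frac) * schedule[lo] + frac * schedule[hi]
```

For example, with the 16 decoder layers of a Llama 3.2 1B model, `layer_t(0, 16)` and `layer_t(15, 16)` return 0 (base-model weights) while layers near the middle approach 1 (non-base weights).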
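### Usage

Since the card declares `library_name: transformers`, the merged checkpoint should load through the standard `AutoModelForCausalLM` API. A minimal sketch follows, assuming the repository id `Novaciano/LEWD-Mental-Cultist-3.2-1B` (taken from the leaderboard links above); the sampling parameters are placeholder assumptions, not recommended settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Novaciano/LEWD-Mental-Cultist-3.2-1B"  # assumed repo id, see leaderboard links

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # requires the accelerate package
)

prompt = "Hola, ¿quién eres?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```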