---
base_model:
- Novaciano/HarmfulProject-3.2-1B
- Novaciano/La_Mejor_Mezcla-3.2-1B
- Novaciano/UNCENSORED-Sigil-of-Satan-3.2-1B
- Novaciano/BLAST_PROCESSING-3.2-1B
library_name: transformers
tags:
- mergekit
- merge
- 1b
- nsfw
- uncensored
- abliterated
- rp
- roleplay
language:
- es
- en
model-index:
- name: ASTAROTH-3.2-1B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 56.13
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FASTAROTH-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 9.49
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FASTAROTH-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 7.33
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FASTAROTH-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 0.78
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FASTAROTH-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.21
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FASTAROTH-3.2-1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 10.1
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Novaciano%2FASTAROTH-3.2-1B
      name: Open LLM Leaderboard
---

# ASTAROTH 3.2 1B

## Merge Details

I took all of my merges and combined them into one definitive merge.
The merge uses the Model Stock method, with [Novaciano/La_Mejor_Mezcla-3.2-1B](https://huggingface.co/Novaciano/La_Mejor_Mezcla-3.2-1B) as the base model. The following models were merged into it:

* [Novaciano/HarmfulProject-3.2-1B](https://huggingface.co/Novaciano/HarmfulProject-3.2-1B)
* [Novaciano/UNCENSORED-Sigil-of-Satan-3.2-1B](https://huggingface.co/Novaciano/UNCENSORED-Sigil-of-Satan-3.2-1B)
* [Novaciano/BLAST_PROCESSING-3.2-1B](https://huggingface.co/Novaciano/BLAST_PROCESSING-3.2-1B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: Novaciano/UNCENSORED-Sigil-of-Satan-3.2-1B
    parameters:
      weight: 1.0
  - model: Novaciano/HarmfulProject-3.2-1B
    parameters:
      weight: 1.0
  - model: Novaciano/BLAST_PROCESSING-3.2-1B
    parameters:
      weight: 1.0
base_model: Novaciano/La_Mejor_Mezcla-3.2-1B
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
chat_template: auto
tokenizer:
  source: union
```

A sketch showing how to run this configuration with mergekit, and how to load the merged model, appears after the evaluation results below.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Novaciano__ASTAROTH-3.2-1B-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Novaciano%2FASTAROTH-3.2-1B&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!

| Metric              | Value (%) |
|---------------------|----------:|
| **Average**         |     14.17 |
| IFEval (0-Shot)     |     56.13 |
| BBH (3-Shot)        |      9.49 |
| MATH Lvl 5 (4-Shot) |      7.33 |
| GPQA (0-shot)       |      0.78 |
| MuSR (0-shot)       |      1.21 |
| MMLU-PRO (5-shot)   |     10.10 |
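## Reproducing the merge

A minimal sketch of reproducing the merge from the configuration above. It assumes mergekit's Python entry points (`MergeConfiguration` and `run_merge`, as shown in the mergekit README); the config file name and output directory are illustrative, not files that ship with this repository.

```python
# Reproduction sketch (assumes: pip install mergekit, and the YAML above
# saved locally as astaroth.yaml). File names and paths are illustrative.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("astaroth.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./ASTAROTH-3.2-1B",       # illustrative output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```

The same configuration should also be runnable from the command line with mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml astaroth.yaml ./ASTAROTH-3.2-1B`).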
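## Usage

And a minimal usage sketch with the `transformers` text-generation pipeline (the model card lists `transformers` as the library). The prompt and sampling settings are illustrative, not values recommended by the author.

```python
# Minimal usage sketch; prompt and sampling settings are illustrative.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Novaciano/ASTAROTH-3.2-1B",
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

prompt = "Escribe la primera escena de un juego de rol en una taberna."
result = generator(
    prompt,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(result[0]["generated_text"])
```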