# Model Card for ChatWaifu_v1.4

Merged model created with mergekit.

This model is aimed at roleplaying as visual novel characters.
## Merge Format

```yaml
models:
  - model: spow12/ChatWaifu_modify_data
  - model: anthracite-org/magnum-v2-12b
  - model: Sao10K/MN-12B-Lyra-v4
  - model: Gryphe/Pantheon-RP-1.6-12b-Nemo
  - model: mistralai/Mistral-Nemo-Instruct-2407
  - model: NeverSleep/Lumimaid-v0.2-12B
  - model: Epiculous/Violet_Twilight-v0.1
merge_method: model_stock
base_model: spow12/ChatWaifu_modify_data
dtype: bfloat16
```
Note: before merging, you have to resize the embedding size of ChatWaifu and Lumimaid from 131073 to 131072 so that every model in the merge shares the same tensor shapes.
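The resize can be done with `resize_token_embeddings` from transformers. A minimal sketch, demonstrated on a tiny randomly initialized GPT-2 so it runs without downloading a 12B checkpoint; the actual checkpoints would be resized the same way before being handed to mergekit:

```python
# Sketch: trimming a 131073-row embedding down to 131072 so all models
# in the merge share identical tensor shapes. A tiny random GPT-2 stands
# in for the real 12B checkpoints here.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(vocab_size=131073, n_positions=64,
                    n_embd=8, n_layer=1, n_head=2)
model = GPT2LMHeadModel(config)

model.resize_token_embeddings(131072)  # drops the extra trailing row

assert model.get_input_embeddings().weight.shape[0] == 131072
assert model.config.vocab_size == 131072
```

After resizing, save the model with `save_pretrained` and point the merge config at the resized copy.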
## WaifuModel Collections

- Unified demo

## Update
- 2024.09.10 Update Ver 1.4
  - Modified the data format and applied filtering.
  - Merged with model_stock.
- 2024.08.29 Update Ver 1.3.1
  - Merged Ver 1.2 with mistralai/Mistral-Nemo-Instruct-2407, NeverSleep/Lumimaid-v0.2-12B, and Epiculous/Violet_Twilight-v0.1.
  - Adjusted merge weights.
- 2024.08.16 Update Ver 1.3
  - Merged Ver 1.2 with mistralai/Mistral-Nemo-Instruct-2407 and NeverSleep/Lumimaid-v0.2-12B.
- 2024.08.08 Update Ver 1.2.1
  - Merged Ver 1.2 with mistralai/Mistral-Nemo-Instruct-2407.
- 2024.08.07 Update Ver 1.2
  - Added preference learning to the training pipeline.
- 2024.07.29 Update Ver 1.1
  - Added dataset formats: novel generation and masked-sentence filling.
  - Removed the system role and integrated its content into the user message.
  - Removed Japanese quotation brackets from conversations.
- 2024.06.20 Uploaded other characters' sample chat histories.
- 2024.06.13 Uploaded the model.
## Model Details

### Model Description

- Developed by: spow12(yw_nam)
- Shared by: spow12(yw_nam)
- Model type: CausalLM
- Language(s) (NLP): Japanese
- Finetuned from model: NeverSleep/Lumimaid-v0.2-12B
Currently, the chatbot supports the following character personas:
| character | visual_novel |
|---|---|
| ムラサメ | Senren＊Banka |
| 茉子 | Senren＊Banka |
| 芳乃 | Senren＊Banka |
| レナ | Senren＊Banka |
| 千咲 | Senren＊Banka |
| 芦花 | Senren＊Banka |
| 愛衣 | Café Stella and the Reaper's Butterflies |
| 栞那 | Café Stella and the Reaper's Butterflies |
| ナツメ | Café Stella and the Reaper's Butterflies |
| 希 | Café Stella and the Reaper's Butterflies |
| 涼音 | Café Stella and the Reaper's Butterflies |
| あやせ | Riddle Joker |
| 七海 | Riddle Joker |
| 羽月 | Riddle Joker |
| 茉優 | Riddle Joker |
| 小春 | Riddle Joker |
## Features

- Fluent chat performance
- Reduced repetition in long conversations (over 20-30 turns)
- Zero-shot character personas from a plain-text character description
- 128k context window
- Retains earlier conversation details even after long-context generation
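Since the Ver 1.1 data change folds the system role into the user message, a zero-shot persona can be set by prepending the character description to the first user turn. A minimal sketch, assuming a Mistral-style `[INST]` chat template inherited from the Nemo base model (check the repository's tokenizer config for the exact format; `build_persona_prompt` is a hypothetical helper, not part of this repo):

```python
# Sketch: zero-shot persona prompting. The [INST] template is an
# assumption based on the Mistral-Nemo base model, not a verified spec.
def build_persona_prompt(description: str,
                         history: list[tuple[str, str]],
                         user_msg: str) -> str:
    """Fold the character description into the first user turn,
    mirroring the card's note that the system role is integrated
    into the user message."""
    turns = []
    first = True
    for user, assistant in history:
        content = f"{description}\n\n{user}" if first else user
        turns.append(f"[INST] {content} [/INST] {assistant}</s>")
        first = False
    content = f"{description}\n\n{user_msg}" if first else user_msg
    turns.append(f"[INST] {content} [/INST]")
    return "<s>" + "".join(turns)

prompt = build_persona_prompt(
    "あなたはムラサメ。古風な話し方をする少女。",  # character description
    [],              # no prior turns
    "こんにちは！",
)
```

The returned string can be tokenized and passed to `generate` as usual; later turns keep only the plain user text, since the description is already in context.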
## Demo

You can try the demo in Google Colab.

Check Here
## Future Work

I'm now quite satisfied with the model's chat performance, so I'm going to focus on integrating a vision modality into the model so that our waifu can handle more general tasks.
## Bias, Risks, and Limitations

This model was trained on a Japanese dataset that includes visual novels containing NSFW content, so the model may generate NSFW content.
## Use & Credit

This model is currently available for non-commercial and research purposes only. Also, since I'm not well versed in licensing, I hope you use it responsibly.

By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and anime fans).

This repository can be used with visual-novel-based RAG, but I will not distribute that data yet because I'm not sure whether it is permissible to release it publicly.
## Citation

```bibtex
@misc{ChatWaifu_v1.4,
  author    = { YoungWoo Nam },
  title     = { ChatWaifu_v1.4 },
  year      = 2024,
  url       = { https://huggingface.co/spow12/ChatWaifu_v1.4 },
  publisher = { Hugging Face }
}
```
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 25.25 |
| IFEval (0-Shot) | 56.91 |
| BBH (3-Shot) | 31.63 |
| MATH Lvl 5 (4-Shot) | 7.85 |
| GPQA (0-shot) | 7.61 |
| MuSR (0-shot) | 20.03 |
| MMLU-PRO (5-shot) | 27.50 |
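The Avg. row is the unweighted mean of the six benchmark scores:

```python
# The leaderboard average is the unweighted mean of the six scores above.
scores = {
    "IFEval (0-Shot)": 56.91,
    "BBH (3-Shot)": 31.63,
    "MATH Lvl 5 (4-Shot)": 7.85,
    "GPQA (0-shot)": 7.61,
    "MuSR (0-shot)": 20.03,
    "MMLU-PRO (5-shot)": 27.50,
}
avg = sum(scores.values()) / len(scores)  # ≈ 25.25, as reported
```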