|
--- |
|
base_model: |
|
- tokyotech-llm/Swallow-13b-instruct-hf |
|
- nitky/Superswallow-13b-v0.2 |
|
license: llama2 |
|
language: |
|
- ja |
|
tags: |
|
- mergekit |
|
- merge |
|
- MoE |
|
--- |
|
# Swallow-MoE-2x13B-v0.1 |
|
|
|
|
|
|
## Description
|
This model was created by using [mergekit](https://github.com/cg123/mergekit) to build a MoE (Mixture of Experts) model from [tokyotech-llm/Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), a Japanese instruction-tuned model based on Llama 2, and [nitky/Superswallow-13b-v0.2](https://huggingface.co/nitky/Superswallow-13b-v0.2), a merged model built on top of it.
|
|
|
[Click here for the GGUF version](https://huggingface.co/Aratako/Swallow-MoE-2x13B-v0.1-GGUF)
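
As a minimal usage sketch (not from the original card), the GGUF build can be run with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the exact `.gguf` file name below is an assumption, so check the GGUF repository for the quantization variants actually provided.

```python
# A sketch, not from the original card: running the GGUF build locally.
# The .gguf file name below is hypothetical -- pick an actual file from the
# GGUF repository.
from llama_cpp import Llama

llm = Llama(model_path="Swallow-MoE-2x13B-v0.1-q4_K_M.gguf", n_ctx=4096)

# The prompt follows the instruction template used for this model
# (see the Benchmark section below).
prompt = "### 指示:\n日本の首都はどこですか?\n\n### 応答:\n"
output = llm(prompt, max_tokens=128, stop=["### 指示:"])
print(output["choices"][0]["text"])
```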
|
|
|
It uses the following two models:
|
- [tokyotech-llm/Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf) |
|
- [nitky/Superswallow-13b-v0.2](https://huggingface.co/nitky/Superswallow-13b-v0.2) |
|
|
|
## License
|
While this merged model inherits the Llama 2 license, it is also subject to licenses other than Llama 2 because of the models used in the merge.
|
- Because it uses [nitky/Superswallow-13b-v0.2](https://huggingface.co/nitky/Superswallow-13b-v0.2), it may inherit the [AI2 ImpACT license](https://allenai.org/impact-license). Please check the original model's description for details.
|
|
|
## Benchmark
|
The [japanese-mt-bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge) results for this model and the models it is based on, Swallow-13b-instruct-hf and Superswallow-13b-v0.2, are as follows.

(Single turn)
|
|Model|Size|Coding|Extraction|Humanities|Math|Reasoning|Roleplay|STEM|Writing|avg_score| |
|
|---|---|---|---|---|---|---|---|---|---|---| |
|
| Swallow-13b-instruct-hf | 13B | 2.1 | 5.0 | 6.1 | 1.9 | 4.5 | 5.0 | 4.9 | 5.6 | 4.3875 | |
|
| Superswallow-13b-v0.2 | 13B | **2.7** | **6.3** | **8.4** | 2.2 | **5.9** | 6.8 | 7.2 | 6.8 | 5.7875 | |
|
| This model | 2x13B | 2.6 | 6.2 | **8.4** | **2.3** | 5.3 | **7.4** | **7.7** | **8.4** | **6.0375** | |
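
`avg_score` appears to be the unweighted mean of the eight category scores; for example, for this model:

```python
# avg_score as the plain mean of the eight category scores ("This model" row).
scores = [2.6, 6.2, 8.4, 2.3, 5.3, 7.4, 7.7, 8.4]
print(sum(scores) / len(scores))  # 6.0375
```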
|
|
|
 |
|
**Prompts used for the benchmark**
|
- Swallow-13b-instruct-hf, Superswallow-13b-v0.2 |
|
``` |
|
以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。リクエストを適切に完了するための回答を記述してください。 |
|
### 指示: |
|
{instruction} |
|
### 応答: |
|
``` |
|
- Swallow-MoE-2x13B-v0.1 |
|
``` |
|
### 指示: |
|
{instruction} |
|
### 応答: |
|
``` |
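
As a usage sketch, the template above can be filled in and passed to the model with `transformers`; the Hub repo id below is an assumption inferred from the name of the GGUF repository.

```python
# A minimal sketch, assuming the model is published as
# "Aratako/Swallow-MoE-2x13B-v0.1" (inferred from the GGUF repo name).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Aratako/Swallow-MoE-2x13B-v0.1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# Fill in the instruction template shown above.
prompt = "### 指示:\n日本の観光名所を3つ挙げてください。\n\n### 応答:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
    )
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```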
|
|
|
|
|
|
## Merge config |
|
[mergekit_moe_config.yml](./mergekit_moe_config.yml)
|
```yaml |
|
base_model: ./Superswallow-13b-v0.2 |
|
gate_mode: random |
|
dtype: bfloat16 |
|
experts:

  - source_model: ./Superswallow-13b-v0.2

    positive_prompts: []

  - source_model: ./Swallow-13b-instruct-hf

    positive_prompts: []

tokenizer_source: union
|
``` |
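
With `gate_mode: random`, mergekit-moe initializes the routing gates randomly instead of deriving them from prompts, which is why both `positive_prompts` lists are empty. The merge should produce a Mixtral-style checkpoint; below is a quick sanity check (a sketch, assuming the merge output was written to `./merged-model`):

```python
# A sketch: verify the merged output is a Mixtral-style MoE with two experts.
# "./merged-model" is a hypothetical output path for the mergekit-moe run.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("./merged-model")
print(cfg.architectures)        # expected: ['MixtralForCausalLM']
print(cfg.num_local_experts)    # expected: 2
print(cfg.num_experts_per_tok)  # number of experts routed per token
```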