This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).

### Training Data and License

This model is fine-tuned using the dataset [DeL-TaiseiOzaki/Tengentoppa-sft-v1.0](https://huggingface.co/datasets/DeL-TaiseiOzaki/Tengentoppa-sft-v1.0) under the [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/).

The dataset was compiled from the following publicly available datasets:

- Hachi-Alpaca_newans (GENIAC-Team-Ozaki/Hachi-Alpaca_newans)
- Chatbot Arena Japanese Dataset for Karakuri LM 8x7B Chat v0.1 AWQ (GENIAC-Team-Ozaki/chatbot-arena-ja-karakuri-lm-8x7b-chat-v0.1-awq)
- WikiHow NFQA Japanese Cleaned Dataset (GENIAC-Team-Ozaki/WikiHowNFQA-ja_cleaned)
- Evolutionary Alpaca Generation 3 500 Cleaned Dataset (GENIAC-Team-Ozaki/Evol-Alpaca-gen3-500_cleaned)
- Open Assistant 33k Japanese Reformatted Dataset (GENIAC-Team-Ozaki/oasst2-33k-ja_reformatted)
- SFT Dataset for Self-Taught Evaluators Iteration 1 (Aratako/SFT-Dataset-For-Self-Taught-Evaluators-iter1)
- Japanese Debate Argument Instruction Dataset (GENIAC-Team-Ozaki/debate_argument_instruction_dataset_ja)
- Japanese Helpful-Harmless RLHF 49k Dataset (fujiki/japanese_hh-rlhf-49k)
- Japanese Government FAQs 22k Dataset (GENIAC-Team-Ozaki/JaGovFaqs-22k)
- Evolutionary Helpful-Harmless RLHF Generation 3 1k Cleaned Dataset (GENIAC-Team-Ozaki/Evol-hh-rlhf-gen3-1k_cleaned)
- Magpie Qwen 2.5 32B Reasoning 100k Dataset (DeL-TaiseiOzaki/magpie-qwen2.5-32b-reasoning-100k)
- Japanese Reasoning Finetuning Dataset (DeL-TaiseiOzaki/reasoning-finetuning-ja)
- Magpie LLM-jp-3 13B 20k Dataset (DeL-TaiseiOzaki/magpie-llm-jp-3-13b-20k)
- Magpie SFT Version 1.0 Dataset (llm-jp/magpie-sft-v1.0)
- Aya Japanese Nemotron DPO Masked Dataset (weblab-GENIAC/aya-ja-nemotron-dpo-masked)
- Open Platypus Japanese Masked Dataset (weblab-GENIAC/Open-Platypus-Japanese-masked)
- Synthetic SFT data generated with Mixtral-8x22B (hatakeyama-llm-team/AutoGeneratedJapaneseQA-CC)
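
For reference, the compiled dataset can be pulled straight from the Hugging Face Hub with the `datasets` library. The sketch below is illustrative only: the `train` split name and the record layout are assumptions, so check the dataset card for the actual schema.

```python
# Minimal sketch: inspect the compiled SFT dataset from the Hugging Face Hub.
# The "train" split is an assumption -- check the dataset card for actual splits.
from datasets import load_dataset

ds = load_dataset("DeL-TaiseiOzaki/Tengentoppa-sft-v1.0", split="train")
print(ds)      # number of rows and column names
print(ds[0])   # one record, to see the instruction/response fields
```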

### Inference Guide

This model was SFTed on data from DeL-TaiseiOzaki/Tengentoppa-sft-v1.0, starting from Hibiki252/gemma-2-27b-4bit, a 4-bit quantized copy of google/gemma-2-27b.
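
Below is a minimal inference sketch using `transformers` with `bitsandbytes` 4-bit loading, mirroring the 4-bit setup described above. The repo id, prompt, and generation settings are assumptions for illustration, not values taken from this card; substitute this model's actual Hub repo id before running.

```python
# Minimal inference sketch, assuming transformers + bitsandbytes are installed
# and that you have access to the Gemma weights on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder: the 4-bit base named above; swap in the SFTed model's repo id.
MODEL_ID = "Hibiki252/gemma-2-27b-4bit"

# 4-bit NF4 quantization, mirroring the 4-bit settings the card describes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

# The prompt format is an assumption -- adapt it to the SFT data's actual template.
prompt = "日本の四季について簡単に説明してください。\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, dropping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```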