---
base_model:
  - google/gemma-2-27b
  - Hibiki252/gemma-2-27b-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - gemma2
  - trl
license: apache-2.0
language:
  - en
---

# Uploaded model

- **Developed by:** Hibiki252
- **License:** apache-2.0
- **Finetuned from model:** Hibiki252/gemma-2-27b-4bit

This gemma2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
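
The exact training configuration is not published in this card, but a typical Unsloth + TRL SFT setup looks like the sketch below. All hyperparameters (LoRA rank, batch size, learning rate) and the `dataset_text_field` column name are illustrative assumptions, not the values used for this model; the sketch also assumes a TRL version whose `SFTTrainer` still accepts `tokenizer` and `dataset_text_field` directly (e.g. trl < 0.12, as in Unsloth's notebooks).

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model that this model was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Hibiki252/gemma-2-27b-4bit",
    max_seq_length=2048,  # illustrative, not the published value
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("DeL-TaiseiOzaki/Tengentoppa-sft-v1.0", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name; check the dataset schema
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # illustrative hyperparameters
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```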

## Training Data and License

This model was fine-tuned on the DeL-TaiseiOzaki/Tengentoppa-sft-v1.0 dataset, which is released under the CC BY 4.0 license.
The dataset was compiled from the following publicly available datasets:
- Hachi-Alpaca_newans (GENIAC-Team-Ozaki/Hachi-Alpaca_newans)
- Chatbot Arena Japanese Dataset for Karakuri LM 8x7B Chat v0.1 AWQ (GENIAC-Team-Ozaki/chatbot-arena-ja-karakuri-lm-8x7b-chat-v0.1-awq)
- WikiHow NFQA Japanese Cleaned Dataset (GENIAC-Team-Ozaki/WikiHowNFQA-ja_cleaned)
- Evolutionary Alpaca Generation 3 500 Cleaned Dataset (GENIAC-Team-Ozaki/Evol-Alpaca-gen3-500_cleaned)
- Open Assistant 33k Japanese Reformatted Dataset (GENIAC-Team-Ozaki/oasst2-33k-ja_reformatted)
- SFT Dataset for Self-Taught Evaluators Iteration 1 (Aratako/SFT-Dataset-For-Self-Taught-Evaluators-iter1)
- Japanese Debate Argument Instruction Dataset (GENIAC-Team-Ozaki/debate_argument_instruction_dataset_ja)
- Japanese Helpful-Harmless RLHF 49k Dataset (fujiki/japanese_hh-rlhf-49k)
- Japanese Government FAQs 22k Dataset (GENIAC-Team-Ozaki/JaGovFaqs-22k)
- Evolutionary Helpful-Harmless RLHF Generation 3 1k Cleaned Dataset (GENIAC-Team-Ozaki/Evol-hh-rlhf-gen3-1k_cleaned)
- Magpie Qwen 2.5 32B Reasoning 100k Dataset (DeL-TaiseiOzaki/magpie-qwen2.5-32b-reasoning-100k)
- Japanese Reasoning Finetuning Dataset (DeL-TaiseiOzaki/reasoning-finetuning-ja)
- Magpie LLM-jp-3 13B 20k Dataset (DeL-TaiseiOzaki/magpie-llm-jp-3-13b-20k)
- Magpie SFT Version 1.0 Dataset (llm-jp/magpie-sft-v1.0)
- Aya Japanese Nemotron DPO Masked Dataset (weblab-GENIAC/aya-ja-nemotron-dpo-masked)
- Open Platypus Japanese Masked Dataset (weblab-GENIAC/Open-Platypus-Japanese-masked)
- Synthesized SFT data generated by Mixtral-8x22B (hatakeyama-llm-team/AutoGeneratedJapaneseQA-CC)

## Inference Guide

This model was SFT-trained on data from DeL-TaiseiOzaki/Tengentoppa-sft-v1.0, starting from Hibiki252/gemma-2-27b-4bit, a 4-bit quantized version of google/gemma-2-27b. To run inference, use code along the lines of the sketch below.

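This is a minimal sketch, not the author's published script. The `model_name` below is a hypothetical placeholder for this repository's actual ID, `max_seq_length` and `max_new_tokens` are illustrative, and the plain prompt format is an assumption: match whatever format the SFT data used.

```python
from unsloth import FastLanguageModel

# Hypothetical repository ID; replace with this model repo's actual name.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Hibiki252/gemma-2-27b-sft",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path

# Prompt format is an assumption; align it with the SFT data format.
prompt = "日本の首都はどこですか？"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode and print only the newly generated tokens.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
))
```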