Hibiki252 committed
Commit 50104bd · verified · Parent: 720ade4

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED

@@ -19,6 +19,7 @@ language:
 - **License:** apache-2.0
 - **Finetuned from model :** Hibiki252/gemma-2-27b-4bit
 
+This model is SFTed using data from DeL-TaiseiOzaki/Tengentoppa-sft-v1.0 against Hibiki252/gemma-2-27b-4bit, which is a model stored in google/gemma-2-27b with 4bit settings.
 This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
@@ -46,7 +47,6 @@ The dataset was compiled from the following publicly available datasets:
 ・Synthesis sft data by mixtral-8×22B (hatakeyama-llm-team/AutoGeneratedJapaneseQA-CC)
 
 ### Interfere Guide
-This model is SFTed using data from DeL-TaiseiOzaki/Tengentoppa-sft-v1.0 against Hibiki252/gemma-2-27b-4bit, which is a model stored in google/gemma-2-27b with 4bit settings.
 To perform inference, execute the following code.
 
 (code)
 
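The inference snippet the README points to is elided in this diff ("(code)") and cannot be recovered here. As a minimal sketch only, the following shows the usual Hugging Face `transformers` pattern for generating from a checkpoint like Hibiki252/gemma-2-27b-4bit; the `generate_completion` helper, its defaults, and the prompt handling are illustrative assumptions, not the author's original code.

```python
# Hypothetical inference sketch -- NOT the elided snippet from the README.
# Assumes the model loads through the standard transformers AutoModel API.

def generate_completion(prompt: str, max_new_tokens: int = 256) -> str:
    """Load Hibiki252/gemma-2-27b-4bit and return a text completion.

    Imports live inside the function so this sketch can be read and
    syntax-checked without torch/transformers installed; calling it
    requires a GPU environment with those libraries available.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Hibiki252/gemma-2-27b-4bit"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",  # spread layers across available devices
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    completion_ids = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(completion_ids, skip_special_tokens=True)
```

The function-level imports are a deliberate choice: the heavy model download and GPU placement only happen when the helper is actually called.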