HumanF-MarkrAI/Gukbap-Qwen2.5-7B🍚
Model Details🍚
Model Description
- Developed by: HumanF-MarkrAI
- Model type: Ko-Qwen2.5-7B
- Language(s): Korean
- Context Length: 8192
- License: cc-by-nc-4.0
- Finetuned from model: Qwen/Qwen2.5-7B-Instruct
Model Sources
For training, we used four A100 40GB GPUs.
Implications🍚
Achieving Top-Level Korean Language Performance Surpassing GPT-4 Using Only Open-Source LLMs🔥
Recently, numerous state-of-the-art (SOTA) models have leveraged data generated by private models (e.g., ChatGPT, GPT-4) for LLM training, as seen in projects like OpenOrca, Ultrafeedback, and OpenHermes.
However, this approach may violate these private models' terms of service (ToS).
For instance, OpenAI's license explicitly states: "⚠️Use Limitation: Creating services that compete with OpenAI.⚠️"
This implies that using data generated by private models to create unrestricted, open LLMs is challenging.
In this context, our model is significant in that it was trained solely on a proprietary dataset generated through open-source models. Furthermore, it achieved an impressive score of 🔥8.39🔥 on the Korean LogicKor evaluation, the SOTA among Korean-based LLMs under 7B parameters.
The Gukbap-Series LLM🍚 was developed using the data processing and supervised fine-tuning (SFT) methods proposed by LIMA and WizardLM. This demonstrates ⭐the potential to create unrestricted, general-purpose LLMs using datasets generated solely with open-source LLMs.⭐
Training Method (SFT)
The following papers describe the foundational methodologies behind our dataset construction and training procedure.
SFT Datasets (Private)
To build our open-source-based dataset, we used microsoft/WizardLM-2-8x22B served through DeepInfra.
Our datasets are generated with the Evol-Instruct system proposed by WizardLM.
For training, we used 1,849 training samples and 200 validation samples.
- Wizard-Korea-Datasets: MarkrAI/Markr_WizardLM_train_ver4.
- Wizard-Korea-Valid: WizardLM_Evol_valid.
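The Evol-Instruct-style generation described above can be sketched as follows. This is a minimal illustration only: the strategy list, prompt wording, and function name are assumptions, not the actual (private) pipeline, and the real system sends each evolved prompt to microsoft/WizardLM-2-8x22B via an API such as DeepInfra.

```python
import random

# In-depth evolution strategies in the spirit of WizardLM's Evol-Instruct;
# the exact wording here is illustrative, not the authors' private prompts.
DEPTH_STRATEGIES = [
    "Add one more constraint or requirement to the instruction.",
    "Require multi-step reasoning to answer the instruction.",
    "Replace a general concept in the instruction with a more specific one.",
]

def build_evol_prompt(seed_instruction: str, rng: random.Random) -> str:
    """Wrap a seed instruction in an evolution prompt for the generator LLM."""
    strategy = rng.choice(DEPTH_STRATEGIES)
    return (
        "Rewrite the instruction below into a more complex version.\n"
        f"Method: {strategy}\n"
        "Keep the rewritten instruction answerable and in Korean.\n\n"
        f"#Instruction#: {seed_instruction}"
    )

prompt = build_evol_prompt("국밥 만드는 법을 알려줘.", random.Random(0))
print(prompt)
```

In the full pipeline, each evolved instruction would be answered by the generator model and filtered before entering the SFT set.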
Validation loss (epoch 15; Learning rate: 1e-5): 0.9075
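A minimal configuration sketch matching the reported setup (15 epochs, learning rate 1e-5). Only those two values come from this card; the batch size, precision, and the use of Hugging Face `TrainingArguments` are illustrative assumptions, not the authors' actual training script.

```python
from transformers import TrainingArguments

# Hypothetical SFT configuration; only num_train_epochs and learning_rate
# are taken from the card -- everything else is an illustrative assumption.
args = TrainingArguments(
    output_dir="gukbap-qwen2.5-7b-sft",
    num_train_epochs=15,             # validation loss reported at epoch 15
    learning_rate=1e-5,              # reported learning rate
    per_device_train_batch_size=1,   # assumption (4x A100 40GB)
    gradient_accumulation_steps=8,   # assumption
    bf16=True,                       # assumption
)
```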
Benchmark Score (Zero-shot)
We evaluated LogicKor internally, using gpt-4-1106-preview as the judge, in the same manner as the official LogicKor-v2 evaluation.
(GPT-4o occasionally makes grading errors; for example, it sometimes assigns a score of 0 to English responses even for questions that were supposed to be answered in English.)
| Model | 추론 | 수학 | 글쓰기 | 코딩 | 이해 | 문법 | 싱글턴 | 멀티턴 | Overall |
|---|---|---|---|---|---|---|---|---|---|
| OpenAI/gpt-4o-2024-05-13 | 9.50 | 8.71 | 9.42 | 9.21 | 9.71 | 9.42 | 9.42 | 9.23 | 9.33 |
| Anthropic/claude-3-5-sonnet-20240620 | 8.64 | 8.42 | 9.85 | 9.78 | 9.92 | 9.21 | 9.26 | 9.35 | 9.30 |
| google/gemini-1.5-pro-001 | 9.07 | 8.57 | 9.57 | 9.78 | 9.57 | 9.21 | 9.40 | 9.19 | 9.23 |
| Gukbap-Qwen2.5-7B🍚 | 8.57 | 8.93 | 9.50 | 9.07 | 9.21 | 5.07 | 8.71 | 8.07 | 8.39 |
| Gukbap-Qwen2-7B🍚 | 5.71 | 6.43 | 8.07 | 9.14 | 7.29 | 3.57 | 7.02 | 6.38 | 6.70 |
| mirlab/AkaLlama-llama3-70b-v0.1 | 5.14 | 5.35 | 4.14 | 9.00 | 7.85 | 7.50 | 5.97 | 7.02 | 6.50 |
| Qwen/Qwen2-7B-Instruct | 6.07 | 4.71 | 7.21 | 7.00 | 8.00 | 4.85 | 6.61 | 6.00 | 6.30 |
| yanolja/EEVE-Korean-Instruct-10.8B-v1.0 | 6.00 | 3.64 | 6.64 | 5.64 | 8.42 | 5.85 | 6.61 | 5.45 | 6.01 |
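The Overall column appears to be the unweighted mean of the six category scores (equivalently, the mean of the single-turn and multi-turn averages). A quick check for Gukbap-Qwen2.5-7B:

```python
# Category scores for Gukbap-Qwen2.5-7B from the table above.
scores = {
    "추론": 8.57, "수학": 8.93, "글쓰기": 9.50,
    "코딩": 9.07, "이해": 9.21, "문법": 5.07,
}
overall = round(sum(scores.values()) / len(scores), 2)
print(overall)  # 8.39, matching the reported Overall score
```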
If you want to check the model's outputs, please see our ⭐answer⭐ file!
Benchmark Code
Our code is based on maywell's LogicKor code.
We followed maywell's evaluation method, including the judge_template, prompt, etc.
Chat Prompt
```
<|im_start|>user
Hello! My favorite food is Gukbap🍚!<|im_end|>
<|im_start|>assistant
(model answer)
```
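The prompt above follows Qwen's ChatML convention. The sketch below renders messages into that format in plain Python as a stand-in for `tokenizer.apply_chat_template(..., add_generation_prompt=True)`; note that the real Qwen2.5 template also injects a default system message, which is omitted here for brevity.

```python
def build_chatml_prompt(messages):
    """Render a message list into the ChatML format shown above.
    A simplified stand-in for transformers' apply_chat_template."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Trailing assistant header so the model generates the answer next.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "user", "content": "Hello! My favorite food is Gukbap🍚!"}
])
print(prompt)
```

For actual inference, prefer the tokenizer's built-in chat template so the system message and any template updates are applied automatically.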
Gukbap-Series models🍚🍚
BibTeX
```bibtex
@article{HumanF-MarkrAI,
  title={Gukbap-Qwen2.5-7B},
  author={MarkrAI},
  year={2024},
  url={https://huggingface.co/HumanF-MarkrAI}
}
```