|
<p align="left"> |
|
<a href="./README.md">中文</a>  |  <a href="README_EN.md">English</a>
|
</p> |
|
<br> |
|
|
|
<div align="center"> |
|
<h1> |
|
360Zhinao (360智脑)
|
</h1> |
|
</div> |
|
<div align="center"> |
|
🤗 <a href="">Hugging Face</a>   |    |
|
🤖 <a href="">ModelScope</a>   |    |
|
💬 <a href="">WeChat (微信)</a>   |
|
</div> |
|
<br> |
|
<p align="center"> |
|
Visit the official 360Zhinao website at <a href="https://ai.360.com"> https://ai.360.com </a> to try more powerful features.
|
</p> |
|
|
|
# Introduction
|
🎉🎉🎉 We have open-sourced the 360Zhinao model series. This release includes the following models:
|
- **360Zhinao-7B-Base** |
|
- **360Zhinao-7B-Chat-4K** |
|
- **360Zhinao-7B-Chat-32K** |
|
- **360Zhinao-7B-Chat-360K** |
|
|
|
Key features of the 360Zhinao models:

- **Base model**: trained on a high-quality corpus of 3.4 trillion tokens, consisting mainly of Chinese, English and code; competitive with models of the same size on relevant benchmarks.

- **Chat models**: strong conversational ability, released with three context lengths (4K, 32K and 360K). To our knowledge, the 360K context (about 500,000 Chinese characters) is the longest among Chinese open-source models to date.
|
|
|
# Updates
|
- [2024.04.10] We released 360Zhinao-7B v1.0, including the Base model and Chat models with 4K, 32K and 360K context lengths.
|
|
|
# Table of Contents

- [Download URLs](#download-urls)

- [Model Evaluation](#model-evaluation)

- [Quickstart](#quickstart)

- [Model Inference](#model-inference)

- [Model Fine-tuning](#model-fine-tuning)

- [License](#license)
|
|
|
# Download URLs

The models released in this version and their download links are listed below:

| Size | Zhinao-Base | Zhinao-Chat | Zhinao-Chat (Int8) | Zhinao-Chat (Int4) |
|
|-|-|-|-|-| |
|
| 1.8B | <a href="">🤖</a> <a href="">🤗</a> | <a href="">🤖</a> <a href="">🤗</a> | <a href="">🤖</a> <a href="">🤗</a> | <a href="">🤖</a> <a href="">🤗</a> | |
|
| 7B | <a href="">🤖</a> <a href="">🤗</a> | <a href="">🤖</a> <a href="">🤗</a> | <a href="">🤖</a> <a href="">🤗</a> | <a href="">🤖</a> <a href="">🤗</a> | |
|
|
|
# Model Evaluation

We evaluated the models on mainstream benchmark datasets from OpenCompass, including C-Eval, AGIEval, MMLU, CMMLU, HellaSwag, MATH, GSM8K, HumanEval, MBPP, BBH and LAMBADA. These benchmarks cover natural language understanding, knowledge, mathematical computation and reasoning, code generation, and logical reasoning.
|
|
|
## Base Model
|
| Model | C-Eval | AGIEval | MMLU | CMMLU | HellaSwag | MATH | GSM8K | HumanEval | MBPP | BBH | LAMBADA | |
|
| - | - | - | - | - | - | - | - | - | - | - | - | |
|
| Phi-1.5-1.3B | 27.8 | 23.4 | 44.3 | 26 | 57.1 | 2.6 | 32.5 | 25 | 33 | 29.6 | 54.6 | |
|
| Qwen-1.8B | 53.3 | 36.5 | 46.4 | 51.9 | 58.7 | 2.4 | 10.2 | 7.3 | 14 | 22.6 | 54.3 | |
|
| Qwen-1.5-1.8B | 59.48 | 38.76 | 47.14 | 57.08 | 56.02 | 9.66 | 34.87 | 23.17 | 17.6 | 27.02 | 56.49 | |
|
| Baichuan2-7B-Base | 56.3 | 34.6 | 54.7 | 57 | 67 | 5.4 | 24.6 | 17.7 | 24 | 41.8 | 73.3 | |
|
| ChatGLM3-6B-Base | 67 | 47.4 | 62.8 | 66.5 | 76.5 | 19.2 | 61 | 44.5 | 57.2 | 66.2 | 77.1 | |
|
| DeepSeek-7B-Base | 45 | 24 | 49.3 | 46.8 | 73.4 | 4.2 | 18.3 | 25 | 36.4 | 42.8 | 72.6 | |
|
| InternLM2-7B | 65.7 | 50.2 | 65.5 | 66.2 | 79.6 | 19.9 | 70.6 | 41.5 | 42.4 | 64.4 | 72.1 | |
|
| InternLM-7B | 53.4 | 36.9 | 51 | 51.8 | 70.6 | 6.3 | 31.2 | 13.4 | 14 | 37 | 67 | |
|
| LLaMA-2-7B | 32.5 | 21.8 | 46.8 | 31.8 | 74 | 3.3 | 16.7 | 12.8 | 14.8 | 38.2 | 73.3 | |
|
| LLaMA-7B | 27.3 | 20.6 | 35.6 | 26.8 | 74.3 | 2.9 | 10 | 12.8 | 16.8 | 33.5 | 73.3 | |
|
| Mistral-7B-v0.1 | 47.4 | 32.8 | 64.1 | 44.7 | 78.9 | 11.3 | 47.5 | 27.4 | 38.6 | 56.7 | 75 | |
|
| MPT-7B | 23.5 | 21.3 | 27.5 | 25.9 | 75 | 2.9 | 9.1 | 17.1 | 22.8 | 35.6 | 70 | |
|
| Qwen-7B | 63.4 | 45.3 | 59.7 | 62.5 | 75 | 13.3 | 54.1 | 27.4 | 31.4 | 45.2 | 67.5 | |
|
| XVERSE-7B | 61.1 | 39 | 58.4 | 60.8 | 73.7 | 2.2 | 11.7 | 4.9 | 10.2 | 31 | 24 | |
|
| Yi-6B | 73 | 44.3 | 64 | 73.5 | 73.1 | 6.3 | 39.9 | 15.2 | 23.6 | 44.9 | 68 | |
|
| Zhinao-1.8B-Base | 49.78 | 31.87 | 50.05 | 52.58 | 57.31 | 4.82 | 15.01 | 14.02 | 19.4 | 29.76 | 69.77 | |
|
| 360Zhinao-7B-Base | 74.11 | 49.49 | 67.44 | 72.38 | 83.05 | 16.38 | 53.83 | 35.98 | 42.4 | 43.95 | 78.59 | |
|
|
|
The results above can be found on, or reproduced from, the official [OpenCompass leaderboard](https://rank.opencompass.org.cn/leaderboard-llm).
|
|
|
# Quickstart

The following simple examples show how to quickly use 360Zhinao-7B-Base and 360Zhinao-7B-Chat with 🤖 ModelScope and 🤗 Transformers.
|
|
|
## Dependency Installation

- Python 3.8 and above

- PyTorch 2.0 and above

- transformers 4.37.2 and above

- CUDA 11.4 and above is recommended.
|
|
|
```shell |
|
pip install -r requirements.txt |
|
``` |
|
We recommend installing flash-attention (flash-attention 2 is currently supported) to improve performance and reduce GPU memory usage. flash-attention is optional; the project runs without it.
|
|
|
>flash-attn >= 2.3.6 |
|
```shell |
|
FLASH_ATTENTION_FORCE_BUILD=TRUE pip install flash-attn==2.3.6 |
|
``` |
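
To confirm that flash-attention is actually available at runtime, a quick import check helps (a minimal sketch; when the import fails, the model simply falls back to standard attention):

```python
# Optional sanity check: is flash-attn importable in the current environment?
try:
    import flash_attn
    print("flash-attn available, version:", flash_attn.__version__)
except ImportError:
    print("flash-attn not installed; standard attention will be used.")
```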
|
|
|
|
|
## 🤗 Transformers |
|
### Base Model Inference

This example shows how to run inference with 360Zhinao-7B-Base using transformers:
|
```python |
|
from transformers import AutoTokenizer, AutoModelForCausalLM |
|
from transformers.generation import GenerationConfig |
|
|
|
MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Base" |
|
|
|
tokenizer = AutoTokenizer.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
trust_remote_code=True) |
|
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
device_map="auto", |
|
trust_remote_code=True) |
|
|
|
generation_config = GenerationConfig.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
trust_remote_code=True) |
|
|
|
inputs = tokenizer('中国二十四节气\n1. 立春\n2. 雨水\n3. 惊蛰\n4. 春分\n5. 清明\n', return_tensors='pt') |
|
inputs = inputs.to(model.device) |
|
|
|
pred = model.generate(input_ids=inputs["input_ids"], generation_config=generation_config) |
|
print("outputs:\n", tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)) |
|
``` |
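
If you prefer to control decoding directly rather than relying on the checkpoint's saved `GenerationConfig`, the sampling parameters can be passed to `generate()` explicitly. A minimal sketch reusing the objects above; the parameter values are illustrative, not recommended defaults:

```python
# Explicit decoding parameters override the saved generation config for this call.
pred = model.generate(
    input_ids=inputs["input_ids"],
    max_new_tokens=128,  # cap the length of the continuation
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.8,
    temperature=0.7,
)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```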
|
### Chat Model Inference

This example shows how to run inference with 360Zhinao-7B-Chat-4K using transformers:
|
```python |
|
from transformers import AutoTokenizer, AutoModelForCausalLM |
|
from transformers.generation import GenerationConfig |
|
|
|
MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Chat-4K" |
|
|
|
tokenizer = AutoTokenizer.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
trust_remote_code=True) |
|
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
device_map="auto", |
|
trust_remote_code=True) |
|
|
|
generation_config = GenerationConfig.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
trust_remote_code=True) |
|
|
|
messages = [] |
|
#round-1 |
|
messages.append({"role": "user", "content": "介绍一下刘德华"}) |
|
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config) |
|
messages.append({"role": "assistant", "content": response}) |
|
print(messages) |
|
|
|
#round-2 |
|
messages.append({"role": "user", "content": "他有什么代表作?"}) |
|
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config) |
|
messages.append({"role": "assistant", "content": response}) |
|
print(messages) |
|
``` |
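
The two rounds above generalize to an interactive loop. A minimal sketch built on the same objects (`model.chat` is the repository's remote-code chat helper used in the snippet):

```python
# Interactive multi-turn chat reusing the tokenizer/model/config created above.
messages = []
while True:
    query = input("user> ")
    if query.strip().lower() in {"exit", "quit"}:
        break
    messages.append({"role": "user", "content": query})
    response = model.chat(tokenizer=tokenizer, messages=messages,
                          generation_config=generation_config)
    messages.append({"role": "assistant", "content": response})
    print("assistant>", response)
```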
|
|
|
## 🤖 ModelScope |
|
### Base Model Inference

This example shows how to run inference with 360Zhinao-7B-Base using ModelScope:
|
|
|
|
|
```python |
|
from modelscope import AutoModelForCausalLM, AutoTokenizer |
|
from modelscope import GenerationConfig |
|
|
|
MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Base" |
|
|
|
tokenizer = AutoTokenizer.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
trust_remote_code=True) |
|
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
device_map="auto", |
|
trust_remote_code=True) |
|
|
|
generation_config = GenerationConfig.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
trust_remote_code=True) |
|
|
|
inputs = tokenizer('中国二十四节气\n1. 立春\n2. 雨水\n3. 惊蛰\n4. 春分\n5. 清明\n', return_tensors='pt') |
|
inputs = inputs.to(model.device) |
|
|
|
pred = model.generate(input_ids=inputs["input_ids"], generation_config=generation_config) |
|
print("outputs:\n", tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)) |
|
``` |
|
|
|
### Chat Model Inference

This example shows how to run inference with 360Zhinao-7B-Chat-4K using ModelScope:
|
```python |
|
from modelscope import AutoModelForCausalLM, AutoTokenizer |
|
from modelscope import GenerationConfig |
|
|
|
MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Chat-4K" |
|
|
|
tokenizer = AutoTokenizer.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
trust_remote_code=True) |
|
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
device_map="auto", |
|
trust_remote_code=True) |
|
|
|
generation_config = GenerationConfig.from_pretrained( |
|
MODEL_NAME_OR_PATH, |
|
trust_remote_code=True) |
|
|
|
messages = [] |
|
#round-1 |
|
messages.append({"role": "user", "content": "介绍一下刘德华"}) |
|
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config) |
|
messages.append({"role": "assistant", "content": response}) |
|
print(messages) |
|
|
|
#round-2 |
|
messages.append({"role": "user", "content": "他有什么代表作?"}) |
|
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config) |
|
messages.append({"role": "assistant", "content": response}) |
|
print(messages) |
|
``` |
|
|
|
## CLI Demo

Run the following command for a quick interactive experience in the terminal:
|
```shell |
|
python cli_demo.py |
|
``` |
|
<p align="center"> |
|
<img src="assets/cli_demo.gif" width="600" /> |
|
</p>
|
|
|
## Web Demo

Alternatively, run the following command for a quick interactive experience in the browser:
|
```shell |
|
streamlit run web_demo.py |
|
``` |
|
<p align="center"> |
|
<img src="assets/web_demo.gif" width="600" /> |
|
</p>
|
|
|
## API Demo

Start the server:
|
```shell |
|
python openai_api.py |
|
``` |
|
|
|
Example request:
|
```shell |
|
curl --location --request POST 'http://localhost:8360/v1/chat/completions' \ |
|
--header 'Content-Type: application/json' \ |
|
--data-raw '{ |
|
"max_new_tokens": 200, |
|
"do_sample": true, |
|
"top_k": 0, |
|
"top_p": 0.8, |
|
"temperature": 1.0, |
|
"repetition_penalty": 1.0, |
|
"messages": [ |
|
{ |
|
"role": "user", |
|
"content": "你叫什么名字" |
|
} |
|
] |
|
}' |
|
``` |
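
The same endpoint can also be called from Python. A minimal sketch using `requests`, assuming the server above is listening on localhost:8360:

```python
import requests

# Same payload as the curl example above, sent from Python.
payload = {
    "max_new_tokens": 200,
    "do_sample": True,
    "top_k": 0,
    "top_p": 0.8,
    "temperature": 1.0,
    "repetition_penalty": 1.0,
    "messages": [{"role": "user", "content": "你叫什么名字"}],
}
resp = requests.post("http://localhost:8360/v1/chat/completions", json=payload)
print(resp.json())
```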
|
|
|
# Model Inference

## Quantization

We provide an AutoGPTQ-based quantization scheme and release Int4 quantized models. Quality loss is small, while GPU memory usage drops significantly and inference speeds up.

We evaluated the BF16, Int8 and Int4 models on standard benchmarks; the results are shown below:
|
|
|
| Quantization | MMLU | CEval (val) | GSM8K | HumanEval |
|
|-|-|-|-|-| |
|
| 360Zhinao-7B-Chat-4K (BF16) |-|-|-|-| |
|
| 360Zhinao-7B-Chat-4K (Int8) |-|-|-|-| |
|
| 360Zhinao-7B-Chat-4K (Int4) |-|-|-|-| |
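
Loading a quantized checkpoint follows the same pattern as the BF16 examples above. A sketch assuming the Int4 weights are published under an `-Int4` suffix (the repository id below is an assumption; use the actual link from the download table) and that `auto-gptq` is installed alongside transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical Int4 repo id; substitute the actual id from the download table.
MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Chat-4K-Int4"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME_OR_PATH, trust_remote_code=True)
# GPTQ-quantized weights load through the same from_pretrained API
# when auto-gptq is available in the environment.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME_OR_PATH,
    device_map="auto",
    trust_remote_code=True)
```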
|
|
|
## Deployment

### vLLM Installation

For deployment and accelerated inference, we recommend `vLLM==0.3.3`.

If you are using **CUDA 12.1 and PyTorch 2.1**, install vLLM directly with:
|
```shell |
|
pip install vllm==0.3.3 |
|
``` |
|
|
|
Otherwise, refer to the official vLLM [installation instructions](https://docs.vllm.ai/en/latest/getting_started/installation.html).
|
|
|
> After installation, two more steps are required:

1. Copy vllm/zhinao.py into the vllm/model_executor/models directory of your vLLM environment.

2. Then add the following line to vllm/model_executor/models/\_\_init\_\_.py:
|
|
|
```python
|
"ZhinaoForCausalLM": ("zhinao", "ZhinaoForCausalLM"), |
|
``` |
|
|
|
### Starting the vLLM Service

Start the service:
|
```shell |
|
python -m vllm.entrypoints.openai.api_server \ |
|
--served-model-name 360Zhinao-7B-Chat-4K \ |
|
--model qihoo360/360Zhinao-7B-Chat-4K \ |
|
--trust-remote-code \ |
|
--tensor-parallel-size 1 \
|
--max-model-len 18000 \ |
|
--host 0.0.0.0 \ |
|
--port 8360 |
|
``` |
|
|
|
Query the service with curl:
|
```shell |
|
curl http://localhost:8360/v1/chat/completions \ |
|
-H "Content-Type: application/json" \ |
|
-d '{ |
|
"model": "360Zhinao-7B-Chat-4K", |
|
"max_tokens": 200, |
|
"top_k": 0, |
|
"top_p": 0.8, |
|
"temperature": 1.0, |
|
"presence_penalty": 0.0, |
|
"frequency_penalty": 0.0, |
|
"messages": [ |
|
{"role": "system", "content": "You are a helpful assistant."}, |
|
{"role": "user", "content": "你好"} |
|
], |
|
"stop": [ |
|
"<eod>", |
|
"<|im_end|>", |
|
"<|im_start|>" |
|
] |
|
}' |
|
``` |
|
Query the service with Python:
|
```python |
|
from openai import OpenAI |
|
# Set OpenAI's API key and API base to use vLLM's API server. |
|
openai_api_key = "EMPTY" |
|
openai_api_base = "http://localhost:8360/v1"
|
|
|
client = OpenAI( |
|
api_key=openai_api_key, |
|
base_url=openai_api_base, |
|
) |
|
|
|
chat_response = client.chat.completions.create( |
|
model="360Zhinao-7B-Chat-4K", |
|
messages=[ |
|
{"role": "system", "content": "You are a helpful assistant."}, |
|
{"role": "user", "content": "你好"}, |
|
], |
|
stop=[ |
|
"<eod>", |
|
"<|im_end|>", |
|
"<|im_start|>" |
|
], |
|
presence_penalty=0.0, |
|
frequency_penalty=0.0 |
|
) |
|
print("Chat response:", chat_response) |
|
``` |
|
|
|
> Note: if you need a repetition penalty, we recommend using the *presence_penalty* and *frequency_penalty* parameters.
|
|
|
# Model Fine-tuning

## Training Data

We provide sample fine-tuning data in data/test.json: 10,000 examples sampled from [multiturn_chat_0.8M](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) and converted to the format below.
|
|
|
Data format:
|
```json |
|
[ |
|
{ |
|
"id": 1, |
|
"conversations": [ |
|
{ |
|
"from": "system", |
|
"value": "You are a helpful assistant." |
|
}, |
|
{ |
|
"from": "user", |
|
"value": "您好啊" |
|
}, |
|
{ |
|
"from": "assistant", |
|
"value": "你好!我今天能为您做些什么?有什么问题或需要帮助吗? 我在这里为您提供服务。" |
|
} |
|
] |
|
} |
|
] |
|
``` |
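
Before training on your own data, a small script can sanity-check it against this schema. A minimal sketch, with field names taken from the example above:

```python
import json

# Validate every record against the schema shown above.
VALID_ROLES = {"system", "user", "assistant"}

with open("data/test.json", encoding="utf-8") as f:
    records = json.load(f)

for rec in records:
    assert "id" in rec and "conversations" in rec, f"malformed record: {rec}"
    for turn in rec["conversations"]:
        assert turn["from"] in VALID_ROLES, f"unknown role: {turn['from']}"
        assert isinstance(turn["value"], str)

print(f"{len(records)} records passed the format check")
```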
|
## Fine-tuning Script

The training script is shown below:
|
```shell |
|
set -x |
|
|
|
HOSTFILE=hostfile |
|
DS_CONFIG=./finetune/ds_config_zero2.json |
|
|
|
# PARAMS |
|
LR=5e-6 |
|
EPOCHS=3 |
|
MAX_LEN=4096 |
|
BATCH_SIZE=4 |
|
NUM_NODES=1 |
|
NUM_GPUS=8 |
|
MASTER_PORT=29500 |
|
|
|
IS_CONCAT=False # whether to concatenate training samples up to the maximum length (MAX_LEN)
|
|
|
DATA_PATH="./data/training_data_sample.json" |
|
MODEL_PATH="qihoo360/360Zhinao-7B-Base" |
|
OUTPUT_DIR="./outputs/" |
|
|
|
deepspeed --hostfile ${HOSTFILE} \ |
|
--master_port ${MASTER_PORT} \ |
|
--num_nodes ${NUM_NODES} \ |
|
--num_gpus ${NUM_GPUS} \ |
|
finetune.py \ |
|
--report_to "tensorboard" \ |
|
--data_path ${DATA_PATH} \ |
|
--model_name_or_path ${MODEL_PATH} \ |
|
--output_dir ${OUTPUT_DIR} \ |
|
--model_max_length ${MAX_LEN} \ |
|
--num_train_epochs ${EPOCHS} \ |
|
--per_device_train_batch_size ${BATCH_SIZE} \ |
|
--gradient_accumulation_steps 1 \ |
|
--save_strategy steps \ |
|
--save_steps 200 \ |
|
--learning_rate ${LR} \ |
|
--lr_scheduler_type cosine \ |
|
--adam_beta1 0.9 \ |
|
--adam_beta2 0.95 \ |
|
--adam_epsilon 1e-8 \ |
|
--max_grad_norm 1.0 \ |
|
--weight_decay 0.1 \ |
|
--warmup_ratio 0.01 \ |
|
--gradient_checkpointing True \ |
|
--bf16 True \ |
|
--tf32 True \ |
|
--deepspeed ${DS_CONFIG} \ |
|
--is_concat ${IS_CONCAT} \ |
|
--logging_steps 1 \ |
|
--log_on_each_node False |
|
``` |
|
```shell |
|
bash finetune/ds_finetune.sh |
|
``` |
|
- Single-node and multi-node training can be selected via the hostfile; a sketch of a hostfile follows this list.

- ZeRO stage 2 or stage 3 can be selected via ds_config.

- Mixed-precision training can be configured via fp16 or bf16; bf16 is recommended, consistent with the pretrained model.

- The is_concat parameter controls whether training samples are concatenated up to the maximum length; for large datasets, concatenation improves training efficiency.
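
For reference, a DeepSpeed hostfile for two 8-GPU nodes might look like the following; the hostnames are placeholders, and for single-node training the hostfile can simply list the local machine:

```shell
# DeepSpeed hostfile: one line per node with its GPU slot count.
worker-1 slots=8
worker-2 slots=8
```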
|
|
|
# License

The source code of this repository is licensed under Apache 2.0.

The 360Zhinao open-source models support commercial use. To use these models or their derivatives commercially, please apply by email ([email protected]); see the [360Zhinao Open-Source Model License](./360智脑开源模型许可证.txt) for the specific license agreement.
|
|