---
license: mit
language:
- zh
- en
base_model:
- deepseek-ai/deepseek-llm-7b-chat
---

# Deep **<u>Seek</u>-<u>Fake</u>-<u>News</u>** LLM
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="figures/logo_v1.0.png" width="60%" alt="DeepSeekFakeNews-LLM" />
</div>

<p align="center">
  <a href="https://github.com/TAN-OpenLab"><b>Project Link</b>👁️</a>
  <a href="http://faculty.neu.edu.cn/tanzhenhua/zh_CN/index/100352/list/index.htm"><b>Lab Link</b>👁️</a>
</p>

### 1. Introduction of Deep **<u>Seek</u>-<u>Fake</u>-<u>News</u>** LLM

### 2. Model Summary

`deepseekfakenews-llm-7b-chat` is a 7B-parameter model initialized from `deepseek-llm-7b-chat` and fine-tuned on additional fake-news instruction data.

- **Home Page:** [DeepSeekFakeNews](https://deepseek.com/)
- **Repository:** [zt-ai/DeepSeekFakeNews-LLM-7B-Chat](https://github.com/TAN-OpenLab)
- **Demo of Chatting With DeepSeekFakeNews-LLM:** coming soon!

### 3. How to Use

Here are some examples of how to use our model.

```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

# Load the base model first, then attach the fine-tuned LoRA adapter.
base_model_name = "deepseek-ai/deepseek-llm-7b-chat"
model_name = "zt-ai/DeepSeekFakeNews-llm-7b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype=torch.bfloat16)
base_model.generation_config = GenerationConfig.from_pretrained(base_model_name)
base_model.generation_config.pad_token_id = base_model.generation_config.eos_token_id
model = PeftModel.from_pretrained(base_model, model_name)

# The prompt asks the model to judge a news item against five cues of fake news:
# 1. logical and factual contradictions; 2. quoting out of context and misleading
# information; 3. exaggerated, attention-grabbing headlines; 4. emotional or
# extreme language; 5. bias and one-sided framing. Fill in the publish time,
# title, and body of the news item after the corresponding fields.
messages = [
    {
        "role": "user",
        "content":
"""假新闻的表现可以总结为以下几个方面:1. 逻辑和事实矛盾。2.断章取义和误导性信息。3.夸张标题和吸引眼球的内容。4.情绪化和极端语言。5.偏见和单一立场。请从这几个方面分析新闻的真实性(真新闻或假新闻):
发布时间:
新闻标题:
新闻内容:
"""}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```
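
In the example above, the `发布时间` (publish time), `新闻标题` (news title), and `新闻内容` (news content) fields are left empty and should be filled with the article you want to analyze. The sketch below shows one way to do that; the `build_fakenews_prompt` helper is a hypothetical convenience function for illustration, not part of the released code.

```python
# Hypothetical helper (not part of the released code): fill the empty fields of
# the analysis prompt with a concrete news item before sending it to the model.
PROMPT_TEMPLATE = (
    "假新闻的表现可以总结为以下几个方面:1. 逻辑和事实矛盾。2.断章取义和误导性信息。"
    "3.夸张标题和吸引眼球的内容。4.情绪化和极端语言。5.偏见和单一立场。"
    "请从这几个方面分析新闻的真实性(真新闻或假新闻):\n"
    "发布时间:{publish_time}\n"
    "新闻标题:{title}\n"
    "新闻内容:{content}\n"
)

def build_fakenews_prompt(publish_time: str, title: str, content: str) -> str:
    """Format one news item (publish time, title, body) into the analysis prompt."""
    return PROMPT_TEMPLATE.format(publish_time=publish_time, title=title, content=content)

# Example usage: replace the placeholder strings with a real article.
messages = [{
    "role": "user",
    "content": build_fakenews_prompt("2024-01-01", "<news title>", "<news body>"),
}]
```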

If you prefer not to use the provided `apply_chat_template` function, you can also interact with our model by following the plain-text template below. Note that `messages[0]['content']` should be replaced by your input.

```
User: {messages[0]['content']}

Assistant:
```

**Note:** By default (`add_special_tokens=True`), our tokenizer automatically adds a `bos_token` (`<|begin▁of▁sentence|>`) before the input text. Additionally, since the system prompt is not compatible with this version of our models, we DO NOT RECOMMEND including the system prompt in your input.
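
For reference, a rough sketch of what this manual approach can look like is given below. It is an assumption based on the plain-text template and the note above rather than code shipped with the model; it reuses the `tokenizer` and `model` objects from the earlier example and relies on the tokenizer prepending the `bos_token` automatically, since `add_special_tokens` defaults to `True`.

```python
# Build the prompt by hand following the plain-text template above
# (assumes `tokenizer` and `model` from the earlier example are loaded).
user_input = "<your news-analysis prompt here>"
prompt = f"User: {user_input}\n\nAssistant:"

# add_special_tokens=True (the default) prepends the bos_token for us.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"].to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(result)
```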

### 4. License

This code repository is licensed under the MIT License. The use of DeepSeekFakeNews-LLM models is subject to the Model License. DeepSeekFakeNews-LLM supports commercial use.

<!-- See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-LLM/blob/main/LICENSE-MODEL) for more details. -->

### 5. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).