---
license: mit
---
<div align="center">
<h1 align="center"> KnowRL </h1>
<h3 align="center"> Exploring Knowledgeable Reinforcement Learning for Factuality </h3>

<p align="center">
<a href="https://arxiv.org/abs/2506.19807">📄arXiv</a> •
<a href="https://github.com/zjunlp/KnowRL">💻GitHub Repo</a> •
<a href="https://huggingface.co/datasets/zjunlp/KnowRL-Train-Data">📖Dataset</a>
</p>
</div>

---

## Model Description

**KnowRL-Skywork-OR1-7B-Preview** is a slow-thinking language model produced by applying our **KnowRL** framework to the base model `Skywork-OR1-7B-Preview`.

**KnowRL (Knowledgeable Reinforcement Learning)** mitigates hallucinations in Large Language Models (LLMs) by integrating external knowledge directly into the training process. During reinforcement learning, a reward signal explicitly encourages factual accuracy in the model's reasoning, helping it learn the boundaries of its own knowledge.

As a result, this model shows a significant reduction in hallucinations on factual benchmarks while preserving, and in some cases enhancing, the strong reasoning capabilities inherited from its base model.

## How to Use

### Using the `transformers` Library

You can use this model with the `transformers` library for text generation. For best results, follow the model's prompt format, which wraps reasoning and final answers in `<think>` and `<answer>` tags.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Select the device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the model and tokenizer
model_name = "zjunlp/KnowRL-Skywork-OR1-7B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device)

# Build the prompt with the model's chat template
prompt = "What is the main function of the mitochondria?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate a response
inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode and print the output
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
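
Since the model emits its reasoning inside `<think>` tags and its final answer inside `<answer>` tags, you may want to separate the two after decoding. Below is a minimal sketch of one way to do this; `split_reasoning` is a hypothetical helper (not part of this model's tooling), and it assumes the tags survive decoding (they may be stripped if registered as special tokens).

```python
import re

def split_reasoning(response: str):
    """Split a response into its <think> reasoning and <answer> content.

    Falls back to treating the whole response as the answer when the
    tags are absent (e.g. if they were stripped during decoding).
    """
    think = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    reasoning = think.group(1).strip() if think else ""
    if answer is None:
        return reasoning, response.strip()
    return reasoning, answer.group(1).strip()

# Example with a mock response string
mock = "<think>Mitochondria produce ATP.</think><answer>They generate ATP via cellular respiration.</answer>"
reasoning, final_answer = split_reasoning(mock)
print(final_answer)  # They generate ATP via cellular respiration.
```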
### Using `huggingface-cli`

You can also download the model from the command line using `huggingface-cli`:

```bash
huggingface-cli download zjunlp/KnowRL-Skywork-OR1-7B-Preview --local-dir KnowRL-Skywork-OR1-7B-Preview
```

## Training Details

The model is trained with Knowledgeable Reinforcement Learning (specifically GRPO) on the `zjunlp/KnowRL-Train-Data` dataset.

For complete details on the training configuration and hyperparameters, please refer to our [GitHub repository](https://github.com/zjunlp/KnowRL).
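
To give a rough intuition for the training signal, here is a toy sketch of a KnowRL-style reward that blends answer correctness with a knowledge-grounded factuality score. This is purely illustrative: the function name, the `alpha` weighting, and the `supported_facts / total_facts` proxy are our assumptions, not the reward actually used; the real design is specified in the paper and repository.

```python
def knowrl_reward(answer: str, reference: str, supported_facts: int,
                  total_facts: int, alpha: float = 0.5) -> float:
    """Toy KnowRL-style reward: blend answer correctness with factuality.

    `supported_facts / total_facts` stands in for a factuality score
    obtained by checking the model's claims against an external
    knowledge source; the actual reward in the paper is more involved.
    """
    correctness = 1.0 if answer.strip().lower() == reference.strip().lower() else 0.0
    factuality = supported_facts / total_facts if total_facts else 0.0
    return alpha * correctness + (1 - alpha) * factuality

# A correct answer whose reasoning has 3 of 4 claims supported
print(knowrl_reward("Paris", "Paris", supported_facts=3, total_facts=4))  # 0.875
```

In GRPO, rewards like this are computed for a group of sampled responses per prompt and used to compute relative advantages, so a response that is both correct and well-grounded is preferred over one that is merely correct.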

---

## Citation

If you find this model useful in your research, please consider citing our paper:

```bibtex
@article{ren2025knowrl,
  title={KnowRL: Exploring Knowledgeable Reinforcement Learning for Factuality},
  author={Ren, Baochang and Qiao, Shuofei and Yu, Wenhao and Chen, Huajun and Zhang, Ningyu},
  journal={arXiv preprint arXiv:2506.19807},
  year={2025}
}
```