Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


cut-13b - GGUF
- Model creator: https://huggingface.co/xww033/
- Original model: https://huggingface.co/xww033/cut-13b/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [cut-13b.Q2_K.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q2_K.gguf) | Q2_K | 4.52GB |
| [cut-13b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [cut-13b.Q3_K.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q3_K.gguf) | Q3_K | 5.9GB |
| [cut-13b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [cut-13b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [cut-13b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [cut-13b.Q4_0.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q4_0.gguf) | Q4_0 | 6.86GB |
| [cut-13b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [cut-13b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [cut-13b.Q4_K.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q4_K.gguf) | Q4_K | 7.33GB |
| [cut-13b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [cut-13b.Q4_1.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q4_1.gguf) | Q4_1 | 7.61GB |
| [cut-13b.Q5_0.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q5_0.gguf) | Q5_0 | 8.36GB |
| [cut-13b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [cut-13b.Q5_K.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q5_K.gguf) | Q5_K | 8.6GB |
| [cut-13b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [cut-13b.Q5_1.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q5_1.gguf) | Q5_1 | 9.1GB |
| [cut-13b.Q6_K.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q6_K.gguf) | Q6_K | 9.95GB |
| [cut-13b.Q8_0.gguf](https://huggingface.co/RichardErkhov/xww033_-_cut-13b-gguf/blob/main/cut-13b.Q8_0.gguf) | Q8_0 | 12.88GB |

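A rough guide to choosing among these files: pick the largest quant that fits your memory budget (the file size understates real memory use, since the KV cache and runtime overhead come on top). A minimal Python sketch, with the sizes taken from the table above; the helper and its name are illustrative, not part of any tooling:

```python
from typing import Optional

# File sizes in GB from the table above (K-quant aliases Q3_K/Q4_K/Q5_K
# omitted: they duplicate the corresponding _M files).
QUANT_SIZES_GB = {
    "Q2_K": 4.52, "Q3_K_S": 5.27, "Q3_K_M": 5.9, "Q3_K_L": 6.45,
    "IQ4_XS": 6.54, "Q4_0": 6.86, "IQ4_NL": 6.9, "Q4_K_S": 6.91,
    "Q4_K_M": 7.33, "Q4_1": 7.61, "Q5_0": 8.36, "Q5_K_S": 8.36,
    "Q5_K_M": 8.6, "Q5_1": 9.1, "Q6_K": 9.95, "Q8_0": 12.88,
}

def largest_fitting_quant(budget_gb: float) -> Optional[str]:
    """Return the quant whose file is largest while still within budget."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)
```

For example, with roughly 7 GB to spare this picks `Q4_K_S`; below the smallest file it returns `None`, signalling that even `Q2_K` will not fit.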
Original model description:
---
license: apache-2.0
---

# Reasons to Reject? Aligning Language Models with Judgments
This repository contains the CUT model from our work,

[Reasons to Reject? Aligning Language Models with Judgments](https://arxiv.org/abs/2312.14591).

Weiwen Xu, Deng Cai, Zhisong Zhang, Wai Lam, Shuming Shi

The source code can be found at https://github.com/wwxu21/CUT
****

55
+ ## 1. Model description
56
+
57
+ This model achieves 91.36 on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval).
58
+ It is tuned after 4 iterations of online alignment. In each iteration, we apply the following three steps:
59
+
60
+ - Step 1: Collect instructions, and obtain the responses from the target model.
61
+
62
+ - Step 2: Annotate judgments for the responses.
63
+
64
+ - Step 3: Apply CUT to fine-tune the target model with the above instruction-response-judgment triplets.
65
+
66
+ Specifically, we use [LLaMA2-chat-13b](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) as the base LLM. In each iteration, we sample 1000 instructions from [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
67
+ To avoid over-fitting, we ensure that the sampled data are different in each iteration.
68
+ We then ask GPT4 for the judgment annotation.
69
+
70
+
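The no-overlap constraint on the sampled instructions can be pictured as drawing all four batches from the pool without replacement up front. A small illustration (not the authors' code; the function name and seed are ours):

```python
import random

def disjoint_batches(pool, iterations=4, per_iter=1000, seed=0):
    """Split `pool` into `iterations` disjoint batches of `per_iter` items,
    so no instruction is reused across alignment iterations."""
    rng = random.Random(seed)
    # random.sample draws without replacement, guaranteeing disjointness
    drawn = rng.sample(pool, iterations * per_iter)
    return [drawn[i * per_iter:(i + 1) * per_iter] for i in range(iterations)]

# Stanford Alpaca provides 52K instruction examples to draw from.
batches = disjoint_batches(list(range(52_000)))
```

Each batch then feeds one pass of the collect/annotate/fine-tune loop described above.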
## 2. Template
The CUT model is a chat model that uses the following [Alpaca template](https://github.com/tatsu-lab/stanford_alpaca):
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```

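Filling the template programmatically can be sketched as below; the `format_prompt` helper is illustrative, not part of the model's API:

```python
# The Alpaca template shown above, with a placeholder for the instruction.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:"
)

def format_prompt(instruction: str) -> str:
    """Substitute the user's instruction into the Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(format_prompt("How did US states get their names?"))
```

The model's completion is then everything generated after the trailing `### Response:` marker.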
## 3. How to use

### 3.1. Hugging Face

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained("xww033/cut-13b", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("xww033/cut-13b")

inputs = tokenizer('''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
How did US states get their names?

### Response:''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=2048)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

### 3.2. FastChat

[FastChat](https://github.com/lm-sys/FastChat) provides a simple setup for trying our aligned model. After downloading the [CUT model](https://huggingface.co/xww033/cut-13b) from Hugging Face, clone the FastChat repository:

```bash
git clone https://github.com/lm-sys/FastChat.git
cd FastChat
```

Install the required packages:

```bash
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```

Finally, run the following:

```bash
python -m fastchat.serve.cli --model-path xww033/cut-13b --conv-template alpaca
```


## 4. BibTeX entry and citation info
```bibtex
@article{xu2023reasons,
  title={Reasons to Reject? Aligning Language Models with Judgments},
  author={Xu, Weiwen and Cai, Deng and Zhang, Zhisong and Lam, Wai and Shi, Shuming},
  journal={arXiv preprint arXiv:2312.14591},
  year={2023}
}
```