
Quantization made by Richard Erkhov.


cut-13b - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| cut-13b.Q2_K.gguf | Q2_K | 4.52GB |
| cut-13b.Q3_K_S.gguf | Q3_K_S | 5.27GB |
| cut-13b.Q3_K.gguf | Q3_K | 5.9GB |
| cut-13b.Q3_K_M.gguf | Q3_K_M | 5.9GB |
| cut-13b.Q3_K_L.gguf | Q3_K_L | 6.45GB |
| cut-13b.IQ4_XS.gguf | IQ4_XS | 6.54GB |
| cut-13b.Q4_0.gguf | Q4_0 | 6.86GB |
| cut-13b.IQ4_NL.gguf | IQ4_NL | 6.9GB |
| cut-13b.Q4_K_S.gguf | Q4_K_S | 6.91GB |
| cut-13b.Q4_K.gguf | Q4_K | 7.33GB |
| cut-13b.Q4_K_M.gguf | Q4_K_M | 7.33GB |
| cut-13b.Q4_1.gguf | Q4_1 | 7.61GB |
| cut-13b.Q5_0.gguf | Q5_0 | 8.36GB |
| cut-13b.Q5_K_S.gguf | Q5_K_S | 8.36GB |
| cut-13b.Q5_K.gguf | Q5_K | 8.6GB |
| cut-13b.Q5_K_M.gguf | Q5_K_M | 8.6GB |
| cut-13b.Q5_1.gguf | Q5_1 | 9.1GB |
| cut-13b.Q6_K.gguf | Q6_K | 9.95GB |
| cut-13b.Q8_0.gguf | Q8_0 | 12.88GB |
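These GGUF files are meant for llama.cpp and compatible runtimes, not for transformers. As a hedged sketch of fetching and running a single quant (replace the `<this-repo-id>` placeholder with this repository's actual id; the Q4_K_M choice, context size, and prompt are illustrative):

```bash
# Download one quant file from this repo (requires the huggingface_hub CLI)
huggingface-cli download <this-repo-id> cut-13b.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp; recent builds ship the binary as llama-cli (older ones as ./main).
# -e processes escapes so \n in the prompt becomes a newline, matching the Alpaca template below.
./llama-cli -m cut-13b.Q4_K_M.gguf -c 2048 -e \
  -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nHow did US states get their names?\n\n### Response:"
```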

Original model description:

License: apache-2.0

Reasons to Reject? Aligning Language Models with Judgments.

This repository contains the CUT model from our work, *Reasons to Reject? Aligning Language Models with Judgments*.

Weiwen Xu, Deng Cai, Zhisong Zhang, Wai Lam, Shuming Shi

The source code can be found at https://github.com/wwxu21/CUT


1. Model description

This model achieves 91.36 on AlpacaEval. It is obtained after 4 iterations of online alignment. In each iteration, we apply the following three steps:

  • Step 1: Collect instructions, and obtain the responses from the target model.

  • Step 2: Annotate judgments for the responses.

  • Step 3: Apply CUT to fine-tune the target model with the above instruction-response-judgment triplets.

Specifically, we use LLaMA2-chat-13b as the base LLM. In each iteration, we sample 1000 instructions from Stanford Alpaca; to avoid over-fitting, we ensure that the sampled instructions are different in each iteration. We then ask GPT-4 to annotate judgments for the responses, as sketched below.
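A minimal Python sketch of one such iteration follows. Note that `generate_responses`, `annotate_judgments`, and `cut_finetune` are illustrative stubs standing in for the target model's decoding, the GPT-4 annotation call, and the CUT fine-tuning step; they are not APIs from the released codebase:

```python
import random

# Placeholder components: in the actual pipeline these are the target LLM,
# a GPT-4 annotation call, and the CUT fine-tuning step from the paper's codebase.
def generate_responses(model, instruction): return f"<response to: {instruction}>"
def annotate_judgments(instruction, response): return "<judgment>"
def cut_finetune(model, triplets): return model  # fine-tune on triplets, return updated model

def align_iteration(model, pool, seen, n=1000):
    # Step 1: sample fresh instructions; keep iterations disjoint to avoid over-fitting
    batch = random.sample([x for x in pool if x not in seen], n)
    seen.update(batch)
    responses = [generate_responses(model, inst) for inst in batch]
    # Step 2: annotate a judgment for each instruction-response pair
    judgments = [annotate_judgments(i, r) for i, r in zip(batch, responses)]
    # Step 3: CUT fine-tuning on the instruction-response-judgment triplets
    return cut_finetune(model, list(zip(batch, responses, judgments)))
```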

2. Template

The CUT model is a chat model that uses the following Alpaca template:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
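For programmatic use, the template can be filled with a small helper (an illustrative snippet, not part of the released codebase):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def build_prompt(instruction: str) -> str:
    # Fill the Alpaca template with a user instruction
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("How did US states get their names?"))
```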

3. How to use

3.1. Hugging Face

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

# Load the aligned model in half precision
model = AutoModelForCausalLM.from_pretrained("xww033/cut-13b", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("xww033/cut-13b")

# Wrap the instruction in the Alpaca template shown above
inputs = tokenizer('''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
How did US states get their names?

### Response:''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=2048)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
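To watch tokens as they are produced instead of waiting for the full decode, transformers' `TextStreamer` can be attached to the same call (a sketch reusing the `model`, `tokenizer`, and `inputs` from above):

```python
from transformers import TextStreamer

# Prints tokens to stdout as they are generated; skip_prompt drops the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
```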

3.2. FastChat

FastChat provides a simple setup for those interested in trying our aligned model. After downloading the CUT model from Hugging Face, clone the FastChat repository:

```bash
git clone https://github.com/lm-sys/FastChat.git
cd FastChat
```

Install the required packages:

```bash
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```

Finally, run the following:

```bash
python -m fastchat.serve.cli --model-path xww033/cut-13b --conv-template alpaca
```
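Beyond the interactive CLI, FastChat can also expose the model behind an OpenAI-compatible HTTP API. The three commands below (each in its own terminal) follow the FastChat README; passing the same alpaca conversation template to the worker is an assumption carried over from the CLI invocation above:

```bash
python -m fastchat.serve.controller
python -m fastchat.serve.model_worker --model-path xww033/cut-13b --conv-template alpaca
python -m fastchat.serve.openai_api_server --host localhost --port 8000
```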

4. BibTeX entry and citation info

```bibtex
@article{xu2023reasons,
  title={Reasons to Reject? Aligning Language Models with Judgments},
  author={Xu, Weiwen and Cai, Deng and Zhang, Zhisong and Lam, Wai and Shi, Shuming},
  journal={arXiv preprint arXiv:2312.14591},
  year={2023}
}
```