---
license: apache-2.0
---

# Model Card for Deita Quality Scorer

Deita is an open-sourced project designed to facilitate Automatic Data Selection for instruction tuning in Large Language Models (LLMs).

Deita Quality Scorer is a tool for automatically annotating the Instruction Quality of SFT data.

## Model description

- **Model type:** Model fine-tuned to automatically annotate the quality of Instruction-Response pairs
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** Llama-1-13b-hf

### Model Sources

- **Repository:** https://github.com/hkust-nlp/deita
- **Model Family:** Other models and the dataset can be found in the [Deita collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4).

## Usage

Please use the following format to score the quality of an Instruction-Response pair:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import numpy as np
from scipy.special import softmax

model_name = "hkust-nlp/deita-quality-scorer"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def infer_quality(model, tokenizer, input_text, resp_text):
    quality_template = ("You are a helpful assistant. Please identify the quality score of the Response corresponding to the Question. \n#Question#:\n{instruction}\n#Response#:\n{output} \n##Quality: ")
    user_input = quality_template.format(instruction=input_text, output=resp_text)
    input_ids = tokenizer.encode(user_input, return_tensors="pt")
    max_length = 512
    outputs = model.generate(input_ids,
                             max_length=max_length,
                             num_return_sequences=1,
                             return_dict_in_generate=True,
                             output_scores=True)
    # Logits over the vocabulary at the first generated position
    logprobs_list = outputs.scores[0][0]
    # Llama tokenizer ids for the digit tokens "1" through "6"
    id2score = {
        29896: "1",
        29906: "2",
        29941: "3",
        29946: "4",
        29945: "5",
        29953: "6",
    }
    score_template = np.array([1, 2, 3, 4, 5, 6])
    score_logits = []
    for k in id2score:
        score_logits.append(logprobs_list[k])
    score_logits = np.array(score_logits)
    # Normalize over the six digit tokens and take the expected score
    score_npy = softmax(score_logits, axis=0)
    score_npy = score_npy * score_template
    score_npy = np.sum(score_npy, axis=0)
    return score_npy

input_text = "word to describe UI with helpful tooltips"  # Example Input
output_text = "User-friendly or intuitive UI"  # Example Output
quality_score = infer_quality(model, tokenizer, input_text, output_text)
print(quality_score)
```
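The scoring step above reduces to a probability-weighted average over the six digit tokens: the logits for "1" through "6" are normalized with a softmax, and each probability is multiplied by its score. The sketch below illustrates just that arithmetic with made-up logits (the numbers are illustrative, not real model output), so the final model call is not required to follow the computation.

```python
import numpy as np
from scipy.special import softmax

# Hypothetical logits for the digit tokens "1".."6" at the first generated
# position (illustrative values only, not produced by the scorer).
digit_logits = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 2.5])

# Normalize over the six digit tokens only, ignoring the rest of the vocabulary
probs = softmax(digit_logits)

# Expected score: sum of score * probability, always in [1, 6]
expected_score = np.dot(probs, np.arange(1, 7))
print(expected_score)
```

Because the result is an expectation rather than an argmax, the scorer returns a continuous value in [1, 6], which makes it suitable for ranking SFT examples by quality.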