---
base_model: vihangd/shearedplats-2.7b-v2
datasets:
- mwitiderrick/OpenPlatypus
inference: true
model_type: llama
prompt_template: |
  ### Instruction:\n
  {prompt}
  ### Response:
created_by: mwitiderrick
tags:
- transformers
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: mwitiderrick/open_llama_3b_instruct_v_0.2
  results:
  - task:
      type: text-generation
    dataset:
      name: hellaswag
      type: hellaswag
    metrics:
    - name: hellaswag(0-Shot)
      type: hellaswag (0-Shot)
      value: 0.4882
  - task:
      type: text-generation
    dataset:
      name: winogrande
      type: winogrande
    metrics:
    - name: winogrande(0-Shot)
      type: winogrande (0-Shot)
      value: 0.6133
  - task:
      type: text-generation
    dataset:
      name: arc_challenge
      type: arc_challenge
    metrics:
    - name: arc_challenge(0-Shot)
      type: arc_challenge (0-Shot)
      value: 0.3362
    source:
      name: open_llama_3b_instruct_v_0.2 model card
      url: https://huggingface.co/mwitiderrick/open_llama_3b_instruct_v_0.2
---
# ShearedPlats-2.7b Instruct

This is a [ShearedPlats-2.7b-v2 model](https://huggingface.co/vihangd/shearedplats-2.7b-v2) that has been fine-tuned for one epoch on the [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset. The modified version of the dataset can be found [here](https://huggingface.co/datasets/mwitiderrick/OpenPlatypus).

## Prompt Template
```
### Instruction:
{query}
### Response:
```

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/shearedplats-2.7b-v2-instruct-v0.1")
model = AutoModelForCausalLM.from_pretrained("mwitiderrick/shearedplats-2.7b-v2-instruct-v0.1")

# Wrap the query in the prompt template before generating
query = "Provide step-by-step instructions for making a sweet chicken burger"
text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=500)
output = text_gen(f"### Instruction:\n{query}\n### Response:\n")
print(output[0]['generated_text'])
"""
"""
```

## TruthfulQA metrics
```
```
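The 0-shot scores in the metadata above are in the format typically produced by EleutherAI's lm-evaluation-harness. Below is a minimal sketch of how such numbers (and the missing TruthfulQA figures) could be reproduced, assuming that harness was used; the card itself does not name the tool, and the `truthfulqa_mc2` task name and `hf` model backend are assumptions based on lm-eval v0.4.

```python
# Minimal sketch, assuming EleutherAI's lm-evaluation-harness
# (pip install lm-eval) was used to produce the 0-shot scores;
# the card does not state which evaluation tool was used.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face causal-LM backend
    model_args="pretrained=mwitiderrick/shearedplats-2.7b-v2-instruct-v0.1",
    # truthfulqa_mc2 is an assumed task name for filling the TruthfulQA section
    tasks=["hellaswag", "winogrande", "arc_challenge", "truthfulqa_mc2"],
    num_fewshot=0,
)

# results["results"] maps each task name to its metric dict,
# e.g. accuracy is reported under the "acc,none" key in lm-eval v0.4
for task, metrics in results["results"].items():
    print(task, metrics)
```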