---
base_model: vihangd/shearedplats-2.7b-v2
datasets:
- mwitiderrick/OpenPlatypus
inference: true
model_type: llama
prompt_template: |
  ### Instruction:\n
  {prompt}
  ### Response:
created_by: mwitiderrick
tags:
- transformers
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: mwitiderrick/open_llama_3b_instruct_v_0.2
  results:
  - task:
      type: text-generation
    dataset:
      name: hellaswag
      type: hellaswag
    metrics:
    - name: hellaswag(0-Shot)
      type: hellaswag (0-Shot)
      value: 0.4882
  - task:
      type: text-generation
    dataset:
      name: winogrande
      type: winogrande
    metrics:
    - name: winogrande(0-Shot)
      type: winogrande (0-Shot)
      value: 0.6133
  - task:
      type: text-generation
    dataset:
      name: arc_challenge
      type: arc_challenge
    metrics:
    - name: arc_challenge(0-Shot)
      type: arc_challenge (0-Shot)
      value: 0.3362
    source:
      name: open_llama_3b_instruct_v_0.2 model card
      url: https://huggingface.co/mwitiderrick/open_llama_3b_instruct_v_0.2
---
# ShearedPlats 2.7b v2 Instruct

This is a [ShearedPlats model](https://huggingface.co/vihangd/shearedplats-2.7b-v2) that has been fine-tuned for 1 epoch on the
[Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset.

The modified version of the dataset can be found [here](https://huggingface.co/datasets/mwitiderrick/OpenPlatypus).
## Prompt Template
```
### Instruction:

{query}

### Response:
<Leave new line for model to respond>
```
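As a minimal sketch, the template above can be filled in with ordinary Python string formatting (the helper name and example query below are illustrative, not part of the model card):

```python
# Illustrative helper that renders the prompt template shown above.
# The function name and the example query are assumptions for this sketch.
def build_prompt(query: str) -> str:
    return f"### Instruction:\n{query}\n### Response:\n"

prompt = build_prompt("List three uses of instruction-tuned models")
print(prompt)
```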
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/shearedplats-2.7b-v2-instruct-v0.1")
model = AutoModelForCausalLM.from_pretrained("mwitiderrick/shearedplats-2.7b-v2-instruct-v0.1")

query = "Provide step-by-step instructions for making a sweet chicken burger"
text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=500)
output = text_gen(f"### Instruction:\n{query}\n### Response:\n")
print(output[0]['generated_text'])
"""

"""
```
## TruthfulQA metrics
```

```