---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
---

## Original model card

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt="Buy me a coffee"></a>

#### Description

GGML-format model files for [OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B).

### Inference

```python
from ctransformers import AutoModelForCausalLM

# Load the GGML weights; output_dir and ggml_file point to the downloaded model files.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."

llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```


### Original model card

# OpenOrca-Preview1-13B

We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune LLaMA-13B.
This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707).

We have trained on less than 6% of our data, just to give a preview of what is possible while we further refine our dataset!
We trained on a refined selection of 200k GPT-4 entries from OpenOrca.
We have filtered our GPT-4 augmentations to remove statements like "As an AI language model..." and other responses which have been shown to harm model reasoning capabilities. Further details on our dataset curation practices will be forthcoming with our full model releases.

This release highlights that even a small portion of our training data can produce state-of-the-art results in this model class, with total training costs under $200.

Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)

We are currently training more models, so keep an eye on our org for releases coming soon with exciting partners.

We will also give sneak-peek announcements on our Discord, which you can find here:

https://AlignmentLab.ai

# Evaluation

We have evaluated OpenOrca-Preview1-13B on hard reasoning tasks from BigBench-Hard and AGIEval, as outlined in the Orca paper.

Our average performance on BigBench-Hard: 0.3753

Average on AGIEval: 0.3638

In the Orca paper, they measured their score relative to Vicuna on these evals.
We've done the same and found that our scores average ~60% of the total improvement shown in the Orca paper.

So we got 60% of the improvement with 6% of the data!
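The ~60% figure is a relative-improvement calculation against the Vicuna baseline. A minimal sketch of that arithmetic; the `baseline` (Vicuna) and `target` (Orca) values below are illustrative placeholders, not the paper's actual scores:

```python
def relative_improvement(ours: float, baseline: float, target: float) -> float:
    """Fraction of the baseline-to-target gain that `ours` captures."""
    return (ours - baseline) / (target - baseline)

# Placeholder values for illustration only; see the Orca paper for the
# actual Vicuna (baseline) and Orca (target) scores.
frac = relative_improvement(ours=0.3753, baseline=0.32, target=0.41)
print(f"~{frac:.0%} of the improvement")
```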

## BigBench-Hard Performance
![OpenOrca Preview1 BigBench-Hard Performance](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OO_Preview1_BigBenchHard.png "BigBench-Hard Performance")

## AGIEval Performance
![OpenOrca Preview1 AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OO_Preview1_AGIEval.png "AGIEval Performance")

We will report our results on [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) evals once we receive them.

# Dataset

We used a small (6%, 200k) subset of our data from OpenOrca, which aims to reproduce the Orca Research Paper dataset.

As this release is intended as a preview, please await our full releases for further details on the training data.


# Training

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

We trained with 8x A100-80G GPUs for 15 hours. Commodity cost was < $200.

We trained for 4 epochs and selected a snapshot at 3 epochs for peak performance.

Please await our full releases for further training details.

# Prompting

It uses the Alpaca format (see [FastChat implementation example](https://github.com/lm-sys/FastChat/blob/daa2b9abe20597ebf34dc5df164d450456610c74/fastchat/conversation.py#L198-L229)):
```
### Instruction:

### Response:
```

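As a concrete illustration, a minimal sketch of wrapping user input in this template before passing it to the model; the helper name is ours, not part of the original card:

```python
def build_alpaca_prompt(instruction: str) -> str:
    # Wrap the user instruction in the Alpaca template; the model's
    # generation continues after the "### Response:" header.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_alpaca_prompt("Tell me about your last dream, please.")
```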
# Citation

```bibtex
@software{OpenOrca_Preview1,
  title = {OpenOrca_Preview1: A LLaMA-13B Model Fine-tuned on Small Portion of OpenOrcaV1 Dataset},
  author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B}},
}
```
```bibtex
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
  year={2023},
  eprint={2301.13688},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```
```bibtex
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```