Commit 7429a27
Parent(s): 42d0b1d
Update README.md
README.md CHANGED
@@ -22,6 +22,10 @@ Code snippets
 
 Alpaca GPT4
 
+### Compatibility
+This LoRA is compatible with any 13B or 30B 4-bit quantized LLaMa model, including ggml quantized converted bins
+
+### Prompting
 You should prompt the LoRA the same way you would prompt Alpaca or Alpacino:
 
 ```
@@ -37,11 +41,18 @@ Below is an instruction that describes a task, paired with an input that provide
 <make sure to leave a single new-line here for optimal results>
 ```
 
-
+Remember that with lower parameter sizes, the structure of the prompt becomes more important. The same prompt worded differently can give wildly different answers. Consider using the following suggestion suffixes to improve output quality:
 
-
+- "Think through this step by step"
+- "Let's think about this logically"
+- "Explain your reasoning"
+- "Provide details to support your answer"
+- "Compare and contrast your answer with alternatives"
 
-
+### Coming Soon
+- 2048 7B version
+- 1024 and 512 variants of 13B and 7B
+- merged ggml models
 
 ### Citations
 Alpaca COT datasets