---
tags:
- llama
- alpaca
- cot
- vicuna
- uncensored
- merge
- mix
---

## 13B-Thorns

## Composition:

13B-Thorns-l2 utilizes a new merge method called Spherical Linear Interpolation (SLERP). By interpolating model weights along the surface of a hypersphere rather than along a straight line, the combined pair of models transitions more smoothly between feature spaces that are drastically different in each model, potentially resulting in a more coherent fusion of both models' unique strengths.
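
For intuition, here is a minimal sketch of SLERP applied to a single pair of weight tensors. This is illustrative only: the function name, the flattening of each tensor, and the fallback to plain linear interpolation for near-parallel weights are our own assumptions, not the exact merge tooling used for Thorns.

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two same-shaped weight tensors at blend factor t."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    a_unit = a / (a.norm() + eps)
    b_unit = b / (b.norm() + eps)
    # Angle between the two weight vectors on the unit hypersphere.
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel weights: plain linear interpolation is numerically safer.
        merged = (1.0 - t) * a + t * b
    else:
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * a + (torch.sin(t * omega) / sin_omega) * b
    return merged.reshape(w_a.shape).to(w_a.dtype)
```

In practice this operation is applied layer by layer across both checkpoints; t = 0.5 gives an even blend of the two parents.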

Thorns' design is based on the concept of purposed segmentation:

Logic Segment:

Fine-tuned parent models were hand-selected and reviewed for their datasets, performance, least restrictive censorship, and community perception of coherence and utility. Ultimately we decided on four models, merged them in pairs of two, then combined those offspring into a quad-merged logic cluster.
All four models were merged using the SLERP method. Yes, the name is annoyingly funny. SLERP.

We then decided the creativity and imagination segment could be as simple as one model, especially if its dataset design, tagging, training quality, and proven track record are above and beyond. KoboldAI's Holodeck model is the result of a dataset built from years of collected, organized, tagged, deduped, and cleaned data. Holodeck alone would be more than sufficient for what we view as the 'subconscious' segment of the model ensemble; however, we applied the LIMA RP PEFT to it for extended variety of a different kind.

This segment is also comprised of various roleplay-themed LoRAs.
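
As a rough illustration of the PEFT step, the sketch below merges a LoRA/PEFT adapter into a base checkpoint with the Hugging Face peft library. The repo id and adapter path are placeholders, not the exact artifacts used for this merge.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model (placeholder repo id) in half precision.
base = AutoModelForCausalLM.from_pretrained(
    "KoboldAI/LLAMA2-13B-Holodeck-1",
    torch_dtype=torch.float16,
)

# Attach the LoRA/PEFT adapter (placeholder path), then bake it into the base weights.
peft_model = PeftModel.from_pretrained(base, "path/to/lima-rp-peft")
merged = peft_model.merge_and_unload()
merged.save_pretrained("holodeck-limarp")
```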

-Model Merge Ensemble Key-
{} = SLERP Merge | [] = PEFT Merge | () = Composite Model
({({NousHermes+Chronos}[kimiko])+({Platypus+AiroborosM2.0}[janine])}{Holodeck[LIMARP]})
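
Read with the legend above, the key unrolls into the nested operations sketched below. The helpers `slerp_merge` and `apply_peft` are hypothetical string stand-ins (not a real merge API), and treating the final pairing of the two segments as one more SLERP merge is our reading of the description; the printed notation only approximates the key.

```python
# Hypothetical stand-ins so the structure can be traced end to end;
# the real operations act on full model checkpoints, not strings.
def slerp_merge(a: str, b: str) -> str:
    return "{" + a + "+" + b + "}"             # {} = SLERP merge

def apply_peft(model: str, adapter: str) -> str:
    return "(" + model + "[" + adapter + "])"  # [] = PEFT merge, () = composite

# Logic segment: two SLERP pairs, each with a PEFT applied, then merged together.
pair_one = apply_peft(slerp_merge("NousHermes", "Chronos"), "kimiko")
pair_two = apply_peft(slerp_merge("Platypus", "AiroborosM2.0"), "janine")
logic_cluster = slerp_merge(pair_one, pair_two)

# Creativity segment: Holodeck with the LIMA RP PEFT merged in.
creativity = apply_peft("Holodeck", "LIMARP")

# Final composite: the logic cluster combined with the creativity segment.
thorns_13b = slerp_merge(logic_cluster, creativity)
print(thorns_13b)
```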

[SuperCOT([gtp4xalpaca(manticorechatpygalpha+vicunaunlocked)]+[StoryV2(kaiokendev-SuperHOT-LoRA-prototype30b-8192)])]

This model is the result of an experimental use of LoRAs on language models and model merges other than the base HuggingFace-format LLaMA model they were intended for.
The desired outcome is to additively apply desired features without paradoxically watering down a model's effective behavior.

Potential limitations - LoRAs applied on top of each other may intercompete.

Subjective results - very promising. Further experimental and objective testing is required.

## Instruct and Setup Suggestions:

Alpaca instruct format is primary; the Vicuna instruct format may also work.
If using KoboldAI or Text-Generation-WebUI, we recommend switching between the Godlike and Storywriter presets and adjusting output length plus instructions in memory.
Other presets as well as custom settings can yield highly different results, especially Temperature.
If poking it with a stick doesn't work, try poking harder.
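
For reference, the standard Alpaca instruct layout looks like the following; the preamble wording varies slightly between frontends, so treat this as a reasonable default rather than the exact template the parent models were tuned on.

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{your instruction or question here}

### Response:
```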

## Language Models and LoRAs Used (Credits):

manticore-30b-chat-pyg-alpha [Epoch0.4] by openaccess-ai-collective
https://huggingface.co/openaccess-ai-collective/manticore-30b-chat-pyg-alpha

SuperCOT-LoRA [30B] by kaiokendev
https://huggingface.co/kaiokendev/SuperCOT-LoRA

Storytelling-LLaMa-LoRA [30B, Version 2] by GamerUntouch
https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs

SuperHOT Prototype [30b 8k ctx] by kaiokendev
https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype

GPT4-Alpaca-LoRA [30B] by ChanSung
https://huggingface.co/chansung/gpt4-alpaca-lora-30b

Vicuna Unlocked LoRA [30B, Checkpoint 46080] by Neko-Institute-of-Science
https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA

Also thanks to Meta for LLaMA.

Each model and LoRA was hand-picked and considered for what it could contribute to this ensemble.
Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.