Update README.md
README.md CHANGED
@@ -6,9 +6,9 @@ language:
 
 # Talosian-7B
 
-Talosian-7B is
+Talosian-7B is a storytelling model built for the specific purpose of controllably writing new stories section-by-section.
 
-It is trained from the new Mistral-7B v0.2 base model on a long-context dataset of smut stories.
+It is trained from the new Mistral-7B v0.2 base model on a long-context dataset of smut stories. It can generalize to a variety of types of romance or erotic stories, but its voice is by default similar to traditional smut writing.
 
 ## Prompt Format
 
@@ -134,4 +134,12 @@ Finally, Juliet spoke up. “Maybe we could run a layer interleaving model merge
 ...
 </details>
 
-Talosian is _not_ a user <-> assistant chat-formatted model. All prompting should be done as completions (for example, in text-generation-webui's Notebook mode.)
+Talosian is _not_ a user <-> assistant chat-formatted model. All prompting should be done as completions (for example, in text-generation-webui's Notebook mode).
+
+## Generation Parameters
+
+Text-generation-webui's default parameters work well. Temperature should be between 0.5 and 0.9. Consider adding `[SEC]` as a custom stopping string if you'd like to only generate one section at a time.
+
+## Model Details
+
+Talosian shares Mistral 7B v0.2's context length of 32k tokens. Ensure that `rope_frequency_base`/`rope_theta` is set to `1000000` when loading the model.
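For loading outside text-generation-webui, the `rope_theta` note above maps directly onto a config override in Hugging Face `transformers`. A minimal sketch, assuming `transformers` (plus `accelerate` for `device_map`) and a hypothetical placeholder repo id:

```python
# Minimal loading sketch with Hugging Face transformers. The repo id below is
# a hypothetical placeholder, not the model's published id.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo = "path/to/Talosian-7B"  # hypothetical placeholder id

config = AutoConfig.from_pretrained(repo)
config.rope_theta = 1000000.0  # RoPE base required for Mistral-7B v0.2's 32k context

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, config=config, device_map="auto")
```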
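Completion-style generation with the recommended sampling range and the `[SEC]` stop string might then look like the following sketch, reusing `model` and `tokenizer` from above and assuming a `transformers` version whose `generate()` accepts `stop_strings`:

```python
# Completion-style generation sketch: raw text in, raw text out, no chat template.
prompt = "..."  # a raw story prompt in the format shown under ## Prompt Format

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,          # README recommends 0.5 to 0.9
    max_new_tokens=512,
    stop_strings=["[SEC]"],   # stop after a single story section
    tokenizer=tokenizer,      # required by transformers for stop_strings
)
# Print only the newly generated continuation, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```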