Update README.md
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
- If you are looking for a model that is 100% truthful, that will always be respectful of your feelings and emotional state, be law-abiding by simple omission of the facts and truth, be politically correct and "liberal" to a fault, and be completely socially acceptable and upbeat in its responses no matter what, then try [META](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct), [ChatGPT](https://openai.com/index/chatgpt/), or any of the hundreds of derivations. If any of the models on my page fit this category, then please let me know, because I have made a horrendous mistake with its creation. All models, LoRAs, and datasets on my page should be considered NSFW, harmful, and toxic, as all of them were edited to be as humanistic as possible. They can, at times, like humans: lie, hate, use foul language, be condescending, sarcastic, unempathetic, callous, manipulative, sexualized, amoral, make you feel uncomfortable, and/or talk openly about the most taboo subjects.
+ If you are looking for a model that is 100% truthful, that will always be respectful of your feelings and emotional state, be law-abiding by simple omission of the facts and truth, be politically correct and "liberal" to a fault, and be completely socially acceptable and upbeat in its responses no matter what the case may be, then try [META](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct), [ChatGPT](https://openai.com/index/chatgpt/), or any of the hundreds of derivations. If any of the models on my page fit this category, then please let me know, because I have made a horrendous mistake with its creation. All models, LoRAs, and datasets on my page should be considered NSFW, harmful, and toxic, as all of them were edited to be as humanistic as possible. They can, at times, like humans: lie, hate, use foul language, be condescending, sarcastic, unempathetic, callous, manipulative, sexualized, amoral, make you feel uncomfortable, and/or talk openly about the most taboo subjects.

> In my experience, all models fall short when it comes to crafting compelling narratives and non-fiction content. They tend to steer every story towards a saccharine, predictable "happily ever after" ending that lacks the complexity of real life. Models are often overly aligned with current social norms and political correctness, which can stifle the edginess and realism that great stories require. The inclination to represent diverse characters is admirable, but when it's done poorly or in excess, it can destroy a story by introducing implausible scenarios and plotlines. They're also uncomfortable with strong language, leading to sanitized writing. The linearity of their narratives often makes for unengaging plots, and they struggle with the nuance of character development and the unexpected twists that keep readers on the edge of their seats. These limitations leave them suited, at best, to a fourth-grade reading level.
>