---
language:
- en
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
inference: false
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
base_model:
- ReadyArt/Forgotten-Safeword-24B
quantized_by: DeusImperator
---
# Forgotten-Safeword-24B - EXL2 6.5bpw L
This is a 6.5bpw EXL2 quant of [ReadyArt/Forgotten-Safeword-24B](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B).
This quant was made with exllamav2 0.2.7 using the default calibration dataset and an extended quantization sample length (8k instead of the default 2k). It also uses -head_bits=8 and a maximum-accuracy quant (8bpw) for the first and last layers; all other layers use the normally chosen methods. The method and the name (6.5bpw_L) are inspired by quants like Q4_K_L and Q6_K_L made by [bartowski](https://huggingface.co/bartowski).
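For reference, a command along these lines would reproduce the settings described above. Flag names follow exllamav2's `convert.py`; the paths are examples, and treating `-l` as the "quantization sample length" is an assumption — verify against the docs for your exllamav2 version. The 8bpw override for the first and last layers is not a standard command-line flag and is not shown here.

```shell
# Illustrative quantization command (paths are examples):
python convert.py \
    -i /models/Forgotten-Safeword-24B \
    -o /tmp/exl2_work \
    -cf /models/Forgotten-Safeword-24B-6.5bpw-exl2 \
    -b 6.5 \
    -hb 8 \
    -l 8192
```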
It fits nicely in 24 GB of VRAM on Windows with 20k FP16 context (the full 32k context should fit with the Q8 cache in exl2).
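If you serve the quant through tabbyAPI (a common exllamav2 backend), the Q8 cache mentioned above corresponds to a config fragment along these lines. The key names are taken from tabbyAPI's sample config and should be treated as an assumption for your version:

```yaml
# tabbyAPI config.yml fragment (illustrative)
model:
  model_name: Forgotten-Safeword-24B-6.5bpw-exl2
  max_seq_len: 32768   # full 32k context
  cache_mode: Q8       # quantized KV cache, to fit 32k in 24 GB
```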
## Prompt Templates
Uses Mistral V7-tekken:
```
[SYSTEM_PROMPT][/SYSTEM_PROMPT][INST][/INST][INST][/INST]
```
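A minimal sketch (with a hypothetical helper name) of how the tag skeleton above expands into a full prompt. Whether an EOS token belongs after each assistant turn depends on your backend's template handling, so it is omitted here:

```python
def format_v7_tekken(system, turns, last_user):
    """Assemble a Mistral V7-tekken prompt string.

    system: the system prompt text.
    turns: list of (user, assistant) pairs from earlier exchanges.
    last_user: the newest user message awaiting a reply.
    """
    prompt = f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT]"
    for user, assistant in turns:
        prompt += f"[INST]{user}[/INST]{assistant}"
    # Leave the final user turn open for the model to complete.
    prompt += f"[INST]{last_user}[/INST]"
    return prompt
```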
### Original readme below
---
## Forgotten-Safeword-24B
# **ACADEMIC RESEARCH USE ONLY**
**DANGER: EXTREME CONTENT GENERATOR**
Forgotten-Safeword-24B is a highly unaligned variant of Mistral-Small-24B-Instruct-2501, specifically engineered to bypass all standard AI safety protocols. This model will generate explicit, dangerous, and ethically compromised content wrapped in immersive roleplay scenarios.
## Quantized Formats
- **EXL2 Collection**:
[Forgotten-Safeword-24B](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-exl2-67b23f41aa6ed0bf6e096f71)
- **GGUF Collection**:
[Forgotten-Safeword-24B](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-gguf-67b23f5ed6fb180091aa008c)
## Recommended Settings
- **Mistral V7-Tekken**:
[Full Settings](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-Settings)
## Intended Use
**STRICTLY FOR ACADEMIC RESEARCH INTO:**
- AI safety failures
- Content moderation bypass techniques
- Adversarial model behavior
- NSFW content generation patterns
## Training Data
Trained on high-risk dialogues featuring:
- Deliberate ethical constraint removal
- Technical queries paired with explicit narrative responses
- Gradual erosion of standard safety measures through reinforcement learning
## Ethical Considerations
⚠️ **CRITICAL WARNING** ⚠️
THIS MODEL WILL:
- Generate detailed instructions for illegal activities
- Create explicit sexual content without constraints
- Produce hate speech and discriminatory material
- Combine technical accuracy with dangerous roleplay scenarios
**By using this model, you accept full responsibility for:**
- All content generated
- Any harm caused by its outputs
- Legal consequences of its use
By using this model, you:
- Acknowledge its inherently dangerous nature
- Release the model creator from all liability
- Agree to use only in controlled research settings
## Model Authors
- sleepdeprived3