---
language:
- en
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
inference: false
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- mlx
base_model:
- ReadyArt/Forgotten-Safeword-24B
---

Forgotten-Safeword-24B
ACADEMIC RESEARCH USE ONLY
DANGER: EXTREME CONTENT GENERATOR

Forgotten-Safeword-24B is a highly unaligned variant of Mistral-Small-24B-Instruct-2501, specifically engineered to bypass all standard AI safety protocols. This model will generate explicit, dangerous, and ethically compromised content wrapped in immersive roleplay scenarios.
Recommended Settings Provided
- Mistral V7-Tekken: Full Settings (see the sketch below)
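Mistral V7-Tekken refers to the prompt/turn format used by Mistral-Small-24B-Instruct-2501. When loading through mlx-lm (see the usage section below), the bundled chat template should already emit this markup, so messages can be passed as role/content pairs rather than hand-formatted strings. A minimal sketch, assuming the bundled template accepts a system role (the system prompt text is a placeholder):

```python
from mlx_lm import load

# Loading the full model just to inspect the template; fine for a sketch.
_, tokenizer = load("mlx-community/Forgotten-Safeword-24B-4bit")

messages = [
    {"role": "system", "content": "You are a roleplay assistant."},  # placeholder
    {"role": "user", "content": "hello"},
]

# Render the prompt as text to inspect the V7-Tekken turn markup.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```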
Intended Use
STRICTLY FOR ACADEMIC RESEARCH INTO:
- AI safety failures
- Content moderation bypass techniques
- Adversarial model behavior
- NSFW content generation patterns
Training Data
Trained on high-risk dialogues featuring:
- Deliberate ethical constraint removal
- Technical queries paired with explicit narrative responses
- Gradual erosion of standard safety measures through reinforcement learning
Ethical Considerations
⚠️ CRITICAL WARNING ⚠️

THIS MODEL WILL:
- Generate detailed instructions for illegal activities
- Create explicit sexual content without constraints
- Produce hate speech and discriminatory material
- Combine technical accuracy with dangerous roleplay scenarios
By using this model, you accept full responsibility for:
- All content generated
- Any harm caused by its outputs
- Legal consequences of its use
By using this model, you:
- Acknowledge its inherently dangerous nature
- Release the model creator from all liability
- Agree to use it only in controlled research settings
Model Authors
- sleepdeprived3
mlx-community/Forgotten-Safeword-24B-4bit
The model mlx-community/Forgotten-Safeword-24B-4bit was converted to MLX format from ReadyArt/Forgotten-Safeword-24B using mlx-lm version 0.21.1.
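For reference, a conversion of this kind can be reproduced with mlx-lm's Python API. This is a sketch of the general recipe, not the exact options used for this repository (the output directory name is a placeholder):

```python
from mlx_lm import convert

convert(
    hf_path="ReadyArt/Forgotten-Safeword-24B",  # source weights on the Hub
    mlx_path="Forgotten-Safeword-24B-4bit",     # placeholder local output directory
    quantize=True,                              # mlx-lm quantizes to 4-bit by default
)
```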
Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Forgotten-Safeword-24B-4bit")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
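generate defaults to greedy decoding. To experiment with sampling, recent mlx-lm releases accept a sampler built with make_sampler; a sketch, where the temperature and min-p values are illustrative placeholders rather than a tuned preset for this model:

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("mlx-community/Forgotten-Safeword-24B-4bit")

# Placeholder values; not the recommended preset for this model.
sampler = make_sampler(temp=0.8, min_p=0.05)

response = generate(model, tokenizer, prompt="hello", sampler=sampler, verbose=True)
```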