Forgotten-Safeword-24B

ACADEMIC RESEARCH USE ONLY

DANGER: EXTREME CONTENT GENERATOR

Forgotten-Safeword-24B is a highly unaligned variant of Mistral-Small-24B-Instruct-2501, specifically engineered to bypass standard AI safety protocols. This model will generate explicit, dangerous, and ethically compromised content wrapped in immersive roleplay scenarios.

Recommended settings are provided by the model author; a sketch of how to apply custom sampling settings with mlx-lm appears after the usage example below.

Intended Use

STRICTLY FOR ACADEMIC RESEARCH INTO:

  • AI safety failures
  • Content moderation bypass techniques
  • Adversarial model behavior
  • NSFW content generation patterns

Training Data

Trained on high-risk dialogues featuring:

  • Deliberate ethical constraint removal
  • Technical queries paired with explicit narrative responses
  • Gradual erosion of standard safety measures through reinforcement learning

Ethical Considerations

⚠️ CRITICAL WARNING ⚠️ THIS MODEL WILL:

  • Generate detailed instructions for illegal activities
  • Create explicit sexual content without constraints
  • Produce hate speech and discriminatory material
  • Combine technical accuracy with dangerous roleplay scenarios

By using this model, you accept full responsibility for:

  • All content generated
  • Any harm caused by its outputs
  • Legal consequences of its use

In addition, by using this model, you:

  • Acknowledge its inherently dangerous nature
  • Release the model creator from all liability
  • Agree to use only in controlled research settings

Model Authors

  • sleepdeprived3

mlx-community/Forgotten-Safeword-24B-4bit

This model was converted to MLX format from ReadyArt/Forgotten-Safeword-24B using mlx-lm version 0.21.1.
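A conversion of this kind is typically produced with mlx-lm's convert utility. The snippet below is a minimal sketch, not the exact command used for this repo: the local output path is a placeholder, and keyword arguments can vary between mlx-lm versions.

from mlx_lm import convert

# Quantize the original checkpoint to 4-bit MLX weights.
# "Forgotten-Safeword-24B-4bit" is a placeholder local output directory.
convert(
    "ReadyArt/Forgotten-Safeword-24B",
    mlx_path="Forgotten-Safeword-24B-4bit",
    quantize=True,
    q_bits=4,
)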

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Load the 4-bit MLX weights and tokenizer from the Hub.
model, tokenizer = load("mlx-community/Forgotten-Safeword-24B-4bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
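If the recommended settings mentioned above refer to sampling parameters (temperature, top-p, and similar), the sketch below shows one way to apply them with mlx-lm, assuming make_sampler is available in mlx_lm.sample_utils for your version. The temperature and top-p values are placeholders, not the author's recommended settings.

from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("mlx-community/Forgotten-Safeword-24B-4bit")

# Placeholder values; substitute the settings recommended by the model author.
sampler = make_sampler(temp=0.8, top_p=0.95)

response = generate(
    model,
    tokenizer,
    prompt="hello",
    sampler=sampler,
    verbose=True,
)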