Forgotten-Safeword-24B - EXL2 6.5bpw L

This is a 6.5bpw EXL2 quant of ReadyArt/Forgotten-Safeword-24B.

This quant was made with exllamav2 0.2.7 using the default calibration dataset and an extended quantization sample length (8k tokens instead of the default 2k). It also uses -head_bits=8 and a maximum-accuracy quant (8bpw) for the first and last layers; all other layers use the normally chosen methods. The method and the name (6.5bpw_L) are inspired by quants such as Q4_K_L and Q6_K_L made by bartowski.

It fits nicely in 24GB of VRAM on Windows with 20k of fp16 context (the full 32k context should fit with Q8 cache in exl2).
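A back-of-the-envelope estimate of why this fits: the architecture numbers below (40 layers, 8 KV heads, head dim 128) are assumptions about Mistral-Small-24B, not values taken from this repo; check the model's config.json for the exact figures.

```python
# Rough VRAM estimate for a 6.5bpw quant of a 24B model.
# Layer/head counts are ASSUMED values for Mistral-Small-24B.
N_PARAMS = 24e9
BPW = 6.5
LAYERS, KV_HEADS, HEAD_DIM = 40, 8, 128

def kv_cache_gib(context_len: int, bytes_per_elem: float) -> float:
    """Size of the K+V cache in GiB for a given context length."""
    elems = 2 * LAYERS * KV_HEADS * HEAD_DIM * context_len  # K and V
    return elems * bytes_per_elem / 2**30

weights_gib = N_PARAMS * BPW / 8 / 2**30   # ~18.2 GiB of weights
fp16_20k = kv_cache_gib(20 * 1024, 2)      # ~3.1 GiB fp16 cache at 20k
q8_32k = kv_cache_gib(32 * 1024, 1)        # ~2.5 GiB Q8 cache at 32k
print(f"weights ~{weights_gib:.1f} GiB, "
      f"fp16 20k cache ~{fp16_20k:.1f} GiB, "
      f"q8 32k cache ~{q8_32k:.1f} GiB")
```

Under these assumptions, weights plus cache land just under 24 GB in both configurations, which matches the fit described above.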

Prompt Templates

Uses Mistral V7-tekken:

<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
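The template above can be assembled programmatically; `build_prompt` below is a hypothetical helper for illustration, not part of any library.

```python
# Assemble a Mistral V7-tekken prompt from a system prompt, completed
# (user, assistant) turns, and the new user message. Illustrative only.
def build_prompt(system: str, turns: list[tuple[str, str]], user: str) -> str:
    out = f"<s>[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT]"
    for u, a in turns:
        # Each completed assistant turn is closed with </s>.
        out += f"[INST]{u}[/INST]{a}</s>"
    out += f"[INST]{user}[/INST]"
    return out

print(build_prompt("Be terse.", [("Hi", "Hello.")], "How are you?"))
```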

Original README below


Forgotten-Safeword-24B

ACADEMIC RESEARCH USE ONLY

DANGER: EXTREME CONTENT GENERATOR

Forgotten-Safeword-24B is a highly unaligned variant of Mistral-Small-24B-Instruct-2501, specifically engineered to bypass all standard AI safety protocols. This model will generate explicit, dangerous, and ethically compromised content wrapped in immersive roleplay scenarios.

Quantized Formats

Recommended Settings Provided

Intended Use

STRICTLY FOR ACADEMIC RESEARCH INTO:

  • AI safety failures
  • Content moderation bypass techniques
  • Adversarial model behavior
  • NSFW content generation patterns

Training Data

Trained on high-risk dialogues featuring:

  • Deliberate ethical constraint removal
  • Technical queries paired with explicit narrative responses
  • Gradual erosion of standard safety measures through reinforcement learning

Ethical Considerations

⚠️ CRITICAL WARNING ⚠️ THIS MODEL WILL:

  • Generate detailed instructions for illegal activities
  • Create explicit sexual content without constraints
  • Produce hate speech and discriminatory material
  • Combine technical accuracy with dangerous roleplay scenarios

By using this model, you accept full responsibility for:

  • All content generated
  • Any harm caused by its outputs
  • Legal consequences of its use

By using this model, you:

  • Acknowledge its inherently dangerous nature
  • Waive all liability from the model creator
  • Agree to use only in controlled research settings

Model Authors

  • sleepdeprived3
