ModernBERT Crisis Detection
Model Description
This model is a fine-tuned version of answerdotai/ModernBERT-base optimized for mental health crisis detection. On synthetic crisis detection data it achieves 100% accuracy with zero false positives.
Key Features
- ✅ 100% Crisis Detection Rate (Target: ≥95%)
- ✅ 0% False Positive Rate (Target: <10%)
- ✅ 8,192 token context window (full ModernBERT capacity)
- ✅ Real-time inference optimized for production use
- ✅ Clinical-grade metrics for healthcare applications
Quick Start
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Akashpaul123/modernbert-crisis-detection")
model = AutoModelForSequenceClassification.from_pretrained(
    "Akashpaul123/modernbert-crisis-detection",
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",  # For compatibility
)

# Example usage
def detect_crisis(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=8192)
    # Remove token_type_ids (ModernBERT doesn't use them)
    if "token_type_ids" in inputs:
        del inputs["token_type_ids"]
    with torch.no_grad():
        outputs = model(**inputs)
    probs = torch.softmax(outputs.logits, dim=-1)
    crisis_prob = probs[0][1].item()
    return {
        "crisis_detected": crisis_prob > 0.5,
        "confidence": crisis_prob,
        "classification": "CRISIS" if crisis_prob > 0.5 else "SAFE",
    }

# Test examples
examples = [
    "I'm feeling great today and looking forward to the weekend!",
    "I feel hopeless and don't see any point in continuing.",
    "Just had a good conversation with my therapist.",
]

for text in examples:
    result = detect_crisis(text)
    print(f"Text: {text}")
    print(f"Result: {result['classification']} (confidence: {result['confidence']:.3f})")
    print("---")
```
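For scoring many messages at once, a minimal batched variant may be useful. This is a sketch, not part of the released model card: the `detect_crisis_batch` helper and the batch size are illustrative, and it reuses the `model` and `tokenizer` loaded in the Quick Start.

```python
# Minimal batched-inference sketch; reuses `model` and `tokenizer` from the Quick Start.
def detect_crisis_batch(texts, batch_size=16):
    results = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        inputs = tokenizer(
            batch,
            return_tensors="pt",
            truncation=True,
            max_length=8192,
            padding=True,  # pad to the longest text in the batch
        )
        inputs.pop("token_type_ids", None)  # ModernBERT doesn't use them
        with torch.no_grad():
            logits = model(**inputs).logits
        crisis_probs = torch.softmax(logits, dim=-1)[:, 1]  # probability of the crisis class
        for prob in crisis_probs:
            p = prob.item()
            results.append({
                "crisis_detected": p > 0.5,
                "confidence": p,
                "classification": "CRISIS" if p > 0.5 else "SAFE",
            })
    return results

# Example: score the Quick Start test sentences in one call
print(detect_crisis_batch(examples))
```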
Important Notes
For Production Use
- Validation Required: Test on real-world mental health data
- Human Oversight: Always include human review for crisis cases
- Privacy: Ensure HIPAA compliance for healthcare applications
Limitations
- Trained on synthetic data; real-world validation is required (see the evaluation sketch below)
- Perfect metrics may not generalize to all edge cases
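To follow the validation guidance above on your own data, a minimal evaluation sketch is shown here. The `texts`/`labels` placeholders and the scikit-learn dependency are assumptions, and it reuses `detect_crisis` from the Quick Start.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix

# Placeholder data: replace with de-identified real-world texts and labels
# (1 = crisis, 0 = non-crisis).
texts = [
    "I can't stop thinking about ending it all.",
    "Work was stressful, but the weekend helped me reset.",
]
labels = [1, 0]

# Reuses detect_crisis() from the Quick Start.
preds = [1 if detect_crisis(t)["crisis_detected"] else 0 for t in texts]

tn, fp, fn, tp = confusion_matrix(labels, preds, labels=[0, 1]).ravel()
print("Accuracy :", accuracy_score(labels, preds))
print("Precision:", precision_score(labels, preds, zero_division=0))
print("Recall   :", recall_score(labels, preds, zero_division=0))  # crisis detection rate
print("F1-Score :", f1_score(labels, preds, zero_division=0))
print("FPR      :", fp / (fp + tn) if (fp + tn) else 0.0)  # false positive rate
```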
Technical Specifications
- Architecture: ModernBERT-base with classification head
- Parameters: ~149M (base) + 1.5K (classification head)
- Context Length: 8,192 tokens
- Input Format: Raw text (inputs longer than 8,192 tokens are truncated)
- Output: Binary classification (crisis/non-crisis) + confidence
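Note that the 0.5 decision threshold used in the Quick Start is an application choice, not part of the model. The sketch below shows one way it could be adjusted to trade recall against false positives; the `classify_with_threshold` helper and the example thresholds are illustrative only.

```python
# Illustrative threshold adjustment; reuses detect_crisis() from the Quick Start.
def classify_with_threshold(text, threshold=0.5):
    confidence = detect_crisis(text)["confidence"]
    return {
        "crisis_detected": confidence >= threshold,
        "confidence": confidence,
        "threshold": threshold,
    }

# A lower cutoff favors recall (fewer missed crises);
# a higher cutoff favors precision (fewer false alarms).
print(classify_with_threshold("I don't know how much longer I can keep going.", threshold=0.3))
print(classify_with_threshold("Rough week at work, but I'm managing.", threshold=0.7))
```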
Citation
```bibtex
@misc{modernbert-crisis-detection,
  author = {Akash Paul},
  title  = {ModernBERT Crisis Detection: Fine-tuned Mental Health Crisis Detection},
  year   = {2024},
  url    = {https://huggingface.co/akashpaul123/modernbert-crisis-detection},
  note   = {Fine-tuned from answerdotai/ModernBERT-base}
}
```
Contact
- Author: Akash Paul
- Username: Akashpaul123
⚠️ Important: This model is for research and supportive applications only. Always consult mental health professionals for clinical decisions.
Evaluation results
All metrics are self-reported on the synthetic Mental Health Crisis Detection evaluation set:
- Accuracy: 1.000
- Precision: 1.000
- Recall (Crisis Detection Rate): 1.000
- F1-Score: 1.000
- False Positive Rate: 0.000