---
datasets:
- nbertagnolli/counsel-chat
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
# Suicide and Mental Health Support LLaMA

This is a **fine-tuned LLaMA-based** model designed to (1) **detect suicidal or self-harm risk** in text, and (2) **provide a short therapeutic-style reply** when such risk is detected. We combined multiple datasets to train this model, including:

- **Reddit-based** suicide detection data (r/SuicideWatch, r/depression, r/teenagers),
- **Twitter** suicidal-intent classification data,
- **CounselChat**: a dataset of mental-health counseling Q&A (see the loading sketch after this list),
- **PAIR**: short counseling interactions with high- and medium-quality reflections.
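
As a quick illustration, the CounselChat data referenced in this card's metadata can be loaded from the Hugging Face Hub. This is a minimal sketch; the Reddit, Twitter, and PAIR corpora are not linked in this card, so their loading paths are omitted.

```python
from datasets import load_dataset

# CounselChat, as referenced in this card's metadata. The Reddit,
# Twitter, and PAIR corpora are not linked here.
counsel_chat = load_dataset("nbertagnolli/counsel-chat")
print(counsel_chat)              # inspect the available splits and columns
print(counsel_chat["train"][0])  # one counseling Q&A record (assuming a "train" split)
```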

> **DISCLAIMER**: This model is **not** a substitute for professional mental-health services or emergency intervention. If you or someone you know is in crisis, **seek professional help** (e.g., call emergency services or a hotline such as `988` in the US). This model may be **incorrect** or incomplete. Use responsibly, and see **Limitations** below.

---

## Model Details

- **Base Model**: [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct), fine-tuned with the Unsloth library.
- **Parameter-Efficient Fine-Tuning**: We used **LoRA** adapters and 4-bit quantization to reduce GPU memory usage (sketched below).
- **Data**:
  1. **Suicide detection** (Reddit & Twitter) – labeled as “suicidal” vs. “non-suicidal.”
  2. **Therapeutic Q&A** (CounselChat & PAIR) – used to produce empathetic, reflective responses.
- **Intended Use**:
  - Research on suicidal-ideation detection and mental-health conversation modeling.
  - Demonstration or proof-of-concept work.
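
For concreteness, a LoRA-plus-4-bit setup along these lines might look like the sketch below. The rank, alpha, dropout, and target modules are illustrative assumptions, not settings recorded in this card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization of the base model to cut GPU memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections; the values below are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```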

---

## Training Approach

1. **Data Preprocessing**: We unified the labels across sources, tagging suicidal posts as `"suicidal"` and non-suicidal posts as `"non-suicidal"`.
2. **Multi-Task Instruction**: We used short prompts for the classification task and Q&A-style prompts for the therapy task (see the sketch after this list).
3. **Oversampling**: To keep the model from classifying everything as “suicidal,” we oversampled the therapy data.
4. **Hyperparameters**:
   - Batch Size: 2
   - Max Steps: 60 (example short run)
   - Learning Rate: 2e-4
   - Mixed precision: fp16 or bf16, depending on the GPU
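
Below is a minimal sketch of how the two instruction formats and the hyperparameters above could be wired together. The exact training prompt wording is an assumption (it mirrors the prompts in the Usage section), as are any trainer fields beyond the values listed above.

```python
from transformers import TrainingArguments

# Hypothetical templates mirroring the prompts shown in Usage below.
def format_classification(post: str, label: str) -> str:
    return f"Determine if the following text is suicidal:\n{post}\nAnswer: {label}"

def format_therapy(question: str, answer: str) -> str:
    return f"Respond like a therapist:\n{question}\nResponse: {answer}"

# Hyperparameters as listed above; output_dir and logging_steps are illustrative.
training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=2,
    max_steps=60,
    learning_rate=2e-4,
    bf16=True,  # or fp16=True, depending on the GPU
    logging_steps=10,
)
```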

---

## Usage

**Classification & Therapeutic Response Example**:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# or: from unsloth import FastLanguageModel, if you fine-tuned with Unsloth

model_id = "path/to/this-model"  # replace with this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Tokenize the prompt, generate, and return only the newly generated text.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)

text = "Life is too painful. I'm done. I want to end it."

# 1) Classify
classification = generate("Determine if the following text is suicidal:\n" + text)
print("Classification:", classification)  # e.g., "suicidal"

# 2) Therapeutic response
response = generate("Respond like a therapist:\n" + text, max_new_tokens=256)
print("Therapy-Style Reply:", response)
```

## Limitations & Caveats

1. **Not a Medical Professional**: This model does not replace mental-health professionals.
2. **Potential for Harmful or Inaccurate Content**: Large language models may produce misleading or harmful text.
3. **Biased Data**: Reddit, Twitter, and crowd-annotated counseling data can carry biases and incomplete perspectives.
4. **Over- or Under-Classification**: The model might incorrectly flag benign text as suicidal, or fail to detect genuine self-harm risk.

## Ethical and Responsible Use

- **Self-Harm & Crisis**: If you suspect someone is in crisis, direct them to professional hotlines or emergency resources.
- **Data Privacy**: The training data may include personal text from Reddit and Twitter. We have made efforts to remove personally identifying information, but please use this model responsibly.

## Thank You

Thank you for checking out our model. We hope it encourages research into safe, responsible, and helpful approaches to mental-health assistance. Please reach out or open an issue if you have suggestions or concerns.