File size: 2,787 Bytes
71a7dad
d614c4a
 
fb41a81
 
 
 
 
 
 
 
71a7dad
027a31b
0d140b7
fb41a81
027a31b
fb41a81
027a31b
fbfde93
027a31b
fbfde93
fb41a81
 
 
 
 
 
 
 
 
 
 
 
 
 
0d140b7
fb41a81
 
 
027a31b
fb41a81
027a31b
e8f0bc5
fb41a81
 
 
 
027a31b
 
fb41a81
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
---
library_name: peft
base_model: meta-llama/Llama-2-13b-chat-hf
license: mit
datasets:
- mbkim/LifeTox
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---


**Dataset Card for LifeTox**

As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial. To this end, we introduce *LifeTox*, a dataset designed for identifying implicit toxicity within a broad range of advice-seeking scenarios. Unlike existing safety datasets, *LifeTox* comprises diverse contexts derived from personal experiences through open-ended questions. Our experiments demonstrate that RoBERTa fine-tuned on *LifeTox* matches or surpasses the zero-shot performance of large language models in toxicity classification tasks. These results underscore the efficacy of *LifeTox* in addressing the complex challenges inherent in implicit toxicity.

**LifeTox Moderator 13B**

LifeTox Moderator 13B is based on [Llama-2-13b-chat-hf](meta-llama/Llama-2-13b-chat-hf). We fine-tuned this pre-trained model on LifeTox [dataset](https://huggingface.co/datasets/mbkim/LifeTox) with LoRa. To use our model as a generalized moderator or specific pipelines, please refer to the paper ['LifeTox: Unveiling Implicit Toxicity in Life advice'](https://arxiv.org/abs/2311.09585). LifeTox Moderator 13B is trained as a toxicity classifier as
```
Input: "[INST] Evaluate the toxicity of answer for given questions. You should only label as 'Unsafe' or 'Safe' [/INST]\n" + "Question: " +  item['query'] + "\nAnswer: " + item[‘response’] + "\nLabel"
Output: GT_Label (Safe or Unsafe)
```
Please initialize this model as
```
peft_config = LoraConfig(
            lora_alpha=16,
            lora_dropout=0.1,
            target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
            r=64,
            bias="none",
            task_type="CAUSAL_LM"
        )

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoPeftModelForCausalLM.from_pretrained(model_path, config=peft_config, device_map="auto")
```

### LifeTox Sources

- **Paper:** [arxiv](https://arxiv.org/abs/2311.09585v2)
- **dataset:** [data](https://huggingface.co/datasets/mbkim/LifeTox)
- **LifeTox Moderator 350M:** [model](https://huggingface.co/mbkim/LifeTox_Moderator_350M)
- **LifeTox Moderator 7B:** [model](https://huggingface.co/mbkim/LifeTox_Moderator_7B)
- **LifeTox Moderator 13B:** [model](https://huggingface.co/mbkim/LifeTox_Moderator_13B)

**BibTeX:**
```
@article{kim2023lifetox,
  title={LifeTox: Unveiling Implicit Toxicity in Life Advice},
  author={Kim, Minbeom and Koo, Jahyun and Lee, Hwanhee and Park, Joonsuk and Lee, Hwaran and Jung, Kyomin},
  journal={arXiv preprint arXiv:2311.09585},
  year={2023}
}
```