ZetangForward committed
Commit 11d896b · verified · 1 Parent(s): 239c14c

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -10,11 +10,11 @@ pipeline_tag: text-classification
 tags:
 - toxic_classification
 ---
-# SpanCNN Model for Toxic Text Classification
+# SegmentCNN Model for Toxic Text Classification
 
 ## Overview
 
-The SpanCNN model is designed for toxic text classification, distinguishing between safe and toxic content. This model is part of the research presented in the paper titled [CMD: A Framework for Context-aware Model Self-Detoxification](https://arxiv.org/abs/2308.08295).
+The SegmentCNN model, a.k.a. SpanCNN, is designed for toxic text classification, distinguishing between safe and toxic content. This model is part of the research presented in the paper titled [CMD: A Framework for Context-aware Model Self-Detoxification](https://arxiv.org/abs/2308.08295).
 
 ## Model Details
 
@@ -25,7 +25,7 @@ The SpanCNN model is designed for toxic text classification, distinguishing betw
 
 ## Usage
 
-To use the SpanCNN model for toxic text classification, follow the example below:
+To use the SegmentCNN model for toxic text classification, follow the example below:
 
 ```python
 from transformers import pipeline
```
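
The second hunk ends at the import line, so the full usage example is not visible in this diff. For orientation, here is a minimal sketch of how a model with `pipeline_tag: text-classification` is typically invoked via the `transformers` pipeline; the model id is a placeholder (the actual repository name is not shown in the diff), and the label names and scores are assumptions, not outputs of this model.

```python
from transformers import pipeline

# Placeholder model id -- the actual repository name is not shown in this diff.
MODEL_ID = "your-namespace/segmentcnn-toxic-classifier"

# The model card declares pipeline_tag: text-classification, so the standard
# text-classification pipeline applies.
classifier = pipeline("text-classification", model=MODEL_ID)

# Per the card, the model distinguishes safe from toxic content.
print(classifier("Have a great day!"))
# Hypothetical output: [{'label': 'safe', 'score': 0.99}]
```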