seanius committed
Commit f530d79 · verified · 1 Parent(s): 0be062a

Update README.md

Files changed (1): README.md (+15 -3)
README.md CHANGED
@@ -1,3 +1,15 @@
- ---
- license: mit
- ---
+ ONNX model - a fine-tuned version of DistilBERT which can be used to classify text as one of:
+ - neutral, offensive_language, harmful_behaviour, hate_speech
+
+ The model was trained using the [csfy tool](https://github.com/mrseanryan/csfy) and the dataset [seanius/toxic-or-neutral-text-labelled](https://huggingface.co/datasets/seanius/toxic-or-neutral-text-labelled).
+
+ The base model (distilbert-base-uncased) is required.
+
+ For an example of how to run the model, see the [csfy tool](https://github.com/mrseanryan/csfy).
+
+ The output is a number indicating the class - it can be decoded via the label_mapping.json file.
+
+
+ ---
+ license: mit
+ ---
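The decoding step described above (a class index mapped back to a label name via label_mapping.json) can be sketched as follows. This is a minimal, hypothetical example: the mapping shown here is an assumption matching the four classes listed in the README (the real `label_mapping.json` shipped with the model defines the actual index order), and the logits array stands in for the output of running the ONNX model (e.g. with onnxruntime) on one input text.

```python
import numpy as np

def decode_label(logits: np.ndarray, mapping: dict) -> str:
    """Pick the highest-scoring class index and map it to its label name."""
    idx = int(np.argmax(logits))
    return mapping[str(idx)]

# Hypothetical mapping - in practice, load the real one from the repo:
#   mapping = json.load(open("label_mapping.json"))
mapping = {
    "0": "neutral",
    "1": "offensive_language",
    "2": "harmful_behaviour",
    "3": "hate_speech",
}

# Stand-in logits, as the ONNX model would produce for one input text.
logits = np.array([0.1, 2.5, 0.3, 0.2])
print(decode_label(logits, mapping))  # index 1 is the argmax here
```

Keeping the decode step as a pure function of (logits, mapping) means it can be reused unchanged whatever runtime produces the logits.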