Update README.md
README.md CHANGED
@@ -1,3 +1,10 @@
+---
+language:
+- en
+base_model:
+- s-nlp/roberta_toxicity_classifier
+pipeline_tag: text-classification
+---
 # Student Chat Toxicity Classifier
 
 This model is a fine-tuned version of the `s-nlp/roberta_toxicity_classifier` and is designed to classify text-based messages in student conversations as **toxic** or **non-toxic**. It is specifically tailored to detect and flag malpractice suggestions, unethical advice, or any toxic communication while encouraging ethical and positive interactions among students.
@@ -71,4 +78,4 @@ def predict_toxicity(text):
 # Test the model
 message = "You can copy answers during the exam."
 prediction = predict_toxicity(message)
-print(f"Message: {message}\nPrediction: {prediction}")
+print(f"Message: {message}\nPrediction: {prediction}")
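With the new metadata declaring `pipeline_tag: text-classification`, the most direct way to try the classifier described above is the Transformers `pipeline` helper. Below is a minimal sketch, assuming the model is published under a Hub id like `your-username/student-chat-toxicity-classifier` (a placeholder, since the actual repo id is not shown in this excerpt).

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub id of this fine-tuned model.
MODEL_ID = "your-username/student-chat-toxicity-classifier"

# The metadata declares pipeline_tag: text-classification, so the generic
# text-classification pipeline should load the tokenizer and model directly.
classifier = pipeline("text-classification", model=MODEL_ID)

result = classifier("You can copy answers during the exam.")
print(result)  # e.g. [{'label': 'toxic', 'score': 0.98}]; label names depend on the model's config
```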
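The test snippet in the second hunk calls `predict_toxicity`, whose definition (around README line 71) falls outside this excerpt. A possible shape for that helper, assuming a standard `AutoModelForSequenceClassification` checkpoint under the same placeholder repo id, is sketched below; the real label names come from the model's `id2label` config.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "your-username/student-chat-toxicity-classifier"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def predict_toxicity(text):
    # Tokenize and run a single forward pass without tracking gradients.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Pick the highest-scoring class and map it to its configured label
    # (the exact label strings, e.g. toxic vs. non-toxic, come from the model config).
    predicted_class = int(logits.argmax(dim=-1).item())
    return model.config.id2label[predicted_class]

# Mirrors the usage shown in the second hunk above.
message = "You can copy answers during the exam."
prediction = predict_toxicity(message)
print(f"Message: {message}\nPrediction: {prediction}")
```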