---
license: mit
language:
- ru
tags:
- russian
- classification
- toxicity
widget:
- text: Нелепые лохи недовольны всегда и всем
---

BERT-based classifier for detecting toxic comments in Russian, fine-tuned from [rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).

It was trained on two merged datasets:

- [Russian Language Toxic Comments from 2ch.hk and pikabu.ru](https://www.kaggle.com/datasets/blackmoon/russian-language-toxic-comments)
- [Toxic Russian Comments from ok.ru](https://www.kaggle.com/datasets/alexandersemiletov/toxic-russian-comments)

The merged dataset was split into train, validation, and test sets in an 80/10/10 proportion.
The metrics obtained on the test set are as follows:
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0            | 0.9827    | 0.9827 | 0.9827   | 21216   |
| 1            | 0.9272    | 0.9274 | 0.9273   | 5054    |
| accuracy     |           |        | 0.9720   | 26270   |
| macro avg    | 0.9550    | 0.9550 | 0.9550   | 26270   |
| weighted avg | 0.9720    | 0.9720 | 0.9720   | 26270   |
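The macro and weighted averages in the table follow directly from the per-class scores and supports. A short check (values copied from the table above):

```python
# Recompute the aggregate rows of the metrics table from the
# per-class f1-scores and supports reported above.
per_class = {0: {"f1": 0.9827, "support": 21216},
             1: {"f1": 0.9273, "support": 5054}}
total = sum(c["support"] for c in per_class.values())  # 26270

# Macro average: unweighted mean over classes.
macro_f1 = sum(c["f1"] for c in per_class.values()) / len(per_class)
# Weighted average: mean weighted by class support.
weighted_f1 = sum(c["f1"] * c["support"] for c in per_class.values()) / total

print(round(macro_f1, 4))     # 0.955
print(round(weighted_f1, 4))  # 0.972
```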

### Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

PATH = 'khvatov/ru_toxicity_detector'
tokenizer = AutoTokenizer.from_pretrained(PATH)
model = AutoModelForSequenceClassification.from_pretrained(PATH)

# Move the model to GPU if one is available, otherwise stay on CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)


def get_toxicity_probs(text):
    """Return [p(non-toxic), p(toxic)] for a single text."""
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device)
        proba = torch.nn.functional.softmax(model(**inputs).logits, dim=1).cpu().numpy()
    return proba[0]


TEXT = "Марк был хороший"  # "Mark was good", a non-toxic example
print(f'text = {TEXT}, probs={get_toxicity_probs(TEXT)}')
# text = Марк был хороший, probs=[0.9940585  0.00594147]
```
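As the example output shows, index 0 is the non-toxic class and index 1 the toxic class. A small helper (hypothetical, not part of the model card) can map the probability pair to a label:

```python
# Hypothetical helper: convert the [p(non-toxic), p(toxic)] pair
# returned by get_toxicity_probs into a human-readable label.
def to_label(probs, threshold=0.5):
    # Flag the text as toxic when the toxic-class probability
    # (index 1, per the example output above) reaches the threshold.
    return "toxic" if probs[1] >= threshold else "non-toxic"

print(to_label([0.9940585, 0.00594147]))  # non-toxic
```

The 0.5 threshold is an assumption; it can be raised or lowered to trade precision against recall.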

### Train
The model was trained with the Adam optimizer, a learning rate of 2e-5, and a batch size of 32 for 3 epochs.
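A minimal sketch of that training setup, assuming a plain PyTorch loop; a tiny linear model and random tensors stand in for rubert-tiny2 and the toxicity dataset, which are not reproduced here:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
X = torch.randn(256, 16)          # stand-in for tokenized comments
y = torch.randint(0, 2, (256,))   # 0 = non-toxic, 1 = toxic
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Linear(16, 2)          # stand-in for the BERT classifier
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):            # 3 epochs, as stated above
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```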