Commit 59f34ff1
Parent(s): 321fade
update source
README.md CHANGED
@@ -5,9 +5,6 @@ task_categories:
 - token-classification
 size_categories:
 - 1M<n<10M
-datasets:
-- tomekkorbak/pile-toxicity-balanced2
-- datasets/thai_toxicity_tweet
 language:
 - ar
 - es
@@ -27,7 +24,7 @@ language:
 - sk
 - gu
 - he
-- af
+- af
 - te
 - ro
 - lv
@@ -71,7 +68,7 @@ tags:
 
 About 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but completely forgot the original source of these datasets...
 All I know is that I looked everywhere: HuggingFace, research papers, GitHub, Kaggle, and Google search. I even fetched 20K+ tweets using the Twitter API.
-Today (6/28/2023) I came across
+Today (6/28/2023) I came across two newer HuggingFace datasets, so I remembered to credit them at the bottom of the page.
 
 
 The deduplicated training data alone consists of 2,880,230 rows of comments and messages. Among these rows, 416,457 are classified as toxic, while the remaining 2,463,773 are considered neutral. Below is a table to illustrate the data composition:
@@ -140,6 +137,12 @@ Each CSV file has two columns: `text` and `is_toxic`.
 - Welsh
 
 
+<br>
+
+Known datasets:
+- tomekkorbak/pile-toxicity-balanced2
+- datasets/thai_toxicity_tweet
+
 <br>
 
 Have fun modelling!
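Since the README states that each CSV file has two columns, `text` and `is_toxic`, the label balance it reports is easy to verify locally. A minimal sketch with pandas, assuming a downloaded copy of one of the CSVs; the file name `train.csv` is a placeholder, and the exact label values in `is_toxic` are not specified by the commit, so the snippet just prints whatever distribution it finds:

```python
import pandas as pd

# Each CSV in this dataset has two columns: `text` and `is_toxic`.
# "train.csv" is a hypothetical file name -- substitute the actual file.
df = pd.read_csv("train.csv")

# Inspect the label distribution; the README reports 416,457 toxic
# vs. 2,463,773 neutral rows (2,880,230 total after deduplication).
print(df["is_toxic"].value_counts())
print(df["is_toxic"].value_counts(normalize=True))
```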