FredZhang7 committed
Commit 406ed73 · 1 Parent(s): 11331d7

Update README.md

Files changed (1): README.md (+4 −2)
README.md CHANGED
@@ -149,18 +149,20 @@ Supported languages:
 ### Original Source?
 Around 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but completely forgot the original source of these datasets...
 All I remember is that I downloaded datasets from everywhere I could: HuggingFace, research papers, GitHub, Kaggle, SurgeAI, and Google search. I even fetched 20K+ tweets using the Twitter API.
-Today (6/28/2023) I came across two newer HuggingFace datasets, so I remembered to credit them below.
+Recently, I came across two newer HuggingFace datasets, so I remembered to credit them below.
 
 Known datasets:
 - tomekkorbak/pile-toxicity-balanced2
 - datasets/thai_toxicity_tweet
+- inspection-ai/japanese-toxic-dataset (GitHub)
 
 <br>
 
 ### Limitations
-Some limitations include:
+Limitations include:
 - All labels were rounded to the nearest integer. If a text was classified as 46%-54% toxic, the text itself might not be noticeably toxic or neutral.
 - There were disagreements among moderators on some labels, due to ambiguity and lack of context.
 - When there are only URL(s), emojis, or anything that's unrecognizable as natural language in the "text" column, the corresponding "lang" is "unkown".
+- The validation data is not representative of the training data.
 
  Have fun modelling!
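
Given the limitations above, downstream users may want to drop rows whose language could not be detected before modelling. A minimal pandas sketch on a toy frame: the card only names the "text" and "lang" columns (and spells the unidentified-language value "unkown"), so the `is_toxic` column name here is an assumption, not something the card confirms.

```python
import pandas as pd

# Toy stand-in for the real dataset. "text" and "lang" are documented
# columns; "is_toxic" is a hypothetical name for the rounded 0/1 label.
df = pd.DataFrame({
    "text": ["you are awful", "https://example.com", "nice work!"],
    "lang": ["en", "unkown", "en"],
    "is_toxic": [1, 0, 0],
})

# Drop rows whose language could not be identified; per the card, the
# "lang" value for such rows is "unkown" (URLs, emojis, etc.).
clean = df[df["lang"] != "unkown"].reset_index(drop=True)
print(len(clean))  # 2
```

Note that because the released labels are already rounded to integers, borderline (46%-54%) cases cannot be filtered out this way; only the language-based filter is recoverable from the published columns.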