# Model Detail Information

### 1. Overview

This model is trained to detect the presence of harmful expressions in Korean sentences. It performs binary classification to determine whether a given sentence contains hateful expressions or is a general, non-hateful sentence. The model is designed for the AI task of 'text classification' and was fine-tuned on the 'TTA-DQA/hate_sentence' dataset.

The classification labels are:
- "0": "no_hate"
- "1": "hate"
### 2. Training Information

- Base Model: KcELECTRA (a pre-trained Korean language model based on ELECTRA)
- Source: [beomi/KcELECTRA-base-v2022](https://huggingface.co/beomi/KcELECTRA-base-v2022)
- Model Type: ELECTRA encoder fine-tuned for sequence classification
- Pre-training (Korean): approximately 17 GB (over 180 million sentences)
- Fine-tuning (hate dataset): approximately 22.3 MB (TTA-DQA/hate_sentence)
- Learning Rate: 5e-6
- Weight Decay: 0.01
- Epochs: 20
- Batch Size: 16
- Data Loader Workers: 2
- Tokenizer: BertWordPieceTokenizer
- Model Size: approximately 512 MB
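The hyperparameters above map directly onto the Hugging Face `Trainer` API. The following is a minimal, hedged sketch, not the authors' actual training script: the 'text'/'label' column names, the 'train' split, and `max_length=128` are assumptions about the 'TTA-DQA/hate_sentence' layout.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Base model and label mapping taken from this model card.
tokenizer = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base-v2022")
model = AutoModelForSequenceClassification.from_pretrained(
    "beomi/KcELECTRA-base-v2022",
    num_labels=2,
    id2label={0: "no_hate", 1: "hate"},
    label2id={"no_hate": 0, "hate": 1},
)

# Assumed dataset layout: a 'train' split with 'text' and 'label' columns.
dataset = load_dataset("TTA-DQA/hate_sentence")

def tokenize(batch):
    # max_length=128 is an assumption; the card does not state a sequence length.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="hate-detection-kcelectra",
    learning_rate=5e-6,              # from the card
    weight_decay=0.01,               # from the card
    num_train_epochs=20,             # from the card
    per_device_train_batch_size=16,  # from the card
    dataloader_num_workers=2,        # from the card
)

trainer = Trainer(model=model, args=args, train_dataset=dataset["train"])
trainer.train()
```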
### 3. Requirements

To use this model, ensure the following dependencies are installed:
- pytorch ~= 1.8.0
- transformers ~= 4.11.3
- emoji ~= 0.6.0
- soynlp ~= 0.0.493
### 4. Quick Start

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned binary classifier and its tokenizer.
# AutoModelForSequenceClassification attaches the classification head
# used for hate detection (plain AutoModel would drop it).
tokenizer = AutoTokenizer.from_pretrained("TTA-DQA/HateDetection-KcElectra-FineTuning")
model = AutoModelForSequenceClassification.from_pretrained("TTA-DQA/HateDetection-KcElectra-FineTuning")
```
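A short inference sketch building on the Quick Start above (the Korean sample sentence is illustrative only, not from the model card):

```python
import torch

# Tokenize a sample sentence and run it through the classifier.
inputs = tokenizer("이 문장은 평범한 문장입니다.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label 0 = no_hate, 1 = hate (per the model card).
pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, str(pred)))
```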
### 5. Citation

- This model was developed as part of the Quality Validation Project for Super-Giant AI Training Data (305-2100-2131, 2024 Quality Validation for Super-Giant AI Training).
### 6. Bias, Risks, and Limitations

- The determination of what counts as a harmful expression may vary with language, culture, application context, and personal perspective.
- Results may reflect biases or provoke controversy because evaluating harmful content is inherently subjective.
- This model's outputs should not be treated as a definitive standard for identifying harmful expressions.
### 7. Results

- Type: binary classification (text-classification)
- F1-score: 0.9928
- Accuracy: 0.9928