---
license: apache-2.0
datasets:
- dair-ai/emotion
language:
- en
metrics:
- accuracy
- f1
- precision
- recall
base_model:
- albert/albert-large-v2
pipeline_tag: text-classification
---

# Sentiment classification using ALBERT-large-v2

## Model Description

This model is a fine-tuned version of [ALBERT-large-v2](https://huggingface.co/albert/albert-large-v2) for **emotion sentiment classification**. It detects six emotional categories in text: **Anger**, **Disgust**, **Fear**, **Happiness**, **Sadness**, and **Surprise**. With 94.31% accuracy on its evaluation set (see Evaluation below), it is suitable for real-world applications such as emotion detection, content moderation, and sentiment analysis.

## How to Get Started

Use the code below to get started with the model.

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
emotion_classifier = pipeline(
    "text-classification",
    model="SandeepVvigneshwar/sentiment-classification-albert-large-v2",
)

text = "I am so happy to be part of this project!"
emotion = emotion_classifier(text)
print(emotion)  # e.g. [{'label': ..., 'score': ...}]
```
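
By default the pipeline returns only the top label. In recent versions of `transformers`, passing `top_k=None` at call time returns a score for every class; a minimal sketch:

```python
from transformers import pipeline

emotion_classifier = pipeline(
    "text-classification",
    model="SandeepVvigneshwar/sentiment-classification-albert-large-v2",
)

# top_k=None asks the pipeline for the score of every emotion class,
# not just the highest-scoring one.
all_scores = emotion_classifier("I am so happy to be part of this project!", top_k=None)
for item in all_scores:
    print(f"{item['label']}: {item['score']:.4f}")
```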

## Requirements

- Python 3.x
- Hugging Face `transformers` library (`pip install transformers`)
- PyTorch or TensorFlow

## Training Data

The model was fine-tuned on the [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset.

### Training Hyperparameters

The following hyperparameters were used during fine-tuning (see the `TrainingArguments` sketch after this list):

- learning_rate = 2e-5
- per_device_train_batch_size = 8
- per_device_eval_batch_size = 8
- gradient_accumulation_steps = 2
- num_train_epochs = 8
- weight_decay = 0.01
- fp16 = True
- metric_for_best_model = "f1"
- dataloader_num_workers = 4
- max_grad_norm = 1.0
- lr_scheduler_type = "linear"
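
Expressed as a Hugging Face `TrainingArguments` configuration, the settings above would look roughly like this. Note that `output_dir`, `eval_strategy`, `save_strategy`, and `load_best_model_at_end` are assumptions, not taken from this card (`metric_for_best_model` implies periodic evaluation and best-model selection); `eval_strategy` is the newer name for `evaluation_strategy`:

```python
from transformers import TrainingArguments

# A sketch of the reported training configuration. Values not listed in the
# card are marked as assumed.
training_args = TrainingArguments(
    output_dir="albert-large-v2-emotion",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=8,
    weight_decay=0.01,
    fp16=True,
    metric_for_best_model="f1",
    dataloader_num_workers=4,
    max_grad_norm=1.0,
    lr_scheduler_type="linear",
    eval_strategy="epoch",        # assumed; needed for metric_for_best_model
    save_strategy="epoch",        # assumed
    load_best_model_at_end=True,  # assumed
)
```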

## Limitations

- **Domain-specific text:** the model may not perform well on specialized or highly technical text.
- **Languages:** the model was fine-tuned on English-language data and may not generalize well to other languages.
- **Input length:** the model performs best on shorter inputs; performance may vary on longer, more complex texts.

## Evaluation

Results on the evaluation set:

| Metric        | Value   |
|---------------|---------|
| **Loss**      | 0.08795 |
| **Accuracy**  | 94.31%  |
| **F1-score**  | 94.39%  |
| **Precision** | 94.99%  |
| **Recall**    | 94.31%  |
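
A `compute_metrics` helper along these lines would produce the four metrics above when passed to a `Trainer`; this is a sketch, and the weighted averaging mode is an assumption not stated in the card:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Sketch of a metrics function for Trainer(compute_metrics=...).
# average="weighted" is assumed; the card does not state the averaging mode.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```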