# Emotion Detection Model for MindPadi (emotion_model)
This model is part of the MindPadi ecosystem, a mental health chatbot designed to offer empathetic, context-aware responses. `emotion_model` is a transformer-based sequence classification model trained to detect a range of emotional states from user input. It helps personalize chatbot responses by understanding the emotional tone of each message.
## Model Summary
- Task: Emotion classification
- Architecture: Transformer-based (likely BERT or DistilBERT)
- Labels: `happy`, `sad`, `angry`, `neutral`, `fearful`, `disgust`, `surprised`, etc. (the exact index order can be inspected with the snippet below)
- Framework: Hugging Face Transformers (PyTorch backend)
- Use Case: Core emotion recognition module in `app/chatbot/emotion.py`
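The authoritative index-to-label mapping lives in `config.json`. Assuming the model is published on the Hugging Face Hub as `mindpadi/emotion_model` (as in the usage example further down), it can be inspected like this:

```python
from transformers import AutoConfig

# Inspect the label mapping shipped with the model; the printed order is
# whatever the training run wrote to config.json.
config = AutoConfig.from_pretrained("mindpadi/emotion_model")
print(config.id2label)  # e.g. {0: "happy", 1: "sad", ...} (illustrative)
```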
## Intended Use
### Primary Use Cases
- Detect user emotions in chat messages.
- Adjust response tone and therapy prompts in MindPadi.
- Support emotional trend tracking in mood analytics.
### Not Recommended For
- Clinical diagnosis or treatment decisions.
- Emotion detection in highly formal or technical language (e.g., legal, medical).
- Non-English inputs (English-only training data).
## Training Details
- Training Script: `training/train_emotion_model.py`
- Datasets: A mix of publicly available emotion corpora (e.g., GoEmotions) and proprietary datasets stored in `training/datasets/`
- Preprocessing:
  - Filtered for offensive language and rebalanced to reduce class imbalance.
  - Tokenized with `AutoTokenizer` from Hugging Face Transformers.
- Hyperparameters (a fine-tuning sketch follows this list):
  - Epochs: ~4–6
  - Batch size: 16–32
  - Learning rate: 2e-5 to 3e-5
  - Loss function: CrossEntropyLoss
  - Optimizer: AdamW
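For orientation, here is a minimal fine-tuning sketch in the spirit of the settings above. It is not the contents of `training/train_emotion_model.py`: the base checkpoint, the public `dair-ai/emotion` stand-in dataset, the label count, and the exact hyperparameter values are illustrative assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "distilbert-base-uncased"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=6)

# Public stand-in for the corpora under training/datasets/
dataset = load_dataset("dair-ai/emotion")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="emotion_model",
    num_train_epochs=5,              # ~4-6 in the card above
    per_device_train_batch_size=16,  # 16-32
    learning_rate=2e-5,              # 2e-5 to 3e-5
    weight_decay=0.01,
)

# Trainer uses AdamW by default, and a single-label classification head
# applies cross-entropy loss internally, matching the settings listed above.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```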
## Evaluation
- Metrics: Accuracy, F1-score (micro, macro), confusion matrix (see the scoring sketch after this list)
- Evaluation Script: `training/evaluate_model.py`
- Performance:
  - Accuracy: ~87%
  - Macro F1: ~85%
  - Robust across common emotional states such as `sad`, `happy`, and `angry`
- Visualization: See `lstm_accuracy_bert.png` for comparisons
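The reported numbers come from `training/evaluate_model.py`. As a rough illustration of how they can be reproduced, the sketch below scores predictions with scikit-learn; the two example texts and their labels are placeholders, not the real test split.

```python
import torch
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mindpadi/emotion_model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Placeholder held-out examples; indices must follow the model's id2label order.
texts = ["I can't stop smiling today!", "Everything feels pointless lately."]
true_labels = [0, 1]

inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
preds = logits.argmax(dim=-1).tolist()

print("Accuracy:", accuracy_score(true_labels, preds))
print("Macro F1:", f1_score(true_labels, preds, average="macro"))
print("Confusion matrix:\n", confusion_matrix(true_labels, preds))
```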
## Files
The model directory includes:
| File | Purpose |
|---|---|
| `config.json` | Model architecture configuration |
| `model.safetensors` | Trained model weights |
| `tokenizer.json`, `vocab.txt` | Tokenizer config |
| `merges.txt` (if BPE-based) | Byte-pair encoding rules |
| `checkpoint-*/` (optional) | Intermediate training checkpoints |
## Example Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "mindpadi/emotion_model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "I feel so overwhelmed and tired."
inputs = tokenizer(text, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

predicted_class = torch.argmax(outputs.logits, dim=1).item()
print("Predicted emotion class:", predicted_class)
print("Predicted emotion label:", model.config.id2label[predicted_class])
```
## Integration
Integrated in (a hypothetical wrapper sketch follows this list):

- `app/chatbot/emotion.py`: Emotion detection during each chat turn.
- `app/utils/analytics.py`: Aggregates emotions for weekly mood charts.
- LangGraph: Used in flow-state personalization nodes.
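As a rough idea of what that integration can look like, here is a hypothetical wrapper. The function name `detect_emotion`, the tone mapping, and the overall structure are illustrative and not taken from the actual `app/chatbot/emotion.py`.

```python
from transformers import pipeline

# Hypothetical helper in the spirit of app/chatbot/emotion.py.
_classifier = pipeline("text-classification", model="mindpadi/emotion_model")

def detect_emotion(message: str) -> str:
    """Return the top emotion label for a single chat turn."""
    return _classifier(message)[0]["label"]

# Example: bias the reply tone before response generation.
emotion = detect_emotion("I had a rough day and nothing went right.")
tone = {"sad": "gentle", "angry": "de-escalating"}.get(emotion, "neutral")
print(emotion, "->", tone)
```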
## Limitations
- Bias: May inherit cultural or gender biases from training data.
- Language: English only.
- False Positives: Sarcasm or ambiguous text may confuse predictions.
- Not Clinical: Should not be relied upon for medical-level emotional assessments.
## Ethical Considerations
- MindPadi informs users that they are interacting with AI.
- Emotion analysis is used only to guide and personalize chatbot responses.
- All usage must respect user privacy (see `app/tools/encryption.py` for encryption methods).
## License
MIT License. You are free to use, modify, and distribute the model with attribution.
## Contact
- Project: MindPadi Mental Health Chatbot
- Author: MindPadi Team
- Email: [[email protected]]
- GitHub: [github.com/mindpadi]
Last updated: May 2025