# XLM-R Multi-Emotion Classifier

## Mission Statement
The XLM-R Multi-Emotion Classifier is built to understand human emotions across multiple languages, helping researchers, developers, and businesses analyze sentiment in text at scale.
From social media monitoring to mental health insights, this model is designed to decode emotions with accuracy and fairness.
## Vision

Our goal is to create an AI-powered emotion recognition model that:

- Understands emotions across cultures and languages
- Bridges the gap between AI and human psychology
- Empowers businesses, researchers, and developers to extract valuable insights from text

## Model Overview

- **Model Name:** msgfrom96/xlm_emo_multi
- **Architecture:** XLM-RoBERTa (multilingual transformer)
- **Task:** Multi-label emotion classification
- **Languages:** English, Arabic
- **Dataset:** SemEval-2018 Task 1: Affect in Tweets

The model predicts multiple emotions per text using multi-label classification. It can recognize the following emotions: Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise, Trust, Love, Optimism, and Pessimism.
## How to Use

### Load Model and Tokenizer
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "msgfrom96/xlm_emo_multi"

# Load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example text
text = "I can't believe how amazing this is! So happy and excited!"

# Tokenize input
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)

# Get model predictions
outputs = model(**inputs)
print(outputs.logits)  # Raw emotion scores
```
### Interpreting Results
The model outputs logits (raw scores) for each emotion. Apply a sigmoid activation to convert these into probabilities:
```python
import torch

probs = torch.sigmoid(outputs.logits)
print(probs)
```
Each score represents the probability of an emotion being present in the text.
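To turn these probabilities into concrete emotion labels, apply a decision threshold per emotion. Below is a minimal sketch, assuming the checkpoint's config carries an id2label mapping and using an illustrative 0.5 cutoff (not a value specified by this card):

```python
# Illustrative 0.5 cutoff; tune it per emotion on validation data if needed
threshold = 0.5
predicted_emotions = [
    model.config.id2label[i]
    for i, p in enumerate(probs[0].tolist())
    if p >= threshold
]
print(predicted_emotions)
```

Because each emotion is scored independently with a sigmoid (rather than a softmax over all classes), any number of labels can pass the threshold for a single text.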
## Training & Fine-Tuning Details

- Base Model: XLM-RoBERTa (xlm-roberta-base)
- Dataset: SemEval-2018 (English & Arabic tweets)
- Training Strategy: Multi-label classification
- Optimizer: AdamW
- Batch Size: 16
- Learning Rate: 2e-5
- Hardware: AWS SageMaker with CUDA GPU support
- Evaluation Metrics: Macro-F1 & Micro-F1
- Best Model Selection: Automatically selected via load_best_model_at_end=True
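The full training script is not included in this card, but as a rough orientation, the hyperparameters above map onto a Hugging Face Trainer setup along the lines of the sketch below. The output path, epoch count, evaluation schedule, and metric name are assumptions, not values taken from this card:

```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

# Multi-label setup: 11 emotion labels, BCE-with-logits loss via problem_type
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=11,
    problem_type="multi_label_classification",
)

# Hyperparameters mirroring the list above; unlisted values are assumptions
training_args = TrainingArguments(
    output_dir="xlm_emo_multi",        # assumed output path
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=3,                # assumption: epoch count not stated above
    eval_strategy="epoch",             # requires a recent transformers version
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="macro_f1",  # assumption: macro-F1 drives selection
)
```

Setting problem_type="multi_label_classification" makes the model train with a binary cross-entropy loss per label, which is what allows several emotions to be active for the same text.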
## Citations & References
If you use this model, please cite the following sources:
- **SemEval-2018 Dataset:** Mohammad, S., Bravo-Marquez, F., Salameh, M., & Kiritchenko, S. (2018). "SemEval-2018 Task 1: Affect in Tweets." Proceedings of SemEval-2018.
- **XLM-RoBERTa:** Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., & Stoyanov, V. (2020). "Unsupervised Cross-lingual Representation Learning at Scale." Proceedings of ACL 2020.
- **Transformers Library:** Hugging Face (2020). "Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0."
## Contributing

Want to improve the model? Feel free to:

- Train it on more languages
- Optimize it for low-resource devices
- Integrate it into real-world applications
- Submit pull requests or start discussions
## Acknowledgments

Special thanks to the Hugging Face team, the SemEval organizers, and the NLP research community for providing the tools and datasets that made this model possible.
## Connect & Feedback

Questions or issues? Open a discussion on the Hugging Face Model Hub, or email [email protected].
## License

MIT