---
tags:
- bert
- transformer
- text-classification
license: apache-2.0
---
|
|
|
# Model Card for BERT |
|
|
|
## Model Description |
|
This is a BERT model fine-tuned for sentiment analysis. BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based language model pretrained on large unlabeled text corpora to learn bidirectional context for each token, which makes it well suited to fine-tuning on downstream tasks such as text classification.
|
|
|
## Intended Use |
|
- **Primary use case:** Sentiment analysis on social media posts (see the example below).
|
- **Limitations:** The model may exhibit biases present in the training data and may not perform well on out-of-domain data. |
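For the primary use case, the simplest entry point is the `transformers` pipeline API. The snippet below is a minimal sketch; it assumes the model is hosted under the repository id used later in this card.

```python
from transformers import pipeline

# Load the fine-tuned model as a text-classification pipeline
# (repository id taken from the "How to Use" section below)
classifier = pipeline(
    "text-classification",
    model="FoundationsofInformationRetrieval/my_model_repo",
)

# Score a social-media-style post; the result is a list of
# {"label": ..., "score": ...} dicts
print(classifier("just tried the new update and it's amazing!!"))
```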
|
|
|
## Training Data |
|
This model was fine-tuned on the [Stanford Sentiment Treebank](https://nlp.stanford.edu/sentiment/). The dataset consists of 11,855 sentences from movie reviews, labeled for sentiment classification.
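If you want to inspect or reuse the training data, it can be loaded with the `datasets` library. This is a rough sketch; the dataset id `"sst"` is an assumption here, so substitute whichever copy of the Stanford Sentiment Treebank you actually use.

```python
from datasets import load_dataset

# Dataset id is an assumption; point this at your copy of SST if it differs
dataset = load_dataset("sst", split="train")

print(len(dataset))   # number of training sentences
print(dataset[0])     # a single labeled example
```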
|
|
|
## Evaluation Results |
|
The model achieves the following results on the Stanford Sentiment Treebank: |
|
- Accuracy: 92% |
|
- F1 Score: 0.91 |
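Accuracy and F1 of this kind are computed from the model's predictions on a held-out split. Below is a minimal sketch with `scikit-learn`, assuming you already have lists of gold and predicted label ids; the small lists here are placeholders, not real evaluation data.

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder gold and predicted label ids (replace with real predictions)
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))
# "binary" averaging assumes a two-class sentiment label set
print("F1 score:", f1_score(y_true, y_pred, average="binary"))
```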
|
|
|
## How to Use |
|
Here’s how to load and use the model in Python: |
|
|
|
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model_name = "FoundationsofInformationRetrieval/my_model_repo"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example usage: tokenize a sentence and run it through the model
inputs = tokenizer("I love using Hugging Face!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert the logits to a predicted label
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])
```
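The predicted label name comes from `model.config.id2label`; the exact names depend on how the label mapping was configured during fine-tuning.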