---
license: mit
datasets:
- stanfordnlp/imdb
language:
- en
metrics:
- accuracy
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
library_name: transformers
tags:
- code
- sentiment-analysis
- bert
- imdb
- text-classification
- nlp
---

# BERT IMDb Sentiment Analysis Model

This repository contains a BERT model fine-tuned for sentiment analysis on IMDb movie reviews. The model classifies text as either **Positive** or **Negative**.

## Model Details

- **Base Model**: `bert-base-uncased`
- **Dataset**: IMDb Movie Reviews (fine-tuning data)
- **Task**: Sentiment Analysis (Binary Classification)
- **Labels**:
  - `0`: Negative
  - `1`: Positive

## Usage

### Load the Model using `transformers`

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

model_name = "philipobiorah/bert-imdb-model"

# Load tokenizer and model (the fine-tuned model uses the base BERT vocabulary)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(model_name)
model.eval()

# Define function for sentiment prediction
def predict_sentiment(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return "Positive" if logits.argmax().item() == 1 else "Negative"

# Test the model
print(predict_sentiment("This movie was absolutely fantastic!"))
print(predict_sentiment("I really disliked this movie, it was terrible."))
```
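If you want class probabilities rather than a hard label, you can apply a softmax to the logits. A minimal sketch, using an illustrative logits tensor in place of actual model output (index `0` = Negative, index `1` = Positive, per the label mapping above):

```python
import torch
import torch.nn.functional as F

# Illustrative logits for one review, shaped [batch, num_labels];
# a real value would come from model(**inputs).logits.
logits = torch.tensor([[-1.2, 2.3]])

# Softmax converts raw logits into probabilities that sum to 1
probs = F.softmax(logits, dim=-1)

label = "Positive" if probs.argmax(dim=-1).item() == 1 else "Negative"
confidence = probs[0, 1].item()  # probability of the Positive class
print(label, round(confidence, 3))
```

Reporting the softmax probability alongside the label makes it easier to threshold low-confidence predictions downstream.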