---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
widget:

- text: "Absolutely thrilled with my new wireless earbuds! The sound quality is exceptional, and they stay securely in my ears during workouts. Plus, the charging case is so convenient for on-the-go"
- text: "Absolutely disappointed with this product! It arrived damaged and looked nothing like the picture. Total waste of money."
- text: "This coffee maker has truly simplified my mornings. It brews quickly and the programmable features allow me to wake up to the aroma of freshly brewed coffee. Plus, the sleek design looks great on my countertop."
- text: "Do not buy this item! It broke within a week of use. Poor quality and not worth the price at all."
- text: "I'm impressed with the durability of this laptop backpack. It comfortably fits my 15-inch laptop, charger, and other essentials without feeling bulky. The USB charging port is a lifesaver for staying connected on the move."
- text: "Terrible experience with this purchase. The product had a weird smell and caused skin irritation. Highly regret buying it."
- text: "As someone who loves to cook, this chef's knife is a game-changer. The sharpblade effortlessly cuts through vegetables, meats, and herbs, making prepwork a breeze. The ergonomic handle ensures comfort even during longchopping sessions."
- text: "Extremely misleading description! The size was way smaller than advertised, and the material felt cheap. Save your money and look elsewhere."
- text: "This smart thermostat has made managing my home's temperature a breeze. Theintuitive app allows me to adjust settings remotely, and the energy-savingfeatures have noticeably reduced my utility bills. Installation was also abreeze thanks to clear instructions."
- text: "Worst purchase ever! Not only did it not work as described, but the customer service was also non-existent when I tried to resolve the issue. Avoid at all costs."
  
model-index:
- name: gpt2-amazon-sentiment-classifier-V1.0
  results: []
license: mit
datasets:
- McAuley-Lab/Amazon-Reviews-2023
language:
- en
library_name: transformers
---

# gpt2-amazon-sentiment-classifier-V1.0

This model is a fine-tuned version of [GPT-2](https://huggingface.co/gpt2) on the [McAuley-Lab/Amazon-Reviews-2023](https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0320
- Accuracy: 0.9680
- F1: 0.9680
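
For reference, here is a minimal sketch of how accuracy and F1 can be computed in a `Trainer`-based setup with the `evaluate` library; the weighted F1 averaging is an assumption, not taken from the original training script:

```python
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=predictions, references=labels)["accuracy"],
        "f1": f1_metric.compute(predictions=predictions, references=labels, average="weighted")["f1"],
    }
```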

## Model description


This Amazon sentiment analysis model is based on GPT-2, a transformer-based language model, which I fine-tuned on Amazon user reviews from 2023 to adapt it specifically to sentiment analysis of product reviews.

During fine-tuning, I trained the model to recognize different sentiments (positive, negative, neutral) using real user feedback. The fine-tuned model predicts the sentiment of a new review by categorizing it according to the emotions conveyed in the text.

You can use the model through the Transformers `pipeline` API:

```python
from transformers import pipeline

sentiment_model = pipeline(model="ashok2216/gpt2-amazon-sentiment-classifier")
```
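
A quick sanity check might look like this; the exact label strings depend on the model's `id2label` config, so the output in the comment is illustrative only:

```python
result = sentiment_model("The battery lasts all week and charging is fast!")
print(result)
# Illustrative output; actual label names come from the model's id2label config:
# [{'label': 'LABEL_1', 'score': 0.99}]
```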

## Intended uses & limitations

The model is intended for classifying the sentiment of English-language Amazon product reviews, such as those shown in the widget examples above. Its behavior on other domains or languages is not documented.

## Training and evaluation data

The model was fine-tuned and evaluated on reviews from the [McAuley-Lab/Amazon-Reviews-2023](https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023) dataset; the exact train/evaluation split is not documented.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
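
For context, here is a hedged sketch of a `Trainer` setup that reproduces these hyperparameters; the toy dataset, tokenization settings, and binary label mapping are illustrative assumptions, not the original training script:

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Tiny stand-in dataset; the real run used Amazon review text and labels.
train_ds = Dataset.from_dict({
    "text": ["Great product, works perfectly.", "Broke after one day."],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = train_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="gpt2-amazon-sentiment-classifier-V1.0",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",  # the Adam betas/epsilon above are the defaults
    num_train_epochs=2,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, tokenizer=tokenizer)
trainer.train()
```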

### Training results

Only the final evaluation results were recorded (per-epoch logs are not available in this card):

| Validation Loss | Accuracy | F1     |
|:---------------:|:--------:|:------:|
| 0.0320          | 0.9680   | 0.9680 |


### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2