---
language:
  - en
base_model:
  - google-t5/t5-large
---

# t5-DistillingSbS-ABSA

**Task:** Aspect-Based Sentiment Analysis (ABSA), specifically Aspect Pair Sentiment Extraction

**Technique:** Distilling Step-by-Step (DistillingSbS)

## Model Description

t5-DistillingSbS-ABSA is a fine-tuned t5-large model designed to perform Aspect-Based Sentiment Analysis (ABSA), particularly the task of Aspect Pair Sentiment Extraction. I used a training approach called Distilling Step-by-Step, originally proposed by Hsieh et al. at Google Research in the paper ["Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes"](https://arxiv.org/abs/2305.02301).
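A minimal usage sketch with the Transformers library follows. The repo id is inferred from this model card, and the `[label]` task prefix is a placeholder assumption; check the repository's tokenization code for the actual input format the model expects.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Repo id inferred from this model card; the "[label]" task prefix is an
# assumption, not necessarily the format used during fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("trichter/t5-DistillingSbS-ABSA")
model = T5ForConditionalGeneration.from_pretrained("trichter/t5-DistillingSbS-ABSA")

review = "Love the new UI, but the app drains my battery fast."
inputs = tokenizer("[label] " + review, return_tensors="pt",
                   truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```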

## Dataset

The dataset consists of customer reviews of mobile apps that were originally unannotated; they were scraped and collected by Martens et al. for their paper "On the Emotion of Users in App Reviews". The data was annotated via the OpenAI API with the model gpt-3.5-turbo, labeling each review with specific aspects (e.g., UI, functionality, performance) and the corresponding sentiment (positive, negative, or neutral). Additionally, sentence-long rationales were extracted to justify the aspect-sentiment pair annotations, which is what enables the Distilling Step-by-Step training.
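To give a sense of how such teacher annotations can be produced, here is a minimal sketch using the OpenAI Python client; the prompt wording and the JSON output format are my assumptions for illustration, not the exact prompt used to build this dataset.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt; the dataset's actual annotation prompt may differ.
PROMPT = (
    "Extract aspect-sentiment pairs from this app review. For each pair, "
    "give the aspect (e.g., UI, functionality, performance), the sentiment "
    "(positive, negative, or neutral), and a one-sentence rationale quoting "
    "the review. Answer as JSON."
)

def annotate(review: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": review},
        ],
        temperature=0,  # deterministic labels for a cleaner training signal
    )
    return response.choices[0].message.content

print(annotate("The app looks great but crashes every time I open the camera."))
```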

## Training

Training was performed with Hugging Face's Trainer API in Google Colaboratory on a single A100 GPU with 40 GB of VRAM. It took around 6 hours and cost about 80 compute units, using a custom loss function, tokenization function, and training loop. All code can be found in my GitHub repository.
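The custom loss follows the Distilling Step-by-Step recipe: each review is fed to the model twice, once with a label-prediction prefix and once with a rationale-generation prefix, and the two sequence-to-sequence losses are mixed. A minimal sketch, assuming a weighting factor `alpha` and the same placeholder prefixes as the usage example above (the repository's actual prefixes and weighting may differ):

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-large")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-large")

def distilling_sbs_loss(review: str, label_text: str, rationale_text: str,
                        alpha: float = 0.5) -> torch.Tensor:
    """Multi-task loss from Hsieh et al.: label loss plus weighted rationale loss.

    `alpha` and the "[label]"/"[rationale]" prefixes are illustrative
    assumptions, not the values from the actual training run.
    """
    def seq2seq_loss(prefix: str, target: str) -> torch.Tensor:
        inputs = tokenizer(prefix + review, return_tensors="pt",
                           truncation=True, max_length=512)
        labels = tokenizer(target, return_tensors="pt",
                           truncation=True, max_length=512).input_ids
        return model(**inputs, labels=labels).loss

    # Task 1: predict the aspect-sentiment pairs for the review.
    label_loss = seq2seq_loss("[label] ", label_text)
    # Task 2: generate the teacher's rationale for those pairs.
    rationale_loss = seq2seq_loss("[rationale] ", rationale_text)
    return label_loss + alpha * rationale_loss
```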

## Hyperparameters

Some of the key hyperparameters used for fine-tuning (expressed as `TrainingArguments` in the sketch after this list):

- Batch Size: 3
- Gradient Accumulation Steps: 12
- Optimizer: AdamW
- Learning Rate: 1e-4
- Epochs: 5
- Max Sequence Length: 512
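Expressed as Hugging Face `TrainingArguments`, these settings would look roughly like the following; the output directory is a placeholder, and settings not stated in this card (precision, warmup, logging) are left at defaults or marked as assumptions.

```python
from transformers import TrainingArguments

# A sketch of the reported hyperparameters; "./t5-distillingsbs-absa"
# is a placeholder output directory, not the actual one used.
training_args = TrainingArguments(
    output_dir="./t5-distillingsbs-absa",
    per_device_train_batch_size=3,
    gradient_accumulation_steps=12,  # effective batch size of 36
    learning_rate=1e-4,
    num_train_epochs=5,
    optim="adamw_torch",             # AdamW, as reported in the card
    logging_steps=50,                # assumption: logging cadence not stated
)
# Note: the max sequence length of 512 is applied in the tokenization
# function (see the loss sketch above), not in TrainingArguments.
```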