trichter committed · Commit 9c38139 · verified · Parent: 86880a4

Update README.md

Files changed (1): README.md (+3 −0)
README.md CHANGED
@@ -11,10 +11,12 @@
Task: Aspect-Based Sentiment Analysis (ABSA), specifically Aspect Pair Sentiment Extraction

Technique: Distilling Step-by-Step (DistillingSbS)
 
## Model Description

t5-DistillingSbS-ABSA is a fine-tuned t5-large model designed to perform Aspect-Based Sentiment Analysis (ABSA), specifically the task of Aspect Pair Sentiment Extraction.
I used a training approach called Distilling Step-by-Step, originally proposed by Hsieh et al. at Google Research in [this paper](https://arxiv.org/abs/2305.02301).
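For illustration, a minimal sketch of loading and querying the model with `transformers`. The Hub id and the task prefix below are assumptions; the actual repository id and input template are defined by the training code, not this README:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hub id assumed from the model name; check the actual repository id.
model_id = "trichter/t5-DistillingSbS-ABSA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "The UI is gorgeous, but the app crashes every time I open it."
# Assumed task prefix; the real input template comes from the training code.
inputs = tokenizer("extract aspect-sentiment pairs: " + review, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# e.g. "UI: positive; stability: negative" (format depends on the training labels)
```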
 
## Dataset

The dataset consisted of customer reviews of mobile apps that were originally unannotated; they were scraped and collected by Martens et al. for their paper ["On the Emotion of Users in App Reviews"](https://ieeexplore.ieee.org/document/7961885).
The data was annotated via the OpenAI API using gpt-3.5-turbo, with each review labeled for specific aspects (e.g., UI, functionality, performance) and the corresponding sentiment (positive, negative, neutral).
Additionally, sentence-long rationales were extracted to justify each aspect-sentiment pair, providing the rationale supervision that Distilling Step-by-Step training relies on.
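As a rough sketch of what such an annotation call could look like; the prompt and output schema below are assumptions, the real ones live in the linked repository:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

review = "Love the new design, but it drains my battery fast."

# Hypothetical prompt; the actual annotation prompt is in the training repo.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Extract (aspect, sentiment) pairs from this app review "
                   "and give a one-sentence rationale for each pair.\n"
                   f"Review: {review}",
    }],
)
print(response.choices[0].message.content)
```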
@@ -24,6 +26,7 @@
Training took around 6 hours with a cost of about 80 compute units, using a custom loss function, tokenization function, and training loop. All code can be found in my [GitHub repository](https://github.com/trichter93/ABSA-LLMs-DistillingSbS/).
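The custom loss in Distilling Step-by-Step is a weighted multi-task objective: the student is trained both to predict the label and to generate the teacher's rationale, with each task signaled by a prefix on the input. A minimal sketch, assuming a Hugging Face seq2seq model and a mixing weight `lam`; the repository's exact implementation may differ:

```python
def distilling_sbs_loss(model, label_batch, rationale_batch, lam=0.5):
    """Weighted multi-task objective from Distilling Step-by-Step:
    label prediction plus rationale generation. `lam` is an assumed
    mixing weight; the repo's exact value and placement may differ."""
    # Each batch is a dict of tokenized tensors (input_ids, attention_mask,
    # labels) whose inputs carry the matching task prefix, e.g.
    # "[label] <review>" vs. "[rationale] <review>".
    label_loss = model(**label_batch).loss          # predict aspect-sentiment pairs
    rationale_loss = model(**rationale_batch).loss  # generate the justifying rationale
    return label_loss + lam * rationale_loss
```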
 
## Hyperparameters

Some of the key hyperparameters used for fine-tuning:

Batch Size: 3