---
language:
- en
license: apache-2.0
datasets:
- semeval2014
tags:
- aspect-based-sentiment-analysis
- llama
- instructabsa
- alpaca
- unsloth
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
---

# Aspect Extraction Model for Restaurant Reviews using Llama 3.1 8b

This repository contains a fine-tuned version of [unsloth/meta-llama-3.1-8b-instruct-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-instruct-bnb-4bit), trained for Aspect Extraction on the **SemEval 2014 Restaurant Dataset**. The model uses the **InstructABSA** instruction format embedded in the **Alpaca** prompt template, targeting real-world restaurant review analysis.

## Model Overview

- **Base Model:** [unsloth/meta-llama-3.1-8b-instruct-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-instruct-bnb-4bit)
- **Fine-tuning Dataset:** [SemEval 2014 Restaurant Dataset](https://alt.qcri.org/semeval2014/task4/)
- **Task:** Aspect Extraction
- **Prompt Format:** InstructABSA within Alpaca prompt format

## Performance Metrics

| Dataset | F1 Score |
|---------|----------|
| Train   | 93.76%   |
| Test    | 94.03%   |
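
The F1 scores above are reported without an accompanying evaluation script. A minimal sketch of how span-level F1 for aspect extraction is commonly computed (micro-averaged over exact-match aspect terms; the example predictions below are illustrative, not outputs of this model):

```python
# Micro-averaged F1 over exact-match aspect terms.
# golds/preds: one list of aspect terms per review.
def aspect_f1(golds, preds):
    tp = fp = fn = 0
    for gold, pred in zip(golds, preds):
        gold_set, pred_set = set(gold), set(pred)
        tp += len(gold_set & pred_set)   # correctly extracted aspects
        fp += len(pred_set - gold_set)   # spurious predictions
        fn += len(gold_set - pred_set)   # missed gold aspects
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

golds = [["food", "menu", "service", "setting"], ["seats"]]
preds = [["food", "menu", "service"], ["seats", "wall"]]
print(round(aspect_f1(golds, preds), 4))  # → 0.8
```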

## Use Cases

This model is well-suited for:
- **Research purposes:** Explore novel methodologies or validate existing theories in ABSA.
- **Real-world applications:** Deriving actionable insights from restaurant reviews for businesses, marketers, and product developers.

## Inference Speed

- **Approximate inference time:** ~1 second per review (tested on NVIDIA GPUs with 4-bit quantization).



## Installation

Install the required dependencies using pip (the snippet below is written for a Jupyter/Colab notebook; in a plain shell, run the `pip` commands without the leading `!`):

```python
import os
if "COLAB_" not in "".join(os.environ.keys()):
    !pip install unsloth
else:
    # Do this only in Colab notebooks! Otherwise, use pip install unsloth
    !pip install --no-deps bitsandbytes accelerate xformers==0.0.29 peft trl triton
    !pip install --no-deps cut_cross_entropy unsloth_zoo
    !pip install sentencepiece protobuf datasets huggingface_hub hf_transfer
    !pip install --no-deps unsloth

```
## Example Usage

```python
from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    "RichardLu/Llama3_AE_res",
    load_in_4bit=True,
    max_seq_length=2048,
)

FastLanguageModel.for_inference(model)

# Define the instruction for aspect extraction
instructabsa_instruction = """Definition: The output will be the aspects (both implicit and explicit) which have an associated opinion that are extracted from the input text. In cases where there are no aspects the output should be noaspectterm.
Positive example 1-
input: With the great variety on the menu, I eat here often and never get bored.
output: menu
Positive example 2-
input: Great food, good size menu, great service and an unpretensious setting.
output: food, menu, service, setting
Negative example 1-
input: They did not have mayonnaise, forgot our toast, left out ingredients (ie cheese in an omelet), below hot temperatures and the bacon was so over cooked it crumbled on the plate when you touched it.
output: toast, mayonnaise, bacon, ingredients, plate
Negative example 2-
input: The seats are uncomfortable if you are sitting against the wall on wooden benches.
output: seats
Neutral example 1-
input: I asked for seltzer with lime, no ice.
output: seltzer with lime
Neutral example 2-
input: They wouldnt even let me finish my glass of wine before offering another.
output: glass of wine
Now complete the following example:"""
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""

prompt = alpaca_prompt.format(instructabsa_instruction, "Great food, good size menu, great service and an unpretensious setting.", "")

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(output_text.split("### Response:")[-1].strip())
```
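
The model returns the extracted aspects as a comma-separated string, or the literal token `noaspectterm` when no aspects are present. A small hypothetical helper (`parse_aspects` is not part of this repository) for turning that response into a Python list:

```python
# Parse the model's comma-separated aspect string into a list,
# treating "noaspectterm" as "no aspects found".
def parse_aspects(response: str) -> list[str]:
    response = response.strip()
    if response.lower() == "noaspectterm":
        return []
    return [term.strip() for term in response.split(",") if term.strip()]

print(parse_aspects("food, menu, service, setting"))
# → ['food', 'menu', 'service', 'setting']
print(parse_aspects("noaspectterm"))
# → []
```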

## License

This model is released under the Apache 2.0 license and is intended for research and educational purposes. Please provide proper citation if you use it in academic or industry research.

## Citation

If you utilize this model in your research, please cite it appropriately and reference this repository.

```bibtex
@misc{maw2025llama3aeres,
  author = {Lu Phone Maw},
  title = {Aspect Extraction Model for Restaurant Reviews using Llama 3.1 8b},
  year = {2025},
  publisher = {Lu Phone Maw},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/RichardLu/Llama3_AE_res}}
}
```

For any questions or feedback, please contact the repository maintainer.