Update README.md
README.md (CHANGED)
@@ -8,35 +8,7 @@ library_name: transformers
 pipeline_tag: text-classification
 ---
 This LLaMA-2 7B was fine-tuned on nuclear energy data from Twitter/X. The classification accuracy obtained is 96%. \
-
+You need access to the LLaMA-2 model; to use it, you need a valid access token generated on Hugging Face. \
 The number of labels is 3: {0: Negative, 1: Neutral, 2: Positive} \
 Warning: You need a GPU with sufficient memory to run this model.
 
-This is an example of how to use it; it ran on an NVIDIA RTX 4060 with 8 GB of VRAM:
-```python
-from transformers import AutoTokenizer
-from transformers import pipeline
-from transformers import AutoModelForSequenceClassification
-import torch
-
-checkpoint = 'kumo24/llama2-sentiment-nuclear'
-tokenizer = AutoTokenizer.from_pretrained(checkpoint)
-id2label = {0: "negative", 1: "neutral", 2: "positive"}
-label2id = {"negative": 0, "neutral": 1, "positive": 2}
-
-# LLaMA tokenizers ship without a padding token; add one for classification.
-if tokenizer.pad_token is None:
-    tokenizer.add_special_tokens({'pad_token': '[PAD]'})
-
-model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
-                                                           num_labels=3,
-                                                           id2label=id2label,
-                                                           label2id=label2id,
-                                                           device_map='auto')
-
-sentiment_task = pipeline("sentiment-analysis",
-                          model=model,
-                          tokenizer=tokenizer)
-
-print(sentiment_task("Michigan Wolverines are Champions, Go Blue!"))
-```
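The added line is about gated access: the LLaMA-2 base weights require an approved access request on Hugging Face plus a user access token. As a minimal sketch of that step, assuming `huggingface_hub` is installed and the token is kept in an `HF_TOKEN` environment variable (a name chosen here for illustration, not from the card):

```python
import os

from huggingface_hub import login

# Authenticate this session so gated checkpoints can be downloaded.
# HF_TOKEN is an assumed environment variable holding a token created at
# https://huggingface.co/settings/tokens
login(token=os.environ["HF_TOKEN"])
```

Running `huggingface-cli login` once in a terminal achieves the same thing and persists the token for later sessions.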
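On the GPU warning: 7B parameters in float32 are roughly 28 GB of weights, so the removed example relied on `device_map='auto'` (via `accelerate`) to place what fits on the GPU and offload the rest. A sketch of a lower-memory variant, using half precision; `torch_dtype` is a standard `from_pretrained` argument, and the test sentence and printed output are illustrative, not from the card:

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          pipeline)

checkpoint = 'kumo24/llama2-sentiment-nuclear'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# LLaMA tokenizers ship without a padding token; classification needs one.
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})

# float16 halves weight memory versus float32; device_map='auto' lets
# accelerate spread layers across GPU and CPU RAM when VRAM is short.
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=3,
    torch_dtype=torch.float16,
    device_map='auto',
)

sentiment_task = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(sentiment_task("Nuclear energy is a reliable low-carbon option."))
# Expected form of the output (score illustrative):
# [{'label': 'positive', 'score': 0.97}]
```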