Minekorkmz committed
Commit 063792d · verified · 1 Parent(s): 1bd6c7e

Update README.md

Files changed (1)
  1. README.md +40 -1
README.md CHANGED
@@ -1,6 +1,40 @@
  ---
  library_name: peft
  ---
  ## Training procedure
 
@@ -15,7 +49,12 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_quant_type: nf4
  - bnb_4bit_use_double_quant: True
  - bnb_4bit_compute_dtype: float16
- ### Framework versions
 
  - PEFT 0.4.0
 
  ---
  library_name: peft
  ---
+ ## Description
+ This model was obtained by fine-tuning the Llama-2 7B large language model with the LoRA technique. The aim is to develop a Turkish sentiment analysis system by training the model on the sentences in the dataset described below.
+ Evaluation metrics were calculated for the fine-tuned model.
+
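The training script itself is not part of this commit. Purely as an illustration, the sketch below shows how a LoRA adapter for sequence classification can be attached to the Llama-2 7B chat base model with `peft`; the label count, rank, alpha, dropout, and target modules shown here are placeholder values, not the settings actually used for this model.

```python
# Illustrative sketch only -- the hyperparameters below are assumptions,
# not the values used to train Minekorkmz/model_yurt_1200.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    num_labels=3,  # assumed label count for the Turkish sentiment task
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,           # keeps the classification head trainable
    r=16,                                 # placeholder rank
    lora_alpha=32,                        # placeholder scaling
    lora_dropout=0.05,                    # placeholder dropout
    target_modules=["q_proj", "v_proj"],  # typical Llama attention projections
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA weights and the head are trained
```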
+ ## Dataset
+
+ The training dataset consists of 152,715 rows and the eval dataset of 16,968 rows. It includes social media posts and product reviews in Turkish.
+
+ ## Uses
+
+ from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, pipeline
+ from peft import PeftModel
+
+ # Read the label count from the adapter repository's config
+ config = AutoConfig.from_pretrained("Minekorkmz/model_yurt_1200")
+ num_labels = config.num_labels
+
+ # Load the Llama-2 7B chat base model with a sequence classification head
+ base_model = AutoModelForSequenceClassification.from_pretrained(
+     "meta-llama/Llama-2-7b-chat-hf",
+     num_labels=num_labels,
+ )
+
+ # Attach the LoRA adapter and load the matching tokenizer
+ model = PeftModel.from_pretrained(base_model, "Minekorkmz/model_yurt_1200")
+ tokenizer = AutoTokenizer.from_pretrained("Minekorkmz/model_yurt_1200")
+
+ sentiment_task = pipeline(
+     "sentiment-analysis",
+     model=model,
+     tokenizer=tokenizer,
+     return_all_scores=True,
+ )
+
+ print(sentiment_task("çok kötü bir ürün oldu sevemedim"))
+
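With `return_all_scores=True`, the pipeline returns a score for every label of each input (a list of `{label, score}` dicts per sentence). The mapping from label ids to sentiment names is not spelled out in this commit; it can be inspected from the config loaded above, for example:

```python
# The concrete label names for this model are not documented in the README;
# id2label prints whatever mapping is stored in the adapter's config.
print(config.id2label)
```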
  ## Training procedure
 
  - bnb_4bit_quant_type: nf4
  - bnb_4bit_use_double_quant: True
  - bnb_4bit_compute_dtype: float16
 
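The settings above correspond to 4-bit NF4 loading with `bitsandbytes`. As a rough sketch only, a matching `BitsAndBytesConfig` could be built as shown below; `load_in_4bit`, the label count, and `device_map` are assumptions, since the full loading code is not shown in this commit.

```python
# Sketch of a 4-bit quantization config matching the settings listed above
# (nf4, double quantization, float16 compute); other arguments are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    num_labels=3,                      # assumed label count
    quantization_config=bnb_config,
    device_map="auto",
)
```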
+ ### Framework versions
+
  - PEFT 0.4.0
+ - accelerate 0.26.0
+ - bitsandbytes 0.41.1
+ - transformers 4.35.0
+ - trl 0.4.7