zera09 committed on
Commit dcd06a3 · verified · 1 Parent(s): b956e2a

Update README.md

Files changed (1):
1. README.md +77 -0
README.md CHANGED
@@ -22,7 +22,84 @@ This model is a fine-tuned version of [meta-llama/Llama-2-13b-chat-hf](https://h

## Intended uses & limitations

Example usage:
```
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
```
Loading the tokenizer:
```
model_id = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
```
Loading the model:
```
config = PeftConfig.from_pretrained("zera09/llama_FT")
base_model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="cuda")
model = PeftModel.from_pretrained(base_model, "zera09/llama_FT")
```

Template for inference:
````
template = """### Instruction
Given this context: {context} and price: {price}, output only one decision from the square brackets [buy, sell, hold] and provide reasoning on why.

### Response:
Decision:
Reasoning:
```"""
````
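As a quick sanity check, the template can be filled in without loading the model. This is a hypothetical illustration (the context and price values are made up); it redefines `template` so the snippet is self-contained:

````python
# Hypothetical sanity check: fill the prompt template locally, no model needed.
# `template` is redefined here so the snippet runs on its own.
template = """### Instruction
Given this context: {context} and price: {price}, output only one decision from the square brackets [buy, sell, hold] and provide reasoning on why.

### Response:
Decision:
Reasoning:
```"""

prompt = template.format(context="Demand is rising.", price="12.1")
print(prompt)
````

The trailing ``` inside the string matters: the generated text is later split on it to isolate the model's answer.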
```
import torch
from transformers import set_seed


def gen(text):
    toks = tokenizer(text, return_tensors="pt").to("cuda")

    set_seed(32)
    model.eval()
    with torch.no_grad():
        out = model.generate(
            **toks,
            max_new_tokens=350,
            top_k=5,
            do_sample=True,
        )
    return tokenizer.decode(
        out[0][len(toks["input_ids"][0]):], skip_special_tokens=True
    )
```

Running inference on a single text:
````
context = "The global recloser control market is expected to grow significantly, driven by increasing demand for power quality and reliability, especially in the electric segment and emerging economies like China. The positive score for this news is 1.1491235518690246e-08. The neutral score for this news is 0.9999998807907104. The negative score for this news is 6.358970239261907e-08"
price = str(12.1)
print(gen(template.format(context=context, price=price)).split("```"))
````

For multiple texts:
```
import pandas as pd

data = pd.read_pickle('./DRIV_train.pkl')
data = pd.DataFrame(data).T

model.eval()
answer_list = []
for idx, row in data.iterrows():
    toks = tokenizer(template.format(context=row['news']['DRIV'][0], price=str(row['price']['DRIV'][0])), return_tensors="pt").to("cuda")
    with torch.no_grad():
        out = model.generate(
            **toks,
            max_new_tokens=350,
            top_k=5,
            do_sample=True,
        )

    answer_list.append(tokenizer.decode(
        out[0][len(toks["input_ids"][0]):], skip_special_tokens=True
    ))
```
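Each generated completion should follow the template, so the decision and reasoning can be pulled out of the raw text afterwards. The helper below is a hypothetical post-processing sketch (not part of this repo) that assumes the model emits lines like `Decision: buy` and `Reasoning: ...`:

```python
# Hypothetical helper: extract the decision and reasoning from one generated
# completion, assuming the model follows the "Decision:"/"Reasoning:" template.
def parse_answer(text):
    decision, reasoning = None, None
    for line in text.splitlines():
        line = line.strip()
        if line.lower().startswith("decision:"):
            decision = line.split(":", 1)[1].strip()
        elif line.lower().startswith("reasoning:"):
            reasoning = line.split(":", 1)[1].strip()
    return decision, reasoning

sample = "Decision: hold\nReasoning: The news is neutral."
print(parse_answer(sample))  # ('hold', 'The news is neutral.')
```

Applied over `answer_list`, this yields one `(decision, reasoning)` pair per row; entries that do not follow the template come back as `None`.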

### Training hyperparameters