---
license: llama2
datasets:
- AlfredPros/smart-contracts-instructions
language:
- en
tags:
- blockchain
- solidity
- smart contract
---
# Code LLaMA 7b Instruct Solidity

A finetuned 7-billion-parameter Code LLaMA - Instruct model that generates Solidity smart contracts, trained with 4-bit QLoRA via the PEFT library.
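The card does not include the training script itself; as a rough illustration of the 4-bit QLoRA setup it describes, a PEFT-based configuration typically looks like the sketch below. The base checkpoint name and all LoRA hyperparameters (rank, alpha, dropout, target modules) are illustrative assumptions, not values taken from this model.

```py
# Illustrative QLoRA setup sketch - NOT the original training script.
# The base checkpoint and LoRA hyperparameters below are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization matching the settings listed under "Training Parameters" below
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-Instruct-hf",  # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach trainable LoRA adapters on top of the frozen 4-bit base weights
base_model = prepare_model_for_kbit_training(base_model)
lora_config = LoraConfig(
    r=16,                                  # placeholder rank
    lora_alpha=32,                         # placeholder scaling
    lora_dropout=0.05,                     # placeholder dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],   # placeholder target modules
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```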
# Training Dataset

The model was finetuned on AlfredPros' Smart Contracts Instructions dataset (https://huggingface.co/datasets/AlfredPros/smart-contracts-instructions), which contains 6,003 GPT-generated pairs of human instructions and Solidity source code, processed for LLM training.
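To inspect or reuse the training data, the dataset can be loaded with the `datasets` library. The snippet below is a small sketch that assumes the default `train` split.

```py
from datasets import load_dataset

# Load the 6,003 instruction / Solidity source code pairs (assumes a "train" split)
dataset = load_dataset("AlfredPros/smart-contracts-instructions", split="train")
print(dataset)     # column names and number of rows
print(dataset[0])  # one instruction / source code pair
```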
# Training Parameters

## Bitsandbytes quantization configurations
- Load in 4-bit: true
- 4-bit quantization type: NF4
- 4-bit compute dtype: float16
- 4-bit use double quantization: true

## Supervised finetuning trainer parameters
- Number of train epochs: 1
- FP16: true
- FP16 opt level: O1
- BF16: false
- Per device train batch size: 1
- Gradient accumulation steps: 1
- Gradient checkpointing: true
- Max gradient norm: 0.3
- Learning rate: 2e-4
- Weight decay: 0.001
- Optimizer: paged AdamW 32-bit
- Learning rate scheduler type: cosine
- Warmup ratio: 0.03
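For reference, the supervised finetuning parameters above map onto `transformers.TrainingArguments` roughly as shown below. This is a sketch of that mapping rather than the original training script; the output directory is a placeholder, and the trainer wiring (for example with TRL's `SFTTrainer`) is only indicated in comments because exact argument names vary across TRL versions.

```py
from transformers import TrainingArguments

# The trainer parameters listed above, expressed as TrainingArguments.
# "output_dir" is an arbitrary placeholder.
training_args = TrainingArguments(
    output_dir="./codellama-7b-instruct-solidity",
    num_train_epochs=1,
    fp16=True,
    fp16_opt_level="O1",
    bf16=False,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
)

# These arguments would then be passed to a supervised finetuning trainer,
# e.g. TRL's SFTTrainer (exact signature depends on the TRL version):
# trainer = SFTTrainer(model=model, train_dataset=dataset, peft_config=lora_config, args=training_args, ...)
# trainer.train()
```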
# Training Loss

```
Step    Training Loss
100     0.330900
200     0.293000
300     0.276500
400     0.290900
500     0.306100
600     0.302600
700     0.337200
800     0.295000
900     0.297800
1000    0.299500
1100    0.268900
1200    0.257800
1300    0.264100
1400    0.294400
1500    0.293900
1600    0.287600
1700    0.281200
1800    0.273400
1900    0.266600
2000    0.227500
2100    0.261600
2200    0.275700
2300    0.290100
2400    0.290900
2500    0.316200
2600    0.296500
2700    0.291400
2800    0.253300
2900    0.321500
3000    0.269500
3100    0.295600
3200    0.265800
3300    0.262800
3400    0.274900
3500    0.259800
3600    0.226300
3700    0.325700
3800    0.249000
3900    0.237200
4000    0.251400
4100    0.247000
4200    0.278700
4300    0.264000
4400    0.245000
4500    0.235900
4600    0.240400
4700    0.235200
4800    0.220300
4900    0.202700
5000    0.240500
5100    0.258500
5200    0.236300
5300    0.267500
5400    0.236700
5500    0.265900
5600    0.244900
5700    0.297900
5800    0.281200
5900    0.313800
6000    0.249800
6003    0.271939
```
# Example Usage

```py
from transformers import BitsAndBytesConfig, AutoTokenizer, AutoModelForCausalLM
import torch

# bitsandbytes and accelerate must be installed for 4-bit loading and device_map dispatch.

use_4bit = True
bnb_4bit_compute_dtype = "float16"
bnb_4bit_quant_type = "nf4"
use_double_nested_quant = True
compute_dtype = getattr(torch, bnb_4bit_compute_dtype)

# BitsAndBytesConfig 4-bit config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_use_double_quant=use_double_nested_quant,
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    llm_int8_enable_fp32_cpu_offload=True  # allow fp32 CPU offload for any layers placed on CPU
)

# Load the tokenizer and the model in 4-bit
tokenizer = AutoTokenizer.from_pretrained("AlfredPros/CodeLlama-7b-Instruct-Solidity")
model = AutoModelForCausalLM.from_pretrained("AlfredPros/CodeLlama-7b-Instruct-Solidity", quantization_config=bnb_config, device_map="balanced_low_0")

# Make the input instruction
input = 'Make a smart contract to create a whitelist of approved wallets. The purpose of this contract is to allow the DAO (Decentralized Autonomous Organization) to approve or revoke certain wallets, and also set a checker address for additional validation if needed. The current owner address can be changed by the current owner.'

prompt = f"""### Instruction:
Use the Task below and the Input given to write the Response, which is a programming code that can solve the following Task:

### Task:
{input}

### Solution:
"""

# Tokenize the input
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
# Run the model to generate an output
outputs = model.generate(input_ids=input_ids, max_new_tokens=256, do_sample=True, top_p=0.9, temperature=0.001, pad_token_id=1)

# Display the generated output
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):])
```