mav23 committed ee2306a (verified) · 1 Parent(s): 0d78977

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +53 -0
  3. glaive-coder-7b.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+glaive-coder-7b.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,53 @@
---
license: llama2
datasets:
- glaiveai/glaive-code-assistant
language:
- en
tags:
- code
---

# Glaive-coder-7b

Glaive-coder-7b is a 7B-parameter code model trained on a dataset of ~140k programming-related problems and solutions generated from Glaive's synthetic data generation platform.

The model is fine-tuned from the CodeLlama-7b model.

## Usage:

The model is trained to act as a code assistant and can handle both single-turn instruction following and multi-turn conversations.
It follows the same prompt format as CodeLlama-7b-Instruct:
```
<s>[INST]
<<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_msg }} [/INST] {{ model_answer }} </s>
<s>[INST] {{ user_msg }} [/INST]
```
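For multi-turn conversations, here is a minimal sketch of assembling this template from a message history (the `build_prompt` helper and the role/content message schema are illustrative assumptions, not part of the original model card):

```python
# Sketch: build the CodeLlama-style prompt shown above from a chat history.
# `build_prompt` and the {"role": ..., "content": ...} message schema are
# illustrative assumptions; only the <s>[INST] ... [/INST] template itself
# comes from the model card.
def build_prompt(messages, system_prompt=None):
    header = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n" if system_prompt else ""
    prompt = ""
    # Expects a history that alternates user/assistant and ends on a user
    # turn; the final user turn is left open for the model to answer.
    for i in range(0, len(messages) - 1, 2):
        user, assistant = messages[i]["content"], messages[i + 1]["content"]
        sys_part = header if i == 0 else ""
        prompt += f"<s>[INST] {sys_part}{user} [/INST] {assistant} </s>"
    sys_part = header if len(messages) == 1 else ""
    prompt += f"<s>[INST] {sys_part}{messages[-1]['content']} [/INST]"
    return prompt

print(build_prompt(
    [{"role": "user", "content": "Write a function that reverses a string."}],
    system_prompt="You are a helpful coding assistant.",
))
```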
You can run the model in the following way:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("glaiveai/glaive-coder-7b")
model = AutoModelForCausalLM.from_pretrained("glaiveai/glaive-coder-7b").half().cuda()

def fmt_prompt(prompt):
    return f"<s> [INST] {prompt} [/INST]"

prompt = "Write a function that reverses a string."  # example instruction

inputs = tokenizer(fmt_prompt(prompt), return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, do_sample=True, temperature=0.1, top_p=0.95, max_new_tokens=100)

print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
```
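This repository ships the model as a Q4_0 GGUF quantization (`glaive-coder-7b.Q4_0.gguf`), so it can also run on llama.cpp-compatible tooling. A minimal sketch using llama-cpp-python, assuming the GGUF file has been downloaded locally (the path, context size, and sampling settings are illustrative, not from the original card):

```python
from llama_cpp import Llama

# Assumes the Q4_0 GGUF from this repo sits in the working directory.
llm = Llama(model_path="glaive-coder-7b.Q4_0.gguf", n_ctx=4096)

# Same CodeLlama-style instruction template as above; the BOS token is
# added automatically when llama.cpp tokenizes the prompt.
prompt = "[INST] Write a function that reverses a string. [/INST]"
out = llm(prompt, max_tokens=100, temperature=0.1, top_p=0.95)
print(out["choices"][0]["text"])
```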
## Benchmarks:

The model achieves 63.1% pass@1 on HumanEval and 45.2% pass@1 on MBPP. However, these benchmarks are not representative of real-world usage of code models, so we are launching the [Code Models Arena](https://arena.glaive.ai/) to let users vote on model outputs. This will give us a better understanding of user preferences on code models and help us come up with new and better benchmarks. We plan to release the Arena results as soon as we have a sufficient amount of data.

Join the Glaive [discord](https://discord.gg/fjQ4uf3yWD) for improvement suggestions, bug reports, and collaboration on more open-source projects.
glaive-coder-7b.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9773c2c812f44c687bddcd0beea31afc6c5f7e374bef3df3bb47d158480547b9
size 3825904384
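The file above is a Git LFS pointer rather than the weights themselves; the ~3.8 GB GGUF blob is fetched by LFS on download. A sketch of verifying a downloaded copy against the pointer's size and sha256 oid (the local path is an assumption):

```python
import hashlib
import os

# Values taken from the LFS pointer above.
EXPECTED_SHA256 = "9773c2c812f44c687bddcd0beea31afc6c5f7e374bef3df3bb47d158480547b9"
EXPECTED_SIZE = 3825904384

path = "glaive-coder-7b.Q4_0.gguf"  # assumed local download location

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks so the ~3.8 GB file never sits in memory at once.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == EXPECTED_SHA256, "checksum mismatch"
print("GGUF file matches the LFS pointer")
```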