---
license: llama2
tags:
- code llama
base_model: BallisticAI/Ballistic-CodeLlama-34B-v1
inference: false
model_creator: BallisticAI
model_type: llama
prompt_template: '### System Prompt

  {system_message}


  ### User Message

  {prompt}


  ### Assistant

  '
quantized_by: BallisticAI
model-index:
- name: Ballistic-CodeLlama-34B-v1
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: n/a
      value: n/a
      name: n/a
      verified: false
---

# Ballistic-CodeLlama-34B-v1

- Model creator: [BallisticAI](https://huggingface.co/BallisticAI)
- Base model: [CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf)
- Merged with: [Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) and [speechless-codellama-34b-v2.0](https://huggingface.co/uukuguy/speechless-codellama-34b-v2.0)
- Additional training with: [jondurbin/airoboros-2.2](https://huggingface.co/datasets/jondurbin/airoboros-2.2)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Ballistic-CodeLlama-34B-v1](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1).

<!-- description end -->

### About GGUF

GGUF is a model file format introduced by the llama.cpp team in August 2023 as a replacement for the older GGML format, which llama.cpp no longer supports. GGUF models run efficiently on CPU, and some or all layers can optionally be offloaded to a GPU for faster inference.

GGUF is supported by a wide range of clients and libraries, including [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [KoboldCpp](https://github.com/LostRuins/koboldcpp), [LM Studio](https://lmstudio.ai), and [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). Quantized GGUF files are far smaller than the fp16 weights, which makes deployment cheaper and easier: for example, a 4-bit quantization of this 34B model needs roughly 20 GB of memory, versus roughly 68 GB for fp16.
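
As a minimal sketch of what this looks like in practice with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (the quantization filename below is illustrative, not a file this repo is guaranteed to ship; check the repo's file list):

```python
from llama_cpp import Llama

# Illustrative filename -- substitute a GGUF file actually published in this repo.
llm = Llama(
    model_path="ballistic-codellama-34b-v1.Q4_K_M.gguf",
    n_ctx=4096,       # context window size
    n_gpu_layers=35,  # layers to offload to the GPU; 0 runs entirely on CPU
)
```

Raising `n_gpu_layers` trades VRAM for generation speed; with enough VRAM, offloading all layers is fastest.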
<!-- repositories-available start -->
## Repositories available

* [GGUF model(s) for CPU inference](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1-GGUF)
* [Unquantised fp16 model in PyTorch format, for GPU inference and further conversions](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1)
<!-- repositories-available end -->
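
To fetch one of the GGUF files listed above, here is a hedged sketch using the `huggingface_hub` library (the filename is again illustrative):

```python
from huggingface_hub import hf_hub_download

# Downloads the file into the local Hugging Face cache and returns its path.
model_path = hf_hub_download(
    repo_id="BallisticAI/Ballistic-CodeLlama-34B-v1-GGUF",
    filename="ballistic-codellama-34b-v1.Q4_K_M.gguf",  # illustrative name
)
print(model_path)
```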

<!-- prompt-template start -->
## How to Prompt the Model
This model accepts the Alpaca/Vicuna instruction format.

For example:

```
### System Prompt
You are an intelligent programming assistant.

### User Message
Implement a linked list in C++

### Assistant
...
```

<!-- prompt-template end -->
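
Programmatically, the same template can be filled in and passed to the model. Below is a sketch assuming llama-cpp-python and the illustrative filename from the examples above:

```python
from llama_cpp import Llama

# Template matching the prompt_template in this card's metadata.
PROMPT_TEMPLATE = """### System Prompt

{system_message}


### User Message

{prompt}


### Assistant

"""

# Illustrative filename -- see this repo's file list for real ones.
llm = Llama(model_path="ballistic-codellama-34b-v1.Q4_K_M.gguf", n_ctx=4096)

prompt = PROMPT_TEMPLATE.format(
    system_message="You are an intelligent programming assistant.",
    prompt="Implement a linked list in C++",
)

# Stop if the model tries to open a new user turn, so output stays
# within the Assistant turn.
output = llm(prompt, max_tokens=512, stop=["### User Message"])
print(output["choices"][0]["text"])
```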


## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployment.


## Thanks

Thanks to:

- The original Llama team
- [Phind](https://huggingface.co/phind)
- [uukuguy](https://huggingface.co/uukuguy)
- [jondurbin](https://huggingface.co/jondurbin)
- And everyone else involved in the open-source AI/ML community.