Update README.md
README.md (CHANGED)
````diff
@@ -34,6 +34,14 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2)
 
+## Prompt template
+
+```
+A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
+USER: prompt
+ASSISTANT:
+```
+
 <!-- compatibility_ggml start -->
 ## Compatibility
 
````
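The template added above is plain text wrapped around the user's message. As a minimal sketch of how such a prompt could be assembled in Python: the `build_prompt` helper and its layout are illustrative only and are not part of the README or the model repo.

```python
# Minimal sketch: assemble a prompt matching the template added in the hunk above.
# The helper name `build_prompt` is hypothetical, not an API from the repo.

SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the USER/ASSISTANT template."""
    return f"{SYSTEM}\nUSER: {user_message}\nASSISTANT:"

if __name__ == "__main__":
    print(build_prompt("write a story about llamas"))
```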
````diff
@@ -88,7 +96,7 @@ Refer to the Provided Files table below to see what files use which methods, and how.
 I use the following command line; adjust for your tastes and needs:
 
 ```
-./main -t 10 -ngl 32 -m airoboros-13b-gpt4-1.2.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "
+./main -t 10 -ngl 32 -m airoboros-13b-gpt4-1.2.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.\nUSER: write a story about llamas\nASSISTANT:"
 ```
 Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
 
````
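The updated `./main` invocation can also be reproduced from Python, assuming an older, GGML-compatible build of the llama-cpp-python bindings (newer releases expect GGUF files). The parameter values below mirror the flags in the command above; treat this as a sketch rather than the README's prescribed method.

```python
# Rough Python equivalent of the ./main command above, assuming a
# GGML-era build of llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-13b-gpt4-1.2.ggmlv3.q5_0.bin",  # -m
    n_ctx=2048,        # -c 2048
    n_threads=8,       # -t: set to your number of physical CPU cores
    n_gpu_layers=32,   # -ngl 32 (needs a GPU-enabled build; use 0 for CPU only)
)

prompt = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input.\n"
    "USER: write a story about llamas\nASSISTANT:"
)

# --temp 0.7 and --repeat_penalty 1.1 map to the sampling arguments here.
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```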
````diff
@@ -130,7 +138,6 @@ Thank you to all my generous patrons and donaters!
 
 # Original model card: John Durbin's Airoboros 13B GPT4 1.2
 
-
 ### Overview
 
 This is a qlora fine-tuned 13b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros
````