bartowski committed
Commit 040ac11 · verified · 1 Parent(s): ff0cf75

Llamacpp quants
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+NeuralKybalion-7B-slerp-v3-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
NeuralKybalion-7B-slerp-v3-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:325987dce365f06e8b89567f6e7dc60c274ece7c68a6e6c7d68455aa3ca5504f
+size 2719242016
NeuralKybalion-7B-slerp-v3-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41c2cec6a4e4019f3bdd1f79fa34145be076e9e1538076d65e2770cb70aa0f6d
+size 3822024480
NeuralKybalion-7B-slerp-v3-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f7c53b2c0f0c79926038f11b3c39fb5988b58f87cda8cf8025ff39a32bb2251
+size 3518986016
NeuralKybalion-7B-slerp-v3-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0fe49ce0062a7ebab865b16d1229af5bbcb50c49cad5823aa95172b078bb2a69
+size 3164567328
NeuralKybalion-7B-slerp-v3-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29c52b1a410fa6bf73717ffd8125eca6888e2db815c7d57399bb0c404d0902fd
+size 4108916512
NeuralKybalion-7B-slerp-v3-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6e7185807a2dab7ce33c13166518f9d4dda7e69d573dc83f7dec898dcf33d4b
+size 4368439072
NeuralKybalion-7B-slerp-v3-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1641c34fee47c25046dde2062a7bb87348aeffdade009cc3da946908b5b82261
+size 4140373792
NeuralKybalion-7B-slerp-v3-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f88431dd1b319ee92ccd90924ab5a2817fabf3011e0dab35eb01fd93cf980625
+size 4997715744
NeuralKybalion-7B-slerp-v3-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23bf983e13007410b6987c17bcc0cbdcb20e721b3eb6d855fe1f68546bd2818b
+size 5131409184
NeuralKybalion-7B-slerp-v3-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9eb459aec9db95f21d6626c4c37546644b4c4b5ee7d76b1aeb789725647e207
+size 4997715744
NeuralKybalion-7B-slerp-v3-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb8a2a14ceb04ed5b0d04f7134e5d8e6248261cc3712f0cfad635c072453f2ed
+size 5942064928
NeuralKybalion-7B-slerp-v3-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9a853ddae8bbd52a99cd0af7af8fb65be9b0fef329a13cd1d00eb58fe2708bd
+size 7695857440
README.md ADDED
@@ -0,0 +1,41 @@
+---
+tags:
+- merge
+- mergekit
+- lazymergekit
+- Kukedlc/NeuralKybalion-7B-slerp
+- Kukedlc/NeuralKybalion-7B-slerp-v2
+- rwitz/experiment26-truthy-iter-0
+base_model:
+- Kukedlc/NeuralKybalion-7B-slerp
+- Kukedlc/NeuralKybalion-7B-slerp-v2
+- rwitz/experiment26-truthy-iter-0
+license: apache-2.0
+quantized_by: bartowski
+pipeline_tag: text-generation
+---
+
+## Llamacpp Quantizations of NeuralKybalion-7B-slerp-v3
+
+Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.
+
+Original model: https://huggingface.co/Kukedlc/NeuralKybalion-7B-slerp-v3
+
+Download a file (not the whole branch) from below:
+
+| Filename | Quant type | File Size | Description |
+| -------- | ---------- | --------- | ----------- |
+| [NeuralKybalion-7B-slerp-v3-Q8_0.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
+| [NeuralKybalion-7B-slerp-v3-Q6_K.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
+| [NeuralKybalion-7B-slerp-v3-Q5_K_M.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
+| [NeuralKybalion-7B-slerp-v3-Q5_K_S.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
+| [NeuralKybalion-7B-slerp-v3-Q5_0.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
+| [NeuralKybalion-7B-slerp-v3-Q4_K_M.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
+| [NeuralKybalion-7B-slerp-v3-Q4_K_S.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
+| [NeuralKybalion-7B-slerp-v3-Q4_0.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
+| [NeuralKybalion-7B-slerp-v3-Q3_K_L.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
+| [NeuralKybalion-7B-slerp-v3-Q3_K_M.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
+| [NeuralKybalion-7B-slerp-v3-Q3_K_S.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
+| [NeuralKybalion-7B-slerp-v3-Q2_K.gguf](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-GGUF/blob/main/NeuralKybalion-7B-slerp-v3-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
+
+Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
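
For reference, a single quant from the table above can be fetched programmatically instead of cloning the whole branch. The sketch below is an illustration, not part of the committed files: it assumes the `huggingface_hub` Python package, and the repo id and filename are taken from the download links in the table (any other quant can be substituted).

```python
# Minimal sketch: download one GGUF quant (not the whole repo) via huggingface_hub.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="bartowski/NeuralKybalion-7B-slerp-v3-GGUF",
    filename="NeuralKybalion-7B-slerp-v3-Q4_K_M.gguf",  # any filename from the table works
)
print(local_path)  # local path to the downloaded .gguf, ready to load with llama.cpp
```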