Update quant info
README.md
CHANGED
@@ -36,9 +36,9 @@ The Refact 1.6B FIM GGUF model is a state-of-the-art AI-powered coding assistant
 
 The model comes in various quantized versions to suit different computational needs:
 
-- **refact-1.6B-fim-q4_0.gguf**: A 4-bit quantized model with a file size of
-- **refact-1.6B-fim-
-- **refact-1.6B-fim-
+- **refact-1.6B-fim-q4_0.gguf**: A 4-bit quantized model with a file size of 920 MB.
+- **refact-1.6B-fim-q8_0.gguf**: An 8-bit quantized model with a file size of 1.69 GB.
+- **refact-1.6B-fim-f16.gguf**: A half-precision model with a file size of 3.17 GB.
 
 ## Features and Usage
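The file sizes in the updated list can be roughly sanity-checked from the quantization bit widths: a weight stored at *b* bits contributes *b*/8 bytes, so a ~1.6B-parameter model (the parameter count is an assumption taken from the model name, not stated in this diff) should land near 0.8 GB at 4 bits, 1.6 GB at 8 bits, and 3.2 GB at 16 bits. A minimal sketch of that arithmetic:

```python
def approx_quant_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Lower-bound estimate of a quantized model file in decimal GB:
    parameters * bits-per-weight / 8 bytes. Real GGUF files run slightly
    larger because of metadata, per-block scale factors, and any layers
    kept at higher precision."""
    return n_params * bits_per_weight / 8 / 1e9

# ~1.6e9 parameters is an assumption based on the "refact-1.6B" name.
N = 1.6e9
print(f"q4_0: ~{approx_quant_size_gb(N, 4):.2f} GB  (listed file: 920 MB)")
print(f"q8_0: ~{approx_quant_size_gb(N, 8):.2f} GB  (listed file: 1.69 GB)")
print(f"f16:  ~{approx_quant_size_gb(N, 16):.2f} GB  (listed file: 3.17 GB)")
```

The estimates are consistent with the listed sizes; the q4_0 file exceeding the 0.80 GB lower bound is expected, since block-quantized formats store extra scale data per group of weights.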