mradermacher committed (verified) · Commit 7712667 · Parent: 2f72bd3

auto-patch README.md

Files changed (1):
  1. README.md +2 -3
README.md CHANGED
@@ -4,12 +4,12 @@ language:
 - en
 library_name: transformers
 license: apache-2.0
+no_imatrix: Missing importance matrix for tensor blk.0.ffn_gate_exps.weight in a very
+  low-bit quantization
 quantized_by: mradermacher
 tags:
 - language
 - granite-3.0
-no_imatrix: Missing importance matrix for tensor blk.0.ffn_gate_exps.weight in a very
-  low-bit quantization
 ---
 ## About

@@ -21,7 +21,6 @@ no_imatrix: Missing importance matrix for tensor blk.0.ffn_gate_exps.weight in a
 static quants of https://huggingface.co/ibm-granite/granite-3.0-3b-a800m-base

 <!-- provided-files -->
-weighted/imatrix quants are available at https://huggingface.co/mradermacher/granite-3.0-3b-a800m-base-i1-GGUF
 ## Usage

 If you are unsure how to use GGUF files, refer to one of [TheBloke's