TheBloke committed
Commit 1d24d1d · 1 Parent(s): b12b3f5

Update README.md

Files changed (1): README.md +0 -1
README.md CHANGED
@@ -53,7 +53,6 @@ Note that, at the time of writing, overall throughput is still lower than runnin
 
 * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/UltraRM-13B-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/UltraRM-13B-GPTQ)
-* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/UltraRM-13B-GGUF)
 * [OpenBMB's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openbmb/UltraRM-13b)
 <!-- repositories-available end -->
 