dclipca committed commit c6798c2 (verified) · 1 parent: 92b0b9f

Update README.md

Files changed (1): README.md (+20 -5)
README.md CHANGED
@@ -9,18 +9,33 @@ tags:
 - i1-GGUF
 ---
 
- Quantized to `i1-GGUF` using [SpongeQuant](https://github.com/SpongeEngine/SpongeQuant), the Oobabooga of LLM quantization. Chat & support at [Sponge Engine](https://discord.gg/azNmr2Gdgy).
+ Quantized to `i1-GGUF` using [SpongeQuant](https://github.com/SpongeEngine/SpongeQuant), the Oobabooga of LLM quantization.
+ 
+ <div style="display: flex; gap: 20px; align-items: center; margin-top: 0;">
+ <a href="https://github.com/SpongeEngine/SpongeQuant">
+ <img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/github-button.png" width="173">
+ </a>
+ <a href="https://discord.gg/azNmr2Gdgy">
+ <img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/discord-button.png" width="173">
+ </a>
+ </div>
 
+ ***
 <figure>
- <img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/078.png" alt="78. Underwater scene with diver and fish">
- <figcaption>78. Underwater scene with diver and fish</figcaption>
+ <img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/078.png" alt="Underwater scene with diver and fish">
+ <figcaption>Underwater scene with diver and fish</figcaption>
 </figure>
 
 <figure>
 <audio controls>
- <source src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/025.mp3" type="audio/mp3">
+ <source src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/007.mp3" type="audio/mp3">
 Your browser does not support the audio element.
 </audio>
- <figcaption>25. The Fairie Round Early Music Consort of London / David Munrow (Anthony Holborne)</figcaption>
+ <figcaption>Blur - Song 2 (UK, 1997)</figcaption>
 </figure>
+ 
+ ***
+ ### What is a `GGUF`?
+ `GGUF` is a file format used for running large language models (LLMs) on different types of computers. It supports both regular processors (CPUs) and graphics cards (GPUs), making it easier to run models across a wide range of hardware. Many LLMs require powerful and expensive GPUs, but `GGUF` improves compatibility and efficiency by optimizing how models are loaded and executed. If a GPU doesn't have enough memory, `GGUF` can offload parts of the model to the CPU, allowing it to run even when GPU resources are limited. `GGUF` is designed to work well with quantized models, which use less memory and run faster, making them ideal for lower-end hardware. However, it can also store full-precision models when needed. Thanks to these optimizations, `GGUF` allows LLMs to run efficiently on everything from high-end GPUs to laptops and even CPU-only systems.
+ ### What is an `i1-GGUF`?
+ `i1-GGUF` is an enhanced type of `GGUF` model that uses imatrix (importance matrix) quantization, a smarter way of reducing model size while preserving key details. Instead of shrinking everything equally, it analyzes the importance of different model components and keeps the most crucial parts more accurate. Like standard `GGUF`, `i1-GGUF` allows LLMs to run on various hardware, including CPUs and lower-end GPUs. Because it prioritizes important weights, however, `i1-GGUF` models deliver better responses than traditional `GGUF` models while maintaining efficiency.
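For the curious, the "file format" part is easy to see for yourself: per the published GGUF layout, every GGUF file opens with a small fixed header (4-byte magic `GGUF`, a `uint32` version, a `uint64` tensor count, and a `uint64` metadata key/value count, all little-endian). A minimal sketch of a header check in Python, independent of SpongeQuant and needing no real model file:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF preamble from the first 24 bytes of a file."""
    if len(data) < 24:
        raise ValueError("too short to be a GGUF file")
    # Layout per the GGUF spec: 4-byte magic, uint32 version,
    # uint64 tensor count, uint64 metadata key/value count (little-endian).
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file (bad magic)")
    return {"version": version, "tensor_count": n_tensors, "metadata_kv_count": n_kv}

# Synthetic 24-byte header for demonstration (values are made up):
demo = (b"GGUF"
        + (3).to_bytes(4, "little")      # format version
        + (291).to_bytes(8, "little")    # tensor count
        + (24).to_bytes(8, "little"))    # metadata key/value count
print(read_gguf_header(demo))  # {'version': 3, 'tensor_count': 291, 'metadata_kv_count': 24}
```

The metadata that follows this preamble is what lets loaders like llama.cpp discover the architecture, tokenizer, and quantization type without any side files.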
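To make "prioritizes important weights" concrete, here is a toy, self-contained sketch of importance-weighted quantization. It is not the actual imatrix algorithm used by llama.cpp or SpongeQuant; it only illustrates the core idea: when choosing a quantization scale for a block of weights, penalize reconstruction error more heavily on weights flagged as important.

```python
import random

def quantize_block(w, bits=4, importance=None):
    """Symmetric round-to-nearest quantization of one block of weights.

    The scale is picked by a small grid search. When `importance` is given,
    the search minimizes importance-weighted squared error instead of plain
    squared error -- a toy version of the imatrix idea. Assumes the block
    contains at least one nonzero weight.
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit
    amax = max(abs(x) for x in w)
    imp = importance if importance is not None else [1.0] * len(w)
    best_scale, best_err = None, float("inf")
    for i in range(32):                             # candidate scales
        s = amax / qmax * (0.5 + 0.5 * i / 31)
        err = sum(m * (x - min(max(round(x / s), -qmax - 1), qmax) * s) ** 2
                  for x, m in zip(w, imp))
        if err < best_err:
            best_scale, best_err = s, err
    return best_scale, best_err

# Demo on synthetic data: some weights are marked up to 100x more important.
random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(256)]
importance = [random.uniform(0.1, 10.0) for _ in range(256)]
s_plain, _ = quantize_block(weights)
s_imp, _ = quantize_block(weights, importance=importance)
print("plain scale:", s_plain, "importance-weighted scale:", s_imp)
```

Because the weighted search considers the same candidate scales as the plain one, its importance-weighted error can only be equal or lower. Real imatrix quantization applies the same principle per tensor, with importance estimated from activations collected on calibration text.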