dclipca committed · verified
Commit 1489196 · 1 Parent(s): 65a44ba

Upload folder using huggingface_hub

README.md CHANGED
@@ -10,21 +10,37 @@ tags:
 ---
 
 
- Quantized to `i1-GGUF` using [SpongeQuant](https://github.com/SpongeEngine/SpongeQuant), the Oobabooga of LLM quantization. Chat & support at [Sponge Engine](https://discord.gg/azNmr2Gdgy).
+ Quantized to `i1-GGUF` using [SpongeQuant](https://github.com/SpongeEngine/SpongeQuant), the Oobabooga of LLM quantization.
 
+ <div style="display: flex; gap: 20px; align-items: center; margin-top:0;">
+   <a href="https://github.com/SpongeEngine/SpongeQuant">
+     <img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/github-button.png" width="173">
+   </a>
+   <a href="https://discord.gg/azNmr2Gdgy">
+     <img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/discord-button.png" width="173">
+   </a>
+ </div>
+
+ ***
 <figure>
-   <img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/013.png" alt="13. Egypt, Red Sea, Sinai Peninsula and the Nile">
-   <figcaption>13. Egypt, Red Sea, Sinai Peninsula and the Nile</figcaption>
+   <img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/071.png" alt="Gymnast">
+   <figcaption>Gymnast</figcaption>
 </figure>
 
 <figure>
   <audio controls>
-     <source src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/015.mp3" type="audio/mp3">
+     <source src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/009.mp3" type="audio/mp3">
     Your browser does not support the audio element.
   </audio>
-   <figcaption>15. The Magic Flute (Die Zauberflöte), K. 620, Act II: Hell’s Vengeance Boils in My Heart – Bavarian State Opera Orchestra and Chorus / Wolfgang Sawallisch (Wolfgang Amadeus Mozart)</figcaption>
+   <figcaption>M83 Midnight City (France, 2011)</figcaption>
 </figure>
 
 ***
+
 ### What is a GGUF?
- GGUF is a type of file format used for running LLMs (large language models) on different types of computers. It works on both regular processors (CPU) and graphics cards (GPU). Some LLMs need powerful and expensive hardware, but GGUF makes it possible to run them on a wider range of computers, even ones without high-end GPUs. To make this possible, GGUF models use a technique called quantization, which reduces their size and memory usage. This helps them run more efficiently, but at lower settings, the model might lose some accuracy or detail in its responses.
+ GGUF is a file format used for running large language models (LLMs) on different types of computers. It supports both regular processors (CPUs) and graphics cards (GPUs), making it easier to run models across a wide range of hardware. Many LLMs require powerful and expensive GPUs, but GGUF improves compatibility and efficiency by optimizing how models are loaded and executed. If a GPU doesn't have enough memory, GGUF can offload parts of the model to the CPU, allowing it to run even when GPU resources are limited. GGUF is designed to work well with quantized models, which use less memory and run faster, making them ideal for lower-end hardware. However, it can also store full-precision models when needed. Thanks to these optimizations, GGUF allows LLMs to run efficiently on everything from high-end GPUs to laptops and even CPU-only systems.
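In practice that means choosing a quant file that fits your memory and offloading as many layers to the GPU as it can hold. A minimal sketch, assuming the `llama-cpp-python` bindings and a local copy of the `IQ1_S` file from this repo (the path and layer count below are placeholders, not recommendations):

```python
# Minimal sketch: run a GGUF quant via llama-cpp-python (Python bindings for llama.cpp).
# Assumes `pip install llama-cpp-python` and the file below downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="skyfall-36b-v2-i1-IQ1_S.gguf",  # local path to the quantized model
    n_gpu_layers=20,  # layers offloaded to the GPU; 0 = CPU only, -1 = offload all
    n_ctx=4096,       # context window
)

out = llm("Explain GGUF in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```

`n_gpu_layers=0` keeps everything on the CPU, which is the fallback described above; raise it until you run out of VRAM.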
+
+
+ ### What is an i1-GGUF?
+ i1-GGUF is an enhanced type of GGUF model that uses imatrix quantization—a smarter way of reducing model size while preserving key details. Instead of shrinking everything equally, it analyzes the importance of different model components and keeps the most crucial parts more accurate. Like standard GGUF, i1-GGUF allows LLMs to run on various hardware, including CPUs and lower-end GPUs. However, because it prioritizes important weights, i1-GGUF models deliver better responses than traditional GGUF models while maintaining efficiency.
+
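To make the "prioritizes important weights" idea concrete, here is a toy sketch of importance-weighted quantization. The `imp` array stands in for the per-weight importance an imatrix estimates from calibration data, and the quantizer is bare round-to-nearest, so this illustrates the principle rather than reproducing llama.cpp's or SpongeQuant's actual kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1024)    # toy weight row
imp = rng.random(1024) ** 2  # toy per-weight importance (imatrix stand-in)

def quantize(w, scale, bits=4):
    """Round-to-nearest quantization at a given scale, then dequantize."""
    qmax = 2 ** (bits - 1) - 1  # 7 for 4-bit
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

# Plain choice: derive the scale from the max magnitude alone.
plain = quantize(w, np.abs(w).max() / 7)

# imatrix-style choice: search scales and keep the one minimizing the
# importance-weighted error, so the weights that matter most stay accurate.
scales = np.linspace(0.5, 1.5, 101) * (np.abs(w).max() / 7)
errs = [np.sum(imp * (w - quantize(w, s)) ** 2) for s in scales]
best = quantize(w, scales[int(np.argmin(errs))])

print("plain weighted error:  ", np.sum(imp * (w - plain) ** 2))
print("importance-aware error:", np.sum(imp * (w - best) ** 2))
```

The second error is never worse than the first, because the plain scale is among the candidates; the gain comes entirely from letting importance steer the rounding trade-off, which is the same reasoning i1 quants apply at full scale.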
Skyfall-36B-v2.imatrix.dat CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:9854bcfb658b91b6ca12b89ae2846c82f2d73d8d2de5909752ed84930fffe2ce
+ oid sha256:e615edd5fa35949a2b523534035041a50037b89b53d7d5dd363521a4be6176ae
 size 16005665
skyfall-36b-v2-i1-IQ1_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:1c74789fc465598888dfedcbebb48d6f19751a64c4bfa5c6d86679cb35e46aa9
+ oid sha256:80954644183d7105ef9189eaeae25b48acda9a78c96bf871ad4fa4b7874c07fa
 size 8024280096