---
license: other
license_name: sacla
license_link: >-
  https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md
base_model:
- stabilityai/stable-diffusion-3.5-large
base_model_relation: quantized
---
## Overview
These models are made to work with [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp) release [master-ac54e00](https://github.com/leejet/stable-diffusion.cpp/releases/tag/master-ac54e00) onwards. Support for other inference backends is not guaranteed.

Quantized using [this PR](https://github.com/leejet/stable-diffusion.cpp/pull/447).

Normal K-quants do not work well with SD3.5-Large models because around 90% of the weights are in tensors whose shapes are not a multiple of the 256-element superblock size used by K-quants, so those tensors cannot be quantized that way. Mixing quantization types lets us take advantage of the better fidelity of K-quants to some extent while keeping the model file size relatively small. A simplified sketch of the idea is shown below.
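To make the constraint concrete, here is a minimal, illustrative sketch (not the actual selection logic of the PR above) of how a per-tensor quantizer could pick a type from a tensor's row length: K-quants need rows divisible by 256, legacy quants need rows divisible by 32, and anything else stays unquantized.

```python
# Illustrative sketch only: choosing a quantization type per tensor
# based on GGML block-size constraints. Not the actual code path of
# stable-diffusion.cpp or PR #447.

K_QUANT_SUPERBLOCK = 256   # elements per K-quant superblock
LEGACY_BLOCK = 32          # elements per legacy-quant block (q4_0, q5_0, q8_0, ...)

def pick_quant_type(row_len: int, preferred: str = "q4_k", fallback: str = "q4_0") -> str:
    """Return a quant type usable for a tensor whose rows hold `row_len` weights."""
    if row_len % K_QUANT_SUPERBLOCK == 0:
        return preferred   # K-quant fits: better fidelity per bit
    if row_len % LEGACY_BLOCK == 0:
        return fallback    # shape incompatible with K-quants; use a legacy type
    return "f16"           # odd-shaped tensors are left unquantized

# A row length such as 2432 is a multiple of 32 but not of 256, so tensors
# shaped like that end up on the legacy fallback path; only the 256-aligned
# ones get the preferred K-quant.
for row_len in (2432, 4096, 154):
    print(row_len, "->", pick_quant_type(row_len))
```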

## Files:

### Mixed Types:

TODO

### Legacy types:

TODO

## Outputs:

Sorted by model size (note that q4_0 and q4_k_4_0 have exactly the same file size).

| Quantization       | Robot girl                       | Text                               | Cute kitten                        |
| ------------------ | -------------------------------- | ---------------------------------- | ---------------------------------- |


Generated with a modified version of stable-diffusion.cpp with [this PR](https://github.com/leejet/stable-diffusion.cpp/pull/397) applied to enable CLIP timestep embedding support.

Text encoders used: a q4_k quant of t5xxl, full-precision clip_g, and a q8 quant of [ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF](https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14) in place of clip_l.
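For reference, an invocation with the stable-diffusion.cpp CLI might look like the sketch below. All file names are placeholders for the files you actually downloaded, and depending on the checkpoint you may also need to pass a VAE via `--vae`; check `sd --help` for the flags available in your build.

```bash
# Hedged example: file names are placeholders, adjust to your local files.
./sd \
  --diffusion-model sd3.5_large-q4_k_4_0.gguf \
  --clip_g clip_g.safetensors \
  --clip_l clip_l-q8_0.gguf \
  --t5xxl t5xxl-q4_k.gguf \
  -p "a photo of a cute kitten" \
  -H 1024 -W 1024 --cfg-scale 4.5 --steps 30 \
  -o output.png
```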

Full prompts and settings are embedded in the PNG metadata.