---
dataset_info:
  features:
  - name: latents
    sequence:
      sequence:
        sequence: float32
  - name: label_latent
    dtype: int64
  splits:
  - name: train
    num_bytes: 21682470308
    num_examples: 1281167
  - name: validation
    num_bytes: 846200000
    num_examples: 50000
  - name: test
    num_bytes: 1692400000
    num_examples: 100000
  download_size: 24417155228
  dataset_size: 24221070308
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

> [!WARNING]
> **Better latents**: I advise using another dataset, https://huggingface.co/datasets/cloneofsimo/imagenet.int8, which is already compressed (only 5 GB) and uses a better latent model (SDXL).

This dataset contains the latent representations of the ImageNet-1k dataset, computed with the Stability AI VAE `stabilityai/sd-vae-ft-ema`.

Each latent (the `latents` feature) has shape `(4, 32, 32)`.
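
For example, a quick sanity check on one sample (a minimal sketch; split names follow the dataset config above):

```python
import datasets
import numpy as np

ds = datasets.load_dataset("Forbu14/imagenet-1k-latent", split="validation")

sample = ds[0]
print(np.array(sample["latents"]).shape)  # should be (4, 32, 32)
print(sample["label_latent"])             # ImageNet class index
```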

To reconstruct the original image, you must use the same model that was used to create the latents:

```python
from diffusers import AutoencoderKL

# Load the same VAE that was used to encode the images
vae_model = "stabilityai/sd-vae-ft-ema"
vae = AutoencoderKL.from_pretrained(vae_model)
vae.eval()
```

The images were encoded using:

```python
# Convert PIL images to tensors with DEFAULT_TRANSFORM (defined below)
images = [DEFAULT_TRANSFORM(image.convert("RGB")) for image in examples["image"]]
images = torch.stack(images)
# `vaeprocess` rescales the tensors to the VAE input range (see note below)
images = vaeprocess.preprocess(images)
# The VAE must be on the same device, e.g. vae.to("cuda")
images = images.to(device="cuda", dtype=torch.float)
with torch.no_grad():
    latents = vae.encode(images).latent_dist.sample()
```
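
Note that `vaeprocess` is not defined in the snippet above; it presumably refers to a VAE image processor that rescales the `[0, 1]` tensors to the `[-1, 1]` range the VAE expects. A minimal sketch of what it could be, using `VaeImageProcessor` from diffusers (an assumption, not confirmed by the original code):

```python
from diffusers.image_processor import VaeImageProcessor

# Rescales image tensors from [0, 1] to [-1, 1]; resizing is already handled by DEFAULT_TRANSFORM
vaeprocess = VaeImageProcessor(do_resize=False)
```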

With `DEFAULT_TRANSFORM` being:

```python
from torchvision import transforms

DEFAULT_IMAGE_SIZE = 256

# Resize to 256x256 and convert to a tensor in [0, 1]
DEFAULT_TRANSFORM = transforms.Compose(
    [
        transforms.Resize((DEFAULT_IMAGE_SIZE, DEFAULT_IMAGE_SIZE)),
        transforms.ToTensor(),
    ]
)
```
 
The images can be decoded using:

```python
import datasets
import torch

latent_dataset = datasets.load_dataset("Forbu14/imagenet-1k-latent")

# The VAE expects a batch dimension: (batch, 4, 32, 32)
latent = torch.tensor(latent_dataset["train"][0]["latents"]).unsqueeze(0)
with torch.no_grad():
    image = vae.decode(latent).sample
```
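
The decoded tensor is approximately in the `[-1, 1]` range (matching the VAE's training normalization); to get a viewable image you can rescale it back, for instance (a small sketch, not part of the original pipeline):

```python
from torchvision.transforms.functional import to_pil_image

# Rescale from [-1, 1] to [0, 1], drop the batch dimension and convert to PIL
pil_image = to_pil_image((image[0] / 2 + 0.5).clamp(0, 1))
pil_image.save("decoded.png")
```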