Update README.md
README.md CHANGED
````diff
@@ -52,10 +52,11 @@ Below is an example code snippet that demonstrates how to load an image directly
 ### Code Example
 
 ```python
-from diffusers import AutoencoderKL
 import torch
-from torchvision.transforms.functional import to_tensor, to_pil_image
 from PIL import Image
+from diffusers import AutoencoderKL
+from huggingface_hub import hf_hub_download
+from torchvision.transforms.functional import to_tensor, to_pil_image
 
 # Load the pre-trained Emuru VAE from Hugging Face Hub.
 model = AutoencoderKL.from_pretrained("vpippi/emuru_vae")
@@ -74,8 +75,9 @@ def postprocess_tensor(tensor):
     return to_pil_image(tensor)
 
 # Example: Encode and decode an image.
-# Replace with your image path.
-image_path = "/
+# Replace the following line with your image path.
+image_path = hf_hub_download(repo_id="vpippi/emuru_vae", filename="samples/lam_sample.jpg")
+
 input_image = preprocess_image(image_path)
 
 # Encode the image to the latent space.
@@ -96,20 +98,4 @@ reconstructed_image = postprocess_tensor(reconstructed)
 
 # Save the reconstructed image.
 reconstructed_image.save("reconstructed_image.png")
-```
-
-## Additional Information
-
-If you'd like to test with images hosted directly on the Hugging Face Hub, consider:
-
-- **Including sample images in your repository:** Place them in a folder (e.g., `samples/`) and reference them directly.
-- **Using the `huggingface_hub` API:** For example:
-
-```python
-from huggingface_hub import hf_hub_download
-from PIL import Image
-
-# Replace 'vpippi/emuru_vae' and 'samples/lam_sample.jpg' with your details.
-image_path = hf_hub_download(repo_id="vpippi/emuru_vae", filename="samples/lam_sample.jpg")
-sample_image = Image.open(image_path).convert("RGB")
-sample_image.show()
-```
+```
````
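For reference, here is the full post-change example assembled from the hunks above. The diff does not show the bodies of `preprocess_image` and `postprocess_tensor` or the actual encode/decode calls, so those parts below are a minimal sketch: they assume the standard diffusers `AutoencoderKL` API (`encode(...).latent_dist.sample()`, `decode(...).sample`) and the usual [-1, 1] input scaling, which may differ from the README's real code.

```python
import torch
from PIL import Image
from diffusers import AutoencoderKL
from huggingface_hub import hf_hub_download
from torchvision.transforms.functional import to_tensor, to_pil_image

# Load the pre-trained Emuru VAE from Hugging Face Hub.
model = AutoencoderKL.from_pretrained("vpippi/emuru_vae")

def preprocess_image(image_path):
    # Assumption: RGB input scaled to [-1, 1], the common diffusers VAE convention.
    image = Image.open(image_path).convert("RGB")
    return (to_tensor(image) * 2.0 - 1.0).unsqueeze(0)

def postprocess_tensor(tensor):
    # Assumption: undo the [-1, 1] scaling and drop the batch dimension.
    tensor = (tensor.squeeze(0) / 2.0 + 0.5).clamp(0.0, 1.0)
    return to_pil_image(tensor)

# Example: Encode and decode an image.
# Replace the following line with your image path.
image_path = hf_hub_download(repo_id="vpippi/emuru_vae", filename="samples/lam_sample.jpg")

input_image = preprocess_image(image_path)

with torch.no_grad():
    # Encode the image to the latent space.
    latents = model.encode(input_image).latent_dist.sample()
    # Decode the latents back to image space.
    reconstructed = model.decode(latents).sample

reconstructed_image = postprocess_tensor(reconstructed)

# Save the reconstructed image.
reconstructed_image.save("reconstructed_image.png")
```

The change itself folds the old `## Additional Information` section into the main example: fetching `samples/lam_sample.jpg` with `hf_hub_download` replaces the placeholder local path, so the snippet runs end-to-end without any manual image setup.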