Update README.md
README.md CHANGED
````diff
@@ -11,9 +11,12 @@ tags:
 inference: false
 ---
 
-# SDXL-controlnet: Depth
+# SDXL-controlnet: Zoe-Depth
 
-These are
+These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with zoe depth conditioning. [Zoe-depth](https://github.com/isl-org/ZoeDepth) is an open-source
+SOTA depth estimation model which produces high-quality depth maps, which are better suited for conditioning.
+
+You can find some example images in the following.
 
 
 
@@ -82,7 +85,7 @@ def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invali
 def get_zoe_depth_map(image):
     with torch.autocast("cuda", enabled=True):
         depth = model_zoe_n.infer_pil(image)
-        depth = colorize(depth, cmap="gray_r"
+        depth = colorize(depth, cmap="gray_r")
     return depth
 ```
 
````
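The second hunk fixes a missing closing parenthesis in the `colorize(depth, cmap="gray_r")` call. For intuition about what that call does, here is a minimal sketch of a `gray_r`-style colorize step: it normalizes a raw depth map to `[0, 255]` and reverses the gray ramp so that near pixels come out bright. This is only an illustration with NumPy, not the actual `colorize` helper from the ZoeDepth repository, whose full signature (`vmin`, `vmax`, `invalid_val`, etc.) appears in the hunk header.

```python
import numpy as np

def colorize_gray_r(depth, vmin=None, vmax=None):
    """Hypothetical sketch of a 'gray_r' colorize: normalize a depth map
    to [0, 255] and invert it (near = bright), returning a uint8 image."""
    depth = np.asarray(depth, dtype=np.float64)
    vmin = depth.min() if vmin is None else vmin
    vmax = depth.max() if vmax is None else vmax
    if vmax > vmin:
        norm = (depth - vmin) / (vmax - vmin)  # scale into [0, 1]
    else:
        norm = np.zeros_like(depth)            # flat map: avoid divide-by-zero
    inverted = 1.0 - norm                      # 'gray_r' reverses the gray ramp
    return (inverted * 255).astype(np.uint8)

img = colorize_gray_r([[0.0, 1.0], [2.0, 4.0]])
print(img)  # nearest point (0.0) maps to 255, farthest (4.0) to 0
```

The resulting single-channel image is what gets fed to the ControlNet as its depth conditioning input.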