AsmaAILab committed · Commit 25c7edb · verified · Parent(s): d2177a2

Update README.md

Files changed (1): README.md (+25, -0)
README.md CHANGED
@@ -11,4 +11,29 @@ license: mit
 short_description: Interior design using controlnet depth model
 ---
 
+# Stable Diffusion ControlNet Depth Demo
+This Space demonstrates Stable Diffusion combined with a ControlNet model fine-tuned for depth, and automatically estimates a depth map from your input image.
+
+## How to use
+
+1. **Upload an input image:** Provide any photo (e.g., of a room, an object, a scene). The app will automatically estimate its depth map.
+
+2. **Enter a text prompt:** Describe the image you want to generate. The model will try to apply your prompt while respecting the structure derived from the depth map.
+
+3. **Adjust parameters:** Experiment with "Inference Steps" and "Guidance Scale" for different results.
+
+4. **Click "Submit"** to generate the image.
+
+## Model details
+
+- **Base diffusion model:** `runwayml/stable-diffusion-v1-5` (downloaded from the Hugging Face Hub)
+
+- **ControlNet model:** fine-tuned for depth (uploaded as `./Output_ControlNet_Finetune`)
+
+- **Depth estimator:** `Intel/dpt-hybrid-midas` (downloaded from the Hugging Face Hub)
+
+**Note:** The models are large, so the first generation after a cold start (when the Space wakes up) may take a few minutes while they load; subsequent generations will be faster.
+
+Enjoy!
+
  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
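The Space's application code is not part of this diff, but the flow the README describes (automatic depth estimation with `Intel/dpt-hybrid-midas` feeding a depth ControlNet on top of `runwayml/stable-diffusion-v1-5`) could be sketched roughly as below. The function names, the slider bounds in `clamp_params`, and the use of the local `./Output_ControlNet_Finetune` path are assumptions, not the Space's actual implementation:

```python
# Sketch of the generation flow this README describes. The Space's real app
# code is not in this diff; helper names and slider bounds are assumptions.

def clamp_params(steps, guidance):
    """Clamp the "Inference Steps" / "Guidance Scale" sliders to sane bounds
    (the exact ranges the Space uses are an assumption)."""
    return max(1, min(int(steps), 100)), max(1.0, min(float(guidance), 20.0))

def generate(image, prompt, steps=30, guidance=7.5,
             controlnet_dir="./Output_ControlNet_Finetune"):
    """Estimate a depth map from `image`, then generate with the ControlNet."""
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from transformers import pipeline as hf_pipeline

    steps, guidance = clamp_params(steps, guidance)

    # 1. Automatic depth estimation, as the README states.
    depth_estimator = hf_pipeline("depth-estimation",
                                  model="Intel/dpt-hybrid-midas")
    depth_map = depth_estimator(image)["depth"]

    # 2. Base SD 1.5 plus the fine-tuned depth ControlNet from the repo.
    controlnet = ControlNetModel.from_pretrained(controlnet_dir)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet
    )

    # 3. The prompt is applied while the depth map constrains the structure.
    result = pipe(prompt, image=depth_map,
                  num_inference_steps=steps, guidance_scale=guidance)
    return result.images[0]
```

Loading both models is what makes the first cold-start generation slow; keeping the pipeline objects cached between calls is what makes subsequent generations faster.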