Update README.md
README.md CHANGED
@@ -3,6 +3,11 @@ base_model:
 - stabilityai/stable-diffusion-2-inpainting
 - stabilityai/stable-diffusion-2-1
 pipeline_tag: image-to-image
+library_name: diffusers
+tags:
+- inpaint
+- colorization
+- stable-diffusion
 ---
 # **Example Outputs**
 
@@ -66,4 +71,4 @@ with torch.autocast('cuda',dtype=torch.bfloat16):
     with torch.no_grad():
         # each model's input image should be one of PIL.Image, List[PIL.Image], preprocessed tensor (B,3,H,W). Image must be 3-channel
         image_gray_restored = gray_inpaintor(image_gray_masked, num_inference_steps=250, seed=10)[0].convert('L') # you can pass 'mask' arg explicitly. mask : Tensor (B,1,512,512)
-        image_restored = gray2rgb(image_gray_restored.convert('RGB'))
+        image_restored = gray2rgb(image_gray_restored.convert('RGB'))
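
For context, the lines touched by the second hunk sit inside the README's inference example. Below is a minimal usage sketch of how they fit together, assuming `gray_inpaintor` and `gray2rgb` are the two pipelines constructed earlier in that example (their loading is not part of this diff) and using a hypothetical input file name.

```python
# Minimal sketch (assumptions: `gray_inpaintor` and `gray2rgb` are the
# already-loaded pipelines from earlier in the README; the input path is
# hypothetical).
import torch
from PIL import Image

# Each model accepts a PIL.Image, a List[PIL.Image], or a preprocessed
# (B,3,H,W) tensor; the image must be 3-channel, hence the .convert('RGB').
image_gray_masked = Image.open('photo_gray_masked.png').convert('RGB')

with torch.autocast('cuda', dtype=torch.bfloat16):
    with torch.no_grad():
        # Inpaint the masked grayscale photo; a `mask` tensor of shape
        # (B,1,512,512) can also be passed explicitly.
        image_gray_restored = gray_inpaintor(
            image_gray_masked, num_inference_steps=250, seed=10
        )[0].convert('L')
        # Colorize the restored grayscale image.
        image_restored = gray2rgb(image_gray_restored.convert('RGB'))
```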