Upload folder using huggingface_hub
- main/README.md +101 -1
- main/pipeline_faithdiff_stable_diffusion_xl.py +0 -0

main/README.md CHANGED
@@ -85,7 +85,7 @@ PIXART-α Controlnet pipeline | Implementation of the controlnet model for pixar

| Stable Diffusion XL Attentive Eraser Pipeline |[[AAAI2025 Oral] Attentive Eraser](https://github.com/Anonym0u3/AttentiveEraser) is a novel tuning-free method that enhances object removal capabilities in pre-trained diffusion models.|[Stable Diffusion XL Attentive Eraser Pipeline](#stable-diffusion-xl-attentive-eraser-pipeline)|-|[Wenhao Sun](https://github.com/Anonym0u3) and [Benlei Cui](https://github.com/Benny079)|
| Perturbed-Attention Guidance |StableDiffusionPAGPipeline is a modification of StableDiffusionPipeline to support Perturbed-Attention Guidance (PAG).|[Perturbed-Attention Guidance](#perturbed-attention-guidance)|[Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/perturbed_attention_guidance.ipynb)|[Hyoungwon Cho](https://github.com/HyoungwonCho)|
| CogVideoX DDIM Inversion Pipeline | Implementation of DDIM inversion and guided attention-based editing denoising process on CogVideoX. | [CogVideoX DDIM Inversion Pipeline](#cogvideox-ddim-inversion-pipeline) | - | [LittleNyima](https://github.com/LittleNyima) |
+| FaithDiff Stable Diffusion XL Pipeline | Implementation of [(CVPR 2025) FaithDiff: Unleashing Diffusion Priors for Faithful Image Super-resolution](https://arxiv.org/abs/2411.18824) - FaithDiff is a faithful image super-resolution method that leverages latent diffusion models by actively adapting the diffusion prior and jointly fine-tuning its components (encoder and diffusion model) with an alignment module to ensure high fidelity and structural consistency. | [FaithDiff Stable Diffusion XL Pipeline](#faithdiff-stable-diffusion-xl-pipeline) | [jychen9811/FaithDiff](https://huggingface.co/jychen9811/FaithDiff) | [Junyang Chen, Jinshan Pan, Jiangxin Dong, IMAG Lab (adapted by Eliseu Silva)](https://github.com/JyChen9811/FaithDiff) |

To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.
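For instance, a minimal sketch of that argument (the README's own snippet at this point is elided from this diff; the FaithDiff pipeline added here needs the extra setup shown in its section below):

```py
from diffusers import DiffusionPipeline

# Any file name from diffusers/examples/community can be passed as `custom_pipeline`.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="pipeline_faithdiff_stable_diffusion_xl",
)
```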
@@ -5334,3 +5334,103 @@ output = pipeline_for_inversion(

```py
pipeline.export_latents_to_video(output.inverse_latents[-1], "path/to/inverse_video.mp4", fps=8)
pipeline.export_latents_to_video(output.recon_latents[-1], "path/to/recon_video.mp4", fps=8)
```
# FaithDiff Stable Diffusion XL Pipeline

[Project](https://jychen9811.github.io/FaithDiff_page/) / [GitHub](https://github.com/JyChen9811/FaithDiff/)

This is the implementation of the FaithDiff pipeline for SDXL, adapted to use HuggingFace Diffusers.

For more details, see the project links above.

## Example Usage

This example upscales and restores a low-quality image. The input image has a resolution of 512x512 and is upscaled 2x, to a final resolution of 1024x1024. It is possible to upscale to a larger scale, but it is recommended that the input image be at least 1024x1024 in those cases. To upscale an image 4x, for example, feed the 2x result back into a second 2x pass, performing progressive scaling (see the sketch after the example below).
```py
import random

import numpy as np
import torch
from diffusers import DiffusionPipeline, AutoencoderKL, UniPCMultistepScheduler
from huggingface_hub import hf_hub_download
from diffusers.utils import load_image
from PIL import Image

device = "cuda"
dtype = torch.float16
MAX_SEED = np.iinfo(np.int32).max

# Download weights for the additional UNet layers
model_file = hf_hub_download(
    "jychen9811/FaithDiff",
    filename="FaithDiff.bin", local_dir="./proc_data/faithdiff", local_dir_use_symlinks=False
)

# Initialize the models and pipeline
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=dtype)

model_id = "SG161222/RealVisXL_V4.0"
pipe = DiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=dtype,
    vae=vae,
    unet=None,  # <- Do not load the UNet with the original model.
    custom_pipeline="pipeline_faithdiff_stable_diffusion_xl",
    use_safetensors=True,
    variant="fp16",
).to(device)

# Here we need to use the pipeline's internal UNet model
pipe.unet = pipe.unet_model.from_pretrained(model_id, subfolder="unet", variant="fp16", use_safetensors=True)

# Load the additional layers into the model
pipe.unet.load_additional_layers(weight_path="proc_data/faithdiff/FaithDiff.bin", dtype=dtype)

# Enable VAE tiling
pipe.set_encoder_tile_settings()
pipe.enable_vae_tiling()

# Optimization
pipe.enable_model_cpu_offload()

# Set the selected scheduler
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Input params
prompt = "The image features a woman in her 55s with blonde hair and a white shirt, smiling at the camera. She appears to be in a good mood and is wearing a white scarf around her neck."
upscale = 2  # upscale factor
start_point = "lr"  # or "noise"
latent_tiled_overlap = 0.5
latent_tiled_size = 1024

# Load image
lq_image = load_image("https://huggingface.co/datasets/DEVAIEXP/assets/resolve/main/woman.png")
original_height = lq_image.height
original_width = lq_image.width
print(f"Current resolution: H:{original_height} x W:{original_width}")

width = original_width * int(upscale)
height = original_height * int(upscale)
print(f"Final resolution: H:{height} x W:{width}")

# Restoration
image = lq_image.resize((width, height), Image.LANCZOS)
input_image, width_init, height_init, width_now, height_now = pipe.check_image_size(image)

generator = torch.Generator(device=device).manual_seed(random.randint(0, MAX_SEED))
gen_image = pipe(
    lr_img=input_image,
    prompt=prompt,
    num_inference_steps=20,
    guidance_scale=5,
    generator=generator,
    start_point=start_point,
    height=height_now,
    width=width_now,
    overlap=latent_tiled_overlap,
    target_size=(latent_tiled_size, latent_tiled_size),
).images[0]

cropped_image = gen_image.crop((0, 0, width_init, height_init))
cropped_image.save("data/result.png")
```
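For the 4x case mentioned above, here is a minimal sketch of progressive scaling that chains two 2x passes. It reuses `pipe`, `prompt`, `device`, `MAX_SEED`, and `lq_image` from the example; the `upscale_2x` helper is illustrative, not part of the pipeline API:

```py
# Illustrative helper: one 2x restoration pass, same calls as the example above.
def upscale_2x(pipe, image, prompt):
    resized = image.resize((image.width * 2, image.height * 2), Image.LANCZOS)
    input_image, width_init, height_init, width_now, height_now = pipe.check_image_size(resized)
    generator = torch.Generator(device=device).manual_seed(random.randint(0, MAX_SEED))
    result = pipe(
        lr_img=input_image,
        prompt=prompt,
        num_inference_steps=20,
        guidance_scale=5,
        generator=generator,
        start_point="lr",
        height=height_now,
        width=width_now,
        overlap=0.5,
        target_size=(1024, 1024),
    ).images[0]
    # Crop away any padding added by check_image_size
    return result.crop((0, 0, width_init, height_init))

# 4x total: 512x512 -> 1024x1024 -> 2048x2048
intermediate = upscale_2x(pipe, lq_image, prompt)
final = upscale_2x(pipe, intermediate, prompt)
final.save("data/result_4x.png")
```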
### Result

[<img src="https://huggingface.co/datasets/DEVAIEXP/assets/resolve/main/faithdiff_restored.PNG" width="512px" height="512px"/>](https://imgsli.com/MzY1NzE2)
main/pipeline_faithdiff_stable_diffusion_xl.py ADDED

The diff for this file is too large to render; see the raw diff.