---
dataset_info:
  features:
  - name: Prompt
    dtype: string
  - name: Category
    dtype: string
  - name: Challenge
    dtype: string
  - name: Note
    dtype: string
  - name: images
    dtype: image
  - name: model_name
    dtype: string
  - name: seed
    dtype: int64
  splits:
  - name: train
    num_bytes: 166170790.0
    num_examples: 1632
  download_size: 166034308
  dataset_size: 166170790.0
---

# Images of Parti Prompts for "if-v-1.0"

The following code was used to generate the images:

```py
from diffusers import DiffusionPipeline
import torch

# Stage I: DeepFloyd IF base model (64x64 output)
pipe_low = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", safety_checker=None, watermarker=None, torch_dtype=torch.float16, variant="fp16")
pipe_low.enable_model_cpu_offload()

# Stage II: super-resolution model, reusing the stage I text encoder
pipe_up = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0", safety_checker=None, watermarker=None, text_encoder=pipe_low.text_encoder, torch_dtype=torch.float16, variant="fp16")
pipe_up.enable_model_cpu_offload()

prompt = ""  # a Parti prompt
generator = torch.Generator("cuda").manual_seed(0)

# Encode the prompt once and reuse the embeddings for both stages
prompt_embeds, negative_prompt_embeds = pipe_low.encode_prompt(prompt)

# Stage I returns tensors (output_type="pt"), which stage II upscales to the final PIL image
images = pipe_low(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, num_inference_steps=100, generator=generator, output_type="pt").images
images = pipe_up(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, image=images, num_inference_steps=100, generator=generator).images[0]
```
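
For reference, a minimal sketch of loading the resulting dataset with the 🤗 `datasets` library. The repository id below is a placeholder (replace it with this dataset's actual id on the Hub); the column names follow the `dataset_info` schema above.

```py
from datasets import load_dataset

# Placeholder repository id -- substitute this dataset's actual Hub id.
REPO_ID = "user/parti-prompts-if-v-1.0"

ds = load_dataset(REPO_ID, split="train")
print(ds.features)  # Prompt, Category, Challenge, Note, images, model_name, seed

row = ds[0]
print(row["Prompt"], row["model_name"], row["seed"])
row["images"].save("sample.png")  # the `images` column decodes to one PIL image per row
```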