---
dataset_info:
  features:
  - name: Prompt
    dtype: string
  - name: Category
    dtype: string
  - name: Challenge
    dtype: string
  - name: Note
    dtype: string
  - name: images
    dtype: image
  - name: model_name
    dtype: string
  - name: seed
    dtype: int64
  splits:
  - name: train
    num_bytes: 166170790.0
    num_examples: 1632
  download_size: 166034308
  dataset_size: 166170790.0
---
# Images of Parti Prompts for "if-v-1.0"
The following code was used to generate the images:
```py
from diffusers import DiffusionPipeline
import torch

# Stage I: base 64x64 pipeline
pipe_low = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", safety_checker=None, watermarker=None, torch_dtype=torch.float16, variant="fp16"
)
pipe_low.enable_model_cpu_offload()

# Stage II: super-resolution pipeline, reusing the stage-I text encoder
pipe_up = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", safety_checker=None, watermarker=None, text_encoder=pipe_low.text_encoder, torch_dtype=torch.float16, variant="fp16"
)
pipe_up.enable_model_cpu_offload()

prompt = ""  # a Parti prompt
generator = torch.Generator("cuda").manual_seed(0)

# Encode the prompt once and feed the embeddings to both stages
prompt_embeds, negative_prompt_embeds = pipe_low.encode_prompt(prompt)
images = pipe_low(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, num_inference_steps=100, generator=generator, output_type="pt").images
image = pipe_up(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, image=images, num_inference_steps=100, generator=generator).images[0]
```
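
To inspect the generated images together with their prompts, seeds, and metadata, the dataset can be loaded with the `datasets` library. The snippet below is a minimal sketch; the repository id is a placeholder (the card does not state it), so replace it with the actual Hub path of this dataset.

```py
from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hub path of this dataset
ds = load_dataset("<user>/<this-dataset>", split="train")

# Each row holds a Parti prompt, its metadata, the generated image, and the seed used
example = ds[0]
print(example["Prompt"], example["Category"], example["Challenge"], example["Note"])
print(example["model_name"], example["seed"])
example["images"].show()  # decoded as a PIL image
```

The column names correspond to the `dataset_info` features listed in the card header above.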