---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
base_model: stabilityai/stable-diffusion-2-base
instance_prompt: 'Mobile app:'
---
# UI-Diffuser-V2
UI-Diffuser-V2 is fine-tuned from `stabilityai/stable-diffusion-2-base` on the GPSCap dataset for mobile UI generation. It is the second version of the UI-Diffuser model; the first version, UI-Diffuser-V1, was introduced in our paper *Boosting GUI Prototyping with Diffusion Models*.
## Using with Diffusers
```python
import torch
import matplotlib.pyplot as plt
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

# Load the base model with the Euler scheduler
model_id = "stabilityai/stable-diffusion-2-base"
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)

# Load the UI-Diffuser-V2 LoRA weights on top of the base model
lora_path = "Jl-wei/ui-diffuser-v2"
pipe.load_lora_weights(lora_path)
pipe.to("cuda")

# Prompts should start with "Mobile app:", the instance prompt used for fine-tuning
prompt = "Mobile app: health monitoring report"
images = pipe(
    prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
    height=512,
    width=288,
    num_images_per_prompt=10,
).images

# Display the ten generated screens in a 2x5 grid
columns = 5
fig = plt.figure(figsize=(20, 10))
for i, image in enumerate(images):
    plt.subplot(int(len(images) / columns), columns, i + 1)
    plt.imshow(image)
for ax in fig.axes:
    ax.axis("off")
plt.show()
```
Please note that this model can only be used for academic purposes.