---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: THUYTIEN
widget:
- text: A photo of THUYTIEN in the office
  output:
    url: image_0.png
- text: A photo of THUYTIEN in the office
  output:
    url: image_1.png
- text: A photo of THUYTIEN in the office
  output:
    url: image_2.png
- text: A photo of THUYTIEN in the office
  output:
    url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- template:sd-lora
- sd3
- sd3-diffusers
---
# SD3 DreamBooth - tuenguyen/sd3_thuy_tien
All four sample images below were generated with the prompt "A photo of THUYTIEN in the office".

![](https://huggingface.co/tuenguyen/sd3_thuy_tien/resolve/main/image_0.png)
![](https://huggingface.co/tuenguyen/sd3_thuy_tien/resolve/main/image_1.png)
![](https://huggingface.co/tuenguyen/sd3_thuy_tien/resolve/main/image_2.png)
![](https://huggingface.co/tuenguyen/sd3_thuy_tien/resolve/main/image_3.png)
## Model description
These are tuenguyen/sd3_thuy_tien DreamBooth weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using DreamBooth with the SD3 diffusers trainer.
The text encoder was not fine-tuned.
## Trigger words

You should use `THUYTIEN` in your prompt to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```python
from diffusers import AutoPipelineForText2Image
import torch

# Load the fine-tuned pipeline in half precision and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('tuenguyen/sd3_thuy_tien', torch_dtype=torch.float16).to('cuda')

# Include the trigger word THUYTIEN in the prompt
image = pipeline('A photo of THUYTIEN in the office').images[0]
```
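For reproducible results you can pass a seeded generator and tune the sampler settings. The sketch below is illustrative: the step count, guidance scale, seed, and output filename are assumptions, not values confirmed by the training run.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'tuenguyen/sd3_thuy_tien', torch_dtype=torch.float16
).to('cuda')

# Fix the random seed so the same prompt reproduces the same image
generator = torch.Generator(device='cuda').manual_seed(42)

image = pipeline(
    'A photo of THUYTIEN in the office',
    num_inference_steps=28,  # illustrative; trade quality for speed
    guidance_scale=7.0,      # illustrative prompt-adherence strength
    generator=generator,
).images[0]
image.save('thuytien_office.png')  # placeholder filename
```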
## License

Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations

### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```
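Until the TODO above is filled in, here is a minimal sketch for running the pipeline on a memory-constrained GPU; `enable_model_cpu_offload()` is a standard diffusers memory-saving option, and the prompt and filename are placeholders.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'tuenguyen/sd3_thuy_tien', torch_dtype=torch.float16
)
# Offload submodules to the CPU between forward passes to reduce peak VRAM usage
pipeline.enable_model_cpu_offload()

image = pipeline('A photo of THUYTIEN in the office').images[0]
image.save('example.png')  # placeholder filename
```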
### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]
## Training details

[TODO: describe the data used to train the model]