---
license: mit
---
# Unique3d-Normal-Diffuser Model Card
[🌟GitHub](https://github.com/TingtingLiao/unique3d_diffuser) | [🦸 Project Page](https://wukailu.github.io/Unique3D/) | [🔋MVImage Diffuser](https://huggingface.co/Luffuly/unique3d-mvimage-diffuser)
![mv-normal](https://github.com/user-attachments/assets/de91a83b-a14f-4878-a950-4d5cba786f69)
## Example
Note: the input image is expected to have a **white background**.
![mv-normal](https://github.com/user-attachments/assets/f0b56d70-d1fb-4f18-a205-f41f85ec72d7)
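If your input has a transparent background, you can composite it onto white before passing it to the pipeline. The helper below is a minimal sketch using only Pillow; `to_white_background` is a hypothetical name, not part of this repository.

```python
from PIL import Image


def to_white_background(img: Image.Image) -> Image.Image:
    """Composite an image (possibly with alpha) onto a pure white background."""
    img = img.convert("RGBA")
    bg = Image.new("RGBA", img.size, (255, 255, 255, 255))
    # Paste the image over white wherever it is transparent
    return Image.alpha_composite(bg, img).convert("RGB")
```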
```python
import torch
from PIL import Image

from pipeline import Unique3dDiffusionPipeline

# Generation options
seed = -1
generator = torch.Generator(device='cuda').manual_seed(seed)
forward_args = dict(
    width=512,
    height=512,
    width_cond=512,
    height_cond=512,
    generator=generator,
    guidance_scale=1.5,
    num_inference_steps=30,
    num_images_per_prompt=1,
)

# Load the pipeline
pipe = Unique3dDiffusionPipeline.from_pretrained(
    "Luffuly/unique3d-normal-diffuser",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda")

# Load the input image (white background expected)
image = Image.open('image.png').convert("RGB")

# Run the forward pass and save the first generated normal map
out = pipe(image, **forward_args).images
out[0].save("out.png")
```
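`pipe(...).images` is a list, so with `num_images_per_prompt > 1` you get several multi-view normal maps per call. A small helper like the one below can tile them into a single grid for quick inspection; `make_grid` is a hypothetical utility, not part of this repository.

```python
from PIL import Image


def make_grid(images: list, cols: int = 2) -> Image.Image:
    """Tile same-sized images left-to-right, top-to-bottom into one grid."""
    w, h = images[0].size
    rows = (len(images) + cols - 1) // cols
    grid = Image.new("RGB", (cols * w, rows * h), "white")
    for i, img in enumerate(images):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid
```

For example, `make_grid(out, cols=2).save("grid.png")` writes all generated views into one image.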
## Citation
```bibtex
@misc{wu2024unique3d,
  title={Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image},
  author={Kailu Wu and Fangfu Liu and Zhihan Cai and Runjie Yan and Hanyang Wang and Yating Hu and Yueqi Duan and Kaisheng Ma},
  year={2024},
  eprint={2405.20343},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```