---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# Diffree

<p align="center">
<a href="https://arxiv.org/pdf/2407.16982"><u>[📜 Arxiv]</u></a>

<a href="https://github.com/OpenGVLab/Diffree"><u>[🔍 Code]</u></a>
</p>

[Diffree](https://arxiv.org/pdf/2407.16982) is a diffusion model that adds new objects to images using only text descriptions, seamlessly integrating them with consistent background and spatial context.

In this repo, we provide the [🤗 Hugging Face demo](https://huggingface.co/spaces/LiruiZhao/Diffree) for Diffree, and you can also download our model via [🤗 Checkpoint](https://huggingface.co/LiruiZhao/Diffree).
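As a minimal sketch (not part of the original README), the checkpoint repo can also be fetched programmatically with the `huggingface_hub` library. Only the repo id `LiruiZhao/Diffree` comes from this README; the local directory and how you load the files afterward are assumptions:

```python
# Sketch: download the Diffree checkpoint repo with huggingface_hub.
# The repo id "LiruiZhao/Diffree" is taken from the README above; the
# file layout inside the repo and how to load it are not specified here.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="LiruiZhao/Diffree")
print(f"Checkpoint files downloaded to: {local_dir}")
```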

## Citation

If you found this work useful, please consider citing:

```
@article{zhao2024diffree,
  title={Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model},
  author={Zhao, Lirui and Yang, Tianshuo and Shao, Wenqi and Zhang, Yuxin and Qiao, Yu and Luo, Ping and Zhang, Kaipeng and Ji, Rongrong},
  journal={arXiv preprint arXiv:2407.16982},
  year={2024}
}
```