huanngzh committed
Commit 3863eb3 · verified · 1 Parent(s): 66d99e0

Update README.md

Files changed (1):
  1. README.md +41 -0
README.md CHANGED
@@ -1,3 +1,44 @@
---
license: mit
---
+
+ # Model Card for EpiDiff
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ [EpiDiff](https://huanngzh.github.io/EpiDiff/) is a generative model based on Zero123 that takes a single image of an object as a conditioning frame and generates 16 multi-view images of that object.
+
+ ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6375d136dee28348a9c63cbf/1bJcM1NyflPWO4C1HDHPM.gif)
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ Given a single conditioning image of an object, EpiDiff generates 16 consistent multi-view images of that object. It builds on the Zero123 diffusion model; see the linked paper for architectural details.
+
+ - **Model type:** Generative image-to-multiview model
+ - **License:** MIT
+
+ ### Model Sources
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** https://github.com/huanngzh/EpiDiff
+ - **Paper:** https://arxiv.org/abs/2312.06725
+ - **Demo:** https://huanngzh.github.io/EpiDiff/
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ For usage instructions, please refer to [our EpiDiff GitHub repository](https://github.com/huanngzh/EpiDiff).
+
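+ As an illustration only, the sketch below shows what a single-image-to-multiview call could look like. `EpiDiffPipeline`, its import path, and its arguments are hypothetical stand-ins, not the repository's actual entry points, which may be organized differently.
+
+ ```python
+ from PIL import Image
+
+ # Hypothetical wrapper class; the real loading and sampling code lives in
+ # the EpiDiff GitHub repository.
+ from epidiff import EpiDiffPipeline  # hypothetical import
+
+ # Load weights (the model ID is illustrative) and move the model to the GPU.
+ pipe = EpiDiffPipeline.from_pretrained("huanngzh/EpiDiff").to("cuda")
+
+ # One conditioning image in, 16 generated views out.
+ cond = Image.open("object.png").convert("RGB")
+ views = pipe(cond, num_views=16)
+ for i, view in enumerate(views):
+     view.save(f"view_{i:02d}.png")
+ ```
+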
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ We train on renderings of objects from the LVIS subset of Objaverse, produced with [huanngzh/render-toolbox](https://github.com/huanngzh/render-toolbox).
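+
+ For context, here is a minimal sketch of the kind of camera setup typically used to render an object from 16 surrounding viewpoints. The radius, elevation, and evenly spaced azimuths are illustrative assumptions, not the exact render-toolbox configuration.
+
+ ```python
+ import numpy as np
+
+ def lookat_pose(cam_pos, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
+     """Camera-to-world matrix for a camera at cam_pos looking at target."""
+     forward = target - cam_pos
+     forward /= np.linalg.norm(forward)
+     right = np.cross(forward, up)
+     right /= np.linalg.norm(right)
+     true_up = np.cross(right, forward)
+     pose = np.eye(4)
+     pose[:3, 0] = right
+     pose[:3, 1] = true_up
+     pose[:3, 2] = -forward  # OpenGL-style: camera looks down its -Z axis
+     pose[:3, 3] = cam_pos
+     return pose
+
+ # 16 cameras on a ring around the object (radius and elevation are assumptions).
+ radius, elevation = 1.5, np.deg2rad(20.0)
+ poses = []
+ for azimuth in np.linspace(0.0, 2 * np.pi, num=16, endpoint=False):
+     cam = radius * np.array([
+         np.cos(elevation) * np.cos(azimuth),
+         np.cos(elevation) * np.sin(azimuth),
+         np.sin(elevation),
+     ])
+     poses.append(lookat_pose(cam))
+ ```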