CodeChris committed
Commit 06bb090 · 1 Parent(s): cc6a2e4

Temporarily remove README

Files changed (2)
  1. README.md +0 -100
  2. ReadMe.md +0 -100
README.md DELETED
@@ -1,100 +0,0 @@
# AnimagineXL-v3-openvino

This is an *unofficial* [OpenVINO](https://github.com/openvinotoolkit/openvino) variant of [cagliostrolab/animagine-xl-3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0).

This repo is provided for convenience when running the Animagine XL v3 model on Intel CPUs/GPUs, since loading and converting an SDXL model to OpenVINO can be quite slow (tens of minutes).

Table of contents:
- [Usage](#usage)
- [How the conversion was done](#how-the-conversion-was-done)
- [Appendix](#appendix)


## Usage

Taking the CPU as an example:

```python
from optimum.intel.openvino import OVStableDiffusionXLPipeline
from diffusers import (
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler
)

model_id = "CodeChris/AnimagineXL-v3-openvino"
pipe = OVStableDiffusionXLPipeline.from_pretrained(model_id)
# Fix the output image size & batch_size for faster inference
img_w, img_h = 832, 1216  # Example
pipe.reshape(width=img_w, height=img_h,
             batch_size=1, num_images_per_prompt=1)

## Change scheduler
# Animagine XL recommends Euler a:
# pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    use_karras_sigmas=True,
    algorithm_type="dpmsolver++"
)  # I prefer DPM++ 2M Karras
# Turn off the safety filter
pipe.safety_checker = None

# To run on an Intel GPU instead of the CPU:
# pipe.to("GPU")
```
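
Because the pipeline is reshaped to a static size, it can also be compiled ahead of time so that the first generation call does not pay the compilation cost. A minimal sketch, assuming optimum-intel's `compile()` method:

```python
# Pre-compile the statically reshaped pipeline; otherwise compilation
# happens lazily on the first pipe(...) call.
pipe.compile()
```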

After the pipe is prepared, a txt2img task can be executed as follows:

```python
prompt = "1girl, dress, day, masterpiece, best quality"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"

images = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    # If reshaped, the image size must equal the reshaped size
    width=img_w, height=img_h,
    guidance_scale=7,
    num_inference_steps=20
).images
img = images[0]
img.save('sample.png')
```

For convenience, here are the recommended image sizes from the official Animagine XL docs:

```
# Or their transpose (swap width and height)
896 x 1152
832 x 1216
768 x 1344
640 x 1536
1024 x 1024
```
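
For a landscape image, the same sizes can be used with width and height swapped. For example, continuing with the `pipe` from the Usage section above:

```python
# Landscape variant of 832 x 1216 (width and height swapped)
img_w, img_h = 1216, 832
pipe.reshape(width=img_w, height=img_h,
             batch_size=1, num_images_per_prompt=1)
```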

## How the conversion was done

First, install Optimum with the OpenVINO extras:

```powershell
pip install --upgrade-strategy eager optimum[openvino,nncf]
```

Then, the model was converted with the following command:

```powershell
optimum-cli export openvino --model 'cagliostrolab/animagine-xl-3.0' 'models/openvino/AnimagineXL-v3' --task 'stable-diffusion-xl'
```
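
The exported folder can then be loaded locally to check that the conversion succeeded; for example (the path below is the output directory used in the command above):

```python
from optimum.intel.openvino import OVStableDiffusionXLPipeline

# Load the freshly exported OpenVINO model from the local output directory
pipe = OVStableDiffusionXLPipeline.from_pretrained('models/openvino/AnimagineXL-v3')
```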

## Appendix

Push large files:

```
git lfs install
huggingface-cli lfs-enable-largefiles .
```
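
Alternatively, the converted folder can be uploaded from Python with `huggingface_hub` instead of git-lfs. A minimal sketch (the repo id is the one used in the Usage section; adjust the paths to your setup):

```python
from huggingface_hub import HfApi

# Upload the exported OpenVINO folder to the Hub; large files are handled automatically
api = HfApi()
api.upload_folder(
    folder_path="models/openvino/AnimagineXL-v3",
    repo_id="CodeChris/AnimagineXL-v3-openvino",
    repo_type="model",
)
```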

Other notes:

* The conversion was done using `optimum==1.16.1` and `openvino==2023.2.0`.
* Run `optimum-cli export openvino --help` for more usage details.
ReadMe.md DELETED