---
license: other
license_name: faipl
license_link: https://freedevproject.org/faipl-1.0-sd
language:
- en
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-xl
base_model: Minthy/RouWei-0.6
widget:
- text: >-
    1girl, green hair, sweater, looking at viewer, upper body, beanie,
    outdoors, night, turtleneck, masterpiece, best quality
  parameters:
    negative_prompt: >-
      lowres, bad anatomy, bad hands, text, error, missing fingers, extra
      digit, fewer digits, cropped, worst quality, low quality, signature,
      watermark, username, blurry
  example_title: 1girl
---

# AkashicPulse v1.0

**AkashicPulse** is a finetune of RouWei, an Illustrious-based model. It went through one merging step and three finetuning steps to deliver striking results that stand out from comparable models.

### Recommended settings

- Sampler: Euler a
- Steps: 20-30; the sweet spot is 28.
- CFG: 4-10; the sweet spot is 7.
- Optional: on reForge or ComfyUI, enable MaHiRo CFG.

### Recommended prompting format

- Prompt: [1girl/1boy], [character name], [series], by [artist name], [the rest of the prompt], masterpiece, best quality
- Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, signature, watermark, username, blurry, [the rest of the negative prompt]

### Training process

- Step 1: merging
  - Gave RouWei a CyberFix treatment.
- Step 2: training new concepts
  - Dataset size: ~10,000 images
  - GPU: 2× A100 80GB
  - Optimizer: AdaFactor
  - UNet learning rate: 7.5e-6
  - Text encoder learning rate: 3.75e-6
  - Batch size: 16
  - Gradient accumulation: 3
  - Warmup steps: 200 (2 × 100)
  - Min SNR: 5
  - Epochs: 10
  - Random cropping: true
  - Loss: Huber
  - Huber schedule: SNR
- Step 3: finetuning I
  - Dataset size: ~4,500 images
  - GPU: 1× A100 80GB
  - Optimizer: AdaFactor
  - UNet learning rate: 3e-6
  - Text encoder learning rate: N/A
  - Batch size: 16
  - Gradient accumulation: 3
  - Warmup steps: 5%
  - Min SNR: 5
  - Epochs: 15
  - Random cropping: true
  - Loss: Huber
  - Huber schedule: SNR
  - Multires noise iterations: 8
- Step 4: finetuning II
  - Dataset size: ~4,500 images
  - GPU: 1× A100 80GB
  - Optimizer: AdaFactor
  - UNet learning rate: 3e-6
  - Text encoder learning rate: N/A
  - Batch size: 48
  - Gradient accumulation: 1
  - Warmup steps: 5%
  - Min SNR: 5
  - Epochs: 15
  - Loss: L2
  - Noise offset: 0.0357

### Added series

- DanDaDan

The model falls under the Fair AI Public License 1.0-SD with no additional terms.
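The recommended prompting format above can be sketched as a small helper. This is a minimal illustration, not part of the model itself; the function names and arguments are hypothetical:

```python
# Recommended negative prompt from the model card.
NEGATIVE_BASE = (
    "lowres, bad anatomy, bad hands, text, error, missing fingers, "
    "extra digit, fewer digits, cropped, worst quality, low quality, "
    "signature, watermark, username, blurry"
)

def build_prompt(subject, character="", series="", artist="", rest=""):
    """Assemble: [subject], [character], [series], by [artist], [rest],
    masterpiece, best quality — skipping any empty fields."""
    parts = [subject, character, series]
    if artist:
        parts.append(f"by {artist}")
    parts.append(rest)
    parts = [p for p in parts if p]  # drop empty fields
    return ", ".join(parts + ["masterpiece", "best quality"])

def build_negative(rest=""):
    """Append any extra negatives to the recommended negative prompt."""
    return f"{NEGATIVE_BASE}, {rest}" if rest else NEGATIVE_BASE

print(build_prompt("1girl", rest="green hair, sweater, looking at viewer"))
# → 1girl, green hair, sweater, looking at viewer, masterpiece, best quality
```

If you run the model with diffusers, the recommended settings correspond roughly to `EulerAncestralDiscreteScheduler` with `num_inference_steps=28` and `guidance_scale=7.0` (MaHiRo CFG is a separate reForge/ComfyUI feature and is not covered here).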