Bhaskar009 committed · verified
Commit d8d5e14 · 1 Parent(s): 3a0df25

Update README.md

Files changed (1): README.md +44 -1
README.md CHANGED
@@ -10,6 +10,8 @@ tags:
 - diffusers
 - diffusers-training
 - lora
+datasets:
+- lambdalabs/naruto-blip-captions
 ---
 
 <!-- This model card has been generated automatically according to the information the training script had access to. You
@@ -40,4 +42,45 @@ These are LoRA adaption weights for stable-diffusion-v1-5/stable-diffusion-v1-5.
 
 ## Training details
 
-[TODO: describe the data used to train the model]
+### Dataset
+
+The model was trained on the `lambdalabs/naruto-blip-captions` dataset, which
+consists of Naruto character images with BLIP-generated captions. It provides a
+diverse set of characters, poses, and backgrounds, making it well suited for
+fine-tuning Stable Diffusion on anime-style images.
+
+### Model
+
+- Base model: Stable Diffusion v1.5 (`stable-diffusion-v1-5/stable-diffusion-v1-5`)
+- Fine-tuning method: LoRA (Low-Rank Adaptation)
+- Purpose: specializing Stable Diffusion to generate Naruto-style anime characters
+
+### Preprocessing
+
+- Images were resized to 512x512 resolution.
+- Center cropping was applied to obtain square images.
+- Random horizontal flipping was used as a data augmentation technique.
+
+### Training configuration
+
+- Batch size: 1
+- Gradient accumulation steps: 4 (simulates a larger effective batch size)
+- Gradient checkpointing: enabled (reduces memory consumption)
+- Max training steps: 800
+- Learning rate: 1e-5 (constant schedule, no warmup)
+- Max gradient norm: 1 (prevents gradient explosion)
+- Memory optimization: xFormers enabled for efficient attention computation
+
+### Validation
+
+- Validation prompt: "A Naruto character"
+- 4 validation images were generated during training.
+- Model checkpoints were saved every 500 steps.
+
+### Model output
+
+- The fine-tuned LoRA weights were saved to `sd-naruto-model`.
+- The model was pushed to the Hugging Face Hub:
+  Repository: `Bhaskar009/SD_1.5_LoRA`
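
The training configuration in the diff above implies a few derived quantities that are worth sanity-checking. The sketch below is not part of the training script — the variable names are hypothetical and simply restate the configuration values — but it shows how the effective batch size, the number of images processed, and the number of intermediate checkpoints follow from the stated settings, assuming (as in typical diffusers training scripts) that "max training steps" counts optimizer updates, each of which accumulates gradients over 4 micro-batches.

```python
# Hypothetical sanity check on the configuration above (not from the training script).
batch_size = 1         # per-micro-batch size
grad_accum_steps = 4   # micro-batches accumulated per optimizer update
max_train_steps = 800  # optimizer updates (assumed, as in diffusers scripts)
checkpoint_every = 500 # checkpointing interval in steps

# Gradient accumulation sums gradients over 4 micro-batches before each
# optimizer update, so each update behaves like a batch of 4.
effective_batch_size = batch_size * grad_accum_steps

# Total images processed over the whole run.
images_seen = max_train_steps * effective_batch_size

# Intermediate checkpoints written before training ends at step 800.
num_checkpoints = max_train_steps // checkpoint_every

print(effective_batch_size)  # 4
print(images_seen)           # 3200
print(num_checkpoints)       # 1
```

Under these assumptions the run makes 800 updates with an effective batch of 4 (3200 images total, roughly 2.5 passes over the ~1200-image naruto-blip-captions set) and writes a single intermediate checkpoint at step 500.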