The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error.

Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 38 new columns ({'dora_wd', 'unit', 'use_tucker', 'block_alphas', 'train_norm', 'dim_from_weights', 'constrain', 'module_dropout', 'network_alpha', 'training_comment', 'scale_weight_norms', 'stop_text_encoder_training_pct', 'rescaled', 'down_lr_weight', 'train_on_input', 'decompose_both', 'LoRA_type', 'network_dropout', 'LyCORIS_preset', 'conv_dim', 'block_dims', 'network_dim', 'text_encoder_lr', 'use_cp', 'conv_block_alphas', 'block_lr_zero_threshold', 'rank_dropout_scale', 'unet_lr', 'mid_lr_weight', 'conv_block_dims', 'max_grad_norm', 'up_lr_weight', 'use_scalar', 'bypass_mode', 'network_weights', 'factor', 'rank_dropout', 'conv_alpha'}) and 51 missing columns ({'sd3_cache_text_encoder_outputs', 'logit_std', 'save_last_n_epochs', 'sd3_text_encoder_batch_size', 'weighting_scheme', 'learning_rate_te1', 'no_token_padding', 'discrete_flow_shift', 'cpu_offload_checkpointing', 'log_config', 'timestep_sampling', 'flux1_t5xxl', 'skip_cache_check', 'fused_backward_pass', 'fused_optimizer_groups', 'mem_eff_save', 'clip_l', 'lr_scheduler_type', 'save_t5xxl', 'model_prediction_type', 'blocks_to_swap', 'sd3_cache_text_encoder_outputs_to_disk', 'flux1_clip_l', 'flux1_cache_text_encoder_outputs_to_disk', 'flux_fused_backward_pass', 'ae', 'learning_rate_te', 'logit_mean', 'disable_mmap_load_safetensors', 'mode_scale', 'apply_t5_attn_mask', 'flux1_checkbox', 'blockwise_fused_optimizers', 'single_blocks_to_swap', 'split_mode', 'save_clip', 't5xxl_device', 'clip_g', 'save_last_n_epochs_state', 'flux1_cache_text_encoder_outputs', 'save_as_bool', 'stop_text_encoder_training', 'learning_rate_te2', 'double_blocks_to_swap', 't5xxl_dtype', 't5xxl', 't5xxl_max_token_length', 'train_blocks', 'guidance_scale', 'sd3_checkbox', 'lr_warmup_steps'}).

This happened while the json dataset builder was generating data using hf://datasets/kratosboy507/kratos_configs/senajuo2idol_noobv75_20241209-003658.json (at revision c8e8663bd8a4d74931fd0031d515da7ca06e6f51). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast LoRA_type: string LyCORIS_preset: string adaptive_noise_scale: int64 additional_parameters: string async_upload: bool block_alphas: string block_dims: string block_lr_zero_threshold: string bucket_no_upscale: bool bucket_reso_steps: int64 bypass_mode: bool cache_latents: bool cache_latents_to_disk: bool caption_dropout_every_n_epochs: int64 caption_dropout_rate: int64 caption_extension: string clip_skip: int64 color_aug: bool constrain: int64 conv_alpha: int64 conv_block_alphas: string conv_block_dims: string conv_dim: int64 dataset_config: string debiased_estimation_loss: bool decompose_both: bool dim_from_weights: bool dora_wd: bool down_lr_weight: string dynamo_backend: string dynamo_mode: string dynamo_use_dynamic: bool dynamo_use_fullgraph: bool enable_bucket: bool epoch: int64 extra_accelerate_launch_args: string factor: int64 flip_aug: bool fp8_base: bool full_bf16: bool full_fp16: bool gpu_ids: string gradient_accumulation_steps: int64 gradient_checkpointing: bool huber_c: int64 huber_schedule: string huggingface_path_in_repo: string huggingface_repo_id: string huggingface_repo_type: string huggingface_repo_visibility: string huggingface_token: string ip_noise_gamma: int64 ip_noise_gamma_random_strength: bool keep_tokens: int64 learning_rate: int64 log_tracker_config: string log_tracker_name: string log_with: string logging_dir: string loss_type: string lr_scheduler: string lr_scheduler_args: string lr_scheduler_num_cycles: int64 lr_scheduler_power: int64 lr_warmup: i ...
etwork_dim: int64 network_dropout: int64 network_weights: string noise_offset: double noise_offset_random_strength: bool noise_offset_type: string num_cpu_threads_per_process: int64 num_machines: int64 num_processes: int64 optimizer: string optimizer_args: string output_dir: string output_name: string persistent_data_loader_workers: bool pretrained_model_name_or_path: string prior_loss_weight: int64 random_crop: bool rank_dropout: int64 rank_dropout_scale: bool reg_data_dir: string rescaled: bool resume: string resume_from_huggingface: string sample_every_n_epochs: int64 sample_every_n_steps: int64 sample_prompts: string sample_sampler: string save_every_n_epochs: int64 save_every_n_steps: int64 save_last_n_steps: int64 save_last_n_steps_state: int64 save_model_as: string save_precision: string save_state: bool save_state_on_train_end: bool save_state_to_huggingface: bool scale_v_pred_loss_like_noise_pred: bool scale_weight_norms: int64 sdxl: bool sdxl_cache_text_encoder_outputs: bool sdxl_no_half_vae: bool seed: int64 shuffle_caption: bool stop_text_encoder_training_pct: int64 text_encoder_lr: int64 train_batch_size: int64 train_data_dir: string train_norm: bool train_on_input: bool training_comment: string unet_lr: int64 unit: int64 up_lr_weight: string use_cp: bool use_scalar: bool use_tucker: bool v2: bool v_parameterization: bool v_pred_like_loss: int64 vae: string vae_batch_size: int64 wandb_api_key: string wandb_run_name: string weighted_captions: bool xformers: string to {'adaptive_noise_scale': Value(dtype='int64', id=None), 'additional_parameters': Value(dtype='string', id=None), 'ae': Value(dtype='string', id=None), 'apply_t5_attn_mask': Value(dtype='bool', id=None), 'async_upload': Value(dtype='bool', id=None), 'blocks_to_swap': Value(dtype='int64', id=None), 'blockwise_fused_optimizers': Value(dtype='bool', id=None), 'bucket_no_upscale': Value(dtype='bool', id=None), 'bucket_reso_steps': Value(dtype='int64', id=None), 'cache_latents': Value(dtype='bool', id=None), 'cache_latents_to_disk': Value(dtype='bool', id=None), 'caption_dropout_every_n_epochs': Value(dtype='int64', id=None), 'caption_dropout_rate': Value(dtype='int64', id=None), 'caption_extension': Value(dtype='string', id=None), 'clip_g': Value(dtype='string', id=None), 'clip_l': Value(dtype='string', id=None), 'clip_skip': Value(dtype='int64', id=None), 'color_aug': Value(dtype='bool', id=None), 'cpu_offload_checkpointing': Value(dtype='bool', id=None), 'dataset_config': Value(dtype='string', id=None), 'debiased_estimation_loss': Value(dtype='bool', id=None), 'disable_mmap_load_safetensors': Value(dtype='bool', id=None), 'discrete_flow_shift': Value(dtype='float64', id=None), 'double_blocks_to_swap': Value(dtype='int64', id=None), 'dynamo_backend': Value(dtype='string', id=None), 'dynamo_mode': Value(dtype='string', id=None), 'dynamo_use_dynamic': Value(dtype='bool', id=None), 'dynamo_use_fullgraph': Value(dtype='bool', id=None), 'enable_bucket': Value(dtype='bool', id=None), ' ... 
'sd3_cache_text_encoder_outputs_to_disk': Value(dtype='bool', id=None), 'sd3_checkbox': Value(dtype='bool', id=None), 'sd3_text_encoder_batch_size': Value(dtype='int64', id=None), 'sdxl': Value(dtype='bool', id=None), 'sdxl_cache_text_encoder_outputs': Value(dtype='bool', id=None), 'sdxl_no_half_vae': Value(dtype='bool', id=None), 'seed': Value(dtype='int64', id=None), 'shuffle_caption': Value(dtype='bool', id=None), 'single_blocks_to_swap': Value(dtype='int64', id=None), 'skip_cache_check': Value(dtype='bool', id=None), 'split_mode': Value(dtype='bool', id=None), 'stop_text_encoder_training': Value(dtype='int64', id=None), 't5xxl': Value(dtype='string', id=None), 't5xxl_device': Value(dtype='string', id=None), 't5xxl_dtype': Value(dtype='string', id=None), 't5xxl_max_token_length': Value(dtype='int64', id=None), 'timestep_sampling': Value(dtype='string', id=None), 'train_batch_size': Value(dtype='int64', id=None), 'train_blocks': Value(dtype='string', id=None), 'train_data_dir': Value(dtype='string', id=None), 'v2': Value(dtype='bool', id=None), 'v_parameterization': Value(dtype='bool', id=None), 'v_pred_like_loss': Value(dtype='int64', id=None), 'vae': Value(dtype='string', id=None), 'vae_batch_size': Value(dtype='int64', id=None), 'wandb_api_key': Value(dtype='string', id=None), 'wandb_run_name': Value(dtype='string', id=None), 'weighted_captions': Value(dtype='bool', id=None), 'weighting_scheme': Value(dtype='string', id=None), 'xformers': Value(dtype='string', id=None)} because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset (same 38 new / 51 missing column message, offending file, and advice as quoted above).
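The 38 "new" and 51 "missing" columns are simply the set difference between the keys in the offending JSON file and the keys of the schema inferred from the files read before it. Below is a minimal sketch of that comparison, assuming each config file is a single flat JSON object; the baseline file name is a placeholder, not something named in the error.

```python
import json

# Hypothetical local copies: the offending file from the error message and any
# other config file from the repo that matches the inferred (Flux-style) schema.
with open("senajuo2idol_noobv75_20241209-003658.json") as f:
    offending_keys = set(json.load(f))
with open("some_flux_style_config.json") as f:  # placeholder baseline file
    baseline_keys = set(json.load(f))

new_columns = offending_keys - baseline_keys      # reported as "38 new columns"
missing_columns = baseline_keys - offending_keys  # reported as "51 missing columns"
print(len(new_columns), sorted(new_columns))
print(len(missing_columns), sorted(missing_columns))
```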
Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
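Following the "separate them into different configurations" advice, one way to work with the data without editing the files is to group files that share a schema, either in the dataset card's YAML (as described in the linked manual-configuration docs) or explicitly at load time. Below is a hedged sketch of the load-time variant with the `datasets` library; apart from the file named in the error, the file names are placeholders.

```python
from datasets import load_dataset

# Client-side grouping: each call mixes only files that share one schema,
# so no cast error can occur.
lora_style = load_dataset(
    "kratosboy507/kratos_configs",
    data_files=["senajuo2idol_noobv75_20241209-003658.json"],  # LoRA/LyCORIS-style keys
    split="train",
)

flux_style = load_dataset(
    "kratosboy507/kratos_configs",
    data_files=["flux_config_a.json", "flux_config_b.json"],  # placeholder names
    split="train",
)
```

Declaring the same two groups as named configurations in the dataset card's `configs` metadata (per the docs link in the error) should let the viewer build each group on its own.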
Preview columns and types:

Column | Type
---|---
adaptive_noise_scale | int64
additional_parameters | string
ae | string
apply_t5_attn_mask | bool
async_upload | bool
blocks_to_swap | int64
blockwise_fused_optimizers | bool
bucket_no_upscale | bool
bucket_reso_steps | int64
cache_latents | bool
cache_latents_to_disk | bool
caption_dropout_every_n_epochs | int64
caption_dropout_rate | int64
caption_extension | string
clip_g | string
clip_l | string
clip_skip | int64
color_aug | bool
cpu_offload_checkpointing | bool
dataset_config | string
debiased_estimation_loss | bool
disable_mmap_load_safetensors | bool
discrete_flow_shift | float64
double_blocks_to_swap | int64
dynamo_backend | string
dynamo_mode | string
dynamo_use_dynamic | bool
dynamo_use_fullgraph | bool
enable_bucket | bool
epoch | int64
extra_accelerate_launch_args | string
flip_aug | bool
flux1_cache_text_encoder_outputs | bool
flux1_cache_text_encoder_outputs_to_disk | bool
flux1_checkbox | bool
flux1_clip_l | string
flux1_t5xxl | string
flux_fused_backward_pass | bool
fp8_base | bool
full_bf16 | bool
full_fp16 | bool
fused_backward_pass | bool
fused_optimizer_groups | int64
gpu_ids | string
gradient_accumulation_steps | int64
gradient_checkpointing | bool
guidance_scale | int64
huber_c | float64
huber_schedule | string
huggingface_path_in_repo | string
huggingface_repo_id | string
huggingface_repo_type | string
huggingface_repo_visibility | string
huggingface_token | string
ip_noise_gamma | int64
ip_noise_gamma_random_strength | bool
keep_tokens | int64
learning_rate | float64
learning_rate_te | int64
learning_rate_te1 | float64
learning_rate_te2 | float64
log_config | bool
log_tracker_config | string
log_tracker_name | string
log_with | string
logging_dir | string
logit_mean | int64
logit_std | int64
loss_type | string
lr_scheduler | string
lr_scheduler_args | string
lr_scheduler_num_cycles | int64
lr_scheduler_power | int64
lr_scheduler_type | string
lr_warmup | int64
lr_warmup_steps | int64
main_process_port | int64
masked_loss | bool
max_bucket_reso | int64
max_data_loader_n_workers | int64
max_resolution | string
max_timestep | int64
max_token_length | int64
max_train_epochs | int64
max_train_steps | int64
mem_eff_attn | bool
mem_eff_save | bool
metadata_author | string
metadata_description | string
metadata_license | string
metadata_tags | string
metadata_title | string
min_bucket_reso | int64
min_snr_gamma | int64
min_timestep | int64
mixed_precision | string
mode_scale | float64
model_list | string
model_prediction_type | string
multi_gpu | bool
multires_noise_discount | float64
multires_noise_iterations | int64
no_token_padding | bool
noise_offset | float64
noise_offset_random_strength | bool
noise_offset_type | string
num_cpu_threads_per_process | int64
num_machines | int64
num_processes | int64
optimizer | string
optimizer_args | string
output_dir | string
output_name | string
persistent_data_loader_workers | bool
pretrained_model_name_or_path | string
prior_loss_weight | int64
random_crop | bool
reg_data_dir | string
resume | string
resume_from_huggingface | string
sample_every_n_epochs | int64
sample_every_n_steps | int64
sample_prompts | string
sample_sampler | string
save_as_bool | bool
save_clip | bool
save_every_n_epochs | int64
save_every_n_steps | int64
save_last_n_epochs | int64
save_last_n_epochs_state | int64
save_last_n_steps | int64
save_last_n_steps_state | int64
save_model_as | string
save_precision | string
save_state | bool
save_state_on_train_end | bool
save_state_to_huggingface | bool
save_t5xxl | bool
scale_v_pred_loss_like_noise_pred | bool
sd3_cache_text_encoder_outputs | bool
sd3_cache_text_encoder_outputs_to_disk | bool
sd3_checkbox | bool
sd3_text_encoder_batch_size | int64
sdxl | bool
sdxl_cache_text_encoder_outputs | bool
sdxl_no_half_vae | bool
seed | int64
shuffle_caption | bool
single_blocks_to_swap | int64
skip_cache_check | bool
split_mode | bool
stop_text_encoder_training | int64
t5xxl | string
t5xxl_device | string
t5xxl_dtype | string
t5xxl_max_token_length | int64
timestep_sampling | string
train_batch_size | int64
train_blocks | string
train_data_dir | string
v2 | bool
v_parameterization | bool
v_pred_like_loss | int64
vae | string
vae_batch_size | int64
wandb_api_key | string
wandb_run_name | string
weighted_captions | bool
weighting_scheme | string
xformers | string
LoRA_type | string
LyCORIS_preset | string
block_alphas | string
block_dims | string
block_lr_zero_threshold | string
bypass_mode | bool
constrain | int64
conv_alpha | int64
conv_block_alphas | string
conv_block_dims | string
conv_dim | int64
decompose_both | bool
dim_from_weights | bool
dora_wd | bool
down_lr_weight | string
factor | int64
max_grad_norm | int64
mid_lr_weight | string
module_dropout | int64
network_alpha | int64
network_dim | int64
network_dropout | int64
network_weights | string
rank_dropout | int64
rank_dropout_scale | bool
rescaled | bool
scale_weight_norms | int64
stop_text_encoder_training_pct | int64
text_encoder_lr | int64
train_norm | bool
train_on_input | bool
training_comment | string
unet_lr | int64
unit | int64
up_lr_weight | string
use_cp | bool
use_scalar | bool
use_tucker | bool
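Because the files disagree, the preview is built over the union of both key sets, so each row shows null for every column its source file does not define. The same effect can be reproduced locally with pandas; the first file name below is a placeholder for any Flux-style config.

```python
import json
import pandas as pd

records = []
for path in ["flux_style_config.json",  # placeholder Flux-style file
             "senajuo2idol_noobv75_20241209-003658.json"]:
    with open(path) as f:
        records.append(json.load(f))

# A DataFrame built from dicts with different key sets takes the union of the
# keys and fills the gaps with NaN, mirroring the null-padded preview rows below.
df = pd.DataFrame(records)
print(df.shape)
print(df.isna().sum().sort_values(ascending=False).head(10))
```

The three preview rows follow, with values listed in the same column order as the table above and null where the source file has no value for that column.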
0 | /home/Ubuntu/Downloads/ae.safetensors | false | false | 0 | false | true | 64 | true | true | 0 | 0 | .txt | 0 | false | false | false | false | 3.1582 | 0 | no | default | false | false | false | 200 | false | true | true | true | /home/Ubuntu/Downloads/clip_l.safetensors | /home/Ubuntu/Downloads/t5xxl_fp16.safetensors | true | false | true | false | false | 0 | 0 | 1 | true | 1 | 0.1 | snr | 0 | false | 0 | 0.000004 | 0 | 0.00001 | 0.00001 | false | 0 | 1 | l2 | constant | 1 | 1 | 0 | 0 | 0 | false | 2,048 | 0 | 1024,1024 | 1,000 | 75 | 0 | 0 | false | true | 256 | 0 | 0 | bf16 | 1.29 | custom | raw | false | 0.3 | 0 | false | 0 | false | Original | 2 | 1 | 1 | Adafactor | scale_parameter=False relative_step=False warmup_init=False weight_decay=0.01 | /home/Ubuntu/apps/StableSwarmUI/Models/diffusion_models | Quality_1 | false | /home/Ubuntu/Downloads/flux1-dev.safetensors | 1 | false | 0 | 0 | euler_a | false | false | 25 | 0 | 0 | 0 | 0 | 0 | safetensors | fp16 | false | false | false | false | false | false | false | false | 1 | false | false | false | 1 | false | 0 | false | false | 0 | bf16 | 512 | sigmoid | 1 | all | /home/Ubuntu/Downloads/training_imgs | false | false | 0 | 4 | false | logit_normal | xformers | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
0 | /home/Ubuntu/Downloads/ae.safetensors | false | false | 0 | false | true | 64 | true | true | 0 | 0 | .txt | 0 | false | false | false | false | 3.1582 | 0 | no | default | false | false | false | 200 | false | true | true | true | /home/Ubuntu/Downloads/clip_l.safetensors | /home/Ubuntu/Downloads/t5xxl_fp16.safetensors | true | false | true | false | false | 0 | 0 | 1 | true | 1 | 0.1 | snr | 0 | false | 0 | 0.00001 | 0 | 0.00001 | 0.00001 | false | 0 | 1 | l2 | constant | 1 | 1 | 0 | 0 | 0 | false | 2,048 | 0 | 1024,1024 | 1,000 | 75 | 0 | 0 | false | true | 256 | 0 | 0 | bf16 | 1.29 | custom | raw | false | 0.3 | 0 | false | 0 | false | Original | 2 | 1 | 1 | Adafactor | scale_parameter=False relative_step=False warmup_init=False weight_decay=0.01 | /home/Ubuntu/apps/StableSwarmUI/Models/diffusion_models | Quality_1 | false | /home/Ubuntu/Downloads/flux1-dev.safetensors | 1 | false | 0 | 0 | euler_a | false | false | 25 | 0 | 0 | 0 | 0 | 0 | safetensors | fp16 | false | false | false | false | false | false | false | false | 1 | false | false | false | 1 | false | 0 | false | false | 0 | bf16 | 512 | sigmoid | 7 | all | /home/Ubuntu/Downloads/training_imgs | false | false | 0 | 4 | false | logit_normal | xformers | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
0 | --zero_terminal_snr | null | null | false | null | null | true | 32 | false | true | 0 | 0 | .txt | null | null | 0 | false | null | false | null | null | null | no | default | false | false | true | 12 | false | null | null | null | null | null | null | false | false | false | null | null | 1 | true | null | 0 | snr | 0 | false | 3 | 1 | null | null | null | null | D:\lora_model\log | null | null | l2 | cosine_with_restarts | 1 | 1 | null | 5 | null | 0 | false | 2,048 | 0 | 1024,1024 | 1,000 | 225 | 0 | 0 | false | null | 256 | 0 | 0 | bf16 | null | custom | null | false | 0.2 | 6 | null | 0.12 | true | Original | 2 | 1 | 2 | Prodigy | decouple=True weight_decay=0.05 d_coef=1.2 betas=0.9,0.99 use_bias_correction=True safeguard_warmup=True | D:/lora_model/model | senajuo2idol_noobv75 | false | E:/stable-diffusion-webui-master/models/Stable-diffusion/noobaiXLNAIXL_vPred075SVersion.safetensors | 1 | false | 0 | 0 | dpm_2_a | null | null | 1 | 0 | null | null | 0 | 0 | safetensors | fp16 | false | false | false | null | true | null | null | null | null | true | false | true | 4,134 | true | null | null | null | null | null | null | null | null | null | 8 | null | E:\角色相片\學偶\十王\done - 複製 | false | true | 0 | 0 | false | null | xformers | Standard | full | false | 0 | 1 | 1 | false | false | false | -1 | 1 | 0 | 8 | 8 | 0 | 0 | false | false | 0 | 0 | 1 | false | true | 1 | 1 | false | false | false |
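To inspect the exact file blamed in the error as it existed at the pinned revision, it can be fetched with `huggingface_hub`; a short sketch, assuming the library is installed and the repository is publicly readable:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the offending config at the exact commit cited in the error message.
path = hf_hub_download(
    repo_id="kratosboy507/kratos_configs",
    repo_type="dataset",
    filename="senajuo2idol_noobv75_20241209-003658.json",
    revision="c8e8663bd8a4d74931fd0031d515da7ca06e6f51",
)
with open(path) as f:
    config = json.load(f)
print(len(config), "keys;", "LoRA_type =", config.get("LoRA_type"))
```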