2025/02/10 23:59:45 - mmengine - INFO -
------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0]
    CUDA available: True
    MUSA available: False
    numpy_random_seed: 678900884
    GPU 0: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /usr/local/cuda
    NVCC: Cuda compilation tools, release 12.2, V12.2.140
    GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
    PyTorch: 2.2.1+cu121
    PyTorch compiling details: PyTorch built with:
      - GCC 9.3
      - C++ Version: 201703
      - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
      - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
      - OpenMP 201511 (a.k.a. OpenMP 4.5)
      - LAPACK is enabled (usually provided by MKL)
      - NNPACK is enabled
      - CPU capability usage: AVX512
      - CUDA Runtime 12.1
      - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
      - CuDNN 8.9.2
      - Magma 2.6.1
      - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF
    TorchVision: 0.17.1+cu121
    OpenCV: 4.9.0
    MMEngine: 0.10.3

Runtime environment:
    launcher: none
    randomness: {'seed': None, 'deterministic': False}
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    deterministic: False
    Distributed launcher: none
    Distributed training: False
    GPU number: 1
------------------------------------------------------------
2025/02/10 23:59:45 - mmengine - INFO - Config:
accumulative_counts = 2
batch_size = 4
betas = (
    0.9,
    0.999,
)
custom_hooks = [
    dict(
        tokenizer=dict(
            pretrained_model_name_or_path=
            '/root/share/new_models/OpenGVLab/InternVL2-2B',
            trust_remote_code=True,
            type='transformers.AutoTokenizer.from_pretrained'),
        type='xtuner.engine.hooks.DatasetInfoHook'),
]
data_path = '/root/share/datasets/FoodieQA/sivqa_llava.json'
data_root = '/root/share/datasets/FoodieQA/'
dataloader_num_workers = 4
default_hooks = dict(
    checkpoint=dict(
        by_epoch=False,
        interval=64,
        max_keep_ckpts=-1,
        save_optimizer=False,
        type='mmengine.hooks.CheckpointHook'),
    logger=dict(
        interval=10,
        log_metric_by_epoch=False,
        type='mmengine.hooks.LoggerHook'),
    param_scheduler=dict(type='mmengine.hooks.ParamSchedulerHook'),
    sampler_seed=dict(type='mmengine.hooks.DistSamplerSeedHook'),
    timer=dict(type='mmengine.hooks.IterTimerHook'))
env_cfg = dict(
    cudnn_benchmark=False,
    dist_cfg=dict(backend='nccl'),
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
image_folder = '/root/share/datasets/FoodieQA/'
launcher = 'none'
llava_dataset = dict(
    data_paths='/root/share/datasets/FoodieQA/sivqa_llava.json',
    image_folders='/root/share/datasets/FoodieQA/',
    max_length=8192,
    model_path='/root/share/new_models/OpenGVLab/InternVL2-2B',
    template='xtuner.utils.PROMPT_TEMPLATE.internlm2_chat',
    type='xtuner.dataset.InternVL_V1_5_Dataset')
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=False)
lr = 3e-05
max_epochs = 10
max_length = 8192
max_norm = 1
model = dict(
    freeze_llm=True,
    freeze_visual_encoder=True,
    llm_lora=dict(
        lora_alpha=256,
        lora_dropout=0.05,
        r=128,
        target_modules=None,
        task_type='CAUSAL_LM',
        type='peft.LoraConfig'),
    model_path='/root/share/new_models/OpenGVLab/InternVL2-2B',
    type='xtuner.model.InternVL_V1_5')
optim_type = 'torch.optim.AdamW'
optim_wrapper = dict(
    optimizer=dict(
        betas=(
            0.9,
            0.999,
        ),
        lr=3e-05,
        type='torch.optim.AdamW',
        weight_decay=0.05),
    type='DeepSpeedOptimWrapper')
param_scheduler = [
    dict(
        begin=0,
        by_epoch=True,
        convert_to_iter_based=True,
        end=0.3,
        start_factor=1e-05,
        type='mmengine.optim.LinearLR'),
    dict(
        begin=0.3,
        by_epoch=True,
        convert_to_iter_based=True,
        end=10,
        eta_min=0.0,
        type='mmengine.optim.CosineAnnealingLR'),
]
path = '/root/share/new_models/OpenGVLab/InternVL2-2B'
prompt_template = 'xtuner.utils.PROMPT_TEMPLATE.internlm2_chat'
randomness = dict(deterministic=False, seed=None)
resume = False
runner_type = 'FlexibleRunner'
save_steps = 64
save_total_limit = -1
strategy = dict(
    config=dict(
        bf16=dict(enabled=True),
        fp16=dict(enabled=False, initial_scale_power=16),
        gradient_accumulation_steps='auto',
        gradient_clipping='auto',
        train_micro_batch_size_per_gpu='auto',
        zero_allow_untested_optimizer=True,
        zero_force_ds_cpu_optimizer=False,
        zero_optimization=dict(overlap_comm=True, stage=2)),
    exclude_frozen_parameters=True,
    gradient_accumulation_steps=2,
    gradient_clipping=1,
    sequence_parallel_size=1,
    train_micro_batch_size_per_gpu=4,
    type='xtuner.engine.DeepSpeedStrategy')
tokenizer = dict(
    pretrained_model_name_or_path=
    '/root/share/new_models/OpenGVLab/InternVL2-2B',
    trust_remote_code=True,
    type='transformers.AutoTokenizer.from_pretrained')
train_cfg = dict(max_epochs=10, type='xtuner.engine.runner.TrainLoop')
train_dataloader = dict(
    batch_size=4,
    collate_fn=dict(type='xtuner.dataset.collate_fns.default_collate_fn'),
    dataset=dict(
        data_paths='/root/share/datasets/FoodieQA/sivqa_llava.json',
        image_folders='/root/share/datasets/FoodieQA/',
        max_length=8192,
        model_path='/root/share/new_models/OpenGVLab/InternVL2-2B',
        template='xtuner.utils.PROMPT_TEMPLATE.internlm2_chat',
        type='xtuner.dataset.InternVL_V1_5_Dataset'),
    num_workers=4,
    sampler=dict(
        length_property='modality_length',
        per_device_batch_size=8,
        type='xtuner.dataset.samplers.LengthGroupedSampler'))
visualizer = None
warmup_ratio = 0.03
weight_decay = 0.05
work_dir = './work_dirs/internvl_v2_internlm2_2b_lora_finetune_food'

2025/02/10 23:59:45 - mmengine - WARNING - Failed to search registry with scope "mmengine" in the "builder" registry tree. As a workaround, the current "builder" registry in "xtuner" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmengine" is a correct scope, or whether the registry is initialized.
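The `param_scheduler` entries above specify a linear warmup over the first 0.3 epochs (from `lr * start_factor` up to `lr = 3e-05`) followed by cosine annealing down to `eta_min = 0.0` at epoch 10. A minimal sketch of the resulting per-iteration learning rate, assuming 64 iterations per epoch (an assumption taken from this run, which reports 640 total iterations over 10 epochs; it is not read from the config itself):

```python
import math

# Values from the config dump above.
LR = 3e-05
START_FACTOR = 1e-05    # LinearLR start_factor
WARMUP_END_EPOCH = 0.3  # LinearLR end (in epochs)
MAX_EPOCHS = 10
ITERS_PER_EPOCH = 64    # assumed for this run (256 samples / batch_size 4)

def lr_at(it: int) -> float:
    """Approximate LR after LinearLR warmup then CosineAnnealingLR decay."""
    warmup_iters = WARMUP_END_EPOCH * ITERS_PER_EPOCH
    total_iters = MAX_EPOCHS * ITERS_PER_EPOCH
    if it < warmup_iters:
        # Linear ramp from LR * START_FACTOR up to LR.
        factor = START_FACTOR + (1 - START_FACTOR) * it / warmup_iters
        return LR * factor
    # Cosine decay from LR down to eta_min = 0.
    progress = (it - warmup_iters) / (total_iters - warmup_iters)
    return 0.5 * LR * (1 + math.cos(math.pi * progress))
```

This roughly reproduces the logged values (about 3.0e-05 at iteration 20, decaying to nearly zero by iteration 640); mmengine's exact step placement can differ by an iteration or so, so treat it as a sanity check rather than a bit-exact reimplementation.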
2025/02/10 23:59:45 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DatasetInfoHook
(VERY_LOW    ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DistSamplerSeedHook
--------------------
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
--------------------
after_train_epoch:
(NORMAL      ) IterTimerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
--------------------
before_val:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) DatasetInfoHook
--------------------
before_val_epoch:
(NORMAL      ) IterTimerHook
--------------------
before_val_iter:
(NORMAL      ) IterTimerHook
--------------------
after_val_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
--------------------
after_val:
(VERY_HIGH   ) RuntimeInfoHook
--------------------
after_train:
(VERY_HIGH   ) RuntimeInfoHook
(VERY_LOW    ) CheckpointHook
--------------------
before_test:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) DatasetInfoHook
--------------------
before_test_epoch:
(NORMAL      ) IterTimerHook
--------------------
before_test_iter:
(NORMAL      ) IterTimerHook
--------------------
after_test_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test:
(VERY_HIGH   ) RuntimeInfoHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
2025/02/10 23:59:46 - mmengine - INFO - Starting to loading data and calc length
2025/02/10 23:59:46 - mmengine - INFO - =======Starting to process /root/share/datasets/FoodieQA/sivqa_llava.json =======
2025/02/10 23:59:46 - mmengine - INFO - =======total 256 samples of /root/share/datasets/FoodieQA/sivqa_llava.json=======
2025/02/10 23:59:46 - mmengine - INFO - end loading data and calc length
2025/02/10 23:59:46 - mmengine - INFO - =======total 256 samples=======
2025/02/10 23:59:46 - mmengine - INFO - LengthGroupedSampler is used.
2025/02/10 23:59:46 - mmengine - INFO - LengthGroupedSampler construction is complete, and the selected attribute is modality_length
2025/02/10 23:59:46 - mmengine - WARNING - Dataset InternVL_V1_5_Dataset has no metainfo. ``dataset_meta`` in visualizer will be None.
2025/02/10 23:59:46 - mmengine - INFO - Start to load InternVL_V1_5 model.
2025/02/11 00:00:09 - mmengine - INFO - InternVL_V1_5(
  (data_preprocessor): BaseDataPreprocessor()
  (model): InternVLChatModel(
    (vision_model): InternVisionModel(
      (embeddings): InternVisionEmbeddings(
        (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14))
      )
      (encoder): InternVisionEncoder(
        (layers): ModuleList(
          (0-23): 24 x InternVisionEncoderLayer(
            (attn): InternAttention(
              (qkv): Linear(in_features=1024, out_features=3072, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=1024, out_features=1024, bias=True)
            )
            (mlp): InternMLP(
              (act): GELUActivation()
              (fc1): Linear(in_features=1024, out_features=4096, bias=True)
              (fc2): Linear(in_features=4096, out_features=1024, bias=True)
            )
            (norm1): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
            (norm2): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
            (drop_path1): Identity()
            (drop_path2): Identity()
          )
        )
      )
    )
    (language_model): PeftModelForCausalLM(
      (base_model): LoraModel(
        (model): InternLM2ForCausalLM(
          (model): InternLM2Model(
            (tok_embeddings): Embedding(92553, 2048, padding_idx=2)
            (layers): ModuleList(
              (0-23): 24 x InternLM2DecoderLayer(
                (attention): InternLM2Attention(
                  (wqkv): lora.Linear(
                    (base_layer): Linear(in_features=2048, out_features=4096, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=2048, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=4096, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (wo): lora.Linear(
                    (base_layer): Linear(in_features=2048, out_features=2048, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=2048, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=2048, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (rotary_emb): InternLM2DynamicNTKScalingRotaryEmbedding()
                )
                (feed_forward): InternLM2MLP(
                  (w1): lora.Linear(
                    (base_layer): Linear(in_features=2048, out_features=8192, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=2048, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=8192, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (w3): lora.Linear(
                    (base_layer): Linear(in_features=2048, out_features=8192, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=2048, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=8192, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (w2): lora.Linear(
                    (base_layer): Linear(in_features=8192, out_features=2048, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=8192, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=2048, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (act_fn): SiLU()
                )
                (attention_norm): InternLM2RMSNorm()
                (ffn_norm): InternLM2RMSNorm()
              )
            )
            (norm): InternLM2RMSNorm()
          )
          (output): lora.Linear(
            (base_layer): Linear(in_features=2048, out_features=92553, bias=False)
            (lora_dropout): ModuleDict(
              (default): Dropout(p=0.05, inplace=False)
            )
            (lora_A): ModuleDict(
              (default): Linear(in_features=2048, out_features=128, bias=False)
            )
            (lora_B): ModuleDict(
              (default): Linear(in_features=128, out_features=92553, bias=False)
            )
            (lora_embedding_A): ParameterDict()
            (lora_embedding_B): ParameterDict()
          )
        )
      )
    )
    (mlp1): Sequential(
      (0): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
      (1): Linear(in_features=4096, out_features=2048, bias=True)
      (2): GELU(approximate='none')
      (3): Linear(in_features=2048, out_features=2048, bias=True)
    )
  )
)
2025/02/11 00:00:09 - mmengine - INFO - InternVL_V1_5 construction is complete
2025/02/11 00:00:27 - mmengine - INFO - Num train samples 256
2025/02/11 00:00:27 - mmengine - INFO - train example:
2025/02/11 00:00:29 - mmengine - INFO - <|im_start|>system
You are an AI assistant whose name is InternLM (书生·浦语).<|im_end|><|im_start|>user
图片中的食物通常属于哪个菜系?<|im_end|><|im_start|>assistant
新疆菜,图中的菜是烤羊肉串<|im_end|>
2025/02/11 00:00:29 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
2025/02/11 00:00:29 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
2025/02/11 00:00:29 - mmengine - INFO - Checkpoints will be saved to /root/work_dirs/internvl_v2_internlm2_2b_lora_finetune_food.
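The counts in this run follow directly from the numbers logged above: 256 samples with a micro-batch of 4 give 64 iterations per epoch, so 10 epochs yield 640 iterations, and a checkpoint interval of 64 lands a checkpoint on every epoch boundary. A quick sanity-check sketch:

```python
# Values taken from the config and dataset-loading messages above.
num_samples = 256
micro_batch = 4      # train_micro_batch_size_per_gpu
grad_accum = 2       # accumulative_counts / gradient_accumulation_steps
max_epochs = 10
ckpt_interval = 64   # CheckpointHook interval (by_epoch=False)

iters_per_epoch = num_samples // micro_batch   # 64 dataloader iterations
total_iters = iters_per_epoch * max_epochs     # 640 logged iterations
optimizer_steps = total_iters // grad_accum    # 320 actual weight updates
ckpt_iters = list(range(ckpt_interval, total_iters + 1, ckpt_interval))

print(total_iters, optimizer_steps, ckpt_iters[:3], ckpt_iters[-1])
# → 640 320 [64, 128, 192] 640
```

The checkpoint iterations computed here (64, 128, ..., 640) match every "Saving checkpoint" line in the training log below.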
2025/02/11 00:02:01 - mmengine - INFO - Iter(train) [ 10/640]  lr: 1.5000e-05  eta: 1:36:38  time: 9.2038  data_time: 0.0222  memory: 25174  loss: 5.1763
2025/02/11 00:03:25 - mmengine - INFO - Iter(train) [ 20/640]  lr: 3.0000e-05  eta: 1:31:08  time: 8.4372  data_time: 0.0294  memory: 25135  loss: 2.7321
2025/02/11 00:04:51 - mmengine - INFO - Iter(train) [ 30/640]  lr: 2.9981e-05  eta: 1:28:47  time: 8.5598  data_time: 0.0263  memory: 25139  loss: 1.4861
2025/02/11 00:06:20 - mmengine - INFO - Iter(train) [ 40/640]  lr: 2.9923e-05  eta: 1:27:41  time: 8.8740  data_time: 0.0256  memory: 25144  loss: 1.2963
2025/02/11 00:07:46 - mmengine - INFO - Iter(train) [ 50/640]  lr: 2.9828e-05  eta: 1:26:00  time: 8.6546  data_time: 0.0259  memory: 25105  loss: 0.8960
2025/02/11 00:09:12 - mmengine - INFO - Iter(train) [ 60/640]  lr: 2.9694e-05  eta: 1:24:17  time: 8.5909  data_time: 0.0246  memory: 25144  loss: 0.9319
2025/02/11 00:09:47 - mmengine - INFO - Exp name: internvl_v2_internlm2_2b_lora_finetune_food_20250210_235945
2025/02/11 00:09:47 - mmengine - INFO - Saving checkpoint at 64 iterations
2025/02/11 00:09:49 - mmengine - WARNING - Reach the end of the dataloader, it will be restarted and continue to iterate. It is recommended to use `mmengine.dataset.InfiniteSampler` to enable the dataloader to iterate infinitely.
2025/02/11 00:10:43 - mmengine - INFO - Iter(train) [ 70/640]  lr: 2.9523e-05  eta: 1:23:21  time: 9.0989  data_time: 0.5891  memory: 25149  loss: 0.8135
2025/02/11 00:12:14 - mmengine - INFO - Iter(train) [ 80/640]  lr: 2.9314e-05  eta: 1:22:14  time: 9.0675  data_time: 0.0264  memory: 25135  loss: 0.5199
2025/02/11 00:13:44 - mmengine - INFO - Iter(train) [ 90/640]  lr: 2.9069e-05  eta: 1:20:59  time: 9.0273  data_time: 0.0263  memory: 25175  loss: 0.5036
2025/02/11 00:15:12 - mmengine - INFO - Iter(train) [100/640]  lr: 2.8788e-05  eta: 1:19:27  time: 8.7764  data_time: 0.0258  memory: 25125  loss: 0.4994
2025/02/11 00:16:37 - mmengine - INFO - Iter(train) [110/640]  lr: 2.8472e-05  eta: 1:17:43  time: 8.5085  data_time: 0.0257  memory: 25130  loss: 0.3883
2025/02/11 00:18:01 - mmengine - INFO - Iter(train) [120/640]  lr: 2.8121e-05  eta: 1:15:57  time: 8.3743  data_time: 0.0282  memory: 25158  loss: 0.2551
2025/02/11 00:19:10 - mmengine - INFO - Saving checkpoint at 128 iterations
2025/02/11 00:19:34 - mmengine - INFO - Iter(train) [130/640]  lr: 2.7737e-05  eta: 1:14:50  time: 9.2995  data_time: 0.6131  memory: 25135  loss: 0.2501
2025/02/11 00:21:04 - mmengine - INFO - Iter(train) [140/640]  lr: 2.7320e-05  eta: 1:13:29  time: 9.0039  data_time: 0.0258  memory: 25135  loss: 0.1940
2025/02/11 00:22:29 - mmengine - INFO - Iter(train) [150/640]  lr: 2.6871e-05  eta: 1:11:51  time: 8.4930  data_time: 0.0258  memory: 25175  loss: 0.1228
2025/02/11 00:23:53 - mmengine - INFO - Iter(train) [160/640]  lr: 2.6393e-05  eta: 1:10:12  time: 8.4453  data_time: 0.0259  memory: 25144  loss: 0.1453
2025/02/11 00:25:19 - mmengine - INFO - Iter(train) [170/640]  lr: 2.5885e-05  eta: 1:08:38  time: 8.5415  data_time: 0.0253  memory: 25149  loss: 0.1565
2025/02/11 00:26:45 - mmengine - INFO - Iter(train) [180/640]  lr: 2.5349e-05  eta: 1:07:07  time: 8.6362  data_time: 0.0249  memory: 25130  loss: 0.2205
2025/02/11 00:28:11 - mmengine - INFO - Iter(train) [190/640]  lr: 2.4786e-05  eta: 1:05:36  time: 8.6213  data_time: 0.0770  memory: 25139  loss: 0.2491
2025/02/11 00:28:28 - mmengine - INFO - Saving checkpoint at 192 iterations
2025/02/11 00:29:46 - mmengine - INFO - Iter(train) [200/640]  lr: 2.4199e-05  eta: 1:04:24  time: 9.4409  data_time: 0.6209  memory: 25139  loss: 0.0875
2025/02/11 00:31:20 - mmengine - INFO - Iter(train) [210/640]  lr: 2.3588e-05  eta: 1:03:10  time: 9.4660  data_time: 0.0250  memory: 25097  loss: 0.0584
2025/02/11 00:32:51 - mmengine - INFO - Iter(train) [220/640]  lr: 2.2955e-05  eta: 1:01:47  time: 9.0711  data_time: 0.0241  memory: 25144  loss: 0.0547
2025/02/11 00:34:22 - mmengine - INFO - Iter(train) [230/640]  lr: 2.2302e-05  eta: 1:00:24  time: 9.1179  data_time: 0.0254  memory: 25158  loss: 0.0786
2025/02/11 00:35:54 - mmengine - INFO - Iter(train) [240/640]  lr: 2.1630e-05  eta: 0:59:02  time: 9.2350  data_time: 0.0242  memory: 25139  loss: 0.0948
2025/02/11 00:37:28 - mmengine - INFO - Iter(train) [250/640]  lr: 2.0941e-05  eta: 0:57:41  time: 9.3748  data_time: 0.0247  memory: 25153  loss: 0.0300
2025/02/11 00:38:23 - mmengine - INFO - Saving checkpoint at 256 iterations
2025/02/11 00:39:06 - mmengine - INFO - Iter(train) [260/640]  lr: 2.0237e-05  eta: 0:56:25  time: 9.7503  data_time: 0.6292  memory: 25139  loss: 0.0201
2025/02/11 00:40:37 - mmengine - INFO - Iter(train) [270/640]  lr: 1.9520e-05  eta: 0:55:00  time: 9.1539  data_time: 0.0254  memory: 25130  loss: 0.0178
2025/02/11 00:42:03 - mmengine - INFO - Iter(train) [280/640]  lr: 1.8791e-05  eta: 0:53:25  time: 8.5299  data_time: 0.0250  memory: 25139  loss: 0.0170
2025/02/11 00:43:30 - mmengine - INFO - Iter(train) [290/640]  lr: 1.8052e-05  eta: 0:51:54  time: 8.7283  data_time: 0.0241  memory: 25142  loss: 0.0420
2025/02/11 00:44:56 - mmengine - INFO - Iter(train) [300/640]  lr: 1.7305e-05  eta: 0:50:22  time: 8.5869  data_time: 0.0240  memory: 25153  loss: 0.0294
2025/02/11 00:46:20 - mmengine - INFO - Iter(train) [310/640]  lr: 1.6553e-05  eta: 0:48:48  time: 8.4015  data_time: 0.0238  memory: 25139  loss: 0.0242
2025/02/11 00:47:48 - mmengine - INFO - Iter(train) [320/640]  lr: 1.5796e-05  eta: 0:47:18  time: 8.8187  data_time: 0.0244  memory: 25144  loss: 0.0383
2025/02/11 00:47:48 - mmengine - INFO - Saving checkpoint at 320 iterations
2025/02/11 00:49:21 - mmengine - INFO - Iter(train) [330/640]  lr: 1.5038e-05  eta: 0:45:54  time: 9.3398  data_time: 0.5641  memory: 25149  loss: 0.0039
2025/02/11 00:50:50 - mmengine - INFO - Iter(train) [340/640]  lr: 1.4279e-05  eta: 0:44:25  time: 8.8535  data_time: 0.0248  memory: 25135  loss: 0.0039
2025/02/11 00:52:19 - mmengine - INFO - Iter(train) [350/640]  lr: 1.3523e-05  eta: 0:42:57  time: 8.9362  data_time: 0.0253  memory: 25139  loss: 0.0233
2025/02/11 00:53:51 - mmengine - INFO - Iter(train) [360/640]  lr: 1.2770e-05  eta: 0:41:30  time: 9.1619  data_time: 0.0240  memory: 25175  loss: 0.0132
2025/02/11 00:55:21 - mmengine - INFO - Iter(train) [370/640]  lr: 1.2022e-05  eta: 0:40:02  time: 9.0181  data_time: 0.0238  memory: 25130  loss: 0.0070
2025/02/11 00:56:51 - mmengine - INFO - Iter(train) [380/640]  lr: 1.1283e-05  eta: 0:38:34  time: 9.0376  data_time: 0.0231  memory: 25149  loss: 0.0024
2025/02/11 00:57:26 - mmengine - INFO - Saving checkpoint at 384 iterations
2025/02/11 00:58:26 - mmengine - INFO - Iter(train) [390/640]  lr: 1.0553e-05  eta: 0:37:08  time: 9.4561  data_time: 0.6165  memory: 25125  loss: 0.0008
2025/02/11 00:59:57 - mmengine - INFO - Iter(train) [400/640]  lr: 9.8341e-06  eta: 0:35:40  time: 9.1085  data_time: 0.0233  memory: 25139  loss: 0.0024
2025/02/11 01:01:27 - mmengine - INFO - Iter(train) [410/640]  lr: 9.1286e-06  eta: 0:34:12  time: 9.0287  data_time: 0.0242  memory: 25144  loss: 0.0006
2025/02/11 01:02:59 - mmengine - INFO - Iter(train) [420/640]  lr: 8.4381e-06  eta: 0:32:44  time: 9.1984  data_time: 0.0247  memory: 25149  loss: 0.0009
2025/02/11 01:04:33 - mmengine - INFO - Iter(train) [430/640]  lr: 7.7644e-06  eta: 0:31:17  time: 9.4121  data_time: 0.0240  memory: 25135  loss: 0.0006
2025/02/11 01:05:59 - mmengine - INFO - Iter(train) [440/640]  lr: 7.1092e-06  eta: 0:29:46  time: 8.5776  data_time: 0.0230  memory: 25158  loss: 0.0005
2025/02/11 01:07:04 - mmengine - INFO - Saving checkpoint at 448 iterations
2025/02/11 01:07:25 - mmengine - INFO - Iter(train) [450/640]  lr: 6.4742e-06  eta: 0:28:15  time: 8.5610  data_time: 0.6080  memory: 25130  loss: 0.0403
2025/02/11 01:08:42 - mmengine - INFO - Iter(train) [460/640]  lr: 5.8611e-06  eta: 0:26:41  time: 7.7108  data_time: 0.0229  memory: 25144  loss: 0.0004
2025/02/11 01:09:59 - mmengine - INFO - Iter(train) [470/640]  lr: 5.2713e-06  eta: 0:25:08  time: 7.7024  data_time: 0.0239  memory: 25130  loss: 0.0004
2025/02/11 01:11:15 - mmengine - INFO - Iter(train) [480/640]  lr: 4.7064e-06  eta: 0:23:35  time: 7.5613  data_time: 0.0228  memory: 25153  loss: 0.0054
2025/02/11 01:12:30 - mmengine - INFO - Iter(train) [490/640]  lr: 4.1678e-06  eta: 0:22:02  time: 7.5233  data_time: 0.0227  memory: 25158  loss: 0.0003
2025/02/11 01:13:46 - mmengine - INFO - Iter(train) [500/640]  lr: 3.6570e-06  eta: 0:20:31  time: 7.5981  data_time: 0.0235  memory: 25139  loss: 0.0003
2025/02/11 01:15:02 - mmengine - INFO - Iter(train) [510/640]  lr: 3.1752e-06  eta: 0:19:00  time: 7.6055  data_time: 0.0244  memory: 25135  loss: 0.0004
2025/02/11 01:15:17 - mmengine - INFO - Saving checkpoint at 512 iterations
2025/02/11 01:16:23 - mmengine - INFO - Iter(train) [520/640]  lr: 2.7236e-06  eta: 0:17:30  time: 8.1235  data_time: 0.5964  memory: 25142  loss: 0.0003
2025/02/11 01:17:38 - mmengine - INFO - Iter(train) [530/640]  lr: 2.3035e-06  eta: 0:16:00  time: 7.4863  data_time: 0.0244  memory: 25130  loss: 0.0003
2025/02/11 01:18:53 - mmengine - INFO - Iter(train) [540/640]  lr: 1.9158e-06  eta: 0:14:31  time: 7.5562  data_time: 0.0234  memory: 25139  loss: 0.0004
2025/02/11 01:20:09 - mmengine - INFO - Iter(train) [550/640]  lr: 1.5616e-06  eta: 0:13:02  time: 7.5522  data_time: 0.0239  memory: 25144  loss: 0.0003
2025/02/11 01:21:24 - mmengine - INFO - Iter(train) [560/640]  lr: 1.2418e-06  eta: 0:11:33  time: 7.4767  data_time: 0.0234  memory: 25135  loss: 0.0003
2025/02/11 01:22:40 - mmengine - INFO - Iter(train) [570/640]  lr: 9.5724e-07  eta: 0:10:05  time: 7.5872  data_time: 0.0237  memory: 25158  loss: 0.0003
2025/02/11 01:23:25 - mmengine - INFO - Saving checkpoint at 576 iterations
2025/02/11 01:24:01 - mmengine - INFO - Iter(train) [580/640]  lr: 7.0858e-07  eta: 0:08:38  time: 8.1358  data_time: 0.6679  memory: 25153  loss: 0.0004
2025/02/11 01:25:15 - mmengine - INFO - Iter(train) [590/640]  lr: 4.9649e-07  eta: 0:07:11  time: 7.4456  data_time: 0.0249  memory: 25102  loss: 0.0003
2025/02/11 01:26:30 - mmengine - INFO - Iter(train) [600/640]  lr: 3.2151e-07  eta: 0:05:44  time: 7.4134  data_time: 0.0244  memory: 25139  loss: 0.0002
2025/02/11 01:27:45 - mmengine - INFO - Iter(train) [610/640]  lr: 1.8408e-07  eta: 0:04:17  time: 7.5139  data_time: 0.0505  memory: 25158  loss: 0.0003
2025/02/11 01:29:00 - mmengine - INFO - Iter(train) [620/640]  lr: 8.4568e-08  eta: 0:02:51  time: 7.5494  data_time: 0.0232  memory: 25139  loss: 0.0003
2025/02/11 01:30:16 - mmengine - INFO - Iter(train) [630/640]  lr: 2.3219e-08  eta: 0:01:25  time: 7.5852  data_time: 0.0226  memory: 25139  loss: 0.0004
2025/02/11 01:31:31 - mmengine - INFO - Iter(train) [640/640]  lr: 1.9195e-10  eta: 0:00:00  time: 7.5125  data_time: 0.0233  memory: 25153  loss: 0.0003
2025/02/11 01:31:31 - mmengine - INFO - Saving checkpoint at 640 iterations
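To turn a log like this into a loss curve, the `Iter(train)` lines can be extracted with a small regex. A minimal sketch, written against the line format emitted by mmengine's LoggerHook above (the regex and helper names are my own, not part of xtuner):

```python
import re

# Matches the training-iteration lines above, e.g.
# "... - mmengine - INFO - Iter(train) [640/640]  lr: 1.9195e-10  eta: 0:00:00  ...  loss: 0.0003"
ITER_RE = re.compile(
    r"Iter\(train\)\s+\[\s*(\d+)/(\d+)\]\s+lr:\s+(\S+)\s+.*?loss:\s+([\d.]+)"
)

def parse_train_iters(log_text: str):
    """Yield (iteration, lr, loss) tuples from mmengine training-log text."""
    for m in ITER_RE.finditer(log_text):
        yield int(m.group(1)), float(m.group(3)), float(m.group(4))

sample = ("2025/02/11 01:31:31 - mmengine - INFO - Iter(train) [640/640] "
          "lr: 1.9195e-10 eta: 0:00:00 time: 7.5125 data_time: 0.0233 "
          "memory: 25153 loss: 0.0003")
print(list(parse_train_iters(sample)))  # [(640, 1.9195e-10, 0.0003)]
```

Feeding the whole log file through `parse_train_iters` gives the series plotted as a loss curve: here it falls from 5.1763 at iteration 10 to ~0.0003 by the end, which for a 256-sample dataset trained for 10 epochs is a strong hint of memorization of the training set.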