Training in progress, step 1000
LLaMA-Factory/wandb/run-20250305_233246-9ct1o6yk/files/output.log
CHANGED
@@ -109,3 +109,115 @@
 [INFO|tokenization_utils_base.py:2491] 2025-03-06 02:08:13,215 >> tokenizer config file saved in /kaggle/working/tokenizer_config.json
 [INFO|tokenization_utils_base.py:2500] 2025-03-06 02:08:13,215 >> Special tokens file saved in /kaggle/working/special_tokens_map.json
 It seems you are trying to upload a large folder at once. This might take some time and then fail if the folder is too large. For such cases, it is recommended to upload in smaller batches or to use `HfApi().upload_large_folder(...)`/`huggingface-cli upload-large-folder` instead. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#upload-a-large-folder.
+14%|█████▏ | 600/4197 [2:50:07<9:49:22, 9.83s/it][INFO|trainer.py:4226] 2025-03-06 02:22:54,558 >>
+{'loss': 0.3636, 'grad_norm': 1.282714605331421, 'learning_rate': 9.985996777749747e-05, 'epoch': 0.36}
+{'loss': 0.4467, 'grad_norm': 2.0360989570617676, 'learning_rate': 9.982713965133122e-05, 'epoch': 0.37}
+{'loss': 0.3875, 'grad_norm': 1.7432626485824585, 'learning_rate': 9.979086430335417e-05, 'epoch': 0.38}
+{'loss': 0.3646, 'grad_norm': 1.6053438186645508, 'learning_rate': 9.975114424322609e-05, 'epoch': 0.39}
+{'loss': 0.353, 'grad_norm': 1.2323070764541626, 'learning_rate': 9.970798221892452e-05, 'epoch': 0.39}
+{'loss': 0.331, 'grad_norm': 1.16932213306427, 'learning_rate': 9.966138121655445e-05, 'epoch': 0.4}
+{'loss': 0.3132, 'grad_norm': 1.8134998083114624, 'learning_rate': 9.961134446014184e-05, 'epoch': 0.41}
+{'loss': 0.3017, 'grad_norm': 1.4292124509811401, 'learning_rate': 9.955787541141055e-05, 'epoch': 0.41}
+{'loss': 0.3596, 'grad_norm': 1.4605034589767456, 'learning_rate': 9.950097776954284e-05, 'epoch': 0.42}
+{'loss': 0.3399, 'grad_norm': 1.2365972995758057, 'learning_rate': 9.944065547092345e-05, 'epoch': 0.43}
+***** Running Evaluation *****
+[INFO|trainer.py:4228] 2025-03-06 02:22:54,558 >> Num examples = 1400
+[INFO|trainer.py:4231] 2025-03-06 02:22:54,558 >> Batch size = 1
+17%|██████ | 700/4197 [3:20:56<7:41:30, 7.92s/it][INFO|trainer.py:4226] 2025-03-06 02:53:44,086 >>
+***** Running Evaluation *****
+{'eval_news_finetune_val_loss': 0.36549311876296997, 'eval_news_finetune_val_runtime': 1002.8044, 'eval_news_finetune_val_samples_per_second': 1.396, 'eval_news_finetune_val_steps_per_second': 1.396, 'epoch': 0.43}
+{'loss': 0.3747, 'grad_norm': 1.0590678453445435, 'learning_rate': 9.937691268886725e-05, 'epoch': 0.44}
+{'loss': 0.2868, 'grad_norm': 0.9111473560333252, 'learning_rate': 9.930975383333056e-05, 'epoch': 0.44}
+{'loss': 0.3289, 'grad_norm': 2.0456018447875977, 'learning_rate': 9.923918355060599e-05, 'epoch': 0.45}
+{'loss': 0.3664, 'grad_norm': 1.5998501777648926, 'learning_rate': 9.916520672300107e-05, 'epoch': 0.46}
+{'loss': 0.3432, 'grad_norm': 1.0773181915283203, 'learning_rate': 9.908782846850037e-05, 'epoch': 0.46}
+{'loss': 0.3242, 'grad_norm': 1.244042158126831, 'learning_rate': 9.900705414041154e-05, 'epoch': 0.47}
+{'loss': 0.317, 'grad_norm': 1.8120310306549072, 'learning_rate': 9.892288932699484e-05, 'epoch': 0.48}
+{'loss': 0.322, 'grad_norm': 0.7863224148750305, 'learning_rate': 9.883533985107663e-05, 'epoch': 0.49}
+{'loss': 0.343, 'grad_norm': 1.223832130432129, 'learning_rate': 9.874441176964642e-05, 'epoch': 0.49}
+{'loss': 0.3278, 'grad_norm': 0.9870743155479431, 'learning_rate': 9.865011137343787e-05, 'epoch': 0.5}
+[INFO|trainer.py:4228] 2025-03-06 02:53:44,086 >> Num examples = 1400
+[INFO|trainer.py:4231] 2025-03-06 02:53:44,087 >> Batch size = 1
+19%|██████▊ | 800/4197 [3:51:36<8:06:03, 8.59s/it][INFO|trainer.py:4226] 2025-03-06 03:24:24,072 >>
+***** Running Evaluation *****
+{'eval_news_finetune_val_loss': 0.35386842489242554, 'eval_news_finetune_val_runtime': 1003.4109, 'eval_news_finetune_val_samples_per_second': 1.395, 'eval_news_finetune_val_steps_per_second': 1.395, 'epoch': 0.5}
+{'loss': 0.3902, 'grad_norm': 1.3699963092803955, 'learning_rate': 9.85524451864936e-05, 'epoch': 0.51}
+{'loss': 0.369, 'grad_norm': 1.7188071012496948, 'learning_rate': 9.845141996571384e-05, 'epoch': 0.51}
+{'loss': 0.3174, 'grad_norm': 0.4889034628868103, 'learning_rate': 9.834704270038888e-05, 'epoch': 0.52}
+{'loss': 0.3501, 'grad_norm': 0.8782143592834473, 'learning_rate': 9.823932061171561e-05, 'epoch': 0.53}
+{'loss': 0.3292, 'grad_norm': 2.4089126586914062, 'learning_rate': 9.812826115229789e-05, 'epoch': 0.54}
+{'loss': 0.459, 'grad_norm': 1.6382787227630615, 'learning_rate': 9.801387200563096e-05, 'epoch': 0.54}
+{'loss': 0.3409, 'grad_norm': 1.443916916847229, 'learning_rate': 9.789616108556992e-05, 'epoch': 0.55}
+{'loss': 0.281, 'grad_norm': 1.632278323173523, 'learning_rate': 9.77751365357821e-05, 'epoch': 0.56}
+{'loss': 0.3511, 'grad_norm': 2.1452109813690186, 'learning_rate': 9.765080672918374e-05, 'epoch': 0.56}
+{'loss': 0.2298, 'grad_norm': 1.2721842527389526, 'learning_rate': 9.752318026736078e-05, 'epoch': 0.57}
+[INFO|trainer.py:4228] 2025-03-06 03:24:24,072 >> Num examples = 1400
+[INFO|trainer.py:4231] 2025-03-06 03:24:24,072 >> Batch size = 1
+21%|███████▋ | 900/4197 [4:22:16<7:58:49, 8.71s/it][INFO|trainer.py:4226] 2025-03-06 03:55:03,850 >>
+***** Running Evaluation *****
+{'eval_news_finetune_val_loss': 0.34554028511047363, 'eval_news_finetune_val_runtime': 1003.3342, 'eval_news_finetune_val_samples_per_second': 1.395, 'eval_news_finetune_val_steps_per_second': 1.395, 'epoch': 0.57}
+{'loss': 0.3214, 'grad_norm': 2.5264174938201904, 'learning_rate': 9.739226597997359e-05, 'epoch': 0.58}
+{'loss': 0.2697, 'grad_norm': 1.4553183317184448, 'learning_rate': 9.725807292414629e-05, 'epoch': 0.59}
+{'loss': 0.3315, 'grad_norm': 2.2111873626708984, 'learning_rate': 9.712061038384002e-05, 'epoch': 0.59}
+{'loss': 0.4036, 'grad_norm': 1.4308302402496338, 'learning_rate': 9.697988786921071e-05, 'epoch': 0.6}
+{'loss': 0.2946, 'grad_norm': 1.8136054277420044, 'learning_rate': 9.683591511595107e-05, 'epoch': 0.61}
+{'loss': 0.2259, 'grad_norm': 1.8586084842681885, 'learning_rate': 9.668870208461713e-05, 'epoch': 0.61}
+{'loss': 0.4, 'grad_norm': 1.1640444993972778, 'learning_rate': 9.653825895993908e-05, 'epoch': 0.62}
+{'loss': 0.2804, 'grad_norm': 1.386013388633728, 'learning_rate': 9.63845961501166e-05, 'epoch': 0.63}
+{'loss': 0.3593, 'grad_norm': 2.1413650512695312, 'learning_rate': 9.622772428609887e-05, 'epoch': 0.64}
+{'loss': 0.3058, 'grad_norm': 1.5462217330932617, 'learning_rate': 9.606765422084908e-05, 'epoch': 0.64}
+[INFO|trainer.py:4228] 2025-03-06 03:55:03,850 >> Num examples = 1400
+[INFO|trainer.py:4231] 2025-03-06 03:55:03,850 >> Batch size = 1
+24%|████████▎ | 1000/4197 [4:52:59<7:27:15, 8.39s/it][INFO|trainer.py:4226] 2025-03-06 04:25:46,860 >>
+***** Running Evaluation *****
+{'eval_news_finetune_val_loss': 0.3292103707790375, 'eval_news_finetune_val_runtime': 1003.4558, 'eval_news_finetune_val_samples_per_second': 1.395, 'eval_news_finetune_val_steps_per_second': 1.395, 'epoch': 0.64}
+{'loss': 0.3318, 'grad_norm': 1.0373942852020264, 'learning_rate': 9.590439702859351e-05, 'epoch': 0.65}
+{'loss': 0.3328, 'grad_norm': 1.2724213600158691, 'learning_rate': 9.573796400405544e-05, 'epoch': 0.66}
+{'loss': 0.2673, 'grad_norm': 0.8528966903686523, 'learning_rate': 9.55683666616737e-05, 'epoch': 0.66}
+{'loss': 0.3538, 'grad_norm': 1.65499746799469, 'learning_rate': 9.539561673480612e-05, 'epoch': 0.67}
+{'loss': 0.3228, 'grad_norm': 2.341379404067993, 'learning_rate': 9.521972617491767e-05, 'epoch': 0.68}
+{'loss': 0.3974, 'grad_norm': 1.4938244819641113, 'learning_rate': 9.504070715075372e-05, 'epoch': 0.69}
+{'loss': 0.3236, 'grad_norm': 1.0390361547470093, 'learning_rate': 9.485857204749811e-05, 'epoch': 0.69}
+{'loss': 0.3027, 'grad_norm': 3.8845393657684326, 'learning_rate': 9.467333346591632e-05, 'epoch': 0.7}
+{'loss': 0.3005, 'grad_norm': 1.3295674324035645, 'learning_rate': 9.448500422148364e-05, 'epoch': 0.71}
+{'loss': 0.294, 'grad_norm': 1.0146369934082031, 'learning_rate': 9.429359734349863e-05, 'epoch': 0.71}
+[INFO|trainer.py:4228] 2025-03-06 04:25:46,860 >> Num examples = 1400
+[INFO|trainer.py:4231] 2025-03-06 04:25:46,860 >> Batch size = 1
+24%|████████▎ | 1000/4197 [5:09:42<7:27:15, 8.39s/it][INFO|trainer.py:3910] 2025-03-06 04:42:30,113 >> Saving model checkpoint to /kaggle/working/checkpoint-1000
+[INFO|configuration_utils.py:696] 2025-03-06 04:42:30,590 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/config.json
+{'eval_news_finetune_val_loss': 0.3208242654800415, 'eval_news_finetune_val_runtime': 1003.2491, 'eval_news_finetune_val_samples_per_second': 1.395, 'eval_news_finetune_val_steps_per_second': 1.395, 'epoch': 0.71}
+[INFO|configuration_utils.py:768] 2025-03-06 04:42:30,591 >> Model config Qwen2Config {
+  "architectures": [
+    "Qwen2ForCausalLM"
+  ],
+  "attention_dropout": 0.0,
+  "bos_token_id": 151643,
+  "eos_token_id": 151645,
+  "hidden_act": "silu",
+  "hidden_size": 1536,
+  "initializer_range": 0.02,
+  "intermediate_size": 8960,
+  "max_position_embeddings": 32768,
+  "max_window_layers": 21,
+  "model_type": "qwen2",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 28,
+  "num_key_value_heads": 2,
+  "rms_norm_eps": 1e-06,
+  "rope_scaling": null,
+  "rope_theta": 1000000.0,
+  "sliding_window": null,
+  "tie_word_embeddings": true,
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.48.3",
+  "use_cache": true,
+  "use_sliding_window": false,
+  "vocab_size": 151936
+}
+
+[INFO|tokenization_utils_base.py:2491] 2025-03-06 04:42:31,254 >> tokenizer config file saved in /kaggle/working/checkpoint-1000/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2500] 2025-03-06 04:42:31,254 >> Special tokens file saved in /kaggle/working/checkpoint-1000/special_tokens_map.json
+[INFO|tokenization_utils_base.py:2491] 2025-03-06 04:42:32,803 >> tokenizer config file saved in /kaggle/working/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2500] 2025-03-06 04:42:32,803 >> Special tokens file saved in /kaggle/working/special_tokens_map.json
+It seems you are trying to upload a large folder at once. This might take some time and then fail if the folder is too large. For such cases, it is recommended to upload in smaller batches or to use `HfApi().upload_large_folder(...)`/`huggingface-cli upload-large-folder` instead. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#upload-a-large-folder.
+24%|███████▋ | 1003/4197 [5:10:16<139:42:10, 157.46s/it]
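The warning in the log above recommends `HfApi().upload_large_folder(...)` for pushes like this one. A minimal sketch of that call, assuming the run's /kaggle/working output directory and a hypothetical target repo (neither the repo id nor the token handling appears in the log):

    from huggingface_hub import HfApi

    api = HfApi()  # picks up the token from `huggingface-cli login` or the HF_TOKEN env var
    api.upload_large_folder(
        repo_id="your-username/qwen2.5-1.5b-news-finetune",  # hypothetical repo id, not from the log
        repo_type="model",                                   # upload_large_folder requires an explicit repo_type
        folder_path="/kaggle/working",                       # the output directory shown in the log
    )

upload_large_folder resumes interrupted uploads and pushes files in parallel batches, which is why the library suggests it instead of committing the whole checkpoint folder at once.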
LLaMA-Factory/wandb/run-20250305_233246-9ct1o6yk/run-9ct1o6yk.wandb
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:6713a73557d0cd69067d7a9ad7748c6768fdc0007b478c226343d9c67f7f86f2
+size 5668864
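The run-9ct1o6yk.wandb file above is stored as a Git LFS pointer (version / oid sha256 / size). One way to check that a downloaded copy is intact is to hash it locally against the oid; a small sketch, assuming the file sits in the current directory:

    import hashlib

    expected = "6713a73557d0cd69067d7a9ad7748c6768fdc0007b478c226343d9c67f7f86f2"  # oid from the pointer above

    sha = hashlib.sha256()
    with open("run-9ct1o6yk.wandb", "rb") as f:            # assumed local path
        for chunk in iter(lambda: f.read(1 << 20), b""):   # hash in 1 MiB chunks
            sha.update(chunk)

    print(sha.hexdigest() == expected)  # True if the download matches the LFS pointer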
adapter_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:17ba270b888a201fead48ad37c2c2e228e832cc5e2304c9d48ddcc2a4ab95b9d
 size 295488936
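The adapter_model.safetensors name follows the PEFT convention, so the checkpoint-1000 directory saved in the log should load as a LoRA adapter on top of the base model named in the config block above. A sketch under that assumption (device and dtype choices are not taken from the log):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "Qwen/Qwen2.5-1.5B-Instruct"           # base model shown in the saved config
    adapter_dir = "/kaggle/working/checkpoint-1000"  # checkpoint path from the log

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
    model = PeftModel.from_pretrained(base, adapter_dir)  # attaches the LoRA weights to the base model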
trainer_log.jsonl
CHANGED
@@ -53,3 +53,58 @@
 {"current_steps": 490, "total_steps": 4197, "loss": 0.3274, "lr": 9.991527351837174e-05, "epoch": 0.35012504465880673, "percentage": 11.68, "elapsed_time": "2:17:03", "remaining_time": "17:16:51"}
 {"current_steps": 500, "total_steps": 4197, "loss": 0.4301, "lr": 9.988934641068436e-05, "epoch": 0.35727045373347627, "percentage": 11.91, "elapsed_time": "2:18:40", "remaining_time": "17:05:21"}
 {"current_steps": 500, "total_steps": 4197, "epoch": 0.35727045373347627, "percentage": 11.91, "elapsed_time": "2:35:23", "remaining_time": "19:08:56"}
+{"current_steps": 510, "total_steps": 4197, "loss": 0.3636, "lr": 9.985996777749747e-05, "epoch": 0.36441586280814575, "percentage": 12.15, "elapsed_time": "2:36:50", "remaining_time": "18:53:55"}
+{"current_steps": 520, "total_steps": 4197, "loss": 0.4467, "lr": 9.982713965133122e-05, "epoch": 0.3715612718828153, "percentage": 12.39, "elapsed_time": "2:38:18", "remaining_time": "18:39:24"}
+{"current_steps": 530, "total_steps": 4197, "loss": 0.3875, "lr": 9.979086430335417e-05, "epoch": 0.37870668095748483, "percentage": 12.63, "elapsed_time": "2:39:43", "remaining_time": "18:25:08"}
+{"current_steps": 540, "total_steps": 4197, "loss": 0.3646, "lr": 9.975114424322609e-05, "epoch": 0.3858520900321543, "percentage": 12.87, "elapsed_time": "2:41:08", "remaining_time": "18:11:18"}
+{"current_steps": 550, "total_steps": 4197, "loss": 0.353, "lr": 9.970798221892452e-05, "epoch": 0.39299749910682386, "percentage": 13.1, "elapsed_time": "2:42:31", "remaining_time": "17:57:41"}
+{"current_steps": 560, "total_steps": 4197, "loss": 0.331, "lr": 9.966138121655445e-05, "epoch": 0.4001429081814934, "percentage": 13.34, "elapsed_time": "2:44:04", "remaining_time": "17:45:34"}
+{"current_steps": 570, "total_steps": 4197, "loss": 0.3132, "lr": 9.961134446014184e-05, "epoch": 0.40728831725616294, "percentage": 13.58, "elapsed_time": "2:45:30", "remaining_time": "17:33:09"}
+{"current_steps": 580, "total_steps": 4197, "loss": 0.3017, "lr": 9.955787541141055e-05, "epoch": 0.4144337263308324, "percentage": 13.82, "elapsed_time": "2:47:03", "remaining_time": "17:21:50"}
+{"current_steps": 590, "total_steps": 4197, "loss": 0.3596, "lr": 9.950097776954284e-05, "epoch": 0.42157913540550196, "percentage": 14.06, "elapsed_time": "2:48:28", "remaining_time": "17:09:57"}
+{"current_steps": 600, "total_steps": 4197, "loss": 0.3399, "lr": 9.944065547092345e-05, "epoch": 0.4287245444801715, "percentage": 14.3, "elapsed_time": "2:50:07", "remaining_time": "16:59:51"}
+{"current_steps": 600, "total_steps": 4197, "epoch": 0.4287245444801715, "percentage": 14.3, "elapsed_time": "3:06:49", "remaining_time": "18:40:03"}
+{"current_steps": 610, "total_steps": 4197, "loss": 0.3747, "lr": 9.937691268886725e-05, "epoch": 0.43586995355484104, "percentage": 14.53, "elapsed_time": "3:08:20", "remaining_time": "18:27:30"}
+{"current_steps": 620, "total_steps": 4197, "loss": 0.2868, "lr": 9.930975383333056e-05, "epoch": 0.4430153626295105, "percentage": 14.77, "elapsed_time": "3:09:47", "remaining_time": "18:14:56"}
+{"current_steps": 630, "total_steps": 4197, "loss": 0.3289, "lr": 9.923918355060599e-05, "epoch": 0.45016077170418006, "percentage": 15.01, "elapsed_time": "3:11:18", "remaining_time": "18:03:10"}
+{"current_steps": 640, "total_steps": 4197, "loss": 0.3664, "lr": 9.916520672300107e-05, "epoch": 0.4573061807788496, "percentage": 15.25, "elapsed_time": "3:12:27", "remaining_time": "17:49:38"}
+{"current_steps": 650, "total_steps": 4197, "loss": 0.3432, "lr": 9.908782846850037e-05, "epoch": 0.4644515898535191, "percentage": 15.49, "elapsed_time": "3:14:00", "remaining_time": "17:38:42"}
+{"current_steps": 660, "total_steps": 4197, "loss": 0.3242, "lr": 9.900705414041154e-05, "epoch": 0.4715969989281886, "percentage": 15.73, "elapsed_time": "3:15:23", "remaining_time": "17:27:08"}
+{"current_steps": 670, "total_steps": 4197, "loss": 0.317, "lr": 9.892288932699484e-05, "epoch": 0.47874240800285817, "percentage": 15.96, "elapsed_time": "3:16:38", "remaining_time": "17:15:08"}
+{"current_steps": 680, "total_steps": 4197, "loss": 0.322, "lr": 9.883533985107663e-05, "epoch": 0.4858878170775277, "percentage": 16.2, "elapsed_time": "3:18:09", "remaining_time": "17:04:55"}
+{"current_steps": 690, "total_steps": 4197, "loss": 0.343, "lr": 9.874441176964642e-05, "epoch": 0.4930332261521972, "percentage": 16.44, "elapsed_time": "3:19:36", "remaining_time": "16:54:32"}
+{"current_steps": 700, "total_steps": 4197, "loss": 0.3278, "lr": 9.865011137343787e-05, "epoch": 0.5001786352268668, "percentage": 16.68, "elapsed_time": "3:20:56", "remaining_time": "16:43:51"}
+{"current_steps": 700, "total_steps": 4197, "epoch": 0.5001786352268668, "percentage": 16.68, "elapsed_time": "3:37:40", "remaining_time": "18:07:24"}
+{"current_steps": 710, "total_steps": 4197, "loss": 0.3902, "lr": 9.85524451864936e-05, "epoch": 0.5073240443015362, "percentage": 16.92, "elapsed_time": "3:39:06", "remaining_time": "17:56:07"}
+{"current_steps": 720, "total_steps": 4197, "loss": 0.369, "lr": 9.845141996571384e-05, "epoch": 0.5144694533762058, "percentage": 17.16, "elapsed_time": "3:40:27", "remaining_time": "17:44:38"}
+{"current_steps": 730, "total_steps": 4197, "loss": 0.3174, "lr": 9.834704270038888e-05, "epoch": 0.5216148624508753, "percentage": 17.39, "elapsed_time": "3:41:49", "remaining_time": "17:33:29"}
+{"current_steps": 740, "total_steps": 4197, "loss": 0.3501, "lr": 9.823932061171561e-05, "epoch": 0.5287602715255448, "percentage": 17.63, "elapsed_time": "3:43:16", "remaining_time": "17:23:04"}
+{"current_steps": 750, "total_steps": 4197, "loss": 0.3292, "lr": 9.812826115229789e-05, "epoch": 0.5359056806002144, "percentage": 17.87, "elapsed_time": "3:44:44", "remaining_time": "17:12:54"}
+{"current_steps": 760, "total_steps": 4197, "loss": 0.459, "lr": 9.801387200563096e-05, "epoch": 0.5430510896748839, "percentage": 18.11, "elapsed_time": "3:46:09", "remaining_time": "17:02:47"}
+{"current_steps": 770, "total_steps": 4197, "loss": 0.3409, "lr": 9.789616108556992e-05, "epoch": 0.5501964987495535, "percentage": 18.35, "elapsed_time": "3:47:27", "remaining_time": "16:52:19"}
+{"current_steps": 780, "total_steps": 4197, "loss": 0.281, "lr": 9.77751365357821e-05, "epoch": 0.5573419078242229, "percentage": 18.58, "elapsed_time": "3:48:48", "remaining_time": "16:42:22"}
+{"current_steps": 790, "total_steps": 4197, "loss": 0.3511, "lr": 9.765080672918374e-05, "epoch": 0.5644873168988924, "percentage": 18.82, "elapsed_time": "3:50:17", "remaining_time": "16:33:11"}
+{"current_steps": 800, "total_steps": 4197, "loss": 0.2298, "lr": 9.752318026736078e-05, "epoch": 0.571632725973562, "percentage": 19.06, "elapsed_time": "3:51:36", "remaining_time": "16:23:28"}
+{"current_steps": 800, "total_steps": 4197, "epoch": 0.571632725973562, "percentage": 19.06, "elapsed_time": "4:08:19", "remaining_time": "17:34:28"}
+{"current_steps": 810, "total_steps": 4197, "loss": 0.3214, "lr": 9.739226597997359e-05, "epoch": 0.5787781350482315, "percentage": 19.3, "elapsed_time": "4:09:44", "remaining_time": "17:24:18"}
+{"current_steps": 820, "total_steps": 4197, "loss": 0.2697, "lr": 9.725807292414629e-05, "epoch": 0.585923544122901, "percentage": 19.54, "elapsed_time": "4:11:02", "remaining_time": "17:13:51"}
+{"current_steps": 830, "total_steps": 4197, "loss": 0.3315, "lr": 9.712061038384002e-05, "epoch": 0.5930689531975706, "percentage": 19.78, "elapsed_time": "4:12:24", "remaining_time": "17:03:57"}
+{"current_steps": 840, "total_steps": 4197, "loss": 0.4036, "lr": 9.697988786921071e-05, "epoch": 0.6002143622722401, "percentage": 20.01, "elapsed_time": "4:13:56", "remaining_time": "16:54:53"}
+{"current_steps": 850, "total_steps": 4197, "loss": 0.2946, "lr": 9.683591511595107e-05, "epoch": 0.6073597713469097, "percentage": 20.25, "elapsed_time": "4:15:13", "remaining_time": "16:44:57"}
+{"current_steps": 860, "total_steps": 4197, "loss": 0.2259, "lr": 9.668870208461713e-05, "epoch": 0.6145051804215791, "percentage": 20.49, "elapsed_time": "4:16:32", "remaining_time": "16:35:25"}
+{"current_steps": 870, "total_steps": 4197, "loss": 0.4, "lr": 9.653825895993908e-05, "epoch": 0.6216505894962486, "percentage": 20.73, "elapsed_time": "4:17:53", "remaining_time": "16:26:12"}
+{"current_steps": 880, "total_steps": 4197, "loss": 0.2804, "lr": 9.63845961501166e-05, "epoch": 0.6287959985709182, "percentage": 20.97, "elapsed_time": "4:19:21", "remaining_time": "16:17:37"}
+{"current_steps": 890, "total_steps": 4197, "loss": 0.3593, "lr": 9.622772428609887e-05, "epoch": 0.6359414076455877, "percentage": 21.21, "elapsed_time": "4:20:42", "remaining_time": "16:08:42"}
+{"current_steps": 900, "total_steps": 4197, "loss": 0.3058, "lr": 9.606765422084908e-05, "epoch": 0.6430868167202572, "percentage": 21.44, "elapsed_time": "4:22:16", "remaining_time": "16:00:47"}
+{"current_steps": 900, "total_steps": 4197, "epoch": 0.6430868167202572, "percentage": 21.44, "elapsed_time": "4:38:59", "remaining_time": "17:02:03"}
+{"current_steps": 910, "total_steps": 4197, "loss": 0.3318, "lr": 9.590439702859351e-05, "epoch": 0.6502322257949268, "percentage": 21.68, "elapsed_time": "4:40:16", "remaining_time": "16:52:21"}
+{"current_steps": 920, "total_steps": 4197, "loss": 0.3328, "lr": 9.573796400405544e-05, "epoch": 0.6573776348695963, "percentage": 21.92, "elapsed_time": "4:41:47", "remaining_time": "16:43:42"}
+{"current_steps": 930, "total_steps": 4197, "loss": 0.2673, "lr": 9.55683666616737e-05, "epoch": 0.6645230439442658, "percentage": 22.16, "elapsed_time": "4:43:15", "remaining_time": "16:35:02"}
+{"current_steps": 940, "total_steps": 4197, "loss": 0.3538, "lr": 9.539561673480612e-05, "epoch": 0.6716684530189353, "percentage": 22.4, "elapsed_time": "4:44:36", "remaining_time": "16:26:09"}
+{"current_steps": 950, "total_steps": 4197, "loss": 0.3228, "lr": 9.521972617491767e-05, "epoch": 0.6788138620936048, "percentage": 22.64, "elapsed_time": "4:45:52", "remaining_time": "16:17:04"}
+{"current_steps": 960, "total_steps": 4197, "loss": 0.3974, "lr": 9.504070715075372e-05, "epoch": 0.6859592711682744, "percentage": 22.87, "elapsed_time": "4:47:21", "remaining_time": "16:08:55"}
+{"current_steps": 970, "total_steps": 4197, "loss": 0.3236, "lr": 9.485857204749811e-05, "epoch": 0.6931046802429439, "percentage": 23.11, "elapsed_time": "4:48:45", "remaining_time": "16:00:37"}
+{"current_steps": 980, "total_steps": 4197, "loss": 0.3027, "lr": 9.467333346591632e-05, "epoch": 0.7002500893176135, "percentage": 23.35, "elapsed_time": "4:50:06", "remaining_time": "15:52:18"}
+{"current_steps": 990, "total_steps": 4197, "loss": 0.3005, "lr": 9.448500422148364e-05, "epoch": 0.707395498392283, "percentage": 23.59, "elapsed_time": "4:51:35", "remaining_time": "15:44:34"}
+{"current_steps": 1000, "total_steps": 4197, "loss": 0.294, "lr": 9.429359734349863e-05, "epoch": 0.7145409074669525, "percentage": 23.83, "elapsed_time": "4:52:59", "remaining_time": "15:36:41"}
+{"current_steps": 1000, "total_steps": 4197, "epoch": 0.7145409074669525, "percentage": 23.83, "elapsed_time": "5:09:42", "remaining_time": "16:30:08"}
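Each trainer_log.jsonl entry is one JSON object per line; training records carry "loss" and "lr", while the eval-only records at steps 500, 600, ... omit them. A small sketch that extracts the training-loss curve from the file (the path is assumed to be a local copy):

    import json

    steps, losses = [], []
    with open("trainer_log.jsonl") as f:
        for line in f:
            record = json.loads(line)
            if "loss" in record:                      # skip the eval-only progress records
                steps.append(record["current_steps"])
                losses.append(record["loss"])

    for step, loss in zip(steps, losses):
        print(f"step {step:>4}: loss {loss:.4f}")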