wandb: WARNING Saving files without folders. If you want to preserve subdirectories pass base_path to wandb.save, i.e. wandb.save("/mnt/folder/file.h5", base_path="/mnt")
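The wandb warning above suggests passing base_path so saved files keep their subdirectory layout under the run's files. A minimal sketch of that call, reusing the example path from the warning (the project name is an assumption, not taken from this log):

```python
import wandb

# Hypothetical run setup; the project name is an assumption.
run = wandb.init(project="my-finetune")

# Without base_path the file is stored flat at the top level of the run;
# with base_path="/mnt" the relative path "folder/file.h5" is preserved.
wandb.save("/mnt/folder/file.h5", base_path="/mnt")
```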
0%| | 0/9378 [00:00<?, ?it/s]You're using a PreTrainedTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
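The tokenizer notice refers to batching through the tokenizer's `__call__` rather than encoding each text and padding in a separate step. A minimal sketch of the two patterns, assuming a generic fast tokenizer (the `gpt2` checkpoint is only an illustration, not the model being trained here):

```python
from transformers import AutoTokenizer

# Placeholder fast tokenizer, not the one from this run.
tok = AutoTokenizer.from_pretrained("gpt2", use_fast=True)
tok.pad_token = tok.eos_token  # gpt2 ships without a pad token

texts = ["a short example", "a second, somewhat longer example"]

# Slower pattern the warning refers to: per-text encode, then a separate pad call.
ids = [tok.encode(t) for t in texts]
slow = tok.pad({"input_ids": ids}, padding=True, return_tensors="pt")

# Faster pattern: one __call__ that tokenizes and pads the whole batch at once.
fast = tok(texts, padding=True, return_tensors="pt")
```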
/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
warnings.warn(
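The checkpoint warning is about the default value of `use_reentrant` changing in a future PyTorch release; passing it explicitly silences it. A minimal sketch with a toy module, assuming the non-reentrant variant the warning recommends (the real call happens inside the model's gradient-checkpointing code path, not in user code):

```python
import torch
from torch.utils.checkpoint import checkpoint

# Toy module standing in for a transformer block.
layer = torch.nn.Linear(16, 16)
x = torch.randn(4, 16, requires_grad=True)

# Passing use_reentrant explicitly silences the warning; False selects the
# non-reentrant implementation the warning recommends.
y = checkpoint(layer, x, use_reentrant=False)
y.sum().backward()
```

When gradient checkpointing is enabled through the Hugging Face Trainer, recent transformers versions also accept gradient_checkpointing_kwargs={"use_reentrant": False} to the same effect.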
[2024-05-15 12:19:40,452] [INFO] [axolotl.callbacks.on_train_begin:770] [PID:13671] [RANK:0] The Axolotl config has been saved to the WandB run under files.
[2024-05-15 12:19:41,543] [INFO] [axolotl.utils.samplers.multipack._len_est:184] [PID:13671] [RANK:0] packing_efficiency_estimate: 0.92 total_num_tokens per device: 47609834
[2024-05-15 12:19:42,607] [INFO] [axolotl.utils.samplers.multipack._len_est:184] [PID:13671] [RANK:0] packing_efficiency_estimate: 0.92 total_num_tokens per device: 47609834
/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py:1290: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)
total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)])
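The constructor deprecation above is triggered inside DeepSpeed's stage-1/2 optimizer (the `total_norm_cuda` line), not in user code. A sketch of the replacement pattern PyTorch recommends, with a hypothetical stand-in for the gradient norm and assuming a CUDA device is available:

```python
import torch

# Stand-in value for the gradient norm computed by DeepSpeed.
total_norm = 3.14

# Legacy style being warned about (roughly what the DeepSpeed line does):
#   total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])
# Recommended replacement:
total_norm_cuda = torch.tensor([float(total_norm)], dtype=torch.float32, device="cuda")
```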
0%| | 1/9378 [00:40<106:00:01, 40.70s/it]
0%| | 2/9378 [01:11<90:57:52, 34.93s/it]
[2024-05-15 12:20:52,036] [INFO] [axolotl.callbacks.on_step_end:125] [PID:13671] [RANK:0] GPU memory usage while training: 16.077GB (+26.679GB cache, +1.247GB misc)
{'loss': 2.2459, 'grad_norm': 55.8736017722144, 'learning_rate': 3.480682213713888e-08, 'epoch': 0.0}
0%| | 4/9378 [02:15<85:56:29, 33.01s/it]
0%| | 5/9378 [02:46<83:57:52, 32.25s/it]
0%| | 6/9378 [03:17<82:53:32, 31.84s/it]
0%| | 7/9378 [03:50<83:39:27, 32.14s/it]
0%| | 8/9378 [04:21<82:44:26, 31.79s/it]
0%| | 9/9378 [04:51<81:28:35, 31.31s/it]
0%| | 10/9378 [05:23<82:12:43, 31.59s/it]
0%| | 11/9378 [05:55<82:17:21, 31.63s/it]
0%| | 12/9378 [06:26<81:50:49, 31.46s/it]
0%| | 13/9378 [06:57<81:10:49, 31.21s/it]
0%| | 14/9378 [07:30<82:33:45, 31.74s/it]