tangled-1.0-0.5b-base
time python -B prepare_core_datasets.py
# 4096 x 4000
Progress: 100%|████████| 194/194 [1:18:02<00:00, 24.14s/it]
Workers are finished.
Progress: 100%|████████| 194/194 [1:19:51<00:00, 24.70s/it]
Workers are finished.
i=0, block_size=4096, chunk_size=16384000, len(dataset)=2082568, len(dataset) * block_size=8530198528
Total number of tokens in the optimized dataset '../core-data-0-4096-4000' is 8,530,198,528
i=1, block_size=32768, chunk_size=16384000, len(dataset)=259888, len(dataset) * block_size=8516009984
Total number of tokens in the optimized dataset '../core-data-1-32768-500' is 8,516,009,984
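The totals above are internally consistent: the chunk size is block_size multiplied by the blocks-per-chunk noted in the `# 4096 x 4000` comment, and each reported token count is simply len(dataset) * block_size. A quick check:

```python
# Chunk size = block_size * blocks per chunk (the same 16,384,000-token
# chunks are used for both dataset variants).
assert 4096 * 4000 == 16_384_000
assert 32768 * 500 == 16_384_000

# Total tokens = number of blocks * block_size, matching the logged figures.
assert 2_082_568 * 4096 == 8_530_198_528   # core-data-0-4096-4000
assert 259_888 * 32768 == 8_516_009_984    # core-data-1-32768-500
```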
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain-core-model.yaml
Seed set to 23
Time to instantiate model: 0.20 seconds.
Total parameters: 151,047,936
Verifying settings ...
Measured TFLOPs: 82.48
Epoch 1 | iter 256 step 1 | loss train: 12.109, val: n/a | iter time: 271.40 ms (step) remaining time: 2 days, 22:59:18
Epoch 1 | iter 512 step 2 | loss train: 12.114, val: n/a | iter time: 212.21 ms (step) remaining time: 2 days, 16:36:01
Epoch 1 | iter 768 step 3 | loss train: 12.109, val: n/a | iter time: 212.84 ms (step) remaining time: 2 days, 14:27:03
Epoch 1 | iter 1024 step 4 | loss train: 12.108, val: n/a | iter time: 211.38 ms (step) remaining time: 2 days, 13:22:08
Epoch 1 | iter 1280 step 5 | loss train: 12.110, val: n/a | iter time: 211.72 ms (step) remaining time: 2 days, 12:43:01
Epoch 1 | iter 1536 step 6 | loss train: 12.107, val: n/a | iter time: 211.54 ms (step) remaining time: 2 days, 12:16:52
Epoch 1 | iter 1792 step 7 | loss train: 12.108, val: n/a | iter time: 211.76 ms (step) remaining time: 2 days, 11:57:55
Epoch 1 | iter 2048 step 8 | loss train: 12.109, val: n/a | iter time: 212.06 ms (step) remaining time: 2 days, 11:43:27
Epoch 1 | iter 2304 step 9 | loss train: 12.100, val: n/a | iter time: 212.64 ms (step) remaining time: 2 days, 11:32:08
Epoch 1 | iter 2560 step 10 | loss train: 12.108, val: n/a | iter time: 212.05 ms (step) remaining time: 2 days, 11:22:53
Epoch 1 | iter 2816 step 11 | loss train: 12.106, val: n/a | iter time: 212.49 ms (step) remaining time: 2 days, 11:15:09
Epoch 1 | iter 3072 step 12 | loss train: 12.105, val: n/a | iter time: 212.50 ms (step) remaining time: 2 days, 11:08:34
Epoch 1 | iter 3328 step 13 | loss train: 12.102, val: n/a | iter time: 211.33 ms (step) remaining time: 2 days, 11:02:51
Epoch 1 | iter 3584 step 14 | loss train: 12.103, val: n/a | iter time: 212.73 ms (step) remaining time: 2 days, 10:57:54
# ...
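Two quick sanity checks on the training log above, both illustrative rather than definitive (neither the vocabulary size nor the GPU model appears in the output, so the constants below are assumptions):

```python
import math

# 1) Starting loss: an untrained LM predicts roughly uniformly, so the
#    expected initial cross-entropy is ln(vocab_size). The logged ~12.11
#    implies a vocabulary of roughly e**12.11, i.e. ~182k tokens
#    (inferred from the log, not a confirmed value).
print(round(math.exp(12.11)))

# 2) Utilization: the measured 82.48 TFLOPs against an assumed hardware peak.
#    165 TFLOPS approximates an RTX 4090's bf16 dense peak; the actual GPU is
#    not named in the log, so this ratio is purely illustrative.
print(f"MFU ~ {82.48 / 165.0:.0%}")  # ~50% under this assumption
```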