06-11 12:11 - modeling.trainer - INFO - train - iter 1883200: loss 2.8307, time 6.57s
06-11 12:11 - modeling.trainer - INFO - train - iter 1883250: loss 2.8406, time 6.67s
06-11 12:11 - modeling.trainer - INFO - train - iter 1883300: loss 2.8364, time 6.65s
06-11 12:11 - modeling.trainer - INFO - train - iter 1883350: loss 2.8244, time 6.86s
06-11 12:11 - modeling.trainer - INFO - train - iter 1883400: loss 2.8280, time 6.63s
06-11 12:12 - modeling.trainer - INFO - train - iter 1883450: loss 2.8307, time 6.67s
06-11 12:12 - modeling.trainer - INFO - train - iter 1883500: loss 2.8336, time 6.69s
06-11 12:12 - modeling.trainer - INFO - train - iter 1883550: loss 2.8357, time 6.64s
06-11 12:12 - modeling.trainer - INFO - train - iter 1883600: loss 2.8352, time 6.72s
06-11 12:12 - modeling.trainer - INFO - train - iter 1883650: loss 2.8421, time 6.86s
06-11 12:12 - modeling.trainer - INFO - train - iter 1883700: loss 2.8407, time 6.79s
06-11 12:12 - modeling.trainer - INFO - train - iter 1883750: loss 2.8398, time 6.71s
06-11 12:12 - modeling.trainer - INFO - train - iter 1883800: loss 2.8357, time 6.75s
06-11 12:12 - modeling.trainer - INFO - train - iter 1883850: loss 2.8252, time 6.78s
06-11 12:13 - modeling.trainer - INFO - train - iter 1883900: loss 2.8326, time 6.64s
06-11 12:13 - modeling.trainer - INFO - train - iter 1883950: loss 2.8349, time 6.63s
06-11 12:13 - modeling.trainer - INFO - train - iter 1884000: loss 2.8278, time 6.65s
06-11 12:13 - modeling.trainer - INFO - train - iter 1884050: loss 2.8283, time 6.87s
06-11 12:13 - modeling.trainer - INFO - train - iter 1884100: loss 2.8267, time 6.72s
06-11 12:13 - modeling.trainer - INFO - train - iter 1884150: loss 2.8328, time 6.84s
06-11 12:13 - modeling.trainer - INFO - train - iter 1884200: loss 2.8393, time 6.59s
06-11 12:13 - modeling.trainer - INFO - train - iter 1884250: loss 2.8314, time 6.61s
06-11 12:13 - modeling.trainer - INFO - train - iter 1884300: loss 2.8312, time 6.62s
06-11 12:14 - modeling.trainer - INFO - train - iter 1884350: loss 2.8315, time 6.63s
06-11 12:14 - modeling.trainer - INFO - train - iter 1884400: loss 2.8375, time 6.75s
06-11 12:14 - modeling.trainer - INFO - train - iter 1884450: loss 2.8410, time 6.59s
06-11 12:14 - modeling.trainer - INFO - train - iter 1884500: loss 2.8400, time 6.73s
06-11 12:14 - modeling.trainer - INFO - train - iter 1884550: loss 2.8457, time 7.38s
06-11 12:14 - modeling.trainer - INFO - train - iter 1884600: loss 2.8355, time 6.83s
06-11 12:14 - modeling.trainer - INFO - train - iter 1884650: loss 2.8328, time 6.66s
06-11 12:14 - modeling.trainer - INFO - train - iter 1884700: loss 2.8366, time 6.80s
06-11 12:14 - modeling.trainer - INFO - train - iter 1884750: loss 2.8366, time 6.91s
06-11 12:15 - modeling.trainer - INFO - train - iter 1884800: loss 2.8370, time 6.69s
06-11 12:15 - modeling.trainer - INFO - train - iter 1884850: loss 2.8329, time 6.64s
06-11 12:15 - modeling.trainer - INFO - train - iter 1884900: loss 2.8366, time 6.78s
06-11 12:15 - modeling.trainer - INFO - train - iter 1884950: loss 2.8467, time 6.83s
06-11 12:15 - modeling.trainer - INFO - train - iter 1885000: loss 2.8445, time 6.76s
06-11 12:15 - modeling.trainer - INFO - train - iter 1885050: loss 2.8344, time 6.78s
06-11 12:15 - modeling.trainer - INFO - train - iter 1885100: loss 2.8269, time 6.82s
06-11 12:15 - modeling.trainer - INFO - train - iter 1885150: loss 2.8394, time 6.79s
06-11 12:15 - modeling.trainer - INFO - train - iter 1885200: loss 2.8436, time 6.66s
06-11 12:16 - modeling.trainer - INFO - train - iter 1885250: loss 2.8282, time 6.68s
06-11 12:16 - modeling.trainer - INFO - train - iter 1885300: loss 2.8251, time 6.87s
06-11 12:16 - modeling.trainer - INFO - train - iter 1885350: loss 2.8360, time 6.87s
06-11 12:16 - modeling.trainer - INFO - train - iter 1885400: loss 2.8428, time 6.97s
06-11 12:16 - modeling.trainer - INFO - train - iter 1885450: loss 2.8472, time 7.07s
06-11 12:16 - modeling.trainer - INFO - train - iter 1885500: loss 2.8460, time 7.02s
06-11 12:16 - modeling.trainer - INFO - train - iter 1885550: loss 2.8342, time 7.06s
06-11 12:16 - modeling.trainer - INFO - train - iter 1885600: loss 2.8339, time 6.99s
06-11 12:17 - modeling.trainer - INFO - train - iter 1885650: loss 2.8330, time 6.83s
06-12 01:39 - modeling.utils - INFO - not setting manual seed to 42 due to dataloader behavior after requeue
06-12 01:39 - modeling.trainer - INFO - saving experiment configuration
06-12 01:39 - modeling.trainer - INFO - model parameters: 0.31B
06-12 01:39 - modeling.trainer - INFO - using fused AdamW optimizer
06-12 01:39 - modeling.trainer - INFO - optimizer initialized
06-12 01:39 - modeling.trainer - INFO - model compiled
06-12 01:40 - modeling.trainer - INFO - loading last checkpoint from iter 1880000: best_val_loss 2.75325268273676
06-12 01:42 - modeling.trainer - INFO - val - iter 1880000: lm_loss 1.3553, value_loss 0.7340, time_loss 0.6639, loss 2.7533, time 166.64s
06-12 01:42 - modeling.trainer - INFO - new best val loss 2.7533
06-12 01:43 - modeling.trainer - INFO - saved checkpoint to models/medium/best.pt
06-12 01:43 - modeling.trainer - INFO - saved checkpoint to models/medium/last.pt
06-12 01:44 - modeling.trainer - INFO - train - iter 1880000: loss 2.7822, time 257.80s
06-12 01:44 - modeling.trainer - INFO - train - iter 1880050: loss 2.8371, time 7.23s
06-12 01:44 - modeling.trainer - INFO - train - iter 1880100: loss 2.8312, time 7.22s
06-12 01:44 - modeling.trainer - INFO - train - iter 1880150: loss 2.8299, time 8.30s
06-12 01:44 - modeling.trainer - INFO - train - iter 1880200: loss 2.8316, time 7.28s
06-12 01:45 - modeling.trainer - INFO - train - iter 1880250: loss 2.8252, time 7.40s
06-12 01:45 - modeling.trainer - INFO - train - iter 1880300: loss 2.8242, time 7.43s
06-12 01:45 - modeling.trainer - INFO - train - iter 1880350: loss 2.8296, time 7.27s
06-12 01:45 - modeling.trainer - INFO - train - iter 1880400: loss 2.8363, time 7.16s
06-12 01:45 - modeling.trainer - INFO - train - iter 1880450: loss 2.8386, time 7.35s
06-12 01:45 - modeling.trainer - INFO - train - iter 1880500: loss 2.8420, time 7.21s
06-12 01:45 - modeling.trainer - INFO - train - iter 1880550: loss 2.8324, time 7.13s
06-12 01:45 - modeling.trainer - INFO - train - iter 1880600: loss 2.8301, time 7.34s
06-12 01:46 - modeling.trainer - INFO - train - iter 1880650: loss 2.8425, time 7.23s
06-12 01:46 - modeling.trainer - INFO - train - iter 1880700: loss 2.8443, time 7.22s
06-12 01:46 - modeling.trainer - INFO - train - iter 1880750: loss 2.8364, time 7.09s
06-12 01:46 - modeling.trainer - INFO - train - iter 1880800: loss 2.8265, time 7.27s
06-12 01:46 - modeling.trainer - INFO - train - iter 1880850: loss 2.8255, time 7.17s
06-12 01:46 - modeling.trainer - INFO - train - iter 1880900: loss 2.8340, time 7.13s
06-12 01:46 - modeling.trainer - INFO - train - iter 1880950: loss 2.8365, time 7.13s
06-12 01:46 - modeling.trainer - INFO - train - iter 1881000: loss 2.8360, time 7.11s
06-12 01:47 - modeling.trainer - INFO - train - iter 1881050: loss 2.8380, time 7.05s
06-12 01:47 - modeling.trainer - INFO - train - iter 1881100: loss 2.8417, time 7.13s
06-12 01:47 - modeling.trainer - INFO - train - iter 1881150: loss 2.8456, time 7.15s
06-12 01:47 - modeling.trainer - INFO - train - iter 1881200: loss 2.8406, time 7.15s
06-12 01:47 - modeling.trainer - INFO - train - iter 1881250: loss 2.8372, time 7.09s
06-12 01:47 - modeling.trainer - INFO - train - iter 1881300: loss 2.8353, time 7.13s
06-12 01:47 - modeling.trainer - INFO - train - iter 1881350: loss 2.8379, time 7.14s
06-12 01:47 - modeling.trainer - INFO - train - iter 1881400: loss 2.8425, time 7.11s
06-12 01:47 - modeling.trainer - INFO - train - iter 1881450: loss 2.8355, time 7.06s
06-12 01:48 - modeling.trainer - INFO - train - iter 1881500: loss 2.8331, time 7.21s
06-12 01:48 - modeling.trainer - INFO - train - iter 1881550: loss 2.8333, time 7.16s
06-12 01:48 - modeling.trainer - INFO - train - iter 1881600: loss 2.8347, time 7.03s
06-12 01:48 - modeling.trainer - INFO - train - iter 1881650: loss 2.8336, time 7.08s
06-12 01:48 - modeling.trainer - INFO - train - iter 1881700: loss 2.8312, time 7.14s
06-12 01:48 - modeling.trainer - INFO - train - iter 1881750: loss 2.8376, time 7.13s
06-12 01:48 - modeling.trainer - INFO - train - iter 1881800: loss 2.8348, time 7.01s
06-12 01:48 - modeling.trainer - INFO - train - iter 1881850: loss 2.8347, time 7.11s
06-12 01:49 - modeling.trainer - INFO - train - iter 1881900: loss 2.8277, time 7.76s