[2025-01-10 14:07:10,187][00511] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-01-10 14:07:10,191][00511] Rollout worker 0 uses device cpu
[2025-01-10 14:07:10,192][00511] Rollout worker 1 uses device cpu
[2025-01-10 14:07:10,194][00511] Rollout worker 2 uses device cpu
[2025-01-10 14:07:10,196][00511] Rollout worker 3 uses device cpu
[2025-01-10 14:07:10,198][00511] Rollout worker 4 uses device cpu
[2025-01-10 14:07:10,199][00511] Rollout worker 5 uses device cpu
[2025-01-10 14:07:10,201][00511] Rollout worker 6 uses device cpu
[2025-01-10 14:07:10,202][00511] Rollout worker 7 uses device cpu
[2025-01-10 14:07:10,365][00511] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-10 14:07:10,368][00511] InferenceWorker_p0-w0: min num requests: 2
[2025-01-10 14:07:10,400][00511] Starting all processes...
[2025-01-10 14:07:10,402][00511] Starting process learner_proc0
[2025-01-10 14:07:10,449][00511] Starting all processes...
[2025-01-10 14:07:10,458][00511] Starting process inference_proc0-0
[2025-01-10 14:07:10,459][00511] Starting process rollout_proc0
[2025-01-10 14:07:10,461][00511] Starting process rollout_proc1
[2025-01-10 14:07:10,461][00511] Starting process rollout_proc2
[2025-01-10 14:07:10,461][00511] Starting process rollout_proc3
[2025-01-10 14:07:10,461][00511] Starting process rollout_proc4
[2025-01-10 14:07:10,461][00511] Starting process rollout_proc5
[2025-01-10 14:07:10,461][00511] Starting process rollout_proc6
[2025-01-10 14:07:10,461][00511] Starting process rollout_proc7
[2025-01-10 14:07:28,686][02470] Worker 6 uses CPU cores [0]
[2025-01-10 14:07:28,853][02472] Worker 5 uses CPU cores [1]
[2025-01-10 14:07:28,877][02469] Worker 3 uses CPU cores [1]
[2025-01-10 14:07:28,961][02452] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-10 14:07:28,962][02452] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-01-10 14:07:28,997][02466] Worker 0 uses CPU cores [0]
[2025-01-10 14:07:29,004][02452] Num visible devices: 1
[2025-01-10 14:07:29,033][02452] Starting seed is not provided
[2025-01-10 14:07:29,034][02452] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-10 14:07:29,034][02452] Initializing actor-critic model on device cuda:0
[2025-01-10 14:07:29,035][02452] RunningMeanStd input shape: (3, 72, 128)
[2025-01-10 14:07:29,039][02452] RunningMeanStd input shape: (1,)
[2025-01-10 14:07:29,050][02473] Worker 7 uses CPU cores [1]
[2025-01-10 14:07:29,100][02452] ConvEncoder: input_channels=3
[2025-01-10 14:07:29,113][02465] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-10 14:07:29,114][02465] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-01-10 14:07:29,132][02465] Num visible devices: 1
[2025-01-10 14:07:29,143][02467] Worker 1 uses CPU cores [1]
[2025-01-10 14:07:29,150][02471] Worker 4 uses CPU cores [0]
[2025-01-10 14:07:29,160][02468] Worker 2 uses CPU cores [0]
[2025-01-10 14:07:29,370][02452] Conv encoder output size: 512
[2025-01-10 14:07:29,371][02452] Policy head output size: 512
[2025-01-10 14:07:29,417][02452] Created Actor Critic model with architecture:
[2025-01-10 14:07:29,417][02452] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2025-01-10 14:07:29,794][02452] Using optimizer <class 'torch.optim.adam.Adam'>
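
Note: the printout above pins down the network topology but not the conv-layer hyperparameters. Below is a minimal PyTorch sketch of the same shape, assuming Sample Factory's default simple convnet (32/64/128 filters, 8x8/4x4/3x3 kernels, strides 4/2/2); those numbers are assumptions, while the 512-unit sizes, the GRU(512, 512) core, the 1-unit value head, and the 5-action head are confirmed by the log.

    import torch
    from torch import nn

    # Hypothetical reconstruction of the printed model, not Sample Factory's
    # actual classes. Conv channels/kernels/strides are assumed defaults; the
    # 512-d encoder output, GRU core, and head sizes match the log above.
    class ActorCriticSketch(nn.Module):
        def __init__(self, num_actions: int = 5, hidden: int = 512):
            super().__init__()
            self.conv_head = nn.Sequential(               # (conv_head): Conv2d+ELU x3
                nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
                nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
            )
            with torch.no_grad():                         # flatten size for a 3x72x128 input
                n_flat = self.conv_head(torch.zeros(1, 3, 72, 128)).numel()
            self.mlp_layers = nn.Sequential(              # (mlp_layers): Linear+ELU to 512
                nn.Flatten(), nn.Linear(n_flat, hidden), nn.ELU())
            self.core = nn.GRU(hidden, hidden)            # (core): GRU(512, 512)
            self.critic_linear = nn.Linear(hidden, 1)     # (critic_linear): value head
            self.distribution_linear = nn.Linear(hidden, num_actions)  # action logits

        def forward(self, obs, rnn_state=None):
            x = self.mlp_layers(self.conv_head(obs))      # (batch, 512)
            x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
            x = x.squeeze(0)
            return self.distribution_linear(x), self.critic_linear(x), rnn_state

    model = ActorCriticSketch()
    optimizer = torch.optim.Adam(model.parameters())      # matches the optimizer line above

"Shared weights" here means the policy and value heads read the same 512-d core output, which is why the log reports a single conv encoder and a single policy head size.
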
[2025-01-10 14:07:30,358][00511] Heartbeat connected on Batcher_0
[2025-01-10 14:07:30,366][00511] Heartbeat connected on InferenceWorker_p0-w0
[2025-01-10 14:07:30,375][00511] Heartbeat connected on RolloutWorker_w0
[2025-01-10 14:07:30,379][00511] Heartbeat connected on RolloutWorker_w1
[2025-01-10 14:07:30,383][00511] Heartbeat connected on RolloutWorker_w2
[2025-01-10 14:07:30,386][00511] Heartbeat connected on RolloutWorker_w3
[2025-01-10 14:07:30,389][00511] Heartbeat connected on RolloutWorker_w4
[2025-01-10 14:07:30,393][00511] Heartbeat connected on RolloutWorker_w5
[2025-01-10 14:07:30,397][00511] Heartbeat connected on RolloutWorker_w6
[2025-01-10 14:07:30,400][00511] Heartbeat connected on RolloutWorker_w7
[2025-01-10 14:07:33,105][02452] No checkpoints found
[2025-01-10 14:07:33,106][02452] Did not load from checkpoint, starting from scratch!
[2025-01-10 14:07:33,106][02452] Initialized policy 0 weights for model version 0
[2025-01-10 14:07:33,109][02452] LearnerWorker_p0 finished initialization!
[2025-01-10 14:07:33,110][02452] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-10 14:07:33,110][00511] Heartbeat connected on LearnerWorker_p0
[2025-01-10 14:07:33,297][02465] RunningMeanStd input shape: (3, 72, 128)
[2025-01-10 14:07:33,299][02465] RunningMeanStd input shape: (1,)
[2025-01-10 14:07:33,311][02465] ConvEncoder: input_channels=3
[2025-01-10 14:07:33,413][02465] Conv encoder output size: 512
[2025-01-10 14:07:33,413][02465] Policy head output size: 512
[2025-01-10 14:07:33,469][00511] Inference worker 0-0 is ready!
[2025-01-10 14:07:33,471][00511] All inference workers are ready! Signal rollout workers to start!
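
Note: the startup sequence reflects the asynchronous layout this run uses: one learner process, one batched inference worker, and eight CPU rollout workers, wired together with queues and watched via heartbeats. A toy sketch of the rollout side of such a layout follows (hypothetical; the queue wiring and the CartPole stand-in env are illustrative, not Sample Factory's API):

    import multiprocessing as mp
    import gymnasium as gym

    # Toy rollout worker in a learner/inference/rollout layout, a sketch of
    # the idea rather than the library's implementation. The worker sends each
    # observation to the shared inference process and waits for an action;
    # every transition is queued for the learner. CartPole stands in for Doom.
    def rollout_worker(idx: int, obs_q: mp.Queue, act_q: mp.Queue, traj_q: mp.Queue):
        env = gym.make("CartPole-v1")
        obs, _ = env.reset()
        while True:
            obs_q.put((idx, obs))                        # request an action
            action = act_q.get()                         # reply from batched inference
            obs, reward, terminated, truncated, _ = env.step(action)
            traj_q.put((idx, obs, reward, terminated))   # stream experience to learner
            if terminated or truncated:
                obs, _ = env.reset()

Batching requests from several rollout workers is what keeps the single GPU inference process busy (the "min num requests: 2" line earlier in the log).
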
[2025-01-10 14:07:33,664][02467] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-10 14:07:33,665][02473] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-10 14:07:33,669][02469] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-10 14:07:33,667][02472] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-10 14:07:33,674][02471] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-10 14:07:33,667][02468] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-10 14:07:33,680][02466] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-10 14:07:33,684][02470] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-10 14:07:34,334][02466] Decorrelating experience for 0 frames...
[2025-01-10 14:07:34,741][02466] Decorrelating experience for 32 frames...
[2025-01-10 14:07:34,935][02469] Decorrelating experience for 0 frames...
[2025-01-10 14:07:34,938][02467] Decorrelating experience for 0 frames...
[2025-01-10 14:07:34,932][02472] Decorrelating experience for 0 frames...
[2025-01-10 14:07:35,524][02466] Decorrelating experience for 64 frames...
[2025-01-10 14:07:35,730][02471] Decorrelating experience for 0 frames...
[2025-01-10 14:07:36,155][02472] Decorrelating experience for 32 frames...
[2025-01-10 14:07:36,161][02467] Decorrelating experience for 32 frames...
[2025-01-10 14:07:36,201][02473] Decorrelating experience for 0 frames...
[2025-01-10 14:07:36,657][00511] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-01-10 14:07:36,735][02469] Decorrelating experience for 32 frames...
[2025-01-10 14:07:36,819][02466] Decorrelating experience for 96 frames...
[2025-01-10 14:07:37,103][02471] Decorrelating experience for 32 frames...
[2025-01-10 14:07:37,106][02468] Decorrelating experience for 0 frames...
[2025-01-10 14:07:37,598][02470] Decorrelating experience for 0 frames...
[2025-01-10 14:07:38,005][02467] Decorrelating experience for 64 frames...
[2025-01-10 14:07:38,575][02472] Decorrelating experience for 64 frames...
[2025-01-10 14:07:38,628][02468] Decorrelating experience for 32 frames...
[2025-01-10 14:07:38,730][02473] Decorrelating experience for 32 frames...
[2025-01-10 14:07:39,174][02469] Decorrelating experience for 64 frames...
[2025-01-10 14:07:39,276][02471] Decorrelating experience for 64 frames...
[2025-01-10 14:07:39,713][02470] Decorrelating experience for 32 frames...
[2025-01-10 14:07:40,327][02467] Decorrelating experience for 96 frames...
[2025-01-10 14:07:41,156][02472] Decorrelating experience for 96 frames...
[2025-01-10 14:07:41,659][00511] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 8.8. Samples: 44. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-01-10 14:07:41,665][00511] Avg episode reward: [(0, '1.165')]
[2025-01-10 14:07:41,914][02473] Decorrelating experience for 64 frames...
[2025-01-10 14:07:41,984][02469] Decorrelating experience for 96 frames...
[2025-01-10 14:07:42,508][02468] Decorrelating experience for 64 frames...
[2025-01-10 14:07:42,603][02471] Decorrelating experience for 96 frames...
[2025-01-10 14:07:45,834][02452] Signal inference workers to stop experience collection...
[2025-01-10 14:07:45,844][02465] InferenceWorker_p0-w0: stopping experience collection
[2025-01-10 14:07:45,885][02473] Decorrelating experience for 96 frames...
[2025-01-10 14:07:46,004][02470] Decorrelating experience for 64 frames...
[2025-01-10 14:07:46,012][02468] Decorrelating experience for 96 frames...
[2025-01-10 14:07:46,487][02470] Decorrelating experience for 96 frames...
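
Note: each rollout worker steps through roughly a hundred warm-up frames before regular collection, logging progress every 32 frames; because the workers start and advance at different times, their trajectories end up out of phase, which is the point of decorrelation. A toy sketch of such a warm-up (a guess at the mechanism, not Sample Factory's code; gymnasium's step API is assumed):

    import gymnasium as gym

    # Hypothetical warm-up loop: burn random-action frames before collection
    # starts, printing a progress line every 32 frames like the log above.
    def decorrelate(env: gym.Env, total_frames: int = 96, log_every: int = 32):
        env.reset()
        for frame in range(total_frames + 1):
            if frame % log_every == 0:
                print(f"Decorrelating experience for {frame} frames...")
            _, _, terminated, truncated, _ = env.step(env.action_space.sample())
            if terminated or truncated:
                env.reset()
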
[2025-01-10 14:07:46,657][00511] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 204.0. Samples: 2040. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-01-10 14:07:46,659][00511] Avg episode reward: [(0, '2.724')]
[2025-01-10 14:07:48,995][02452] Signal inference workers to resume experience collection...
[2025-01-10 14:07:48,997][02465] InferenceWorker_p0-w0: resuming experience collection
[2025-01-10 14:07:51,657][00511] Fps is (10 sec: 1638.7, 60 sec: 1092.3, 300 sec: 1092.3). Total num frames: 16384. Throughput: 0: 327.7. Samples: 4916. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-01-10 14:07:51,659][00511] Avg episode reward: [(0, '3.322')]
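
Note: each status line reports frame throughput averaged over 10-, 60-, and 300-second sliding windows, which is why the very first report shows nan: with a single sample the time delta is zero. A minimal sketch of such a windowed-rate tracker (illustrative, not the library's reporting code):

    import collections
    import time

    # Hypothetical windowed-FPS tracker: store (timestamp, total_frames)
    # samples and divide frame deltas by time deltas per window.
    class FpsTracker:
        def __init__(self, windows=(10, 60, 300)):
            self.windows = windows
            self.samples = collections.deque()  # (timestamp, total_frames), oldest first

        def record(self, total_frames):
            now = time.time()
            self.samples.append((now, total_frames))
            while now - self.samples[0][0] > max(self.windows):
                self.samples.popleft()          # drop samples older than the widest window

        def rates(self):
            now, latest = self.samples[-1]
            out = {}
            for w in self.windows:
                t, f = next(((t, f) for t, f in self.samples if now - t <= w),
                            self.samples[-1])   # oldest sample inside window w
                dt = now - t
                out[w] = (latest - f) / dt if dt > 0 else float("nan")
            return out

Calling record(total_num_frames) at each report interval and then rates() yields the three FPS figures, with nan until a second sample exists, matching the log.
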
[2025-01-10 14:07:56,568][02465] Updated weights for policy 0, policy_version 10 (0.0145)
[2025-01-10 14:07:56,657][00511] Fps is (10 sec: 4096.0, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 40960. Throughput: 0: 426.0. Samples: 8520. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-01-10 14:07:56,662][00511] Avg episode reward: [(0, '4.040')]
[2025-01-10 14:08:01,659][00511] Fps is (10 sec: 3685.9, 60 sec: 2129.8, 300 sec: 2129.8). Total num frames: 53248. Throughput: 0: 539.2. Samples: 13480. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-01-10 14:08:01,666][00511] Avg episode reward: [(0, '4.449')]
[2025-01-10 14:08:06,657][00511] Fps is (10 sec: 3686.4, 60 sec: 2594.1, 300 sec: 2594.1). Total num frames: 77824. Throughput: 0: 666.3. Samples: 19990. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:08:06,660][00511] Avg episode reward: [(0, '4.232')]
[2025-01-10 14:08:07,398][02465] Updated weights for policy 0, policy_version 20 (0.0034)
[2025-01-10 14:08:11,657][00511] Fps is (10 sec: 4915.8, 60 sec: 2925.7, 300 sec: 2925.7). Total num frames: 102400. Throughput: 0: 676.1. Samples: 23664. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:08:11,660][00511] Avg episode reward: [(0, '4.240')]
[2025-01-10 14:08:11,667][02452] Saving new best policy, reward=4.240!
[2025-01-10 14:08:16,657][00511] Fps is (10 sec: 3686.4, 60 sec: 2867.2, 300 sec: 2867.2). Total num frames: 114688. Throughput: 0: 735.3. Samples: 29412. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-10 14:08:16,662][00511] Avg episode reward: [(0, '4.356')]
[2025-01-10 14:08:16,667][02452] Saving new best policy, reward=4.356!
[2025-01-10 14:08:19,110][02465] Updated weights for policy 0, policy_version 30 (0.0019)
[2025-01-10 14:08:21,657][00511] Fps is (10 sec: 2867.2, 60 sec: 2912.7, 300 sec: 2912.7). Total num frames: 131072. Throughput: 0: 756.1. Samples: 34026. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-10 14:08:21,660][00511] Avg episode reward: [(0, '4.351')]
[2025-01-10 14:08:26,658][00511] Fps is (10 sec: 4095.9, 60 sec: 3112.9, 300 sec: 3112.9). Total num frames: 155648. Throughput: 0: 831.1. Samples: 37440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-10 14:08:26,660][00511] Avg episode reward: [(0, '4.319')]
[2025-01-10 14:08:27,912][02465] Updated weights for policy 0, policy_version 40 (0.0015)
[2025-01-10 14:08:31,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3202.3, 300 sec: 3202.3). Total num frames: 176128. Throughput: 0: 934.8. Samples: 44104. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:08:31,663][00511] Avg episode reward: [(0, '4.295')]
[2025-01-10 14:08:36,657][00511] Fps is (10 sec: 3686.5, 60 sec: 3208.5, 300 sec: 3208.5). Total num frames: 192512. Throughput: 0: 973.4. Samples: 48718. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:08:36,660][00511] Avg episode reward: [(0, '4.376')]
[2025-01-10 14:08:36,667][02452] Saving new best policy, reward=4.376!
[2025-01-10 14:08:38,947][02465] Updated weights for policy 0, policy_version 50 (0.0033)
[2025-01-10 14:08:41,657][00511] Fps is (10 sec: 4096.1, 60 sec: 3618.3, 300 sec: 3339.8). Total num frames: 217088. Throughput: 0: 974.1. Samples: 52354. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-10 14:08:41,660][00511] Avg episode reward: [(0, '4.562')]
[2025-01-10 14:08:41,671][02452] Saving new best policy, reward=4.562!
[2025-01-10 14:08:46,661][00511] Fps is (10 sec: 4503.8, 60 sec: 3959.2, 300 sec: 3393.6). Total num frames: 237568. Throughput: 0: 1025.2. Samples: 59616. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:08:46,664][00511] Avg episode reward: [(0, '4.546')]
[2025-01-10 14:08:48,283][02465] Updated weights for policy 0, policy_version 60 (0.0015)
[2025-01-10 14:08:51,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3386.0). Total num frames: 253952. Throughput: 0: 982.3. Samples: 64192. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-10 14:08:51,660][00511] Avg episode reward: [(0, '4.459')]
[2025-01-10 14:08:56,657][00511] Fps is (10 sec: 3687.9, 60 sec: 3891.2, 300 sec: 3430.4). Total num frames: 274432. Throughput: 0: 965.6. Samples: 67116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:08:56,664][00511] Avg episode reward: [(0, '4.451')]
[2025-01-10 14:08:58,728][02465] Updated weights for policy 0, policy_version 70 (0.0036)
[2025-01-10 14:09:01,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4096.1, 300 sec: 3517.7). Total num frames: 299008. Throughput: 0: 998.5. Samples: 74346. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:09:01,660][00511] Avg episode reward: [(0, '4.447')]
[2025-01-10 14:09:01,669][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000073_299008.pth...
[2025-01-10 14:09:06,657][00511] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3504.4). Total num frames: 315392. Throughput: 0: 1018.7. Samples: 79868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:09:06,660][00511] Avg episode reward: [(0, '4.404')]
[2025-01-10 14:09:09,968][02465] Updated weights for policy 0, policy_version 80 (0.0025)
[2025-01-10 14:09:11,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3492.4). Total num frames: 331776. Throughput: 0: 992.8. Samples: 82116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:09:11,664][00511] Avg episode reward: [(0, '4.379')]
[2025-01-10 14:09:16,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3563.5). Total num frames: 356352. Throughput: 0: 996.1. Samples: 88928. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:09:16,660][00511] Avg episode reward: [(0, '4.470')]
[2025-01-10 14:09:18,569][02465] Updated weights for policy 0, policy_version 90 (0.0021)
[2025-01-10 14:09:21,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3588.9). Total num frames: 376832. Throughput: 0: 1040.4. Samples: 95536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:09:21,662][00511] Avg episode reward: [(0, '4.457')]
[2025-01-10 14:09:26,658][00511] Fps is (10 sec: 3686.1, 60 sec: 3959.4, 300 sec: 3574.7). Total num frames: 393216. Throughput: 0: 1007.0. Samples: 97668. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-10 14:09:26,662][00511] Avg episode reward: [(0, '4.554')]
[2025-01-10 14:09:29,785][02465] Updated weights for policy 0, policy_version 100 (0.0026)
[2025-01-10 14:09:31,657][00511] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3633.0). Total num frames: 417792. Throughput: 0: 980.4. Samples: 103732. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-10 14:09:31,662][00511] Avg episode reward: [(0, '4.647')]
[2025-01-10 14:09:31,669][02452] Saving new best policy, reward=4.647!
[2025-01-10 14:09:36,657][00511] Fps is (10 sec: 4505.9, 60 sec: 4096.0, 300 sec: 3652.3). Total num frames: 438272. Throughput: 0: 1038.6. Samples: 110928. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-10 14:09:36,660][00511] Avg episode reward: [(0, '4.745')]
[2025-01-10 14:09:36,669][02452] Saving new best policy, reward=4.745!
[2025-01-10 14:09:39,394][02465] Updated weights for policy 0, policy_version 110 (0.0022)
[2025-01-10 14:09:41,657][00511] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3637.2). Total num frames: 454656. Throughput: 0: 1026.8. Samples: 113320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:09:41,661][00511] Avg episode reward: [(0, '4.665')]
[2025-01-10 14:09:46,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.7, 300 sec: 3654.9). Total num frames: 475136. Throughput: 0: 980.3. Samples: 118458. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-10 14:09:46,663][00511] Avg episode reward: [(0, '4.436')]
[2025-01-10 14:09:49,725][02465] Updated weights for policy 0, policy_version 120 (0.0013)
[2025-01-10 14:09:51,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3701.6). Total num frames: 499712. Throughput: 0: 1016.8. Samples: 125626. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-10 14:09:51,660][00511] Avg episode reward: [(0, '4.430')]
[2025-01-10 14:09:56,662][00511] Fps is (10 sec: 4094.0, 60 sec: 4027.4, 300 sec: 3686.3). Total num frames: 516096. Throughput: 0: 1042.3. Samples: 129026. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:09:56,669][00511] Avg episode reward: [(0, '4.452')]
[2025-01-10 14:10:00,614][02465] Updated weights for policy 0, policy_version 130 (0.0031)
[2025-01-10 14:10:01,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3700.5). Total num frames: 536576. Throughput: 0: 991.8. Samples: 133558. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:10:01,663][00511] Avg episode reward: [(0, '4.392')]
[2025-01-10 14:10:06,657][00511] Fps is (10 sec: 4507.8, 60 sec: 4096.0, 300 sec: 3741.0). Total num frames: 561152. Throughput: 0: 1005.6. Samples: 140788. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:10:06,660][00511] Avg episode reward: [(0, '4.491')]
[2025-01-10 14:10:09,157][02465] Updated weights for policy 0, policy_version 140 (0.0021)
[2025-01-10 14:10:11,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 3752.5). Total num frames: 581632. Throughput: 0: 1039.3. Samples: 144438. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-10 14:10:11,663][00511] Avg episode reward: [(0, '4.514')]
[2025-01-10 14:10:16,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3737.6). Total num frames: 598016. Throughput: 0: 1014.4. Samples: 149380. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-10 14:10:16,662][00511] Avg episode reward: [(0, '4.347')]
[2025-01-10 14:10:21,135][02465] Updated weights for policy 0, policy_version 150 (0.0026)
[2025-01-10 14:10:21,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3723.6). Total num frames: 614400. Throughput: 0: 965.7. Samples: 154384. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:10:21,660][00511] Avg episode reward: [(0, '4.359')]
[2025-01-10 14:10:26,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3710.5). Total num frames: 630784. Throughput: 0: 961.6. Samples: 156592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-10 14:10:26,660][00511] Avg episode reward: [(0, '4.388')]
[2025-01-10 14:10:31,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3698.1). Total num frames: 647168. Throughput: 0: 965.6. Samples: 161910. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-10 14:10:31,660][00511] Avg episode reward: [(0, '4.436')]
[2025-01-10 14:10:33,871][02465] Updated weights for policy 0, policy_version 160 (0.0033)
[2025-01-10 14:10:36,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3709.2). Total num frames: 667648. Throughput: 0: 931.2. Samples: 167530. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:10:36,664][00511] Avg episode reward: [(0, '4.496')]
[2025-01-10 14:10:41,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3741.8). Total num frames: 692224. Throughput: 0: 937.1. Samples: 171190. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-10 14:10:41,659][00511] Avg episode reward: [(0, '4.585')]
[2025-01-10 14:10:42,302][02465] Updated weights for policy 0, policy_version 170 (0.0014)
[2025-01-10 14:10:46,657][00511] Fps is (10 sec: 4505.5, 60 sec: 3959.5, 300 sec: 3751.1). Total num frames: 712704. Throughput: 0: 984.8. Samples: 177874. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-10 14:10:46,663][00511] Avg episode reward: [(0, '4.551')]
[2025-01-10 14:10:51,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3717.9). Total num frames: 724992. Throughput: 0: 926.3. Samples: 182470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:10:51,663][00511] Avg episode reward: [(0, '4.660')]
[2025-01-10 14:10:53,613][02465] Updated weights for policy 0, policy_version 180 (0.0026)
[2025-01-10 14:10:56,657][00511] Fps is (10 sec: 3686.5, 60 sec: 3891.5, 300 sec: 3747.8). Total num frames: 749568. Throughput: 0: 925.6. Samples: 186088. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:10:56,664][00511] Avg episode reward: [(0, '4.701')]
[2025-01-10 14:11:01,658][00511] Fps is (10 sec: 4915.0, 60 sec: 3959.4, 300 sec: 3776.3). Total num frames: 774144. Throughput: 0: 980.0. Samples: 193480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-10 14:11:01,663][00511] Avg episode reward: [(0, '4.891')]
[2025-01-10 14:11:01,678][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000189_774144.pth...
[2025-01-10 14:11:01,828][02452] Saving new best policy, reward=4.891!
[2025-01-10 14:11:02,292][02465] Updated weights for policy 0, policy_version 190 (0.0017)
[2025-01-10 14:11:06,661][00511] Fps is (10 sec: 4094.4, 60 sec: 3822.7, 300 sec: 3764.3). Total num frames: 790528. Throughput: 0: 972.5. Samples: 198148. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:11:06,663][00511] Avg episode reward: [(0, '4.973')]
[2025-01-10 14:11:06,669][02452] Saving new best policy, reward=4.973!
[2025-01-10 14:11:11,657][00511] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3772.1). Total num frames: 811008. Throughput: 0: 988.0. Samples: 201050. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-10 14:11:11,660][00511] Avg episode reward: [(0, '4.926')]
[2025-01-10 14:11:13,039][02465] Updated weights for policy 0, policy_version 200 (0.0014)
[2025-01-10 14:11:16,657][00511] Fps is (10 sec: 4507.3, 60 sec: 3959.5, 300 sec: 3798.1). Total num frames: 835584. Throughput: 0: 1031.5. Samples: 208328. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-10 14:11:16,660][00511] Avg episode reward: [(0, '4.698')]
[2025-01-10 14:11:21,659][00511] Fps is (10 sec: 4095.5, 60 sec: 3959.4, 300 sec: 3786.5). Total num frames: 851968. Throughput: 0: 1026.1. Samples: 213704. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:11:21,661][00511] Avg episode reward: [(0, '4.649')]
[2025-01-10 14:11:23,896][02465] Updated weights for policy 0, policy_version 210 (0.0025)
[2025-01-10 14:11:26,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3775.4). Total num frames: 868352. Throughput: 0: 996.2. Samples: 216020. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:11:26,665][00511] Avg episode reward: [(0, '4.647')]
[2025-01-10 14:11:31,657][00511] Fps is (10 sec: 4096.5, 60 sec: 4096.0, 300 sec: 3799.7). Total num frames: 892928. Throughput: 0: 1003.1. Samples: 223012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:11:31,660][00511] Avg episode reward: [(0, '4.504')]
[2025-01-10 14:11:32,763][02465] Updated weights for policy 0, policy_version 220 (0.0024)
[2025-01-10 14:11:36,658][00511] Fps is (10 sec: 4505.2, 60 sec: 4095.9, 300 sec: 3805.9). Total num frames: 913408. Throughput: 0: 1049.8. Samples: 229714. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:11:36,667][00511] Avg episode reward: [(0, '4.618')]
[2025-01-10 14:11:41,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3795.1). Total num frames: 929792. Throughput: 0: 1016.6. Samples: 231836. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-10 14:11:41,665][00511] Avg episode reward: [(0, '4.695')]
[2025-01-10 14:11:43,957][02465] Updated weights for policy 0, policy_version 230 (0.0017)
[2025-01-10 14:11:46,657][00511] Fps is (10 sec: 4096.4, 60 sec: 4027.8, 300 sec: 3817.5). Total num frames: 954368. Throughput: 0: 990.1. Samples: 238032. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:11:46,665][00511] Avg episode reward: [(0, '4.837')]
[2025-01-10 14:11:51,657][00511] Fps is (10 sec: 4915.2, 60 sec: 4232.5, 300 sec: 3839.0). Total num frames: 978944. Throughput: 0: 1046.5. Samples: 245236. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-10 14:11:51,667][00511] Avg episode reward: [(0, '4.731')]
[2025-01-10 14:11:52,638][02465] Updated weights for policy 0, policy_version 240 (0.0017)
[2025-01-10 14:11:56,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3812.4). Total num frames: 991232. Throughput: 0: 1035.5. Samples: 247646. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:11:56,665][00511] Avg episode reward: [(0, '4.880')]
[2025-01-10 14:12:01,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3817.8). Total num frames: 1011712. Throughput: 0: 991.5. Samples: 252946. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:12:01,664][00511] Avg episode reward: [(0, '5.064')]
[2025-01-10 14:12:01,675][02452] Saving new best policy, reward=5.064!
[2025-01-10 14:12:03,610][02465] Updated weights for policy 0, policy_version 250 (0.0016)
[2025-01-10 14:12:06,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4096.3, 300 sec: 3838.1). Total num frames: 1036288. Throughput: 0: 1032.9. Samples: 260184. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:12:06,660][00511] Avg episode reward: [(0, '4.747')]
[2025-01-10 14:12:11,659][00511] Fps is (10 sec: 4504.7, 60 sec: 4095.9, 300 sec: 3842.8). Total num frames: 1056768. Throughput: 0: 1055.6. Samples: 263522. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-10 14:12:11,664][00511] Avg episode reward: [(0, '4.577')]
[2025-01-10 14:12:14,108][02465] Updated weights for policy 0, policy_version 260 (0.0029)
[2025-01-10 14:12:16,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3832.7). Total num frames: 1073152. Throughput: 0: 999.5. Samples: 267988. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-10 14:12:16,660][00511] Avg episode reward: [(0, '4.977')]
[2025-01-10 14:12:21,657][00511] Fps is (10 sec: 4096.8, 60 sec: 4096.1, 300 sec: 3851.7). Total num frames: 1097728. Throughput: 0: 1011.0. Samples: 275208. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-10 14:12:21,660][00511] Avg episode reward: [(0, '5.288')]
[2025-01-10 14:12:21,666][02452] Saving new best policy, reward=5.288!
[2025-01-10 14:12:23,070][02465] Updated weights for policy 0, policy_version 270 (0.0021)
[2025-01-10 14:12:26,661][00511] Fps is (10 sec: 4503.8, 60 sec: 4164.0, 300 sec: 3855.8). Total num frames: 1118208. Throughput: 0: 1045.2. Samples: 278872. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-10 14:12:26,667][00511] Avg episode reward: [(0, '5.458')]
[2025-01-10 14:12:26,669][02452] Saving new best policy, reward=5.458!
[2025-01-10 14:12:31,661][00511] Fps is (10 sec: 3684.9, 60 sec: 4027.5, 300 sec: 3846.0). Total num frames: 1134592. Throughput: 0: 1013.6. Samples: 283648. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:12:31,664][00511] Avg episode reward: [(0, '5.255')]
[2025-01-10 14:12:34,212][02465] Updated weights for policy 0, policy_version 280 (0.0021)
[2025-01-10 14:12:36,657][00511] Fps is (10 sec: 3687.8, 60 sec: 4027.8, 300 sec: 3915.5). Total num frames: 1155072. Throughput: 0: 996.9. Samples: 290096. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:12:36,665][00511] Avg episode reward: [(0, '4.946')]
[2025-01-10 14:12:41,657][00511] Fps is (10 sec: 4507.4, 60 sec: 4164.3, 300 sec: 3998.8). Total num frames: 1179648. Throughput: 0: 1023.1. Samples: 293686. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:12:41,664][00511] Avg episode reward: [(0, '5.345')]
[2025-01-10 14:12:42,759][02465] Updated weights for policy 0, policy_version 290 (0.0024)
[2025-01-10 14:12:46,659][00511] Fps is (10 sec: 4095.4, 60 sec: 4027.6, 300 sec: 3998.8). Total num frames: 1196032. Throughput: 0: 1033.7. Samples: 299462. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-10 14:12:46,665][00511] Avg episode reward: [(0, '5.428')]
[2025-01-10 14:12:51,657][00511] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 1216512. Throughput: 0: 995.4. Samples: 304978. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-10 14:12:51,664][00511] Avg episode reward: [(0, '5.521')]
[2025-01-10 14:12:51,674][02452] Saving new best policy, reward=5.521!
[2025-01-10 14:12:53,826][02465] Updated weights for policy 0, policy_version 300 (0.0013)
[2025-01-10 14:12:56,657][00511] Fps is (10 sec: 4506.2, 60 sec: 4164.3, 300 sec: 4026.6). Total num frames: 1241088. Throughput: 0: 1003.0. Samples: 308656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:12:56,665][00511] Avg episode reward: [(0, '5.659')]
[2025-01-10 14:12:56,667][02452] Saving new best policy, reward=5.659!
[2025-01-10 14:13:01,658][00511] Fps is (10 sec: 4095.6, 60 sec: 4095.9, 300 sec: 3998.8). Total num frames: 1257472. Throughput: 0: 1048.1. Samples: 315156. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-10 14:13:01,664][00511] Avg episode reward: [(0, '5.414')]
[2025-01-10 14:13:01,678][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000308_1261568.pth...
[2025-01-10 14:13:01,840][02452] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000073_299008.pth
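
Note: checkpoint housekeeping is visible here. The learner writes the new checkpoint, then deletes the oldest one, so only the most recent few stay on disk (the "Saving new best policy" lines keep a separate best-reward snapshot). A minimal keep-latest-N sketch of that pattern (the filename scheme is copied from the log; the function itself is illustrative, not Sample Factory's code):

    import glob
    import os
    import torch

    # Hypothetical keep-latest-N rotation mirroring the Saving/Removing lines
    # above. policy_version and env_steps would come from the trainer; the
    # zero-padding keeps lexicographic order equal to version order.
    def save_with_rotation(model: torch.nn.Module, ckpt_dir: str,
                           policy_version: int, env_steps: int, keep_last: int = 2):
        path = os.path.join(ckpt_dir, f"checkpoint_{policy_version:09d}_{env_steps}.pth")
        print(f"Saving {path}...")
        torch.save(model.state_dict(), path)
        for old in sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))[:-keep_last]:
            print(f"Removing {old}")
            os.remove(old)

With keep_last=2 this reproduces the behavior seen in the log: after checkpoint_000000308 is written, checkpoint_000000073 is removed and the two newest remain.
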
|
[2025-01-10 14:13:04,425][02465] Updated weights for policy 0, policy_version 310 (0.0019) |
|
[2025-01-10 14:13:06,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 1277952. Throughput: 0: 992.5. Samples: 319870. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:13:06,660][00511] Avg episode reward: [(0, '5.679')] |
|
[2025-01-10 14:13:06,663][02452] Saving new best policy, reward=5.679! |
|
[2025-01-10 14:13:11,657][00511] Fps is (10 sec: 4096.5, 60 sec: 4027.9, 300 sec: 4012.7). Total num frames: 1298432. Throughput: 0: 990.0. Samples: 323416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:13:11,661][00511] Avg episode reward: [(0, '5.836')] |
|
[2025-01-10 14:13:11,718][02452] Saving new best policy, reward=5.836! |
|
[2025-01-10 14:13:13,471][02465] Updated weights for policy 0, policy_version 320 (0.0017) |
|
[2025-01-10 14:13:16,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4040.5). Total num frames: 1323008. Throughput: 0: 1044.9. Samples: 330666. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:13:16,660][00511] Avg episode reward: [(0, '5.669')] |
|
[2025-01-10 14:13:21,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 1335296. Throughput: 0: 1003.6. Samples: 335256. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:13:21,660][00511] Avg episode reward: [(0, '6.018')] |
|
[2025-01-10 14:13:21,678][02452] Saving new best policy, reward=6.018! |
|
[2025-01-10 14:13:24,722][02465] Updated weights for policy 0, policy_version 330 (0.0024) |
|
[2025-01-10 14:13:26,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4028.0, 300 sec: 4012.7). Total num frames: 1359872. Throughput: 0: 990.3. Samples: 338250. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:13:26,665][00511] Avg episode reward: [(0, '6.016')] |
|
[2025-01-10 14:13:31,657][00511] Fps is (10 sec: 4915.2, 60 sec: 4164.5, 300 sec: 4040.5). Total num frames: 1384448. Throughput: 0: 1025.9. Samples: 345624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:13:31,660][00511] Avg episode reward: [(0, '5.905')] |
|
[2025-01-10 14:13:33,259][02465] Updated weights for policy 0, policy_version 340 (0.0019) |
|
[2025-01-10 14:13:36,657][00511] Fps is (10 sec: 4095.9, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 1400832. Throughput: 0: 1024.7. Samples: 351088. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-01-10 14:13:36,661][00511] Avg episode reward: [(0, '6.195')] |
|
[2025-01-10 14:13:36,663][02452] Saving new best policy, reward=6.195! |
|
[2025-01-10 14:13:41,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4012.7). Total num frames: 1421312. Throughput: 0: 992.5. Samples: 353320. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:13:41,660][00511] Avg episode reward: [(0, '6.364')] |
|
[2025-01-10 14:13:41,669][02452] Saving new best policy, reward=6.364! |
|
[2025-01-10 14:13:44,178][02465] Updated weights for policy 0, policy_version 350 (0.0018) |
|
[2025-01-10 14:13:46,657][00511] Fps is (10 sec: 4096.1, 60 sec: 4096.1, 300 sec: 4026.6). Total num frames: 1441792. Throughput: 0: 1006.8. Samples: 360460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:13:46,660][00511] Avg episode reward: [(0, '6.303')] |
|
[2025-01-10 14:13:51,660][00511] Fps is (10 sec: 4094.8, 60 sec: 4095.8, 300 sec: 4026.5). Total num frames: 1462272. Throughput: 0: 1036.4. Samples: 366512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-01-10 14:13:51,663][00511] Avg episode reward: [(0, '6.240')] |
|
[2025-01-10 14:13:55,558][02465] Updated weights for policy 0, policy_version 360 (0.0015) |
|
[2025-01-10 14:13:56,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 1474560. Throughput: 0: 1002.3. Samples: 368520. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-01-10 14:13:56,661][00511] Avg episode reward: [(0, '6.329')] |
|
[2025-01-10 14:14:01,657][00511] Fps is (10 sec: 3687.4, 60 sec: 4027.8, 300 sec: 4012.7). Total num frames: 1499136. Throughput: 0: 976.3. Samples: 374598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:14:01,662][00511] Avg episode reward: [(0, '6.879')] |
|
[2025-01-10 14:14:01,670][02452] Saving new best policy, reward=6.879! |
|
[2025-01-10 14:14:04,534][02465] Updated weights for policy 0, policy_version 370 (0.0022) |
|
[2025-01-10 14:14:06,657][00511] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 1523712. Throughput: 0: 1036.1. Samples: 381880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:14:06,662][00511] Avg episode reward: [(0, '7.192')] |
|
[2025-01-10 14:14:06,667][02452] Saving new best policy, reward=7.192! |
|
[2025-01-10 14:14:11,660][00511] Fps is (10 sec: 4094.8, 60 sec: 4027.5, 300 sec: 4012.7). Total num frames: 1540096. Throughput: 0: 1021.2. Samples: 384206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:14:11,663][00511] Avg episode reward: [(0, '6.983')] |
|
[2025-01-10 14:14:15,611][02465] Updated weights for policy 0, policy_version 380 (0.0043) |
|
[2025-01-10 14:14:16,657][00511] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 4012.7). Total num frames: 1560576. Throughput: 0: 975.8. Samples: 389534. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:14:16,660][00511] Avg episode reward: [(0, '7.187')] |
|
[2025-01-10 14:14:21,658][00511] Fps is (10 sec: 3687.3, 60 sec: 4027.7, 300 sec: 4012.7). Total num frames: 1576960. Throughput: 0: 985.2. Samples: 395420. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-01-10 14:14:21,662][00511] Avg episode reward: [(0, '7.753')] |
|
[2025-01-10 14:14:21,670][02452] Saving new best policy, reward=7.753! |
|
[2025-01-10 14:14:26,660][00511] Fps is (10 sec: 2866.4, 60 sec: 3822.7, 300 sec: 3971.0). Total num frames: 1589248. Throughput: 0: 978.2. Samples: 397340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:14:26,665][00511] Avg episode reward: [(0, '7.867')] |
|
[2025-01-10 14:14:26,666][02452] Saving new best policy, reward=7.867! |
|
[2025-01-10 14:14:28,459][02465] Updated weights for policy 0, policy_version 390 (0.0021) |
|
[2025-01-10 14:14:31,657][00511] Fps is (10 sec: 2867.3, 60 sec: 3686.4, 300 sec: 3957.2). Total num frames: 1605632. Throughput: 0: 909.1. Samples: 401370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:14:31,660][00511] Avg episode reward: [(0, '8.294')] |
|
[2025-01-10 14:14:31,675][02452] Saving new best policy, reward=8.294! |
|
[2025-01-10 14:14:36,657][00511] Fps is (10 sec: 4097.3, 60 sec: 3822.9, 300 sec: 3984.9). Total num frames: 1630208. Throughput: 0: 929.0. Samples: 408316. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:14:36,665][00511] Avg episode reward: [(0, '7.915')] |
|
[2025-01-10 14:14:37,895][02465] Updated weights for policy 0, policy_version 400 (0.0034) |
|
[2025-01-10 14:14:41,657][00511] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3998.8). Total num frames: 1654784. Throughput: 0: 966.6. Samples: 412018. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-01-10 14:14:41,662][00511] Avg episode reward: [(0, '8.397')] |
|
[2025-01-10 14:14:41,674][02452] Saving new best policy, reward=8.397! |
|
[2025-01-10 14:14:46,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3957.2). Total num frames: 1667072. Throughput: 0: 940.3. Samples: 416910. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-01-10 14:14:46,660][00511] Avg episode reward: [(0, '8.009')] |
|
[2025-01-10 14:14:49,421][02465] Updated weights for policy 0, policy_version 410 (0.0024) |
|
[2025-01-10 14:14:51,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3971.1). Total num frames: 1687552. Throughput: 0: 910.9. Samples: 422872. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:14:51,662][00511] Avg episode reward: [(0, '8.370')] |
|
[2025-01-10 14:14:56,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 1712128. Throughput: 0: 935.8. Samples: 426316. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:14:56,661][00511] Avg episode reward: [(0, '8.850')] |
|
[2025-01-10 14:14:56,667][02452] Saving new best policy, reward=8.850! |
|
[2025-01-10 14:14:58,197][02465] Updated weights for policy 0, policy_version 420 (0.0016) |
|
[2025-01-10 14:15:01,659][00511] Fps is (10 sec: 4095.2, 60 sec: 3822.8, 300 sec: 3957.1). Total num frames: 1728512. Throughput: 0: 951.4. Samples: 432348. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-01-10 14:15:01,661][00511] Avg episode reward: [(0, '9.362')] |
|
[2025-01-10 14:15:01,677][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000422_1728512.pth... |
|
[2025-01-10 14:15:01,867][02452] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000189_774144.pth |
|
[2025-01-10 14:15:01,882][02452] Saving new best policy, reward=9.362! |
|
[2025-01-10 14:15:06,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3943.3). Total num frames: 1744896. Throughput: 0: 930.8. Samples: 437306. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:15:06,660][00511] Avg episode reward: [(0, '9.755')] |
|
[2025-01-10 14:15:06,669][02452] Saving new best policy, reward=9.755! |
|
[2025-01-10 14:15:09,367][02465] Updated weights for policy 0, policy_version 430 (0.0031) |
|
[2025-01-10 14:15:11,657][00511] Fps is (10 sec: 4096.8, 60 sec: 3823.1, 300 sec: 3971.0). Total num frames: 1769472. Throughput: 0: 967.8. Samples: 440886. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:15:11,662][00511] Avg episode reward: [(0, '10.264')] |
|
[2025-01-10 14:15:11,668][02452] Saving new best policy, reward=10.264! |
|
[2025-01-10 14:15:16,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3984.9). Total num frames: 1789952. Throughput: 0: 1034.9. Samples: 447942. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:15:16,665][00511] Avg episode reward: [(0, '10.597')] |
|
[2025-01-10 14:15:16,667][02452] Saving new best policy, reward=10.597! |
|
[2025-01-10 14:15:19,643][02465] Updated weights for policy 0, policy_version 440 (0.0023) |
|
[2025-01-10 14:15:21,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3984.9). Total num frames: 1806336. Throughput: 0: 980.8. Samples: 452452. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:15:21,665][00511] Avg episode reward: [(0, '9.879')] |
|
[2025-01-10 14:15:26,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4027.9, 300 sec: 4012.7). Total num frames: 1830912. Throughput: 0: 970.0. Samples: 455668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:15:26,659][00511] Avg episode reward: [(0, '10.007')] |
|
[2025-01-10 14:15:29,139][02465] Updated weights for policy 0, policy_version 450 (0.0024) |
|
[2025-01-10 14:15:31,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 1851392. Throughput: 0: 1025.3. Samples: 463048. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-01-10 14:15:31,660][00511] Avg episode reward: [(0, '9.918')] |
|
[2025-01-10 14:15:36,659][00511] Fps is (10 sec: 3685.7, 60 sec: 3959.3, 300 sec: 3984.9). Total num frames: 1867776. Throughput: 0: 1007.5. Samples: 468212. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:15:36,668][00511] Avg episode reward: [(0, '10.659')] |
|
[2025-01-10 14:15:36,670][02452] Saving new best policy, reward=10.659! |
|
[2025-01-10 14:15:40,255][02465] Updated weights for policy 0, policy_version 460 (0.0026) |
|
[2025-01-10 14:15:41,657][00511] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 1888256. Throughput: 0: 987.6. Samples: 470760. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-01-10 14:15:41,660][00511] Avg episode reward: [(0, '10.748')] |
|
[2025-01-10 14:15:41,666][02452] Saving new best policy, reward=10.748! |
|
[2025-01-10 14:15:46,657][00511] Fps is (10 sec: 4506.5, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 1912832. Throughput: 0: 1016.4. Samples: 478082. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:15:46,661][00511] Avg episode reward: [(0, '11.800')] |
|
[2025-01-10 14:15:46,669][02452] Saving new best policy, reward=11.800! |
|
[2025-01-10 14:15:48,658][02465] Updated weights for policy 0, policy_version 470 (0.0015) |
|
[2025-01-10 14:15:51,657][00511] Fps is (10 sec: 4505.7, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 1933312. Throughput: 0: 1035.6. Samples: 483908. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:15:51,666][00511] Avg episode reward: [(0, '11.848')] |
|
[2025-01-10 14:15:51,677][02452] Saving new best policy, reward=11.848! |
|
[2025-01-10 14:15:56,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 1949696. Throughput: 0: 1003.2. Samples: 486028. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-01-10 14:15:56,662][00511] Avg episode reward: [(0, '12.453')] |
|
[2025-01-10 14:15:56,665][02452] Saving new best policy, reward=12.453! |
|
[2025-01-10 14:16:00,102][02465] Updated weights for policy 0, policy_version 480 (0.0020) |
|
[2025-01-10 14:16:01,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.9, 300 sec: 3998.9). Total num frames: 1970176. Throughput: 0: 994.3. Samples: 492684. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:16:01,659][00511] Avg episode reward: [(0, '13.092')] |
|
[2025-01-10 14:16:01,729][02452] Saving new best policy, reward=13.092! |
|
[2025-01-10 14:16:06,662][00511] Fps is (10 sec: 4503.4, 60 sec: 4163.9, 300 sec: 4012.6). Total num frames: 1994752. Throughput: 0: 1049.0. Samples: 499662. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-01-10 14:16:06,666][00511] Avg episode reward: [(0, '13.673')] |
|
[2025-01-10 14:16:06,673][02452] Saving new best policy, reward=13.673! |
|
[2025-01-10 14:16:10,273][02465] Updated weights for policy 0, policy_version 490 (0.0020) |
|
[2025-01-10 14:16:11,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 2011136. Throughput: 0: 1025.3. Samples: 501808. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:16:11,662][00511] Avg episode reward: [(0, '14.858')] |
|
[2025-01-10 14:16:11,675][02452] Saving new best policy, reward=14.858! |
|
[2025-01-10 14:16:16,657][00511] Fps is (10 sec: 3688.2, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 2031616. Throughput: 0: 990.9. Samples: 507638. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-01-10 14:16:16,660][00511] Avg episode reward: [(0, '14.971')] |
|
[2025-01-10 14:16:16,665][02452] Saving new best policy, reward=14.971! |
|
[2025-01-10 14:16:19,485][02465] Updated weights for policy 0, policy_version 500 (0.0034) |
|
[2025-01-10 14:16:21,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4026.6). Total num frames: 2056192. Throughput: 0: 1038.9. Samples: 514960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:16:21,660][00511] Avg episode reward: [(0, '14.293')] |
|
[2025-01-10 14:16:26,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 2072576. Throughput: 0: 1044.0. Samples: 517738. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:16:26,665][00511] Avg episode reward: [(0, '14.122')] |
|
[2025-01-10 14:16:30,574][02465] Updated weights for policy 0, policy_version 510 (0.0027) |
|
[2025-01-10 14:16:31,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 2093056. Throughput: 0: 991.7. Samples: 522708. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:16:31,664][00511] Avg episode reward: [(0, '13.520')] |
|
[2025-01-10 14:16:36,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.4, 300 sec: 4026.6). Total num frames: 2117632. Throughput: 0: 1027.4. Samples: 530142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:16:36,663][00511] Avg episode reward: [(0, '14.262')] |
|
[2025-01-10 14:16:38,930][02465] Updated weights for policy 0, policy_version 520 (0.0024) |
|
[2025-01-10 14:16:41,658][00511] Fps is (10 sec: 4505.2, 60 sec: 4164.2, 300 sec: 4012.7). Total num frames: 2138112. Throughput: 0: 1062.9. Samples: 533860. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-01-10 14:16:41,663][00511] Avg episode reward: [(0, '14.764')] |
|
[2025-01-10 14:16:46,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 2154496. Throughput: 0: 1015.2. Samples: 538370. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:16:46,660][00511] Avg episode reward: [(0, '15.744')] |
|
[2025-01-10 14:16:46,664][02452] Saving new best policy, reward=15.744! |
|
[2025-01-10 14:16:49,912][02465] Updated weights for policy 0, policy_version 530 (0.0019) |
|
[2025-01-10 14:16:51,658][00511] Fps is (10 sec: 4096.3, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 2179072. Throughput: 0: 1014.9. Samples: 545326. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:16:51,661][00511] Avg episode reward: [(0, '16.213')] |
|
[2025-01-10 14:16:51,669][02452] Saving new best policy, reward=16.213! |
|
[2025-01-10 14:16:56,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4026.6). Total num frames: 2199552. Throughput: 0: 1045.6. Samples: 548860. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:16:56,660][00511] Avg episode reward: [(0, '16.611')] |
|
[2025-01-10 14:16:56,668][02452] Saving new best policy, reward=16.611! |
|
[2025-01-10 14:16:59,626][02465] Updated weights for policy 0, policy_version 540 (0.0022) |
|
[2025-01-10 14:17:01,657][00511] Fps is (10 sec: 3686.5, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 2215936. Throughput: 0: 1034.9. Samples: 554208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-01-10 14:17:01,663][00511] Avg episode reward: [(0, '16.402')] |
|
[2025-01-10 14:17:01,676][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000541_2215936.pth... |
|
[2025-01-10 14:17:01,836][02452] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000308_1261568.pth |
|
[2025-01-10 14:17:06,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4028.1, 300 sec: 3998.8). Total num frames: 2236416. Throughput: 0: 1008.8. Samples: 560354. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:17:06,660][00511] Avg episode reward: [(0, '15.473')] |
|
[2025-01-10 14:17:09,281][02465] Updated weights for policy 0, policy_version 550 (0.0016) |
|
[2025-01-10 14:17:11,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4026.6). Total num frames: 2260992. Throughput: 0: 1028.0. Samples: 563998. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-01-10 14:17:11,662][00511] Avg episode reward: [(0, '15.160')] |
|
[2025-01-10 14:17:16,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4012.7). Total num frames: 2281472. Throughput: 0: 1056.1. Samples: 570232. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-01-10 14:17:16,663][00511] Avg episode reward: [(0, '14.548')] |
|
[2025-01-10 14:17:20,313][02465] Updated weights for policy 0, policy_version 560 (0.0041) |
|
[2025-01-10 14:17:21,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3998.9). Total num frames: 2297856. Throughput: 0: 1009.1. Samples: 575552. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:17:21,669][00511] Avg episode reward: [(0, '13.953')] |
|
[2025-01-10 14:17:26,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4026.6). Total num frames: 2322432. Throughput: 0: 1004.6. Samples: 579068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:17:26,673][00511] Avg episode reward: [(0, '14.247')] |
|
[2025-01-10 14:17:28,692][02465] Updated weights for policy 0, policy_version 570 (0.0025) |
|
[2025-01-10 14:17:31,659][00511] Fps is (10 sec: 4504.7, 60 sec: 4164.1, 300 sec: 4026.5). Total num frames: 2342912. Throughput: 0: 1061.5. Samples: 586140. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:17:31,661][00511] Avg episode reward: [(0, '14.491')] |
|
[2025-01-10 14:17:36,657][00511] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 2359296. Throughput: 0: 1005.7. Samples: 590582. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-01-10 14:17:36,660][00511] Avg episode reward: [(0, '14.069')] |
|
[2025-01-10 14:17:39,677][02465] Updated weights for policy 0, policy_version 580 (0.0015) |
|
[2025-01-10 14:17:41,657][00511] Fps is (10 sec: 4096.9, 60 sec: 4096.1, 300 sec: 4026.6). Total num frames: 2383872. Throughput: 0: 1008.2. Samples: 594230. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:17:41,661][00511] Avg episode reward: [(0, '15.625')] |
|
[2025-01-10 14:17:46,657][00511] Fps is (10 sec: 4915.3, 60 sec: 4232.5, 300 sec: 4040.5). Total num frames: 2408448. Throughput: 0: 1053.5. Samples: 601616. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:17:46,664][00511] Avg episode reward: [(0, '15.986')] |
|
[2025-01-10 14:17:48,973][02465] Updated weights for policy 0, policy_version 590 (0.0019) |
|
[2025-01-10 14:17:51,662][00511] Fps is (10 sec: 3684.6, 60 sec: 4027.4, 300 sec: 3998.7). Total num frames: 2420736. Throughput: 0: 1027.0. Samples: 606574. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:17:51,669][00511] Avg episode reward: [(0, '16.332')] |
|
[2025-01-10 14:17:56,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 2445312. Throughput: 0: 1005.2. Samples: 609230. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:17:56,659][00511] Avg episode reward: [(0, '16.597')] |
|
[2025-01-10 14:17:59,061][02465] Updated weights for policy 0, policy_version 600 (0.0031) |
|
[2025-01-10 14:18:01,657][00511] Fps is (10 sec: 4917.6, 60 sec: 4232.5, 300 sec: 4040.5). Total num frames: 2469888. Throughput: 0: 1032.6. Samples: 616698. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:18:01,662][00511] Avg episode reward: [(0, '15.921')] |
|
[2025-01-10 14:18:06,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4026.6). Total num frames: 2486272. Throughput: 0: 1044.5. Samples: 622556. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:18:06,659][00511] Avg episode reward: [(0, '15.128')] |
|
[2025-01-10 14:18:10,076][02465] Updated weights for policy 0, policy_version 610 (0.0042) |
|
[2025-01-10 14:18:11,657][00511] Fps is (10 sec: 3276.8, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 2502656. Throughput: 0: 1016.0. Samples: 624790. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:18:11,659][00511] Avg episode reward: [(0, '15.145')] |
|
[2025-01-10 14:18:16,657][00511] Fps is (10 sec: 4505.5, 60 sec: 4164.3, 300 sec: 4054.3). Total num frames: 2531328. Throughput: 0: 1016.0. Samples: 631860. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:18:16,662][00511] Avg episode reward: [(0, '16.051')] |
|
[2025-01-10 14:18:18,424][02465] Updated weights for policy 0, policy_version 620 (0.0029) |
|
[2025-01-10 14:18:21,658][00511] Fps is (10 sec: 4505.2, 60 sec: 4164.2, 300 sec: 4026.6). Total num frames: 2547712. Throughput: 0: 1046.1. Samples: 637656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:18:21,661][00511] Avg episode reward: [(0, '15.739')] |
|
[2025-01-10 14:18:26,657][00511] Fps is (10 sec: 2867.2, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 2560000. Throughput: 0: 1004.5. Samples: 639434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:18:26,660][00511] Avg episode reward: [(0, '15.303')] |
|
[2025-01-10 14:18:31,657][00511] Fps is (10 sec: 2867.5, 60 sec: 3891.3, 300 sec: 3984.9). Total num frames: 2576384. Throughput: 0: 931.2. Samples: 643520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:18:31,664][00511] Avg episode reward: [(0, '15.898')] |
|
[2025-01-10 14:18:32,433][02465] Updated weights for policy 0, policy_version 630 (0.0027) |
|
[2025-01-10 14:18:36,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 2596864. Throughput: 0: 977.4. Samples: 650550. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:18:36,664][00511] Avg episode reward: [(0, '14.586')] |
|
[2025-01-10 14:18:41,520][02465] Updated weights for policy 0, policy_version 640 (0.0022) |
|
[2025-01-10 14:18:41,657][00511] Fps is (10 sec: 4505.5, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 2621440. Throughput: 0: 998.1. Samples: 654144. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:18:41,660][00511] Avg episode reward: [(0, '13.978')] |
|
[2025-01-10 14:18:46,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3971.1). Total num frames: 2633728. Throughput: 0: 934.1. Samples: 658732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:18:46,660][00511] Avg episode reward: [(0, '15.839')] |
|
[2025-01-10 14:18:51,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.8, 300 sec: 4012.7). Total num frames: 2658304. Throughput: 0: 956.5. Samples: 665598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:18:51,659][00511] Avg episode reward: [(0, '16.734')] |
|
[2025-01-10 14:18:51,679][02452] Saving new best policy, reward=16.734! |
|
[2025-01-10 14:18:52,008][02465] Updated weights for policy 0, policy_version 650 (0.0026) |
|
[2025-01-10 14:18:56,657][00511] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 4012.7). Total num frames: 2682880. Throughput: 0: 985.4. Samples: 669132. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:18:56,661][00511] Avg episode reward: [(0, '17.855')] |
|
[2025-01-10 14:18:56,668][02452] Saving new best policy, reward=17.855! |
|
[2025-01-10 14:19:01,659][00511] Fps is (10 sec: 3685.7, 60 sec: 3754.5, 300 sec: 3971.0). Total num frames: 2695168. Throughput: 0: 944.2. Samples: 674352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:19:01,673][00511] Avg episode reward: [(0, '17.789')] |
|
[2025-01-10 14:19:01,687][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000658_2695168.pth... |
|
[2025-01-10 14:19:01,886][02452] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000422_1728512.pth |
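The save/remove pair above is Sample Factory's checkpoint rotation: each periodic save is followed by deletion of the oldest checkpoint, so only the most recent few (two, judging by the removals in this log) stay on disk. A minimal sketch of that keep-the-latest-N policy — a hypothetical helper, not the library's actual code:

```python
import glob
import os

def rotate_checkpoints(ckpt_dir: str, keep: int = 2) -> None:
    """Delete all but the `keep` newest checkpoint_*.pth files (illustrative only)."""
    ckpts = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    # Zero-padded names like checkpoint_000000658_2695168.pth sort in
    # training order, so the oldest checkpoints come first.
    for stale in ckpts[:-keep]:
        os.remove(stale)
```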
|
[2025-01-10 14:19:03,452][02465] Updated weights for policy 0, policy_version 660 (0.0021) |
|
[2025-01-10 14:19:06,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3985.0). Total num frames: 2715648. Throughput: 0: 947.3. Samples: 680284. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:19:06,660][00511] Avg episode reward: [(0, '17.000')] |
|
[2025-01-10 14:19:11,661][00511] Fps is (10 sec: 4504.8, 60 sec: 3959.2, 300 sec: 3998.8). Total num frames: 2740224. Throughput: 0: 989.5. Samples: 683964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:19:11,667][00511] Avg episode reward: [(0, '16.881')] |
|
[2025-01-10 14:19:11,796][02465] Updated weights for policy 0, policy_version 670 (0.0021) |
|
[2025-01-10 14:19:16,659][00511] Fps is (10 sec: 4504.8, 60 sec: 3822.8, 300 sec: 4012.7). Total num frames: 2760704. Throughput: 0: 1035.2. Samples: 690106. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:19:16,664][00511] Avg episode reward: [(0, '16.421')] |
|
[2025-01-10 14:19:21,657][00511] Fps is (10 sec: 3687.8, 60 sec: 3823.0, 300 sec: 4026.6). Total num frames: 2777088. Throughput: 0: 991.6. Samples: 695174. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:19:21,664][00511] Avg episode reward: [(0, '17.444')] |
|
[2025-01-10 14:19:22,889][02465] Updated weights for policy 0, policy_version 680 (0.0035) |
|
[2025-01-10 14:19:26,657][00511] Fps is (10 sec: 4096.7, 60 sec: 4027.7, 300 sec: 4054.3). Total num frames: 2801664. Throughput: 0: 993.6. Samples: 698858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:19:26,666][00511] Avg episode reward: [(0, '18.457')] |
|
[2025-01-10 14:19:26,668][02452] Saving new best policy, reward=18.457! |
|
[2025-01-10 14:19:31,658][00511] Fps is (10 sec: 4505.5, 60 sec: 4096.0, 300 sec: 4040.5). Total num frames: 2822144. Throughput: 0: 1046.0. Samples: 705802. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:19:31,662][00511] Avg episode reward: [(0, '19.432')] |
|
[2025-01-10 14:19:31,674][02452] Saving new best policy, reward=19.432! |
|
[2025-01-10 14:19:32,299][02465] Updated weights for policy 0, policy_version 690 (0.0039) |
|
[2025-01-10 14:19:36,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 4012.7). Total num frames: 2838528. Throughput: 0: 989.7. Samples: 710136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:19:36,659][00511] Avg episode reward: [(0, '20.120')] |
|
[2025-01-10 14:19:36,670][02452] Saving new best policy, reward=20.120! |
|
[2025-01-10 14:19:41,658][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 4040.5). Total num frames: 2859008. Throughput: 0: 985.0. Samples: 713456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:19:41,663][00511] Avg episode reward: [(0, '20.666')] |
|
[2025-01-10 14:19:41,672][02452] Saving new best policy, reward=20.666! |
|
[2025-01-10 14:19:42,720][02465] Updated weights for policy 0, policy_version 700 (0.0020) |
|
[2025-01-10 14:19:46,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4054.3). Total num frames: 2883584. Throughput: 0: 1028.3. Samples: 720624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:19:46,661][00511] Avg episode reward: [(0, '20.007')] |
|
[2025-01-10 14:19:51,662][00511] Fps is (10 sec: 4094.2, 60 sec: 4027.4, 300 sec: 4026.5). Total num frames: 2899968. Throughput: 0: 1012.1. Samples: 725832. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:19:51,664][00511] Avg episode reward: [(0, '18.222')] |
|
[2025-01-10 14:19:53,804][02465] Updated weights for policy 0, policy_version 710 (0.0029) |
|
[2025-01-10 14:19:56,658][00511] Fps is (10 sec: 3686.3, 60 sec: 3959.4, 300 sec: 4040.5). Total num frames: 2920448. Throughput: 0: 988.6. Samples: 728448. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-01-10 14:19:56,660][00511] Avg episode reward: [(0, '18.870')] |
|
[2025-01-10 14:20:01,657][00511] Fps is (10 sec: 3688.1, 60 sec: 4027.9, 300 sec: 4040.5). Total num frames: 2936832. Throughput: 0: 995.4. Samples: 734896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:20:01,668][00511] Avg episode reward: [(0, '19.346')] |
|
[2025-01-10 14:20:04,951][02465] Updated weights for policy 0, policy_version 720 (0.0015) |
|
[2025-01-10 14:20:06,657][00511] Fps is (10 sec: 3276.9, 60 sec: 3959.5, 300 sec: 4012.7). Total num frames: 2953216. Throughput: 0: 971.9. Samples: 738910. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:20:06,660][00511] Avg episode reward: [(0, '18.763')] |
|
[2025-01-10 14:20:11,659][00511] Fps is (10 sec: 3276.3, 60 sec: 3823.1, 300 sec: 3998.8). Total num frames: 2969600. Throughput: 0: 939.9. Samples: 741154. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:20:11,663][00511] Avg episode reward: [(0, '19.623')] |
|
[2025-01-10 14:20:15,546][02465] Updated weights for policy 0, policy_version 730 (0.0022) |
|
[2025-01-10 14:20:16,657][00511] Fps is (10 sec: 4096.0, 60 sec: 3891.3, 300 sec: 4026.6). Total num frames: 2994176. Throughput: 0: 937.5. Samples: 747990. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:20:16,660][00511] Avg episode reward: [(0, '20.886')] |
|
[2025-01-10 14:20:16,667][02452] Saving new best policy, reward=20.886! |
|
[2025-01-10 14:20:21,660][00511] Fps is (10 sec: 4505.0, 60 sec: 3959.3, 300 sec: 4012.7). Total num frames: 3014656. Throughput: 0: 994.3. Samples: 754884. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-01-10 14:20:21,663][00511] Avg episode reward: [(0, '19.008')] |
|
[2025-01-10 14:20:26,107][02465] Updated weights for policy 0, policy_version 740 (0.0023) |
|
[2025-01-10 14:20:26,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3998.8). Total num frames: 3031040. Throughput: 0: 970.9. Samples: 757148. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:20:26,663][00511] Avg episode reward: [(0, '17.923')] |
|
[2025-01-10 14:20:31,657][00511] Fps is (10 sec: 3687.4, 60 sec: 3822.9, 300 sec: 4012.7). Total num frames: 3051520. Throughput: 0: 939.4. Samples: 762896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:20:31,661][00511] Avg episode reward: [(0, '17.135')] |
|
[2025-01-10 14:20:35,197][02465] Updated weights for policy 0, policy_version 750 (0.0020) |
|
[2025-01-10 14:20:36,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 4026.6). Total num frames: 3076096. Throughput: 0: 984.0. Samples: 770108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:20:36,663][00511] Avg episode reward: [(0, '17.142')] |
|
[2025-01-10 14:20:41,657][00511] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3998.8). Total num frames: 3092480. Throughput: 0: 986.4. Samples: 772836. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-01-10 14:20:41,664][00511] Avg episode reward: [(0, '18.022')] |
|
[2025-01-10 14:20:46,287][02465] Updated weights for policy 0, policy_version 760 (0.0027) |
|
[2025-01-10 14:20:46,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3998.8). Total num frames: 3112960. Throughput: 0: 950.2. Samples: 777656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-01-10 14:20:46,660][00511] Avg episode reward: [(0, '17.802')] |
|
[2025-01-10 14:20:51,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3959.8, 300 sec: 4026.6). Total num frames: 3137536. Throughput: 0: 1025.9. Samples: 785074. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-01-10 14:20:51,661][00511] Avg episode reward: [(0, '19.430')] |
|
[2025-01-10 14:20:54,781][02465] Updated weights for policy 0, policy_version 770 (0.0021) |
|
[2025-01-10 14:20:56,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 4026.6). Total num frames: 3158016. Throughput: 0: 1057.7. Samples: 788750. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:20:56,660][00511] Avg episode reward: [(0, '19.762')] |
|
[2025-01-10 14:21:01,657][00511] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3985.0). Total num frames: 3170304. Throughput: 0: 1004.5. Samples: 793194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:21:01,660][00511] Avg episode reward: [(0, '19.766')] |
|
[2025-01-10 14:21:01,671][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000774_3170304.pth... |
|
[2025-01-10 14:21:01,792][02452] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000541_2215936.pth |
|
[2025-01-10 14:21:06,298][02465] Updated weights for policy 0, policy_version 780 (0.0020) |
|
[2025-01-10 14:21:06,658][00511] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 4012.7). Total num frames: 3194880. Throughput: 0: 993.2. Samples: 799574. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:21:06,660][00511] Avg episode reward: [(0, '19.900')] |
|
[2025-01-10 14:21:11,657][00511] Fps is (10 sec: 4915.2, 60 sec: 4164.4, 300 sec: 4026.6). Total num frames: 3219456. Throughput: 0: 1025.6. Samples: 803298. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:21:11,664][00511] Avg episode reward: [(0, '20.866')] |
|
[2025-01-10 14:21:16,435][02465] Updated weights for policy 0, policy_version 790 (0.0029) |
|
[2025-01-10 14:21:16,657][00511] Fps is (10 sec: 4096.1, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 3235840. Throughput: 0: 1019.7. Samples: 808784. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:21:16,660][00511] Avg episode reward: [(0, '20.276')] |
|
[2025-01-10 14:21:21,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.9, 300 sec: 4012.7). Total num frames: 3256320. Throughput: 0: 989.2. Samples: 814624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:21:21,665][00511] Avg episode reward: [(0, '20.651')] |
|
[2025-01-10 14:21:25,790][02465] Updated weights for policy 0, policy_version 800 (0.0015) |
|
[2025-01-10 14:21:26,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3276800. Throughput: 0: 1010.1. Samples: 818290. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-01-10 14:21:26,660][00511] Avg episode reward: [(0, '20.413')] |
|
[2025-01-10 14:21:31,658][00511] Fps is (10 sec: 4095.7, 60 sec: 4095.9, 300 sec: 3998.8). Total num frames: 3297280. Throughput: 0: 1044.2. Samples: 824646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:21:31,666][00511] Avg episode reward: [(0, '20.612')] |
|
[2025-01-10 14:21:36,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3313664. Throughput: 0: 990.0. Samples: 829622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:21:36,669][00511] Avg episode reward: [(0, '21.123')] |
|
[2025-01-10 14:21:36,748][02452] Saving new best policy, reward=21.123! |
|
[2025-01-10 14:21:36,752][02465] Updated weights for policy 0, policy_version 810 (0.0025) |
|
[2025-01-10 14:21:41,657][00511] Fps is (10 sec: 4096.4, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3338240. Throughput: 0: 986.5. Samples: 833144. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:21:41,662][00511] Avg episode reward: [(0, '21.925')] |
|
[2025-01-10 14:21:41,672][02452] Saving new best policy, reward=21.925! |
|
[2025-01-10 14:21:45,486][02465] Updated weights for policy 0, policy_version 820 (0.0032) |
|
[2025-01-10 14:21:46,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 3358720. Throughput: 0: 1046.6. Samples: 840292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:21:46,660][00511] Avg episode reward: [(0, '21.647')] |
|
[2025-01-10 14:21:51,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3375104. Throughput: 0: 998.4. Samples: 844500. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:21:51,660][00511] Avg episode reward: [(0, '21.423')] |
|
[2025-01-10 14:21:56,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 3395584. Throughput: 0: 984.3. Samples: 847590. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-01-10 14:21:56,660][00511] Avg episode reward: [(0, '21.616')] |
|
[2025-01-10 14:21:57,063][02465] Updated weights for policy 0, policy_version 830 (0.0026) |
|
[2025-01-10 14:22:01,658][00511] Fps is (10 sec: 4505.4, 60 sec: 4164.2, 300 sec: 4012.7). Total num frames: 3420160. Throughput: 0: 1021.2. Samples: 854740. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:22:01,660][00511] Avg episode reward: [(0, '22.908')] |
|
[2025-01-10 14:22:01,668][02452] Saving new best policy, reward=22.908! |
|
[2025-01-10 14:22:06,661][00511] Fps is (10 sec: 4094.4, 60 sec: 4027.5, 300 sec: 3984.9). Total num frames: 3436544. Throughput: 0: 1007.7. Samples: 859974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:22:06,664][00511] Avg episode reward: [(0, '22.815')] |
|
[2025-01-10 14:22:07,455][02465] Updated weights for policy 0, policy_version 840 (0.0028) |
|
[2025-01-10 14:22:11,657][00511] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3457024. Throughput: 0: 980.3. Samples: 862404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:22:11,660][00511] Avg episode reward: [(0, '22.086')] |
|
[2025-01-10 14:22:16,503][02465] Updated weights for policy 0, policy_version 850 (0.0019) |
|
[2025-01-10 14:22:16,657][00511] Fps is (10 sec: 4507.4, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3481600. Throughput: 0: 1001.9. Samples: 869732. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:22:16,663][00511] Avg episode reward: [(0, '22.182')] |
|
[2025-01-10 14:22:21,660][00511] Fps is (10 sec: 4504.2, 60 sec: 4095.8, 300 sec: 3998.8). Total num frames: 3502080. Throughput: 0: 1031.6. Samples: 876048. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:22:21,663][00511] Avg episode reward: [(0, '20.580')] |
|
[2025-01-10 14:22:26,657][00511] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 3510272. Throughput: 0: 993.4. Samples: 877848. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:22:26,660][00511] Avg episode reward: [(0, '19.740')] |
|
[2025-01-10 14:22:29,965][02465] Updated weights for policy 0, policy_version 860 (0.0037) |
|
[2025-01-10 14:22:31,657][00511] Fps is (10 sec: 2458.3, 60 sec: 3823.0, 300 sec: 3957.2). Total num frames: 3526656. Throughput: 0: 918.3. Samples: 881614. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:22:31,664][00511] Avg episode reward: [(0, '19.903')] |
|
[2025-01-10 14:22:36,657][00511] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3957.1). Total num frames: 3551232. Throughput: 0: 989.7. Samples: 889038. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:22:36,663][00511] Avg episode reward: [(0, '19.728')] |
|
[2025-01-10 14:22:38,476][02465] Updated weights for policy 0, policy_version 870 (0.0021) |
|
[2025-01-10 14:22:41,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3571712. Throughput: 0: 993.8. Samples: 892312. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:22:41,665][00511] Avg episode reward: [(0, '19.093')] |
|
[2025-01-10 14:22:46,657][00511] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3957.2). Total num frames: 3588096. Throughput: 0: 929.7. Samples: 896578. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:22:46,664][00511] Avg episode reward: [(0, '19.253')] |
|
[2025-01-10 14:22:49,909][02465] Updated weights for policy 0, policy_version 880 (0.0034) |
|
[2025-01-10 14:22:51,657][00511] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3612672. Throughput: 0: 971.6. Samples: 903690. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-01-10 14:22:51,659][00511] Avg episode reward: [(0, '19.383')] |
|
[2025-01-10 14:22:56,663][00511] Fps is (10 sec: 4502.9, 60 sec: 3959.1, 300 sec: 3943.2). Total num frames: 3633152. Throughput: 0: 998.4. Samples: 907338. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:22:56,666][00511] Avg episode reward: [(0, '19.941')] |
|
[2025-01-10 14:22:59,976][02465] Updated weights for policy 0, policy_version 890 (0.0026) |
|
[2025-01-10 14:23:01,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3943.3). Total num frames: 3649536. Throughput: 0: 949.5. Samples: 912460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:23:01,660][00511] Avg episode reward: [(0, '20.118')] |
|
[2025-01-10 14:23:01,667][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000891_3649536.pth... |
|
[2025-01-10 14:23:01,827][02452] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000658_2695168.pth |
|
[2025-01-10 14:23:06,657][00511] Fps is (10 sec: 3688.6, 60 sec: 3891.5, 300 sec: 3957.2). Total num frames: 3670016. Throughput: 0: 946.1. Samples: 918618. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:23:06,663][00511] Avg episode reward: [(0, '21.536')] |
|
[2025-01-10 14:23:09,391][02465] Updated weights for policy 0, policy_version 900 (0.0016) |
|
[2025-01-10 14:23:11,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3694592. Throughput: 0: 987.5. Samples: 922284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:23:11,665][00511] Avg episode reward: [(0, '21.548')] |
|
[2025-01-10 14:23:16,657][00511] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 3715072. Throughput: 0: 1039.9. Samples: 928410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:23:16,663][00511] Avg episode reward: [(0, '21.539')] |
|
[2025-01-10 14:23:20,485][02465] Updated weights for policy 0, policy_version 910 (0.0022) |
|
[2025-01-10 14:23:21,657][00511] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3971.0). Total num frames: 3731456. Throughput: 0: 987.9. Samples: 933494. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-01-10 14:23:21,664][00511] Avg episode reward: [(0, '21.140')] |
|
[2025-01-10 14:23:26,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 3756032. Throughput: 0: 997.2. Samples: 937188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-01-10 14:23:26,664][00511] Avg episode reward: [(0, '19.147')] |
|
[2025-01-10 14:23:29,060][02465] Updated weights for policy 0, policy_version 920 (0.0020) |
|
[2025-01-10 14:23:31,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 3998.8). Total num frames: 3776512. Throughput: 0: 1058.6. Samples: 944216. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:23:31,661][00511] Avg episode reward: [(0, '18.644')] |
|
[2025-01-10 14:23:36,658][00511] Fps is (10 sec: 3276.7, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3788800. Throughput: 0: 997.9. Samples: 948596. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-01-10 14:23:36,664][00511] Avg episode reward: [(0, '19.741')] |
|
[2025-01-10 14:23:40,191][02465] Updated weights for policy 0, policy_version 930 (0.0022) |
|
[2025-01-10 14:23:41,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 3813376. Throughput: 0: 994.5. Samples: 952086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-01-10 14:23:41,664][00511] Avg episode reward: [(0, '19.413')] |
|
[2025-01-10 14:23:46,657][00511] Fps is (10 sec: 4915.4, 60 sec: 4164.3, 300 sec: 3998.8). Total num frames: 3837952. Throughput: 0: 1044.7. Samples: 959472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:23:46,664][00511] Avg episode reward: [(0, '20.770')] |
|
[2025-01-10 14:23:49,572][02465] Updated weights for policy 0, policy_version 940 (0.0024) |
|
[2025-01-10 14:23:51,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3854336. Throughput: 0: 1019.8. Samples: 964510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:23:51,662][00511] Avg episode reward: [(0, '21.126')] |
|
[2025-01-10 14:23:56,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4028.1, 300 sec: 3998.8). Total num frames: 3874816. Throughput: 0: 998.7. Samples: 967226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-01-10 14:23:56,666][00511] Avg episode reward: [(0, '21.359')] |
|
[2025-01-10 14:23:59,672][02465] Updated weights for policy 0, policy_version 950 (0.0026) |
|
[2025-01-10 14:24:01,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4012.7). Total num frames: 3899392. Throughput: 0: 1020.7. Samples: 974340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-01-10 14:24:01,663][00511] Avg episode reward: [(0, '21.743')] |
|
[2025-01-10 14:24:06,657][00511] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3985.0). Total num frames: 3915776. Throughput: 0: 1041.1. Samples: 980344. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:24:06,662][00511] Avg episode reward: [(0, '21.932')] |
|
[2025-01-10 14:24:10,799][02465] Updated weights for policy 0, policy_version 960 (0.0024) |
|
[2025-01-10 14:24:11,658][00511] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 3936256. Throughput: 0: 1009.5. Samples: 982614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-01-10 14:24:11,663][00511] Avg episode reward: [(0, '22.533')] |
|
[2025-01-10 14:24:16,657][00511] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3960832. Throughput: 0: 1006.4. Samples: 989506. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:24:16,659][00511] Avg episode reward: [(0, '22.318')] |
|
[2025-01-10 14:24:19,046][02465] Updated weights for policy 0, policy_version 970 (0.0015) |
|
[2025-01-10 14:24:21,658][00511] Fps is (10 sec: 4505.6, 60 sec: 4164.2, 300 sec: 3998.8). Total num frames: 3981312. Throughput: 0: 1062.5. Samples: 996410. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:24:21,661][00511] Avg episode reward: [(0, '22.941')] |
|
[2025-01-10 14:24:21,676][02452] Saving new best policy, reward=22.941! |
|
[2025-01-10 14:24:26,657][00511] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 3997696. Throughput: 0: 1031.7. Samples: 998512. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-01-10 14:24:26,661][00511] Avg episode reward: [(0, '23.437')] |
|
[2025-01-10 14:24:26,662][02452] Saving new best policy, reward=23.437! |
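Alongside the rotating checkpoints, the learner tracks a separate "best" policy: a save is triggered whenever the smoothed episode reward makes a new high (14.069 up to 23.437 over this run). The bookkeeping amounts to a running maximum; a minimal sketch with hypothetical names, not the library's actual code:

```python
best_reward = float("-inf")

def maybe_save_best(avg_episode_reward: float, save_fn) -> None:
    """Checkpoint under a 'best' name whenever the smoothed reward improves."""
    global best_reward
    if avg_episode_reward > best_reward:
        best_reward = avg_episode_reward
        save_fn()  # hypothetical callback that serializes the current policy
```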
|
[2025-01-10 14:24:28,671][02452] Stopping Batcher_0... |
|
[2025-01-10 14:24:28,672][02452] Loop batcher_evt_loop terminating... |
|
[2025-01-10 14:24:28,673][00511] Component Batcher_0 stopped! |
|
[2025-01-10 14:24:28,677][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-01-10 14:24:28,730][02465] Weights refcount: 2 0 |
|
[2025-01-10 14:24:28,737][02465] Stopping InferenceWorker_p0-w0... |
|
[2025-01-10 14:24:28,737][02465] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-01-10 14:24:28,737][00511] Component InferenceWorker_p0-w0 stopped! |
|
[2025-01-10 14:24:28,796][02452] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000774_3170304.pth |
|
[2025-01-10 14:24:28,820][02452] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-01-10 14:24:29,016][02470] Stopping RolloutWorker_w6... |
|
[2025-01-10 14:24:29,022][02468] Stopping RolloutWorker_w2... |
|
[2025-01-10 14:24:29,017][00511] Component RolloutWorker_w6 stopped! |
|
[2025-01-10 14:24:29,018][02470] Loop rollout_proc6_evt_loop terminating... |
|
[2025-01-10 14:24:29,024][00511] Component RolloutWorker_w2 stopped! |
|
[2025-01-10 14:24:29,023][02468] Loop rollout_proc2_evt_loop terminating... |
|
[2025-01-10 14:24:29,038][02471] Stopping RolloutWorker_w4... |
|
[2025-01-10 14:24:29,036][00511] Component RolloutWorker_w0 stopped! |
|
[2025-01-10 14:24:29,036][02466] Stopping RolloutWorker_w0... |
|
[2025-01-10 14:24:29,041][00511] Component RolloutWorker_w4 stopped! |
|
[2025-01-10 14:24:29,039][02471] Loop rollout_proc4_evt_loop terminating... |
|
[2025-01-10 14:24:29,042][02466] Loop rollout_proc0_evt_loop terminating... |
|
[2025-01-10 14:24:29,051][02452] Stopping LearnerWorker_p0... |
|
[2025-01-10 14:24:29,051][00511] Component LearnerWorker_p0 stopped! |
|
[2025-01-10 14:24:29,055][02452] Loop learner_proc0_evt_loop terminating... |
|
[2025-01-10 14:24:29,245][02472] Stopping RolloutWorker_w5... |
|
[2025-01-10 14:24:29,245][00511] Component RolloutWorker_w5 stopped! |
|
[2025-01-10 14:24:29,261][02472] Loop rollout_proc5_evt_loop terminating... |
|
[2025-01-10 14:24:29,282][02467] Stopping RolloutWorker_w1... |
|
[2025-01-10 14:24:29,282][00511] Component RolloutWorker_w1 stopped! |
|
[2025-01-10 14:24:29,283][02467] Loop rollout_proc1_evt_loop terminating... |
|
[2025-01-10 14:24:29,303][02473] Stopping RolloutWorker_w7... |
|
[2025-01-10 14:24:29,303][00511] Component RolloutWorker_w7 stopped! |
|
[2025-01-10 14:24:29,313][02469] Stopping RolloutWorker_w3... |
|
[2025-01-10 14:24:29,313][00511] Component RolloutWorker_w3 stopped! |
|
[2025-01-10 14:24:29,314][00511] Waiting for process learner_proc0 to stop... |
|
[2025-01-10 14:24:29,314][02473] Loop rollout_proc7_evt_loop terminating... |
|
[2025-01-10 14:24:29,323][02469] Loop rollout_proc3_evt_loop terminating... |
|
[2025-01-10 14:24:30,584][00511] Waiting for process inference_proc0-0 to join... |
|
[2025-01-10 14:24:30,587][00511] Waiting for process rollout_proc0 to join... |
|
[2025-01-10 14:24:32,578][00511] Waiting for process rollout_proc1 to join... |
|
[2025-01-10 14:24:32,583][00511] Waiting for process rollout_proc2 to join... |
|
[2025-01-10 14:24:32,598][00511] Waiting for process rollout_proc3 to join... |
|
[2025-01-10 14:24:32,601][00511] Waiting for process rollout_proc4 to join... |
|
[2025-01-10 14:24:32,606][00511] Waiting for process rollout_proc5 to join... |
|
[2025-01-10 14:24:32,611][00511] Waiting for process rollout_proc6 to join... |
|
[2025-01-10 14:24:32,615][00511] Waiting for process rollout_proc7 to join... |
|
[2025-01-10 14:24:32,625][00511] Batcher 0 profile tree view: |
|
batching: 25.7811, releasing_batches: 0.0309 |
|
[2025-01-10 14:24:32,626][00511] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
wait_policy_total: 403.2432 |
|
update_model: 8.2783 |
|
weight_update: 0.0014 |
|
one_step: 0.0036 |
|
handle_policy_step: 559.7621 |
|
deserialize: 14.3042, stack: 3.0882, obs_to_device_normalize: 119.7659, forward: 280.2002, send_messages: 28.1750 |
|
prepare_outputs: 85.9643 |
|
to_cpu: 52.2802 |
|
[2025-01-10 14:24:32,629][00511] Learner 0 profile tree view: |
|
misc: 0.0048, prepare_batch: 13.7059 |
|
train: 73.0190 |
|
epoch_init: 0.0120, minibatch_init: 0.0231, losses_postprocess: 0.6373, kl_divergence: 0.5625, after_optimizer: 33.4615 |
|
calculate_losses: 25.9927 |
|
losses_init: 0.0037, forward_head: 1.1987, bptt_initial: 17.4203, tail: 1.0484, advantages_returns: 0.2719, losses: 3.8710 |
|
bptt: 1.8814 |
|
bptt_forward_core: 1.7648 |
|
update: 11.7027 |
|
clip: 0.8782 |
|
[2025-01-10 14:24:32,631][00511] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.3997, enqueue_policy_requests: 94.3370, env_step: 799.1866, overhead: 12.2621, complete_rollouts: 6.6368 |
|
save_policy_outputs: 19.7150 |
|
split_output_tensors: 8.0609 |
|
[2025-01-10 14:24:32,632][00511] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.3078, enqueue_policy_requests: 96.1806, env_step: 793.3599, overhead: 11.6855, complete_rollouts: 6.2006 |
|
save_policy_outputs: 19.7934 |
|
split_output_tensors: 7.9137 |
|
[2025-01-10 14:24:32,633][00511] Loop Runner_EvtLoop terminating... |
|
[2025-01-10 14:24:32,634][00511] Runner profile tree view: |
|
main_loop: 1042.2342 |
|
[2025-01-10 14:24:32,635][00511] Collected {0: 4005888}, FPS: 3843.6 |
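The runner summary is self-consistent: 4,005,888 frames over the 1042.2342 s main loop works out to 4005888 / 1042.2342 ≈ 3843.6 FPS, matching the reported figure. Training stopped at the first report past the 4,000,000-frame budget, which, together with the eight rollout workers, suggests an invocation like the one below. This is a hedged sketch in the style of the Deep RL course notebook; `parse_vizdoom_cfg` is a notebook-style helper, and the `sf_examples` module paths are assumptions about sample-factory 2.x rather than anything confirmed by this log:

```python
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.train import run_rl
# Assumed module paths for the bundled VizDoom example:
from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults
from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components

def parse_vizdoom_cfg(argv=None, evaluation=False):
    # Notebook-style helper: Sample Factory arg parsing plus Doom defaults.
    parser, _ = parse_sf_args(argv=argv, evaluation=evaluation)
    add_doom_env_args(parser)
    doom_override_defaults(parser)
    return parse_full_cfg(parser, argv)

register_vizdoom_components()
cfg = parse_vizdoom_cfg(argv=[
    "--env=doom_health_gathering_supreme",
    "--num_workers=8",               # matches the 8 rollout workers in this log
    "--num_envs_per_worker=4",       # assumption; not visible in the log
    "--train_for_env_steps=4000000", # run stopped just past 4M frames
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
])
run_rl(cfg)
```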
|
[2025-01-10 14:24:33,035][00511] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-01-10 14:24:33,037][00511] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-01-10 14:24:33,040][00511] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-01-10 14:24:33,042][00511] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-01-10 14:24:33,045][00511] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-01-10 14:24:33,046][00511] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-01-10 14:24:33,048][00511] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-01-10 14:24:33,050][00511] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-01-10 14:24:33,051][00511] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-01-10 14:24:33,051][00511] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-01-10 14:24:33,053][00511] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-01-10 14:24:33,054][00511] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-01-10 14:24:33,054][00511] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-01-10 14:24:33,055][00511] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-01-10 14:24:33,056][00511] Using frameskip 1 and render_action_repeat=4 for evaluation |
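The block above is the evaluation ("enjoy") entry point: it reloads the saved config.json, then applies command-line overrides and evaluation-only arguments such as no_render and save_video. A hedged sketch of the corresponding call, reusing the `parse_vizdoom_cfg` helper sketched earlier (flag names are taken from the log itself; `enjoy`'s import path is an assumption):

```python
from sample_factory.enjoy import enjoy  # assumed import path in sample-factory 2.x

cfg = parse_vizdoom_cfg(argv=[
    "--env=doom_health_gathering_supreme",
    "--num_workers=1",        # overrides the saved config, as logged above
    "--no_render",
    "--save_video",
    "--max_num_episodes=10",
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
], evaluation=True)
# Loads checkpoint_000000978_4005888.pth, plays 10 episodes, writes replay.mp4:
status = enjoy(cfg)
```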
|
[2025-01-10 14:24:33,098][00511] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-01-10 14:24:33,103][00511] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-01-10 14:24:33,105][00511] RunningMeanStd input shape: (1,) |
|
[2025-01-10 14:24:33,124][00511] ConvEncoder: input_channels=3 |
|
[2025-01-10 14:24:33,244][00511] Conv encoder output size: 512 |
|
[2025-01-10 14:24:33,245][00511] Policy head output size: 512 |
|
[2025-01-10 14:24:33,505][00511] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-01-10 14:24:34,271][00511] Num frames 100... |
|
[2025-01-10 14:24:34,388][00511] Num frames 200... |
|
[2025-01-10 14:24:34,507][00511] Num frames 300... |
|
[2025-01-10 14:24:34,630][00511] Num frames 400... |
|
[2025-01-10 14:24:34,751][00511] Num frames 500... |
|
[2025-01-10 14:24:34,828][00511] Avg episode rewards: #0: 9.120, true rewards: #0: 5.120 |
|
[2025-01-10 14:24:34,830][00511] Avg episode reward: 9.120, avg true_objective: 5.120 |
|
[2025-01-10 14:24:34,935][00511] Num frames 600... |
|
[2025-01-10 14:24:35,063][00511] Num frames 700... |
|
[2025-01-10 14:24:35,187][00511] Num frames 800... |
|
[2025-01-10 14:24:35,312][00511] Avg episode rewards: #0: 7.790, true rewards: #0: 4.290 |
|
[2025-01-10 14:24:35,314][00511] Avg episode reward: 7.790, avg true_objective: 4.290 |
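During evaluation, "Avg episode rewards" is a running mean over completed episodes, so individual episode scores can be recovered from consecutive reports. Here, episode 1 scored 9.120; the two-episode average of 7.790 implies episode 2 scored 2 × 7.790 − 9.120 = 6.460 (and likewise 2 × 4.290 − 5.120 = 3.460 for the true reward):

```python
# Recover episode 2's score from consecutive running means in the log above.
ep1_reward = 9.120    # average after one episode == episode 1's reward
avg_after_2 = 7.790   # reported average after two episodes
ep2_reward = 2 * avg_after_2 - ep1_reward
assert round(ep2_reward, 3) == 6.460
```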
|
[2025-01-10 14:24:35,365][00511] Num frames 900... |
|
[2025-01-10 14:24:35,488][00511] Num frames 1000... |
|
[2025-01-10 14:24:35,609][00511] Num frames 1100... |
|
[2025-01-10 14:24:35,727][00511] Num frames 1200... |
|
[2025-01-10 14:24:35,855][00511] Num frames 1300... |
|
[2025-01-10 14:24:35,973][00511] Num frames 1400... |
|
[2025-01-10 14:24:36,100][00511] Num frames 1500... |
|
[2025-01-10 14:24:36,222][00511] Num frames 1600... |
|
[2025-01-10 14:24:36,340][00511] Num frames 1700... |
|
[2025-01-10 14:24:36,456][00511] Num frames 1800... |
|
[2025-01-10 14:24:36,573][00511] Avg episode rewards: #0: 11.167, true rewards: #0: 6.167 |
|
[2025-01-10 14:24:36,574][00511] Avg episode reward: 11.167, avg true_objective: 6.167 |
|
[2025-01-10 14:24:36,636][00511] Num frames 1900... |
|
[2025-01-10 14:24:36,760][00511] Num frames 2000... |
|
[2025-01-10 14:24:36,888][00511] Num frames 2100... |
|
[2025-01-10 14:24:37,009][00511] Num frames 2200... |
|
[2025-01-10 14:24:37,135][00511] Num frames 2300... |
|
[2025-01-10 14:24:37,255][00511] Num frames 2400... |
|
[2025-01-10 14:24:37,417][00511] Num frames 2500... |
|
[2025-01-10 14:24:37,582][00511] Num frames 2600... |
|
[2025-01-10 14:24:37,746][00511] Num frames 2700... |
|
[2025-01-10 14:24:37,914][00511] Num frames 2800... |
|
[2025-01-10 14:24:38,086][00511] Num frames 2900... |
|
[2025-01-10 14:24:38,250][00511] Num frames 3000... |
|
[2025-01-10 14:24:38,411][00511] Num frames 3100... |
|
[2025-01-10 14:24:38,571][00511] Avg episode rewards: #0: 16.655, true rewards: #0: 7.905 |
|
[2025-01-10 14:24:38,573][00511] Avg episode reward: 16.655, avg true_objective: 7.905 |
|
[2025-01-10 14:24:38,642][00511] Num frames 3200... |
|
[2025-01-10 14:24:38,808][00511] Num frames 3300... |
|
[2025-01-10 14:24:38,985][00511] Num frames 3400... |
|
[2025-01-10 14:24:39,164][00511] Num frames 3500... |
|
[2025-01-10 14:24:39,334][00511] Num frames 3600... |
|
[2025-01-10 14:24:39,508][00511] Num frames 3700... |
|
[2025-01-10 14:24:39,680][00511] Num frames 3800... |
|
[2025-01-10 14:24:39,825][00511] Num frames 3900... |
|
[2025-01-10 14:24:39,950][00511] Num frames 4000... |
|
[2025-01-10 14:24:40,078][00511] Num frames 4100... |
|
[2025-01-10 14:24:40,199][00511] Num frames 4200... |
|
[2025-01-10 14:24:40,318][00511] Num frames 4300... |
|
[2025-01-10 14:24:40,440][00511] Num frames 4400... |
|
[2025-01-10 14:24:40,557][00511] Num frames 4500... |
|
[2025-01-10 14:24:40,677][00511] Num frames 4600... |
|
[2025-01-10 14:24:40,800][00511] Num frames 4700... |
|
[2025-01-10 14:24:40,921][00511] Num frames 4800... |
|
[2025-01-10 14:24:41,052][00511] Num frames 4900... |
|
[2025-01-10 14:24:41,139][00511] Avg episode rewards: #0: 21.444, true rewards: #0: 9.844 |
|
[2025-01-10 14:24:41,141][00511] Avg episode reward: 21.444, avg true_objective: 9.844 |
|
[2025-01-10 14:24:41,237][00511] Num frames 5000... |
|
[2025-01-10 14:24:41,357][00511] Num frames 5100... |
|
[2025-01-10 14:24:41,479][00511] Num frames 5200... |
|
[2025-01-10 14:24:41,602][00511] Num frames 5300... |
|
[2025-01-10 14:24:41,721][00511] Num frames 5400... |
|
[2025-01-10 14:24:41,840][00511] Num frames 5500... |
|
[2025-01-10 14:24:41,959][00511] Num frames 5600... |
|
[2025-01-10 14:24:42,095][00511] Num frames 5700... |
|
[2025-01-10 14:24:42,214][00511] Num frames 5800... |
|
[2025-01-10 14:24:42,333][00511] Num frames 5900... |
|
[2025-01-10 14:24:42,406][00511] Avg episode rewards: #0: 21.190, true rewards: #0: 9.857 |
|
[2025-01-10 14:24:42,407][00511] Avg episode reward: 21.190, avg true_objective: 9.857 |
|
[2025-01-10 14:24:42,511][00511] Num frames 6000... |
|
[2025-01-10 14:24:42,639][00511] Num frames 6100... |
|
[2025-01-10 14:24:42,764][00511] Num frames 6200... |
|
[2025-01-10 14:24:42,886][00511] Num frames 6300... |
|
[2025-01-10 14:24:43,009][00511] Num frames 6400... |
|
[2025-01-10 14:24:43,141][00511] Num frames 6500... |
|
[2025-01-10 14:24:43,265][00511] Num frames 6600... |
|
[2025-01-10 14:24:43,388][00511] Num frames 6700... |
|
[2025-01-10 14:24:43,510][00511] Num frames 6800... |
|
[2025-01-10 14:24:43,630][00511] Num frames 6900... |
|
[2025-01-10 14:24:43,749][00511] Num frames 7000... |
|
[2025-01-10 14:24:43,923][00511] Avg episode rewards: #0: 21.854, true rewards: #0: 10.140 |
|
[2025-01-10 14:24:43,925][00511] Avg episode reward: 21.854, avg true_objective: 10.140 |
|
[2025-01-10 14:24:43,931][00511] Num frames 7100... |
|
[2025-01-10 14:24:44,068][00511] Num frames 7200... |
|
[2025-01-10 14:24:44,192][00511] Num frames 7300... |
|
[2025-01-10 14:24:44,314][00511] Num frames 7400... |
|
[2025-01-10 14:24:44,435][00511] Num frames 7500... |
|
[2025-01-10 14:24:44,560][00511] Num frames 7600... |
|
[2025-01-10 14:24:44,690][00511] Num frames 7700... |
|
[2025-01-10 14:24:44,815][00511] Num frames 7800... |
|
[2025-01-10 14:24:44,937][00511] Num frames 7900... |
|
[2025-01-10 14:24:45,067][00511] Num frames 8000... |
|
[2025-01-10 14:24:45,193][00511] Num frames 8100... |
|
[2025-01-10 14:24:45,312][00511] Num frames 8200... |
|
[2025-01-10 14:24:45,437][00511] Num frames 8300... |
|
[2025-01-10 14:24:45,560][00511] Num frames 8400... |
|
[2025-01-10 14:24:45,683][00511] Num frames 8500... |
|
[2025-01-10 14:24:45,809][00511] Num frames 8600... |
|
[2025-01-10 14:24:45,930][00511] Num frames 8700... |
|
[2025-01-10 14:24:46,054][00511] Num frames 8800... |
|
[2025-01-10 14:24:46,188][00511] Num frames 8900... |
|
[2025-01-10 14:24:46,309][00511] Num frames 9000... |
|
[2025-01-10 14:24:46,431][00511] Num frames 9100... |
|
[2025-01-10 14:24:46,607][00511] Avg episode rewards: #0: 26.622, true rewards: #0: 11.497 |
|
[2025-01-10 14:24:46,608][00511] Avg episode reward: 26.622, avg true_objective: 11.497 |
|
[2025-01-10 14:24:46,615][00511] Num frames 9200... |
|
[2025-01-10 14:24:46,733][00511] Num frames 9300... |
|
[2025-01-10 14:24:46,856][00511] Num frames 9400... |
|
[2025-01-10 14:24:46,980][00511] Num frames 9500... |
|
[2025-01-10 14:24:47,112][00511] Num frames 9600... |
|
[2025-01-10 14:24:47,236][00511] Num frames 9700... |
|
[2025-01-10 14:24:47,357][00511] Num frames 9800... |
|
[2025-01-10 14:24:47,479][00511] Num frames 9900... |
|
[2025-01-10 14:24:47,605][00511] Num frames 10000... |
|
[2025-01-10 14:24:47,724][00511] Num frames 10100... |
|
[2025-01-10 14:24:47,849][00511] Num frames 10200... |
|
[2025-01-10 14:24:47,970][00511] Num frames 10300... |
|
[2025-01-10 14:24:48,097][00511] Num frames 10400... |
|
[2025-01-10 14:24:48,231][00511] Num frames 10500... |
|
[2025-01-10 14:24:48,355][00511] Num frames 10600... |
|
[2025-01-10 14:24:48,480][00511] Num frames 10700... |
|
[2025-01-10 14:24:48,603][00511] Num frames 10800... |
|
[2025-01-10 14:24:48,727][00511] Num frames 10900... |
|
[2025-01-10 14:24:48,851][00511] Num frames 11000... |
|
[2025-01-10 14:24:48,951][00511] Avg episode rewards: #0: 29.373, true rewards: #0: 12.262 |
|
[2025-01-10 14:24:48,954][00511] Avg episode reward: 29.373, avg true_objective: 12.262 |
|
[2025-01-10 14:24:49,032][00511] Num frames 11100... |
|
[2025-01-10 14:24:49,168][00511] Num frames 11200... |
|
[2025-01-10 14:24:49,291][00511] Num frames 11300... |
|
[2025-01-10 14:24:49,411][00511] Num frames 11400... |
|
[2025-01-10 14:24:49,529][00511] Num frames 11500... |
|
[2025-01-10 14:24:49,648][00511] Num frames 11600... |
|
[2025-01-10 14:24:49,776][00511] Num frames 11700... |
|
[2025-01-10 14:24:49,948][00511] Num frames 11800... |
|
[2025-01-10 14:24:50,124][00511] Num frames 11900... |
|
[2025-01-10 14:24:50,338][00511] Avg episode rewards: #0: 28.196, true rewards: #0: 11.996 |
|
[2025-01-10 14:24:50,343][00511] Avg episode reward: 28.196, avg true_objective: 11.996 |
|
[2025-01-10 14:25:56,576][00511] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-01-10 14:25:57,388][00511] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-01-10 14:25:57,390][00511] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-01-10 14:25:57,391][00511] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-01-10 14:25:57,393][00511] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-01-10 14:25:57,394][00511] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-01-10 14:25:57,396][00511] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-01-10 14:25:57,397][00511] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-01-10 14:25:57,399][00511] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-01-10 14:25:57,400][00511] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-01-10 14:25:57,401][00511] Adding new argument 'hf_repository'='Yooniel/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-01-10 14:25:57,402][00511] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-01-10 14:25:57,403][00511] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-01-10 14:25:57,404][00511] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-01-10 14:25:57,405][00511] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-01-10 14:25:57,406][00511] Using frameskip 1 and render_action_repeat=4 for evaluation |
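This second evaluation run adds the Hub-upload arguments (push_to_hub=True, hf_repository, and a 100,000-frame cap). A hedged sketch of the corresponding call, again reusing the `parse_vizdoom_cfg` helper and `enjoy` from the earlier sketches; every flag below appears in the log itself:

```python
cfg = parse_vizdoom_cfg(argv=[
    "--env=doom_health_gathering_supreme",
    "--num_workers=1",
    "--no_render",
    "--save_video",
    "--max_num_episodes=10",
    "--max_num_frames=100000",
    "--push_to_hub",
    "--hf_repository=Yooniel/rl_course_vizdoom_health_gathering_supreme",
], evaluation=True)
# Records replay.mp4, then uploads the experiment directory to the Hub:
status = enjoy(cfg)
```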
|
[2025-01-10 14:25:57,444][00511] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-01-10 14:25:57,446][00511] RunningMeanStd input shape: (1,) |
|
[2025-01-10 14:25:57,462][00511] ConvEncoder: input_channels=3 |
|
[2025-01-10 14:25:57,525][00511] Conv encoder output size: 512 |
|
[2025-01-10 14:25:57,527][00511] Policy head output size: 512 |
|
[2025-01-10 14:25:57,558][00511] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-01-10 14:25:58,249][00511] Num frames 100... |
|
[2025-01-10 14:25:58,423][00511] Num frames 200... |
|
[2025-01-10 14:25:58,581][00511] Num frames 300... |
|
[2025-01-10 14:25:58,766][00511] Num frames 400... |
|
[2025-01-10 14:25:58,919][00511] Num frames 500... |
|
[2025-01-10 14:25:59,098][00511] Num frames 600... |
|
[2025-01-10 14:25:59,264][00511] Num frames 700... |
|
[2025-01-10 14:25:59,349][00511] Avg episode rewards: #0: 14.160, true rewards: #0: 7.160 |
|
[2025-01-10 14:25:59,351][00511] Avg episode reward: 14.160, avg true_objective: 7.160 |
|
[2025-01-10 14:25:59,477][00511] Num frames 800... |
|
[2025-01-10 14:25:59,637][00511] Num frames 900... |
|
[2025-01-10 14:25:59,799][00511] Num frames 1000... |
|
[2025-01-10 14:25:59,954][00511] Num frames 1100... |
|
[2025-01-10 14:26:00,132][00511] Num frames 1200... |
|
[2025-01-10 14:26:00,307][00511] Num frames 1300... |
|
[2025-01-10 14:26:00,476][00511] Num frames 1400... |
|
[2025-01-10 14:26:00,650][00511] Num frames 1500... |
|
[2025-01-10 14:26:00,810][00511] Num frames 1600... |
|
[2025-01-10 14:26:00,991][00511] Num frames 1700... |
|
[2025-01-10 14:26:01,195][00511] Num frames 1800... |
|
[2025-01-10 14:26:01,358][00511] Num frames 1900... |
|
[2025-01-10 14:26:01,530][00511] Num frames 2000... |
|
[2025-01-10 14:26:01,716][00511] Num frames 2100... |
|
[2025-01-10 14:26:01,874][00511] Avg episode rewards: #0: 26.280, true rewards: #0: 10.780 |
|
[2025-01-10 14:26:01,877][00511] Avg episode reward: 26.280, avg true_objective: 10.780 |
|
[2025-01-10 14:26:01,989][00511] Num frames 2200... |
|
[2025-01-10 14:26:02,174][00511] Num frames 2300... |
|
[2025-01-10 14:26:02,352][00511] Num frames 2400... |
|
[2025-01-10 14:26:02,531][00511] Num frames 2500... |
|
[2025-01-10 14:26:02,712][00511] Num frames 2600... |
|
[2025-01-10 14:26:02,892][00511] Num frames 2700... |
|
[2025-01-10 14:26:03,084][00511] Num frames 2800... |
|
[2025-01-10 14:26:03,259][00511] Avg episode rewards: #0: 21.867, true rewards: #0: 9.533 |
|
[2025-01-10 14:26:03,261][00511] Avg episode reward: 21.867, avg true_objective: 9.533 |
|
[2025-01-10 14:26:03,342][00511] Num frames 2900... |
|
[2025-01-10 14:26:03,532][00511] Num frames 3000... |
|
[2025-01-10 14:26:03,718][00511] Num frames 3100... |
|
[2025-01-10 14:26:03,897][00511] Num frames 3200... |
|
[2025-01-10 14:26:04,072][00511] Num frames 3300... |
|
[2025-01-10 14:26:04,224][00511] Num frames 3400... |
|
[2025-01-10 14:26:04,347][00511] Num frames 3500... |
|
[2025-01-10 14:26:04,471][00511] Num frames 3600... |
|
[2025-01-10 14:26:04,589][00511] Num frames 3700... |
|
[2025-01-10 14:26:04,710][00511] Num frames 3800... |
|
[2025-01-10 14:26:04,832][00511] Num frames 3900... |
|
[2025-01-10 14:26:04,952][00511] Num frames 4000... |
|
[2025-01-10 14:26:05,079][00511] Num frames 4100... |
|
[2025-01-10 14:26:05,152][00511] Avg episode rewards: #0: 24.020, true rewards: #0: 10.270 |
|
[2025-01-10 14:26:05,155][00511] Avg episode reward: 24.020, avg true_objective: 10.270 |
|
[2025-01-10 14:26:05,264][00511] Num frames 4200... |
|
[2025-01-10 14:26:05,384][00511] Num frames 4300... |
|
[2025-01-10 14:26:05,501][00511] Num frames 4400... |
|
[2025-01-10 14:26:05,623][00511] Num frames 4500... |
|
[2025-01-10 14:26:05,745][00511] Num frames 4600... |
|
[2025-01-10 14:26:05,863][00511] Num frames 4700... |
|
[2025-01-10 14:26:05,981][00511] Num frames 4800... |
|
[2025-01-10 14:26:06,089][00511] Avg episode rewards: #0: 21.888, true rewards: #0: 9.688 |
|
[2025-01-10 14:26:06,090][00511] Avg episode reward: 21.888, avg true_objective: 9.688 |
|
[2025-01-10 14:26:06,167][00511] Num frames 4900... |
|
[2025-01-10 14:26:06,284][00511] Num frames 5000... |
|
[2025-01-10 14:26:06,401][00511] Num frames 5100... |
|
[2025-01-10 14:26:06,521][00511] Num frames 5200... |
|
[2025-01-10 14:26:06,638][00511] Num frames 5300... |
|
[2025-01-10 14:26:06,755][00511] Num frames 5400... |
|
[2025-01-10 14:26:06,920][00511] Num frames 5500... |
|
[2025-01-10 14:26:07,114][00511] Avg episode rewards: #0: 20.467, true rewards: #0: 9.300 |
|
[2025-01-10 14:26:07,117][00511] Avg episode reward: 20.467, avg true_objective: 9.300 |
|
[2025-01-10 14:26:07,159][00511] Num frames 5600... |
|
[2025-01-10 14:26:07,322][00511] Num frames 5700... |
|
[2025-01-10 14:26:07,486][00511] Num frames 5800... |
|
[2025-01-10 14:26:07,648][00511] Num frames 5900... |
|
[2025-01-10 14:26:07,809][00511] Num frames 6000... |
|
[2025-01-10 14:26:07,962][00511] Num frames 6100... |
|
[2025-01-10 14:26:08,131][00511] Num frames 6200... |
|
[2025-01-10 14:26:08,306][00511] Num frames 6300... |
|
[2025-01-10 14:26:08,474][00511] Num frames 6400... |
|
[2025-01-10 14:26:08,655][00511] Num frames 6500... |
|
[2025-01-10 14:26:08,831][00511] Num frames 6600... |
|
[2025-01-10 14:26:08,998][00511] Num frames 6700... |
|
[2025-01-10 14:26:09,102][00511] Avg episode rewards: #0: 20.890, true rewards: #0: 9.604 |
|
[2025-01-10 14:26:09,104][00511] Avg episode reward: 20.890, avg true_objective: 9.604 |
|
[2025-01-10 14:26:09,221][00511] Num frames 6800... |
|
[2025-01-10 14:26:09,349][00511] Num frames 6900... |
|
[2025-01-10 14:26:09,468][00511] Num frames 7000... |
|
[2025-01-10 14:26:09,593][00511] Num frames 7100... |
|
[2025-01-10 14:26:09,712][00511] Num frames 7200... |
|
[2025-01-10 14:26:09,883][00511] Avg episode rewards: #0: 19.499, true rewards: #0: 9.124 |
|
[2025-01-10 14:26:09,884][00511] Avg episode reward: 19.499, avg true_objective: 9.124 |
|
[2025-01-10 14:26:09,889][00511] Num frames 7300... |
|
[2025-01-10 14:26:10,006][00511] Num frames 7400... |
|
[2025-01-10 14:26:10,137][00511] Num frames 7500... |
|
[2025-01-10 14:26:10,257][00511] Num frames 7600... |
|
[2025-01-10 14:26:10,383][00511] Num frames 7700... |
|
[2025-01-10 14:26:10,504][00511] Num frames 7800... |
|
[2025-01-10 14:26:10,623][00511] Num frames 7900... |
|
[2025-01-10 14:26:10,742][00511] Num frames 8000... |
|
[2025-01-10 14:26:10,862][00511] Num frames 8100... |
|
[2025-01-10 14:26:10,980][00511] Num frames 8200... |
|
[2025-01-10 14:26:11,043][00511] Avg episode rewards: #0: 19.450, true rewards: #0: 9.117 |
|
[2025-01-10 14:26:11,045][00511] Avg episode reward: 19.450, avg true_objective: 9.117 |
|
[2025-01-10 14:26:11,166][00511] Num frames 8300... |
|
[2025-01-10 14:26:11,292][00511] Num frames 8400... |
|
[2025-01-10 14:26:11,417][00511] Num frames 8500... |
|
[2025-01-10 14:26:11,535][00511] Num frames 8600... |
|
[2025-01-10 14:26:11,655][00511] Num frames 8700... |
|
[2025-01-10 14:26:11,812][00511] Avg episode rewards: #0: 18.789, true rewards: #0: 8.789 |
|
[2025-01-10 14:26:11,814][00511] Avg episode reward: 18.789, avg true_objective: 8.789 |
|
[2025-01-10 14:27:01,150][00511] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-01-10 14:33:28,853][00511] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-01-10 14:33:28,855][00511] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-01-10 14:33:28,858][00511] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-01-10 14:33:28,859][00511] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-01-10 14:33:28,861][00511] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-01-10 14:33:28,863][00511] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-01-10 14:33:28,864][00511] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-01-10 14:33:28,866][00511] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-01-10 14:33:28,867][00511] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-01-10 14:33:28,871][00511] Adding new argument 'hf_repository'='Yooniel/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-01-10 14:33:28,872][00511] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-01-10 14:33:28,873][00511] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-01-10 14:33:28,875][00511] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-01-10 14:33:28,877][00511] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-01-10 14:33:28,882][00511] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-01-10 14:33:28,925][00511] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-01-10 14:33:28,927][00511] RunningMeanStd input shape: (1,) |
|
[2025-01-10 14:33:28,946][00511] ConvEncoder: input_channels=3 |
|
[2025-01-10 14:33:29,007][00511] Conv encoder output size: 512 |
|
[2025-01-10 14:33:29,009][00511] Policy head output size: 512 |
|
[2025-01-10 14:33:29,039][00511] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-01-10 14:33:29,655][00511] Num frames 100... |
|
[2025-01-10 14:33:29,818][00511] Num frames 200... |
|
[2025-01-10 14:33:29,978][00511] Num frames 300... |
|
[2025-01-10 14:33:30,145][00511] Num frames 400... |
|
[2025-01-10 14:33:30,350][00511] Num frames 500... |
|
[2025-01-10 14:33:30,522][00511] Num frames 600... |
|
[2025-01-10 14:33:30,693][00511] Num frames 700... |
|
[2025-01-10 14:33:30,820][00511] Avg episode rewards: #0: 12.360, true rewards: #0: 7.360 |
|
[2025-01-10 14:33:30,822][00511] Avg episode reward: 12.360, avg true_objective: 7.360 |
|
[2025-01-10 14:33:30,937][00511] Num frames 800... |
|
[2025-01-10 14:33:31,116][00511] Num frames 900... |
|
[2025-01-10 14:33:31,283][00511] Num frames 1000... |
|
[2025-01-10 14:33:31,433][00511] Num frames 1100... |
|
[2025-01-10 14:33:31,560][00511] Num frames 1200... |
|
[2025-01-10 14:33:31,690][00511] Num frames 1300... |
|
[2025-01-10 14:33:31,817][00511] Num frames 1400... |
|
[2025-01-10 14:33:31,937][00511] Num frames 1500... |
|
[2025-01-10 14:33:32,070][00511] Num frames 1600... |
|
[2025-01-10 14:33:32,191][00511] Num frames 1700... |
|
[2025-01-10 14:33:32,310][00511] Num frames 1800... |
|
[2025-01-10 14:33:32,442][00511] Num frames 1900... |
|
[2025-01-10 14:33:32,562][00511] Num frames 2000... |
|
[2025-01-10 14:33:32,682][00511] Num frames 2100... |
|
[2025-01-10 14:33:32,816][00511] Num frames 2200... |
|
[2025-01-10 14:33:32,938][00511] Num frames 2300... |
|
[2025-01-10 14:33:33,041][00511] Avg episode rewards: #0: 25.180, true rewards: #0: 11.680 |
|
[2025-01-10 14:33:33,043][00511] Avg episode reward: 25.180, avg true_objective: 11.680 |
|
[2025-01-10 14:33:33,132][00511] Num frames 2400... |
|
[2025-01-10 14:33:33,255][00511] Num frames 2500... |
|
[2025-01-10 14:33:33,377][00511] Num frames 2600... |
|
[2025-01-10 14:33:33,497][00511] Num frames 2700... |
|
[2025-01-10 14:33:33,619][00511] Num frames 2800... |
|
[2025-01-10 14:33:33,739][00511] Num frames 2900... |
|
[2025-01-10 14:33:33,870][00511] Num frames 3000... |
|
[2025-01-10 14:33:33,973][00511] Avg episode rewards: #0: 21.800, true rewards: #0: 10.133 |
|
[2025-01-10 14:33:33,975][00511] Avg episode reward: 21.800, avg true_objective: 10.133 |
|
[2025-01-10 14:33:34,053][00511] Num frames 3100... |
|
[2025-01-10 14:33:34,178][00511] Num frames 3200... |
|
[2025-01-10 14:33:34,300][00511] Num frames 3300... |
|
[2025-01-10 14:33:34,418][00511] Num frames 3400... |
|
[2025-01-10 14:33:34,537][00511] Num frames 3500... |
|
[2025-01-10 14:33:34,656][00511] Num frames 3600... |
|
[2025-01-10 14:33:34,774][00511] Num frames 3700... |
|
[2025-01-10 14:33:34,899][00511] Num frames 3800... |
|
[2025-01-10 14:33:35,001][00511] Avg episode rewards: #0: 20.100, true rewards: #0: 9.600 |
|
[2025-01-10 14:33:35,002][00511] Avg episode reward: 20.100, avg true_objective: 9.600 |
|
[2025-01-10 14:33:35,081][00511] Num frames 3900... |
|
[2025-01-10 14:33:35,199][00511] Num frames 4000... |
|
[2025-01-10 14:33:35,321][00511] Num frames 4100... |
|
[2025-01-10 14:33:35,443][00511] Num frames 4200... |
|
[2025-01-10 14:33:35,563][00511] Num frames 4300... |
|
[2025-01-10 14:33:35,684][00511] Num frames 4400... |
|
[2025-01-10 14:33:35,813][00511] Num frames 4500... |
|
[2025-01-10 14:33:35,941][00511] Num frames 4600... |
|
[2025-01-10 14:33:36,069][00511] Num frames 4700... |
|
[2025-01-10 14:33:36,194][00511] Num frames 4800... |
|
[2025-01-10 14:33:36,315][00511] Num frames 4900... |
|
[2025-01-10 14:33:36,436][00511] Num frames 5000... |
|
[2025-01-10 14:33:36,564][00511] Num frames 5100... |
|
[2025-01-10 14:33:36,686][00511] Num frames 5200... |
|
[2025-01-10 14:33:36,809][00511] Num frames 5300... |
|
[2025-01-10 14:33:36,940][00511] Num frames 5400... |
|
[2025-01-10 14:33:37,008][00511] Avg episode rewards: #0: 23.418, true rewards: #0: 10.818 |
|
[2025-01-10 14:33:37,009][00511] Avg episode reward: 23.418, avg true_objective: 10.818 |
|
[2025-01-10 14:33:37,130][00511] Num frames 5500... |
|
[2025-01-10 14:33:37,248][00511] Num frames 5600... |
|
[2025-01-10 14:33:37,370][00511] Num frames 5700... |
|
[2025-01-10 14:33:37,491][00511] Num frames 5800... |
|
[2025-01-10 14:33:37,614][00511] Num frames 5900... |
|
[2025-01-10 14:33:37,733][00511] Num frames 6000... |
|
[2025-01-10 14:33:37,855][00511] Num frames 6100... |
|
[2025-01-10 14:33:37,984][00511] Num frames 6200... |
|
[2025-01-10 14:33:38,112][00511] Num frames 6300... |
|
[2025-01-10 14:33:38,233][00511] Num frames 6400... |
|
[2025-01-10 14:33:38,353][00511] Num frames 6500... |
|
[2025-01-10 14:33:38,522][00511] Avg episode rewards: #0: 24.155, true rewards: #0: 10.988 |
|
[2025-01-10 14:33:38,524][00511] Avg episode reward: 24.155, avg true_objective: 10.988 |
|
[2025-01-10 14:33:38,535][00511] Num frames 6600... |
|
[2025-01-10 14:33:38,654][00511] Num frames 6700... |
|
[2025-01-10 14:33:38,776][00511] Num frames 6800... |
|
[2025-01-10 14:33:38,893][00511] Num frames 6900... |
|
[2025-01-10 14:33:39,022][00511] Num frames 7000... |
|
[2025-01-10 14:33:39,150][00511] Num frames 7100... |
|
[2025-01-10 14:33:39,270][00511] Num frames 7200... |
|
[2025-01-10 14:33:39,365][00511] Avg episode rewards: #0: 22.619, true rewards: #0: 10.333 |
|
[2025-01-10 14:33:39,367][00511] Avg episode reward: 22.619, avg true_objective: 10.333 |
|
[2025-01-10 14:33:39,446][00511] Num frames 7300... |
|
[2025-01-10 14:33:39,569][00511] Num frames 7400... |
|
[2025-01-10 14:33:39,691][00511] Num frames 7500... |
|
[2025-01-10 14:33:39,811][00511] Num frames 7600... |
|
[2025-01-10 14:33:39,932][00511] Num frames 7700... |
|
[2025-01-10 14:33:40,065][00511] Num frames 7800... |
|
[2025-01-10 14:33:40,189][00511] Num frames 7900... |
|
[2025-01-10 14:33:40,310][00511] Num frames 8000... |
|
[2025-01-10 14:33:40,431][00511] Num frames 8100... |
|
[2025-01-10 14:33:40,599][00511] Avg episode rewards: #0: 22.116, true rewards: #0: 10.241 |
|
[2025-01-10 14:33:40,601][00511] Avg episode reward: 22.116, avg true_objective: 10.241 |
|
[2025-01-10 14:33:40,612][00511] Num frames 8200... |
|
[2025-01-10 14:33:40,732][00511] Num frames 8300... |
|
[2025-01-10 14:33:40,853][00511] Num frames 8400... |
|
[2025-01-10 14:33:40,982][00511] Num frames 8500... |
|
[2025-01-10 14:33:41,127][00511] Num frames 8600... |
|
[2025-01-10 14:33:41,245][00511] Num frames 8700... |
|
[2025-01-10 14:33:41,373][00511] Num frames 8800... |
|
[2025-01-10 14:33:41,552][00511] Num frames 8900... |
|
[2025-01-10 14:33:41,738][00511] Num frames 9000... |
|
[2025-01-10 14:33:41,905][00511] Num frames 9100... |
|
[2025-01-10 14:33:42,083][00511] Num frames 9200... |
|
[2025-01-10 14:33:42,259][00511] Num frames 9300... |
|
[2025-01-10 14:33:42,343][00511] Avg episode rewards: #0: 22.681, true rewards: #0: 10.348 |
|
[2025-01-10 14:33:42,347][00511] Avg episode reward: 22.681, avg true_objective: 10.348 |
|
[2025-01-10 14:33:42,491][00511] Num frames 9400... |
|
[2025-01-10 14:33:42,659][00511] Num frames 9500... |
|
[2025-01-10 14:33:42,834][00511] Num frames 9600... |
|
[2025-01-10 14:33:43,008][00511] Avg episode rewards: #0: 20.869, true rewards: #0: 9.669 |
|
[2025-01-10 14:33:43,010][00511] Avg episode reward: 20.869, avg true_objective: 9.669 |
|
[2025-01-10 14:34:36,946][00511] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
|