diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,2149 @@
+[2023-09-27 03:29:15,159][24596] Saving configuration to ./train_atari/atari_skiing/config.json...
+[2023-09-27 03:29:15,475][24596] Rollout worker 0 uses device cpu
+[2023-09-27 03:29:15,476][24596] Rollout worker 1 uses device cpu
+[2023-09-27 03:29:15,477][24596] Rollout worker 2 uses device cpu
+[2023-09-27 03:29:15,477][24596] Rollout worker 3 uses device cpu
+[2023-09-27 03:29:15,478][24596] Rollout worker 4 uses device cpu
+[2023-09-27 03:29:15,478][24596] Rollout worker 5 uses device cpu
+[2023-09-27 03:29:15,479][24596] Rollout worker 6 uses device cpu
+[2023-09-27 03:29:15,479][24596] Rollout worker 7 uses device cpu
+[2023-09-27 03:29:15,479][24596] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1
+[2023-09-27 03:29:15,527][24596] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 03:29:15,528][24596] InferenceWorker_p0-w0: min num requests: 1
+[2023-09-27 03:29:15,531][24596] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-27 03:29:15,531][24596] InferenceWorker_p1-w0: min num requests: 1
+[2023-09-27 03:29:15,555][24596] Starting all processes...
+[2023-09-27 03:29:15,555][24596] Starting process learner_proc0
+[2023-09-27 03:29:17,120][24596] Starting process learner_proc1
+[2023-09-27 03:29:17,123][25378] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 03:29:17,123][25378] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-09-27 03:29:17,141][25378] Num visible devices: 1
+[2023-09-27 03:29:17,157][25378] Starting seed is not provided
+[2023-09-27 03:29:17,158][25378] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 03:29:17,158][25378] Initializing actor-critic model on device cuda:0
+[2023-09-27 03:29:17,158][25378] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-27 03:29:17,159][25378] RunningMeanStd input shape: (1,)
+[2023-09-27 03:29:17,170][25378] ConvEncoder: input_channels=4
+[2023-09-27 03:29:17,337][25378] Conv encoder output size: 512
+[2023-09-27 03:29:17,338][25378] Created Actor Critic model with architecture:
+[2023-09-27 03:29:17,339][25378] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): MultiInputEncoder(
+    (encoders): ModuleDict(
+      (obs): ConvEncoder(
+        (enc): RecursiveScriptModule(
+          original_name=ConvEncoderImpl
+          (conv_head): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Conv2d)
+            (1): RecursiveScriptModule(original_name=ReLU)
+            (2): RecursiveScriptModule(original_name=Conv2d)
+            (3): RecursiveScriptModule(original_name=ReLU)
+            (4): RecursiveScriptModule(original_name=Conv2d)
+            (5): RecursiveScriptModule(original_name=ReLU)
+          )
+          (mlp_layers): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Linear)
+            (1): RecursiveScriptModule(original_name=ReLU)
+          )
+        )
+      )
+    )
+  )
+  (core): ModelCoreIdentity()
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=3, bias=True)
+  )
+)
+[2023-09-27 03:29:17,916][25378] Using optimizer
+[2023-09-27 03:29:17,917][25378] No checkpoints found
+[2023-09-27 03:29:17,917][25378] Did not load from checkpoint, starting from scratch!
+[2023-09-27 03:29:17,917][25378] Initialized policy 0 weights for model version 0
+[2023-09-27 03:29:17,919][25378] LearnerWorker_p0 finished initialization!
+[2023-09-27 03:29:17,919][25378] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 03:29:18,742][24596] Starting all processes...
+[2023-09-27 03:29:18,745][25443] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-27 03:29:18,745][25443] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1
+[2023-09-27 03:29:18,749][24596] Starting process inference_proc0-0
+[2023-09-27 03:29:18,749][24596] Starting process inference_proc1-0
+[2023-09-27 03:29:18,750][24596] Starting process rollout_proc0
+[2023-09-27 03:29:18,750][24596] Starting process rollout_proc1
+[2023-09-27 03:29:18,764][25443] Num visible devices: 1
+[2023-09-27 03:29:18,750][24596] Starting process rollout_proc2
+[2023-09-27 03:29:18,751][24596] Starting process rollout_proc3
+[2023-09-27 03:29:18,780][25443] Starting seed is not provided
+[2023-09-27 03:29:18,780][25443] Using GPUs [0] for process 1 (actually maps to GPUs [1])
+[2023-09-27 03:29:18,781][25443] Initializing actor-critic model on device cuda:0
+[2023-09-27 03:29:18,781][25443] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-27 03:29:18,781][25443] RunningMeanStd input shape: (1,)
+[2023-09-27 03:29:18,754][24596] Starting process rollout_proc4
+[2023-09-27 03:29:18,755][24596] Starting process rollout_proc5
+[2023-09-27 03:29:18,759][24596] Starting process rollout_proc6
+[2023-09-27 03:29:18,760][24596] Starting process rollout_proc7
+[2023-09-27 03:29:18,794][25443] ConvEncoder: input_channels=4
+[2023-09-27 03:29:19,125][25443] Conv encoder output size: 512
+[2023-09-27 03:29:19,127][25443] Created Actor Critic model with architecture:
+[2023-09-27 03:29:19,128][25443] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): MultiInputEncoder(
+    (encoders): ModuleDict(
+      (obs): ConvEncoder(
+        (enc): RecursiveScriptModule(
+          original_name=ConvEncoderImpl
+          (conv_head): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Conv2d)
+            (1): RecursiveScriptModule(original_name=ReLU)
+            (2): RecursiveScriptModule(original_name=Conv2d)
+            (3): RecursiveScriptModule(original_name=ReLU)
+            (4): RecursiveScriptModule(original_name=Conv2d)
+            (5): RecursiveScriptModule(original_name=ReLU)
+          )
+          (mlp_layers): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Linear)
+            (1): RecursiveScriptModule(original_name=ReLU)
+          )
+        )
+      )
+    )
+  )
+  (core): ModelCoreIdentity()
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=3, bias=True)
+  )
+)
+[2023-09-27 03:29:19,740][25443] Using optimizer
+[2023-09-27 03:29:19,741][25443] No checkpoints found
+[2023-09-27 03:29:19,741][25443] Did not load from checkpoint, starting from scratch!
+[2023-09-27 03:29:19,741][25443] Initialized policy 1 weights for model version 0
+[2023-09-27 03:29:19,742][25443] LearnerWorker_p1 finished initialization!
+[2023-09-27 03:29:19,743][25443] Using GPUs [0] for process 1 (actually maps to GPUs [1])
+[2023-09-27 03:29:20,709][25601] Worker 0 uses CPU cores [0, 1, 2, 3]
+[2023-09-27 03:29:20,720][25604] Worker 2 uses CPU cores [8, 9, 10, 11]
+[2023-09-27 03:29:20,730][25606] Worker 4 uses CPU cores [16, 17, 18, 19]
+[2023-09-27 03:29:20,754][25562] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 03:29:20,755][25562] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-09-27 03:29:20,758][25610] Worker 7 uses CPU cores [28, 29, 30, 31]
+[2023-09-27 03:29:20,763][25564] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-27 03:29:20,763][25564] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1
+[2023-09-27 03:29:20,773][25562] Num visible devices: 1
+[2023-09-27 03:29:20,782][25564] Num visible devices: 1
+[2023-09-27 03:29:20,788][25608] Worker 6 uses CPU cores [24, 25, 26, 27]
+[2023-09-27 03:29:20,800][25605] Worker 3 uses CPU cores [12, 13, 14, 15]
+[2023-09-27 03:29:20,855][25602] Worker 1 uses CPU cores [4, 5, 6, 7]
+[2023-09-27 03:29:20,870][25609] Worker 5 uses CPU cores [20, 21, 22, 23]
+[2023-09-27 03:29:21,407][25562] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-27 03:29:21,407][25562] RunningMeanStd input shape: (1,)
+[2023-09-27 03:29:21,414][24596] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-09-27 03:29:21,419][25562] ConvEncoder: input_channels=4
+[2023-09-27 03:29:21,422][25564] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-27 03:29:21,423][25564] RunningMeanStd input shape: (1,)
+[2023-09-27 03:29:21,433][25564] ConvEncoder: input_channels=4
+[2023-09-27 03:29:21,513][25562] Conv encoder output size: 512
+[2023-09-27 03:29:21,519][24596] Inference worker 0-0 is ready!
+[2023-09-27 03:29:21,528][25564] Conv encoder output size: 512
+[2023-09-27 03:29:21,534][24596] Inference worker 1-0 is ready!
+[2023-09-27 03:29:21,535][24596] All inference workers are ready! Signal rollout workers to start!
+[2023-09-27 03:29:21,980][25608] Decorrelating experience for 0 frames...
+[2023-09-27 03:29:21,980][25604] Decorrelating experience for 0 frames...
+[2023-09-27 03:29:22,071][25609] Decorrelating experience for 0 frames...
+[2023-09-27 03:29:22,073][25602] Decorrelating experience for 0 frames...
+[2023-09-27 03:29:22,334][25606] Decorrelating experience for 0 frames...
+[2023-09-27 03:29:22,339][25601] Decorrelating experience for 0 frames...
+[2023-09-27 03:29:22,339][25610] Decorrelating experience for 0 frames...
+[2023-09-27 03:29:22,346][25605] Decorrelating experience for 0 frames...
+[2023-09-27 03:29:26,414][24596] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 8192. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 03:29:31,414][24596] Fps is (10 sec: 3276.9, 60 sec: 3276.9, 300 sec: 3276.9). Total num frames: 32768. Throughput: 0: 409.4, 1: 409.6. Samples: 8190. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 03:29:31,415][24596] Avg episode reward: [(0, '-979.000'), (1, '-1148.000')]
+[2023-09-27 03:29:35,515][24596] Heartbeat connected on Batcher_0
+[2023-09-27 03:29:35,518][24596] Heartbeat connected on LearnerWorker_p0
+[2023-09-27 03:29:35,521][24596] Heartbeat connected on Batcher_1
+[2023-09-27 03:29:35,524][24596] Heartbeat connected on LearnerWorker_p1
+[2023-09-27 03:29:35,530][24596] Heartbeat connected on InferenceWorker_p0-w0
+[2023-09-27 03:29:35,533][24596] Heartbeat connected on InferenceWorker_p1-w0
+[2023-09-27 03:29:35,535][24596] Heartbeat connected on RolloutWorker_w0
+[2023-09-27 03:29:35,537][24596] Heartbeat connected on RolloutWorker_w1
+[2023-09-27 03:29:35,542][24596] Heartbeat connected on RolloutWorker_w2
+[2023-09-27 03:29:35,543][24596] Heartbeat connected on RolloutWorker_w3
+[2023-09-27 03:29:35,546][24596] Heartbeat connected on RolloutWorker_w4
+[2023-09-27 03:29:35,549][24596] Heartbeat connected on RolloutWorker_w5
+[2023-09-27 03:29:35,551][24596] Heartbeat connected on RolloutWorker_w6
+[2023-09-27 03:29:35,554][24596] Heartbeat connected on RolloutWorker_w7
+[2023-09-27 03:29:36,414][24596] Fps is (10 sec: 5734.4, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 65536. Throughput: 0: 423.5, 1: 424.9. Samples: 12726. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:29:36,415][24596] Avg episode reward: [(0, '-1214.500'), (1, '-1335.800')]
+[2023-09-27 03:29:38,316][25564] Updated weights for policy 1, policy_version 160 (0.0017)
+[2023-09-27 03:29:38,316][25562] Updated weights for policy 0, policy_version 160 (0.0018)
+[2023-09-27 03:29:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 4915.3, 300 sec: 4915.3). Total num frames: 98304. Throughput: 0: 563.2, 1: 563.2. Samples: 22528. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:29:41,414][24596] Avg episode reward: [(0, '-1284.750'), (1, '-1336.667')]
+[2023-09-27 03:29:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 5242.9, 300 sec: 5242.9). Total num frames: 131072. Throughput: 0: 636.0, 1: 638.0. Samples: 31851. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 03:29:46,415][24596] Avg episode reward: [(0, '-1329.400'), (1, '-1612.125')]
+[2023-09-27 03:29:51,414][24596] Fps is (10 sec: 5734.4, 60 sec: 5188.3, 300 sec: 5188.3). Total num frames: 155648. Throughput: 0: 607.2, 1: 610.5. Samples: 36531. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-27 03:29:51,415][24596] Avg episode reward: [(0, '-1384.000'), (1, '-1733.500')]
+[2023-09-27 03:29:51,600][25564] Updated weights for policy 1, policy_version 320 (0.0019)
+[2023-09-27 03:29:51,600][25562] Updated weights for policy 0, policy_version 320 (0.0019)
+[2023-09-27 03:29:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 5383.3, 300 sec: 5383.3). Total num frames: 188416. Throughput: 0: 655.9, 1: 654.8. Samples: 45874. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 03:29:56,415][24596] Avg episode reward: [(0, '-1537.000'), (1, '-1709.000')]
+[2023-09-27 03:30:01,414][24596] Fps is (10 sec: 6553.5, 60 sec: 5529.6, 300 sec: 5529.6). Total num frames: 221184. Throughput: 0: 691.7, 1: 692.2. Samples: 55356. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 03:30:01,416][24596] Avg episode reward: [(0, '-1537.000'), (1, '-1760.333')]
+[2023-09-27 03:30:01,417][25378] Saving new best policy, reward=-1537.000!
+[2023-09-27 03:30:01,417][25443] Saving new best policy, reward=-1760.333!
+[2023-09-27 03:30:04,300][25564] Updated weights for policy 1, policy_version 480 (0.0016)
+[2023-09-27 03:30:04,302][25562] Updated weights for policy 0, policy_version 480 (0.0018)
+[2023-09-27 03:30:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 5643.4, 300 sec: 5643.4). Total num frames: 253952. Throughput: 0: 671.1, 1: 671.3. Samples: 60411. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:30:06,415][24596] Avg episode reward: [(0, '-1537.000'), (1, '-1885.733')]
+[2023-09-27 03:30:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 286720. Throughput: 0: 756.0, 1: 756.5. Samples: 70113. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:30:11,415][24596] Avg episode reward: [(0, '-1734.533'), (1, '-1960.882')]
+[2023-09-27 03:30:16,414][24596] Fps is (10 sec: 6553.5, 60 sec: 5808.9, 300 sec: 5808.9). Total num frames: 319488. Throughput: 0: 796.5, 1: 796.4. Samples: 79872. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:30:16,415][24596] Avg episode reward: [(0, '-2060.529'), (1, '-1871.450')]
+[2023-09-27 03:30:16,895][25562] Updated weights for policy 0, policy_version 640 (0.0016)
+[2023-09-27 03:30:16,895][25564] Updated weights for policy 1, policy_version 640 (0.0016)
+[2023-09-27 03:30:21,414][24596] Fps is (10 sec: 6553.8, 60 sec: 5871.0, 300 sec: 5871.0). Total num frames: 352256. Throughput: 0: 800.3, 1: 800.1. Samples: 84745. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 03:30:21,415][24596] Avg episode reward: [(0, '-2196.056'), (1, '-1867.773')]
+[2023-09-27 03:30:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5923.4). Total num frames: 385024. Throughput: 0: 798.3, 1: 798.7. Samples: 94392. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 03:30:26,415][24596] Avg episode reward: [(0, '-2196.056'), (1, '-1837.417')]
+[2023-09-27 03:30:29,502][25562] Updated weights for policy 0, policy_version 800 (0.0017)
+[2023-09-27 03:30:29,502][25564] Updated weights for policy 1, policy_version 800 (0.0015)
+[2023-09-27 03:30:31,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 5968.5). Total num frames: 417792. Throughput: 0: 806.9, 1: 805.3. Samples: 104401. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 03:30:31,415][24596] Avg episode reward: [(0, '-2196.056'), (1, '-1836.080')]
+[2023-09-27 03:30:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6007.5). Total num frames: 450560. Throughput: 0: 805.4, 1: 803.5. Samples: 108930. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:30:36,415][24596] Avg episode reward: [(0, '-2317.526'), (1, '-1836.080')]
+[2023-09-27 03:30:41,414][24596] Fps is (10 sec: 6144.1, 60 sec: 6348.8, 300 sec: 5990.4). Total num frames: 479232. Throughput: 0: 803.3, 1: 804.8. Samples: 118238. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:30:41,415][24596] Avg episode reward: [(0, '-2615.636'), (1, '-1973.074')]
+[2023-09-27 03:30:42,903][25562] Updated weights for policy 0, policy_version 960 (0.0017)
+[2023-09-27 03:30:42,903][25564] Updated weights for policy 1, policy_version 960 (0.0021)
+[2023-09-27 03:30:46,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 5975.3). Total num frames: 507904. Throughput: 0: 799.1, 1: 798.9. Samples: 127266. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:30:46,415][24596] Avg episode reward: [(0, '-2615.636'), (1, '-2063.321')]
+[2023-09-27 03:30:51,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6007.5). Total num frames: 540672. Throughput: 0: 797.5, 1: 798.2. Samples: 132216. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 03:30:51,415][24596] Avg episode reward: [(0, '-2615.636'), (1, '-2063.321')]
+[2023-09-27 03:30:55,613][25562] Updated weights for policy 0, policy_version 1120 (0.0015)
+[2023-09-27 03:30:55,614][25564] Updated weights for policy 1, policy_version 1120 (0.0017)
+[2023-09-27 03:30:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6036.2). Total num frames: 573440. Throughput: 0: 795.0, 1: 794.8. Samples: 141653. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 03:30:56,415][24596] Avg episode reward: [(0, '-2700.346'), (1, '-2167.484')]
+[2023-09-27 03:31:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6062.1). Total num frames: 606208. Throughput: 0: 795.0, 1: 794.8. Samples: 151409. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:31:01,414][24596] Avg episode reward: [(0, '-2643.037'), (1, '-2222.848')]
+[2023-09-27 03:31:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6085.5). Total num frames: 638976. Throughput: 0: 792.8, 1: 792.8. Samples: 156096. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-27 03:31:06,415][24596] Avg episode reward: [(0, '-2643.037'), (1, '-2222.848')]
+[2023-09-27 03:31:08,448][25562] Updated weights for policy 0, policy_version 1280 (0.0017)
+[2023-09-27 03:31:08,448][25564] Updated weights for policy 1, policy_version 1280 (0.0018)
+[2023-09-27 03:31:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6106.8). Total num frames: 671744. Throughput: 0: 794.4, 1: 794.1. Samples: 165877. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 03:31:11,414][24596] Avg episode reward: [(0, '-2643.037'), (1, '-2218.265')]
+[2023-09-27 03:31:11,419][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000001312_335872.pth...
+[2023-09-27 03:31:11,419][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000001312_335872.pth...
+[2023-09-27 03:31:16,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6055.0). Total num frames: 696320. Throughput: 0: 778.3, 1: 779.9. Samples: 174518. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 03:31:16,414][24596] Avg episode reward: [(0, '-2714.552'), (1, '-2281.167')]
+[2023-09-27 03:31:21,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6075.7). Total num frames: 729088. Throughput: 0: 783.1, 1: 782.9. Samples: 179399. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 03:31:21,415][24596] Avg episode reward: [(0, '-2830.000'), (1, '-2315.500')]
+[2023-09-27 03:31:21,758][25562] Updated weights for policy 0, policy_version 1440 (0.0016)
+[2023-09-27 03:31:21,758][25564] Updated weights for policy 1, policy_version 1440 (0.0019)
+[2023-09-27 03:31:26,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.6, 300 sec: 6094.9). Total num frames: 761856. Throughput: 0: 784.2, 1: 784.4. Samples: 188825. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:31:26,414][24596] Avg episode reward: [(0, '-2782.344'), (1, '-2309.513')]
+[2023-09-27 03:31:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6112.5). Total num frames: 794624. Throughput: 0: 791.3, 1: 792.9. Samples: 198553. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:31:31,415][24596] Avg episode reward: [(0, '-2772.853'), (1, '-2321.927')]
+[2023-09-27 03:31:34,743][25562] Updated weights for policy 0, policy_version 1600 (0.0019)
+[2023-09-27 03:31:34,743][25564] Updated weights for policy 1, policy_version 1600 (0.0019)
+[2023-09-27 03:31:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6128.8). Total num frames: 827392. Throughput: 0: 786.0, 1: 785.3. Samples: 202924. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:31:36,414][24596] Avg episode reward: [(0, '-2783.286'), (1, '-2330.071')]
+[2023-09-27 03:31:41,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6348.8, 300 sec: 6144.0). Total num frames: 860160. Throughput: 0: 790.4, 1: 789.1. Samples: 212730. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:31:41,415][24596] Avg episode reward: [(0, '-2783.286'), (1, '-2302.795')]
+[2023-09-27 03:31:46,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6158.1). Total num frames: 892928. Throughput: 0: 784.1, 1: 784.2. Samples: 221983. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:31:46,415][24596] Avg episode reward: [(0, '-2802.250'), (1, '-2306.311')]
+[2023-09-27 03:31:47,665][25562] Updated weights for policy 0, policy_version 1760 (0.0017)
+[2023-09-27 03:31:47,665][25564] Updated weights for policy 1, policy_version 1760 (0.0016)
+[2023-09-27 03:31:51,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6116.7). Total num frames: 917504. Throughput: 0: 787.8, 1: 787.6. Samples: 226989. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 03:31:51,415][24596] Avg episode reward: [(0, '-2888.711'), (1, '-2355.234')]
+[2023-09-27 03:31:56,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6130.8). Total num frames: 950272. Throughput: 0: 784.7, 1: 784.4. Samples: 236485. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 03:31:56,415][24596] Avg episode reward: [(0, '-2930.077'), (1, '-2341.104')]
+[2023-09-27 03:32:00,493][25562] Updated weights for policy 0, policy_version 1920 (0.0017)
+[2023-09-27 03:32:00,493][25564] Updated weights for policy 1, policy_version 1920 (0.0016)
+[2023-09-27 03:32:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6144.0). Total num frames: 983040. Throughput: 0: 792.6, 1: 791.8. Samples: 245816. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:32:01,415][24596] Avg episode reward: [(0, '-2953.700'), (1, '-2341.104')]
+[2023-09-27 03:32:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6156.4). Total num frames: 1015808. Throughput: 0: 791.2, 1: 791.0. Samples: 250595. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 03:32:06,415][24596] Avg episode reward: [(0, '-2953.700'), (1, '-2430.824')]
+[2023-09-27 03:32:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6168.1). Total num frames: 1048576. Throughput: 0: 792.4, 1: 791.5. Samples: 260100. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 03:32:11,415][24596] Avg episode reward: [(0, '-2988.268'), (1, '-2430.824')]
+[2023-09-27 03:32:13,439][25564] Updated weights for policy 1, policy_version 2080 (0.0018)
+[2023-09-27 03:32:13,439][25562] Updated weights for policy 0, policy_version 2080 (0.0017)
+[2023-09-27 03:32:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6179.1). Total num frames: 1081344. Throughput: 0: 791.2, 1: 789.5. Samples: 269688. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 03:32:16,415][24596] Avg episode reward: [(0, '-3024.262'), (1, '-2430.824')]
+[2023-09-27 03:32:21,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6144.0). Total num frames: 1105920. Throughput: 0: 794.6, 1: 793.3. Samples: 274381. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 03:32:21,415][24596] Avg episode reward: [(0, '-3048.953'), (1, '-2486.170')]
+[2023-09-27 03:32:26,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6155.1). Total num frames: 1138688. Throughput: 0: 785.8, 1: 787.7. Samples: 283540. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:32:26,415][24596] Avg episode reward: [(0, '-3082.023'), (1, '-2548.927')]
+[2023-09-27 03:32:26,620][25562] Updated weights for policy 0, policy_version 2240 (0.0019)
+[2023-09-27 03:32:26,620][25564] Updated weights for policy 1, policy_version 2240 (0.0018)
+[2023-09-27 03:32:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6165.6). Total num frames: 1171456. Throughput: 0: 787.5, 1: 787.6. Samples: 292864. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:32:31,414][24596] Avg episode reward: [(0, '-3077.851'), (1, '-2548.927')]
+[2023-09-27 03:32:36,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6175.5). Total num frames: 1204224. Throughput: 0: 782.5, 1: 782.4. Samples: 297406. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 03:32:36,415][24596] Avg episode reward: [(0, '-3077.851'), (1, '-2561.304')]
+[2023-09-27 03:32:39,597][25562] Updated weights for policy 0, policy_version 2400 (0.0016)
+[2023-09-27 03:32:39,597][25564] Updated weights for policy 1, policy_version 2400 (0.0016)
+[2023-09-27 03:32:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6185.0). Total num frames: 1236992. Throughput: 0: 785.6, 1: 785.8. Samples: 307200. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 03:32:41,414][24596] Avg episode reward: [(0, '-3065.250'), (1, '-2563.632')]
+[2023-09-27 03:32:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6194.0). Total num frames: 1269760. Throughput: 0: 787.5, 1: 787.3. Samples: 316682. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 03:32:46,415][24596] Avg episode reward: [(0, '-3051.020'), (1, '-2597.052')]
+[2023-09-27 03:32:51,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6163.5). Total num frames: 1294336. Throughput: 0: 787.0, 1: 786.9. Samples: 321423. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:32:51,415][24596] Avg episode reward: [(0, '-2927.145'), (1, '-2629.305')]
+[2023-09-27 03:32:52,734][25562] Updated weights for policy 0, policy_version 2560 (0.0017)
+[2023-09-27 03:32:52,735][25564] Updated weights for policy 1, policy_version 2560 (0.0019)
+[2023-09-27 03:32:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6172.6). Total num frames: 1327104. Throughput: 0: 784.1, 1: 785.2. Samples: 330717. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:32:56,415][24596] Avg episode reward: [(0, '-2894.000'), (1, '-2623.517')]
+[2023-09-27 03:33:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6181.2). Total num frames: 1359872. Throughput: 0: 782.2, 1: 782.5. Samples: 340097. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:33:01,414][24596] Avg episode reward: [(0, '-2821.593'), (1, '-2673.476')]
+[2023-09-27 03:33:05,576][25562] Updated weights for policy 0, policy_version 2720 (0.0018)
+[2023-09-27 03:33:05,576][25564] Updated weights for policy 1, policy_version 2720 (0.0017)
+[2023-09-27 03:33:06,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6189.5). Total num frames: 1392640. Throughput: 0: 785.6, 1: 786.7. Samples: 345133. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 03:33:06,414][24596] Avg episode reward: [(0, '-2793.623'), (1, '-2670.828')]
+[2023-09-27 03:33:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6197.4). Total num frames: 1425408. Throughput: 0: 789.2, 1: 789.1. Samples: 354567. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 03:33:11,415][24596] Avg episode reward: [(0, '-2781.177'), (1, '-2639.682')]
+[2023-09-27 03:33:11,420][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000002784_712704.pth...
+[2023-09-27 03:33:11,420][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000002784_712704.pth...
+[2023-09-27 03:33:16,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6205.0). Total num frames: 1458176. Throughput: 0: 794.6, 1: 795.1. Samples: 364399. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:33:16,415][24596] Avg episode reward: [(0, '-2740.578'), (1, '-2604.145')]
+[2023-09-27 03:33:18,422][25564] Updated weights for policy 1, policy_version 2880 (0.0018)
+[2023-09-27 03:33:18,422][25562] Updated weights for policy 0, policy_version 2880 (0.0017)
+[2023-09-27 03:33:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6212.3). Total num frames: 1490944. Throughput: 0: 793.3, 1: 793.7. Samples: 368823. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 03:33:21,415][24596] Avg episode reward: [(0, '-2750.104'), (1, '-2604.145')]
+[2023-09-27 03:33:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6219.2). Total num frames: 1523712. Throughput: 0: 795.9, 1: 796.4. Samples: 378856. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 03:33:26,415][24596] Avg episode reward: [(0, '-2737.706'), (1, '-2604.145')]
+[2023-09-27 03:33:31,068][25564] Updated weights for policy 1, policy_version 3040 (0.0017)
+[2023-09-27 03:33:31,069][25562] Updated weights for policy 0, policy_version 3040 (0.0017)
+[2023-09-27 03:33:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6225.9). Total num frames: 1556480. Throughput: 0: 797.4, 1: 797.9. Samples: 388472. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:33:31,415][24596] Avg episode reward: [(0, '-2710.629'), (1, '-2631.314')]
+[2023-09-27 03:33:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6232.3). Total num frames: 1589248. Throughput: 0: 797.6, 1: 797.8. Samples: 393216. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:33:36,415][24596] Avg episode reward: [(0, '-2680.712'), (1, '-2657.676')]
+[2023-09-27 03:33:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6238.5). Total num frames: 1622016. Throughput: 0: 802.9, 1: 802.5. Samples: 402960. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 03:33:41,415][24596] Avg episode reward: [(0, '-2664.716'), (1, '-2697.068')]
+[2023-09-27 03:33:43,800][25564] Updated weights for policy 1, policy_version 3200 (0.0017)
+[2023-09-27 03:33:43,801][25562] Updated weights for policy 0, policy_version 3200 (0.0018)
+[2023-09-27 03:33:46,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6244.5). Total num frames: 1654784. Throughput: 0: 805.1, 1: 804.3. Samples: 412519. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 03:33:46,414][24596] Avg episode reward: [(0, '-2665.867'), (1, '-2697.068')]
+[2023-09-27 03:33:51,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6219.9). Total num frames: 1679360. Throughput: 0: 803.2, 1: 803.1. Samples: 417418. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:33:51,414][24596] Avg episode reward: [(0, '-2653.947'), (1, '-2672.260')]
+[2023-09-27 03:33:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6225.9). Total num frames: 1712128. Throughput: 0: 802.7, 1: 802.7. Samples: 426810. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:33:56,414][24596] Avg episode reward: [(0, '-2652.188'), (1, '-2672.260')]
+[2023-09-27 03:33:56,808][25562] Updated weights for policy 0, policy_version 3360 (0.0016)
+[2023-09-27 03:33:56,809][25564] Updated weights for policy 1, policy_version 3360 (0.0017)
+[2023-09-27 03:34:01,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6231.8). Total num frames: 1744896. Throughput: 0: 798.3, 1: 797.8. Samples: 436224. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:01,414][24596] Avg episode reward: [(0, '-2652.188'), (1, '-2692.038')]
+[2023-09-27 03:34:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6237.4). Total num frames: 1777664. Throughput: 0: 802.3, 1: 802.5. Samples: 441039. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 03:34:06,414][24596] Avg episode reward: [(0, '-2626.000'), (1, '-2692.038')]
+[2023-09-27 03:34:09,630][25562] Updated weights for policy 0, policy_version 3520 (0.0016)
+[2023-09-27 03:34:09,631][25564] Updated weights for policy 1, policy_version 3520 (0.0016)
+[2023-09-27 03:34:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6242.9). Total num frames: 1810432. Throughput: 0: 797.0, 1: 796.4. Samples: 450560. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:11,414][24596] Avg episode reward: [(0, '-2616.435'), (1, '-2714.650')]
+[2023-09-27 03:34:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 1843200. Throughput: 0: 800.7, 1: 800.4. Samples: 460524. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:16,414][24596] Avg episode reward: [(0, '-2616.435'), (1, '-2736.716')]
+[2023-09-27 03:34:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 1875968. Throughput: 0: 798.5, 1: 799.0. Samples: 465105. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 03:34:21,415][24596] Avg episode reward: [(0, '-2621.791'), (1, '-2758.220')]
+[2023-09-27 03:34:22,229][25562] Updated weights for policy 0, policy_version 3680 (0.0018)
+[2023-09-27 03:34:22,229][25564] Updated weights for policy 1, policy_version 3680 (0.0018)
+[2023-09-27 03:34:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 1908736. Throughput: 0: 801.6, 1: 801.1. Samples: 475082. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:26,414][24596] Avg episode reward: [(0, '-2643.448'), (1, '-2773.786')]
+[2023-09-27 03:34:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 1941504. Throughput: 0: 800.5, 1: 801.4. Samples: 484602. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:31,415][24596] Avg episode reward: [(0, '-2664.625'), (1, '-2773.786')]
+[2023-09-27 03:34:35,060][25564] Updated weights for policy 1, policy_version 3840 (0.0016)
+[2023-09-27 03:34:35,060][25562] Updated weights for policy 0, policy_version 3840 (0.0016)
+[2023-09-27 03:34:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 1974272. Throughput: 0: 800.0, 1: 800.6. Samples: 489447. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:36,415][24596] Avg episode reward: [(0, '-2685.270'), (1, '-2794.165')]
+[2023-09-27 03:34:41,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6345.3). Total num frames: 2002944. Throughput: 0: 801.4, 1: 800.7. Samples: 498903. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:41,415][24596] Avg episode reward: [(0, '-2705.467'), (1, '-2794.165')]
+[2023-09-27 03:34:46,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 2031616. Throughput: 0: 801.4, 1: 801.7. Samples: 508363. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:46,415][24596] Avg episode reward: [(0, '-2705.467'), (1, '-2833.494')]
+[2023-09-27 03:34:47,878][25564] Updated weights for policy 1, policy_version 4000 (0.0018)
+[2023-09-27 03:34:47,878][25562] Updated weights for policy 0, policy_version 4000 (0.0018)
+[2023-09-27 03:34:51,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2064384. Throughput: 0: 803.2, 1: 802.5. Samples: 513295. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:51,415][24596] Avg episode reward: [(0, '-2725.220'), (1, '-2852.500')]
+[2023-09-27 03:34:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6359.2). Total num frames: 2097152. Throughput: 0: 805.5, 1: 805.4. Samples: 523051. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 03:34:56,415][24596] Avg episode reward: [(0, '-2763.462'), (1, '-2871.067')]
+[2023-09-27 03:35:00,470][25562] Updated weights for policy 0, policy_version 4160 (0.0015)
+[2023-09-27 03:35:00,470][25564] Updated weights for policy 1, policy_version 4160 (0.0019)
+[2023-09-27 03:35:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2129920. Throughput: 0: 800.5, 1: 800.7. Samples: 532579.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:35:01,415][24596] Avg episode reward: [(0, '-2763.462'), (1, '-2871.067')] +[2023-09-27 03:35:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2162688. Throughput: 0: 803.0, 1: 802.7. Samples: 537363. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:35:06,415][24596] Avg episode reward: [(0, '-2781.957'), (1, '-2883.945')] +[2023-09-27 03:35:11,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2195456. Throughput: 0: 797.2, 1: 796.9. Samples: 546816. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 03:35:11,414][24596] Avg episode reward: [(0, '-2781.957'), (1, '-2918.753')] +[2023-09-27 03:35:11,419][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000004288_1097728.pth... +[2023-09-27 03:35:11,419][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000004288_1097728.pth... +[2023-09-27 03:35:11,470][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000001312_335872.pth +[2023-09-27 03:35:11,471][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000001312_335872.pth +[2023-09-27 03:35:13,730][25562] Updated weights for policy 0, policy_version 4320 (0.0017) +[2023-09-27 03:35:13,731][25564] Updated weights for policy 1, policy_version 4320 (0.0018) +[2023-09-27 03:35:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2228224. Throughput: 0: 792.1, 1: 792.2. Samples: 555896. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 03:35:16,415][24596] Avg episode reward: [(0, '-2817.865'), (1, '-2918.753')] +[2023-09-27 03:35:21,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6345.3). Total num frames: 2256896. Throughput: 0: 794.0, 1: 793.6. Samples: 560886. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:35:21,415][24596] Avg episode reward: [(0, '-2835.216'), (1, '-2918.753')] +[2023-09-27 03:35:26,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6345.3). Total num frames: 2289664. Throughput: 0: 795.9, 1: 795.6. Samples: 570523. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 03:35:26,415][24596] Avg episode reward: [(0, '-2851.879'), (1, '-2918.753')] +[2023-09-27 03:35:26,456][25564] Updated weights for policy 1, policy_version 4480 (0.0018) +[2023-09-27 03:35:26,457][25562] Updated weights for policy 0, policy_version 4480 (0.0016) +[2023-09-27 03:35:31,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 2318336. Throughput: 0: 794.5, 1: 794.3. Samples: 579859. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:35:31,415][24596] Avg episode reward: [(0, '-2851.879'), (1, '-2968.250')] +[2023-09-27 03:35:36,414][24596] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6345.3). Total num frames: 2351104. Throughput: 0: 794.1, 1: 794.6. Samples: 584787. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:35:36,414][24596] Avg episode reward: [(0, '-2851.879'), (1, '-2984.082')] +[2023-09-27 03:35:39,366][25562] Updated weights for policy 0, policy_version 4640 (0.0016) +[2023-09-27 03:35:39,366][25564] Updated weights for policy 1, policy_version 4640 (0.0017) +[2023-09-27 03:35:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6359.2). Total num frames: 2383872. Throughput: 0: 790.4, 1: 790.7. Samples: 594203. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-27 03:35:41,414][24596] Avg episode reward: [(0, '-2910.570'), (1, '-2984.082')] +[2023-09-27 03:35:46,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2416640. Throughput: 0: 794.8, 1: 794.1. Samples: 604077. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:35:46,415][24596] Avg episode reward: [(0, '-2937.670'), (1, '-2984.082')] +[2023-09-27 03:35:51,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2449408. Throughput: 0: 793.0, 1: 792.9. Samples: 608728. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:35:51,414][24596] Avg episode reward: [(0, '-2941.340'), (1, '-3014.808')] +[2023-09-27 03:35:52,064][25562] Updated weights for policy 0, policy_version 4800 (0.0017) +[2023-09-27 03:35:52,064][25564] Updated weights for policy 1, policy_version 4800 (0.0017) +[2023-09-27 03:35:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2482176. Throughput: 0: 796.4, 1: 796.4. Samples: 618496. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:35:56,414][24596] Avg episode reward: [(0, '-2941.340'), (1, '-3063.210')] +[2023-09-27 03:36:01,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2514944. Throughput: 0: 800.7, 1: 800.5. Samples: 627953. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:36:01,415][24596] Avg episode reward: [(0, '-3007.010'), (1, '-3063.210')] +[2023-09-27 03:36:04,980][25562] Updated weights for policy 0, policy_version 4960 (0.0016) +[2023-09-27 03:36:04,981][25564] Updated weights for policy 1, policy_version 4960 (0.0019) +[2023-09-27 03:36:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2547712. Throughput: 0: 799.5, 1: 799.3. Samples: 632832. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:36:06,414][24596] Avg episode reward: [(0, '-3007.010'), (1, '-3063.210')] +[2023-09-27 03:36:11,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6373.1). Total num frames: 2576384. Throughput: 0: 797.3, 1: 798.6. Samples: 642340. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:36:11,415][24596] Avg episode reward: [(0, '-3067.290'), (1, '-3063.210')] +[2023-09-27 03:36:16,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 2605056. Throughput: 0: 797.6, 1: 797.4. Samples: 651637. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-27 03:36:16,415][24596] Avg episode reward: [(0, '-3067.290'), (1, '-3152.490')] +[2023-09-27 03:36:17,968][25562] Updated weights for policy 0, policy_version 5120 (0.0014) +[2023-09-27 03:36:17,968][25564] Updated weights for policy 1, policy_version 5120 (0.0016) +[2023-09-27 03:36:21,414][24596] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6359.2). Total num frames: 2637824. Throughput: 0: 796.2, 1: 796.3. Samples: 656450. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:36:21,415][24596] Avg episode reward: [(0, '-3067.290'), (1, '-3188.080')] +[2023-09-27 03:36:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6359.2). Total num frames: 2670592. Throughput: 0: 798.1, 1: 798.0. Samples: 666029. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:36:26,415][24596] Avg episode reward: [(0, '-3127.200'), (1, '-3188.080')] +[2023-09-27 03:36:30,655][25562] Updated weights for policy 0, policy_version 5280 (0.0017) +[2023-09-27 03:36:30,655][25564] Updated weights for policy 1, policy_version 5280 (0.0017) +[2023-09-27 03:36:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2703360. Throughput: 0: 797.2, 1: 797.5. Samples: 675840. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:36:31,414][24596] Avg episode reward: [(0, '-3152.970'), (1, '-3188.080')] +[2023-09-27 03:36:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 2736128. Throughput: 0: 794.8, 1: 794.7. Samples: 680255. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:36:36,415][24596] Avg episode reward: [(0, '-3171.040'), (1, '-3219.710')] +[2023-09-27 03:36:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6359.2). Total num frames: 2768896. Throughput: 0: 795.2, 1: 796.0. Samples: 690098. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:36:41,415][24596] Avg episode reward: [(0, '-3171.040'), (1, '-3281.290')] +[2023-09-27 03:36:43,570][25564] Updated weights for policy 1, policy_version 5440 (0.0018) +[2023-09-27 03:36:43,570][25562] Updated weights for policy 0, policy_version 5440 (0.0016) +[2023-09-27 03:36:46,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 2801664. Throughput: 0: 794.8, 1: 796.8. Samples: 699576. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:36:46,415][24596] Avg episode reward: [(0, '-3225.140'), (1, '-3281.290')] +[2023-09-27 03:36:51,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 2826240. Throughput: 0: 792.3, 1: 791.9. Samples: 704120. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:36:51,415][24596] Avg episode reward: [(0, '-3225.140'), (1, '-3281.290')] +[2023-09-27 03:36:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 2859008. Throughput: 0: 791.0, 1: 790.5. Samples: 713507. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:36:56,415][24596] Avg episode reward: [(0, '-3225.190'), (1, '-3281.290')] +[2023-09-27 03:36:56,751][25564] Updated weights for policy 1, policy_version 5600 (0.0019) +[2023-09-27 03:36:56,751][25562] Updated weights for policy 0, policy_version 5600 (0.0017) +[2023-09-27 03:37:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 2891776. Throughput: 0: 792.4, 1: 792.4. Samples: 722953. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:37:01,416][24596] Avg episode reward: [(0, '-3225.190'), (1, '-3358.810')] +[2023-09-27 03:37:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 2924544. Throughput: 0: 793.4, 1: 793.1. Samples: 727846. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:37:06,415][24596] Avg episode reward: [(0, '-3225.190'), (1, '-3373.010')] +[2023-09-27 03:37:09,468][25564] Updated weights for policy 1, policy_version 5760 (0.0016) +[2023-09-27 03:37:09,469][25562] Updated weights for policy 0, policy_version 5760 (0.0017) +[2023-09-27 03:37:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6359.2). Total num frames: 2957312. Throughput: 0: 792.7, 1: 793.1. Samples: 737389. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:37:11,415][24596] Avg episode reward: [(0, '-3225.230'), (1, '-3373.010')] +[2023-09-27 03:37:11,425][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000005776_1478656.pth... +[2023-09-27 03:37:11,425][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000005776_1478656.pth... +[2023-09-27 03:37:11,453][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000002784_712704.pth +[2023-09-27 03:37:11,462][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000002784_712704.pth +[2023-09-27 03:37:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 2990080. Throughput: 0: 795.1, 1: 796.3. Samples: 747453. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-27 03:37:16,415][24596] Avg episode reward: [(0, '-3225.230'), (1, '-3373.010')] +[2023-09-27 03:37:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3022848. Throughput: 0: 796.1, 1: 796.1. Samples: 751904. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:37:21,415][24596] Avg episode reward: [(0, '-3225.240'), (1, '-3403.870')] +[2023-09-27 03:37:22,341][25564] Updated weights for policy 1, policy_version 5920 (0.0018) +[2023-09-27 03:37:22,341][25562] Updated weights for policy 0, policy_version 5920 (0.0018) +[2023-09-27 03:37:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3055616. Throughput: 0: 792.0, 1: 793.8. Samples: 761461. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:37:26,415][24596] Avg episode reward: [(0, '-3225.240'), (1, '-3461.820')] +[2023-09-27 03:37:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3088384. Throughput: 0: 797.3, 1: 795.6. Samples: 771254. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:37:31,414][24596] Avg episode reward: [(0, '-3225.270'), (1, '-3461.820')] +[2023-09-27 03:37:35,070][25564] Updated weights for policy 1, policy_version 6080 (0.0017) +[2023-09-27 03:37:35,070][25562] Updated weights for policy 0, policy_version 6080 (0.0018) +[2023-09-27 03:37:36,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3121152. Throughput: 0: 800.2, 1: 800.7. Samples: 776163. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:37:36,414][24596] Avg episode reward: [(0, '-3225.250'), (1, '-3461.820')] +[2023-09-27 03:37:41,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3153920. Throughput: 0: 803.2, 1: 803.3. Samples: 785797. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:37:41,415][24596] Avg episode reward: [(0, '-3248.650'), (1, '-3461.820')] +[2023-09-27 03:37:46,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 3186688. Throughput: 0: 804.1, 1: 804.9. Samples: 795357. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:37:46,415][24596] Avg episode reward: [(0, '-3248.650'), (1, '-3555.960')] +[2023-09-27 03:37:47,622][25564] Updated weights for policy 1, policy_version 6240 (0.0017) +[2023-09-27 03:37:47,622][25562] Updated weights for policy 0, policy_version 6240 (0.0015) +[2023-09-27 03:37:51,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3211264. Throughput: 0: 807.8, 1: 807.6. Samples: 800536. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:37:51,415][24596] Avg episode reward: [(0, '-3248.650'), (1, '-3581.530')] +[2023-09-27 03:37:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3244032. Throughput: 0: 807.6, 1: 807.8. Samples: 810080. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:37:56,415][24596] Avg episode reward: [(0, '-3278.720'), (1, '-3581.530')] +[2023-09-27 03:38:00,339][25564] Updated weights for policy 1, policy_version 6400 (0.0015) +[2023-09-27 03:38:00,339][25562] Updated weights for policy 0, policy_version 6400 (0.0014) +[2023-09-27 03:38:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3276800. Throughput: 0: 801.6, 1: 800.6. Samples: 819549. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:38:01,415][24596] Avg episode reward: [(0, '-3312.240'), (1, '-3581.530')] +[2023-09-27 03:38:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3309568. Throughput: 0: 806.1, 1: 806.7. Samples: 824482. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:38:06,415][24596] Avg episode reward: [(0, '-3328.710'), (1, '-3609.450')] +[2023-09-27 03:38:11,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3342336. Throughput: 0: 807.4, 1: 805.1. Samples: 834021. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:38:11,414][24596] Avg episode reward: [(0, '-3328.710'), (1, '-3696.530')] +[2023-09-27 03:38:13,062][25562] Updated weights for policy 0, policy_version 6560 (0.0017) +[2023-09-27 03:38:13,062][25564] Updated weights for policy 1, policy_version 6560 (0.0018) +[2023-09-27 03:38:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3375104. Throughput: 0: 806.1, 1: 805.5. Samples: 843776. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:38:16,414][24596] Avg episode reward: [(0, '-3328.710'), (1, '-3696.530')] +[2023-09-27 03:38:21,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3407872. Throughput: 0: 801.0, 1: 801.0. Samples: 848252. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:38:21,415][24596] Avg episode reward: [(0, '-3328.710'), (1, '-3696.530')] +[2023-09-27 03:38:25,929][25562] Updated weights for policy 0, policy_version 6720 (0.0016) +[2023-09-27 03:38:25,929][25564] Updated weights for policy 1, policy_version 6720 (0.0017) +[2023-09-27 03:38:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3440640. Throughput: 0: 803.8, 1: 803.2. Samples: 858112. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 03:38:26,414][24596] Avg episode reward: [(0, '-3360.630'), (1, '-3696.530')] +[2023-09-27 03:38:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3473408. Throughput: 0: 801.7, 1: 801.5. Samples: 867504. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:38:31,415][24596] Avg episode reward: [(0, '-3360.630'), (1, '-3712.880')] +[2023-09-27 03:38:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3506176. Throughput: 0: 799.2, 1: 798.9. Samples: 872448. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:38:36,414][24596] Avg episode reward: [(0, '-3360.630'), (1, '-3716.210')] +[2023-09-27 03:38:38,912][25564] Updated weights for policy 1, policy_version 6880 (0.0019) +[2023-09-27 03:38:38,912][25562] Updated weights for policy 0, policy_version 6880 (0.0017) +[2023-09-27 03:38:41,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 3530752. Throughput: 0: 797.2, 1: 795.6. Samples: 881753. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:38:41,415][24596] Avg episode reward: [(0, '-3398.280'), (1, '-3716.210')] +[2023-09-27 03:38:46,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 3563520. Throughput: 0: 796.8, 1: 796.8. Samples: 891263. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 03:38:46,415][24596] Avg episode reward: [(0, '-3411.910'), (1, '-3716.210')] +[2023-09-27 03:38:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3596288. Throughput: 0: 799.1, 1: 798.9. Samples: 896391. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:38:51,415][24596] Avg episode reward: [(0, '-3422.300'), (1, '-3732.770')] +[2023-09-27 03:38:51,576][25562] Updated weights for policy 0, policy_version 7040 (0.0017) +[2023-09-27 03:38:51,576][25564] Updated weights for policy 1, policy_version 7040 (0.0017) +[2023-09-27 03:38:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3629056. Throughput: 0: 799.8, 1: 800.3. Samples: 906025. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:38:56,415][24596] Avg episode reward: [(0, '-3422.300'), (1, '-3782.260')] +[2023-09-27 03:39:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3661824. Throughput: 0: 797.8, 1: 798.2. Samples: 915597. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-27 03:39:01,415][24596] Avg episode reward: [(0, '-3423.490'), (1, '-3782.260')] +[2023-09-27 03:39:04,250][25562] Updated weights for policy 0, policy_version 7200 (0.0018) +[2023-09-27 03:39:04,251][25564] Updated weights for policy 1, policy_version 7200 (0.0019) +[2023-09-27 03:39:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 3694592. Throughput: 0: 803.8, 1: 804.2. Samples: 920611. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 03:39:06,414][24596] Avg episode reward: [(0, '-3423.490'), (1, '-3782.260')] +[2023-09-27 03:39:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 3727360. Throughput: 0: 800.2, 1: 800.5. Samples: 930142. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 03:39:11,415][24596] Avg episode reward: [(0, '-3425.920'), (1, '-3782.260')] +[2023-09-27 03:39:11,424][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000007280_1863680.pth... +[2023-09-27 03:39:11,424][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000007280_1863680.pth... +[2023-09-27 03:39:11,460][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000004288_1097728.pth +[2023-09-27 03:39:11,462][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000004288_1097728.pth +[2023-09-27 03:39:16,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 3760128. Throughput: 0: 806.2, 1: 805.6. Samples: 940032. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 03:39:16,415][24596] Avg episode reward: [(0, '-3425.920'), (1, '-3829.710')] +[2023-09-27 03:39:16,973][25564] Updated weights for policy 1, policy_version 7360 (0.0019) +[2023-09-27 03:39:16,974][25562] Updated weights for policy 0, policy_version 7360 (0.0019) +[2023-09-27 03:39:21,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). 
Total num frames: 3792896. Throughput: 0: 800.9, 1: 801.6. Samples: 944562. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:39:21,414][24596] Avg episode reward: [(0, '-3425.920'), (1, '-3861.160')] +[2023-09-27 03:39:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 3825664. Throughput: 0: 805.2, 1: 806.8. Samples: 954294. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:39:26,415][24596] Avg episode reward: [(0, '-3427.260'), (1, '-3861.160')] +[2023-09-27 03:39:29,754][25564] Updated weights for policy 1, policy_version 7520 (0.0018) +[2023-09-27 03:39:29,754][25562] Updated weights for policy 0, policy_version 7520 (0.0017) +[2023-09-27 03:39:31,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 3858432. Throughput: 0: 809.1, 1: 809.6. Samples: 964101. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:39:31,415][24596] Avg episode reward: [(0, '-3431.450'), (1, '-3861.160')] +[2023-09-27 03:39:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6400.9). Total num frames: 3891200. Throughput: 0: 803.8, 1: 803.2. Samples: 968704. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:39:36,415][24596] Avg episode reward: [(0, '-3431.450'), (1, '-3861.170')] +[2023-09-27 03:39:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 3923968. Throughput: 0: 806.1, 1: 805.8. Samples: 978562. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:39:41,415][24596] Avg episode reward: [(0, '-3431.450'), (1, '-3924.170')] +[2023-09-27 03:39:42,497][25562] Updated weights for policy 0, policy_version 7680 (0.0016) +[2023-09-27 03:39:42,497][25564] Updated weights for policy 1, policy_version 7680 (0.0019) +[2023-09-27 03:39:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 3956736. Throughput: 0: 805.1, 1: 805.4. Samples: 988067. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 03:39:46,415][24596] Avg episode reward: [(0, '-3461.970'), (1, '-3924.170')] +[2023-09-27 03:39:51,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 3989504. Throughput: 0: 805.5, 1: 804.7. Samples: 993070. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 03:39:51,415][24596] Avg episode reward: [(0, '-3472.290'), (1, '-3924.170')] +[2023-09-27 03:39:55,256][25562] Updated weights for policy 0, policy_version 7840 (0.0018) +[2023-09-27 03:39:55,257][25564] Updated weights for policy 1, policy_version 7840 (0.0020) +[2023-09-27 03:39:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 4014080. Throughput: 0: 803.5, 1: 804.1. Samples: 1002483. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:39:56,415][24596] Avg episode reward: [(0, '-3472.290'), (1, '-3924.170')] +[2023-09-27 03:40:01,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 4046848. Throughput: 0: 798.9, 1: 799.7. Samples: 1011966. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:40:01,414][24596] Avg episode reward: [(0, '-3467.030'), (1, '-3998.000')] +[2023-09-27 03:40:06,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 4079616. Throughput: 0: 804.7, 1: 804.3. Samples: 1016967. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:40:06,414][24596] Avg episode reward: [(0, '-3500.730'), (1, '-4018.420')] +[2023-09-27 03:40:08,106][25564] Updated weights for policy 1, policy_version 8000 (0.0019) +[2023-09-27 03:40:08,106][25562] Updated weights for policy 0, policy_version 8000 (0.0018) +[2023-09-27 03:40:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 4112384. Throughput: 0: 800.0, 1: 799.8. Samples: 1026282. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:40:11,414][24596] Avg episode reward: [(0, '-3512.910'), (1, '-4018.420')] +[2023-09-27 03:40:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6400.9). Total num frames: 4145152. Throughput: 0: 797.8, 1: 797.0. Samples: 1035866. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:40:16,415][24596] Avg episode reward: [(0, '-3521.980'), (1, '-4018.420')] +[2023-09-27 03:40:21,165][25562] Updated weights for policy 0, policy_version 8160 (0.0018) +[2023-09-27 03:40:21,165][25564] Updated weights for policy 1, policy_version 8160 (0.0017) +[2023-09-27 03:40:21,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6400.9). Total num frames: 4177920. Throughput: 0: 797.1, 1: 797.6. Samples: 1040466. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:40:21,415][24596] Avg episode reward: [(0, '-3548.600'), (1, '-4014.410')] +[2023-09-27 03:40:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4210688. Throughput: 0: 796.3, 1: 797.0. Samples: 1050263. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:40:26,414][24596] Avg episode reward: [(0, '-3570.260'), (1, '-4069.040')] +[2023-09-27 03:40:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 4243456. Throughput: 0: 798.3, 1: 797.9. Samples: 1059899. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:40:31,415][24596] Avg episode reward: [(0, '-3567.990'), (1, '-4069.040')] +[2023-09-27 03:40:33,781][25564] Updated weights for policy 1, policy_version 8320 (0.0017) +[2023-09-27 03:40:33,781][25562] Updated weights for policy 0, policy_version 8320 (0.0018) +[2023-09-27 03:40:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4276224. Throughput: 0: 797.8, 1: 798.9. Samples: 1064919. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:40:36,414][24596] Avg episode reward: [(0, '-3567.990'), (1, '-4052.880')] +[2023-09-27 03:40:41,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 4300800. Throughput: 0: 798.2, 1: 799.3. Samples: 1074372. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:40:41,415][24596] Avg episode reward: [(0, '-3568.860'), (1, '-4052.880')] +[2023-09-27 03:40:46,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 4333568. Throughput: 0: 795.1, 1: 794.6. Samples: 1083502. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:40:46,415][24596] Avg episode reward: [(0, '-3568.860'), (1, '-4066.490')] +[2023-09-27 03:40:46,703][25562] Updated weights for policy 0, policy_version 8480 (0.0017) +[2023-09-27 03:40:46,704][25564] Updated weights for policy 1, policy_version 8480 (0.0018) +[2023-09-27 03:40:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 4366336. Throughput: 0: 795.7, 1: 795.5. Samples: 1088574. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:40:51,415][24596] Avg episode reward: [(0, '-3622.720'), (1, '-4078.580')] +[2023-09-27 03:40:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 4399104. Throughput: 0: 798.7, 1: 799.4. Samples: 1098199. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 03:40:56,415][24596] Avg episode reward: [(0, '-3623.450'), (1, '-4084.420')] +[2023-09-27 03:40:59,322][25562] Updated weights for policy 0, policy_version 8640 (0.0016) +[2023-09-27 03:40:59,323][25564] Updated weights for policy 1, policy_version 8640 (0.0017) +[2023-09-27 03:41:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 4431872. Throughput: 0: 801.1, 1: 801.2. Samples: 1107969. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 03:41:01,415][24596] Avg episode reward: [(0, '-3635.670'), (1, '-4084.420')] +[2023-09-27 03:41:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6400.9). Total num frames: 4464640. Throughput: 0: 804.5, 1: 804.7. Samples: 1112881. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-27 03:41:06,415][24596] Avg episode reward: [(0, '-3635.670'), (1, '-4084.420')] +[2023-09-27 03:41:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 4497408. Throughput: 0: 801.1, 1: 800.0. Samples: 1122309. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:41:11,415][24596] Avg episode reward: [(0, '-3661.650'), (1, '-4115.220')] +[2023-09-27 03:41:11,426][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000008784_2248704.pth... +[2023-09-27 03:41:11,426][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000008784_2248704.pth... +[2023-09-27 03:41:11,454][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000005776_1478656.pth +[2023-09-27 03:41:11,460][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000005776_1478656.pth +[2023-09-27 03:41:12,058][25562] Updated weights for policy 0, policy_version 8800 (0.0017) +[2023-09-27 03:41:12,058][25564] Updated weights for policy 1, policy_version 8800 (0.0017) +[2023-09-27 03:41:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 4530176. Throughput: 0: 804.3, 1: 804.3. Samples: 1132287. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:41:16,415][24596] Avg episode reward: [(0, '-3715.900'), (1, '-4115.220')] +[2023-09-27 03:41:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4562944. Throughput: 0: 799.0, 1: 798.4. Samples: 1136802. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:41:21,415][24596] Avg episode reward: [(0, '-3739.410'), (1, '-4115.250')] +[2023-09-27 03:41:24,767][25562] Updated weights for policy 0, policy_version 8960 (0.0018) +[2023-09-27 03:41:24,767][25564] Updated weights for policy 1, policy_version 8960 (0.0018) +[2023-09-27 03:41:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 4595712. Throughput: 0: 806.1, 1: 804.6. Samples: 1146855. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 03:41:26,415][24596] Avg episode reward: [(0, '-3733.080'), (1, '-4115.250')] +[2023-09-27 03:41:31,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 4620288. Throughput: 0: 803.7, 1: 803.8. Samples: 1155836. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 03:41:31,415][24596] Avg episode reward: [(0, '-3731.120'), (1, '-4137.500')] +[2023-09-27 03:41:36,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 4653056. Throughput: 0: 800.6, 1: 800.6. Samples: 1160627. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:41:36,415][24596] Avg episode reward: [(0, '-3730.850'), (1, '-4137.550')] +[2023-09-27 03:41:37,802][25564] Updated weights for policy 1, policy_version 9120 (0.0017) +[2023-09-27 03:41:37,803][25562] Updated weights for policy 0, policy_version 9120 (0.0016) +[2023-09-27 03:41:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 4685824. Throughput: 0: 801.9, 1: 801.4. Samples: 1170347. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:41:41,415][24596] Avg episode reward: [(0, '-3722.550'), (1, '-4159.950')] +[2023-09-27 03:41:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4718592. Throughput: 0: 796.6, 1: 796.7. Samples: 1179664. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:41:46,415][24596] Avg episode reward: [(0, '-3722.550'), (1, '-4159.950')] +[2023-09-27 03:41:50,745][25562] Updated weights for policy 0, policy_version 9280 (0.0018) +[2023-09-27 03:41:50,746][25564] Updated weights for policy 1, policy_version 9280 (0.0015) +[2023-09-27 03:41:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4751360. Throughput: 0: 796.8, 1: 797.9. Samples: 1184646. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:41:51,415][24596] Avg episode reward: [(0, '-3722.550'), (1, '-4159.950')] +[2023-09-27 03:41:56,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 4784128. Throughput: 0: 796.6, 1: 796.8. Samples: 1194008. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:41:56,415][24596] Avg episode reward: [(0, '-3774.000'), (1, '-4211.580')] +[2023-09-27 03:42:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4816896. Throughput: 0: 798.6, 1: 799.1. Samples: 1204184. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 03:42:01,415][24596] Avg episode reward: [(0, '-3799.790'), (1, '-4211.580')] +[2023-09-27 03:42:03,272][25564] Updated weights for policy 1, policy_version 9440 (0.0018) +[2023-09-27 03:42:03,273][25562] Updated weights for policy 0, policy_version 9440 (0.0019) +[2023-09-27 03:42:06,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4849664. Throughput: 0: 800.1, 1: 799.9. Samples: 1208804. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 03:42:06,415][24596] Avg episode reward: [(0, '-3786.180'), (1, '-4239.590')] +[2023-09-27 03:42:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4882432. Throughput: 0: 797.0, 1: 796.5. Samples: 1218564. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:42:11,415][24596] Avg episode reward: [(0, '-3761.970'), (1, '-4239.590')] +[2023-09-27 03:42:15,931][25564] Updated weights for policy 1, policy_version 9600 (0.0018) +[2023-09-27 03:42:15,931][25562] Updated weights for policy 0, policy_version 9600 (0.0017) +[2023-09-27 03:42:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4915200. Throughput: 0: 808.2, 1: 807.1. Samples: 1228525. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:42:16,415][24596] Avg episode reward: [(0, '-3737.200'), (1, '-4292.240')] +[2023-09-27 03:42:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 4947968. Throughput: 0: 803.7, 1: 803.8. Samples: 1232961. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:42:21,415][24596] Avg episode reward: [(0, '-3692.420'), (1, '-4320.080')] +[2023-09-27 03:42:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 4980736. Throughput: 0: 806.5, 1: 805.6. Samples: 1242892. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 03:42:26,415][24596] Avg episode reward: [(0, '-3692.420'), (1, '-4320.020')] +[2023-09-27 03:42:28,680][25564] Updated weights for policy 1, policy_version 9760 (0.0018) +[2023-09-27 03:42:28,680][25562] Updated weights for policy 0, policy_version 9760 (0.0017) +[2023-09-27 03:42:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6414.7). Total num frames: 5013504. Throughput: 0: 809.5, 1: 809.7. Samples: 1252528. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 03:42:31,415][24596] Avg episode reward: [(0, '-3692.460'), (1, '-4320.020')] +[2023-09-27 03:42:36,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 5046272. Throughput: 0: 810.1, 1: 808.3. Samples: 1257472. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:42:36,415][24596] Avg episode reward: [(0, '-3692.500'), (1, '-4320.020')] +[2023-09-27 03:42:41,229][25564] Updated weights for policy 1, policy_version 9920 (0.0018) +[2023-09-27 03:42:41,230][25562] Updated weights for policy 0, policy_version 9920 (0.0016) +[2023-09-27 03:42:41,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 5079040. Throughput: 0: 814.2, 1: 814.2. Samples: 1267284. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:42:41,414][24596] Avg episode reward: [(0, '-3691.920'), (1, '-4320.010')] +[2023-09-27 03:42:46,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6428.6). Total num frames: 5107712. Throughput: 0: 806.1, 1: 806.9. Samples: 1276772. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:42:46,415][24596] Avg episode reward: [(0, '-3663.380'), (1, '-4320.010')] +[2023-09-27 03:42:51,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 5136384. Throughput: 0: 807.3, 1: 807.7. Samples: 1281478. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:42:51,415][24596] Avg episode reward: [(0, '-3663.380'), (1, '-4346.270')] +[2023-09-27 03:42:54,287][25562] Updated weights for policy 0, policy_version 10080 (0.0017) +[2023-09-27 03:42:54,287][25564] Updated weights for policy 1, policy_version 10080 (0.0018) +[2023-09-27 03:42:56,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 5169152. Throughput: 0: 801.5, 1: 801.7. Samples: 1290710. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:42:56,415][24596] Avg episode reward: [(0, '-3663.380'), (1, '-4346.270')] +[2023-09-27 03:43:01,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 5201920. Throughput: 0: 798.6, 1: 799.2. Samples: 1300429. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:43:01,415][24596] Avg episode reward: [(0, '-3642.270'), (1, '-4395.620')] +[2023-09-27 03:43:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 5234688. Throughput: 0: 799.6, 1: 799.6. Samples: 1304924. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:43:06,415][24596] Avg episode reward: [(0, '-3659.150'), (1, '-4419.610')] +[2023-09-27 03:43:07,150][25564] Updated weights for policy 1, policy_version 10240 (0.0018) +[2023-09-27 03:43:07,150][25562] Updated weights for policy 0, policy_version 10240 (0.0021) +[2023-09-27 03:43:11,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 5267456. Throughput: 0: 799.0, 1: 799.3. Samples: 1314816. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:43:11,414][24596] Avg episode reward: [(0, '-3659.080'), (1, '-4419.570')] +[2023-09-27 03:43:11,421][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000010288_2633728.pth... +[2023-09-27 03:43:11,421][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000010288_2633728.pth... +[2023-09-27 03:43:11,459][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000007280_1863680.pth +[2023-09-27 03:43:11,462][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000007280_1863680.pth +[2023-09-27 03:43:16,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 5300224. Throughput: 0: 795.9, 1: 795.8. Samples: 1324155. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:43:16,414][24596] Avg episode reward: [(0, '-3659.080'), (1, '-4419.570')] +[2023-09-27 03:43:20,090][25562] Updated weights for policy 0, policy_version 10400 (0.0019) +[2023-09-27 03:43:20,090][25564] Updated weights for policy 1, policy_version 10400 (0.0019) +[2023-09-27 03:43:21,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 5332992. Throughput: 0: 795.3, 1: 794.5. Samples: 1329013. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:43:21,415][24596] Avg episode reward: [(0, '-3671.990'), (1, '-4419.570')] +[2023-09-27 03:43:26,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 5357568. Throughput: 0: 791.5, 1: 792.4. Samples: 1338558. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:43:26,415][24596] Avg episode reward: [(0, '-3676.560'), (1, '-4440.640')] +[2023-09-27 03:43:31,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6387.0). Total num frames: 5390336. Throughput: 0: 790.6, 1: 789.1. Samples: 1347857. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:43:31,414][24596] Avg episode reward: [(0, '-3699.880'), (1, '-4440.640')] +[2023-09-27 03:43:32,957][25564] Updated weights for policy 1, policy_version 10560 (0.0015) +[2023-09-27 03:43:32,957][25562] Updated weights for policy 0, policy_version 10560 (0.0017) +[2023-09-27 03:43:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6414.8). Total num frames: 5423104. Throughput: 0: 792.3, 1: 792.6. Samples: 1352797. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:43:36,415][24596] Avg episode reward: [(0, '-3699.880'), (1, '-4440.660')] +[2023-09-27 03:43:41,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 5455872. Throughput: 0: 791.8, 1: 791.9. Samples: 1361977. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:43:41,415][24596] Avg episode reward: [(0, '-3681.770'), (1, '-4440.660')] +[2023-09-27 03:43:46,138][25562] Updated weights for policy 0, policy_version 10720 (0.0016) +[2023-09-27 03:43:46,139][25564] Updated weights for policy 1, policy_version 10720 (0.0019) +[2023-09-27 03:43:46,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6348.8, 300 sec: 6414.8). Total num frames: 5488640. Throughput: 0: 789.8, 1: 792.7. Samples: 1371640. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:43:46,414][24596] Avg episode reward: [(0, '-3680.570'), (1, '-4440.700')] +[2023-09-27 03:43:51,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 5521408. Throughput: 0: 792.7, 1: 792.4. Samples: 1376256. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:43:51,415][24596] Avg episode reward: [(0, '-3612.920'), (1, '-4462.440')] +[2023-09-27 03:43:56,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 5554176. Throughput: 0: 788.3, 1: 788.6. Samples: 1385777. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 03:43:56,415][24596] Avg episode reward: [(0, '-3589.350'), (1, '-4462.440')] +[2023-09-27 03:43:58,870][25562] Updated weights for policy 0, policy_version 10880 (0.0017) +[2023-09-27 03:43:58,870][25564] Updated weights for policy 1, policy_version 10880 (0.0018) +[2023-09-27 03:44:01,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 5586944. Throughput: 0: 791.8, 1: 791.7. Samples: 1395412. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 03:44:01,415][24596] Avg episode reward: [(0, '-3504.730'), (1, '-4462.440')] +[2023-09-27 03:44:06,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 5611520. Throughput: 0: 793.5, 1: 795.0. Samples: 1400494. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:44:06,415][24596] Avg episode reward: [(0, '-3487.150'), (1, '-4462.440')] +[2023-09-27 03:44:11,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 5644288. Throughput: 0: 791.6, 1: 791.4. Samples: 1409794. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:44:11,415][24596] Avg episode reward: [(0, '-3432.990'), (1, '-4462.510')] +[2023-09-27 03:44:11,659][25564] Updated weights for policy 1, policy_version 11040 (0.0017) +[2023-09-27 03:44:11,659][25562] Updated weights for policy 0, policy_version 11040 (0.0017) +[2023-09-27 03:44:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 5677056. Throughput: 0: 795.7, 1: 795.7. Samples: 1419472. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:44:16,415][24596] Avg episode reward: [(0, '-3403.810'), (1, '-4462.510')] +[2023-09-27 03:44:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 5709824. Throughput: 0: 798.1, 1: 796.5. Samples: 1424555. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:44:21,415][24596] Avg episode reward: [(0, '-3346.750'), (1, '-4462.460')] +[2023-09-27 03:44:24,505][25562] Updated weights for policy 0, policy_version 11200 (0.0017) +[2023-09-27 03:44:24,505][25564] Updated weights for policy 1, policy_version 11200 (0.0017) +[2023-09-27 03:44:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 5742592. Throughput: 0: 796.2, 1: 796.1. Samples: 1433630. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:44:26,415][24596] Avg episode reward: [(0, '-3270.440'), (1, '-4462.460')] +[2023-09-27 03:44:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 5775360. Throughput: 0: 801.7, 1: 798.1. Samples: 1443631. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:44:31,415][24596] Avg episode reward: [(0, '-3208.570'), (1, '-4483.390')] +[2023-09-27 03:44:36,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 5808128. Throughput: 0: 797.8, 1: 798.0. Samples: 1448069. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:44:36,415][24596] Avg episode reward: [(0, '-3175.790'), (1, '-4483.400')] +[2023-09-27 03:44:37,213][25564] Updated weights for policy 1, policy_version 11360 (0.0016) +[2023-09-27 03:44:37,214][25562] Updated weights for policy 0, policy_version 11360 (0.0017) +[2023-09-27 03:44:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 5840896. Throughput: 0: 804.6, 1: 804.3. Samples: 1458176. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:44:41,415][24596] Avg episode reward: [(0, '-3140.680'), (1, '-4483.470')] +[2023-09-27 03:44:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 5873664. Throughput: 0: 803.3, 1: 803.6. Samples: 1467723. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:44:46,415][24596] Avg episode reward: [(0, '-3068.890'), (1, '-4483.470')] +[2023-09-27 03:44:49,978][25564] Updated weights for policy 1, policy_version 11520 (0.0018) +[2023-09-27 03:44:49,978][25562] Updated weights for policy 0, policy_version 11520 (0.0018) +[2023-09-27 03:44:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 5906432. Throughput: 0: 800.6, 1: 799.8. Samples: 1472512. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:44:51,415][24596] Avg episode reward: [(0, '-3015.360'), (1, '-4483.470')] +[2023-09-27 03:44:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 5939200. Throughput: 0: 805.2, 1: 804.5. Samples: 1482228. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:44:56,415][24596] Avg episode reward: [(0, '-2962.050'), (1, '-4483.460')] +[2023-09-27 03:45:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 5971968. Throughput: 0: 804.1, 1: 803.5. Samples: 1491813. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:45:01,415][24596] Avg episode reward: [(0, '-2880.280'), (1, '-4483.460')] +[2023-09-27 03:45:02,640][25562] Updated weights for policy 0, policy_version 11680 (0.0015) +[2023-09-27 03:45:02,640][25564] Updated weights for policy 1, policy_version 11680 (0.0018) +[2023-09-27 03:45:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6414.7). Total num frames: 6004736. Throughput: 0: 802.3, 1: 803.1. Samples: 1496800. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:45:06,415][24596] Avg episode reward: [(0, '-2824.890'), (1, '-4483.400')] +[2023-09-27 03:45:11,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6029312. Throughput: 0: 805.3, 1: 806.6. Samples: 1506169. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 03:45:11,415][24596] Avg episode reward: [(0, '-2766.200'), (1, '-4483.400')] +[2023-09-27 03:45:11,428][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000011776_3014656.pth... +[2023-09-27 03:45:11,429][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000011776_3014656.pth... 
+[2023-09-27 03:45:11,462][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000008784_2248704.pth +[2023-09-27 03:45:11,464][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000008784_2248704.pth +[2023-09-27 03:45:15,435][25562] Updated weights for policy 0, policy_version 11840 (0.0014) +[2023-09-27 03:45:15,436][25564] Updated weights for policy 1, policy_version 11840 (0.0013) +[2023-09-27 03:45:16,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6062080. Throughput: 0: 800.1, 1: 801.1. Samples: 1515686. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 03:45:16,415][24596] Avg episode reward: [(0, '-2714.670'), (1, '-4483.410')] +[2023-09-27 03:45:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6094848. Throughput: 0: 807.1, 1: 807.1. Samples: 1520706. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:45:21,415][24596] Avg episode reward: [(0, '-2671.480'), (1, '-4483.410')] +[2023-09-27 03:45:26,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6127616. Throughput: 0: 802.1, 1: 802.3. Samples: 1530370. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:45:26,414][24596] Avg episode reward: [(0, '-2671.480'), (1, '-4483.370')] +[2023-09-27 03:45:28,074][25564] Updated weights for policy 1, policy_version 12000 (0.0020) +[2023-09-27 03:45:28,075][25562] Updated weights for policy 0, policy_version 12000 (0.0016) +[2023-09-27 03:45:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6160384. Throughput: 0: 804.5, 1: 803.8. Samples: 1540096. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:45:31,414][24596] Avg episode reward: [(0, '-2674.570'), (1, '-4483.370')] +[2023-09-27 03:45:36,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 6193152. Throughput: 0: 802.2, 1: 802.5. 
Samples: 1544724. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:45:36,414][24596] Avg episode reward: [(0, '-2637.030'), (1, '-4483.370')] +[2023-09-27 03:45:40,922][25564] Updated weights for policy 1, policy_version 12160 (0.0016) +[2023-09-27 03:45:40,922][25562] Updated weights for policy 0, policy_version 12160 (0.0016) +[2023-09-27 03:45:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 6225920. Throughput: 0: 802.4, 1: 802.1. Samples: 1554432. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:45:41,414][24596] Avg episode reward: [(0, '-2629.430'), (1, '-4483.370')] +[2023-09-27 03:45:46,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 6258688. Throughput: 0: 800.4, 1: 801.6. Samples: 1563905. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:45:46,415][24596] Avg episode reward: [(0, '-2629.430'), (1, '-4483.370')] +[2023-09-27 03:45:51,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 6291456. Throughput: 0: 799.7, 1: 799.6. Samples: 1568768. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:45:51,415][24596] Avg episode reward: [(0, '-2619.160'), (1, '-4483.380')] +[2023-09-27 03:45:53,987][25562] Updated weights for policy 0, policy_version 12320 (0.0017) +[2023-09-27 03:45:53,987][25564] Updated weights for policy 1, policy_version 12320 (0.0017) +[2023-09-27 03:45:56,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 6316032. Throughput: 0: 798.6, 1: 797.9. Samples: 1578011. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:45:56,415][24596] Avg episode reward: [(0, '-2626.480'), (1, '-4483.380')] +[2023-09-27 03:46:01,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 6348800. Throughput: 0: 796.0, 1: 796.0. Samples: 1587328. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:46:01,415][24596] Avg episode reward: [(0, '-2651.070'), (1, '-4483.370')] +[2023-09-27 03:46:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 6381568. Throughput: 0: 792.6, 1: 792.6. Samples: 1592042. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:46:06,415][24596] Avg episode reward: [(0, '-2593.100'), (1, '-4483.350')] +[2023-09-27 03:46:06,981][25562] Updated weights for policy 0, policy_version 12480 (0.0019) +[2023-09-27 03:46:06,982][25564] Updated weights for policy 1, policy_version 12480 (0.0019) +[2023-09-27 03:46:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6414336. Throughput: 0: 790.8, 1: 790.7. Samples: 1601541. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:46:11,415][24596] Avg episode reward: [(0, '-2555.970'), (1, '-4483.390')] +[2023-09-27 03:46:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6447104. Throughput: 0: 786.7, 1: 786.8. Samples: 1610902. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:46:16,415][24596] Avg episode reward: [(0, '-2503.320'), (1, '-4483.390')] +[2023-09-27 03:46:19,998][25564] Updated weights for policy 1, policy_version 12640 (0.0018) +[2023-09-27 03:46:19,998][25562] Updated weights for policy 0, policy_version 12640 (0.0018) +[2023-09-27 03:46:21,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6479872. Throughput: 0: 790.7, 1: 790.4. Samples: 1615872. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:46:21,414][24596] Avg episode reward: [(0, '-2467.640'), (1, '-4483.390')] +[2023-09-27 03:46:26,414][24596] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6400.9). Total num frames: 6508544. Throughput: 0: 788.0, 1: 788.4. Samples: 1625371. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:46:26,415][24596] Avg episode reward: [(0, '-2429.310'), (1, '-4483.450')] +[2023-09-27 03:46:31,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 6537216. Throughput: 0: 786.5, 1: 786.6. Samples: 1634695. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:46:31,415][24596] Avg episode reward: [(0, '-2420.430'), (1, '-4483.450')] +[2023-09-27 03:46:32,870][25562] Updated weights for policy 0, policy_version 12800 (0.0017) +[2023-09-27 03:46:32,870][25564] Updated weights for policy 1, policy_version 12800 (0.0016) +[2023-09-27 03:46:36,414][24596] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 6569984. Throughput: 0: 788.3, 1: 788.6. Samples: 1639727. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:46:36,415][24596] Avg episode reward: [(0, '-2419.130'), (1, '-4483.520')] +[2023-09-27 03:46:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 6602752. Throughput: 0: 790.7, 1: 790.0. Samples: 1649142. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:46:41,415][24596] Avg episode reward: [(0, '-2388.980'), (1, '-4483.520')] +[2023-09-27 03:46:45,766][25562] Updated weights for policy 0, policy_version 12960 (0.0017) +[2023-09-27 03:46:45,766][25564] Updated weights for policy 1, policy_version 12960 (0.0017) +[2023-09-27 03:46:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 6635520. Throughput: 0: 795.1, 1: 794.7. Samples: 1658871. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:46:46,415][24596] Avg episode reward: [(0, '-2318.330'), (1, '-4483.470')] +[2023-09-27 03:46:51,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 6668288. Throughput: 0: 794.2, 1: 794.4. Samples: 1663527. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:46:51,414][24596] Avg episode reward: [(0, '-2296.410'), (1, '-4483.470')] +[2023-09-27 03:46:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6701056. Throughput: 0: 796.4, 1: 796.4. Samples: 1673221. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 03:46:56,415][24596] Avg episode reward: [(0, '-2266.400'), (1, '-4483.510')] +[2023-09-27 03:46:58,353][25562] Updated weights for policy 0, policy_version 13120 (0.0017) +[2023-09-27 03:46:58,354][25564] Updated weights for policy 1, policy_version 13120 (0.0018) +[2023-09-27 03:47:01,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6733824. Throughput: 0: 800.3, 1: 801.6. Samples: 1682985. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 03:47:01,415][24596] Avg episode reward: [(0, '-2259.120'), (1, '-4483.510')] +[2023-09-27 03:47:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6766592. Throughput: 0: 796.6, 1: 796.6. Samples: 1687566. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 03:47:06,415][24596] Avg episode reward: [(0, '-2197.200'), (1, '-4483.510')] +[2023-09-27 03:47:11,210][25564] Updated weights for policy 1, policy_version 13280 (0.0017) +[2023-09-27 03:47:11,210][25562] Updated weights for policy 0, policy_version 13280 (0.0015) +[2023-09-27 03:47:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 6799360. Throughput: 0: 800.7, 1: 803.1. Samples: 1697542. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 03:47:11,415][24596] Avg episode reward: [(0, '-2156.040'), (1, '-4483.500')] +[2023-09-27 03:47:11,427][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000013280_3399680.pth... +[2023-09-27 03:47:11,427][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000013280_3399680.pth... 
+[2023-09-27 03:47:11,463][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000010288_2633728.pth +[2023-09-27 03:47:11,463][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000010288_2633728.pth +[2023-09-27 03:47:16,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 6823936. Throughput: 0: 796.7, 1: 796.0. Samples: 1706363. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:47:16,415][24596] Avg episode reward: [(0, '-2091.190'), (1, '-4483.500')] +[2023-09-27 03:47:21,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 6856704. Throughput: 0: 793.2, 1: 794.4. Samples: 1711165. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:47:21,415][24596] Avg episode reward: [(0, '-2008.810'), (1, '-4483.480')] +[2023-09-27 03:47:24,568][25564] Updated weights for policy 1, policy_version 13440 (0.0017) +[2023-09-27 03:47:24,568][25562] Updated weights for policy 0, policy_version 13440 (0.0017) +[2023-09-27 03:47:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6359.2). Total num frames: 6889472. Throughput: 0: 791.0, 1: 790.9. Samples: 1720327. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:47:26,415][24596] Avg episode reward: [(0, '-1974.670'), (1, '-4483.480')] +[2023-09-27 03:47:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 6922240. Throughput: 0: 788.4, 1: 789.3. Samples: 1729869. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:47:31,415][24596] Avg episode reward: [(0, '-1928.670'), (1, '-4483.460')] +[2023-09-27 03:47:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 6955008. Throughput: 0: 790.5, 1: 790.2. Samples: 1734659. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:47:36,415][24596] Avg episode reward: [(0, '-1922.710'), (1, '-4483.410')] +[2023-09-27 03:47:37,321][25564] Updated weights for policy 1, policy_version 13600 (0.0017) +[2023-09-27 03:47:37,321][25562] Updated weights for policy 0, policy_version 13600 (0.0017) +[2023-09-27 03:47:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6373.1). Total num frames: 6987776. Throughput: 0: 793.9, 1: 793.4. Samples: 1744649. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:47:41,415][24596] Avg episode reward: [(0, '-1861.910'), (1, '-4483.410')] +[2023-09-27 03:47:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7020544. Throughput: 0: 788.1, 1: 787.1. Samples: 1753872. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:47:46,415][24596] Avg episode reward: [(0, '-1871.000'), (1, '-4483.440')] +[2023-09-27 03:47:50,222][25564] Updated weights for policy 1, policy_version 13760 (0.0016) +[2023-09-27 03:47:50,222][25562] Updated weights for policy 0, policy_version 13760 (0.0017) +[2023-09-27 03:47:51,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 7045120. Throughput: 0: 792.4, 1: 792.6. Samples: 1758893. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:47:51,415][24596] Avg episode reward: [(0, '-1848.450'), (1, '-4483.440')] +[2023-09-27 03:47:56,414][24596] Fps is (10 sec: 5734.6, 60 sec: 6280.6, 300 sec: 6359.2). Total num frames: 7077888. Throughput: 0: 788.6, 1: 786.0. Samples: 1768398. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-27 03:47:56,414][24596] Avg episode reward: [(0, '-1852.060'), (1, '-4483.470')] +[2023-09-27 03:48:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 7110656. Throughput: 0: 796.5, 1: 796.8. Samples: 1778060. 
Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-27 03:48:01,415][24596] Avg episode reward: [(0, '-1891.080'), (1, '-4483.480')] +[2023-09-27 03:48:02,796][25562] Updated weights for policy 0, policy_version 13920 (0.0018) +[2023-09-27 03:48:02,796][25564] Updated weights for policy 1, policy_version 13920 (0.0018) +[2023-09-27 03:48:06,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 7143424. Throughput: 0: 801.0, 1: 800.2. Samples: 1783219. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:48:06,415][24596] Avg episode reward: [(0, '-1877.230'), (1, '-4483.470')] +[2023-09-27 03:48:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 7176192. Throughput: 0: 803.2, 1: 803.2. Samples: 1792613. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:48:11,415][24596] Avg episode reward: [(0, '-1877.230'), (1, '-4483.470')] +[2023-09-27 03:48:15,541][25562] Updated weights for policy 0, policy_version 14080 (0.0016) +[2023-09-27 03:48:15,541][25564] Updated weights for policy 1, policy_version 14080 (0.0017) +[2023-09-27 03:48:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 7208960. Throughput: 0: 804.5, 1: 803.9. Samples: 1802249. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:48:16,415][24596] Avg episode reward: [(0, '-1856.150'), (1, '-4477.700')] +[2023-09-27 03:48:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7241728. Throughput: 0: 803.7, 1: 805.0. Samples: 1807052. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:48:21,415][24596] Avg episode reward: [(0, '-1875.530'), (1, '-4460.980')] +[2023-09-27 03:48:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7274496. Throughput: 0: 797.6, 1: 797.8. Samples: 1816442. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:48:26,414][24596] Avg episode reward: [(0, '-1905.990'), (1, '-4460.980')] +[2023-09-27 03:48:28,596][25562] Updated weights for policy 0, policy_version 14240 (0.0018) +[2023-09-27 03:48:28,596][25564] Updated weights for policy 1, policy_version 14240 (0.0019) +[2023-09-27 03:48:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7307264. Throughput: 0: 802.2, 1: 802.4. Samples: 1826078. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:48:31,414][24596] Avg episode reward: [(0, '-1922.940'), (1, '-4438.690')] +[2023-09-27 03:48:36,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7340032. Throughput: 0: 800.4, 1: 800.0. Samples: 1830912. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:48:36,415][24596] Avg episode reward: [(0, '-1934.200'), (1, '-4428.350')] +[2023-09-27 03:48:41,291][25562] Updated weights for policy 0, policy_version 14400 (0.0016) +[2023-09-27 03:48:41,293][25564] Updated weights for policy 1, policy_version 14400 (0.0018) +[2023-09-27 03:48:41,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7372800. Throughput: 0: 803.8, 1: 802.1. Samples: 1840663. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:48:41,415][24596] Avg episode reward: [(0, '-1958.180'), (1, '-4422.880')] +[2023-09-27 03:48:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7405568. Throughput: 0: 801.5, 1: 801.1. Samples: 1850177. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:48:46,415][24596] Avg episode reward: [(0, '-1958.180'), (1, '-4422.880')] +[2023-09-27 03:48:51,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 7430144. Throughput: 0: 799.2, 1: 800.4. Samples: 1855200. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:48:51,415][24596] Avg episode reward: [(0, '-2022.380'), (1, '-4422.850')] +[2023-09-27 03:48:54,027][25564] Updated weights for policy 1, policy_version 14560 (0.0017) +[2023-09-27 03:48:54,028][25562] Updated weights for policy 0, policy_version 14560 (0.0017) +[2023-09-27 03:48:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6359.2). Total num frames: 7462912. Throughput: 0: 799.7, 1: 799.5. Samples: 1864577. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:48:56,415][24596] Avg episode reward: [(0, '-2022.380'), (1, '-4402.330')] +[2023-09-27 03:49:01,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7495680. Throughput: 0: 799.0, 1: 799.2. Samples: 1874170. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:49:01,414][24596] Avg episode reward: [(0, '-2057.580'), (1, '-4400.520')] +[2023-09-27 03:49:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7528448. Throughput: 0: 801.8, 1: 801.4. Samples: 1879195. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 03:49:06,415][24596] Avg episode reward: [(0, '-2057.580'), (1, '-4400.520')] +[2023-09-27 03:49:06,660][25562] Updated weights for policy 0, policy_version 14720 (0.0019) +[2023-09-27 03:49:06,660][25564] Updated weights for policy 1, policy_version 14720 (0.0020) +[2023-09-27 03:49:11,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7561216. Throughput: 0: 804.4, 1: 804.6. Samples: 1888848. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 03:49:11,415][24596] Avg episode reward: [(0, '-2111.380'), (1, '-4400.520')] +[2023-09-27 03:49:11,426][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000014768_3780608.pth... +[2023-09-27 03:49:11,428][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000014768_3780608.pth... 
+[2023-09-27 03:49:11,461][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000011776_3014656.pth +[2023-09-27 03:49:11,462][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000011776_3014656.pth +[2023-09-27 03:49:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7593984. Throughput: 0: 804.9, 1: 804.4. Samples: 1898498. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:49:16,415][24596] Avg episode reward: [(0, '-2130.240'), (1, '-4400.540')] +[2023-09-27 03:49:19,347][25564] Updated weights for policy 1, policy_version 14880 (0.0018) +[2023-09-27 03:49:19,347][25562] Updated weights for policy 0, policy_version 14880 (0.0018) +[2023-09-27 03:49:21,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7626752. Throughput: 0: 805.8, 1: 806.6. Samples: 1903468. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:49:21,414][24596] Avg episode reward: [(0, '-2130.240'), (1, '-4398.480')] +[2023-09-27 03:49:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7659520. Throughput: 0: 802.6, 1: 804.2. Samples: 1912971. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:49:26,415][24596] Avg episode reward: [(0, '-2130.240'), (1, '-4398.480')] +[2023-09-27 03:49:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7692288. Throughput: 0: 807.5, 1: 807.6. Samples: 1922854. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:49:31,414][24596] Avg episode reward: [(0, '-2152.250'), (1, '-4359.910')] +[2023-09-27 03:49:32,125][25562] Updated weights for policy 0, policy_version 15040 (0.0018) +[2023-09-27 03:49:32,125][25564] Updated weights for policy 1, policy_version 15040 (0.0018) +[2023-09-27 03:49:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7725056. Throughput: 0: 803.3, 1: 802.1. 
Samples: 1927442. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:49:36,415][24596] Avg episode reward: [(0, '-2202.900'), (1, '-4359.940')] +[2023-09-27 03:49:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7757824. Throughput: 0: 807.1, 1: 807.4. Samples: 1937232. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:49:41,415][24596] Avg episode reward: [(0, '-2202.900'), (1, '-4350.940')] +[2023-09-27 03:49:44,833][25562] Updated weights for policy 0, policy_version 15200 (0.0017) +[2023-09-27 03:49:44,833][25564] Updated weights for policy 1, policy_version 15200 (0.0017) +[2023-09-27 03:49:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7790592. Throughput: 0: 809.3, 1: 808.3. Samples: 1946965. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:49:46,415][24596] Avg episode reward: [(0, '-2215.220'), (1, '-4350.940')] +[2023-09-27 03:49:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6387.0). Total num frames: 7823360. Throughput: 0: 806.6, 1: 805.6. Samples: 1951744. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:49:51,415][24596] Avg episode reward: [(0, '-2213.770'), (1, '-4351.000')] +[2023-09-27 03:49:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 7847936. Throughput: 0: 799.6, 1: 798.7. Samples: 1960773. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:49:56,414][24596] Avg episode reward: [(0, '-2269.710'), (1, '-4351.000')] +[2023-09-27 03:49:57,932][25562] Updated weights for policy 0, policy_version 15360 (0.0016) +[2023-09-27 03:49:57,932][25564] Updated weights for policy 1, policy_version 15360 (0.0018) +[2023-09-27 03:50:01,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6359.2). Total num frames: 7880704. Throughput: 0: 797.6, 1: 797.8. Samples: 1970287. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:50:01,415][24596] Avg episode reward: [(0, '-2301.520'), (1, '-4350.990')] +[2023-09-27 03:50:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7913472. Throughput: 0: 797.1, 1: 796.4. Samples: 1975174. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:50:06,414][24596] Avg episode reward: [(0, '-2301.520'), (1, '-4350.940')] +[2023-09-27 03:50:10,808][25564] Updated weights for policy 1, policy_version 15520 (0.0016) +[2023-09-27 03:50:10,808][25562] Updated weights for policy 0, policy_version 15520 (0.0017) +[2023-09-27 03:50:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7946240. Throughput: 0: 795.1, 1: 795.0. Samples: 1984524. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:50:11,415][24596] Avg episode reward: [(0, '-2301.520'), (1, '-4320.180')] +[2023-09-27 03:50:16,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 7979008. Throughput: 0: 796.8, 1: 796.1. Samples: 1994538. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:50:16,415][24596] Avg episode reward: [(0, '-2325.100'), (1, '-4299.990')] +[2023-09-27 03:50:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8011776. Throughput: 0: 797.1, 1: 796.4. Samples: 1999146. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:50:21,415][24596] Avg episode reward: [(0, '-2410.340'), (1, '-4300.000')] +[2023-09-27 03:50:23,693][25562] Updated weights for policy 0, policy_version 15680 (0.0017) +[2023-09-27 03:50:23,693][25564] Updated weights for policy 1, policy_version 15680 (0.0018) +[2023-09-27 03:50:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8044544. Throughput: 0: 794.2, 1: 794.2. Samples: 2008713. 
Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-27 03:50:26,414][24596] Avg episode reward: [(0, '-2410.340'), (1, '-4300.000')] +[2023-09-27 03:50:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8077312. Throughput: 0: 792.3, 1: 793.3. Samples: 2018315. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-27 03:50:31,415][24596] Avg episode reward: [(0, '-2410.340'), (1, '-4282.930')] +[2023-09-27 03:50:36,373][25562] Updated weights for policy 0, policy_version 15840 (0.0018) +[2023-09-27 03:50:36,373][25564] Updated weights for policy 1, policy_version 15840 (0.0018) +[2023-09-27 03:50:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8110080. Throughput: 0: 794.5, 1: 794.7. Samples: 2023258. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:50:36,414][24596] Avg episode reward: [(0, '-2410.340'), (1, '-4264.260')] +[2023-09-27 03:50:41,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 8134656. Throughput: 0: 800.6, 1: 801.9. Samples: 2032883. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:50:41,415][24596] Avg episode reward: [(0, '-2442.990'), (1, '-4264.260')] +[2023-09-27 03:50:46,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 8167424. Throughput: 0: 801.0, 1: 801.1. Samples: 2042381. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:50:46,415][24596] Avg episode reward: [(0, '-2478.520'), (1, '-4243.190')] +[2023-09-27 03:50:48,970][25564] Updated weights for policy 1, policy_version 16000 (0.0016) +[2023-09-27 03:50:48,970][25562] Updated weights for policy 0, policy_version 16000 (0.0017) +[2023-09-27 03:50:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 8200192. Throughput: 0: 804.2, 1: 804.6. Samples: 2047572. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:50:51,415][24596] Avg episode reward: [(0, '-2478.520'), (1, '-4234.050')] +[2023-09-27 03:50:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8232960. Throughput: 0: 804.7, 1: 804.0. Samples: 2056914. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:50:56,414][24596] Avg episode reward: [(0, '-2478.520'), (1, '-4234.090')] +[2023-09-27 03:51:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8265728. Throughput: 0: 799.8, 1: 800.6. Samples: 2066554. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:51:01,415][24596] Avg episode reward: [(0, '-2511.420'), (1, '-4234.060')] +[2023-09-27 03:51:01,690][25564] Updated weights for policy 1, policy_version 16160 (0.0016) +[2023-09-27 03:51:01,691][25562] Updated weights for policy 0, policy_version 16160 (0.0016) +[2023-09-27 03:51:06,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 8298496. Throughput: 0: 806.5, 1: 806.9. Samples: 2071746. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 03:51:06,415][24596] Avg episode reward: [(0, '-2583.930'), (1, '-4234.060')] +[2023-09-27 03:51:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8331264. Throughput: 0: 805.7, 1: 806.2. Samples: 2081247. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-27 03:51:11,415][24596] Avg episode reward: [(0, '-2583.930'), (1, '-4234.080')] +[2023-09-27 03:51:11,426][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000016272_4165632.pth... +[2023-09-27 03:51:11,427][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000016272_4165632.pth... 
+[2023-09-27 03:51:11,456][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000013280_3399680.pth +[2023-09-27 03:51:11,460][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000013280_3399680.pth +[2023-09-27 03:51:14,636][25562] Updated weights for policy 0, policy_version 16320 (0.0018) +[2023-09-27 03:51:14,636][25564] Updated weights for policy 1, policy_version 16320 (0.0018) +[2023-09-27 03:51:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8364032. Throughput: 0: 803.9, 1: 802.9. Samples: 2090622. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-27 03:51:16,415][24596] Avg episode reward: [(0, '-2583.800'), (1, '-4234.060')] +[2023-09-27 03:51:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6400.9). Total num frames: 8396800. Throughput: 0: 798.4, 1: 798.2. Samples: 2095106. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-27 03:51:21,415][24596] Avg episode reward: [(0, '-2613.480'), (1, '-4234.070')] +[2023-09-27 03:51:26,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 8429568. Throughput: 0: 800.0, 1: 799.2. Samples: 2104848. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:51:26,415][24596] Avg episode reward: [(0, '-2635.980'), (1, '-4234.070')] +[2023-09-27 03:51:27,571][25564] Updated weights for policy 1, policy_version 16480 (0.0017) +[2023-09-27 03:51:27,572][25562] Updated weights for policy 0, policy_version 16480 (0.0017) +[2023-09-27 03:51:31,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 8454144. Throughput: 0: 794.5, 1: 795.2. Samples: 2113918. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 03:51:31,415][24596] Avg episode reward: [(0, '-2604.960'), (1, '-4211.830')] +[2023-09-27 03:51:36,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 8486912. Throughput: 0: 789.5, 1: 790.5. 
Samples: 2118672. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:51:36,415][24596] Avg episode reward: [(0, '-2635.830'), (1, '-4211.800')] +[2023-09-27 03:51:40,852][25564] Updated weights for policy 1, policy_version 16640 (0.0017) +[2023-09-27 03:51:40,853][25562] Updated weights for policy 0, policy_version 16640 (0.0016) +[2023-09-27 03:51:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8519680. Throughput: 0: 788.2, 1: 788.8. Samples: 2127882. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:51:41,415][24596] Avg episode reward: [(0, '-2632.870'), (1, '-4211.800')] +[2023-09-27 03:51:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8552448. Throughput: 0: 791.0, 1: 789.1. Samples: 2137660. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:51:46,415][24596] Avg episode reward: [(0, '-2646.630'), (1, '-4203.270')] +[2023-09-27 03:51:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8585216. Throughput: 0: 783.7, 1: 783.7. Samples: 2142279. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:51:51,415][24596] Avg episode reward: [(0, '-2646.630'), (1, '-4203.270')] +[2023-09-27 03:51:53,542][25564] Updated weights for policy 1, policy_version 16800 (0.0017) +[2023-09-27 03:51:53,543][25562] Updated weights for policy 0, policy_version 16800 (0.0019) +[2023-09-27 03:51:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 8617984. Throughput: 0: 790.6, 1: 790.0. Samples: 2152376. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:51:56,415][24596] Avg episode reward: [(0, '-2646.630'), (1, '-4203.290')] +[2023-09-27 03:52:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8650752. Throughput: 0: 789.4, 1: 790.5. Samples: 2161717. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:01,415][24596] Avg episode reward: [(0, '-2705.320'), (1, '-4203.320')] +[2023-09-27 03:52:06,335][25562] Updated weights for policy 0, policy_version 16960 (0.0016) +[2023-09-27 03:52:06,335][25564] Updated weights for policy 1, policy_version 16960 (0.0018) +[2023-09-27 03:52:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8683520. Throughput: 0: 795.7, 1: 796.0. Samples: 2166735. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:06,415][24596] Avg episode reward: [(0, '-2748.800'), (1, '-4194.290')] +[2023-09-27 03:52:11,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 8708096. Throughput: 0: 791.6, 1: 792.4. Samples: 2176129. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:11,415][24596] Avg episode reward: [(0, '-2748.800'), (1, '-4194.290')] +[2023-09-27 03:52:16,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 8740864. Throughput: 0: 798.3, 1: 797.6. Samples: 2185731. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:52:16,415][24596] Avg episode reward: [(0, '-2760.260'), (1, '-4194.230')] +[2023-09-27 03:52:19,045][25564] Updated weights for policy 1, policy_version 17120 (0.0016) +[2023-09-27 03:52:19,045][25562] Updated weights for policy 0, policy_version 17120 (0.0018) +[2023-09-27 03:52:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 8773632. Throughput: 0: 802.6, 1: 800.7. Samples: 2190820. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:52:21,415][24596] Avg episode reward: [(0, '-2780.700'), (1, '-4194.230')] +[2023-09-27 03:52:26,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 8806400. Throughput: 0: 805.4, 1: 806.2. Samples: 2200401. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:26,415][24596] Avg episode reward: [(0, '-2780.700'), (1, '-4194.190')] +[2023-09-27 03:52:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8839168. Throughput: 0: 803.8, 1: 805.6. Samples: 2210085. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:31,415][24596] Avg episode reward: [(0, '-2847.660'), (1, '-4194.210')] +[2023-09-27 03:52:31,642][25564] Updated weights for policy 1, policy_version 17280 (0.0017) +[2023-09-27 03:52:31,642][25562] Updated weights for policy 0, policy_version 17280 (0.0016) +[2023-09-27 03:52:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8871936. Throughput: 0: 810.0, 1: 810.0. Samples: 2215178. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:36,415][24596] Avg episode reward: [(0, '-2876.740'), (1, '-4194.210')] +[2023-09-27 03:52:41,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 8904704. Throughput: 0: 801.3, 1: 801.3. Samples: 2224490. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:41,414][24596] Avg episode reward: [(0, '-2876.740'), (1, '-4194.210')] +[2023-09-27 03:52:44,366][25562] Updated weights for policy 0, policy_version 17440 (0.0016) +[2023-09-27 03:52:44,366][25564] Updated weights for policy 1, policy_version 17440 (0.0018) +[2023-09-27 03:52:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 8937472. Throughput: 0: 807.4, 1: 807.0. Samples: 2234368. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:46,415][24596] Avg episode reward: [(0, '-2910.160'), (1, '-4194.190')] +[2023-09-27 03:52:51,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 8970240. Throughput: 0: 807.1, 1: 806.7. Samples: 2239356. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:51,415][24596] Avg episode reward: [(0, '-2965.870'), (1, '-4194.160')] +[2023-09-27 03:52:56,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 9003008. Throughput: 0: 806.6, 1: 806.1. Samples: 2248704. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:52:56,415][24596] Avg episode reward: [(0, '-2965.870'), (1, '-4194.160')] +[2023-09-27 03:52:57,147][25562] Updated weights for policy 0, policy_version 17600 (0.0016) +[2023-09-27 03:52:57,148][25564] Updated weights for policy 1, policy_version 17600 (0.0017) +[2023-09-27 03:53:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 9035776. Throughput: 0: 810.4, 1: 808.9. Samples: 2258601. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:53:01,415][24596] Avg episode reward: [(0, '-2990.470'), (1, '-4194.140')] +[2023-09-27 03:53:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 9068544. Throughput: 0: 802.4, 1: 802.7. Samples: 2263048. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:53:06,415][24596] Avg episode reward: [(0, '-3026.260'), (1, '-4194.140')] +[2023-09-27 03:53:09,843][25562] Updated weights for policy 0, policy_version 17760 (0.0017) +[2023-09-27 03:53:09,843][25564] Updated weights for policy 1, policy_version 17760 (0.0019) +[2023-09-27 03:53:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 9101312. Throughput: 0: 808.4, 1: 808.5. Samples: 2273163. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:53:11,415][24596] Avg episode reward: [(0, '-3026.260'), (1, '-4194.150')] +[2023-09-27 03:53:11,427][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000017776_4550656.pth... +[2023-09-27 03:53:11,427][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000017776_4550656.pth... 
+[2023-09-27 03:53:11,460][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000014768_3780608.pth +[2023-09-27 03:53:11,463][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000014768_3780608.pth +[2023-09-27 03:53:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 9134080. Throughput: 0: 805.8, 1: 806.3. Samples: 2282628. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:53:16,414][24596] Avg episode reward: [(0, '-3090.240'), (1, '-4194.150')] +[2023-09-27 03:53:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 9166848. Throughput: 0: 805.0, 1: 804.7. Samples: 2287616. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:53:21,415][24596] Avg episode reward: [(0, '-3096.730'), (1, '-4194.150')] +[2023-09-27 03:53:22,724][25562] Updated weights for policy 0, policy_version 17920 (0.0018) +[2023-09-27 03:53:22,724][25564] Updated weights for policy 1, policy_version 17920 (0.0017) +[2023-09-27 03:53:26,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9191424. Throughput: 0: 800.9, 1: 800.5. Samples: 2296554. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:53:26,415][24596] Avg episode reward: [(0, '-3096.730'), (1, '-4194.100')] +[2023-09-27 03:53:31,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9224192. Throughput: 0: 796.5, 1: 796.6. Samples: 2306055. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:53:31,414][24596] Avg episode reward: [(0, '-3128.120'), (1, '-4194.070')] +[2023-09-27 03:53:35,676][25562] Updated weights for policy 0, policy_version 18080 (0.0018) +[2023-09-27 03:53:35,676][25564] Updated weights for policy 1, policy_version 18080 (0.0018) +[2023-09-27 03:53:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9256960. Throughput: 0: 793.7, 1: 794.0. 
Samples: 2310801. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:53:36,414][24596] Avg episode reward: [(0, '-3196.590'), (1, '-4194.120')] +[2023-09-27 03:53:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9289728. Throughput: 0: 796.6, 1: 796.8. Samples: 2320406. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:53:41,414][24596] Avg episode reward: [(0, '-3196.590'), (1, '-4194.120')] +[2023-09-27 03:53:46,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 9322496. Throughput: 0: 794.9, 1: 797.1. Samples: 2330241. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:53:46,415][24596] Avg episode reward: [(0, '-3231.750'), (1, '-4194.100')] +[2023-09-27 03:53:48,508][25562] Updated weights for policy 0, policy_version 18240 (0.0017) +[2023-09-27 03:53:48,508][25564] Updated weights for policy 1, policy_version 18240 (0.0017) +[2023-09-27 03:53:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 9355264. Throughput: 0: 796.5, 1: 796.8. Samples: 2334749. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:53:51,414][24596] Avg episode reward: [(0, '-3264.630'), (1, '-4180.730')] +[2023-09-27 03:53:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 9388032. Throughput: 0: 795.2, 1: 794.3. Samples: 2344691. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:53:56,414][24596] Avg episode reward: [(0, '-3264.630'), (1, '-4180.730')] +[2023-09-27 03:54:01,197][25564] Updated weights for policy 1, policy_version 18400 (0.0018) +[2023-09-27 03:54:01,197][25562] Updated weights for policy 0, policy_version 18400 (0.0015) +[2023-09-27 03:54:01,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 9420800. Throughput: 0: 797.5, 1: 798.3. Samples: 2354440. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:54:01,415][24596] Avg episode reward: [(0, '-3316.760'), (1, '-4180.720')] +[2023-09-27 03:54:06,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 9453568. Throughput: 0: 796.4, 1: 796.4. Samples: 2359296. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:54:06,415][24596] Avg episode reward: [(0, '-3345.460'), (1, '-4180.720')] +[2023-09-27 03:54:11,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 9478144. Throughput: 0: 801.0, 1: 803.8. Samples: 2368774. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:54:11,415][24596] Avg episode reward: [(0, '-3345.460'), (1, '-4180.760')] +[2023-09-27 03:54:14,164][25564] Updated weights for policy 1, policy_version 18560 (0.0017) +[2023-09-27 03:54:14,166][25562] Updated weights for policy 0, policy_version 18560 (0.0017) +[2023-09-27 03:54:16,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 9510912. Throughput: 0: 796.5, 1: 796.4. Samples: 2377737. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:54:16,415][24596] Avg episode reward: [(0, '-3378.590'), (1, '-4180.780')] +[2023-09-27 03:54:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 9543680. Throughput: 0: 799.7, 1: 798.5. Samples: 2382720. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:54:21,415][24596] Avg episode reward: [(0, '-3420.340'), (1, '-4180.750')] +[2023-09-27 03:54:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 9576448. Throughput: 0: 796.3, 1: 796.2. Samples: 2392065. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 03:54:26,415][24596] Avg episode reward: [(0, '-3420.340'), (1, '-4180.750')] +[2023-09-27 03:54:27,194][25562] Updated weights for policy 0, policy_version 18720 (0.0016) +[2023-09-27 03:54:27,194][25564] Updated weights for policy 1, policy_version 18720 (0.0018) +[2023-09-27 03:54:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9609216. Throughput: 0: 797.0, 1: 796.6. Samples: 2401956. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 03:54:31,414][24596] Avg episode reward: [(0, '-3449.010'), (1, '-4180.710')] +[2023-09-27 03:54:36,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9641984. Throughput: 0: 796.3, 1: 796.0. Samples: 2406400. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 03:54:36,414][24596] Avg episode reward: [(0, '-3485.510'), (1, '-4180.660')] +[2023-09-27 03:54:39,912][25564] Updated weights for policy 1, policy_version 18880 (0.0018) +[2023-09-27 03:54:39,912][25562] Updated weights for policy 0, policy_version 18880 (0.0016) +[2023-09-27 03:54:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9674752. Throughput: 0: 795.8, 1: 795.9. Samples: 2416317. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 03:54:41,414][24596] Avg episode reward: [(0, '-3485.510'), (1, '-4153.300')] +[2023-09-27 03:54:46,414][24596] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6373.1). Total num frames: 9703424. Throughput: 0: 792.5, 1: 791.0. Samples: 2425697. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 03:54:46,415][24596] Avg episode reward: [(0, '-3554.570'), (1, '-4153.140')] +[2023-09-27 03:54:51,414][24596] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 9732096. Throughput: 0: 791.3, 1: 791.6. Samples: 2430525. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:54:51,415][24596] Avg episode reward: [(0, '-3570.830'), (1, '-4153.140')] +[2023-09-27 03:54:52,737][25564] Updated weights for policy 1, policy_version 19040 (0.0016) +[2023-09-27 03:54:52,737][25562] Updated weights for policy 0, policy_version 19040 (0.0015) +[2023-09-27 03:54:56,414][24596] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 9764864. Throughput: 0: 793.7, 1: 791.8. Samples: 2440120. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:54:56,414][24596] Avg episode reward: [(0, '-3570.830'), (1, '-4153.140')] +[2023-09-27 03:55:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6387.0). Total num frames: 9797632. Throughput: 0: 796.4, 1: 796.4. Samples: 2449410. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:55:01,414][24596] Avg episode reward: [(0, '-3601.490'), (1, '-4153.090')] +[2023-09-27 03:55:05,688][25562] Updated weights for policy 0, policy_version 19200 (0.0016) +[2023-09-27 03:55:05,688][25564] Updated weights for policy 1, policy_version 19200 (0.0018) +[2023-09-27 03:55:06,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 9830400. Throughput: 0: 794.3, 1: 795.6. Samples: 2454266. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:55:06,415][24596] Avg episode reward: [(0, '-3664.530'), (1, '-4153.120')] +[2023-09-27 03:55:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9863168. Throughput: 0: 798.5, 1: 799.0. Samples: 2463956. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:55:11,415][24596] Avg episode reward: [(0, '-3664.530'), (1, '-4153.120')] +[2023-09-27 03:55:11,427][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000019264_4931584.pth... +[2023-09-27 03:55:11,428][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000019264_4931584.pth... 
+[2023-09-27 03:55:11,462][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000016272_4165632.pth +[2023-09-27 03:55:11,465][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000016272_4165632.pth +[2023-09-27 03:55:16,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9895936. Throughput: 0: 797.7, 1: 799.1. Samples: 2473811. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:55:16,414][24596] Avg episode reward: [(0, '-3689.170'), (1, '-4153.120')] +[2023-09-27 03:55:18,504][25562] Updated weights for policy 0, policy_version 19360 (0.0017) +[2023-09-27 03:55:18,505][25564] Updated weights for policy 1, policy_version 19360 (0.0016) +[2023-09-27 03:55:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9928704. Throughput: 0: 796.7, 1: 796.9. Samples: 2478112. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:55:21,415][24596] Avg episode reward: [(0, '-3715.260'), (1, '-4153.070')] +[2023-09-27 03:55:26,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 9961472. Throughput: 0: 797.9, 1: 798.0. Samples: 2488133. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:55:26,415][24596] Avg episode reward: [(0, '-3715.260'), (1, '-4153.130')] +[2023-09-27 03:55:31,117][25562] Updated weights for policy 0, policy_version 19520 (0.0019) +[2023-09-27 03:55:31,117][25564] Updated weights for policy 1, policy_version 19520 (0.0018) +[2023-09-27 03:55:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 9994240. Throughput: 0: 801.7, 1: 801.4. Samples: 2497836. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:55:31,415][24596] Avg episode reward: [(0, '-3771.440'), (1, '-4095.470')] +[2023-09-27 03:55:36,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 10027008. Throughput: 0: 801.5, 1: 801.2. 
Samples: 2502649. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 03:55:36,414][24596] Avg episode reward: [(0, '-3801.280'), (1, '-4066.970')] +[2023-09-27 03:55:41,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 10051584. Throughput: 0: 799.0, 1: 799.5. Samples: 2512052. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 03:55:41,415][24596] Avg episode reward: [(0, '-3801.280'), (1, '-4066.970')] +[2023-09-27 03:55:44,234][25564] Updated weights for policy 1, policy_version 19680 (0.0018) +[2023-09-27 03:55:44,234][25562] Updated weights for policy 0, policy_version 19680 (0.0017) +[2023-09-27 03:55:46,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6348.8, 300 sec: 6387.0). Total num frames: 10084352. Throughput: 0: 797.0, 1: 797.2. Samples: 2521149. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 03:55:46,415][24596] Avg episode reward: [(0, '-3834.650'), (1, '-4057.440')] +[2023-09-27 03:55:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10117120. Throughput: 0: 800.0, 1: 799.9. Samples: 2526262. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 03:55:51,415][24596] Avg episode reward: [(0, '-3871.510'), (1, '-4057.410')] +[2023-09-27 03:55:56,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 10149888. Throughput: 0: 799.3, 1: 799.1. Samples: 2535885. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 03:55:56,415][24596] Avg episode reward: [(0, '-3871.510'), (1, '-4057.380')] +[2023-09-27 03:55:56,792][25564] Updated weights for policy 1, policy_version 19840 (0.0017) +[2023-09-27 03:55:56,792][25562] Updated weights for policy 0, policy_version 19840 (0.0018) +[2023-09-27 03:56:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 10182656. Throughput: 0: 799.4, 1: 797.5. Samples: 2545670. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:56:01,415][24596] Avg episode reward: [(0, '-3905.920'), (1, '-4037.930')] +[2023-09-27 03:56:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10215424. Throughput: 0: 802.8, 1: 803.0. Samples: 2550372. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:56:06,415][24596] Avg episode reward: [(0, '-3935.990'), (1, '-4005.840')] +[2023-09-27 03:56:09,473][25564] Updated weights for policy 1, policy_version 20000 (0.0017) +[2023-09-27 03:56:09,473][25562] Updated weights for policy 0, policy_version 20000 (0.0016) +[2023-09-27 03:56:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10248192. Throughput: 0: 799.5, 1: 799.6. Samples: 2560094. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:56:11,415][24596] Avg episode reward: [(0, '-3935.990'), (1, '-4005.790')] +[2023-09-27 03:56:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10280960. Throughput: 0: 799.8, 1: 802.1. Samples: 2569918. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:56:16,414][24596] Avg episode reward: [(0, '-3965.410'), (1, '-3977.240')] +[2023-09-27 03:56:21,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10313728. Throughput: 0: 796.6, 1: 796.6. Samples: 2574345. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:56:21,414][24596] Avg episode reward: [(0, '-3965.400'), (1, '-3977.310')] +[2023-09-27 03:56:22,566][25562] Updated weights for policy 0, policy_version 20160 (0.0015) +[2023-09-27 03:56:22,566][25564] Updated weights for policy 1, policy_version 20160 (0.0017) +[2023-09-27 03:56:26,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6400.9). Total num frames: 10342400. Throughput: 0: 797.4, 1: 796.6. Samples: 2583781. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:56:26,414][24596] Avg episode reward: [(0, '-3965.400'), (1, '-3957.710')] +[2023-09-27 03:56:31,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 10371072. Throughput: 0: 796.8, 1: 797.0. Samples: 2592867. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:56:31,415][24596] Avg episode reward: [(0, '-3998.900'), (1, '-3957.710')] +[2023-09-27 03:56:35,688][25562] Updated weights for policy 0, policy_version 20320 (0.0016) +[2023-09-27 03:56:35,689][25564] Updated weights for policy 1, policy_version 20320 (0.0018) +[2023-09-27 03:56:36,414][24596] Fps is (10 sec: 6143.9, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 10403840. Throughput: 0: 792.9, 1: 793.1. Samples: 2597634. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:56:36,415][24596] Avg episode reward: [(0, '-4027.870'), (1, '-3956.880')] +[2023-09-27 03:56:41,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10436608. Throughput: 0: 793.2, 1: 792.9. Samples: 2607257. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:56:41,414][24596] Avg episode reward: [(0, '-4049.960'), (1, '-3956.870')] +[2023-09-27 03:56:46,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10469376. Throughput: 0: 794.8, 1: 795.1. Samples: 2617214. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:56:46,414][24596] Avg episode reward: [(0, '-4083.640'), (1, '-3986.890')] +[2023-09-27 03:56:48,366][25564] Updated weights for policy 1, policy_version 20480 (0.0018) +[2023-09-27 03:56:48,366][25562] Updated weights for policy 0, policy_version 20480 (0.0017) +[2023-09-27 03:56:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10502144. Throughput: 0: 793.0, 1: 792.6. Samples: 2621727. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:56:51,414][24596] Avg episode reward: [(0, '-4094.140'), (1, '-3986.890')] +[2023-09-27 03:56:56,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10534912. Throughput: 0: 792.2, 1: 794.1. Samples: 2631478. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:56:56,415][24596] Avg episode reward: [(0, '-4094.140'), (1, '-3986.890')] +[2023-09-27 03:57:01,282][25564] Updated weights for policy 1, policy_version 20640 (0.0015) +[2023-09-27 03:57:01,283][25562] Updated weights for policy 0, policy_version 20640 (0.0018) +[2023-09-27 03:57:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10567680. Throughput: 0: 789.5, 1: 787.5. Samples: 2640882. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:57:01,414][24596] Avg episode reward: [(0, '-4121.240'), (1, '-3983.030')] +[2023-09-27 03:57:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 10600448. Throughput: 0: 795.5, 1: 796.0. Samples: 2645964. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:57:06,414][24596] Avg episode reward: [(0, '-4121.240'), (1, '-3983.030')] +[2023-09-27 03:57:11,414][24596] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 10625024. Throughput: 0: 795.9, 1: 795.4. Samples: 2655388. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 03:57:11,415][24596] Avg episode reward: [(0, '-4121.230'), (1, '-4003.530')] +[2023-09-27 03:57:11,429][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000020768_5316608.pth... +[2023-09-27 03:57:11,432][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000020768_5316608.pth... 
+[2023-09-27 03:57:11,458][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000017776_4550656.pth +[2023-09-27 03:57:11,470][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000017776_4550656.pth +[2023-09-27 03:57:14,013][25562] Updated weights for policy 0, policy_version 20800 (0.0018) +[2023-09-27 03:57:14,013][25564] Updated weights for policy 1, policy_version 20800 (0.0017) +[2023-09-27 03:57:16,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 10657792. Throughput: 0: 799.9, 1: 799.8. Samples: 2664855. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:57:16,415][24596] Avg episode reward: [(0, '-4138.950'), (1, '-4003.530')] +[2023-09-27 03:57:21,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 10690560. Throughput: 0: 799.3, 1: 799.7. Samples: 2669592. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:57:21,414][24596] Avg episode reward: [(0, '-4143.550'), (1, '-4003.530')] +[2023-09-27 03:57:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6348.8, 300 sec: 6387.0). Total num frames: 10723328. Throughput: 0: 800.3, 1: 800.4. Samples: 2679290. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 03:57:26,414][24596] Avg episode reward: [(0, '-4145.350'), (1, '-3993.010')] +[2023-09-27 03:57:26,742][25562] Updated weights for policy 0, policy_version 20960 (0.0017) +[2023-09-27 03:57:26,742][25564] Updated weights for policy 1, policy_version 20960 (0.0018) +[2023-09-27 03:57:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10756096. Throughput: 0: 798.2, 1: 797.8. Samples: 2689032. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:57:31,414][24596] Avg episode reward: [(0, '-4145.380'), (1, '-3992.990')] +[2023-09-27 03:57:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10788864. Throughput: 0: 800.2, 1: 800.8. 
Samples: 2693772. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:57:36,414][24596] Avg episode reward: [(0, '-4145.350'), (1, '-3992.990')] +[2023-09-27 03:57:39,488][25562] Updated weights for policy 0, policy_version 21120 (0.0016) +[2023-09-27 03:57:39,488][25564] Updated weights for policy 1, policy_version 21120 (0.0014) +[2023-09-27 03:57:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 10821632. Throughput: 0: 800.4, 1: 798.4. Samples: 2703423. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 03:57:41,415][24596] Avg episode reward: [(0, '-4145.350'), (1, '-3965.960')] +[2023-09-27 03:57:46,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 10854400. Throughput: 0: 804.6, 1: 803.1. Samples: 2713226. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 03:57:46,415][24596] Avg episode reward: [(0, '-4145.330'), (1, '-3990.490')] +[2023-09-27 03:57:51,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10887168. Throughput: 0: 798.8, 1: 798.5. Samples: 2717843. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 03:57:51,414][24596] Avg episode reward: [(0, '-4145.310'), (1, '-4010.600')] +[2023-09-27 03:57:52,168][25562] Updated weights for policy 0, policy_version 21280 (0.0017) +[2023-09-27 03:57:52,168][25564] Updated weights for policy 1, policy_version 21280 (0.0017) +[2023-09-27 03:57:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10919936. Throughput: 0: 806.0, 1: 806.2. Samples: 2727936. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 03:57:56,415][24596] Avg episode reward: [(0, '-4152.580'), (1, '-4010.600')] +[2023-09-27 03:58:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10952704. Throughput: 0: 812.1, 1: 811.9. Samples: 2737933. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:01,414][24596] Avg episode reward: [(0, '-4152.580'), (1, '-4010.600')] +[2023-09-27 03:58:04,897][25564] Updated weights for policy 1, policy_version 21440 (0.0018) +[2023-09-27 03:58:04,897][25562] Updated weights for policy 0, policy_version 21440 (0.0017) +[2023-09-27 03:58:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 10985472. Throughput: 0: 808.1, 1: 807.2. Samples: 2742280. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:06,414][24596] Avg episode reward: [(0, '-4166.460'), (1, '-3997.090')] +[2023-09-27 03:58:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6387.0). Total num frames: 11018240. Throughput: 0: 806.7, 1: 807.1. Samples: 2751911. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:11,415][24596] Avg episode reward: [(0, '-4188.700'), (1, '-4022.270')] +[2023-09-27 03:58:16,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 11042816. Throughput: 0: 801.6, 1: 801.8. Samples: 2761184. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:16,415][24596] Avg episode reward: [(0, '-4200.820'), (1, '-4022.270')] +[2023-09-27 03:58:17,867][25562] Updated weights for policy 0, policy_version 21600 (0.0018) +[2023-09-27 03:58:17,867][25564] Updated weights for policy 1, policy_version 21600 (0.0017) +[2023-09-27 03:58:21,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 11075584. Throughput: 0: 803.8, 1: 802.2. Samples: 2766040. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:21,415][24596] Avg episode reward: [(0, '-4200.830'), (1, '-4014.250')] +[2023-09-27 03:58:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11108352. Throughput: 0: 796.0, 1: 795.8. Samples: 2775057. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:26,415][24596] Avg episode reward: [(0, '-4200.830'), (1, '-4014.210')] +[2023-09-27 03:58:30,902][25562] Updated weights for policy 0, policy_version 21760 (0.0018) +[2023-09-27 03:58:30,902][25564] Updated weights for policy 1, policy_version 21760 (0.0017) +[2023-09-27 03:58:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11141120. Throughput: 0: 797.1, 1: 799.2. Samples: 2785060. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:31,414][24596] Avg episode reward: [(0, '-4233.790'), (1, '-4014.240')] +[2023-09-27 03:58:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11173888. Throughput: 0: 796.9, 1: 797.2. Samples: 2789581. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:36,414][24596] Avg episode reward: [(0, '-4233.790'), (1, '-4044.950')] +[2023-09-27 03:58:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11206656. Throughput: 0: 793.6, 1: 794.1. Samples: 2799384. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:41,415][24596] Avg episode reward: [(0, '-4249.460'), (1, '-4022.900')] +[2023-09-27 03:58:43,662][25564] Updated weights for policy 1, policy_version 21920 (0.0016) +[2023-09-27 03:58:43,662][25562] Updated weights for policy 0, policy_version 21920 (0.0016) +[2023-09-27 03:58:46,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11239424. Throughput: 0: 789.6, 1: 789.6. Samples: 2808998. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:46,415][24596] Avg episode reward: [(0, '-4276.340'), (1, '-4022.850')] +[2023-09-27 03:58:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 11272192. Throughput: 0: 794.2, 1: 795.0. Samples: 2813792. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:51,415][24596] Avg episode reward: [(0, '-4276.370'), (1, '-4030.020')] +[2023-09-27 03:58:56,378][25562] Updated weights for policy 0, policy_version 22080 (0.0018) +[2023-09-27 03:58:56,378][25564] Updated weights for policy 1, policy_version 22080 (0.0018) +[2023-09-27 03:58:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11304960. Throughput: 0: 794.7, 1: 794.1. Samples: 2823405. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:58:56,415][24596] Avg episode reward: [(0, '-4276.410'), (1, '-4030.020')] +[2023-09-27 03:59:01,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 11329536. Throughput: 0: 796.8, 1: 796.8. Samples: 2832895. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:59:01,414][24596] Avg episode reward: [(0, '-4262.750'), (1, '-4030.020')] +[2023-09-27 03:59:06,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 11362304. Throughput: 0: 798.7, 1: 800.2. Samples: 2837988. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-27 03:59:06,415][24596] Avg episode reward: [(0, '-4247.380'), (1, '-4052.400')] +[2023-09-27 03:59:09,361][25562] Updated weights for policy 0, policy_version 22240 (0.0017) +[2023-09-27 03:59:09,361][25564] Updated weights for policy 1, policy_version 22240 (0.0018) +[2023-09-27 03:59:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 11395072. Throughput: 0: 799.5, 1: 799.9. Samples: 2847028. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-27 03:59:11,414][24596] Avg episode reward: [(0, '-4247.380'), (1, '-4052.400')] +[2023-09-27 03:59:11,425][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000022256_5697536.pth... +[2023-09-27 03:59:11,425][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000022256_5697536.pth... 
+[2023-09-27 03:59:11,460][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000019264_4931584.pth +[2023-09-27 03:59:11,464][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000019264_4931584.pth +[2023-09-27 03:59:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11427840. Throughput: 0: 792.0, 1: 791.7. Samples: 2856328. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-27 03:59:16,415][24596] Avg episode reward: [(0, '-4247.440'), (1, '-4052.400')] +[2023-09-27 03:59:21,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11460608. Throughput: 0: 794.4, 1: 794.0. Samples: 2861059. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:59:21,415][24596] Avg episode reward: [(0, '-4247.420'), (1, '-4046.740')] +[2023-09-27 03:59:22,397][25562] Updated weights for policy 0, policy_version 22400 (0.0017) +[2023-09-27 03:59:22,397][25564] Updated weights for policy 1, policy_version 22400 (0.0017) +[2023-09-27 03:59:26,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11493376. Throughput: 0: 794.9, 1: 795.1. Samples: 2870932. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:59:26,415][24596] Avg episode reward: [(0, '-4247.430'), (1, '-4046.740')] +[2023-09-27 03:59:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11526144. Throughput: 0: 795.0, 1: 795.0. Samples: 2880548. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:59:31,415][24596] Avg episode reward: [(0, '-4247.430'), (1, '-4046.680')] +[2023-09-27 03:59:35,042][25562] Updated weights for policy 0, policy_version 22560 (0.0018) +[2023-09-27 03:59:35,042][25564] Updated weights for policy 1, policy_version 22560 (0.0018) +[2023-09-27 03:59:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11558912. Throughput: 0: 797.9, 1: 797.4. 
Samples: 2885584. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:59:36,415][24596] Avg episode reward: [(0, '-4247.430'), (1, '-4046.680')] +[2023-09-27 03:59:41,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6373.1). Total num frames: 11583488. Throughput: 0: 796.4, 1: 796.7. Samples: 2895094. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:59:41,415][24596] Avg episode reward: [(0, '-4247.470'), (1, '-4046.680')] +[2023-09-27 03:59:46,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 11616256. Throughput: 0: 796.8, 1: 796.7. Samples: 2904605. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 03:59:46,415][24596] Avg episode reward: [(0, '-4247.440'), (1, '-4046.650')] +[2023-09-27 03:59:47,785][25562] Updated weights for policy 0, policy_version 22720 (0.0017) +[2023-09-27 03:59:47,785][25564] Updated weights for policy 1, policy_version 22720 (0.0018) +[2023-09-27 03:59:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 11649024. Throughput: 0: 794.7, 1: 795.8. Samples: 2909561. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:59:51,415][24596] Avg episode reward: [(0, '-4247.450'), (1, '-4068.880')] +[2023-09-27 03:59:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 11681792. Throughput: 0: 797.0, 1: 796.7. Samples: 2918741. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 03:59:56,415][24596] Avg episode reward: [(0, '-4247.450'), (1, '-4068.880')] +[2023-09-27 04:00:00,636][25562] Updated weights for policy 0, policy_version 22880 (0.0016) +[2023-09-27 04:00:00,636][25564] Updated weights for policy 1, policy_version 22880 (0.0018) +[2023-09-27 04:00:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 11714560. Throughput: 0: 803.7, 1: 803.2. Samples: 2928640. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 04:00:01,415][24596] Avg episode reward: [(0, '-4247.450'), (1, '-4068.880')] +[2023-09-27 04:00:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11747328. Throughput: 0: 802.8, 1: 803.6. Samples: 2933345. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 04:00:06,415][24596] Avg episode reward: [(0, '-4247.460'), (1, '-4068.910')] +[2023-09-27 04:00:11,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11780096. Throughput: 0: 800.9, 1: 800.2. Samples: 2942985. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:00:11,414][24596] Avg episode reward: [(0, '-4270.940'), (1, '-4077.430')] +[2023-09-27 04:00:13,332][25562] Updated weights for policy 0, policy_version 23040 (0.0017) +[2023-09-27 04:00:13,332][25564] Updated weights for policy 1, policy_version 23040 (0.0017) +[2023-09-27 04:00:16,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11812864. Throughput: 0: 804.7, 1: 804.9. Samples: 2952980. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:00:16,415][24596] Avg episode reward: [(0, '-4270.890'), (1, '-4077.430')] +[2023-09-27 04:00:21,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11845632. Throughput: 0: 798.0, 1: 798.0. Samples: 2957404. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:00:21,415][24596] Avg episode reward: [(0, '-4270.890'), (1, '-4030.110')] +[2023-09-27 04:00:26,293][25562] Updated weights for policy 0, policy_version 23200 (0.0016) +[2023-09-27 04:00:26,294][25564] Updated weights for policy 1, policy_version 23200 (0.0018) +[2023-09-27 04:00:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11878400. Throughput: 0: 798.8, 1: 799.7. Samples: 2967026. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:00:26,414][24596] Avg episode reward: [(0, '-4276.830'), (1, '-4003.720')] +[2023-09-27 04:00:31,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 11902976. Throughput: 0: 796.2, 1: 796.4. Samples: 2976276. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:00:31,415][24596] Avg episode reward: [(0, '-4296.020'), (1, '-4003.720')] +[2023-09-27 04:00:36,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 11935744. Throughput: 0: 797.9, 1: 796.3. Samples: 2981301. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:00:36,415][24596] Avg episode reward: [(0, '-4323.910'), (1, '-4003.720')] +[2023-09-27 04:00:39,053][25562] Updated weights for policy 0, policy_version 23360 (0.0016) +[2023-09-27 04:00:39,054][25564] Updated weights for policy 1, policy_version 23360 (0.0017) +[2023-09-27 04:00:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 11968512. Throughput: 0: 802.8, 1: 802.9. Samples: 2990996. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 04:00:41,415][24596] Avg episode reward: [(0, '-4323.910'), (1, '-3999.510')] +[2023-09-27 04:00:46,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12001280. Throughput: 0: 796.4, 1: 796.4. Samples: 3000320. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 04:00:46,415][24596] Avg episode reward: [(0, '-4323.910'), (1, '-3999.600')] +[2023-09-27 04:00:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12034048. Throughput: 0: 796.5, 1: 795.7. Samples: 3004995. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 04:00:51,415][24596] Avg episode reward: [(0, '-4345.080'), (1, '-3999.550')] +[2023-09-27 04:00:52,149][25562] Updated weights for policy 0, policy_version 23520 (0.0016) +[2023-09-27 04:00:52,150][25564] Updated weights for policy 1, policy_version 23520 (0.0018) +[2023-09-27 04:00:56,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12066816. Throughput: 0: 796.4, 1: 796.3. Samples: 3014656. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 04:00:56,415][24596] Avg episode reward: [(0, '-4398.290'), (1, '-3999.550')] +[2023-09-27 04:01:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12099584. Throughput: 0: 794.7, 1: 794.8. Samples: 3024509. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 04:01:01,414][24596] Avg episode reward: [(0, '-4424.620'), (1, '-3999.610')] +[2023-09-27 04:01:04,736][25562] Updated weights for policy 0, policy_version 23680 (0.0018) +[2023-09-27 04:01:04,736][25564] Updated weights for policy 1, policy_version 23680 (0.0017) +[2023-09-27 04:01:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12132352. Throughput: 0: 797.4, 1: 797.9. Samples: 3029191. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 04:01:06,415][24596] Avg episode reward: [(0, '-4402.110'), (1, '-3999.620')] +[2023-09-27 04:01:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12165120. Throughput: 0: 801.6, 1: 800.2. Samples: 3039110. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 04:01:11,414][24596] Avg episode reward: [(0, '-4384.830'), (1, '-3999.530')] +[2023-09-27 04:01:11,423][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000023760_6082560.pth... +[2023-09-27 04:01:11,423][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000023760_6082560.pth... 
+[2023-09-27 04:01:11,457][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000020768_5316608.pth +[2023-09-27 04:01:11,458][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000020768_5316608.pth +[2023-09-27 04:01:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12197888. Throughput: 0: 803.5, 1: 802.7. Samples: 3048555. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:01:16,415][24596] Avg episode reward: [(0, '-4360.440'), (1, '-3999.530')] +[2023-09-27 04:01:17,523][25562] Updated weights for policy 0, policy_version 23840 (0.0018) +[2023-09-27 04:01:17,524][25564] Updated weights for policy 1, policy_version 23840 (0.0018) +[2023-09-27 04:01:21,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6400.9). Total num frames: 12230656. Throughput: 0: 802.9, 1: 802.1. Samples: 3053525. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:01:21,415][24596] Avg episode reward: [(0, '-4359.510'), (1, '-3999.530')] +[2023-09-27 04:01:26,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 12255232. Throughput: 0: 800.4, 1: 800.4. Samples: 3063033. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:01:26,415][24596] Avg episode reward: [(0, '-4359.510'), (1, '-3999.550')] +[2023-09-27 04:01:30,584][25564] Updated weights for policy 1, policy_version 24000 (0.0017) +[2023-09-27 04:01:30,584][25562] Updated weights for policy 0, policy_version 24000 (0.0017) +[2023-09-27 04:01:31,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12288000. Throughput: 0: 796.5, 1: 796.5. Samples: 3072004. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:01:31,415][24596] Avg episode reward: [(0, '-4284.170'), (1, '-3999.580')] +[2023-09-27 04:01:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12320768. 
Throughput: 0: 796.2, 1: 796.6. Samples: 3076668. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:01:36,415][24596] Avg episode reward: [(0, '-4250.870'), (1, '-3999.590')] +[2023-09-27 04:01:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12353536. Throughput: 0: 796.4, 1: 796.4. Samples: 3086336. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:01:41,415][24596] Avg episode reward: [(0, '-4250.870'), (1, '-3999.590')] +[2023-09-27 04:01:43,483][25564] Updated weights for policy 1, policy_version 24160 (0.0017) +[2023-09-27 04:01:43,483][25562] Updated weights for policy 0, policy_version 24160 (0.0016) +[2023-09-27 04:01:46,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 12386304. Throughput: 0: 793.9, 1: 792.8. Samples: 3095911. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:01:46,414][24596] Avg episode reward: [(0, '-4209.510'), (1, '-3999.590')] +[2023-09-27 04:01:51,414][24596] Fps is (10 sec: 6144.1, 60 sec: 6348.8, 300 sec: 6373.1). Total num frames: 12414976. Throughput: 0: 793.8, 1: 793.7. Samples: 3100626. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:01:51,414][24596] Avg episode reward: [(0, '-4180.810'), (1, '-3999.640')] +[2023-09-27 04:01:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6359.2). Total num frames: 12443648. Throughput: 0: 783.6, 1: 785.0. Samples: 3109697. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:01:56,414][24596] Avg episode reward: [(0, '-4150.220'), (1, '-3999.670')] +[2023-09-27 04:01:56,751][25564] Updated weights for policy 1, policy_version 24320 (0.0020) +[2023-09-27 04:01:56,751][25562] Updated weights for policy 0, policy_version 24320 (0.0019) +[2023-09-27 04:02:01,414][24596] Fps is (10 sec: 6143.9, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 12476416. Throughput: 0: 783.1, 1: 784.1. Samples: 3119077. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:02:01,415][24596] Avg episode reward: [(0, '-4124.010'), (1, '-3999.660')] +[2023-09-27 04:02:06,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 12509184. Throughput: 0: 774.4, 1: 775.3. Samples: 3123259. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:02:06,415][24596] Avg episode reward: [(0, '-4123.950'), (1, '-3999.660')] +[2023-09-27 04:02:10,175][25562] Updated weights for policy 0, policy_version 24480 (0.0015) +[2023-09-27 04:02:10,175][25564] Updated weights for policy 1, policy_version 24480 (0.0019) +[2023-09-27 04:02:11,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6359.2). Total num frames: 12533760. Throughput: 0: 774.7, 1: 772.9. Samples: 3132675. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:02:11,415][24596] Avg episode reward: [(0, '-4089.750'), (1, '-3999.650')] +[2023-09-27 04:02:16,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6359.2). Total num frames: 12566528. Throughput: 0: 773.7, 1: 773.8. Samples: 3141638. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:02:16,415][24596] Avg episode reward: [(0, '-4089.750'), (1, '-4013.070')] +[2023-09-27 04:02:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6359.2). Total num frames: 12599296. Throughput: 0: 774.9, 1: 774.8. Samples: 3146404. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:02:21,415][24596] Avg episode reward: [(0, '-4089.750'), (1, '-4013.110')] +[2023-09-27 04:02:23,508][25562] Updated weights for policy 0, policy_version 24640 (0.0018) +[2023-09-27 04:02:23,508][25564] Updated weights for policy 1, policy_version 24640 (0.0018) +[2023-09-27 04:02:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6359.2). Total num frames: 12632064. Throughput: 0: 771.0, 1: 771.4. Samples: 3155745. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:02:26,414][24596] Avg episode reward: [(0, '-4077.710'), (1, '-4013.110')] +[2023-09-27 04:02:31,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6331.4). Total num frames: 12656640. Throughput: 0: 760.4, 1: 761.2. Samples: 3164386. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:02:31,415][24596] Avg episode reward: [(0, '-4077.700'), (1, '-4013.110')] +[2023-09-27 04:02:36,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6331.4). Total num frames: 12689408. Throughput: 0: 759.8, 1: 759.7. Samples: 3169003. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:02:36,415][24596] Avg episode reward: [(0, '-4045.820'), (1, '-4013.080')] +[2023-09-27 04:02:36,984][25562] Updated weights for policy 0, policy_version 24800 (0.0015) +[2023-09-27 04:02:36,984][25564] Updated weights for policy 1, policy_version 24800 (0.0018) +[2023-09-27 04:02:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6331.4). Total num frames: 12722176. Throughput: 0: 765.2, 1: 764.4. Samples: 3178530. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 04:02:41,415][24596] Avg episode reward: [(0, '-3994.620'), (1, '-4013.150')] +[2023-09-27 04:02:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6331.4). Total num frames: 12754944. Throughput: 0: 767.4, 1: 769.1. Samples: 3188220. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 04:02:46,415][24596] Avg episode reward: [(0, '-3936.620'), (1, '-4013.140')] +[2023-09-27 04:02:50,132][25564] Updated weights for policy 1, policy_version 24960 (0.0016) +[2023-09-27 04:02:50,132][25562] Updated weights for policy 0, policy_version 24960 (0.0018) +[2023-09-27 04:02:51,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6075.7, 300 sec: 6303.7). Total num frames: 12779520. Throughput: 0: 773.1, 1: 772.9. Samples: 3192832. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 04:02:51,415][24596] Avg episode reward: [(0, '-3936.620'), (1, '-4013.140')] +[2023-09-27 04:02:56,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 12812288. Throughput: 0: 760.1, 1: 761.6. Samples: 3201152. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 04:02:56,415][24596] Avg episode reward: [(0, '-3868.570'), (1, '-4013.140')] +[2023-09-27 04:03:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 12845056. Throughput: 0: 764.9, 1: 763.8. Samples: 3210428. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:03:01,415][24596] Avg episode reward: [(0, '-3868.570'), (1, '-4013.130')] +[2023-09-27 04:03:03,964][25562] Updated weights for policy 0, policy_version 25120 (0.0018) +[2023-09-27 04:03:03,964][25564] Updated weights for policy 1, policy_version 25120 (0.0015) +[2023-09-27 04:03:06,414][24596] Fps is (10 sec: 5734.6, 60 sec: 6007.5, 300 sec: 6275.9). Total num frames: 12869632. Throughput: 0: 762.9, 1: 761.8. Samples: 3215015. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:03:06,414][24596] Avg episode reward: [(0, '-3868.530'), (1, '-4013.180')] +[2023-09-27 04:03:11,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 12902400. Throughput: 0: 759.1, 1: 758.5. Samples: 3224039. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:03:11,414][24596] Avg episode reward: [(0, '-3868.530'), (1, '-4040.530')] +[2023-09-27 04:03:11,422][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000025200_6451200.pth... +[2023-09-27 04:03:11,422][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000025200_6451200.pth... 
+[2023-09-27 04:03:11,459][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000022256_5697536.pth +[2023-09-27 04:03:11,459][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000022256_5697536.pth +[2023-09-27 04:03:16,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 12935168. Throughput: 0: 769.8, 1: 770.0. Samples: 3233675. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:03:16,415][24596] Avg episode reward: [(0, '-3861.640'), (1, '-4040.530')] +[2023-09-27 04:03:17,099][25564] Updated weights for policy 1, policy_version 25280 (0.0016) +[2023-09-27 04:03:17,099][25562] Updated weights for policy 0, policy_version 25280 (0.0016) +[2023-09-27 04:03:21,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 12967936. Throughput: 0: 768.9, 1: 768.5. Samples: 3238185. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:03:21,415][24596] Avg episode reward: [(0, '-3861.650'), (1, '-4040.740')] +[2023-09-27 04:03:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 13000704. Throughput: 0: 773.5, 1: 773.2. Samples: 3248128. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:03:26,414][24596] Avg episode reward: [(0, '-3861.650'), (1, '-4040.760')] +[2023-09-27 04:03:29,793][25564] Updated weights for policy 1, policy_version 25440 (0.0017) +[2023-09-27 04:03:29,794][25562] Updated weights for policy 0, policy_version 25440 (0.0018) +[2023-09-27 04:03:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 13033472. Throughput: 0: 773.9, 1: 771.9. Samples: 3257782. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:03:31,414][24596] Avg episode reward: [(0, '-3842.260'), (1, '-4040.760')] +[2023-09-27 04:03:36,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 13066240. 
Throughput: 0: 773.7, 1: 773.7. Samples: 3262466. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-27 04:03:36,415][24596] Avg episode reward: [(0, '-3832.310'), (1, '-4040.760')] +[2023-09-27 04:03:41,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 13099008. Throughput: 0: 789.5, 1: 789.6. Samples: 3272211. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-27 04:03:41,415][24596] Avg episode reward: [(0, '-3804.630'), (1, '-4013.660')] +[2023-09-27 04:03:42,536][25562] Updated weights for policy 0, policy_version 25600 (0.0016) +[2023-09-27 04:03:42,536][25564] Updated weights for policy 1, policy_version 25600 (0.0018) +[2023-09-27 04:03:46,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 13123584. Throughput: 0: 787.3, 1: 788.7. Samples: 3281345. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-27 04:03:46,415][24596] Avg episode reward: [(0, '-3804.630'), (1, '-4013.650')] +[2023-09-27 04:03:51,414][24596] Fps is (10 sec: 5734.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 13156352. Throughput: 0: 791.5, 1: 792.7. Samples: 3286303. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-27 04:03:51,415][24596] Avg episode reward: [(0, '-3787.990'), (1, '-4013.640')] +[2023-09-27 04:03:55,466][25562] Updated weights for policy 0, policy_version 25760 (0.0017) +[2023-09-27 04:03:55,466][25564] Updated weights for policy 1, policy_version 25760 (0.0017) +[2023-09-27 04:03:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 13189120. Throughput: 0: 798.1, 1: 798.5. Samples: 3295886. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-27 04:03:56,415][24596] Avg episode reward: [(0, '-3788.000'), (1, '-4013.640')] +[2023-09-27 04:04:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 13221888. Throughput: 0: 798.0, 1: 797.5. Samples: 3305472. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-27 04:04:01,415][24596] Avg episode reward: [(0, '-3787.960'), (1, '-4013.640')] +[2023-09-27 04:04:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 13254656. Throughput: 0: 801.3, 1: 801.6. Samples: 3310312. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-27 04:04:06,415][24596] Avg episode reward: [(0, '-3787.960'), (1, '-4042.840')] +[2023-09-27 04:04:08,130][25564] Updated weights for policy 1, policy_version 25920 (0.0018) +[2023-09-27 04:04:08,130][25562] Updated weights for policy 0, policy_version 25920 (0.0017) +[2023-09-27 04:04:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 13287424. Throughput: 0: 799.5, 1: 799.4. Samples: 3320079. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:11,415][24596] Avg episode reward: [(0, '-3764.870'), (1, '-4099.690')] +[2023-09-27 04:04:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13320192. Throughput: 0: 803.1, 1: 802.8. Samples: 3330048. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:16,415][24596] Avg episode reward: [(0, '-3764.900'), (1, '-4099.690')] +[2023-09-27 04:04:20,704][25564] Updated weights for policy 1, policy_version 26080 (0.0018) +[2023-09-27 04:04:20,705][25562] Updated weights for policy 0, policy_version 26080 (0.0017) +[2023-09-27 04:04:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13352960. Throughput: 0: 803.7, 1: 803.2. Samples: 3334777. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:21,415][24596] Avg episode reward: [(0, '-3761.710'), (1, '-4099.690')] +[2023-09-27 04:04:26,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 13385728. Throughput: 0: 802.2, 1: 801.9. Samples: 3344393. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:26,415][24596] Avg episode reward: [(0, '-3728.900'), (1, '-4109.230')] +[2023-09-27 04:04:31,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13418496. Throughput: 0: 810.9, 1: 809.3. Samples: 3354257. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:31,415][24596] Avg episode reward: [(0, '-3700.910'), (1, '-4096.860')] +[2023-09-27 04:04:33,632][25564] Updated weights for policy 1, policy_version 26240 (0.0016) +[2023-09-27 04:04:33,632][25562] Updated weights for policy 0, policy_version 26240 (0.0014) +[2023-09-27 04:04:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 13451264. Throughput: 0: 804.8, 1: 804.4. Samples: 3358720. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:36,415][24596] Avg episode reward: [(0, '-3700.860'), (1, '-4096.900')] +[2023-09-27 04:04:41,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 13484032. Throughput: 0: 809.9, 1: 809.8. Samples: 3368776. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:41,414][24596] Avg episode reward: [(0, '-3700.850'), (1, '-4096.900')] +[2023-09-27 04:04:46,269][25562] Updated weights for policy 0, policy_version 26400 (0.0015) +[2023-09-27 04:04:46,269][25564] Updated weights for policy 1, policy_version 26400 (0.0018) +[2023-09-27 04:04:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6331.4). Total num frames: 13516800. Throughput: 0: 807.4, 1: 807.8. Samples: 3378153. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:46,415][24596] Avg episode reward: [(0, '-3678.760'), (1, '-4098.030')] +[2023-09-27 04:04:51,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 13541376. Throughput: 0: 808.1, 1: 807.8. Samples: 3383027. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:51,415][24596] Avg episode reward: [(0, '-3678.760'), (1, '-4158.630')] +[2023-09-27 04:04:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13574144. Throughput: 0: 797.9, 1: 798.2. Samples: 3391905. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:04:56,415][24596] Avg episode reward: [(0, '-3656.940'), (1, '-4158.630')] +[2023-09-27 04:04:59,307][25564] Updated weights for policy 1, policy_version 26560 (0.0018) +[2023-09-27 04:04:59,307][25562] Updated weights for policy 0, policy_version 26560 (0.0018) +[2023-09-27 04:05:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13606912. Throughput: 0: 796.5, 1: 796.5. Samples: 3401733. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:05:01,415][24596] Avg episode reward: [(0, '-3656.940'), (1, '-4140.110')] +[2023-09-27 04:05:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13639680. Throughput: 0: 798.5, 1: 799.2. Samples: 3406671. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:05:06,415][24596] Avg episode reward: [(0, '-3633.360'), (1, '-4131.390')] +[2023-09-27 04:05:11,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13672448. Throughput: 0: 799.6, 1: 800.2. Samples: 3416384. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:05:11,414][24596] Avg episode reward: [(0, '-3633.360'), (1, '-4131.390')] +[2023-09-27 04:05:11,424][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000026704_6836224.pth... +[2023-09-27 04:05:11,424][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000026704_6836224.pth... 
+[2023-09-27 04:05:11,459][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000023760_6082560.pth +[2023-09-27 04:05:11,460][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000023760_6082560.pth +[2023-09-27 04:05:11,962][25562] Updated weights for policy 0, policy_version 26720 (0.0017) +[2023-09-27 04:05:11,962][25564] Updated weights for policy 1, policy_version 26720 (0.0015) +[2023-09-27 04:05:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13705216. Throughput: 0: 799.9, 1: 801.2. Samples: 3426304. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:05:16,414][24596] Avg episode reward: [(0, '-3622.030'), (1, '-4157.610')] +[2023-09-27 04:05:21,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13737984. Throughput: 0: 801.4, 1: 801.8. Samples: 3430867. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 04:05:21,415][24596] Avg episode reward: [(0, '-3597.500'), (1, '-4147.620')] +[2023-09-27 04:05:24,852][25562] Updated weights for policy 0, policy_version 26880 (0.0017) +[2023-09-27 04:05:24,852][25564] Updated weights for policy 1, policy_version 26880 (0.0017) +[2023-09-27 04:05:26,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 13770752. Throughput: 0: 798.1, 1: 797.2. Samples: 3440565. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 04:05:26,415][24596] Avg episode reward: [(0, '-3597.500'), (1, '-4147.610')] +[2023-09-27 04:05:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 13803520. Throughput: 0: 798.4, 1: 798.4. Samples: 3450011. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 04:05:31,415][24596] Avg episode reward: [(0, '-3597.500'), (1, '-4147.610')] +[2023-09-27 04:05:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 13836288. Throughput: 0: 799.6, 1: 799.3. 
Samples: 3454976. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 04:05:36,415][24596] Avg episode reward: [(0, '-3581.240'), (1, '-4166.820')] +[2023-09-27 04:05:37,535][25564] Updated weights for policy 1, policy_version 27040 (0.0018) +[2023-09-27 04:05:37,535][25562] Updated weights for policy 0, policy_version 27040 (0.0017) +[2023-09-27 04:05:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6331.4). Total num frames: 13869056. Throughput: 0: 807.5, 1: 808.6. Samples: 3464629. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:05:41,415][24596] Avg episode reward: [(0, '-3581.180'), (1, '-4166.830')] +[2023-09-27 04:05:46,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 13893632. Throughput: 0: 802.6, 1: 803.8. Samples: 3474024. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:05:46,415][24596] Avg episode reward: [(0, '-3581.180'), (1, '-4166.830')] +[2023-09-27 04:05:50,393][25562] Updated weights for policy 0, policy_version 27200 (0.0019) +[2023-09-27 04:05:50,393][25564] Updated weights for policy 1, policy_version 27200 (0.0019) +[2023-09-27 04:05:51,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13926400. Throughput: 0: 802.5, 1: 804.0. Samples: 3478966. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:05:51,415][24596] Avg episode reward: [(0, '-3545.170'), (1, '-4176.040')] +[2023-09-27 04:05:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13959168. Throughput: 0: 799.5, 1: 799.1. Samples: 3488319. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:05:56,414][24596] Avg episode reward: [(0, '-3545.170'), (1, '-4176.040')] +[2023-09-27 04:06:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 13991936. Throughput: 0: 796.5, 1: 796.6. Samples: 3497992. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:01,415][24596] Avg episode reward: [(0, '-3523.550'), (1, '-4176.080')] +[2023-09-27 04:06:03,024][25564] Updated weights for policy 1, policy_version 27360 (0.0016) +[2023-09-27 04:06:03,026][25562] Updated weights for policy 0, policy_version 27360 (0.0016) +[2023-09-27 04:06:06,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 14024704. Throughput: 0: 802.0, 1: 802.0. Samples: 3503049. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:06,415][24596] Avg episode reward: [(0, '-3519.880'), (1, '-4176.090')] +[2023-09-27 04:06:11,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 14057472. Throughput: 0: 800.3, 1: 801.5. Samples: 3512646. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:11,415][24596] Avg episode reward: [(0, '-3494.630'), (1, '-4176.130')] +[2023-09-27 04:06:15,652][25564] Updated weights for policy 1, policy_version 27520 (0.0015) +[2023-09-27 04:06:15,653][25562] Updated weights for policy 0, policy_version 27520 (0.0017) +[2023-09-27 04:06:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 14090240. Throughput: 0: 806.3, 1: 805.9. Samples: 3522560. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:16,415][24596] Avg episode reward: [(0, '-3487.000'), (1, '-4176.130')] +[2023-09-27 04:06:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 14123008. Throughput: 0: 805.1, 1: 805.1. Samples: 3527435. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:21,415][24596] Avg episode reward: [(0, '-3487.000'), (1, '-4188.480')] +[2023-09-27 04:06:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 14155776. Throughput: 0: 803.6, 1: 802.4. Samples: 3536901. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:26,415][24596] Avg episode reward: [(0, '-3486.970'), (1, '-4215.560')] +[2023-09-27 04:06:28,479][25562] Updated weights for policy 0, policy_version 27680 (0.0016) +[2023-09-27 04:06:28,479][25564] Updated weights for policy 1, policy_version 27680 (0.0013) +[2023-09-27 04:06:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 14188544. Throughput: 0: 805.2, 1: 804.5. Samples: 3546460. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:31,415][24596] Avg episode reward: [(0, '-3486.970'), (1, '-4215.560')] +[2023-09-27 04:06:36,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 14221312. Throughput: 0: 803.8, 1: 802.1. Samples: 3551232. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:36,414][24596] Avg episode reward: [(0, '-3475.070'), (1, '-4215.560')] +[2023-09-27 04:06:41,339][25564] Updated weights for policy 1, policy_version 27840 (0.0017) +[2023-09-27 04:06:41,339][25562] Updated weights for policy 0, policy_version 27840 (0.0017) +[2023-09-27 04:06:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 14254080. Throughput: 0: 806.8, 1: 807.2. Samples: 3560947. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:41,414][24596] Avg episode reward: [(0, '-3475.070'), (1, '-4215.560')] +[2023-09-27 04:06:46,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6317.6). Total num frames: 14278656. Throughput: 0: 803.1, 1: 803.1. Samples: 3570272. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:46,415][24596] Avg episode reward: [(0, '-3475.070'), (1, '-4215.490')] +[2023-09-27 04:06:51,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 14311424. Throughput: 0: 801.1, 1: 801.2. Samples: 3575152. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:06:51,415][24596] Avg episode reward: [(0, '-3457.930'), (1, '-4229.050')] +[2023-09-27 04:06:54,264][25562] Updated weights for policy 0, policy_version 28000 (0.0015) +[2023-09-27 04:06:54,265][25564] Updated weights for policy 1, policy_version 28000 (0.0017) +[2023-09-27 04:06:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.5). Total num frames: 14344192. Throughput: 0: 798.1, 1: 798.5. Samples: 3584494. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 04:06:56,414][24596] Avg episode reward: [(0, '-3452.690'), (1, '-4228.990')] +[2023-09-27 04:07:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 14376960. Throughput: 0: 796.4, 1: 796.5. Samples: 3594241. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 04:07:01,414][24596] Avg episode reward: [(0, '-3452.690'), (1, '-4228.990')] +[2023-09-27 04:07:06,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 14409728. Throughput: 0: 796.7, 1: 796.8. Samples: 3599143. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 04:07:06,415][24596] Avg episode reward: [(0, '-3452.690'), (1, '-4229.010')] +[2023-09-27 04:07:07,008][25564] Updated weights for policy 1, policy_version 28160 (0.0016) +[2023-09-27 04:07:07,009][25562] Updated weights for policy 0, policy_version 28160 (0.0016) +[2023-09-27 04:07:11,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 14442496. Throughput: 0: 796.4, 1: 796.4. Samples: 3608576. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 04:07:11,415][24596] Avg episode reward: [(0, '-3448.790'), (1, '-4237.080')] +[2023-09-27 04:07:11,427][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000028208_7221248.pth... +[2023-09-27 04:07:11,427][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000028208_7221248.pth... 
+[2023-09-27 04:07:11,465][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000025200_6451200.pth +[2023-09-27 04:07:11,466][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000025200_6451200.pth +[2023-09-27 04:07:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 14475264. Throughput: 0: 799.9, 1: 800.3. Samples: 3618468. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 04:07:16,415][24596] Avg episode reward: [(0, '-3437.670'), (1, '-4206.700')] +[2023-09-27 04:07:19,704][25564] Updated weights for policy 1, policy_version 28320 (0.0016) +[2023-09-27 04:07:19,706][25562] Updated weights for policy 0, policy_version 28320 (0.0018) +[2023-09-27 04:07:21,414][24596] Fps is (10 sec: 6553.9, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 14508032. Throughput: 0: 798.7, 1: 798.9. Samples: 3623123. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 04:07:21,414][24596] Avg episode reward: [(0, '-3437.620'), (1, '-4206.640')] +[2023-09-27 04:07:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 14540800. Throughput: 0: 801.0, 1: 801.2. Samples: 3633045. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 04:07:26,415][24596] Avg episode reward: [(0, '-3386.190'), (1, '-4206.640')] +[2023-09-27 04:07:31,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 14573568. Throughput: 0: 803.5, 1: 803.6. Samples: 3642592. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 04:07:31,414][24596] Avg episode reward: [(0, '-3386.190'), (1, '-4206.680')] +[2023-09-27 04:07:32,440][25562] Updated weights for policy 0, policy_version 28480 (0.0017) +[2023-09-27 04:07:32,440][25564] Updated weights for policy 1, policy_version 28480 (0.0018) +[2023-09-27 04:07:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 14606336. Throughput: 0: 803.9, 1: 803.5. 
Samples: 3647488. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:07:36,414][24596] Avg episode reward: [(0, '-3383.690'), (1, '-4248.930')]
+[2023-09-27 04:07:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 14639104. Throughput: 0: 809.2, 1: 808.0. Samples: 3657268. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:07:41,414][24596] Avg episode reward: [(0, '-3383.640'), (1, '-4248.940')]
+[2023-09-27 04:07:45,102][25562] Updated weights for policy 0, policy_version 28640 (0.0012)
+[2023-09-27 04:07:45,103][25564] Updated weights for policy 1, policy_version 28640 (0.0019)
+[2023-09-27 04:07:46,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 14671872. Throughput: 0: 805.8, 1: 805.3. Samples: 3666740. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:07:46,415][24596] Avg episode reward: [(0, '-3378.540'), (1, '-4233.590')]
+[2023-09-27 04:07:51,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 14696448. Throughput: 0: 805.3, 1: 804.1. Samples: 3671568. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:07:51,415][24596] Avg episode reward: [(0, '-3363.330'), (1, '-4250.960')]
+[2023-09-27 04:07:56,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 14729216. Throughput: 0: 798.0, 1: 798.3. Samples: 3680411. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:07:56,415][24596] Avg episode reward: [(0, '-3363.330'), (1, '-4250.960')]
+[2023-09-27 04:07:58,369][25562] Updated weights for policy 0, policy_version 28800 (0.0017)
+[2023-09-27 04:07:58,370][25564] Updated weights for policy 1, policy_version 28800 (0.0018)
+[2023-09-27 04:08:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 14761984. Throughput: 0: 798.4, 1: 795.0. Samples: 3690172. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:08:01,414][24596] Avg episode reward: [(0, '-3363.330'), (1, '-4250.980')]
+[2023-09-27 04:08:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 14794752. Throughput: 0: 794.2, 1: 794.0. Samples: 3694592. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:08:06,415][24596] Avg episode reward: [(0, '-3363.360'), (1, '-4250.930')]
+[2023-09-27 04:08:11,350][25562] Updated weights for policy 0, policy_version 28960 (0.0018)
+[2023-09-27 04:08:11,350][25564] Updated weights for policy 1, policy_version 28960 (0.0019)
+[2023-09-27 04:08:11,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 14827520. Throughput: 0: 791.6, 1: 790.9. Samples: 3704256. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 04:08:11,415][24596] Avg episode reward: [(0, '-3384.670'), (1, '-4250.930')]
+[2023-09-27 04:08:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 14860288. Throughput: 0: 791.3, 1: 791.8. Samples: 3713829. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 04:08:16,414][24596] Avg episode reward: [(0, '-3401.960'), (1, '-4265.730')]
+[2023-09-27 04:08:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 14893056. Throughput: 0: 793.7, 1: 793.8. Samples: 3718927. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 04:08:21,415][24596] Avg episode reward: [(0, '-3401.960'), (1, '-4265.790')]
+[2023-09-27 04:08:24,031][25564] Updated weights for policy 1, policy_version 29120 (0.0017)
+[2023-09-27 04:08:24,031][25562] Updated weights for policy 0, policy_version 29120 (0.0018)
+[2023-09-27 04:08:26,414][24596] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 14917632. Throughput: 0: 789.5, 1: 789.6. Samples: 3728328. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 04:08:26,415][24596] Avg episode reward: [(0, '-3401.960'), (1, '-4265.800')]
+[2023-09-27 04:08:31,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 14950400. Throughput: 0: 789.4, 1: 790.1. Samples: 3737820. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:08:31,415][24596] Avg episode reward: [(0, '-3426.300'), (1, '-4265.850')]
+[2023-09-27 04:08:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 14983168. Throughput: 0: 787.7, 1: 789.1. Samples: 3742522. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:08:36,415][24596] Avg episode reward: [(0, '-3445.720'), (1, '-4265.850')]
+[2023-09-27 04:08:36,849][25562] Updated weights for policy 0, policy_version 29280 (0.0017)
+[2023-09-27 04:08:36,850][25564] Updated weights for policy 1, policy_version 29280 (0.0016)
+[2023-09-27 04:08:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6414.7). Total num frames: 15015936. Throughput: 0: 799.3, 1: 799.3. Samples: 3752348. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:08:41,415][24596] Avg episode reward: [(0, '-3445.720'), (1, '-4265.850')]
+[2023-09-27 04:08:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6414.7). Total num frames: 15048704. Throughput: 0: 798.6, 1: 801.0. Samples: 3762156. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:08:46,415][24596] Avg episode reward: [(0, '-3445.720'), (1, '-4265.850')]
+[2023-09-27 04:08:49,616][25564] Updated weights for policy 1, policy_version 29440 (0.0018)
+[2023-09-27 04:08:49,616][25562] Updated weights for policy 0, policy_version 29440 (0.0017)
+[2023-09-27 04:08:51,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 15081472. Throughput: 0: 799.4, 1: 799.4. Samples: 3766539. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:08:51,415][24596] Avg episode reward: [(0, '-3476.600'), (1, '-4265.800')]
+[2023-09-27 04:08:56,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 15114240. Throughput: 0: 800.1, 1: 801.2. Samples: 3776314. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:08:56,415][24596] Avg episode reward: [(0, '-3498.410'), (1, '-4265.800')]
+[2023-09-27 04:09:01,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 15138816. Throughput: 0: 794.7, 1: 794.4. Samples: 3785341. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:09:01,415][24596] Avg episode reward: [(0, '-3498.410'), (1, '-4265.790')]
+[2023-09-27 04:09:02,839][25562] Updated weights for policy 0, policy_version 29600 (0.0018)
+[2023-09-27 04:09:02,839][25564] Updated weights for policy 1, policy_version 29600 (0.0017)
+[2023-09-27 04:09:06,414][24596] Fps is (10 sec: 5734.6, 60 sec: 6280.6, 300 sec: 6387.0). Total num frames: 15171584. Throughput: 0: 791.5, 1: 792.0. Samples: 3790183. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:09:06,414][24596] Avg episode reward: [(0, '-3498.410'), (1, '-4265.790')]
+[2023-09-27 04:09:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 15204352. Throughput: 0: 789.6, 1: 790.6. Samples: 3799438. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:09:11,415][24596] Avg episode reward: [(0, '-3518.850'), (1, '-4241.920')]
+[2023-09-27 04:09:11,427][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000029696_7602176.pth...
+[2023-09-27 04:09:11,428][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000029696_7602176.pth...
+[2023-09-27 04:09:11,462][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000026704_6836224.pth
+[2023-09-27 04:09:11,464][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000026704_6836224.pth
+[2023-09-27 04:09:15,697][25564] Updated weights for policy 1, policy_version 29760 (0.0017)
+[2023-09-27 04:09:15,698][25562] Updated weights for policy 0, policy_version 29760 (0.0017)
+[2023-09-27 04:09:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 15237120. Throughput: 0: 794.1, 1: 793.9. Samples: 3809280. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:09:16,414][24596] Avg episode reward: [(0, '-3545.930'), (1, '-4271.810')]
+[2023-09-27 04:09:21,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 15269888. Throughput: 0: 793.9, 1: 793.8. Samples: 3813966. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:09:21,415][24596] Avg episode reward: [(0, '-3555.760'), (1, '-4271.810')]
+[2023-09-27 04:09:26,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15302656. Throughput: 0: 790.7, 1: 790.7. Samples: 3823513. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:09:26,415][24596] Avg episode reward: [(0, '-3566.410'), (1, '-4271.810')]
+[2023-09-27 04:09:28,608][25562] Updated weights for policy 0, policy_version 29920 (0.0015)
+[2023-09-27 04:09:28,609][25564] Updated weights for policy 1, policy_version 29920 (0.0016)
+[2023-09-27 04:09:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15335424. Throughput: 0: 789.0, 1: 789.7. Samples: 3833198. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 04:09:31,415][24596] Avg episode reward: [(0, '-3566.410'), (1, '-4309.200')]
+[2023-09-27 04:09:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15368192. Throughput: 0: 793.5, 1: 793.4. Samples: 3837952. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 04:09:36,415][24596] Avg episode reward: [(0, '-3564.220'), (1, '-4325.110')]
+[2023-09-27 04:09:41,391][25564] Updated weights for policy 1, policy_version 30080 (0.0017)
+[2023-09-27 04:09:41,393][25562] Updated weights for policy 0, policy_version 30080 (0.0016)
+[2023-09-27 04:09:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15400960. Throughput: 0: 789.9, 1: 789.4. Samples: 3847381. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 04:09:41,415][24596] Avg episode reward: [(0, '-3579.450'), (1, '-4325.110')]
+[2023-09-27 04:09:46,414][24596] Fps is (10 sec: 5734.6, 60 sec: 6280.6, 300 sec: 6387.0). Total num frames: 15425536. Throughput: 0: 795.5, 1: 795.4. Samples: 3856932. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 04:09:46,414][24596] Avg episode reward: [(0, '-3579.430'), (1, '-4313.730')]
+[2023-09-27 04:09:51,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 15458304. Throughput: 0: 797.1, 1: 797.4. Samples: 3861933. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 04:09:51,415][24596] Avg episode reward: [(0, '-3577.640'), (1, '-4300.570')]
+[2023-09-27 04:09:54,299][25562] Updated weights for policy 0, policy_version 30240 (0.0017)
+[2023-09-27 04:09:54,299][25564] Updated weights for policy 1, policy_version 30240 (0.0018)
+[2023-09-27 04:09:56,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 15491072. Throughput: 0: 796.6, 1: 796.8. Samples: 3871144. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 04:09:56,415][24596] Avg episode reward: [(0, '-3577.640'), (1, '-4300.570')]
+[2023-09-27 04:10:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15523840. Throughput: 0: 796.4, 1: 796.4. Samples: 3880960. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 04:10:01,414][24596] Avg episode reward: [(0, '-3577.270'), (1, '-4300.530')]
+[2023-09-27 04:10:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15556608. Throughput: 0: 796.2, 1: 796.3. Samples: 3885631. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 04:10:06,414][24596] Avg episode reward: [(0, '-3602.550'), (1, '-4300.530')]
+[2023-09-27 04:10:07,017][25564] Updated weights for policy 1, policy_version 30400 (0.0017)
+[2023-09-27 04:10:07,018][25562] Updated weights for policy 0, policy_version 30400 (0.0017)
+[2023-09-27 04:10:11,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15589376. Throughput: 0: 797.8, 1: 797.5. Samples: 3895301. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 04:10:11,415][24596] Avg episode reward: [(0, '-3588.260'), (1, '-4300.540')]
+[2023-09-27 04:10:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15622144. Throughput: 0: 796.7, 1: 796.1. Samples: 3904874. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:10:16,415][24596] Avg episode reward: [(0, '-3588.260'), (1, '-4294.490')]
+[2023-09-27 04:10:20,031][25562] Updated weights for policy 0, policy_version 30560 (0.0016)
+[2023-09-27 04:10:20,031][25564] Updated weights for policy 1, policy_version 30560 (0.0017)
+[2023-09-27 04:10:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15654912. Throughput: 0: 796.4, 1: 796.4. Samples: 3909629. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:10:21,415][24596] Avg episode reward: [(0, '-3618.590'), (1, '-4294.460')]
+[2023-09-27 04:10:26,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 15679488. Throughput: 0: 794.6, 1: 794.3. Samples: 3918884. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:10:26,415][24596] Avg episode reward: [(0, '-3618.590'), (1, '-4294.460')]
+[2023-09-27 04:10:31,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6359.2). Total num frames: 15712256. Throughput: 0: 795.2, 1: 795.4. Samples: 3928512. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:10:31,415][24596] Avg episode reward: [(0, '-3648.540'), (1, '-4293.310')]
+[2023-09-27 04:10:32,864][25564] Updated weights for policy 1, policy_version 30720 (0.0016)
+[2023-09-27 04:10:32,864][25562] Updated weights for policy 0, policy_version 30720 (0.0017)
+[2023-09-27 04:10:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6359.2). Total num frames: 15745024. Throughput: 0: 795.4, 1: 795.0. Samples: 3933498. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:10:36,414][24596] Avg episode reward: [(0, '-3668.270'), (1, '-4293.350')]
+[2023-09-27 04:10:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 15777792. Throughput: 0: 800.2, 1: 800.0. Samples: 3943156. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:10:41,415][24596] Avg episode reward: [(0, '-3668.270'), (1, '-4293.350')]
+[2023-09-27 04:10:45,440][25562] Updated weights for policy 0, policy_version 30880 (0.0017)
+[2023-09-27 04:10:45,440][25564] Updated weights for policy 1, policy_version 30880 (0.0016)
+[2023-09-27 04:10:46,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15810560. Throughput: 0: 798.1, 1: 798.4. Samples: 3952803. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:10:46,414][24596] Avg episode reward: [(0, '-3711.200'), (1, '-4291.420')]
+[2023-09-27 04:10:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15843328. Throughput: 0: 803.0, 1: 803.6. Samples: 3957927. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:10:51,415][24596] Avg episode reward: [(0, '-3711.200'), (1, '-4282.610')]
+[2023-09-27 04:10:56,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15876096. Throughput: 0: 800.4, 1: 800.5. Samples: 3967339. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:10:56,415][24596] Avg episode reward: [(0, '-3718.140'), (1, '-4281.740')]
+[2023-09-27 04:10:58,131][25564] Updated weights for policy 1, policy_version 31040 (0.0018)
+[2023-09-27 04:10:58,132][25562] Updated weights for policy 0, policy_version 31040 (0.0016)
+[2023-09-27 04:11:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15908864. Throughput: 0: 803.8, 1: 803.8. Samples: 3977216. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:01,415][24596] Avg episode reward: [(0, '-3718.140'), (1, '-4281.720')]
+[2023-09-27 04:11:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 15941632. Throughput: 0: 806.1, 1: 806.4. Samples: 3982194. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:06,415][24596] Avg episode reward: [(0, '-3718.160'), (1, '-4281.720')]
+[2023-09-27 04:11:10,702][25564] Updated weights for policy 1, policy_version 31200 (0.0018)
+[2023-09-27 04:11:10,702][25562] Updated weights for policy 0, policy_version 31200 (0.0016)
+[2023-09-27 04:11:11,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 15974400. Throughput: 0: 809.2, 1: 809.4. Samples: 3991722. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:11,414][24596] Avg episode reward: [(0, '-3718.160'), (1, '-4281.720')]
+[2023-09-27 04:11:11,422][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000031200_7987200.pth...
+[2023-09-27 04:11:11,423][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000031200_7987200.pth...
+[2023-09-27 04:11:11,457][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000028208_7221248.pth
+[2023-09-27 04:11:11,458][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000028208_7221248.pth
+[2023-09-27 04:11:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16007168. Throughput: 0: 813.3, 1: 813.2. Samples: 4001706. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:16,414][24596] Avg episode reward: [(0, '-3734.630'), (1, '-4281.760')]
+[2023-09-27 04:11:21,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16039936. Throughput: 0: 809.9, 1: 809.9. Samples: 4006392. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:21,415][24596] Avg episode reward: [(0, '-3734.630'), (1, '-4281.760')]
+[2023-09-27 04:11:23,371][25562] Updated weights for policy 0, policy_version 31360 (0.0016)
+[2023-09-27 04:11:23,371][25564] Updated weights for policy 1, policy_version 31360 (0.0018)
+[2023-09-27 04:11:26,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6387.0). Total num frames: 16072704. Throughput: 0: 811.0, 1: 810.6. Samples: 4016128. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:26,414][24596] Avg episode reward: [(0, '-3737.730'), (1, '-4281.760')]
+[2023-09-27 04:11:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6387.0). Total num frames: 16105472. Throughput: 0: 809.8, 1: 810.3. Samples: 4025708. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:31,415][24596] Avg episode reward: [(0, '-3749.650'), (1, '-4281.760')]
+[2023-09-27 04:11:36,220][25564] Updated weights for policy 1, policy_version 31520 (0.0017)
+[2023-09-27 04:11:36,221][25562] Updated weights for policy 0, policy_version 31520 (0.0017)
+[2023-09-27 04:11:36,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6387.0). Total num frames: 16138240. Throughput: 0: 806.4, 1: 805.6. Samples: 4030464. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:36,415][24596] Avg episode reward: [(0, '-3741.290'), (1, '-4281.790')]
+[2023-09-27 04:11:41,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 16171008. Throughput: 0: 808.1, 1: 808.7. Samples: 4040094. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:41,415][24596] Avg episode reward: [(0, '-3705.920'), (1, '-4281.740')]
+[2023-09-27 04:11:46,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 16203776. Throughput: 0: 806.0, 1: 806.6. Samples: 4049784. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:46,414][24596] Avg episode reward: [(0, '-3673.750'), (1, '-4281.770')]
+[2023-09-27 04:11:48,868][25564] Updated weights for policy 1, policy_version 31680 (0.0016)
+[2023-09-27 04:11:48,868][25562] Updated weights for policy 0, policy_version 31680 (0.0017)
+[2023-09-27 04:11:51,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 16236544. Throughput: 0: 807.1, 1: 808.3. Samples: 4054888. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:51,414][24596] Avg episode reward: [(0, '-3673.750'), (1, '-4281.810')]
+[2023-09-27 04:11:56,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16261120. Throughput: 0: 806.7, 1: 806.1. Samples: 4064300. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:11:56,415][24596] Avg episode reward: [(0, '-3673.750'), (1, '-4268.480')]
+[2023-09-27 04:12:01,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16293888. Throughput: 0: 798.5, 1: 798.6. Samples: 4073577. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:12:01,414][24596] Avg episode reward: [(0, '-3696.800'), (1, '-4268.460')]
+[2023-09-27 04:12:01,786][25562] Updated weights for policy 0, policy_version 31840 (0.0018)
+[2023-09-27 04:12:01,786][25564] Updated weights for policy 1, policy_version 31840 (0.0015)
+[2023-09-27 04:12:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16326656. Throughput: 0: 799.7, 1: 800.4. Samples: 4078395. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:12:06,414][24596] Avg episode reward: [(0, '-3723.520'), (1, '-4268.490')]
+[2023-09-27 04:12:11,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 16359424. Throughput: 0: 796.4, 1: 796.5. Samples: 4087809. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:12:11,415][24596] Avg episode reward: [(0, '-3723.520'), (1, '-4268.490')]
+[2023-09-27 04:12:14,824][25562] Updated weights for policy 0, policy_version 32000 (0.0018)
+[2023-09-27 04:12:14,824][25564] Updated weights for policy 1, policy_version 32000 (0.0017)
+[2023-09-27 04:12:16,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 16392192. Throughput: 0: 797.0, 1: 796.5. Samples: 4097415. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:12:16,415][24596] Avg episode reward: [(0, '-3723.520'), (1, '-4267.260')]
+[2023-09-27 04:12:21,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16424960. Throughput: 0: 796.5, 1: 796.6. Samples: 4102154. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:12:21,414][24596] Avg episode reward: [(0, '-3751.470'), (1, '-4267.260')]
+[2023-09-27 04:12:26,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16457728. Throughput: 0: 797.4, 1: 796.6. Samples: 4111820. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:12:26,414][24596] Avg episode reward: [(0, '-3736.740'), (1, '-4267.230')]
+[2023-09-27 04:12:27,564][25564] Updated weights for policy 1, policy_version 32160 (0.0017)
+[2023-09-27 04:12:27,564][25562] Updated weights for policy 0, policy_version 32160 (0.0017)
+[2023-09-27 04:12:31,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16490496. Throughput: 0: 795.0, 1: 795.0. Samples: 4121335. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:12:31,415][24596] Avg episode reward: [(0, '-3736.740'), (1, '-4267.220')]
+[2023-09-27 04:12:36,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16523264. Throughput: 0: 795.2, 1: 795.1. Samples: 4126454. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:12:36,415][24596] Avg episode reward: [(0, '-3736.740'), (1, '-4294.250')]
+[2023-09-27 04:12:40,162][25562] Updated weights for policy 0, policy_version 32320 (0.0019)
+[2023-09-27 04:12:40,162][25564] Updated weights for policy 1, policy_version 32320 (0.0016)
+[2023-09-27 04:12:41,414][24596] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6373.1). Total num frames: 16551936. Throughput: 0: 798.0, 1: 798.3. Samples: 4136130. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:12:41,415][24596] Avg episode reward: [(0, '-3736.740'), (1, '-4294.250')]
+[2023-09-27 04:12:46,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 16580608. Throughput: 0: 796.0, 1: 795.8. Samples: 4145209. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 04:12:46,415][24596] Avg episode reward: [(0, '-3780.210'), (1, '-4294.230')]
+[2023-09-27 04:12:51,414][24596] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 16613376. Throughput: 0: 797.2, 1: 796.7. Samples: 4150124. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 04:12:51,414][24596] Avg episode reward: [(0, '-3780.210'), (1, '-4294.240')]
+[2023-09-27 04:12:53,239][25562] Updated weights for policy 0, policy_version 32480 (0.0017)
+[2023-09-27 04:12:53,240][25564] Updated weights for policy 1, policy_version 32480 (0.0017)
+[2023-09-27 04:12:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16646144. Throughput: 0: 796.4, 1: 796.4. Samples: 4159489. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 04:12:56,414][24596] Avg episode reward: [(0, '-3754.510'), (1, '-4286.670')]
+[2023-09-27 04:13:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16678912. Throughput: 0: 799.6, 1: 799.4. Samples: 4169369. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 04:13:01,414][24596] Avg episode reward: [(0, '-3754.510'), (1, '-4286.680')]
+[2023-09-27 04:13:06,122][25562] Updated weights for policy 0, policy_version 32640 (0.0015)
+[2023-09-27 04:13:06,123][25564] Updated weights for policy 1, policy_version 32640 (0.0016)
+[2023-09-27 04:13:06,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16711680. Throughput: 0: 796.4, 1: 796.4. Samples: 4173828. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:06,415][24596] Avg episode reward: [(0, '-3753.690'), (1, '-4286.680')]
+[2023-09-27 04:13:11,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16744448. Throughput: 0: 799.1, 1: 799.1. Samples: 4183738. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:11,414][24596] Avg episode reward: [(0, '-3753.690'), (1, '-4286.710')]
+[2023-09-27 04:13:11,424][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000032704_8372224.pth...
+[2023-09-27 04:13:11,424][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000032704_8372224.pth...
+[2023-09-27 04:13:11,458][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000029696_7602176.pth
+[2023-09-27 04:13:11,459][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000029696_7602176.pth
+[2023-09-27 04:13:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16777216. Throughput: 0: 797.2, 1: 797.0. Samples: 4193071. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:16,415][24596] Avg episode reward: [(0, '-3759.540'), (1, '-4286.740')]
+[2023-09-27 04:13:18,924][25562] Updated weights for policy 0, policy_version 32800 (0.0019)
+[2023-09-27 04:13:18,924][25564] Updated weights for policy 1, policy_version 32800 (0.0018)
+[2023-09-27 04:13:21,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 16801792. Throughput: 0: 797.2, 1: 795.8. Samples: 4198136. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:21,415][24596] Avg episode reward: [(0, '-3774.780'), (1, '-4299.140')]
+[2023-09-27 04:13:26,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 16834560. Throughput: 0: 793.0, 1: 792.6. Samples: 4207480. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:26,414][24596] Avg episode reward: [(0, '-3771.600'), (1, '-4299.140')]
+[2023-09-27 04:13:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 16867328. Throughput: 0: 795.9, 1: 795.7. Samples: 4216833. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:31,415][24596] Avg episode reward: [(0, '-3771.600'), (1, '-4299.160')]
+[2023-09-27 04:13:31,879][25562] Updated weights for policy 0, policy_version 32960 (0.0019)
+[2023-09-27 04:13:31,879][25564] Updated weights for policy 1, policy_version 32960 (0.0016)
+[2023-09-27 04:13:36,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 16900096. Throughput: 0: 794.3, 1: 794.0. Samples: 4221598. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:36,415][24596] Avg episode reward: [(0, '-3763.690'), (1, '-4299.160')]
+[2023-09-27 04:13:41,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6348.8, 300 sec: 6387.0). Total num frames: 16932864. Throughput: 0: 798.1, 1: 798.6. Samples: 4231340. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:41,415][24596] Avg episode reward: [(0, '-3762.020'), (1, '-4317.450')]
+[2023-09-27 04:13:44,431][25564] Updated weights for policy 1, policy_version 33120 (0.0016)
+[2023-09-27 04:13:44,432][25562] Updated weights for policy 0, policy_version 33120 (0.0017)
+[2023-09-27 04:13:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16965632. Throughput: 0: 800.4, 1: 800.4. Samples: 4241408. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:46,415][24596] Avg episode reward: [(0, '-3760.690'), (1, '-4317.440')]
+[2023-09-27 04:13:51,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 16998400. Throughput: 0: 802.2, 1: 802.6. Samples: 4246041. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:51,415][24596] Avg episode reward: [(0, '-3760.690'), (1, '-4317.440')]
+[2023-09-27 04:13:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17031168. Throughput: 0: 800.2, 1: 800.2. Samples: 4255752. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:13:56,414][24596] Avg episode reward: [(0, '-3750.110'), (1, '-4317.440')]
+[2023-09-27 04:13:57,077][25562] Updated weights for policy 0, policy_version 33280 (0.0016)
+[2023-09-27 04:13:57,077][25564] Updated weights for policy 1, policy_version 33280 (0.0018)
+[2023-09-27 04:14:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17063936. Throughput: 0: 805.0, 1: 805.3. Samples: 4265536. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:14:01,414][24596] Avg episode reward: [(0, '-3748.940'), (1, '-4336.000')]
+[2023-09-27 04:14:06,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17096704. Throughput: 0: 799.5, 1: 799.6. Samples: 4270096. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:14:06,415][24596] Avg episode reward: [(0, '-3771.310'), (1, '-4344.710')]
+[2023-09-27 04:14:09,813][25562] Updated weights for policy 0, policy_version 33440 (0.0017)
+[2023-09-27 04:14:09,813][25564] Updated weights for policy 1, policy_version 33440 (0.0017)
+[2023-09-27 04:14:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 17129472. Throughput: 0: 807.8, 1: 808.0. Samples: 4280190. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 04:14:11,415][24596] Avg episode reward: [(0, '-3771.310'), (1, '-4320.220')]
+[2023-09-27 04:14:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17162240. Throughput: 0: 805.4, 1: 805.9. Samples: 4289341. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 04:14:16,415][24596] Avg episode reward: [(0, '-3778.270'), (1, '-4308.760')]
+[2023-09-27 04:14:21,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 17186816. Throughput: 0: 808.0, 1: 808.3. Samples: 4294329. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 04:14:21,415][24596] Avg episode reward: [(0, '-3778.980'), (1, '-4250.860')]
+[2023-09-27 04:14:22,706][25562] Updated weights for policy 0, policy_version 33600 (0.0017)
+[2023-09-27 04:14:22,707][25564] Updated weights for policy 1, policy_version 33600 (0.0017)
+[2023-09-27 04:14:26,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 17219584. Throughput: 0: 806.8, 1: 807.3. Samples: 4303975. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 04:14:26,415][24596] Avg episode reward: [(0, '-3778.980'), (1, '-4250.860')]
+[2023-09-27 04:14:31,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 17252352. Throughput: 0: 801.0, 1: 801.1. Samples: 4313504. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 04:14:31,415][24596] Avg episode reward: [(0, '-3778.810'), (1, '-4234.110')]
+[2023-09-27 04:14:35,479][25562] Updated weights for policy 0, policy_version 33760 (0.0016)
+[2023-09-27 04:14:35,479][25564] Updated weights for policy 1, policy_version 33760 (0.0018)
+[2023-09-27 04:14:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 17285120. Throughput: 0: 805.7, 1: 804.0. Samples: 4318477. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:14:36,415][24596] Avg episode reward: [(0, '-3800.000'), (1, '-4204.260')]
+[2023-09-27 04:14:41,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17317888. Throughput: 0: 802.8, 1: 803.0. Samples: 4328011. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:14:41,414][24596] Avg episode reward: [(0, '-3790.750'), (1, '-4204.260')]
+[2023-09-27 04:14:46,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17350656. Throughput: 0: 801.8, 1: 801.1. Samples: 4337664. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:14:46,415][24596] Avg episode reward: [(0, '-3744.690'), (1, '-4166.450')]
+[2023-09-27 04:14:48,186][25562] Updated weights for policy 0, policy_version 33920 (0.0016)
+[2023-09-27 04:14:48,186][25564] Updated weights for policy 1, policy_version 33920 (0.0017)
+[2023-09-27 04:14:51,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17383424. Throughput: 0: 803.2, 1: 803.8. Samples: 4342411. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:14:51,415][24596] Avg episode reward: [(0, '-3721.650'), (1, '-4164.820')]
+[2023-09-27 04:14:56,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 17416192. Throughput: 0: 797.8, 1: 798.0. Samples: 4352000. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 04:14:56,415][24596] Avg episode reward: [(0, '-3706.380'), (1, '-4159.340')]
+[2023-09-27 04:15:01,015][25562] Updated weights for policy 0, policy_version 34080 (0.0016)
+[2023-09-27 04:15:01,016][25564] Updated weights for policy 1, policy_version 34080 (0.0015)
+[2023-09-27 04:15:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17448960. Throughput: 0: 804.2, 1: 803.8. Samples: 4361699. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 04:15:01,414][24596] Avg episode reward: [(0, '-3701.480'), (1, '-4159.340')]
+[2023-09-27 04:15:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17481728. Throughput: 0: 800.6, 1: 800.3. Samples: 4366372. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 04:15:06,415][24596] Avg episode reward: [(0, '-3701.480'), (1, '-4136.680')]
+[2023-09-27 04:15:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17514496. Throughput: 0: 804.5, 1: 803.5. Samples: 4376335. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 04:15:11,415][24596] Avg episode reward: [(0, '-3701.480'), (1, '-4136.660')]
+[2023-09-27 04:15:11,422][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000034208_8757248.pth...
+[2023-09-27 04:15:11,422][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000034208_8757248.pth...
+[2023-09-27 04:15:11,457][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000031200_7987200.pth
+[2023-09-27 04:15:11,459][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000031200_7987200.pth
+[2023-09-27 04:15:13,739][25564] Updated weights for policy 1, policy_version 34240 (0.0018)
+[2023-09-27 04:15:13,740][25562] Updated weights for policy 0, policy_version 34240 (0.0018)
+[2023-09-27 04:15:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17547264. Throughput: 0: 803.2, 1: 803.0. Samples: 4385780. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 04:15:16,415][24596] Avg episode reward: [(0, '-3752.960'), (1, '-4136.660')]
+[2023-09-27 04:15:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 17580032. Throughput: 0: 803.2, 1: 805.5. Samples: 4390869. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:15:21,414][24596] Avg episode reward: [(0, '-3752.960'), (1, '-4136.660')]
+[2023-09-27 04:15:26,414][24596] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6428.6). Total num frames: 17608704. Throughput: 0: 804.9, 1: 804.7. Samples: 4400443. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:15:26,415][24596] Avg episode reward: [(0, '-3755.460'), (1, '-4127.630')]
+[2023-09-27 04:15:26,426][25562] Updated weights for policy 0, policy_version 34400 (0.0017)
+[2023-09-27 04:15:26,426][25564] Updated weights for policy 1, policy_version 34400 (0.0018)
+[2023-09-27 04:15:31,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 17637376. Throughput: 0: 804.2, 1: 804.7. Samples: 4410061. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:15:31,415][24596] Avg episode reward: [(0, '-3755.500'), (1, '-4127.660')]
+[2023-09-27 04:15:36,414][24596] Fps is (10 sec: 6144.1, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17670144. Throughput: 0: 806.5, 1: 806.6.
Samples: 4415001. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:15:36,415][24596] Avg episode reward: [(0, '-3787.640'), (1, '-4127.660')] +[2023-09-27 04:15:39,204][25564] Updated weights for policy 1, policy_version 34560 (0.0019) +[2023-09-27 04:15:39,204][25562] Updated weights for policy 0, policy_version 34560 (0.0018) +[2023-09-27 04:15:41,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17702912. Throughput: 0: 803.5, 1: 803.6. Samples: 4424318. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:15:41,414][24596] Avg episode reward: [(0, '-3787.650'), (1, '-4127.700')] +[2023-09-27 04:15:46,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17735680. Throughput: 0: 802.5, 1: 802.4. Samples: 4433920. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 04:15:46,415][24596] Avg episode reward: [(0, '-3787.600'), (1, '-4118.020')] +[2023-09-27 04:15:51,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17768448. Throughput: 0: 801.4, 1: 802.0. Samples: 4438525. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 04:15:51,415][24596] Avg episode reward: [(0, '-3802.830'), (1, '-4148.410')] +[2023-09-27 04:15:52,019][25564] Updated weights for policy 1, policy_version 34720 (0.0019) +[2023-09-27 04:15:52,019][25562] Updated weights for policy 0, policy_version 34720 (0.0016) +[2023-09-27 04:15:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17801216. Throughput: 0: 799.2, 1: 799.4. Samples: 4448272. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 04:15:56,414][24596] Avg episode reward: [(0, '-3792.130'), (1, '-4148.410')] +[2023-09-27 04:16:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17833984. Throughput: 0: 807.6, 1: 808.0. Samples: 4458480. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 04:16:01,414][24596] Avg episode reward: [(0, '-3793.310'), (1, '-4148.410')] +[2023-09-27 04:16:04,623][25562] Updated weights for policy 0, policy_version 34880 (0.0017) +[2023-09-27 04:16:04,623][25564] Updated weights for policy 1, policy_version 34880 (0.0018) +[2023-09-27 04:16:06,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 17866752. Throughput: 0: 801.9, 1: 801.6. Samples: 4463023. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:16:06,415][24596] Avg episode reward: [(0, '-3793.310'), (1, '-4148.410')] +[2023-09-27 04:16:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 17899520. Throughput: 0: 804.4, 1: 804.3. Samples: 4472832. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:16:11,415][24596] Avg episode reward: [(0, '-3793.340'), (1, '-4148.360')] +[2023-09-27 04:16:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17932288. Throughput: 0: 808.1, 1: 807.1. Samples: 4482744. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:16:16,415][24596] Avg episode reward: [(0, '-3789.520'), (1, '-4148.320')] +[2023-09-27 04:16:17,171][25562] Updated weights for policy 0, policy_version 35040 (0.0014) +[2023-09-27 04:16:17,172][25564] Updated weights for policy 1, policy_version 35040 (0.0015) +[2023-09-27 04:16:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 17965056. Throughput: 0: 804.6, 1: 804.0. Samples: 4487390. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:16:21,415][24596] Avg episode reward: [(0, '-3787.980'), (1, '-4148.320')] +[2023-09-27 04:16:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6485.3, 300 sec: 6414.8). Total num frames: 17997824. Throughput: 0: 812.2, 1: 812.1. Samples: 4497408. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:16:26,415][24596] Avg episode reward: [(0, '-3787.980'), (1, '-4148.350')] +[2023-09-27 04:16:29,855][25564] Updated weights for policy 1, policy_version 35200 (0.0016) +[2023-09-27 04:16:29,856][25562] Updated weights for policy 0, policy_version 35200 (0.0016) +[2023-09-27 04:16:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 18030592. Throughput: 0: 811.1, 1: 810.3. Samples: 4506882. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:16:31,414][24596] Avg episode reward: [(0, '-3787.950'), (1, '-4169.020')] +[2023-09-27 04:16:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6414.7). Total num frames: 18063360. Throughput: 0: 813.2, 1: 811.6. Samples: 4511642. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:16:36,415][24596] Avg episode reward: [(0, '-3787.970'), (1, '-4144.490')] +[2023-09-27 04:16:41,414][24596] Fps is (10 sec: 5734.2, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 18087936. Throughput: 0: 806.6, 1: 806.5. Samples: 4520861. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:16:41,415][24596] Avg episode reward: [(0, '-3804.540'), (1, '-4144.490')] +[2023-09-27 04:16:42,805][25562] Updated weights for policy 0, policy_version 35360 (0.0018) +[2023-09-27 04:16:42,805][25564] Updated weights for policy 1, policy_version 35360 (0.0019) +[2023-09-27 04:16:46,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 18120704. Throughput: 0: 800.9, 1: 800.2. Samples: 4530532. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:16:46,415][24596] Avg episode reward: [(0, '-3804.540'), (1, '-4144.490')] +[2023-09-27 04:16:51,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18153472. Throughput: 0: 804.7, 1: 805.1. Samples: 4535464. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 04:16:51,415][24596] Avg episode reward: [(0, '-3804.540'), (1, '-4136.040')] +[2023-09-27 04:16:55,653][25562] Updated weights for policy 0, policy_version 35520 (0.0017) +[2023-09-27 04:16:55,653][25564] Updated weights for policy 1, policy_version 35520 (0.0018) +[2023-09-27 04:16:56,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18186240. Throughput: 0: 798.8, 1: 799.2. Samples: 4544738. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 04:16:56,414][24596] Avg episode reward: [(0, '-3808.800'), (1, '-4133.310')] +[2023-09-27 04:17:01,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 18219008. Throughput: 0: 799.7, 1: 800.2. Samples: 4554737. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 04:17:01,415][24596] Avg episode reward: [(0, '-3843.880'), (1, '-4133.310')] +[2023-09-27 04:17:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18251776. Throughput: 0: 799.5, 1: 799.6. Samples: 4559346. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 04:17:06,414][24596] Avg episode reward: [(0, '-3853.260'), (1, '-4133.310')] +[2023-09-27 04:17:08,244][25564] Updated weights for policy 1, policy_version 35680 (0.0018) +[2023-09-27 04:17:08,244][25562] Updated weights for policy 0, policy_version 35680 (0.0017) +[2023-09-27 04:17:11,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18284544. Throughput: 0: 796.6, 1: 796.6. Samples: 4569098. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 04:17:11,415][24596] Avg episode reward: [(0, '-3853.260'), (1, '-4120.570')] +[2023-09-27 04:17:11,424][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000035712_9142272.pth... +[2023-09-27 04:17:11,424][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000035712_9142272.pth... 
+[2023-09-27 04:17:11,454][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000032704_8372224.pth +[2023-09-27 04:17:11,462][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000032704_8372224.pth +[2023-09-27 04:17:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18317312. Throughput: 0: 800.7, 1: 802.3. Samples: 4579015. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:17:16,414][24596] Avg episode reward: [(0, '-3842.410'), (1, '-4117.830')] +[2023-09-27 04:17:21,251][25562] Updated weights for policy 0, policy_version 35840 (0.0019) +[2023-09-27 04:17:21,251][25564] Updated weights for policy 1, policy_version 35840 (0.0019) +[2023-09-27 04:17:21,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 18350080. Throughput: 0: 797.3, 1: 798.0. Samples: 4583433. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:17:21,415][24596] Avg episode reward: [(0, '-3842.350'), (1, '-4117.820')] +[2023-09-27 04:17:26,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18382848. Throughput: 0: 801.3, 1: 803.2. Samples: 4593061. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:17:26,414][24596] Avg episode reward: [(0, '-3871.800'), (1, '-4117.820')] +[2023-09-27 04:17:31,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 18407424. Throughput: 0: 796.7, 1: 798.0. Samples: 4602292. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:17:31,415][24596] Avg episode reward: [(0, '-3834.380'), (1, '-4117.810')] +[2023-09-27 04:17:34,277][25564] Updated weights for policy 1, policy_version 36000 (0.0018) +[2023-09-27 04:17:34,277][25562] Updated weights for policy 0, policy_version 36000 (0.0017) +[2023-09-27 04:17:36,414][24596] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6400.9). Total num frames: 18440192. 
Throughput: 0: 794.7, 1: 793.8. Samples: 4606944. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:17:36,415][24596] Avg episode reward: [(0, '-3834.380'), (1, '-4117.840')] +[2023-09-27 04:17:41,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18472960. Throughput: 0: 796.9, 1: 796.8. Samples: 4616452. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:17:41,415][24596] Avg episode reward: [(0, '-3827.140'), (1, '-4135.330')] +[2023-09-27 04:17:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18505728. Throughput: 0: 796.6, 1: 796.6. Samples: 4626432. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:17:46,415][24596] Avg episode reward: [(0, '-3827.140'), (1, '-4135.330')] +[2023-09-27 04:17:46,959][25564] Updated weights for policy 1, policy_version 36160 (0.0015) +[2023-09-27 04:17:46,959][25562] Updated weights for policy 0, policy_version 36160 (0.0017) +[2023-09-27 04:17:51,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18538496. Throughput: 0: 797.8, 1: 797.7. Samples: 4631142. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:17:51,414][24596] Avg episode reward: [(0, '-3859.110'), (1, '-4135.330')] +[2023-09-27 04:17:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 18571264. Throughput: 0: 796.3, 1: 796.3. Samples: 4640768. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:17:56,415][24596] Avg episode reward: [(0, '-3859.050'), (1, '-4135.330')] +[2023-09-27 04:17:59,733][25564] Updated weights for policy 1, policy_version 36320 (0.0017) +[2023-09-27 04:17:59,733][25562] Updated weights for policy 0, policy_version 36320 (0.0015) +[2023-09-27 04:18:01,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18604032. Throughput: 0: 794.9, 1: 794.7. Samples: 4650552. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 04:18:01,415][24596] Avg episode reward: [(0, '-3805.600'), (1, '-4135.330')] +[2023-09-27 04:18:06,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 18636800. Throughput: 0: 796.5, 1: 796.6. Samples: 4655121. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 04:18:06,415][24596] Avg episode reward: [(0, '-3806.990'), (1, '-4135.330')] +[2023-09-27 04:18:11,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 18669568. Throughput: 0: 798.3, 1: 796.7. Samples: 4664835. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 04:18:11,416][24596] Avg episode reward: [(0, '-3774.530'), (1, '-4119.130')] +[2023-09-27 04:18:12,514][25562] Updated weights for policy 0, policy_version 36480 (0.0016) +[2023-09-27 04:18:12,514][25564] Updated weights for policy 1, policy_version 36480 (0.0017) +[2023-09-27 04:18:16,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 18702336. Throughput: 0: 802.2, 1: 801.6. Samples: 4674467. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 04:18:16,415][24596] Avg episode reward: [(0, '-3797.710'), (1, '-4119.100')] +[2023-09-27 04:18:21,414][24596] Fps is (10 sec: 5734.6, 60 sec: 6280.5, 300 sec: 6414.7). Total num frames: 18726912. Throughput: 0: 804.5, 1: 804.6. Samples: 4679356. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 04:18:21,415][24596] Avg episode reward: [(0, '-3797.710'), (1, '-4139.670')] +[2023-09-27 04:18:25,447][25562] Updated weights for policy 0, policy_version 36640 (0.0017) +[2023-09-27 04:18:25,448][25564] Updated weights for policy 1, policy_version 36640 (0.0018) +[2023-09-27 04:18:26,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 18759680. Throughput: 0: 801.0, 1: 801.8. Samples: 4688578. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:18:26,414][24596] Avg episode reward: [(0, '-3792.950'), (1, '-4139.670')] +[2023-09-27 04:18:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18792448. Throughput: 0: 796.5, 1: 796.6. Samples: 4698121. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:18:31,414][24596] Avg episode reward: [(0, '-3778.640'), (1, '-4139.670')] +[2023-09-27 04:18:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18825216. Throughput: 0: 798.8, 1: 798.8. Samples: 4703037. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:18:36,414][24596] Avg episode reward: [(0, '-3778.640'), (1, '-4139.700')] +[2023-09-27 04:18:38,167][25564] Updated weights for policy 1, policy_version 36800 (0.0017) +[2023-09-27 04:18:38,168][25562] Updated weights for policy 0, policy_version 36800 (0.0017) +[2023-09-27 04:18:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18857984. Throughput: 0: 798.0, 1: 798.5. Samples: 4712611. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:18:41,414][24596] Avg episode reward: [(0, '-3778.640'), (1, '-4139.700')] +[2023-09-27 04:18:46,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18890752. Throughput: 0: 801.2, 1: 800.9. Samples: 4722646. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:18:46,415][24596] Avg episode reward: [(0, '-3765.290'), (1, '-4139.700')] +[2023-09-27 04:18:50,781][25562] Updated weights for policy 0, policy_version 36960 (0.0015) +[2023-09-27 04:18:50,782][25564] Updated weights for policy 1, policy_version 36960 (0.0018) +[2023-09-27 04:18:51,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 18923520. Throughput: 0: 801.0, 1: 801.1. Samples: 4727216. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:18:51,415][24596] Avg episode reward: [(0, '-3765.290'), (1, '-4139.700')] +[2023-09-27 04:18:56,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18956288. Throughput: 0: 802.2, 1: 802.0. Samples: 4737024. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:18:56,415][24596] Avg episode reward: [(0, '-3765.280'), (1, '-4145.770')] +[2023-09-27 04:19:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18989056. Throughput: 0: 805.7, 1: 805.4. Samples: 4746966. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:01,414][24596] Avg episode reward: [(0, '-3765.280'), (1, '-4125.350')] +[2023-09-27 04:19:03,520][25564] Updated weights for policy 1, policy_version 37120 (0.0016) +[2023-09-27 04:19:03,520][25562] Updated weights for policy 0, policy_version 37120 (0.0016) +[2023-09-27 04:19:06,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19021824. Throughput: 0: 800.2, 1: 800.2. Samples: 4751376. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:06,414][24596] Avg episode reward: [(0, '-3768.160'), (1, '-4125.350')] +[2023-09-27 04:19:11,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 19054592. Throughput: 0: 810.2, 1: 807.7. Samples: 4761380. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:11,415][24596] Avg episode reward: [(0, '-3768.230'), (1, '-4125.350')] +[2023-09-27 04:19:11,427][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000037216_9527296.pth... +[2023-09-27 04:19:11,427][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000037216_9527296.pth... 
+[2023-09-27 04:19:11,463][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000034208_8757248.pth +[2023-09-27 04:19:11,465][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000034208_8757248.pth +[2023-09-27 04:19:16,266][25562] Updated weights for policy 0, policy_version 37280 (0.0017) +[2023-09-27 04:19:16,267][25564] Updated weights for policy 1, policy_version 37280 (0.0017) +[2023-09-27 04:19:16,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 19087360. Throughput: 0: 807.6, 1: 807.7. Samples: 4770810. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:16,415][24596] Avg episode reward: [(0, '-3775.070'), (1, '-4091.490')] +[2023-09-27 04:19:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 19120128. Throughput: 0: 810.1, 1: 809.6. Samples: 4775924. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:21,415][24596] Avg episode reward: [(0, '-3775.070'), (1, '-4101.190')] +[2023-09-27 04:19:26,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 19144704. Throughput: 0: 808.3, 1: 808.1. Samples: 4785349. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:26,415][24596] Avg episode reward: [(0, '-3775.070'), (1, '-4101.190')] +[2023-09-27 04:19:29,076][25562] Updated weights for policy 0, policy_version 37440 (0.0016) +[2023-09-27 04:19:29,077][25564] Updated weights for policy 1, policy_version 37440 (0.0017) +[2023-09-27 04:19:31,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19177472. Throughput: 0: 800.8, 1: 801.2. Samples: 4794738. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:31,415][24596] Avg episode reward: [(0, '-3790.790'), (1, '-4101.190')] +[2023-09-27 04:19:36,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19210240. 
Throughput: 0: 806.0, 1: 806.0. Samples: 4799754. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:36,414][24596] Avg episode reward: [(0, '-3790.790'), (1, '-4093.920')] +[2023-09-27 04:19:41,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19243008. Throughput: 0: 799.4, 1: 799.7. Samples: 4808983. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:41,414][24596] Avg episode reward: [(0, '-3842.380'), (1, '-4093.880')] +[2023-09-27 04:19:41,887][25562] Updated weights for policy 0, policy_version 37600 (0.0015) +[2023-09-27 04:19:41,888][25564] Updated weights for policy 1, policy_version 37600 (0.0018) +[2023-09-27 04:19:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19275776. Throughput: 0: 799.8, 1: 799.7. Samples: 4818944. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:46,414][24596] Avg episode reward: [(0, '-3821.670'), (1, '-4037.460')] +[2023-09-27 04:19:51,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19308544. Throughput: 0: 805.2, 1: 805.3. Samples: 4823851. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:51,415][24596] Avg episode reward: [(0, '-3853.910'), (1, '-4037.460')] +[2023-09-27 04:19:54,817][25564] Updated weights for policy 1, policy_version 37760 (0.0016) +[2023-09-27 04:19:54,817][25562] Updated weights for policy 0, policy_version 37760 (0.0018) +[2023-09-27 04:19:56,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 19341312. Throughput: 0: 797.7, 1: 798.2. Samples: 4833197. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:19:56,415][24596] Avg episode reward: [(0, '-3853.910'), (1, '-4017.460')] +[2023-09-27 04:20:01,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19374080. Throughput: 0: 794.6, 1: 795.4. Samples: 4842361. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:20:01,414][24596] Avg episode reward: [(0, '-3817.020'), (1, '-4017.460')] +[2023-09-27 04:20:06,414][24596] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 19398656. Throughput: 0: 794.4, 1: 794.6. Samples: 4847430. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 04:20:06,415][24596] Avg episode reward: [(0, '-3817.020'), (1, '-4017.460')] +[2023-09-27 04:20:07,823][25562] Updated weights for policy 0, policy_version 37920 (0.0015) +[2023-09-27 04:20:07,823][25564] Updated weights for policy 1, policy_version 37920 (0.0018) +[2023-09-27 04:20:11,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6387.0). Total num frames: 19431424. Throughput: 0: 793.0, 1: 793.3. Samples: 4856733. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 04:20:11,414][24596] Avg episode reward: [(0, '-3817.020'), (1, '-4017.480')] +[2023-09-27 04:20:16,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6387.0). Total num frames: 19464192. Throughput: 0: 792.8, 1: 792.0. Samples: 4866056. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 04:20:16,414][24596] Avg episode reward: [(0, '-3817.080'), (1, '-3995.960')] +[2023-09-27 04:20:20,705][25564] Updated weights for policy 1, policy_version 38080 (0.0018) +[2023-09-27 04:20:20,706][25562] Updated weights for policy 0, policy_version 38080 (0.0017) +[2023-09-27 04:20:21,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6400.9). Total num frames: 19496960. Throughput: 0: 790.8, 1: 790.8. Samples: 4870929. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 04:20:21,415][24596] Avg episode reward: [(0, '-3823.190'), (1, '-3924.420')] +[2023-09-27 04:20:26,414][24596] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19529728. Throughput: 0: 793.5, 1: 793.2. Samples: 4880384. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 04:20:26,415][24596] Avg episode reward: [(0, '-3831.810'), (1, '-3924.420')] +[2023-09-27 04:20:31,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 19562496. Throughput: 0: 792.6, 1: 792.9. Samples: 4890289. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:20:31,415][24596] Avg episode reward: [(0, '-3799.180'), (1, '-3898.060')] +[2023-09-27 04:20:33,494][25562] Updated weights for policy 0, policy_version 38240 (0.0017) +[2023-09-27 04:20:33,494][25564] Updated weights for policy 1, policy_version 38240 (0.0018) +[2023-09-27 04:20:36,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19595264. Throughput: 0: 788.0, 1: 788.1. Samples: 4894772. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:20:36,414][24596] Avg episode reward: [(0, '-3799.180'), (1, '-3898.060')] +[2023-09-27 04:20:41,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19628032. Throughput: 0: 797.0, 1: 797.7. Samples: 4904956. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:20:41,414][24596] Avg episode reward: [(0, '-3799.150'), (1, '-3898.090')] +[2023-09-27 04:20:46,027][25564] Updated weights for policy 1, policy_version 38400 (0.0018) +[2023-09-27 04:20:46,027][25562] Updated weights for policy 0, policy_version 38400 (0.0016) +[2023-09-27 04:20:46,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19660800. Throughput: 0: 803.0, 1: 802.9. Samples: 4914625. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:20:46,415][24596] Avg episode reward: [(0, '-3791.090'), (1, '-3866.250')] +[2023-09-27 04:20:51,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 19693568. Throughput: 0: 798.9, 1: 799.2. Samples: 4919345. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:20:51,415][24596] Avg episode reward: [(0, '-3793.020'), (1, '-3866.200')] +[2023-09-27 04:20:56,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19726336. Throughput: 0: 807.7, 1: 808.7. Samples: 4929471. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:20:56,415][24596] Avg episode reward: [(0, '-3793.670'), (1, '-3866.200')] +[2023-09-27 04:20:58,599][25562] Updated weights for policy 0, policy_version 38560 (0.0016) +[2023-09-27 04:20:58,601][25564] Updated weights for policy 1, policy_version 38560 (0.0016) +[2023-09-27 04:21:01,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19759104. Throughput: 0: 808.4, 1: 809.0. Samples: 4938841. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:21:01,415][24596] Avg episode reward: [(0, '-3793.670'), (1, '-3866.200')] +[2023-09-27 04:21:06,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 19791872. Throughput: 0: 810.6, 1: 810.3. Samples: 4943872. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:21:06,415][24596] Avg episode reward: [(0, '-3793.670'), (1, '-3873.770')] +[2023-09-27 04:21:11,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 19816448. Throughput: 0: 809.1, 1: 809.3. Samples: 4953210. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 04:21:11,414][24596] Avg episode reward: [(0, '-3804.710'), (1, '-3873.730')] +[2023-09-27 04:21:11,451][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000038720_9912320.pth... +[2023-09-27 04:21:11,471][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000038720_9912320.pth... 
+[2023-09-27 04:21:11,473][25562] Updated weights for policy 0, policy_version 38720 (0.0017)
+[2023-09-27 04:21:11,473][25564] Updated weights for policy 1, policy_version 38720 (0.0018)
+[2023-09-27 04:21:11,476][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000035712_9142272.pth
+[2023-09-27 04:21:11,498][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000035712_9142272.pth
+[2023-09-27 04:21:16,414][24596] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 19849216. Throughput: 0: 804.4, 1: 804.5. Samples: 4962689. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 04:21:16,415][24596] Avg episode reward: [(0, '-3802.010'), (1, '-3818.220')]
+[2023-09-27 04:21:21,414][24596] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6387.0). Total num frames: 19881984. Throughput: 0: 810.2, 1: 810.6. Samples: 4967707. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 04:21:21,415][24596] Avg episode reward: [(0, '-3805.220'), (1, '-3818.220')]
+[2023-09-27 04:21:24,400][25562] Updated weights for policy 0, policy_version 38880 (0.0015)
+[2023-09-27 04:21:24,402][25564] Updated weights for policy 1, policy_version 38880 (0.0016)
+[2023-09-27 04:21:26,414][24596] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 19914752. Throughput: 0: 799.1, 1: 799.3. Samples: 4976884. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 04:21:26,414][24596] Avg episode reward: [(0, '-3804.760'), (1, '-3795.880')]
+[2023-09-27 04:21:31,414][24596] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 19947520. Throughput: 0: 803.2, 1: 802.5. Samples: 4986880. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 04:21:31,415][24596] Avg episode reward: [(0, '-3781.400'), (1, '-3795.880')]
+[2023-09-27 04:21:36,414][24596] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19980288. Throughput: 0: 801.5, 1: 801.5. Samples: 4991480. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 04:21:36,414][24596] Avg episode reward: [(0, '-3811.260'), (1, '-3771.540')]
+[2023-09-27 04:21:37,099][25564] Updated weights for policy 1, policy_version 39040 (0.0017)
+[2023-09-27 04:21:37,099][25562] Updated weights for policy 0, policy_version 39040 (0.0017)
+[2023-09-27 04:21:40,802][25608] Stopping RolloutWorker_w6...
+[2023-09-27 04:21:40,802][25606] Stopping RolloutWorker_w4...
+[2023-09-27 04:21:40,802][24596] Component RolloutWorker_w6 stopped!
+[2023-09-27 04:21:40,802][25604] Stopping RolloutWorker_w2...
+[2023-09-27 04:21:40,802][25605] Stopping RolloutWorker_w3...
+[2023-09-27 04:21:40,802][25601] Stopping RolloutWorker_w0...
+[2023-09-27 04:21:40,802][25602] Stopping RolloutWorker_w1...
+[2023-09-27 04:21:40,802][25610] Stopping RolloutWorker_w7...
+[2023-09-27 04:21:40,802][25609] Stopping RolloutWorker_w5...
+[2023-09-27 04:21:40,802][25443] Stopping Batcher_1...
+[2023-09-27 04:21:40,802][25608] Loop rollout_proc6_evt_loop terminating...
+[2023-09-27 04:21:40,802][25604] Loop rollout_proc2_evt_loop terminating...
+[2023-09-27 04:21:40,802][25601] Loop rollout_proc0_evt_loop terminating...
+[2023-09-27 04:21:40,802][24596] Component RolloutWorker_w4 stopped!
+[2023-09-27 04:21:40,802][25605] Loop rollout_proc3_evt_loop terminating...
+[2023-09-27 04:21:40,802][25606] Loop rollout_proc4_evt_loop terminating...
+[2023-09-27 04:21:40,802][25378] Stopping Batcher_0...
+[2023-09-27 04:21:40,803][24596] Component RolloutWorker_w1 stopped!
+[2023-09-27 04:21:40,803][25443] Loop batcher_evt_loop terminating...
+[2023-09-27 04:21:40,803][25602] Loop rollout_proc1_evt_loop terminating...
+[2023-09-27 04:21:40,803][25609] Loop rollout_proc5_evt_loop terminating...
+[2023-09-27 04:21:40,803][25610] Loop rollout_proc7_evt_loop terminating...
+[2023-09-27 04:21:40,803][24596] Component RolloutWorker_w2 stopped!
+[2023-09-27 04:21:40,803][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-27 04:21:40,803][24596] Component RolloutWorker_w3 stopped!
+[2023-09-27 04:21:40,803][24596] Component RolloutWorker_w5 stopped!
+[2023-09-27 04:21:40,804][24596] Component RolloutWorker_w0 stopped!
+[2023-09-27 04:21:40,804][24596] Component RolloutWorker_w7 stopped!
+[2023-09-27 04:21:40,804][24596] Component Batcher_1 stopped!
+[2023-09-27 04:21:40,804][24596] Component Batcher_0 stopped!
+[2023-09-27 04:21:40,811][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-27 04:21:40,803][25378] Loop batcher_evt_loop terminating...
+[2023-09-27 04:21:40,833][25378] Removing ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000037216_9527296.pth
+[2023-09-27 04:21:40,837][25378] Saving ./train_atari/atari_skiing/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-27 04:21:40,840][25443] Removing ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000037216_9527296.pth
+[2023-09-27 04:21:40,844][25443] Saving ./train_atari/atari_skiing/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-27 04:21:40,850][25564] Weights refcount: 2 0
+[2023-09-27 04:21:40,852][25564] Stopping InferenceWorker_p1-w0...
+[2023-09-27 04:21:40,852][25564] Loop inference_proc1-0_evt_loop terminating...
+[2023-09-27 04:21:40,852][24596] Component InferenceWorker_p1-w0 stopped!
+[2023-09-27 04:21:40,867][25562] Weights refcount: 2 0
+[2023-09-27 04:21:40,868][25562] Stopping InferenceWorker_p0-w0...
+[2023-09-27 04:21:40,868][25562] Loop inference_proc0-0_evt_loop terminating...
+[2023-09-27 04:21:40,868][24596] Component InferenceWorker_p0-w0 stopped!
+[2023-09-27 04:21:40,873][25378] Stopping LearnerWorker_p0...
+[2023-09-27 04:21:40,874][25378] Loop learner_proc0_evt_loop terminating...
+[2023-09-27 04:21:40,875][24596] Component LearnerWorker_p0 stopped!
+[2023-09-27 04:21:40,882][25443] Stopping LearnerWorker_p1...
+[2023-09-27 04:21:40,882][24596] Component LearnerWorker_p1 stopped!
+[2023-09-27 04:21:40,882][25443] Loop learner_proc1_evt_loop terminating...
+[2023-09-27 04:21:40,883][24596] Waiting for process learner_proc0 to stop...
+[2023-09-27 04:21:41,571][24596] Waiting for process learner_proc1 to stop...
+[2023-09-27 04:21:41,598][24596] Waiting for process inference_proc0-0 to join...
+[2023-09-27 04:21:41,598][24596] Waiting for process inference_proc1-0 to join...
+[2023-09-27 04:21:41,632][24596] Waiting for process rollout_proc0 to join...
+[2023-09-27 04:21:41,632][24596] Waiting for process rollout_proc1 to join...
+[2023-09-27 04:21:41,633][24596] Waiting for process rollout_proc2 to join...
+[2023-09-27 04:21:41,633][24596] Waiting for process rollout_proc3 to join...
+[2023-09-27 04:21:41,634][24596] Waiting for process rollout_proc4 to join...
+[2023-09-27 04:21:41,634][24596] Waiting for process rollout_proc5 to join...
+[2023-09-27 04:21:41,634][24596] Waiting for process rollout_proc6 to join...
+[2023-09-27 04:21:41,635][24596] Waiting for process rollout_proc7 to join...
+[2023-09-27 04:21:41,635][24596] Batcher 0 profile tree view:
+batching: 21.0433, releasing_batches: 1.6219
+[2023-09-27 04:21:41,636][24596] Batcher 1 profile tree view:
+batching: 20.9252, releasing_batches: 1.5947
+[2023-09-27 04:21:41,636][24596] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0052
+  wait_policy_total: 644.8092
+update_model: 37.3677
+  weight_update: 0.0018
+one_step: 0.0011
+  handle_policy_step: 2255.8454
+    deserialize: 66.7592, stack: 16.5303, obs_to_device_normalize: 550.2420, forward: 1086.3086, send_messages: 92.4589
+    prepare_outputs: 299.0538
+      to_cpu: 149.8232
+[2023-09-27 04:21:41,636][24596] InferenceWorker_p1-w0 profile tree view:
+wait_policy: 0.0051
+  wait_policy_total: 651.8228
+update_model: 37.1897
+  weight_update: 0.0018
+one_step: 0.0012
+  handle_policy_step: 2248.6314
+    deserialize: 67.1775, stack: 16.7013, obs_to_device_normalize: 547.0517, forward: 1082.2787, send_messages: 93.0109
+    prepare_outputs: 299.3125
+      to_cpu: 151.4215
+[2023-09-27 04:21:41,637][24596] Learner 0 profile tree view:
+misc: 0.0160, prepare_batch: 32.2351
+train: 457.1352
+  epoch_init: 0.1061, minibatch_init: 3.1069, losses_postprocess: 62.8193, kl_divergence: 5.4209, after_optimizer: 21.6231
+  calculate_losses: 44.9377
+    losses_init: 0.1055, forward_head: 14.3277, bptt_initial: 0.4488, bptt: 0.5094, tail: 10.2217, advantages_returns: 3.0527, losses: 12.7167
+  update: 315.0564
+    clip: 163.7191
+[2023-09-27 04:21:41,637][24596] Learner 1 profile tree view:
+misc: 0.0166, prepare_batch: 32.1195
+train: 456.3876
+  epoch_init: 0.1040, minibatch_init: 3.1308, losses_postprocess: 62.6558, kl_divergence: 5.3733, after_optimizer: 21.9097
+  calculate_losses: 44.9497
+    losses_init: 0.0999, forward_head: 14.4228, bptt_initial: 0.4378, bptt: 0.4960, tail: 10.2280, advantages_returns: 3.0842, losses: 12.6086
+  update: 314.2079
+    clip: 162.6475
+[2023-09-27 04:21:41,638][24596] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.4088, enqueue_policy_requests: 42.2236, env_step: 1112.4807, overhead: 29.3455, complete_rollouts: 1.0851
+save_policy_outputs: 54.4114
+  split_output_tensors: 18.5044
+[2023-09-27 04:21:41,638][24596] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3963, enqueue_policy_requests: 42.0563, env_step: 1085.9394, overhead: 29.7072, complete_rollouts: 1.0508
+save_policy_outputs: 53.5307
+  split_output_tensors: 18.3604
+[2023-09-27 04:21:41,639][24596] Loop Runner_EvtLoop terminating...
+[2023-09-27 04:21:41,639][24596] Runner profile tree view:
+main_loop: 3146.0846
+[2023-09-27 04:21:41,640][24596] Collected {0: 10006528, 1: 10006528}, FPS: 6361.3