| [2023-09-24 15:39:10,168][96631] Saving configuration to ./train_atari/Berzerk/config.json... |
| [2023-09-24 15:39:10,506][96631] Rollout worker 0 uses device cpu |
| [2023-09-24 15:39:10,506][96631] Rollout worker 1 uses device cpu |
| [2023-09-24 15:39:10,506][96631] Rollout worker 2 uses device cpu |
| [2023-09-24 15:39:10,507][96631] Rollout worker 3 uses device cpu |
| [2023-09-24 15:39:10,507][96631] Rollout worker 4 uses device cpu |
| [2023-09-24 15:39:10,507][96631] Rollout worker 5 uses device cpu |
| [2023-09-24 15:39:10,507][96631] Rollout worker 6 uses device cpu |
| [2023-09-24 15:39:10,508][96631] Rollout worker 7 uses device cpu |
| [2023-09-24 15:39:10,508][96631] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1 |
| [2023-09-24 15:39:10,542][96631] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:39:10,542][96631] InferenceWorker_p0-w0: min num requests: 2 |
| [2023-09-24 15:39:10,566][96631] Starting all processes... |
| [2023-09-24 15:39:10,566][96631] Starting process learner_proc0 |
| [2023-09-24 15:39:12,136][96631] Starting all processes... |
| [2023-09-24 15:39:12,139][97008] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:39:12,139][97008] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
| [2023-09-24 15:39:12,143][96631] Starting process inference_proc0-0 |
| [2023-09-24 15:39:12,144][96631] Starting process rollout_proc0 |
| [2023-09-24 15:39:12,144][96631] Starting process rollout_proc1 |
| [2023-09-24 15:39:12,144][96631] Starting process rollout_proc2 |
| [2023-09-24 15:39:12,157][97008] Num visible devices: 1 |
| [2023-09-24 15:39:12,145][96631] Starting process rollout_proc3 |
| [2023-09-24 15:39:12,149][96631] Starting process rollout_proc4 |
| [2023-09-24 15:39:12,178][97008] Starting seed is not provided |
| [2023-09-24 15:39:12,178][97008] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:39:12,178][97008] Initializing actor-critic model on device cuda:0 |
| [2023-09-24 15:39:12,178][97008] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:39:12,179][97008] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:39:12,149][96631] Starting process rollout_proc5 |
| [2023-09-24 15:39:12,150][96631] Starting process rollout_proc6 |
| [2023-09-24 15:39:12,153][96631] Starting process rollout_proc7 |
| [2023-09-24 15:39:12,192][97008] ConvEncoder: input_channels=4 |
| [2023-09-24 15:39:12,590][97008] Conv encoder output size: 512 |
| [2023-09-24 15:39:12,592][97008] Created Actor Critic model with architecture: |
| [2023-09-24 15:39:12,592][97008] ActorCriticSharedWeights( |
| (obs_normalizer): ObservationNormalizer( |
| (running_mean_std): RunningMeanStdDictInPlace( |
| (running_mean_std): ModuleDict( |
| (obs): RunningMeanStdInPlace() |
| ) |
| ) |
| ) |
| (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
| (encoder): MultiInputEncoder( |
| (encoders): ModuleDict( |
| (obs): ConvEncoder( |
| (enc): RecursiveScriptModule( |
| original_name=ConvEncoderImpl |
| (conv_head): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Conv2d) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| (2): RecursiveScriptModule(original_name=Conv2d) |
| (3): RecursiveScriptModule(original_name=ReLU) |
| (4): RecursiveScriptModule(original_name=Conv2d) |
| (5): RecursiveScriptModule(original_name=ReLU) |
| ) |
| (mlp_layers): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Linear) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| ) |
| ) |
| ) |
| ) |
| ) |
| (core): ModelCoreIdentity() |
| (decoder): MlpDecoder( |
| (mlp): Identity() |
| ) |
| (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
| (action_parameterization): ActionParameterizationDefault( |
| (distribution_linear): Linear(in_features=512, out_features=18, bias=True) |
| ) |
| ) |
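The architecture printed above can be sketched as a plain PyTorch module. Note the hedge: the log records only the layer types and the 512-d encoder output, so the conv kernel sizes, strides, and channel counts below are assumptions (the standard Nature-CNN values commonly used for 84x84 Atari frames), and the observation/returns normalizers are omitted.

```python
import torch
import torch.nn as nn

class BerzerkActorCriticSketch(nn.Module):
    """Sketch of the logged ActorCriticSharedWeights network.

    Assumed (not in the log): Nature-CNN kernels/strides/channels.
    From the log: 4-channel 84x84 input, three Conv2d+ReLU layers,
    one Linear+ReLU to a 512-d feature, critic head 512->1,
    action logits head 512->18.
    """

    def __init__(self, num_actions: int = 18):
        super().__init__()
        self.conv_head = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # 84 -> 20 -> 9 -> 7 spatially, so the flattened size is 64*7*7 = 3136
        self.mlp_layers = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU())
        self.critic_linear = nn.Linear(512, 1)
        self.distribution_linear = nn.Linear(512, num_actions)

    def forward(self, obs: torch.Tensor):
        x = self.conv_head(obs)             # (B, 64, 7, 7)
        x = self.mlp_layers(x.flatten(1))   # (B, 512), matching "Conv encoder output size: 512"
        return self.distribution_linear(x), self.critic_linear(x)

model = BerzerkActorCriticSketch()
logits, value = model(torch.zeros(1, 4, 84, 84))
print(logits.shape, value.shape)  # torch.Size([1, 18]) torch.Size([1, 1])
```

The 18-way logits head matches Berzerk's full Atari action set, and the shared 512-d feature feeding both heads is what "SharedWeights" refers to in the class name above.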
| [2023-09-24 15:39:13,178][97008] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2023-09-24 15:39:13,180][97008] No checkpoints found |
| [2023-09-24 15:39:13,180][97008] Did not load from checkpoint, starting from scratch! |
| [2023-09-24 15:39:13,181][97008] Initialized policy 0 weights for model version 0 |
| [2023-09-24 15:39:13,183][97008] LearnerWorker_p0 finished initialization! |
| [2023-09-24 15:39:13,183][97008] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:39:14,048][97100] Worker 2 uses CPU cores [8, 9, 10, 11] |
| [2023-09-24 15:39:14,071][97066] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:39:14,070][97099] Worker 1 uses CPU cores [4, 5, 6, 7] |
| [2023-09-24 15:39:14,071][97066] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
| [2023-09-24 15:39:14,086][97102] Worker 3 uses CPU cores [12, 13, 14, 15] |
| [2023-09-24 15:39:14,089][97066] Num visible devices: 1 |
| [2023-09-24 15:39:14,097][97067] Worker 0 uses CPU cores [0, 1, 2, 3] |
| [2023-09-24 15:39:14,119][97105] Worker 5 uses CPU cores [20, 21, 22, 23] |
| [2023-09-24 15:39:14,158][97104] Worker 6 uses CPU cores [24, 25, 26, 27] |
| [2023-09-24 15:39:14,170][97106] Worker 7 uses CPU cores [28, 29, 30, 31] |
| [2023-09-24 15:39:14,170][97103] Worker 4 uses CPU cores [16, 17, 18, 19] |
| [2023-09-24 15:39:14,721][97066] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:39:14,722][97066] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:39:14,733][97066] ConvEncoder: input_channels=4 |
| [2023-09-24 15:39:14,829][97066] Conv encoder output size: 512 |
| [2023-09-24 15:39:14,835][96631] Inference worker 0-0 is ready! |
| [2023-09-24 15:39:14,835][96631] All inference workers are ready! Signal rollout workers to start! |
| [2023-09-24 15:39:15,277][97099] Decorrelating experience for 0 frames... |
| [2023-09-24 15:39:15,279][97103] Decorrelating experience for 0 frames... |
| [2023-09-24 15:39:15,280][97104] Decorrelating experience for 0 frames... |
| [2023-09-24 15:39:15,285][97105] Decorrelating experience for 0 frames... |
| [2023-09-24 15:39:15,285][97100] Decorrelating experience for 0 frames... |
| [2023-09-24 15:39:15,288][97067] Decorrelating experience for 0 frames... |
| [2023-09-24 15:39:15,399][97106] Decorrelating experience for 0 frames... |
| [2023-09-24 15:39:15,407][97102] Decorrelating experience for 0 frames... |
| [2023-09-24 15:39:16,482][96631] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
| [2023-09-24 15:39:16,483][96631] Avg episode reward: [(0, '0.727')] |
| [2023-09-24 15:39:16,866][96631] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 96631], exiting... |
| [2023-09-24 15:39:16,867][96631] Runner profile tree view: |
| main_loop: 6.3011 |
| [2023-09-24 15:39:16,867][97102] Stopping RolloutWorker_w3... |
| [2023-09-24 15:39:16,867][97104] Stopping RolloutWorker_w6... |
| [2023-09-24 15:39:16,867][97099] Stopping RolloutWorker_w1... |
| [2023-09-24 15:39:16,867][96631] Collected {0: 0}, FPS: 0.0 |
| [2023-09-24 15:39:16,867][97100] Stopping RolloutWorker_w2... |
| [2023-09-24 15:39:16,867][97105] Stopping RolloutWorker_w5... |
| [2023-09-24 15:39:16,867][97106] Stopping RolloutWorker_w7... |
| [2023-09-24 15:39:16,867][97103] Stopping RolloutWorker_w4... |
| [2023-09-24 15:39:16,867][97008] Stopping Batcher_0... |
| [2023-09-24 15:39:16,867][97102] Loop rollout_proc3_evt_loop terminating... |
| [2023-09-24 15:39:16,867][97067] Stopping RolloutWorker_w0... |
| [2023-09-24 15:39:16,867][97099] Loop rollout_proc1_evt_loop terminating... |
| [2023-09-24 15:39:16,867][97104] Loop rollout_proc6_evt_loop terminating... |
| [2023-09-24 15:39:16,868][97105] Loop rollout_proc5_evt_loop terminating... |
| [2023-09-24 15:39:16,868][97100] Loop rollout_proc2_evt_loop terminating... |
| [2023-09-24 15:39:16,868][97106] Loop rollout_proc7_evt_loop terminating... |
| [2023-09-24 15:39:16,868][97008] Loop batcher_evt_loop terminating... |
| [2023-09-24 15:39:16,868][97103] Loop rollout_proc4_evt_loop terminating... |
| [2023-09-24 15:39:16,868][97067] Loop rollout_proc0_evt_loop terminating... |
| [2023-09-24 15:39:16,925][97066] Weights refcount: 2 0 |
| [2023-09-24 15:39:16,926][97066] Stopping InferenceWorker_p0-w0... |
| [2023-09-24 15:39:16,927][97066] Loop inference_proc0-0_evt_loop terminating... |
| [2023-09-24 15:39:18,629][97008] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000000016_4096.pth... |
| [2023-09-24 15:39:18,654][97008] Stopping LearnerWorker_p0... |
| [2023-09-24 15:39:18,654][97008] Loop learner_proc0_evt_loop terminating... |
| [2023-09-24 15:41:39,221][104016] Saving configuration to ./train_atari/Berzerk/config.json... |
| [2023-09-24 15:41:39,493][104016] Rollout worker 0 uses device cpu |
| [2023-09-24 15:41:39,493][104016] Rollout worker 1 uses device cpu |
| [2023-09-24 15:41:39,494][104016] Rollout worker 2 uses device cpu |
| [2023-09-24 15:41:39,495][104016] Rollout worker 3 uses device cpu |
| [2023-09-24 15:41:39,495][104016] Rollout worker 4 uses device cpu |
| [2023-09-24 15:41:39,496][104016] Rollout worker 5 uses device cpu |
| [2023-09-24 15:41:39,496][104016] Rollout worker 6 uses device cpu |
| [2023-09-24 15:41:39,496][104016] Rollout worker 7 uses device cpu |
| [2023-09-24 15:41:39,497][104016] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1 |
| [2023-09-24 15:41:39,540][104016] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:41:39,540][104016] InferenceWorker_p0-w0: min num requests: 1 |
| [2023-09-24 15:41:39,543][104016] Using GPUs [1] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:41:39,544][104016] InferenceWorker_p1-w0: min num requests: 1 |
| [2023-09-24 15:41:39,565][104016] Starting all processes... |
| [2023-09-24 15:41:39,565][104016] Starting process learner_proc0 |
| [2023-09-24 15:41:41,135][104016] Starting process learner_proc1 |
| [2023-09-24 15:41:41,139][104522] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:41:41,139][104522] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
| [2023-09-24 15:41:41,157][104522] Num visible devices: 1 |
| [2023-09-24 15:41:41,174][104522] Starting seed is not provided |
| [2023-09-24 15:41:41,174][104522] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:41:41,174][104522] Initializing actor-critic model on device cuda:0 |
| [2023-09-24 15:41:41,174][104522] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:41:41,175][104522] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:41:41,186][104522] ConvEncoder: input_channels=4 |
| [2023-09-24 15:41:41,343][104522] Conv encoder output size: 512 |
| [2023-09-24 15:41:41,344][104522] Created Actor Critic model with architecture: |
| [2023-09-24 15:41:41,345][104522] ActorCriticSharedWeights( |
| (obs_normalizer): ObservationNormalizer( |
| (running_mean_std): RunningMeanStdDictInPlace( |
| (running_mean_std): ModuleDict( |
| (obs): RunningMeanStdInPlace() |
| ) |
| ) |
| ) |
| (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
| (encoder): MultiInputEncoder( |
| (encoders): ModuleDict( |
| (obs): ConvEncoder( |
| (enc): RecursiveScriptModule( |
| original_name=ConvEncoderImpl |
| (conv_head): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Conv2d) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| (2): RecursiveScriptModule(original_name=Conv2d) |
| (3): RecursiveScriptModule(original_name=ReLU) |
| (4): RecursiveScriptModule(original_name=Conv2d) |
| (5): RecursiveScriptModule(original_name=ReLU) |
| ) |
| (mlp_layers): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Linear) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| ) |
| ) |
| ) |
| ) |
| ) |
| (core): ModelCoreIdentity() |
| (decoder): MlpDecoder( |
| (mlp): Identity() |
| ) |
| (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
| (action_parameterization): ActionParameterizationDefault( |
| (distribution_linear): Linear(in_features=512, out_features=18, bias=True) |
| ) |
| ) |
| [2023-09-24 15:41:41,889][104522] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2023-09-24 15:41:41,889][104522] Loading state from checkpoint ./train_atari/Berzerk/checkpoint_p0/checkpoint_000000016_4096.pth... |
| [2023-09-24 15:41:41,907][104522] Loading model from checkpoint |
| [2023-09-24 15:41:41,910][104522] Loaded experiment state at self.train_step=16, self.env_steps=4096 |
| [2023-09-24 15:41:41,910][104522] Initialized policy 0 weights for model version 16 |
| [2023-09-24 15:41:41,913][104522] LearnerWorker_p0 finished initialization! |
| [2023-09-24 15:41:41,913][104522] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
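The restore above pulls `train_step=16` and `env_steps=4096` out of the `.pth` file saved at the end of the first run. A checkpoint like this is a plain `torch.load`-able dictionary, so it can be inspected offline; the helper below is a sketch (the exact key set beyond the two counters shown in the log is an assumption).

```python
import torch

def inspect_checkpoint(path: str) -> dict:
    # map_location="cpu" lets the file load on a machine without the original GPU
    ckpt = torch.load(path, map_location="cpu")
    for key, value in ckpt.items():
        # tensors print their shape; scalar counters (e.g. train_step, env_steps) print directly
        desc = tuple(value.shape) if torch.is_tensor(value) else value
        print(f"{key}: {desc}")
    return ckpt
```

Pointing it at `./train_atari/Berzerk/checkpoint_p0/checkpoint_000000016_4096.pth` would list whatever the learner serialized; the filename itself encodes the same two counters (model version 16, 4096 env steps).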
| [2023-09-24 15:41:42,719][104016] Starting all processes... |
| [2023-09-24 15:41:42,723][104739] Using GPUs [1] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:41:42,723][104739] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1 |
| [2023-09-24 15:41:42,724][104016] Starting process inference_proc0-0 |
| [2023-09-24 15:41:42,724][104016] Starting process inference_proc1-0 |
| [2023-09-24 15:41:42,724][104016] Starting process rollout_proc0 |
| [2023-09-24 15:41:42,725][104016] Starting process rollout_proc1 |
| [2023-09-24 15:41:42,725][104016] Starting process rollout_proc2 |
| [2023-09-24 15:41:42,741][104739] Num visible devices: 1 |
| [2023-09-24 15:41:42,725][104016] Starting process rollout_proc3 |
| [2023-09-24 15:41:42,725][104016] Starting process rollout_proc4 |
| [2023-09-24 15:41:42,728][104016] Starting process rollout_proc5 |
| [2023-09-24 15:41:42,730][104016] Starting process rollout_proc6 |
| [2023-09-24 15:41:42,764][104739] Starting seed is not provided |
| [2023-09-24 15:41:42,764][104739] Using GPUs [0] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:41:42,764][104739] Initializing actor-critic model on device cuda:0 |
| [2023-09-24 15:41:42,765][104739] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:41:42,732][104016] Starting process rollout_proc7 |
| [2023-09-24 15:41:42,765][104739] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:41:42,777][104739] ConvEncoder: input_channels=4 |
| [2023-09-24 15:41:43,113][104739] Conv encoder output size: 512 |
| [2023-09-24 15:41:43,115][104739] Created Actor Critic model with architecture: |
| [2023-09-24 15:41:43,115][104739] ActorCriticSharedWeights( |
| (obs_normalizer): ObservationNormalizer( |
| (running_mean_std): RunningMeanStdDictInPlace( |
| (running_mean_std): ModuleDict( |
| (obs): RunningMeanStdInPlace() |
| ) |
| ) |
| ) |
| (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
| (encoder): MultiInputEncoder( |
| (encoders): ModuleDict( |
| (obs): ConvEncoder( |
| (enc): RecursiveScriptModule( |
| original_name=ConvEncoderImpl |
| (conv_head): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Conv2d) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| (2): RecursiveScriptModule(original_name=Conv2d) |
| (3): RecursiveScriptModule(original_name=ReLU) |
| (4): RecursiveScriptModule(original_name=Conv2d) |
| (5): RecursiveScriptModule(original_name=ReLU) |
| ) |
| (mlp_layers): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Linear) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| ) |
| ) |
| ) |
| ) |
| ) |
| (core): ModelCoreIdentity() |
| (decoder): MlpDecoder( |
| (mlp): Identity() |
| ) |
| (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
| (action_parameterization): ActionParameterizationDefault( |
| (distribution_linear): Linear(in_features=512, out_features=18, bias=True) |
| ) |
| ) |
| [2023-09-24 15:41:43,680][104739] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2023-09-24 15:41:43,681][104739] No checkpoints found |
| [2023-09-24 15:41:43,681][104739] Did not load from checkpoint, starting from scratch! |
| [2023-09-24 15:41:43,682][104739] Initialized policy 1 weights for model version 0 |
| [2023-09-24 15:41:43,683][104739] LearnerWorker_p1 finished initialization! |
| [2023-09-24 15:41:43,683][104739] Using GPUs [0] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:41:44,654][104913] Worker 4 uses CPU cores [16, 17, 18, 19] |
| [2023-09-24 15:41:44,670][104910] Worker 2 uses CPU cores [8, 9, 10, 11] |
| [2023-09-24 15:41:44,672][104915] Worker 5 uses CPU cores [20, 21, 22, 23] |
| [2023-09-24 15:41:44,678][104916] Worker 7 uses CPU cores [28, 29, 30, 31] |
| [2023-09-24 15:41:44,678][104914] Worker 6 uses CPU cores [24, 25, 26, 27] |
| [2023-09-24 15:41:44,710][104876] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:41:44,710][104876] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
| [2023-09-24 15:41:44,729][104876] Num visible devices: 1 |
| [2023-09-24 15:41:44,741][104877] Worker 0 uses CPU cores [0, 1, 2, 3] |
| [2023-09-24 15:41:44,775][104887] Worker 1 uses CPU cores [4, 5, 6, 7] |
| [2023-09-24 15:41:44,836][104911] Worker 3 uses CPU cores [12, 13, 14, 15] |
| [2023-09-24 15:41:44,921][104875] Using GPUs [1] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:41:44,921][104875] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1 |
| [2023-09-24 15:41:44,939][104875] Num visible devices: 1 |
| [2023-09-24 15:41:45,315][104876] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:41:45,316][104876] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:41:45,327][104876] ConvEncoder: input_channels=4 |
| [2023-09-24 15:41:45,423][104876] Conv encoder output size: 512 |
| [2023-09-24 15:41:45,428][104016] Inference worker 0-0 is ready! |
| [2023-09-24 15:41:45,480][104875] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:41:45,481][104875] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:41:45,492][104875] ConvEncoder: input_channels=4 |
| [2023-09-24 15:41:45,566][104016] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4096. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
| [2023-09-24 15:41:45,588][104875] Conv encoder output size: 512 |
| [2023-09-24 15:41:45,594][104016] Inference worker 1-0 is ready! |
| [2023-09-24 15:41:45,595][104016] All inference workers are ready! Signal rollout workers to start! |
| [2023-09-24 15:41:46,035][104915] Decorrelating experience for 0 frames... |
| [2023-09-24 15:41:46,036][104887] Decorrelating experience for 0 frames... |
| [2023-09-24 15:41:46,037][104916] Decorrelating experience for 0 frames... |
| [2023-09-24 15:41:46,038][104910] Decorrelating experience for 0 frames... |
| [2023-09-24 15:41:46,042][104911] Decorrelating experience for 0 frames... |
| [2023-09-24 15:41:46,042][104914] Decorrelating experience for 0 frames... |
| [2023-09-24 15:41:46,090][104913] Decorrelating experience for 0 frames... |
| [2023-09-24 15:41:46,101][104877] Decorrelating experience for 0 frames... |
| [2023-09-24 15:41:50,566][104016] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 12288. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 15:41:50,567][104016] Avg episode reward: [(0, '1.000'), (1, '0.762')] |
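The FPS figure above can be reproduced from the two surrounding status lines: the run resumed at 4096 total frames (15:41:45.566) and reached 12288 frames five seconds later (15:41:50.566). Although the window is labeled "10 sec", the printed value is consistent with dividing by the time actually elapsed since the previous report, which here is only 5 s (this reading of the label is an inference from the arithmetic, not something the log states):

```python
# Frame counts taken verbatim from the two consecutive status lines
frames_then, frames_now = 4096, 12288
elapsed_seconds = 5.0  # 15:41:45.566 -> 15:41:50.566

fps = (frames_now - frames_then) / elapsed_seconds
print(fps)  # 1638.4, matching "Fps is (10 sec: 1638.4, ...)"
```

The per-policy "Throughput" numbers are reported separately (0: 204.8, 1: 204.8), since this second run trains two policies on two GPUs.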
| [2023-09-24 15:41:55,566][104016] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 36864. Throughput: 0: 393.6, 1: 409.6. Samples: 8032. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:41:55,567][104016] Avg episode reward: [(0, '1.271'), (1, '0.919')] |
| [2023-09-24 15:41:59,528][104016] Heartbeat connected on Batcher_0 |
| [2023-09-24 15:41:59,531][104016] Heartbeat connected on LearnerWorker_p0 |
| [2023-09-24 15:41:59,534][104016] Heartbeat connected on Batcher_1 |
| [2023-09-24 15:41:59,536][104016] Heartbeat connected on LearnerWorker_p1 |
| [2023-09-24 15:41:59,542][104016] Heartbeat connected on InferenceWorker_p0-w0 |
| [2023-09-24 15:41:59,546][104016] Heartbeat connected on InferenceWorker_p1-w0 |
| [2023-09-24 15:41:59,548][104016] Heartbeat connected on RolloutWorker_w0 |
| [2023-09-24 15:41:59,549][104016] Heartbeat connected on RolloutWorker_w1 |
| [2023-09-24 15:41:59,553][104016] Heartbeat connected on RolloutWorker_w2 |
| [2023-09-24 15:41:59,554][104016] Heartbeat connected on RolloutWorker_w3 |
| [2023-09-24 15:41:59,557][104016] Heartbeat connected on RolloutWorker_w4 |
| [2023-09-24 15:41:59,561][104016] Heartbeat connected on RolloutWorker_w5 |
| [2023-09-24 15:41:59,562][104016] Heartbeat connected on RolloutWorker_w6 |
| [2023-09-24 15:41:59,564][104016] Heartbeat connected on RolloutWorker_w7 |
| [2023-09-24 15:42:00,566][104016] Fps is (10 sec: 5734.5, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 69632. Throughput: 0: 578.7, 1: 423.6. Samples: 15034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:00,566][104016] Avg episode reward: [(0, '1.310'), (1, '0.940')] |
| [2023-09-24 15:42:02,517][104875] Updated weights for policy 1, policy_version 160 (0.0016) |
| [2023-09-24 15:42:02,517][104876] Updated weights for policy 0, policy_version 176 (0.0018) |
| [2023-09-24 15:42:05,566][104016] Fps is (10 sec: 6553.6, 60 sec: 4915.2, 300 sec: 4915.2). Total num frames: 102400. Throughput: 0: 558.9, 1: 563.2. Samples: 22442. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 15:42:05,567][104016] Avg episode reward: [(0, '1.430'), (1, '0.870')] |
| [2023-09-24 15:42:10,566][104016] Fps is (10 sec: 6553.5, 60 sec: 5242.9, 300 sec: 5242.9). Total num frames: 135168. Throughput: 0: 636.0, 1: 643.9. Samples: 31998. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:10,567][104016] Avg episode reward: [(0, '1.590'), (1, '0.940')] |
| [2023-09-24 15:42:15,255][104875] Updated weights for policy 1, policy_version 320 (0.0015) |
| [2023-09-24 15:42:15,255][104876] Updated weights for policy 0, policy_version 336 (0.0016) |
| [2023-09-24 15:42:15,566][104016] Fps is (10 sec: 6553.7, 60 sec: 5461.4, 300 sec: 5461.4). Total num frames: 167936. Throughput: 0: 688.3, 1: 614.4. Samples: 39081. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:15,566][104016] Avg episode reward: [(0, '1.610'), (1, '1.040')] |
| [2023-09-24 15:42:20,566][104016] Fps is (10 sec: 6553.6, 60 sec: 5617.4, 300 sec: 5617.4). Total num frames: 200704. Throughput: 0: 660.9, 1: 665.6. Samples: 46428. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:20,567][104016] Avg episode reward: [(0, '1.580'), (1, '1.160')] |
| [2023-09-24 15:42:20,576][104522] Saving new best policy, reward=1.580! |
| [2023-09-24 15:42:25,566][104016] Fps is (10 sec: 6553.6, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 233472. Throughput: 0: 695.1, 1: 699.4. Samples: 55782. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:25,566][104016] Avg episode reward: [(0, '1.530'), (1, '1.480')] |
| [2023-09-24 15:42:25,567][104739] Saving new best policy, reward=1.480! |
| [2023-09-24 15:42:28,053][104875] Updated weights for policy 1, policy_version 480 (0.0016) |
| [2023-09-24 15:42:28,053][104876] Updated weights for policy 0, policy_version 496 (0.0016) |
| [2023-09-24 15:42:30,566][104016] Fps is (10 sec: 6144.0, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 262144. Throughput: 0: 728.2, 1: 678.9. Samples: 63319. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 15:42:30,567][104016] Avg episode reward: [(0, '1.610'), (1, '1.590')] |
| [2023-09-24 15:42:30,568][104522] Saving new best policy, reward=1.610! |
| [2023-09-24 15:42:30,613][104739] Saving new best policy, reward=1.590! |
| [2023-09-24 15:42:35,566][104016] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 290816. Throughput: 0: 754.8, 1: 759.1. Samples: 70176. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:35,567][104016] Avg episode reward: [(0, '1.750'), (1, '1.640')] |
| [2023-09-24 15:42:35,577][104739] Saving new best policy, reward=1.640! |
| [2023-09-24 15:42:35,756][104522] Saving new best policy, reward=1.750! |
| [2023-09-24 15:42:40,566][104016] Fps is (10 sec: 6144.0, 60 sec: 5808.9, 300 sec: 5808.9). Total num frames: 323584. Throughput: 0: 800.0, 1: 796.5. Samples: 79873. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:40,567][104016] Avg episode reward: [(0, '2.000'), (1, '1.700')] |
| [2023-09-24 15:42:40,568][104522] Saving new best policy, reward=2.000! |
| [2023-09-24 15:42:40,568][104739] Saving new best policy, reward=1.700! |
| [2023-09-24 15:42:40,938][104875] Updated weights for policy 1, policy_version 640 (0.0018) |
| [2023-09-24 15:42:40,938][104876] Updated weights for policy 0, policy_version 656 (0.0019) |
| [2023-09-24 15:42:45,566][104016] Fps is (10 sec: 6553.6, 60 sec: 5870.9, 300 sec: 5870.9). Total num frames: 356352. Throughput: 0: 801.5, 1: 800.2. Samples: 87113. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:45,567][104016] Avg episode reward: [(0, '1.980'), (1, '1.770')] |
| [2023-09-24 15:42:45,568][104739] Saving new best policy, reward=1.770! |
| [2023-09-24 15:42:50,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5923.4). Total num frames: 389120. Throughput: 0: 798.4, 1: 797.9. Samples: 94274. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:50,567][104016] Avg episode reward: [(0, '2.250'), (1, '1.970')] |
| [2023-09-24 15:42:50,575][104522] Saving new best policy, reward=2.250! |
| [2023-09-24 15:42:50,576][104739] Saving new best policy, reward=1.970! |
| [2023-09-24 15:42:53,539][104875] Updated weights for policy 1, policy_version 800 (0.0017) |
| [2023-09-24 15:42:53,541][104876] Updated weights for policy 0, policy_version 816 (0.0018) |
| [2023-09-24 15:42:55,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 5968.5). Total num frames: 421888. Throughput: 0: 804.0, 1: 802.5. Samples: 104291. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:42:55,567][104016] Avg episode reward: [(0, '2.180'), (1, '2.080')] |
| [2023-09-24 15:42:55,568][104739] Saving new best policy, reward=2.080! |
| [2023-09-24 15:43:00,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6007.5). Total num frames: 454656. Throughput: 0: 804.3, 1: 802.3. Samples: 111378. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:00,566][104016] Avg episode reward: [(0, '2.060'), (1, '2.060')] |
| [2023-09-24 15:43:05,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6041.6). Total num frames: 487424. Throughput: 0: 805.7, 1: 802.1. Samples: 118780. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 15:43:05,567][104016] Avg episode reward: [(0, '2.130'), (1, '2.020')] |
| [2023-09-24 15:43:06,208][104876] Updated weights for policy 0, policy_version 976 (0.0018) |
| [2023-09-24 15:43:06,208][104875] Updated weights for policy 1, policy_version 960 (0.0017) |
| [2023-09-24 15:43:10,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6071.7). Total num frames: 520192. Throughput: 0: 808.5, 1: 808.5. Samples: 128546. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:10,567][104016] Avg episode reward: [(0, '2.210'), (1, '2.040')] |
| [2023-09-24 15:43:15,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6098.5). Total num frames: 552960. Throughput: 0: 806.1, 1: 802.8. Samples: 135718. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:15,567][104016] Avg episode reward: [(0, '2.410'), (1, '2.190')] |
| [2023-09-24 15:43:15,568][104522] Saving new best policy, reward=2.410! |
| [2023-09-24 15:43:15,568][104739] Saving new best policy, reward=2.190! |
| [2023-09-24 15:43:18,839][104875] Updated weights for policy 1, policy_version 1120 (0.0018) |
| [2023-09-24 15:43:18,839][104876] Updated weights for policy 0, policy_version 1136 (0.0017) |
| [2023-09-24 15:43:20,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6122.4). Total num frames: 585728. Throughput: 0: 811.6, 1: 811.0. Samples: 143193. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:20,567][104016] Avg episode reward: [(0, '2.430'), (1, '2.390')] |
| [2023-09-24 15:43:20,577][104522] Saving new best policy, reward=2.430! |
| [2023-09-24 15:43:20,578][104739] Saving new best policy, reward=2.390! |
| [2023-09-24 15:43:25,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6144.0). Total num frames: 618496. Throughput: 0: 807.6, 1: 811.7. Samples: 152740. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:25,566][104016] Avg episode reward: [(0, '2.280'), (1, '2.530')] |
| [2023-09-24 15:43:25,567][104739] Saving new best policy, reward=2.530! |
| [2023-09-24 15:43:30,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6485.3, 300 sec: 6163.5). Total num frames: 651264. Throughput: 0: 808.3, 1: 810.8. Samples: 159969. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:30,567][104016] Avg episode reward: [(0, '2.220'), (1, '2.810')] |
| [2023-09-24 15:43:30,568][104739] Saving new best policy, reward=2.810! |
| [2023-09-24 15:43:31,472][104875] Updated weights for policy 1, policy_version 1280 (0.0012) |
| [2023-09-24 15:43:31,472][104876] Updated weights for policy 0, policy_version 1296 (0.0018) |
| [2023-09-24 15:43:35,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6181.2). Total num frames: 684032. Throughput: 0: 811.1, 1: 814.7. Samples: 167434. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:35,567][104016] Avg episode reward: [(0, '2.320'), (1, '2.840')] |
| [2023-09-24 15:43:35,580][104522] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000001344_344064.pth... |
| [2023-09-24 15:43:35,580][104739] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000001328_339968.pth... |
| [2023-09-24 15:43:35,616][104739] Saving new best policy, reward=2.840! |
| [2023-09-24 15:43:40,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6197.4). Total num frames: 716800. Throughput: 0: 807.8, 1: 809.3. Samples: 177063. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:40,566][104016] Avg episode reward: [(0, '2.600'), (1, '2.760')] |
| [2023-09-24 15:43:40,567][104522] Saving new best policy, reward=2.600! |
| [2023-09-24 15:43:44,099][104876] Updated weights for policy 0, policy_version 1456 (0.0019) |
| [2023-09-24 15:43:44,100][104875] Updated weights for policy 1, policy_version 1440 (0.0017) |
| [2023-09-24 15:43:45,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6212.3). Total num frames: 749568. Throughput: 0: 808.7, 1: 812.4. Samples: 184327. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:45,566][104016] Avg episode reward: [(0, '2.630'), (1, '2.820')] |
| [2023-09-24 15:43:45,567][104522] Saving new best policy, reward=2.630! |
| [2023-09-24 15:43:50,566][104016] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6160.4). Total num frames: 774144. Throughput: 0: 806.1, 1: 810.5. Samples: 191525. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:50,567][104016] Avg episode reward: [(0, '2.570'), (1, '2.630')] |
| [2023-09-24 15:43:55,566][104016] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6175.5). Total num frames: 806912. Throughput: 0: 804.1, 1: 803.6. Samples: 200894. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:43:55,567][104016] Avg episode reward: [(0, '2.790'), (1, '2.710')] |
| [2023-09-24 15:43:55,664][104522] Saving new best policy, reward=2.790! |
| [2023-09-24 15:43:56,967][104876] Updated weights for policy 0, policy_version 1616 (0.0018) |
| [2023-09-24 15:43:56,967][104875] Updated weights for policy 1, policy_version 1600 (0.0017) |
| [2023-09-24 15:44:00,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6189.5). Total num frames: 839680. Throughput: 0: 807.2, 1: 808.1. Samples: 208406. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:44:00,566][104016] Avg episode reward: [(0, '2.770'), (1, '2.930')] |
| [2023-09-24 15:44:00,567][104739] Saving new best policy, reward=2.930! |
| [2023-09-24 15:44:05,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6202.5). Total num frames: 872448. Throughput: 0: 802.7, 1: 802.9. Samples: 215442. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 15:44:05,567][104016] Avg episode reward: [(0, '2.940'), (1, '3.110')] |
| [2023-09-24 15:44:05,576][104739] Saving new best policy, reward=3.110! |
| [2023-09-24 15:44:05,576][104522] Saving new best policy, reward=2.940! |
| [2023-09-24 15:44:09,613][104875] Updated weights for policy 1, policy_version 1760 (0.0018) |
| [2023-09-24 15:44:09,620][104876] Updated weights for policy 0, policy_version 1776 (0.0018) |
| [2023-09-24 15:44:10,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6214.6). Total num frames: 905216. Throughput: 0: 808.1, 1: 804.0. Samples: 225283. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:44:10,567][104016] Avg episode reward: [(0, '3.060'), (1, '3.030')] |
| [2023-09-24 15:44:10,568][104522] Saving new best policy, reward=3.060! |
| [2023-09-24 15:44:15,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6225.9). Total num frames: 937984. Throughput: 0: 807.6, 1: 804.1. Samples: 232499. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:44:15,567][104016] Avg episode reward: [(0, '2.980'), (1, '2.870')] |
| [2023-09-24 15:44:20,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6236.5). Total num frames: 970752. Throughput: 0: 804.6, 1: 802.0. Samples: 239731. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 15:44:20,566][104016] Avg episode reward: [(0, '3.110'), (1, '2.740')] |
| [2023-09-24 15:44:20,577][104522] Saving new best policy, reward=3.110! |
| [2023-09-24 15:44:22,267][104875] Updated weights for policy 1, policy_version 1920 (0.0016) |
| [2023-09-24 15:44:22,268][104876] Updated weights for policy 0, policy_version 1936 (0.0017) |
| [2023-09-24 15:44:25,566][104016] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6246.4). Total num frames: 1003520. Throughput: 0: 807.6, 1: 806.6. Samples: 249704. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 15:44:25,566][104016] Avg episode reward: [(0, '3.060'), (1, '2.780')] |
| [2023-09-24 15:44:30,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6255.7). Total num frames: 1036288. Throughput: 0: 807.0, 1: 803.7. Samples: 256809. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 15:44:30,566][104016] Avg episode reward: [(0, '3.060'), (1, '2.930')] |
| [2023-09-24 15:44:35,026][104876] Updated weights for policy 0, policy_version 2096 (0.0014) |
| [2023-09-24 15:44:35,028][104875] Updated weights for policy 1, policy_version 2080 (0.0018) |
| [2023-09-24 15:44:35,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6264.5). Total num frames: 1069056. Throughput: 0: 809.1, 1: 805.2. Samples: 264167. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:44:35,566][104016] Avg episode reward: [(0, '3.250'), (1, '3.100')] |
| [2023-09-24 15:44:35,575][104522] Saving new best policy, reward=3.250! |
| [2023-09-24 15:44:40,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6272.7). Total num frames: 1101824. Throughput: 0: 810.1, 1: 810.1. Samples: 273803. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:44:40,567][104016] Avg episode reward: [(0, '3.410'), (1, '3.040')] |
| [2023-09-24 15:44:40,568][104522] Saving new best policy, reward=3.410! |
| [2023-09-24 15:44:45,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6280.5). Total num frames: 1134592. Throughput: 0: 805.4, 1: 805.0. Samples: 280878. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 15:44:45,567][104016] Avg episode reward: [(0, '3.310'), (1, '2.970')] |
| [2023-09-24 15:44:47,626][104875] Updated weights for policy 1, policy_version 2240 (0.0017) |
| [2023-09-24 15:44:47,626][104876] Updated weights for policy 0, policy_version 2256 (0.0019) |
| [2023-09-24 15:44:50,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6287.9). Total num frames: 1167360. Throughput: 0: 811.6, 1: 811.4. Samples: 288474. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 15:44:50,566][104016] Avg episode reward: [(0, '3.330'), (1, '3.060')] |
| [2023-09-24 15:44:55,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6294.9). Total num frames: 1200128. Throughput: 0: 806.2, 1: 810.0. Samples: 298014. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 15:44:55,567][104016] Avg episode reward: [(0, '3.310'), (1, '3.000')] |
| [2023-09-24 15:45:00,272][104875] Updated weights for policy 1, policy_version 2400 (0.0017) |
| [2023-09-24 15:45:00,272][104876] Updated weights for policy 0, policy_version 2416 (0.0017) |
| [2023-09-24 15:45:00,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6301.5). Total num frames: 1232896. Throughput: 0: 806.0, 1: 811.5. Samples: 305288. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:45:00,567][104016] Avg episode reward: [(0, '3.340'), (1, '3.140')] |
| [2023-09-24 15:45:00,568][104739] Saving new best policy, reward=3.140! |
| [2023-09-24 15:45:05,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6307.8). Total num frames: 1265664. Throughput: 0: 806.9, 1: 808.8. Samples: 312438. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:45:05,567][104016] Avg episode reward: [(0, '3.460'), (1, '3.220')] |
| [2023-09-24 15:45:05,578][104522] Saving new best policy, reward=3.460! |
| [2023-09-24 15:45:05,578][104739] Saving new best policy, reward=3.220! |
| [2023-09-24 15:45:10,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6313.8). Total num frames: 1298432. Throughput: 0: 803.4, 1: 803.8. Samples: 322028. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) |
| [2023-09-24 15:45:10,567][104016] Avg episode reward: [(0, '3.510'), (1, '3.410')] |
| [2023-09-24 15:45:10,568][104522] Saving new best policy, reward=3.510! |
| [2023-09-24 15:45:10,568][104739] Saving new best policy, reward=3.410! |
| [2023-09-24 15:45:13,126][104876] Updated weights for policy 0, policy_version 2576 (0.0017) |
| [2023-09-24 15:45:13,126][104875] Updated weights for policy 1, policy_version 2560 (0.0018) |
| [2023-09-24 15:45:15,566][104016] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6280.5). Total num frames: 1323008. Throughput: 0: 807.4, 1: 806.3. Samples: 329425. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:45:15,567][104016] Avg episode reward: [(0, '3.580'), (1, '3.370')] |
| [2023-09-24 15:45:15,665][104522] Saving new best policy, reward=3.580! |
| [2023-09-24 15:45:20,566][104016] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6286.9). Total num frames: 1355776. Throughput: 0: 802.8, 1: 805.9. Samples: 336557. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 15:45:20,567][104016] Avg episode reward: [(0, '3.590'), (1, '3.420')] |
| [2023-09-24 15:45:20,689][104739] Saving new best policy, reward=3.420! |
| [2023-09-24 15:45:20,730][104522] Saving new best policy, reward=3.590! |
| [2023-09-24 15:45:25,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6293.0). Total num frames: 1388544. Throughput: 0: 805.2, 1: 803.2. Samples: 346178. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:45:25,566][104016] Avg episode reward: [(0, '3.430'), (1, '3.660')] |
| [2023-09-24 15:45:25,737][104739] Saving new best policy, reward=3.660! |
| [2023-09-24 15:45:25,758][104875] Updated weights for policy 1, policy_version 2720 (0.0018) |
| [2023-09-24 15:45:25,759][104876] Updated weights for policy 0, policy_version 2736 (0.0017) |
| [2023-09-24 15:45:30,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6298.7). Total num frames: 1421312. Throughput: 0: 806.8, 1: 807.7. Samples: 353534. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:45:30,567][104016] Avg episode reward: [(0, '3.310'), (1, '3.550')] |
| [2023-09-24 15:45:35,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6304.3). Total num frames: 1454080. Throughput: 0: 801.6, 1: 798.0. Samples: 360452. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:45:35,566][104016] Avg episode reward: [(0, '3.330'), (1, '3.740')] |
| [2023-09-24 15:45:35,575][104739] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000002832_724992.pth... |
| [2023-09-24 15:45:35,576][104522] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000002848_729088.pth... |
| [2023-09-24 15:45:35,611][104522] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000000016_4096.pth |
| [2023-09-24 15:45:35,616][104739] Saving new best policy, reward=3.740! |
| [2023-09-24 15:45:38,843][104876] Updated weights for policy 0, policy_version 2896 (0.0015) |
| [2023-09-24 15:45:38,844][104875] Updated weights for policy 1, policy_version 2880 (0.0018) |
| [2023-09-24 15:45:40,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6309.6). Total num frames: 1486848. Throughput: 0: 799.7, 1: 799.2. Samples: 369965. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:45:40,566][104016] Avg episode reward: [(0, '3.320'), (1, '3.850')] |
| [2023-09-24 15:45:40,567][104739] Saving new best policy, reward=3.850! |
| [2023-09-24 15:45:45,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6314.7). Total num frames: 1519616. Throughput: 0: 796.7, 1: 796.4. Samples: 376978. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:45:45,566][104016] Avg episode reward: [(0, '3.540'), (1, '4.030')] |
| [2023-09-24 15:45:45,567][104739] Saving new best policy, reward=4.030! |
| [2023-09-24 15:45:50,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6319.5). Total num frames: 1552384. Throughput: 0: 799.9, 1: 799.7. Samples: 384420. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:45:50,566][104016] Avg episode reward: [(0, '3.440'), (1, '3.870')] |
| [2023-09-24 15:45:51,530][104875] Updated weights for policy 1, policy_version 3040 (0.0017) |
| [2023-09-24 15:45:51,531][104876] Updated weights for policy 0, policy_version 3056 (0.0018) |
| [2023-09-24 15:45:55,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6324.2). Total num frames: 1585152. Throughput: 0: 801.6, 1: 801.3. Samples: 394158. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 15:45:55,566][104016] Avg episode reward: [(0, '3.550'), (1, '3.920')] |
| [2023-09-24 15:46:00,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6328.7). Total num frames: 1617920. Throughput: 0: 798.8, 1: 803.0. Samples: 401504. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:46:00,567][104016] Avg episode reward: [(0, '3.480'), (1, '3.890')] |
| [2023-09-24 15:46:04,108][104875] Updated weights for policy 1, policy_version 3200 (0.0019) |
| [2023-09-24 15:46:04,108][104876] Updated weights for policy 0, policy_version 3216 (0.0015) |
| [2023-09-24 15:46:05,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6333.0). Total num frames: 1650688. Throughput: 0: 803.4, 1: 803.9. Samples: 408887. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 15:46:05,567][104016] Avg episode reward: [(0, '3.370'), (1, '4.030')] |
| [2023-09-24 15:46:10,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6337.2). Total num frames: 1683456. Throughput: 0: 802.5, 1: 805.0. Samples: 418515. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:46:10,566][104016] Avg episode reward: [(0, '3.380'), (1, '3.830')] |
| [2023-09-24 15:46:15,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6341.2). Total num frames: 1716224. Throughput: 0: 802.2, 1: 806.1. Samples: 425906. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) |
| [2023-09-24 15:46:15,567][104016] Avg episode reward: [(0, '3.350'), (1, '3.970')] |
| [2023-09-24 15:46:16,724][104876] Updated weights for policy 0, policy_version 3376 (0.0019) |
| [2023-09-24 15:46:16,724][104875] Updated weights for policy 1, policy_version 3360 (0.0017) |
| [2023-09-24 15:46:20,569][104016] Fps is (10 sec: 6551.5, 60 sec: 6553.3, 300 sec: 6345.0). Total num frames: 1748992. Throughput: 0: 805.0, 1: 809.3. Samples: 433098. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 15:46:20,571][104016] Avg episode reward: [(0, '3.290'), (1, '3.660')] |
| [2023-09-24 15:46:25,566][104016] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6319.5). Total num frames: 1773568. Throughput: 0: 807.8, 1: 808.1. Samples: 442682. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 15:46:25,567][104016] Avg episode reward: [(0, '3.170'), (1, '3.320')] |
| [2023-09-24 15:46:29,546][104876] Updated weights for policy 0, policy_version 3536 (0.0017) |
| [2023-09-24 15:46:29,547][104875] Updated weights for policy 1, policy_version 3520 (0.0017) |
| [2023-09-24 15:46:30,566][104016] Fps is (10 sec: 5736.2, 60 sec: 6417.1, 300 sec: 6323.7). Total num frames: 1806336. Throughput: 0: 812.1, 1: 809.3. Samples: 449939. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 15:46:30,566][104016] Avg episode reward: [(0, '3.080'), (1, '3.380')] |
| [2023-09-24 15:46:35,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6327.6). Total num frames: 1839104. Throughput: 0: 806.8, 1: 806.4. Samples: 457013. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:46:35,566][104016] Avg episode reward: [(0, '2.850'), (1, '3.500')] |
| [2023-09-24 15:46:40,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 1871872. Throughput: 0: 810.5, 1: 807.0. Samples: 466948. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 15:46:40,566][104016] Avg episode reward: [(0, '2.930'), (1, '3.600')] |
| [2023-09-24 15:46:42,135][104875] Updated weights for policy 1, policy_version 3680 (0.0016) |
| [2023-09-24 15:46:42,135][104876] Updated weights for policy 0, policy_version 3696 (0.0016) |
| [2023-09-24 15:46:45,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 1904640. Throughput: 0: 808.1, 1: 806.6. Samples: 474165. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 15:46:45,567][104016] Avg episode reward: [(0, '3.000'), (1, '3.840')] |
| [2023-09-24 15:46:50,566][104016] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 1937408. Throughput: 0: 806.4, 1: 802.3. Samples: 481281. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:46:50,567][104016] Avg episode reward: [(0, '3.030'), (1, '3.920')] |
| [2023-09-24 15:46:54,949][104875] Updated weights for policy 1, policy_version 3840 (0.0017) |
| [2023-09-24 15:46:54,950][104876] Updated weights for policy 0, policy_version 3856 (0.0017) |
| [2023-09-24 15:46:55,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 1970176. Throughput: 0: 807.2, 1: 806.6. Samples: 491138. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:46:55,567][104016] Avg episode reward: [(0, '3.160'), (1, '3.820')] |
| [2023-09-24 15:47:00,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2002944. Throughput: 0: 805.8, 1: 801.1. Samples: 498219. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:47:00,567][104016] Avg episode reward: [(0, '3.150'), (1, '3.610')] |
| [2023-09-24 15:47:05,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2035712. Throughput: 0: 808.7, 1: 806.3. Samples: 505768. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 15:47:05,567][104016] Avg episode reward: [(0, '3.130'), (1, '3.280')] |
| [2023-09-24 15:47:07,507][104876] Updated weights for policy 0, policy_version 4016 (0.0015) |
| [2023-09-24 15:47:07,507][104875] Updated weights for policy 1, policy_version 4000 (0.0013) |
| [2023-09-24 15:47:10,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2068480. Throughput: 0: 805.9, 1: 808.0. Samples: 515304. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 15:47:10,566][104016] Avg episode reward: [(0, '3.250'), (1, '3.260')] |
| [2023-09-24 15:47:15,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2101248. Throughput: 0: 802.0, 1: 805.6. Samples: 522281. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:47:15,567][104016] Avg episode reward: [(0, '3.170'), (1, '3.320')] |
| [2023-09-24 15:47:20,397][104876] Updated weights for policy 0, policy_version 4176 (0.0017) |
| [2023-09-24 15:47:20,397][104875] Updated weights for policy 1, policy_version 4160 (0.0017) |
| [2023-09-24 15:47:20,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.4, 300 sec: 6442.5). Total num frames: 2134016. Throughput: 0: 808.1, 1: 808.0. Samples: 529735. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 15:47:20,566][104016] Avg episode reward: [(0, '3.130'), (1, '3.310')] |
| [2023-09-24 15:47:25,566][104016] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6428.6). Total num frames: 2158592. Throughput: 0: 797.4, 1: 801.1. Samples: 538878. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 15:47:25,567][104016] Avg episode reward: [(0, '3.260'), (1, '3.240')] |
| [2023-09-24 15:47:30,566][104016] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2191360. Throughput: 0: 805.2, 1: 802.2. Samples: 546500. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:47:30,567][104016] Avg episode reward: [(0, '3.110'), (1, '3.350')] |
| [2023-09-24 15:47:33,225][104876] Updated weights for policy 0, policy_version 4336 (0.0017) |
| [2023-09-24 15:47:33,225][104875] Updated weights for policy 1, policy_version 4320 (0.0018) |
| [2023-09-24 15:47:35,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2224128. Throughput: 0: 800.8, 1: 804.8. Samples: 553535. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:47:35,567][104016] Avg episode reward: [(0, '3.240'), (1, '3.550')] |
| [2023-09-24 15:47:35,756][104739] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000004352_1114112.pth... |
| [2023-09-24 15:47:35,777][104522] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000004368_1118208.pth... |
| [2023-09-24 15:47:35,783][104739] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000001328_339968.pth |
| [2023-09-24 15:47:35,804][104522] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000001344_344064.pth |
| [2023-09-24 15:47:40,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 2256896. Throughput: 0: 802.4, 1: 799.1. Samples: 563204. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:47:40,567][104016] Avg episode reward: [(0, '3.140'), (1, '3.430')] |
| [2023-09-24 15:47:45,566][104016] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2289664. Throughput: 0: 803.7, 1: 802.7. Samples: 570505. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:47:45,566][104016] Avg episode reward: [(0, '3.100'), (1, '3.510')] |
| [2023-09-24 15:47:45,929][104876] Updated weights for policy 0, policy_version 4496 (0.0016) |
| [2023-09-24 15:47:45,930][104875] Updated weights for policy 1, policy_version 4480 (0.0017) |
| [2023-09-24 15:47:50,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2322432. Throughput: 0: 798.5, 1: 800.1. Samples: 577706. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) |
| [2023-09-24 15:47:50,567][104016] Avg episode reward: [(0, '3.270'), (1, '3.560')] |
| [2023-09-24 15:47:55,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2355200. Throughput: 0: 806.5, 1: 802.4. Samples: 587704. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 15:47:55,567][104016] Avg episode reward: [(0, '3.100'), (1, '3.590')] |
| [2023-09-24 15:47:58,557][104875] Updated weights for policy 1, policy_version 4640 (0.0017) |
| [2023-09-24 15:47:58,557][104876] Updated weights for policy 0, policy_version 4656 (0.0018) |
| [2023-09-24 15:48:00,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2387968. Throughput: 0: 807.7, 1: 804.1. Samples: 594813. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 15:48:00,566][104016] Avg episode reward: [(0, '3.190'), (1, '3.510')] |
| [2023-09-24 15:48:05,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2420736. Throughput: 0: 805.9, 1: 802.4. Samples: 602108. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:48:05,567][104016] Avg episode reward: [(0, '3.350'), (1, '3.530')] |
| [2023-09-24 15:48:10,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 2453504. Throughput: 0: 811.5, 1: 811.7. Samples: 611925. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:48:10,567][104016] Avg episode reward: [(0, '3.330'), (1, '3.110')] |
| [2023-09-24 15:48:11,210][104875] Updated weights for policy 1, policy_version 4800 (0.0015) |
| [2023-09-24 15:48:11,210][104876] Updated weights for policy 0, policy_version 4816 (0.0017) |
| [2023-09-24 15:48:15,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2486272. Throughput: 0: 805.0, 1: 806.5. Samples: 619014. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) |
| [2023-09-24 15:48:15,566][104016] Avg episode reward: [(0, '3.530'), (1, '3.150')] |
| [2023-09-24 15:48:20,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2519040. Throughput: 0: 808.7, 1: 808.9. Samples: 626327. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:48:20,566][104016] Avg episode reward: [(0, '3.480'), (1, '3.250')] |
| [2023-09-24 15:48:24,062][104875] Updated weights for policy 1, policy_version 4960 (0.0016) |
| [2023-09-24 15:48:24,063][104876] Updated weights for policy 0, policy_version 4976 (0.0018) |
| [2023-09-24 15:48:25,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 2551808. Throughput: 0: 804.5, 1: 808.4. Samples: 635783. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) |
| [2023-09-24 15:48:25,567][104016] Avg episode reward: [(0, '3.240'), (1, '3.330')] |
| [2023-09-24 15:48:30,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 2584576. Throughput: 0: 802.8, 1: 809.1. Samples: 643039. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 15:48:30,567][104016] Avg episode reward: [(0, '3.340'), (1, '3.350')] |
| [2023-09-24 15:48:35,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 2617344. Throughput: 0: 806.4, 1: 806.4. Samples: 650280. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:48:35,566][104016] Avg episode reward: [(0, '3.170'), (1, '3.520')] |
| [2023-09-24 15:48:36,706][104875] Updated weights for policy 1, policy_version 5120 (0.0019) |
| [2023-09-24 15:48:36,706][104876] Updated weights for policy 0, policy_version 5136 (0.0018) |
| [2023-09-24 15:48:40,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 2650112. Throughput: 0: 802.3, 1: 804.5. Samples: 660011. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 15:48:40,566][104016] Avg episode reward: [(0, '3.260'), (1, '3.410')] |
| [2023-09-24 15:48:45,566][104016] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6456.4). Total num frames: 2678784. Throughput: 0: 806.2, 1: 807.6. Samples: 667438. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 15:48:45,567][104016] Avg episode reward: [(0, '3.330'), (1, '3.380')] |
| [2023-09-24 15:48:49,470][104875] Updated weights for policy 1, policy_version 5280 (0.0017) |
| [2023-09-24 15:48:49,471][104876] Updated weights for policy 0, policy_version 5296 (0.0018) |
| [2023-09-24 15:48:50,566][104016] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2707456. Throughput: 0: 801.1, 1: 804.6. Samples: 674365. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:48:50,566][104016] Avg episode reward: [(0, '3.510'), (1, '3.370')] |
| [2023-09-24 15:48:55,566][104016] Fps is (10 sec: 6144.1, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2740224. Throughput: 0: 803.2, 1: 799.3. Samples: 684037. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:48:55,566][104016] Avg episode reward: [(0, '3.700'), (1, '3.340')] |
| [2023-09-24 15:48:55,567][104522] Saving new best policy, reward=3.700! |
| [2023-09-24 15:49:00,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2772992. Throughput: 0: 802.7, 1: 803.9. Samples: 691313. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:49:00,566][104016] Avg episode reward: [(0, '3.430'), (1, '3.330')] |
| [2023-09-24 15:49:02,359][104876] Updated weights for policy 0, policy_version 5456 (0.0016) |
| [2023-09-24 15:49:02,359][104875] Updated weights for policy 1, policy_version 5440 (0.0018) |
| [2023-09-24 15:49:05,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2805760. Throughput: 0: 802.4, 1: 798.4. Samples: 698367. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:49:05,566][104016] Avg episode reward: [(0, '3.410'), (1, '3.410')] |
| [2023-09-24 15:49:10,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2838528. Throughput: 0: 804.8, 1: 804.8. Samples: 708214. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:49:10,566][104016] Avg episode reward: [(0, '3.160'), (1, '3.520')] |
| [2023-09-24 15:49:15,042][104875] Updated weights for policy 1, policy_version 5600 (0.0017) |
| [2023-09-24 15:49:15,043][104876] Updated weights for policy 0, policy_version 5616 (0.0018) |
| [2023-09-24 15:49:15,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2871296. Throughput: 0: 805.0, 1: 797.6. Samples: 715155. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) |
| [2023-09-24 15:49:15,567][104016] Avg episode reward: [(0, '3.260'), (1, '3.640')] |
| [2023-09-24 15:49:20,566][104016] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2904064. Throughput: 0: 805.4, 1: 805.6. Samples: 722776. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 15:49:20,567][104016] Avg episode reward: [(0, '3.210'), (1, '3.850')] |
| [2023-09-24 15:49:25,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2936832. Throughput: 0: 798.7, 1: 799.4. Samples: 731925. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:49:25,566][104016] Avg episode reward: [(0, '3.240'), (1, '3.910')] |
| [2023-09-24 15:49:27,871][104875] Updated weights for policy 1, policy_version 5760 (0.0018) |
| [2023-09-24 15:49:27,871][104876] Updated weights for policy 0, policy_version 5776 (0.0017) |
| [2023-09-24 15:49:30,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2969600. Throughput: 0: 796.4, 1: 800.6. Samples: 739304. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:49:30,567][104016] Avg episode reward: [(0, '3.150'), (1, '3.890')] |
| [2023-09-24 15:49:35,566][104016] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 2994176. Throughput: 0: 799.4, 1: 798.6. Samples: 746275. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:49:35,567][104016] Avg episode reward: [(0, '3.160'), (1, '4.000')] |
| [2023-09-24 15:49:35,610][104739] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000005856_1499136.pth... |
| [2023-09-24 15:49:35,636][104739] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000002832_724992.pth |
| [2023-09-24 15:49:35,637][104522] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000005872_1503232.pth... |
| [2023-09-24 15:49:35,675][104522] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000002848_729088.pth |
| [2023-09-24 15:49:40,566][104016] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 3026944. Throughput: 0: 796.4, 1: 797.9. Samples: 755782. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:49:40,567][104016] Avg episode reward: [(0, '3.210'), (1, '4.040')] |
| [2023-09-24 15:49:40,568][104739] Saving new best policy, reward=4.040! |
| [2023-09-24 15:49:40,836][104875] Updated weights for policy 1, policy_version 5920 (0.0017) |
| [2023-09-24 15:49:40,836][104876] Updated weights for policy 0, policy_version 5936 (0.0016) |
| [2023-09-24 15:49:45,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6414.8). Total num frames: 3059712. Throughput: 0: 799.0, 1: 795.6. Samples: 763067. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:49:45,567][104016] Avg episode reward: [(0, '3.260'), (1, '4.120')] |
| [2023-09-24 15:49:45,568][104739] Saving new best policy, reward=4.120! |
| [2023-09-24 15:49:50,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 3092480. Throughput: 0: 796.5, 1: 798.9. Samples: 770161. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:49:50,567][104016] Avg episode reward: [(0, '3.320'), (1, '4.130')] |
| [2023-09-24 15:49:50,579][104739] Saving new best policy, reward=4.130! |
| [2023-09-24 15:49:53,759][104876] Updated weights for policy 0, policy_version 6096 (0.0016) |
| [2023-09-24 15:49:53,759][104875] Updated weights for policy 1, policy_version 6080 (0.0019) |
| [2023-09-24 15:49:55,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 3125248. Throughput: 0: 794.9, 1: 794.7. Samples: 779748. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:49:55,567][104016] Avg episode reward: [(0, '3.270'), (1, '3.890')] |
| [2023-09-24 15:50:00,566][104016] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 3158016. Throughput: 0: 795.5, 1: 797.8. Samples: 786852. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:50:00,567][104016] Avg episode reward: [(0, '3.350'), (1, '3.830')] |
| [2023-09-24 15:50:05,566][104016] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 3190784. Throughput: 0: 795.3, 1: 795.5. Samples: 794359. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 15:50:05,566][104016] Avg episode reward: [(0, '3.320'), (1, '3.820')] |
| [2023-09-24 15:50:06,350][104875] Updated weights for policy 1, policy_version 6240 (0.0018) |
| [2023-09-24 15:50:06,350][104876] Updated weights for policy 0, policy_version 6256 (0.0016) |
| [2023-09-24 15:50:10,566][104016] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 3223552. Throughput: 0: 800.6, 1: 800.1. Samples: 803954. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:50:10,567][104016] Avg episode reward: [(0, '3.510'), (1, '3.860')] |
| [2023-09-24 15:50:15,566][104016] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6428.6). Total num frames: 3252224. Throughput: 0: 796.4, 1: 796.3. Samples: 810979. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:50:15,567][104016] Avg episode reward: [(0, '3.620'), (1, '3.730')] |
| [2023-09-24 15:50:18,748][104016] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 104016], exiting... |
| [2023-09-24 15:50:18,749][104914] Stopping RolloutWorker_w6... |
| [2023-09-24 15:50:18,749][104916] Stopping RolloutWorker_w7... |
| [2023-09-24 15:50:18,749][104911] Stopping RolloutWorker_w3... |
| [2023-09-24 15:50:18,749][104914] Loop rollout_proc6_evt_loop terminating... |
| [2023-09-24 15:50:18,749][104916] Loop rollout_proc7_evt_loop terminating... |
| [2023-09-24 15:50:18,749][104911] Loop rollout_proc3_evt_loop terminating... |
| [2023-09-24 15:50:18,749][104910] Stopping RolloutWorker_w2... |
| [2023-09-24 15:50:18,749][104887] Stopping RolloutWorker_w1... |
| [2023-09-24 15:50:18,749][104913] Stopping RolloutWorker_w4... |
| [2023-09-24 15:50:18,749][104739] Stopping Batcher_1... |
| [2023-09-24 15:50:18,750][104910] Loop rollout_proc2_evt_loop terminating... |
| [2023-09-24 15:50:18,749][104522] Stopping Batcher_0... |
| [2023-09-24 15:50:18,749][104016] Runner profile tree view: |
| main_loop: 519.1842 |
| [2023-09-24 15:50:18,750][104887] Loop rollout_proc1_evt_loop terminating... |
| [2023-09-24 15:50:18,750][104522] Loop batcher_evt_loop terminating... |
| [2023-09-24 15:50:18,750][104913] Loop rollout_proc4_evt_loop terminating... |
| [2023-09-24 15:50:18,750][104915] Stopping RolloutWorker_w5... |
| [2023-09-24 15:50:18,750][104016] Collected {0: 1638400, 1: 1634304}, FPS: 6295.7 |
| [2023-09-24 15:50:18,750][104739] Loop batcher_evt_loop terminating... |
| [2023-09-24 15:50:18,750][104915] Loop rollout_proc5_evt_loop terminating... |
| [2023-09-24 15:50:18,751][104877] Stopping RolloutWorker_w0... |
| [2023-09-24 15:50:18,751][104522] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000006400_1638400.pth... |
| [2023-09-24 15:50:18,751][104739] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000006384_1634304.pth... |
| [2023-09-24 15:50:18,751][104877] Loop rollout_proc0_evt_loop terminating... |
| [2023-09-24 15:50:18,764][104875] Weights refcount: 2 0 |
| [2023-09-24 15:50:18,765][104876] Weights refcount: 2 0 |
| [2023-09-24 15:50:18,765][104875] Stopping InferenceWorker_p1-w0... |
| [2023-09-24 15:50:18,766][104875] Loop inference_proc1-0_evt_loop terminating... |
| [2023-09-24 15:50:18,766][104876] Stopping InferenceWorker_p0-w0... |
| [2023-09-24 15:50:18,766][104876] Loop inference_proc0-0_evt_loop terminating... |
| [2023-09-24 15:50:18,787][104522] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000004368_1118208.pth |
| [2023-09-24 15:50:18,790][104522] Stopping LearnerWorker_p0... |
| [2023-09-24 15:50:18,791][104522] Loop learner_proc0_evt_loop terminating... |
| [2023-09-24 15:50:18,793][104739] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000004352_1114112.pth |
| [2023-09-24 15:50:18,798][104739] Stopping LearnerWorker_p1... |
| [2023-09-24 15:50:18,799][104739] Loop learner_proc1_evt_loop terminating... |
| [2023-09-24 15:50:54,238][129962] Saving configuration to ./train_atari/Berzerk/config.json... |
| [2023-09-24 15:50:54,515][129962] Rollout worker 0 uses device cpu |
| [2023-09-24 15:50:54,516][129962] Rollout worker 1 uses device cpu |
| [2023-09-24 15:50:54,516][129962] Rollout worker 2 uses device cpu |
| [2023-09-24 15:50:54,517][129962] Rollout worker 3 uses device cpu |
| [2023-09-24 15:50:54,517][129962] Rollout worker 4 uses device cpu |
| [2023-09-24 15:50:54,518][129962] Rollout worker 5 uses device cpu |
| [2023-09-24 15:50:54,518][129962] Rollout worker 6 uses device cpu |
| [2023-09-24 15:50:54,519][129962] Rollout worker 7 uses device cpu |
| [2023-09-24 15:50:54,519][129962] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1 |
| [2023-09-24 15:50:54,560][129962] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:50:54,561][129962] InferenceWorker_p0-w0: min num requests: 1 |
| [2023-09-24 15:50:54,564][129962] Using GPUs [1] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:50:54,564][129962] InferenceWorker_p1-w0: min num requests: 1 |
| [2023-09-24 15:50:54,587][129962] Starting all processes... |
| [2023-09-24 15:50:54,587][129962] Starting process learner_proc0 |
| [2023-09-24 15:50:56,145][129962] Starting process learner_proc1 |
| [2023-09-24 15:50:56,149][130682] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:50:56,149][130682] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
| [2023-09-24 15:50:56,167][130682] Num visible devices: 1 |
| [2023-09-24 15:50:56,187][130682] Starting seed is not provided |
| [2023-09-24 15:50:56,187][130682] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:50:56,187][130682] Initializing actor-critic model on device cuda:0 |
| [2023-09-24 15:50:56,188][130682] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:50:56,188][130682] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:50:56,200][130682] ConvEncoder: input_channels=4 |
| [2023-09-24 15:50:56,360][130682] Conv encoder output size: 512 |
| [2023-09-24 15:50:56,361][130682] Created Actor Critic model with architecture: |
| [2023-09-24 15:50:56,362][130682] ActorCriticSharedWeights( |
| (obs_normalizer): ObservationNormalizer( |
| (running_mean_std): RunningMeanStdDictInPlace( |
| (running_mean_std): ModuleDict( |
| (obs): RunningMeanStdInPlace() |
| ) |
| ) |
| ) |
| (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
| (encoder): MultiInputEncoder( |
| (encoders): ModuleDict( |
| (obs): ConvEncoder( |
| (enc): RecursiveScriptModule( |
| original_name=ConvEncoderImpl |
| (conv_head): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Conv2d) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| (2): RecursiveScriptModule(original_name=Conv2d) |
| (3): RecursiveScriptModule(original_name=ReLU) |
| (4): RecursiveScriptModule(original_name=Conv2d) |
| (5): RecursiveScriptModule(original_name=ReLU) |
| ) |
| (mlp_layers): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Linear) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| ) |
| ) |
| ) |
| ) |
| ) |
| (core): ModelCoreIdentity() |
| (decoder): MlpDecoder( |
| (mlp): Identity() |
| ) |
| (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
| (action_parameterization): ActionParameterizationDefault( |
| (distribution_linear): Linear(in_features=512, out_features=18, bias=True) |
| ) |
| ) |
| [2023-09-24 15:50:56,914][130682] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2023-09-24 15:50:56,914][130682] Loading state from checkpoint ./train_atari/Berzerk/checkpoint_p0/checkpoint_000006400_1638400.pth... |
| [2023-09-24 15:50:56,933][130682] Loading model from checkpoint |
| [2023-09-24 15:50:56,936][130682] Loaded experiment state at self.train_step=6400, self.env_steps=1638400 |
| [2023-09-24 15:50:56,936][130682] Initialized policy 0 weights for model version 6400 |
| [2023-09-24 15:50:56,938][130682] LearnerWorker_p0 finished initialization! |
| [2023-09-24 15:50:56,938][130682] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:50:57,746][129962] Starting all processes... |
| [2023-09-24 15:50:57,750][130727] Using GPUs [1] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:50:57,750][130727] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1 |
| [2023-09-24 15:50:57,753][129962] Starting process inference_proc0-0 |
| [2023-09-24 15:50:57,754][129962] Starting process inference_proc1-0 |
| [2023-09-24 15:50:57,754][129962] Starting process rollout_proc0 |
| [2023-09-24 15:50:57,754][129962] Starting process rollout_proc1 |
| [2023-09-24 15:50:57,768][130727] Num visible devices: 1 |
| [2023-09-24 15:50:57,755][129962] Starting process rollout_proc2 |
| [2023-09-24 15:50:57,755][129962] Starting process rollout_proc3 |
| [2023-09-24 15:50:57,793][130727] Starting seed is not provided |
| [2023-09-24 15:50:57,793][130727] Using GPUs [0] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:50:57,793][130727] Initializing actor-critic model on device cuda:0 |
| [2023-09-24 15:50:57,759][129962] Starting process rollout_proc4 |
| [2023-09-24 15:50:57,794][130727] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:50:57,794][130727] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:50:57,760][129962] Starting process rollout_proc5 |
| [2023-09-24 15:50:57,762][129962] Starting process rollout_proc6 |
| [2023-09-24 15:50:57,764][129962] Starting process rollout_proc7 |
| [2023-09-24 15:50:57,807][130727] ConvEncoder: input_channels=4 |
| [2023-09-24 15:50:58,151][130727] Conv encoder output size: 512 |
| [2023-09-24 15:50:58,154][130727] Created Actor Critic model with architecture: |
| [2023-09-24 15:50:58,154][130727] ActorCriticSharedWeights( |
| (obs_normalizer): ObservationNormalizer( |
| (running_mean_std): RunningMeanStdDictInPlace( |
| (running_mean_std): ModuleDict( |
| (obs): RunningMeanStdInPlace() |
| ) |
| ) |
| ) |
| (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
| (encoder): MultiInputEncoder( |
| (encoders): ModuleDict( |
| (obs): ConvEncoder( |
| (enc): RecursiveScriptModule( |
| original_name=ConvEncoderImpl |
| (conv_head): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Conv2d) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| (2): RecursiveScriptModule(original_name=Conv2d) |
| (3): RecursiveScriptModule(original_name=ReLU) |
| (4): RecursiveScriptModule(original_name=Conv2d) |
| (5): RecursiveScriptModule(original_name=ReLU) |
| ) |
| (mlp_layers): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Linear) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| ) |
| ) |
| ) |
| ) |
| ) |
| (core): ModelCoreIdentity() |
| (decoder): MlpDecoder( |
| (mlp): Identity() |
| ) |
| (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
| (action_parameterization): ActionParameterizationDefault( |
| (distribution_linear): Linear(in_features=512, out_features=18, bias=True) |
| ) |
| ) |
| [2023-09-24 15:50:58,747][130727] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2023-09-24 15:50:58,748][130727] Loading state from checkpoint ./train_atari/Berzerk/checkpoint_p1/checkpoint_000006384_1634304.pth... |
| [2023-09-24 15:50:58,768][130727] Loading model from checkpoint |
| [2023-09-24 15:50:58,771][130727] Loaded experiment state at self.train_step=6384, self.env_steps=1634304 |
| [2023-09-24 15:50:58,771][130727] Initialized policy 1 weights for model version 6384 |
| [2023-09-24 15:50:58,773][130727] LearnerWorker_p1 finished initialization! |
| [2023-09-24 15:50:58,774][130727] Using GPUs [0] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:50:59,679][130849] Using GPUs [1] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:50:59,679][130849] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1 |
| [2023-09-24 15:50:59,682][130887] Worker 5 uses CPU cores [20, 21, 22, 23] |
| [2023-09-24 15:50:59,685][130850] Worker 0 uses CPU cores [0, 1, 2, 3] |
| [2023-09-24 15:50:59,697][130849] Num visible devices: 1 |
| [2023-09-24 15:50:59,714][130886] Worker 4 uses CPU cores [16, 17, 18, 19] |
| [2023-09-24 15:50:59,734][130883] Worker 2 uses CPU cores [8, 9, 10, 11] |
| [2023-09-24 15:50:59,747][130885] Worker 3 uses CPU cores [12, 13, 14, 15] |
| [2023-09-24 15:50:59,749][130888] Worker 6 uses CPU cores [24, 25, 26, 27] |
| [2023-09-24 15:50:59,791][130848] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:50:59,791][130848] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
| [2023-09-24 15:50:59,794][130882] Worker 1 uses CPU cores [4, 5, 6, 7] |
| [2023-09-24 15:50:59,797][130889] Worker 7 uses CPU cores [28, 29, 30, 31] |
| [2023-09-24 15:50:59,810][130848] Num visible devices: 1 |
| [2023-09-24 15:51:00,312][130849] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:51:00,312][130849] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:51:00,323][130849] ConvEncoder: input_channels=4 |
| [2023-09-24 15:51:00,418][130849] Conv encoder output size: 512 |
| [2023-09-24 15:51:00,424][129962] Inference worker 1-0 is ready! |
| [2023-09-24 15:51:00,441][130848] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:51:00,442][130848] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:51:00,453][130848] ConvEncoder: input_channels=4 |
| [2023-09-24 15:51:00,547][130848] Conv encoder output size: 512 |
| [2023-09-24 15:51:00,552][129962] Inference worker 0-0 is ready! |
| [2023-09-24 15:51:00,553][129962] All inference workers are ready! Signal rollout workers to start! |
| [2023-09-24 15:51:00,569][129962] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 3272704. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
| [2023-09-24 15:51:00,995][130886] Decorrelating experience for 0 frames... |
| [2023-09-24 15:51:00,998][130882] Decorrelating experience for 0 frames... |
| [2023-09-24 15:51:00,999][130888] Decorrelating experience for 0 frames... |
| [2023-09-24 15:51:00,999][130883] Decorrelating experience for 0 frames... |
| [2023-09-24 15:51:00,999][130850] Decorrelating experience for 0 frames... |
| [2023-09-24 15:51:01,002][130887] Decorrelating experience for 0 frames... |
| [2023-09-24 15:51:01,058][130889] Decorrelating experience for 0 frames... |
| [2023-09-24 15:51:01,067][130885] Decorrelating experience for 0 frames... |
| [2023-09-24 15:51:05,569][129962] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 3280896. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:05,570][129962] Avg episode reward: [(0, '3.583'), (1, '3.875')] |
| [2023-09-24 15:51:10,568][129962] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 3305472. Throughput: 0: 409.6, 1: 409.6. Samples: 8192. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:10,569][129962] Avg episode reward: [(0, '3.324'), (1, '4.174')] |
| [2023-09-24 15:51:10,570][130727] Saving new best policy, reward=4.174! |
| [2023-09-24 15:51:14,549][129962] Heartbeat connected on Batcher_0 |
| [2023-09-24 15:51:14,554][129962] Heartbeat connected on Batcher_1 |
| [2023-09-24 15:51:14,568][129962] Heartbeat connected on RolloutWorker_w0 |
| [2023-09-24 15:51:14,571][129962] Heartbeat connected on RolloutWorker_w1 |
| [2023-09-24 15:51:14,573][129962] Heartbeat connected on RolloutWorker_w2 |
| [2023-09-24 15:51:14,576][129962] Heartbeat connected on RolloutWorker_w3 |
| [2023-09-24 15:51:14,578][129962] Heartbeat connected on RolloutWorker_w4 |
| [2023-09-24 15:51:14,581][129962] Heartbeat connected on RolloutWorker_w5 |
| [2023-09-24 15:51:14,583][129962] Heartbeat connected on RolloutWorker_w6 |
| [2023-09-24 15:51:14,586][129962] Heartbeat connected on RolloutWorker_w7 |
| [2023-09-24 15:51:14,590][129962] Heartbeat connected on InferenceWorker_p0-w0 |
| [2023-09-24 15:51:14,614][129962] Heartbeat connected on InferenceWorker_p1-w0 |
| [2023-09-24 15:51:14,620][129962] Heartbeat connected on LearnerWorker_p0 |
| [2023-09-24 15:51:14,621][129962] Heartbeat connected on LearnerWorker_p1 |
| [2023-09-24 15:51:15,568][129962] Fps is (10 sec: 5734.5, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 3338240. Throughput: 0: 608.0, 1: 597.8. Samples: 18087. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:15,569][129962] Avg episode reward: [(0, '3.557'), (1, '4.021')] |
| [2023-09-24 15:51:17,143][130848] Updated weights for policy 0, policy_version 6560 (0.0017) |
| [2023-09-24 15:51:17,143][130849] Updated weights for policy 1, policy_version 6544 (0.0017) |
| [2023-09-24 15:51:20,569][129962] Fps is (10 sec: 6553.5, 60 sec: 4915.2, 300 sec: 4915.2). Total num frames: 3371008. Throughput: 0: 569.4, 1: 563.4. Samples: 22657. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:20,569][129962] Avg episode reward: [(0, '3.494'), (1, '3.671')] |
| [2023-09-24 15:51:25,568][129962] Fps is (10 sec: 6553.6, 60 sec: 5242.9, 300 sec: 5242.9). Total num frames: 3403776. Throughput: 0: 653.4, 1: 648.4. Samples: 32544. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:25,569][129962] Avg episode reward: [(0, '3.540'), (1, '3.577')] |
| [2023-09-24 15:51:29,896][130848] Updated weights for policy 0, policy_version 6720 (0.0016) |
| [2023-09-24 15:51:29,898][130849] Updated weights for policy 1, policy_version 6704 (0.0018) |
| [2023-09-24 15:51:30,568][129962] Fps is (10 sec: 6553.8, 60 sec: 5461.4, 300 sec: 5461.4). Total num frames: 3436544. Throughput: 0: 704.7, 1: 700.0. Samples: 42143. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:30,569][129962] Avg episode reward: [(0, '3.670'), (1, '3.510')] |
| [2023-09-24 15:51:35,569][129962] Fps is (10 sec: 6553.5, 60 sec: 5617.4, 300 sec: 5617.4). Total num frames: 3469312. Throughput: 0: 672.9, 1: 672.9. Samples: 47104. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:35,570][129962] Avg episode reward: [(0, '3.620'), (1, '3.600')] |
| [2023-09-24 15:51:40,569][129962] Fps is (10 sec: 6553.5, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 3502080. Throughput: 0: 712.7, 1: 708.9. Samples: 56863. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:40,569][129962] Avg episode reward: [(0, '3.830'), (1, '3.680')] |
| [2023-09-24 15:51:40,571][130682] Saving new best policy, reward=3.830! |
| [2023-09-24 15:51:42,552][130849] Updated weights for policy 1, policy_version 6864 (0.0017) |
| [2023-09-24 15:51:42,552][130848] Updated weights for policy 0, policy_version 6880 (0.0017) |
| [2023-09-24 15:51:45,569][129962] Fps is (10 sec: 6553.6, 60 sec: 5825.4, 300 sec: 5825.4). Total num frames: 3534848. Throughput: 0: 738.7, 1: 735.8. Samples: 66352. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:45,570][129962] Avg episode reward: [(0, '3.830'), (1, '3.730')] |
| [2023-09-24 15:51:50,569][129962] Fps is (10 sec: 6553.5, 60 sec: 5898.2, 300 sec: 5898.2). Total num frames: 3567616. Throughput: 0: 772.4, 1: 768.2. Samples: 71377. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:50,570][129962] Avg episode reward: [(0, '3.730'), (1, '3.970')] |
| [2023-09-24 15:51:55,193][130849] Updated weights for policy 1, policy_version 7024 (0.0016) |
| [2023-09-24 15:51:55,194][130848] Updated weights for policy 0, policy_version 7040 (0.0016) |
| [2023-09-24 15:51:55,568][129962] Fps is (10 sec: 6553.7, 60 sec: 5957.8, 300 sec: 5957.8). Total num frames: 3600384. Throughput: 0: 810.7, 1: 807.0. Samples: 80990. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:51:55,569][129962] Avg episode reward: [(0, '3.760'), (1, '3.610')] |
| [2023-09-24 15:52:00,569][129962] Fps is (10 sec: 6553.7, 60 sec: 6007.5, 300 sec: 6007.5). Total num frames: 3633152. Throughput: 0: 807.9, 1: 808.1. Samples: 90805. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:52:00,569][129962] Avg episode reward: [(0, '3.560'), (1, '3.770')] |
| [2023-09-24 15:52:05,568][129962] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6049.5). Total num frames: 3665920. Throughput: 0: 811.2, 1: 810.8. Samples: 95649. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:52:05,569][129962] Avg episode reward: [(0, '3.600'), (1, '3.720')] |
| [2023-09-24 15:52:07,833][130848] Updated weights for policy 0, policy_version 7200 (0.0016) |
| [2023-09-24 15:52:07,834][130849] Updated weights for policy 1, policy_version 7184 (0.0018) |
| [2023-09-24 15:52:10,569][129962] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6085.5). Total num frames: 3698688. Throughput: 0: 807.2, 1: 807.0. Samples: 105183. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:52:10,569][129962] Avg episode reward: [(0, '3.790'), (1, '3.710')] |
| [2023-09-24 15:52:15,569][129962] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6116.7). Total num frames: 3731456. Throughput: 0: 806.0, 1: 807.6. Samples: 114754. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:52:15,569][129962] Avg episode reward: [(0, '3.960'), (1, '3.750')] |
| [2023-09-24 15:52:15,571][130682] Saving new best policy, reward=3.960! |
| [2023-09-24 15:52:20,472][130849] Updated weights for policy 1, policy_version 7344 (0.0014) |
| [2023-09-24 15:52:20,472][130848] Updated weights for policy 0, policy_version 7360 (0.0018) |
| [2023-09-24 15:52:20,568][129962] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6144.0). Total num frames: 3764224. Throughput: 0: 810.2, 1: 807.1. Samples: 119881. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:52:20,569][129962] Avg episode reward: [(0, '4.120'), (1, '3.720')] |
| [2023-09-24 15:52:20,577][130682] Saving new best policy, reward=4.120! |
| [2023-09-24 15:52:25,568][129962] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6168.1). Total num frames: 3796992. Throughput: 0: 806.9, 1: 807.2. Samples: 129495. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 15:52:25,569][129962] Avg episode reward: [(0, '4.000'), (1, '3.690')] |
| [2023-09-24 15:52:30,569][129962] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6189.5). Total num frames: 3829760. Throughput: 0: 808.8, 1: 811.6. Samples: 139269. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 15:52:30,570][129962] Avg episode reward: [(0, '3.890'), (1, '3.540')] |
| [2023-09-24 15:52:33,044][130848] Updated weights for policy 0, policy_version 7520 (0.0015) |
| [2023-09-24 15:52:33,046][130849] Updated weights for policy 1, policy_version 7504 (0.0018) |
| [2023-09-24 15:52:35,568][129962] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6208.7). Total num frames: 3862528. Throughput: 0: 809.2, 1: 810.5. Samples: 144261. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 15:52:35,569][129962] Avg episode reward: [(0, '3.900'), (1, '3.570')] |
| [2023-09-24 15:52:40,569][129962] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6144.0). Total num frames: 3887104. Throughput: 0: 809.2, 1: 809.8. Samples: 153844. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 15:52:40,570][129962] Avg episode reward: [(0, '3.770'), (1, '3.780')] |
| [2023-09-24 15:52:45,569][129962] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6163.5). Total num frames: 3919872. Throughput: 0: 755.8, 1: 812.2. Samples: 161366. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 15:52:45,570][129962] Avg episode reward: [(0, '3.900'), (1, '3.690')] |
| [2023-09-24 15:52:45,708][130848] Updated weights for policy 0, policy_version 7680 (0.0016) |
| [2023-09-24 15:52:45,709][130849] Updated weights for policy 1, policy_version 7664 (0.0018) |
| [2023-09-24 15:52:50,569][129962] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6181.2). Total num frames: 3952640. Throughput: 0: 808.7, 1: 809.0. Samples: 168445. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) |
| [2023-09-24 15:52:50,570][129962] Avg episode reward: [(0, '3.990'), (1, '3.750')] |
| [2023-09-24 15:52:50,699][130682] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000007744_1982464.pth... |
| [2023-09-24 15:52:50,726][130682] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000005872_1503232.pth |
| [2023-09-24 15:52:50,739][130727] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000007728_1978368.pth... |
| [2023-09-24 15:52:50,766][130727] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000005856_1499136.pth |
| [2023-09-24 15:52:55,569][129962] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6197.4). Total num frames: 3985408. Throughput: 0: 809.8, 1: 812.5. Samples: 178186. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:52:55,570][129962] Avg episode reward: [(0, '3.800'), (1, '3.810')] |
| [2023-09-24 15:52:58,273][130849] Updated weights for policy 1, policy_version 7824 (0.0016) |
| [2023-09-24 15:52:58,274][130848] Updated weights for policy 0, policy_version 7840 (0.0017) |
| [2023-09-24 15:53:00,568][129962] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6212.3). Total num frames: 4018176. Throughput: 0: 817.7, 1: 814.9. Samples: 188219. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:53:00,569][129962] Avg episode reward: [(0, '4.020'), (1, '3.740')] |
| [2023-09-24 15:53:05,569][129962] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6225.9). Total num frames: 4050944. Throughput: 0: 810.9, 1: 810.6. Samples: 192848. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 15:53:05,569][129962] Avg episode reward: [(0, '4.280'), (1, '3.820')] |
| [2023-09-24 15:53:05,580][130682] Saving new best policy, reward=4.280! |
| [2023-09-24 15:53:10,568][129962] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6238.5). Total num frames: 4083712. Throughput: 0: 812.5, 1: 815.5. Samples: 202757. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 15:53:10,569][129962] Avg episode reward: [(0, '4.140'), (1, '4.040')] |
| [2023-09-24 15:53:10,789][130848] Updated weights for policy 0, policy_version 8000 (0.0015) |
| [2023-09-24 15:53:10,790][130849] Updated weights for policy 1, policy_version 7984 (0.0017) |
| [2023-09-24 15:53:15,569][129962] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6250.2). Total num frames: 4116480. Throughput: 0: 817.6, 1: 814.9. Samples: 212733. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 15:53:15,569][129962] Avg episode reward: [(0, '4.060'), (1, '3.990')] |
| [2023-09-24 15:53:20,569][129962] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6261.0). Total num frames: 4149248. Throughput: 0: 811.9, 1: 811.7. Samples: 217322. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:53:20,569][129962] Avg episode reward: [(0, '4.140'), (1, '3.880')] |
| [2023-09-24 15:53:23,356][130848] Updated weights for policy 0, policy_version 8160 (0.0017) |
| [2023-09-24 15:53:23,357][130849] Updated weights for policy 1, policy_version 8144 (0.0017) |
| [2023-09-24 15:53:25,568][129962] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6271.1). Total num frames: 4182016. Throughput: 0: 814.9, 1: 817.9. Samples: 227323. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 15:53:25,569][129962] Avg episode reward: [(0, '4.390'), (1, '3.820')] |
| [2023-09-24 15:53:25,570][130682] Saving new best policy, reward=4.390! |
| [2023-09-24 15:53:30,568][129962] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6280.5). Total num frames: 4214784. Throughput: 0: 870.2, 1: 814.1. Samples: 237159. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 15:53:30,569][129962] Avg episode reward: [(0, '4.190'), (1, '3.820')] |
| [2023-09-24 15:53:35,568][129962] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6289.4). Total num frames: 4247552. Throughput: 0: 812.3, 1: 814.9. Samples: 241668. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:53:35,569][129962] Avg episode reward: [(0, '4.320'), (1, '3.770')] |
| [2023-09-24 15:53:36,252][130849] Updated weights for policy 1, policy_version 8304 (0.0017) |
| [2023-09-24 15:53:36,252][130848] Updated weights for policy 0, policy_version 8320 (0.0016) |
| [2023-09-24 15:53:40,568][129962] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6297.6). Total num frames: 4280320. Throughput: 0: 812.7, 1: 808.7. Samples: 251148. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:53:40,569][129962] Avg episode reward: [(0, '4.060'), (1, '3.950')] |
| [2023-09-24 15:53:45,569][129962] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6305.4). Total num frames: 4313088. Throughput: 0: 801.8, 1: 803.1. Samples: 260437. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:53:45,570][129962] Avg episode reward: [(0, '3.650'), (1, '4.060')] |
| [2023-09-24 15:53:49,069][130848] Updated weights for policy 0, policy_version 8480 (0.0016) |
| [2023-09-24 15:53:49,069][130849] Updated weights for policy 1, policy_version 8464 (0.0016) |
| [2023-09-24 15:53:50,569][129962] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6312.7). Total num frames: 4345856. Throughput: 0: 807.2, 1: 807.8. Samples: 265524. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:53:50,570][129962] Avg episode reward: [(0, '3.750'), (1, '4.100')] |
| [2023-09-24 15:53:55,568][129962] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6319.5). Total num frames: 4378624. Throughput: 0: 806.0, 1: 803.1. Samples: 275163. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:53:55,569][129962] Avg episode reward: [(0, '3.820'), (1, '4.000')] |
| [2023-09-24 15:54:00,568][129962] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6326.1). Total num frames: 4411392. Throughput: 0: 802.0, 1: 801.8. Samples: 284901. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:54:00,569][129962] Avg episode reward: [(0, '3.950'), (1, '3.920')] |
| [2023-09-24 15:54:01,659][130849] Updated weights for policy 1, policy_version 8624 (0.0017) |
| [2023-09-24 15:54:01,660][130848] Updated weights for policy 0, policy_version 8640 (0.0017) |
| [2023-09-24 15:54:05,569][129962] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6332.2). Total num frames: 4444160. Throughput: 0: 806.2, 1: 806.5. Samples: 289896. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:54:05,570][129962] Avg episode reward: [(0, '4.090'), (1, '3.870')] |
| [2023-09-24 15:54:10,568][129962] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6338.0). Total num frames: 4476928. Throughput: 0: 802.5, 1: 799.6. Samples: 299418. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 15:54:10,569][129962] Avg episode reward: [(0, '4.420'), (1, '3.690')] |
| [2023-09-24 15:54:10,569][130682] Saving new best policy, reward=4.420! |
| [2023-09-24 15:54:14,305][130848] Updated weights for policy 0, policy_version 8800 (0.0017) |
| [2023-09-24 15:54:14,305][130849] Updated weights for policy 1, policy_version 8784 (0.0019) |
| [2023-09-24 15:54:15,569][129962] Fps is (10 sec: 6144.1, 60 sec: 6485.3, 300 sec: 6322.5). Total num frames: 4505600. Throughput: 0: 747.6, 1: 802.5. Samples: 306911. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 15:54:15,569][129962] Avg episode reward: [(0, '4.430'), (1, '3.790')] |
| [2023-09-24 15:54:15,571][130682] Saving new best policy, reward=4.430! |
| [2023-09-24 15:54:20,568][129962] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6348.8). Total num frames: 4542464. Throughput: 0: 807.8, 1: 805.0. Samples: 314244. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:54:20,569][129962] Avg episode reward: [(0, '4.500'), (1, '3.840')] |
| [2023-09-24 15:54:20,577][130682] Saving new best policy, reward=4.500! |
| [2023-09-24 15:54:25,569][129962] Fps is (10 sec: 6144.0, 60 sec: 6417.0, 300 sec: 6313.8). Total num frames: 4567040. Throughput: 0: 806.9, 1: 808.3. Samples: 323835. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:54:25,570][129962] Avg episode reward: [(0, '4.210'), (1, '3.770')] |
| [2023-09-24 15:54:26,932][130849] Updated weights for policy 1, policy_version 8944 (0.0014) |
| [2023-09-24 15:54:26,934][130848] Updated weights for policy 0, policy_version 8960 (0.0016) |
| [2023-09-24 15:54:30,569][129962] Fps is (10 sec: 5734.3, 60 sec: 6417.0, 300 sec: 6319.5). Total num frames: 4599808. Throughput: 0: 759.9, 1: 815.3. Samples: 331320. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 15:54:30,569][129962] Avg episode reward: [(0, '4.410'), (1, '3.870')] |
| [2023-09-24 15:54:30,687][129962] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 129962], exiting... |
| [2023-09-24 15:54:30,688][130886] Stopping RolloutWorker_w4... |
| [2023-09-24 15:54:30,688][130887] Stopping RolloutWorker_w5... |
| [2023-09-24 15:54:30,688][129962] Runner profile tree view: |
| main_loop: 216.1012 |
| [2023-09-24 15:54:30,688][130885] Stopping RolloutWorker_w3... |
| [2023-09-24 15:54:30,688][130882] Stopping RolloutWorker_w1... |
| [2023-09-24 15:54:30,688][130883] Stopping RolloutWorker_w2... |
| [2023-09-24 15:54:30,688][130850] Stopping RolloutWorker_w0... |
| [2023-09-24 15:54:30,688][130888] Stopping RolloutWorker_w6... |
| [2023-09-24 15:54:30,688][130887] Loop rollout_proc5_evt_loop terminating... |
| [2023-09-24 15:54:30,688][130886] Loop rollout_proc4_evt_loop terminating... |
| [2023-09-24 15:54:30,688][130889] Stopping RolloutWorker_w7... |
| [2023-09-24 15:54:30,688][129962] Collected {0: 2301952, 1: 2297856}, FPS: 6141.1 |
| [2023-09-24 15:54:30,689][130882] Loop rollout_proc1_evt_loop terminating... |
| [2023-09-24 15:54:30,689][130850] Loop rollout_proc0_evt_loop terminating... |
| [2023-09-24 15:54:30,689][130682] Stopping Batcher_0... |
| [2023-09-24 15:54:30,689][130885] Loop rollout_proc3_evt_loop terminating... |
| [2023-09-24 15:54:30,689][130883] Loop rollout_proc2_evt_loop terminating... |
| [2023-09-24 15:54:30,689][130888] Loop rollout_proc6_evt_loop terminating... |
| [2023-09-24 15:54:30,689][130889] Loop rollout_proc7_evt_loop terminating... |
| [2023-09-24 15:54:30,689][130682] Loop batcher_evt_loop terminating... |
| [2023-09-24 15:54:30,692][130727] Stopping Batcher_1... |
| [2023-09-24 15:54:30,692][130727] Loop batcher_evt_loop terminating... |
| [2023-09-24 15:54:30,718][130727] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000008992_2301952.pth... |
| [2023-09-24 15:54:30,746][130682] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000009008_2306048.pth... |
| [2023-09-24 15:54:30,749][130848] Weights refcount: 2 0 |
| [2023-09-24 15:54:30,750][130848] Stopping InferenceWorker_p0-w0... |
| [2023-09-24 15:54:30,751][130848] Loop inference_proc0-0_evt_loop terminating... |
| [2023-09-24 15:54:30,755][130849] Weights refcount: 2 0 |
| [2023-09-24 15:54:30,756][130849] Stopping InferenceWorker_p1-w0... |
| [2023-09-24 15:54:30,756][130849] Loop inference_proc1-0_evt_loop terminating... |
| [2023-09-24 15:54:30,758][130727] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000006384_1634304.pth |
| [2023-09-24 15:54:30,764][130727] Stopping LearnerWorker_p1... |
| [2023-09-24 15:54:30,764][130727] Loop learner_proc1_evt_loop terminating... |
| [2023-09-24 15:54:30,775][130682] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000006400_1638400.pth |
| [2023-09-24 15:54:30,779][130682] Stopping LearnerWorker_p0... |
| [2023-09-24 15:54:30,780][130682] Loop learner_proc0_evt_loop terminating... |
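The status lines above follow a fixed textual format, so the throughput figures can be extracted mechanically. A minimal sketch of a parser for the "Fps is" lines (a hypothetical helper written against this log's format, not part of Sample Factory itself):

```python
import re

# Pull the rolling-window FPS figures and the frame counter out of one
# "Fps is" status line. Lines where the windows are still "nan" (right
# after a restart) simply fail to match and return None.
FPS_RE = re.compile(
    r"Fps is \(10 sec: ([\d.]+), 60 sec: ([\d.]+), 300 sec: ([\d.]+)\)\. "
    r"Total num frames: (\d+)"
)

def parse_fps_line(line: str):
    """Return (fps_10s, fps_60s, fps_300s, total_frames), or None if no match."""
    m = FPS_RE.search(line)
    if m is None:
        return None
    return float(m.group(1)), float(m.group(2)), float(m.group(3)), int(m.group(4))
```

For example, feeding it the first status line of this section yields `(6553.5, 6553.6, 6332.2, 4444160)`.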
| [2023-09-24 15:54:53,982][13256] Saving configuration to ./train_atari/Berzerk/config.json... |
| [2023-09-24 15:54:54,258][13256] Rollout worker 0 uses device cpu |
| [2023-09-24 15:54:54,259][13256] Rollout worker 1 uses device cpu |
| [2023-09-24 15:54:54,259][13256] Rollout worker 2 uses device cpu |
| [2023-09-24 15:54:54,260][13256] Rollout worker 3 uses device cpu |
| [2023-09-24 15:54:54,260][13256] Rollout worker 4 uses device cpu |
| [2023-09-24 15:54:54,260][13256] Rollout worker 5 uses device cpu |
| [2023-09-24 15:54:54,261][13256] Rollout worker 6 uses device cpu |
| [2023-09-24 15:54:54,261][13256] Rollout worker 7 uses device cpu |
| [2023-09-24 15:54:54,261][13256] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1 |
| [2023-09-24 15:54:54,306][13256] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:54:54,306][13256] InferenceWorker_p0-w0: min num requests: 1 |
| [2023-09-24 15:54:54,310][13256] Using GPUs [1] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:54:54,310][13256] InferenceWorker_p1-w0: min num requests: 1 |
| [2023-09-24 15:54:54,332][13256] Starting all processes... |
| [2023-09-24 15:54:54,333][13256] Starting process learner_proc0 |
| [2023-09-24 15:54:55,901][13256] Starting process learner_proc1 |
| [2023-09-24 15:54:55,904][13827] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:54:55,904][13827] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
| [2023-09-24 15:54:55,922][13827] Num visible devices: 1 |
| [2023-09-24 15:54:55,937][13827] Starting seed is not provided |
| [2023-09-24 15:54:55,937][13827] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:54:55,937][13827] Initializing actor-critic model on device cuda:0 |
| [2023-09-24 15:54:55,938][13827] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:54:55,938][13827] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:54:55,949][13827] ConvEncoder: input_channels=4 |
| [2023-09-24 15:54:56,133][13827] Conv encoder output size: 512 |
| [2023-09-24 15:54:56,135][13827] Created Actor Critic model with architecture: |
| [2023-09-24 15:54:56,135][13827] ActorCriticSharedWeights( |
| (obs_normalizer): ObservationNormalizer( |
| (running_mean_std): RunningMeanStdDictInPlace( |
| (running_mean_std): ModuleDict( |
| (obs): RunningMeanStdInPlace() |
| ) |
| ) |
| ) |
| (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
| (encoder): MultiInputEncoder( |
| (encoders): ModuleDict( |
| (obs): ConvEncoder( |
| (enc): RecursiveScriptModule( |
| original_name=ConvEncoderImpl |
| (conv_head): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Conv2d) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| (2): RecursiveScriptModule(original_name=Conv2d) |
| (3): RecursiveScriptModule(original_name=ReLU) |
| (4): RecursiveScriptModule(original_name=Conv2d) |
| (5): RecursiveScriptModule(original_name=ReLU) |
| ) |
| (mlp_layers): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Linear) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| ) |
| ) |
| ) |
| ) |
| ) |
| (core): ModelCoreIdentity() |
| (decoder): MlpDecoder( |
| (mlp): Identity() |
| ) |
| (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
| (action_parameterization): ActionParameterizationDefault( |
| (distribution_linear): Linear(in_features=512, out_features=18, bias=True) |
| ) |
| ) |
| [2023-09-24 15:54:56,679][13827] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2023-09-24 15:54:56,679][13827] Loading state from checkpoint ./train_atari/Berzerk/checkpoint_p0/checkpoint_000009008_2306048.pth... |
| [2023-09-24 15:54:56,701][13827] Loading model from checkpoint |
| [2023-09-24 15:54:56,706][13827] Loaded experiment state at self.train_step=9008, self.env_steps=2306048 |
| [2023-09-24 15:54:56,706][13827] Initialized policy 0 weights for model version 9008 |
| [2023-09-24 15:54:56,708][13827] LearnerWorker_p0 finished initialization! |
| [2023-09-24 15:54:56,708][13827] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:54:57,480][13256] Starting all processes... |
| [2023-09-24 15:54:57,484][13996] Using GPUs [1] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:54:57,484][13996] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1 |
| [2023-09-24 15:54:57,487][13256] Starting process inference_proc0-0 |
| [2023-09-24 15:54:57,488][13256] Starting process inference_proc1-0 |
| [2023-09-24 15:54:57,488][13256] Starting process rollout_proc0 |
| [2023-09-24 15:54:57,488][13256] Starting process rollout_proc1 |
| [2023-09-24 15:54:57,489][13256] Starting process rollout_proc2 |
| [2023-09-24 15:54:57,489][13256] Starting process rollout_proc3 |
| [2023-09-24 15:54:57,493][13256] Starting process rollout_proc4 |
| [2023-09-24 15:54:57,497][13256] Starting process rollout_proc5 |
| [2023-09-24 15:54:57,498][13256] Starting process rollout_proc6 |
| [2023-09-24 15:54:57,498][13256] Starting process rollout_proc7 |
| [2023-09-24 15:54:57,502][13996] Num visible devices: 1 |
| [2023-09-24 15:54:57,522][13996] Starting seed is not provided |
| [2023-09-24 15:54:57,522][13996] Using GPUs [0] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:54:57,523][13996] Initializing actor-critic model on device cuda:0 |
| [2023-09-24 15:54:57,523][13996] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:54:57,524][13996] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:54:57,535][13996] ConvEncoder: input_channels=4 |
| [2023-09-24 15:54:57,886][13996] Conv encoder output size: 512 |
| [2023-09-24 15:54:57,888][13996] Created Actor Critic model with architecture: |
| [2023-09-24 15:54:57,888][13996] ActorCriticSharedWeights( |
| (obs_normalizer): ObservationNormalizer( |
| (running_mean_std): RunningMeanStdDictInPlace( |
| (running_mean_std): ModuleDict( |
| (obs): RunningMeanStdInPlace() |
| ) |
| ) |
| ) |
| (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
| (encoder): MultiInputEncoder( |
| (encoders): ModuleDict( |
| (obs): ConvEncoder( |
| (enc): RecursiveScriptModule( |
| original_name=ConvEncoderImpl |
| (conv_head): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Conv2d) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| (2): RecursiveScriptModule(original_name=Conv2d) |
| (3): RecursiveScriptModule(original_name=ReLU) |
| (4): RecursiveScriptModule(original_name=Conv2d) |
| (5): RecursiveScriptModule(original_name=ReLU) |
| ) |
| (mlp_layers): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Linear) |
| (1): RecursiveScriptModule(original_name=ReLU) |
| ) |
| ) |
| ) |
| ) |
| ) |
| (core): ModelCoreIdentity() |
| (decoder): MlpDecoder( |
| (mlp): Identity() |
| ) |
| (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
| (action_parameterization): ActionParameterizationDefault( |
| (distribution_linear): Linear(in_features=512, out_features=18, bias=True) |
| ) |
| ) |
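The architecture printout shows the layer types but not the kernel sizes, since the conv head is a scripted module. Assuming the three `Conv2d` layers use the classic Atari (Nature-DQN) hyperparameters, which Sample Factory's default Atari encoder follows, the shape arithmetic reproduces the logged "Conv encoder output size: 512":

```python
# Sketch of the conv-shape arithmetic behind "Conv encoder output size: 512".
# Assumption: the conv head is Conv2d(4, 32, kernel=8, stride=4) ->
# Conv2d(32, 64, 4, stride=2) -> Conv2d(64, 64, 3, stride=1), i.e. the
# standard Nature-DQN Atari stack; the log itself does not print these.

def conv_out(size: int, kernel: int, stride: int) -> int:
    # Output spatial size with no padding: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

h = 84  # input is (4, 84, 84), per the RunningMeanStd lines above
for kernel, stride in [(8, 4), (4, 2), (3, 1)]:
    h = conv_out(h, kernel, stride)

flattened = 64 * h * h  # last conv has 64 channels
print(h, flattened)     # spatial size 7, flattened size 3136
```

The `mlp_layers` Linear then maps the 3136-dim flattened tensor to the 512-dim encoder output that `critic_linear` and `distribution_linear` (18 actions, the full Atari action set) consume.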
| [2023-09-24 15:54:58,475][13996] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2023-09-24 15:54:58,476][13996] Loading state from checkpoint ./train_atari/Berzerk/checkpoint_p1/checkpoint_000008992_2301952.pth... |
| [2023-09-24 15:54:58,496][13996] Loading model from checkpoint |
| [2023-09-24 15:54:58,500][13996] Loaded experiment state at self.train_step=8992, self.env_steps=2301952 |
| [2023-09-24 15:54:58,500][13996] Initialized policy 1 weights for model version 8992 |
| [2023-09-24 15:54:58,502][13996] LearnerWorker_p1 finished initialization! |
| [2023-09-24 15:54:58,502][13996] Using GPUs [0] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:54:59,386][14087] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2023-09-24 15:54:59,387][14087] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
| [2023-09-24 15:54:59,391][14088] Using GPUs [1] for process 1 (actually maps to GPUs [1]) |
| [2023-09-24 15:54:59,391][14088] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1 |
| [2023-09-24 15:54:59,393][14122] Worker 2 uses CPU cores [8, 9, 10, 11] |
| [2023-09-24 15:54:59,405][14087] Num visible devices: 1 |
| [2023-09-24 15:54:59,409][14127] Worker 6 uses CPU cores [24, 25, 26, 27] |
| [2023-09-24 15:54:59,409][14088] Num visible devices: 1 |
| [2023-09-24 15:54:59,428][14120] Worker 0 uses CPU cores [0, 1, 2, 3] |
| [2023-09-24 15:54:59,434][14123] Worker 4 uses CPU cores [16, 17, 18, 19] |
| [2023-09-24 15:54:59,461][14125] Worker 1 uses CPU cores [4, 5, 6, 7] |
| [2023-09-24 15:54:59,462][14126] Worker 5 uses CPU cores [20, 21, 22, 23] |
| [2023-09-24 15:54:59,471][14128] Worker 7 uses CPU cores [28, 29, 30, 31] |
| [2023-09-24 15:54:59,514][14124] Worker 3 uses CPU cores [12, 13, 14, 15] |
| [2023-09-24 15:55:00,036][14088] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:55:00,036][14088] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:55:00,047][14088] ConvEncoder: input_channels=4 |
| [2023-09-24 15:55:00,052][14087] RunningMeanStd input shape: (4, 84, 84) |
| [2023-09-24 15:55:00,052][14087] RunningMeanStd input shape: (1,) |
| [2023-09-24 15:55:00,064][14087] ConvEncoder: input_channels=4 |
| [2023-09-24 15:55:00,154][14088] Conv encoder output size: 512 |
| [2023-09-24 15:55:00,160][13256] Inference worker 1-0 is ready! |
| [2023-09-24 15:55:00,174][14087] Conv encoder output size: 512 |
| [2023-09-24 15:55:00,180][13256] Inference worker 0-0 is ready! |
| [2023-09-24 15:55:00,181][13256] All inference workers are ready! Signal rollout workers to start! |
| [2023-09-24 15:55:00,293][13256] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4608000. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
| [2023-09-24 15:55:00,619][14125] Decorrelating experience for 0 frames... |
| [2023-09-24 15:55:00,622][14123] Decorrelating experience for 0 frames... |
| [2023-09-24 15:55:00,628][14126] Decorrelating experience for 0 frames... |
| [2023-09-24 15:55:00,629][14120] Decorrelating experience for 0 frames... |
| [2023-09-24 15:55:00,708][14128] Decorrelating experience for 0 frames... |
| [2023-09-24 15:55:00,719][14124] Decorrelating experience for 0 frames... |
| [2023-09-24 15:55:00,720][14127] Decorrelating experience for 0 frames... |
| [2023-09-24 15:55:00,729][14122] Decorrelating experience for 0 frames... |
| [2023-09-24 15:55:05,293][13256] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 4616192. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:05,294][13256] Avg episode reward: [(0, '5.143'), (1, '4.400')] |
| [2023-09-24 15:55:05,298][13996] Saving new best policy, reward=4.400! |
| [2023-09-24 15:55:05,298][13827] Saving new best policy, reward=5.143! |
| [2023-09-24 15:55:10,293][13256] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 4640768. Throughput: 0: 409.6, 1: 409.6. Samples: 8192. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:10,294][13256] Avg episode reward: [(0, '4.200'), (1, '4.103')] |
| [2023-09-24 15:55:14,296][13256] Heartbeat connected on Batcher_0 |
| [2023-09-24 15:55:14,298][13256] Heartbeat connected on LearnerWorker_p0 |
| [2023-09-24 15:55:14,300][13256] Heartbeat connected on Batcher_1 |
| [2023-09-24 15:55:14,302][13256] Heartbeat connected on LearnerWorker_p1 |
| [2023-09-24 15:55:14,310][13256] Heartbeat connected on InferenceWorker_p0-w0 |
| [2023-09-24 15:55:14,313][13256] Heartbeat connected on RolloutWorker_w0 |
| [2023-09-24 15:55:14,316][13256] Heartbeat connected on InferenceWorker_p1-w0 |
| [2023-09-24 15:55:14,318][13256] Heartbeat connected on RolloutWorker_w1 |
| [2023-09-24 15:55:14,318][13256] Heartbeat connected on RolloutWorker_w2 |
| [2023-09-24 15:55:14,321][13256] Heartbeat connected on RolloutWorker_w3 |
| [2023-09-24 15:55:14,324][13256] Heartbeat connected on RolloutWorker_w4 |
| [2023-09-24 15:55:14,328][13256] Heartbeat connected on RolloutWorker_w5 |
| [2023-09-24 15:55:14,330][13256] Heartbeat connected on RolloutWorker_w6 |
| [2023-09-24 15:55:14,332][13256] Heartbeat connected on RolloutWorker_w7 |
| [2023-09-24 15:55:15,293][13256] Fps is (10 sec: 5734.5, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 4673536. Throughput: 0: 606.4, 1: 602.7. Samples: 18137. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:15,293][13256] Avg episode reward: [(0, '4.364'), (1, '4.163')] |
| [2023-09-24 15:55:16,799][14087] Updated weights for policy 0, policy_version 9168 (0.0015) |
| [2023-09-24 15:55:16,799][14088] Updated weights for policy 1, policy_version 9152 (0.0017) |
| [2023-09-24 15:55:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 4915.2, 300 sec: 4915.2). Total num frames: 4706304. Throughput: 0: 568.2, 1: 567.2. Samples: 22706. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:20,294][13256] Avg episode reward: [(0, '4.258'), (1, '3.973')] |
| [2023-09-24 15:55:25,293][13256] Fps is (10 sec: 6553.5, 60 sec: 5242.9, 300 sec: 5242.9). Total num frames: 4739072. Throughput: 0: 655.4, 1: 655.4. Samples: 32768. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:25,294][13256] Avg episode reward: [(0, '4.578'), (1, '4.043')] |
| [2023-09-24 15:55:29,554][14088] Updated weights for policy 1, policy_version 9312 (0.0018) |
| [2023-09-24 15:55:29,554][14087] Updated weights for policy 0, policy_version 9328 (0.0018) |
| [2023-09-24 15:55:30,293][13256] Fps is (10 sec: 6553.4, 60 sec: 5461.3, 300 sec: 5461.3). Total num frames: 4771840. Throughput: 0: 703.8, 1: 702.4. Samples: 42186. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:30,294][13256] Avg episode reward: [(0, '4.480'), (1, '4.070')] |
| [2023-09-24 15:55:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 5617.4, 300 sec: 5617.4). Total num frames: 4804608. Throughput: 0: 672.9, 1: 670.4. Samples: 47015. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:35,294][13256] Avg episode reward: [(0, '4.360'), (1, '4.110')] |
| [2023-09-24 15:55:40,293][13256] Fps is (10 sec: 6553.8, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 4837376. Throughput: 0: 700.2, 1: 700.5. Samples: 56024. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:40,294][13256] Avg episode reward: [(0, '4.410'), (1, '4.070')] |
| [2023-09-24 15:55:42,635][14088] Updated weights for policy 1, policy_version 9472 (0.0018) |
| [2023-09-24 15:55:42,635][14087] Updated weights for policy 0, policy_version 9488 (0.0019) |
| [2023-09-24 15:55:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 5825.4, 300 sec: 5825.4). Total num frames: 4870144. Throughput: 0: 728.3, 1: 728.3. Samples: 65544. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:45,294][13256] Avg episode reward: [(0, '4.530'), (1, '4.190')] |
| [2023-09-24 15:55:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 5898.3, 300 sec: 5898.3). Total num frames: 4902912. Throughput: 0: 762.3, 1: 761.9. Samples: 70637. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:50,294][13256] Avg episode reward: [(0, '4.340'), (1, '3.980')] |
| [2023-09-24 15:55:55,233][14088] Updated weights for policy 1, policy_version 9632 (0.0017) |
| [2023-09-24 15:55:55,233][14087] Updated weights for policy 0, policy_version 9648 (0.0015) |
| [2023-09-24 15:55:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 5957.8, 300 sec: 5957.8). Total num frames: 4935680. Throughput: 0: 800.8, 1: 800.9. Samples: 80267. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:55:55,294][13256] Avg episode reward: [(0, '4.410'), (1, '3.950')] |
| [2023-09-24 15:56:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6007.5, 300 sec: 6007.5). Total num frames: 4968448. Throughput: 0: 799.1, 1: 800.3. Samples: 90112. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:56:00,293][13256] Avg episode reward: [(0, '4.560'), (1, '3.960')] |
| [2023-09-24 15:56:05,298][13256] Fps is (10 sec: 6141.0, 60 sec: 6348.3, 300 sec: 5986.0). Total num frames: 4997120. Throughput: 0: 803.2, 1: 803.1. Samples: 94996. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) |
| [2023-09-24 15:56:05,300][13256] Avg episode reward: [(0, '4.540'), (1, '4.140')] |
| [2023-09-24 15:56:07,822][14088] Updated weights for policy 1, policy_version 9792 (0.0016) |
| [2023-09-24 15:56:07,823][14087] Updated weights for policy 0, policy_version 9808 (0.0017) |
| [2023-09-24 15:56:10,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 5968.5). Total num frames: 5025792. Throughput: 0: 798.3, 1: 798.0. Samples: 104605. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) |
| [2023-09-24 15:56:10,294][13256] Avg episode reward: [(0, '4.260'), (1, '4.110')] |
| [2023-09-24 15:56:15,293][13256] Fps is (10 sec: 6147.0, 60 sec: 6417.1, 300 sec: 6007.5). Total num frames: 5058560. Throughput: 0: 804.5, 1: 804.5. Samples: 114592. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:56:15,294][13256] Avg episode reward: [(0, '4.290'), (1, '4.390')] |
| [2023-09-24 15:56:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6041.6). Total num frames: 5091328. Throughput: 0: 799.9, 1: 801.6. Samples: 119081. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:56:20,294][13256] Avg episode reward: [(0, '4.200'), (1, '4.220')] |
| [2023-09-24 15:56:20,489][14087] Updated weights for policy 0, policy_version 9968 (0.0018) |
| [2023-09-24 15:56:20,489][14088] Updated weights for policy 1, policy_version 9952 (0.0016) |
| [2023-09-24 15:56:25,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6071.7). Total num frames: 5124096. Throughput: 0: 811.3, 1: 811.0. Samples: 129026. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 15:56:25,294][13256] Avg episode reward: [(0, '4.050'), (1, '4.120')] |
| [2023-09-24 15:56:30,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6098.5). Total num frames: 5156864. Throughput: 0: 815.2, 1: 814.6. Samples: 138884. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 15:56:30,295][13256] Avg episode reward: [(0, '4.190'), (1, '3.980')] |
| [2023-09-24 15:56:33,223][14088] Updated weights for policy 1, policy_version 10112 (0.0017) |
| [2023-09-24 15:56:33,223][14087] Updated weights for policy 0, policy_version 10128 (0.0015) |
| [2023-09-24 15:56:35,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6122.4). Total num frames: 5189632. Throughput: 0: 807.8, 1: 808.3. Samples: 143362. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 15:56:35,294][13256] Avg episode reward: [(0, '4.120'), (1, '4.170')] |
| [2023-09-24 15:56:40,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6144.0). Total num frames: 5222400. Throughput: 0: 811.2, 1: 811.7. Samples: 153296. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 15:56:40,294][13256] Avg episode reward: [(0, '4.120'), (1, '4.060')] |
| [2023-09-24 15:56:45,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6163.5). Total num frames: 5255168. Throughput: 0: 806.0, 1: 806.0. Samples: 162656. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:56:45,294][13256] Avg episode reward: [(0, '4.300'), (1, '4.180')] |
| [2023-09-24 15:56:46,016][14088] Updated weights for policy 1, policy_version 10272 (0.0018) |
| [2023-09-24 15:56:46,017][14087] Updated weights for policy 0, policy_version 10288 (0.0017) |
| [2023-09-24 15:56:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6181.2). Total num frames: 5287936. Throughput: 0: 807.7, 1: 807.8. Samples: 167684. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:56:50,294][13256] Avg episode reward: [(0, '4.180'), (1, '4.210')] |
| [2023-09-24 15:56:50,305][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000010320_2641920.pth... |
| [2023-09-24 15:56:50,305][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000010336_2646016.pth... |
| [2023-09-24 15:56:50,334][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000007728_1978368.pth |
| [2023-09-24 15:56:50,342][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000007744_1982464.pth |
| [2023-09-24 15:56:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6197.4). Total num frames: 5320704. Throughput: 0: 806.9, 1: 806.4. Samples: 177204. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:56:55,294][13256] Avg episode reward: [(0, '4.360'), (1, '4.230')] |
| [2023-09-24 15:56:58,677][14087] Updated weights for policy 0, policy_version 10448 (0.0015) |
| [2023-09-24 15:56:58,677][14088] Updated weights for policy 1, policy_version 10432 (0.0015) |
| [2023-09-24 15:57:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6212.3). Total num frames: 5353472. Throughput: 0: 802.7, 1: 803.3. Samples: 186862. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 15:57:00,295][13256] Avg episode reward: [(0, '4.390'), (1, '4.010')] |
| [2023-09-24 15:57:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6485.8, 300 sec: 6225.9). Total num frames: 5386240. Throughput: 0: 810.5, 1: 810.4. Samples: 192023. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 15:57:05,294][13256] Avg episode reward: [(0, '4.390'), (1, '3.970')] |
| [2023-09-24 15:57:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6238.5). Total num frames: 5419008. Throughput: 0: 807.2, 1: 807.6. Samples: 201693. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) |
| [2023-09-24 15:57:10,294][13256] Avg episode reward: [(0, '4.460'), (1, '4.020')] |
| [2023-09-24 15:57:11,166][14087] Updated weights for policy 0, policy_version 10608 (0.0016) |
| [2023-09-24 15:57:11,167][14088] Updated weights for policy 1, policy_version 10592 (0.0017) |
| [2023-09-24 15:57:15,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6250.2). Total num frames: 5451776. Throughput: 0: 805.5, 1: 805.6. Samples: 211384. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) |
| [2023-09-24 15:57:15,293][13256] Avg episode reward: [(0, '4.510'), (1, '4.010')] |
| [2023-09-24 15:57:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6261.0). Total num frames: 5484544. Throughput: 0: 810.3, 1: 808.7. Samples: 216216. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 15:57:20,293][13256] Avg episode reward: [(0, '4.570'), (1, '4.050')] |
| [2023-09-24 15:57:23,928][14088] Updated weights for policy 1, policy_version 10752 (0.0018) |
| [2023-09-24 15:57:23,928][14087] Updated weights for policy 0, policy_version 10768 (0.0018) |
| [2023-09-24 15:57:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6271.1). Total num frames: 5517312. Throughput: 0: 805.6, 1: 804.1. Samples: 225733. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 15:57:25,294][13256] Avg episode reward: [(0, '4.530'), (1, '4.200')] |
| [2023-09-24 15:57:30,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6225.9). Total num frames: 5541888. Throughput: 0: 809.6, 1: 808.9. Samples: 235488. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 15:57:30,294][13256] Avg episode reward: [(0, '4.330'), (1, '4.160')] |
| [2023-09-24 15:57:35,293][13256] Fps is (10 sec: 5734.2, 60 sec: 6417.1, 300 sec: 6236.5). Total num frames: 5574656. Throughput: 0: 803.2, 1: 803.5. Samples: 239985. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:57:35,294][13256] Avg episode reward: [(0, '4.160'), (1, '4.050')] |
| [2023-09-24 15:57:36,812][14088] Updated weights for policy 1, policy_version 10912 (0.0015) |
| [2023-09-24 15:57:36,813][14087] Updated weights for policy 0, policy_version 10928 (0.0018) |
| [2023-09-24 15:57:40,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6246.4). Total num frames: 5607424. Throughput: 0: 806.9, 1: 807.6. Samples: 249856. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 15:57:40,294][13256] Avg episode reward: [(0, '4.090'), (1, '4.000')] |
| [2023-09-24 15:57:45,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6255.7). Total num frames: 5640192. Throughput: 0: 806.7, 1: 806.6. Samples: 259460. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 15:57:45,294][13256] Avg episode reward: [(0, '4.020'), (1, '3.920')] |
| [2023-09-24 15:57:49,600][14088] Updated weights for policy 1, policy_version 11072 (0.0017) |
| [2023-09-24 15:57:49,600][14087] Updated weights for policy 0, policy_version 11088 (0.0017) |
| [2023-09-24 15:57:50,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6264.5). Total num frames: 5672960. Throughput: 0: 801.7, 1: 802.1. Samples: 264192. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 15:57:50,294][13256] Avg episode reward: [(0, '4.050'), (1, '3.780')] |
| [2023-09-24 15:57:55,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6272.7). Total num frames: 5705728. Throughput: 0: 802.2, 1: 801.2. Samples: 273843. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 15:57:55,293][13256] Avg episode reward: [(0, '4.330'), (1, '3.880')] |
| [2023-09-24 15:58:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6280.5). Total num frames: 5738496. Throughput: 0: 800.4, 1: 800.4. Samples: 283418. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 15:58:00,294][13256] Avg episode reward: [(0, '4.320'), (1, '3.820')] |
| [2023-09-24 15:58:02,330][14087] Updated weights for policy 0, policy_version 11248 (0.0016) |
| [2023-09-24 15:58:02,331][14088] Updated weights for policy 1, policy_version 11232 (0.0015) |
| [2023-09-24 15:58:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6287.9). Total num frames: 5771264. Throughput: 0: 801.1, 1: 801.9. Samples: 288352. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 15:58:05,293][13256] Avg episode reward: [(0, '4.200'), (1, '3.880')] |
| [2023-09-24 15:58:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6294.9). Total num frames: 5804032. Throughput: 0: 800.2, 1: 800.4. Samples: 297760. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:58:10,293][13256] Avg episode reward: [(0, '4.200'), (1, '3.810')] |
| [2023-09-24 15:58:15,277][14087] Updated weights for policy 0, policy_version 11408 (0.0017) |
| [2023-09-24 15:58:15,278][14088] Updated weights for policy 1, policy_version 11392 (0.0017) |
| [2023-09-24 15:58:15,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6301.5). Total num frames: 5836800. Throughput: 0: 796.4, 1: 797.2. Samples: 307200. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:58:15,294][13256] Avg episode reward: [(0, '4.210'), (1, '3.790')] |
| [2023-09-24 15:58:20,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6307.8). Total num frames: 5869568. Throughput: 0: 801.0, 1: 800.6. Samples: 312056. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:58:20,294][13256] Avg episode reward: [(0, '4.320'), (1, '3.810')] |
| [2023-09-24 15:58:25,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6273.9). Total num frames: 5894144. Throughput: 0: 799.1, 1: 798.5. Samples: 321747. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:58:25,294][13256] Avg episode reward: [(0, '4.320'), (1, '3.800')] |
| [2023-09-24 15:58:27,863][14088] Updated weights for policy 1, policy_version 11552 (0.0018) |
| [2023-09-24 15:58:27,863][14087] Updated weights for policy 0, policy_version 11568 (0.0019) |
| [2023-09-24 15:58:30,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6280.5). Total num frames: 5926912. Throughput: 0: 803.3, 1: 803.7. Samples: 331776. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 15:58:30,294][13256] Avg episode reward: [(0, '4.440'), (1, '3.690')] |
| [2023-09-24 15:58:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6286.9). Total num frames: 5959680. Throughput: 0: 801.3, 1: 800.3. Samples: 336266. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 15:58:35,294][13256] Avg episode reward: [(0, '4.520'), (1, '3.930')] |
| [2023-09-24 15:58:40,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6292.9). Total num frames: 5992448. Throughput: 0: 802.6, 1: 803.3. Samples: 346112. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:58:40,294][13256] Avg episode reward: [(0, '4.570'), (1, '3.940')] |
| [2023-09-24 15:58:40,512][14087] Updated weights for policy 0, policy_version 11728 (0.0015) |
| [2023-09-24 15:58:40,513][14088] Updated weights for policy 1, policy_version 11712 (0.0018) |
| [2023-09-24 15:58:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6298.7). Total num frames: 6025216. Throughput: 0: 806.0, 1: 806.2. Samples: 355968. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:58:45,294][13256] Avg episode reward: [(0, '4.420'), (1, '4.020')] |
| [2023-09-24 15:58:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6304.3). Total num frames: 6057984. Throughput: 0: 801.9, 1: 802.0. Samples: 360529. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:58:50,294][13256] Avg episode reward: [(0, '4.450'), (1, '4.150')] |
| [2023-09-24 15:58:50,304][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000011824_3026944.pth... |
| [2023-09-24 15:58:50,304][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000011840_3031040.pth... |
| [2023-09-24 15:58:50,340][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000009008_2306048.pth |
| [2023-09-24 15:58:50,345][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000008992_2301952.pth |
| [2023-09-24 15:58:53,151][14087] Updated weights for policy 0, policy_version 11888 (0.0014) |
| [2023-09-24 15:58:53,152][14088] Updated weights for policy 1, policy_version 11872 (0.0017) |
| [2023-09-24 15:58:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6309.6). Total num frames: 6090752. Throughput: 0: 809.5, 1: 810.2. Samples: 370647. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:58:55,294][13256] Avg episode reward: [(0, '4.420'), (1, '4.220')] |
| [2023-09-24 15:59:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6314.7). Total num frames: 6123520. Throughput: 0: 808.8, 1: 808.3. Samples: 379970. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) |
| [2023-09-24 15:59:00,294][13256] Avg episode reward: [(0, '4.430'), (1, '4.120')] |
| [2023-09-24 15:59:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6319.5). Total num frames: 6156288. Throughput: 0: 808.0, 1: 809.4. Samples: 384835. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) |
| [2023-09-24 15:59:05,294][13256] Avg episode reward: [(0, '4.320'), (1, '4.320')] |
| [2023-09-24 15:59:06,115][14087] Updated weights for policy 0, policy_version 12048 (0.0017) |
| [2023-09-24 15:59:06,115][14088] Updated weights for policy 1, policy_version 12032 (0.0018) |
| [2023-09-24 15:59:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6324.2). Total num frames: 6189056. Throughput: 0: 805.7, 1: 805.6. Samples: 394257. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:59:10,294][13256] Avg episode reward: [(0, '4.470'), (1, '4.320')] |
| [2023-09-24 15:59:15,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6328.7). Total num frames: 6221824. Throughput: 0: 799.6, 1: 799.2. Samples: 403722. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:59:15,294][13256] Avg episode reward: [(0, '4.440'), (1, '4.370')] |
| [2023-09-24 15:59:18,799][14088] Updated weights for policy 1, policy_version 12192 (0.0015) |
| [2023-09-24 15:59:18,801][14087] Updated weights for policy 0, policy_version 12208 (0.0017) |
| [2023-09-24 15:59:20,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6333.0). Total num frames: 6254592. Throughput: 0: 805.4, 1: 806.2. Samples: 408788. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:59:20,294][13256] Avg episode reward: [(0, '4.350'), (1, '4.440')] |
| [2023-09-24 15:59:20,305][13996] Saving new best policy, reward=4.440! |
| [2023-09-24 15:59:25,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6337.2). Total num frames: 6287360. Throughput: 0: 801.4, 1: 801.2. Samples: 418231. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:59:25,294][13256] Avg episode reward: [(0, '4.620'), (1, '4.360')] |
| [2023-09-24 15:59:30,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6310.9). Total num frames: 6311936. Throughput: 0: 800.6, 1: 800.9. Samples: 428032. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 15:59:30,294][13256] Avg episode reward: [(0, '4.610'), (1, '4.310')] |
| [2023-09-24 15:59:31,708][14088] Updated weights for policy 1, policy_version 12352 (0.0018) |
| [2023-09-24 15:59:31,709][14087] Updated weights for policy 0, policy_version 12368 (0.0018) |
| [2023-09-24 15:59:35,293][13256] Fps is (10 sec: 5734.6, 60 sec: 6417.1, 300 sec: 6315.3). Total num frames: 6344704. Throughput: 0: 799.7, 1: 799.7. Samples: 432500. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 15:59:35,293][13256] Avg episode reward: [(0, '4.820'), (1, '4.590')] |
| [2023-09-24 15:59:35,305][13996] Saving new best policy, reward=4.590! |
| [2023-09-24 15:59:40,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6319.5). Total num frames: 6377472. Throughput: 0: 797.0, 1: 796.8. Samples: 442368. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) |
| [2023-09-24 15:59:40,294][13256] Avg episode reward: [(0, '4.600'), (1, '4.650')] |
| [2023-09-24 15:59:40,295][13996] Saving new best policy, reward=4.650! |
| [2023-09-24 15:59:44,340][14088] Updated weights for policy 1, policy_version 12512 (0.0018) |
| [2023-09-24 15:59:44,340][14087] Updated weights for policy 0, policy_version 12528 (0.0018) |
| [2023-09-24 15:59:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6323.7). Total num frames: 6410240. Throughput: 0: 802.9, 1: 802.8. Samples: 452224. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) |
| [2023-09-24 15:59:45,294][13256] Avg episode reward: [(0, '4.890'), (1, '4.630')] |
| [2023-09-24 15:59:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6327.6). Total num frames: 6443008. Throughput: 0: 800.2, 1: 798.9. Samples: 456795. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:59:50,294][13256] Avg episode reward: [(0, '4.620'), (1, '4.700')] |
| [2023-09-24 15:59:50,306][13996] Saving new best policy, reward=4.700! |
| [2023-09-24 15:59:55,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 6475776. Throughput: 0: 807.3, 1: 807.5. Samples: 466924. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 15:59:55,294][13256] Avg episode reward: [(0, '4.600'), (1, '4.830')] |
| [2023-09-24 15:59:55,295][13996] Saving new best policy, reward=4.830! |
| [2023-09-24 15:59:56,928][14088] Updated weights for policy 1, policy_version 12672 (0.0020) |
| [2023-09-24 15:59:56,929][14087] Updated weights for policy 0, policy_version 12688 (0.0020) |
| [2023-09-24 16:00:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 6508544. Throughput: 0: 808.4, 1: 808.1. Samples: 476464. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:00:00,294][13256] Avg episode reward: [(0, '4.450'), (1, '4.790')] |
| [2023-09-24 16:00:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 6541312. Throughput: 0: 805.4, 1: 805.6. Samples: 481280. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:00:05,294][13256] Avg episode reward: [(0, '4.330'), (1, '4.470')] |
| [2023-09-24 16:00:09,591][14087] Updated weights for policy 0, policy_version 12848 (0.0015) |
| [2023-09-24 16:00:09,591][14088] Updated weights for policy 1, policy_version 12832 (0.0018) |
| [2023-09-24 16:00:10,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 6574080. Throughput: 0: 810.1, 1: 809.9. Samples: 491133. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:00:10,294][13256] Avg episode reward: [(0, '4.290'), (1, '4.490')] |
| [2023-09-24 16:00:15,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 6606848. Throughput: 0: 806.4, 1: 806.0. Samples: 500588. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:00:15,294][13256] Avg episode reward: [(0, '4.370'), (1, '4.390')] |
| [2023-09-24 16:00:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 6639616. Throughput: 0: 812.6, 1: 813.5. Samples: 505675. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:00:20,294][13256] Avg episode reward: [(0, '4.390'), (1, '4.340')] |
| [2023-09-24 16:00:22,319][14088] Updated weights for policy 1, policy_version 12992 (0.0017) |
| [2023-09-24 16:00:22,319][14087] Updated weights for policy 0, policy_version 13008 (0.0016) |
| [2023-09-24 16:00:25,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 6672384. Throughput: 0: 810.6, 1: 810.0. Samples: 515296. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:00:25,293][13256] Avg episode reward: [(0, '4.400'), (1, '4.150')] |
| [2023-09-24 16:00:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 6705152. Throughput: 0: 806.9, 1: 807.2. Samples: 524856. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:00:30,293][13256] Avg episode reward: [(0, '4.510'), (1, '4.420')] |
| [2023-09-24 16:00:35,036][14087] Updated weights for policy 0, policy_version 13168 (0.0017) |
| [2023-09-24 16:00:35,036][14088] Updated weights for policy 1, policy_version 13152 (0.0015) |
| [2023-09-24 16:00:35,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 6737920. Throughput: 0: 811.1, 1: 811.1. Samples: 529794. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:00:35,294][13256] Avg episode reward: [(0, '4.370'), (1, '4.370')] |
| [2023-09-24 16:00:40,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 6770688. Throughput: 0: 803.9, 1: 803.7. Samples: 539266. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:00:40,294][13256] Avg episode reward: [(0, '4.110'), (1, '4.250')] |
| [2023-09-24 16:00:45,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 6803456. Throughput: 0: 804.1, 1: 804.8. Samples: 548864. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:00:45,294][13256] Avg episode reward: [(0, '4.030'), (1, '4.480')] |
| [2023-09-24 16:00:47,817][14088] Updated weights for policy 1, policy_version 13312 (0.0016) |
| [2023-09-24 16:00:47,818][14087] Updated weights for policy 0, policy_version 13328 (0.0019) |
| [2023-09-24 16:00:50,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 6828032. Throughput: 0: 804.1, 1: 804.0. Samples: 553646. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:00:50,294][13256] Avg episode reward: [(0, '3.780'), (1, '4.490')] |
| [2023-09-24 16:00:50,388][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000013360_3420160.pth... |
| [2023-09-24 16:00:50,398][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000013344_3416064.pth... |
| [2023-09-24 16:00:50,416][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000010336_2646016.pth |
| [2023-09-24 16:00:50,426][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000010320_2641920.pth |
| [2023-09-24 16:00:55,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 6860800. Throughput: 0: 800.6, 1: 800.9. Samples: 563200. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:00:55,294][13256] Avg episode reward: [(0, '3.990'), (1, '4.790')] |
| [2023-09-24 16:01:00,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6428.7). Total num frames: 6893568. Throughput: 0: 802.2, 1: 802.7. Samples: 572809. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:00,294][13256] Avg episode reward: [(0, '4.170'), (1, '4.650')] |
| [2023-09-24 16:01:00,752][14088] Updated weights for policy 1, policy_version 13472 (0.0019) |
| [2023-09-24 16:01:00,752][14087] Updated weights for policy 0, policy_version 13488 (0.0019) |
| [2023-09-24 16:01:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 6926336. Throughput: 0: 798.5, 1: 798.4. Samples: 577536. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:05,293][13256] Avg episode reward: [(0, '4.170'), (1, '4.820')] |
| [2023-09-24 16:01:10,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 6959104. Throughput: 0: 800.4, 1: 800.2. Samples: 587321. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:10,294][13256] Avg episode reward: [(0, '4.370'), (1, '4.540')] |
| [2023-09-24 16:01:13,457][14087] Updated weights for policy 0, policy_version 13648 (0.0015) |
| [2023-09-24 16:01:13,459][14088] Updated weights for policy 1, policy_version 13632 (0.0017) |
| [2023-09-24 16:01:15,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 6991872. Throughput: 0: 800.4, 1: 800.3. Samples: 596889. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:15,294][13256] Avg episode reward: [(0, '4.510'), (1, '4.620')] |
| [2023-09-24 16:01:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7024640. Throughput: 0: 800.6, 1: 801.0. Samples: 601869. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 16:01:20,293][13256] Avg episode reward: [(0, '4.400'), (1, '4.370')] |
| [2023-09-24 16:01:25,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7057408. Throughput: 0: 800.5, 1: 800.5. Samples: 611314. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 16:01:25,294][13256] Avg episode reward: [(0, '4.620'), (1, '4.490')] |
| [2023-09-24 16:01:26,152][14087] Updated weights for policy 0, policy_version 13808 (0.0016) |
| [2023-09-24 16:01:26,152][14088] Updated weights for policy 1, policy_version 13792 (0.0017) |
| [2023-09-24 16:01:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7090176. Throughput: 0: 799.1, 1: 799.0. Samples: 620781. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:30,293][13256] Avg episode reward: [(0, '4.590'), (1, '4.650')] |
| [2023-09-24 16:01:35,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7122944. Throughput: 0: 802.0, 1: 802.1. Samples: 625831. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:35,294][13256] Avg episode reward: [(0, '4.420'), (1, '4.630')] |
| [2023-09-24 16:01:38,933][14088] Updated weights for policy 1, policy_version 13952 (0.0017) |
| [2023-09-24 16:01:38,933][14087] Updated weights for policy 0, policy_version 13968 (0.0015) |
| [2023-09-24 16:01:40,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 7155712. Throughput: 0: 801.4, 1: 801.2. Samples: 635320. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:40,294][13256] Avg episode reward: [(0, '4.390'), (1, '4.510')] |
| [2023-09-24 16:01:45,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7188480. Throughput: 0: 803.5, 1: 803.4. Samples: 645121. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:45,293][13256] Avg episode reward: [(0, '4.460'), (1, '4.680')] |
| [2023-09-24 16:01:50,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 7213056. Throughput: 0: 803.7, 1: 803.9. Samples: 649879. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:50,294][13256] Avg episode reward: [(0, '4.300'), (1, '4.740')] |
| [2023-09-24 16:01:51,581][14088] Updated weights for policy 1, policy_version 14112 (0.0015) |
| [2023-09-24 16:01:51,581][14087] Updated weights for policy 0, policy_version 14128 (0.0017) |
| [2023-09-24 16:01:55,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 7245824. Throughput: 0: 802.3, 1: 802.6. Samples: 659543. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:01:55,294][13256] Avg episode reward: [(0, '4.240'), (1, '5.020')] |
| [2023-09-24 16:01:55,369][13996] Saving new best policy, reward=5.020! |
| [2023-09-24 16:02:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 7278592. Throughput: 0: 808.5, 1: 808.6. Samples: 669657. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:02:00,294][13256] Avg episode reward: [(0, '4.150'), (1, '5.150')] |
| [2023-09-24 16:02:00,424][13996] Saving new best policy, reward=5.150! |
| [2023-09-24 16:02:04,261][14088] Updated weights for policy 1, policy_version 14272 (0.0020) |
| [2023-09-24 16:02:04,261][14087] Updated weights for policy 0, policy_version 14288 (0.0020) |
| [2023-09-24 16:02:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 7311360. Throughput: 0: 803.4, 1: 803.0. Samples: 674154. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:02:05,293][13256] Avg episode reward: [(0, '3.980'), (1, '5.230')] |
| [2023-09-24 16:02:05,303][13996] Saving new best policy, reward=5.230! |
| [2023-09-24 16:02:10,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 7344128. Throughput: 0: 807.7, 1: 808.3. Samples: 684032. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:02:10,294][13256] Avg episode reward: [(0, '3.920'), (1, '5.330')] |
| [2023-09-24 16:02:10,296][13996] Saving new best policy, reward=5.330! |
| [2023-09-24 16:02:15,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 7376896. Throughput: 0: 810.9, 1: 809.4. Samples: 693695. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:02:15,294][13256] Avg episode reward: [(0, '3.930'), (1, '5.260')] |
| [2023-09-24 16:02:17,016][14088] Updated weights for policy 1, policy_version 14432 (0.0017) |
| [2023-09-24 16:02:17,016][14087] Updated weights for policy 0, policy_version 14448 (0.0017) |
| [2023-09-24 16:02:20,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 7409664. Throughput: 0: 805.9, 1: 806.0. Samples: 698368. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:02:20,294][13256] Avg episode reward: [(0, '3.870'), (1, '5.270')] |
| [2023-09-24 16:02:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7442432. Throughput: 0: 809.4, 1: 809.1. Samples: 708151. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:02:25,294][13256] Avg episode reward: [(0, '3.880'), (1, '5.180')] |
| [2023-09-24 16:02:29,646][14088] Updated weights for policy 1, policy_version 14592 (0.0018) |
| [2023-09-24 16:02:29,647][14087] Updated weights for policy 0, policy_version 14608 (0.0017) |
| [2023-09-24 16:02:30,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7475200. Throughput: 0: 807.9, 1: 807.8. Samples: 717826. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:02:30,294][13256] Avg episode reward: [(0, '3.980'), (1, '4.790')] |
| [2023-09-24 16:02:35,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7507968. Throughput: 0: 811.2, 1: 810.1. Samples: 722839. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:02:35,294][13256] Avg episode reward: [(0, '4.140'), (1, '4.710')] |
| [2023-09-24 16:02:40,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7540736. Throughput: 0: 811.0, 1: 811.4. Samples: 732548. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:02:40,294][13256] Avg episode reward: [(0, '4.190'), (1, '4.420')] |
| [2023-09-24 16:02:42,290][14088] Updated weights for policy 1, policy_version 14752 (0.0013) |
| [2023-09-24 16:02:42,291][14087] Updated weights for policy 0, policy_version 14768 (0.0017) |
| [2023-09-24 16:02:45,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7573504. Throughput: 0: 803.8, 1: 803.0. Samples: 741964. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:02:45,293][13256] Avg episode reward: [(0, '4.250'), (1, '4.460')] |
| [2023-09-24 16:02:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 7606272. Throughput: 0: 807.6, 1: 807.7. Samples: 746842. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:02:50,294][13256] Avg episode reward: [(0, '4.390'), (1, '4.300')] |
| [2023-09-24 16:02:50,304][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000014864_3805184.pth... |
| [2023-09-24 16:02:50,304][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000014848_3801088.pth... |
| [2023-09-24 16:02:50,335][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000011824_3026944.pth |
| [2023-09-24 16:02:50,341][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000011840_3031040.pth |
| [2023-09-24 16:02:55,064][14087] Updated weights for policy 0, policy_version 14928 (0.0019) |
| [2023-09-24 16:02:55,064][14088] Updated weights for policy 1, policy_version 14912 (0.0018) |
| [2023-09-24 16:02:55,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 7639040. Throughput: 0: 804.7, 1: 804.1. Samples: 756428. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:02:55,294][13256] Avg episode reward: [(0, '4.370'), (1, '4.440')] |
| [2023-09-24 16:03:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 7671808. Throughput: 0: 804.1, 1: 805.3. Samples: 766118. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:03:00,294][13256] Avg episode reward: [(0, '4.240'), (1, '4.510')] |
| [2023-09-24 16:03:05,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 7704576. Throughput: 0: 808.6, 1: 808.0. Samples: 771114. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:03:05,294][13256] Avg episode reward: [(0, '4.110'), (1, '4.600')] |
| [2023-09-24 16:03:07,601][14088] Updated weights for policy 1, policy_version 15072 (0.0018) |
| [2023-09-24 16:03:07,602][14087] Updated weights for policy 0, policy_version 15088 (0.0017) |
| [2023-09-24 16:03:10,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 7737344. Throughput: 0: 807.7, 1: 807.7. Samples: 780845. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:03:10,294][13256] Avg episode reward: [(0, '3.920'), (1, '4.490')] |
| [2023-09-24 16:03:15,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 7770112. Throughput: 0: 807.9, 1: 808.0. Samples: 790538. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:03:15,294][13256] Avg episode reward: [(0, '3.990'), (1, '4.470')] |
| [2023-09-24 16:03:20,281][14088] Updated weights for policy 1, policy_version 15232 (0.0016) |
| [2023-09-24 16:03:20,282][14087] Updated weights for policy 0, policy_version 15248 (0.0017) |
| [2023-09-24 16:03:20,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 7802880. Throughput: 0: 806.6, 1: 807.0. Samples: 795450. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:03:20,294][13256] Avg episode reward: [(0, '3.870'), (1, '4.610')] |
| [2023-09-24 16:03:25,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7827456. Throughput: 0: 803.5, 1: 803.7. Samples: 804872. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:03:25,294][13256] Avg episode reward: [(0, '3.970'), (1, '4.620')] |
| [2023-09-24 16:03:30,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 7860224. Throughput: 0: 810.5, 1: 811.0. Samples: 814934. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:03:30,294][13256] Avg episode reward: [(0, '4.000'), (1, '4.750')] |
| [2023-09-24 16:03:33,159][14088] Updated weights for policy 1, policy_version 15392 (0.0017) |
| [2023-09-24 16:03:33,159][14087] Updated weights for policy 0, policy_version 15408 (0.0017) |
| [2023-09-24 16:03:35,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7892992. Throughput: 0: 804.1, 1: 804.2. Samples: 819219. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:03:35,294][13256] Avg episode reward: [(0, '4.080'), (1, '4.970')] |
| [2023-09-24 16:03:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7925760. Throughput: 0: 808.7, 1: 808.8. Samples: 829219. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:03:40,294][13256] Avg episode reward: [(0, '4.240'), (1, '4.880')] |
| [2023-09-24 16:03:45,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7958528. Throughput: 0: 809.7, 1: 809.2. Samples: 838968. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:03:45,294][13256] Avg episode reward: [(0, '4.290'), (1, '4.630')] |
| [2023-09-24 16:03:45,790][14088] Updated weights for policy 1, policy_version 15552 (0.0018) |
| [2023-09-24 16:03:45,790][14087] Updated weights for policy 0, policy_version 15568 (0.0016) |
| [2023-09-24 16:03:50,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 7991296. Throughput: 0: 807.1, 1: 807.6. Samples: 843776. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:03:50,294][13256] Avg episode reward: [(0, '4.320'), (1, '4.710')] |
| [2023-09-24 16:03:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8024064. Throughput: 0: 810.1, 1: 809.7. Samples: 853736. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:03:55,294][13256] Avg episode reward: [(0, '4.640'), (1, '4.540')] |
| [2023-09-24 16:03:58,294][14088] Updated weights for policy 1, policy_version 15712 (0.0020) |
| [2023-09-24 16:03:58,294][14087] Updated weights for policy 0, policy_version 15728 (0.0019) |
| [2023-09-24 16:04:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8056832. Throughput: 0: 810.2, 1: 809.9. Samples: 863444. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:04:00,294][13256] Avg episode reward: [(0, '4.650'), (1, '4.170')] |
| [2023-09-24 16:04:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 8089600. Throughput: 0: 809.1, 1: 809.7. Samples: 868296. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:04:05,294][13256] Avg episode reward: [(0, '4.760'), (1, '4.340')] |
| [2023-09-24 16:04:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8122368. Throughput: 0: 810.3, 1: 808.9. Samples: 877739. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:04:10,293][13256] Avg episode reward: [(0, '4.740'), (1, '4.350')] |
| [2023-09-24 16:04:11,101][14087] Updated weights for policy 0, policy_version 15888 (0.0018) |
| [2023-09-24 16:04:11,101][14088] Updated weights for policy 1, policy_version 15872 (0.0018) |
| [2023-09-24 16:04:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8155136. Throughput: 0: 805.0, 1: 805.5. Samples: 887409. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:04:15,294][13256] Avg episode reward: [(0, '5.000'), (1, '4.410')] |
| [2023-09-24 16:04:20,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8187904. Throughput: 0: 815.2, 1: 814.7. Samples: 892565. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:04:20,294][13256] Avg episode reward: [(0, '4.520'), (1, '4.550')] |
| [2023-09-24 16:04:23,722][14087] Updated weights for policy 0, policy_version 16048 (0.0015) |
| [2023-09-24 16:04:23,722][14088] Updated weights for policy 1, policy_version 16032 (0.0015) |
| [2023-09-24 16:04:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 8220672. Throughput: 0: 809.1, 1: 809.1. Samples: 902038. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:04:25,294][13256] Avg episode reward: [(0, '4.300'), (1, '4.730')] |
| [2023-09-24 16:04:30,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 8253440. Throughput: 0: 806.4, 1: 806.7. Samples: 911555. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:04:30,294][13256] Avg episode reward: [(0, '4.190'), (1, '5.070')] |
| [2023-09-24 16:04:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 8286208. Throughput: 0: 808.8, 1: 808.1. Samples: 916538. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:04:35,294][13256] Avg episode reward: [(0, '3.880'), (1, '5.050')] |
| [2023-09-24 16:04:36,430][14088] Updated weights for policy 1, policy_version 16192 (0.0016) |
| [2023-09-24 16:04:36,431][14087] Updated weights for policy 0, policy_version 16208 (0.0017) |
| [2023-09-24 16:04:40,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 8318976. Throughput: 0: 803.9, 1: 804.5. Samples: 926114. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:04:40,294][13256] Avg episode reward: [(0, '4.120'), (1, '5.130')] |
| [2023-09-24 16:04:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 8351744. Throughput: 0: 805.3, 1: 805.6. Samples: 935937. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:04:45,294][13256] Avg episode reward: [(0, '4.360'), (1, '4.990')] |
| [2023-09-24 16:04:49,011][14088] Updated weights for policy 1, policy_version 16352 (0.0018) |
| [2023-09-24 16:04:49,011][14087] Updated weights for policy 0, policy_version 16368 (0.0017) |
| [2023-09-24 16:04:50,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 8384512. Throughput: 0: 806.6, 1: 806.4. Samples: 940881. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:04:50,294][13256] Avg episode reward: [(0, '4.550'), (1, '4.880')] |
| [2023-09-24 16:04:50,305][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000016384_4194304.pth... |
| [2023-09-24 16:04:50,306][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000016368_4190208.pth... |
| [2023-09-24 16:04:50,334][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000013360_3420160.pth |
| [2023-09-24 16:04:50,339][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000013344_3416064.pth |
| [2023-09-24 16:04:55,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8409088. Throughput: 0: 807.7, 1: 808.6. Samples: 950470. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:04:55,294][13256] Avg episode reward: [(0, '4.610'), (1, '5.040')] |
| [2023-09-24 16:05:00,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8441856. Throughput: 0: 811.7, 1: 812.4. Samples: 960491. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:05:00,294][13256] Avg episode reward: [(0, '4.710'), (1, '4.700')] |
| [2023-09-24 16:05:01,677][14088] Updated weights for policy 1, policy_version 16512 (0.0018) |
| [2023-09-24 16:05:01,677][14087] Updated weights for policy 0, policy_version 16528 (0.0015) |
| [2023-09-24 16:05:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8474624. Throughput: 0: 805.1, 1: 805.4. Samples: 965040. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) |
| [2023-09-24 16:05:05,294][13256] Avg episode reward: [(0, '4.690'), (1, '5.000')] |
| [2023-09-24 16:05:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8507392. Throughput: 0: 808.1, 1: 809.2. Samples: 974816. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) |
| [2023-09-24 16:05:10,294][13256] Avg episode reward: [(0, '4.560'), (1, '5.110')] |
| [2023-09-24 16:05:14,635][14088] Updated weights for policy 1, policy_version 16672 (0.0017) |
| [2023-09-24 16:05:14,635][14087] Updated weights for policy 0, policy_version 16688 (0.0018) |
| [2023-09-24 16:05:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8540160. Throughput: 0: 805.8, 1: 805.6. Samples: 984070. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:05:15,294][13256] Avg episode reward: [(0, '4.610'), (1, '4.980')] |
| [2023-09-24 16:05:20,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 8572928. Throughput: 0: 805.6, 1: 806.0. Samples: 989062. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:05:20,294][13256] Avg episode reward: [(0, '4.680'), (1, '4.970')] |
| [2023-09-24 16:05:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8605696. Throughput: 0: 806.5, 1: 806.4. Samples: 998693. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:05:25,294][13256] Avg episode reward: [(0, '4.810'), (1, '4.900')] |
| [2023-09-24 16:05:27,207][14088] Updated weights for policy 1, policy_version 16832 (0.0017) |
| [2023-09-24 16:05:27,207][14087] Updated weights for policy 0, policy_version 16848 (0.0015) |
| [2023-09-24 16:05:30,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8638464. Throughput: 0: 805.9, 1: 805.7. Samples: 1008460. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:05:30,294][13256] Avg episode reward: [(0, '4.950'), (1, '4.850')] |
| [2023-09-24 16:05:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8671232. Throughput: 0: 806.2, 1: 805.8. Samples: 1013421. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:05:35,294][13256] Avg episode reward: [(0, '4.710'), (1, '4.770')] |
| [2023-09-24 16:05:40,024][14088] Updated weights for policy 1, policy_version 16992 (0.0017) |
| [2023-09-24 16:05:40,025][14087] Updated weights for policy 0, policy_version 17008 (0.0019) |
| [2023-09-24 16:05:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 8704000. Throughput: 0: 802.9, 1: 803.3. Samples: 1022747. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:05:40,294][13256] Avg episode reward: [(0, '4.810'), (1, '4.950')] |
| [2023-09-24 16:05:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 8736768. Throughput: 0: 798.1, 1: 797.2. Samples: 1032276. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:05:45,294][13256] Avg episode reward: [(0, '4.980'), (1, '4.950')] |
| [2023-09-24 16:05:50,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 8769536. Throughput: 0: 802.6, 1: 802.8. Samples: 1037281. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:05:50,294][13256] Avg episode reward: [(0, '4.870'), (1, '5.160')] |
| [2023-09-24 16:05:52,883][14088] Updated weights for policy 1, policy_version 17152 (0.0018) |
| [2023-09-24 16:05:52,883][14087] Updated weights for policy 0, policy_version 17168 (0.0016) |
| [2023-09-24 16:05:55,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8794112. Throughput: 0: 798.4, 1: 797.2. Samples: 1046616. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:05:55,293][13256] Avg episode reward: [(0, '4.660'), (1, '4.820')] |
| [2023-09-24 16:06:00,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8826880. Throughput: 0: 807.4, 1: 808.1. Samples: 1056768. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:06:00,294][13256] Avg episode reward: [(0, '4.680'), (1, '5.030')] |
| [2023-09-24 16:06:05,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8859648. Throughput: 0: 804.0, 1: 803.8. Samples: 1061411. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) |
| [2023-09-24 16:06:05,294][13256] Avg episode reward: [(0, '4.580'), (1, '5.040')] |
| [2023-09-24 16:06:05,382][14088] Updated weights for policy 1, policy_version 17312 (0.0016) |
| [2023-09-24 16:06:05,383][14087] Updated weights for policy 0, policy_version 17328 (0.0018) |
| [2023-09-24 16:06:10,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8892416. Throughput: 0: 804.4, 1: 804.8. Samples: 1071109. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) |
| [2023-09-24 16:06:10,294][13256] Avg episode reward: [(0, '4.470'), (1, '4.840')] |
| [2023-09-24 16:06:15,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8925184. Throughput: 0: 805.9, 1: 805.3. Samples: 1080963. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) |
| [2023-09-24 16:06:15,293][13256] Avg episode reward: [(0, '4.510'), (1, '4.660')] |
| [2023-09-24 16:06:18,175][14088] Updated weights for policy 1, policy_version 17472 (0.0018) |
| [2023-09-24 16:06:18,176][14087] Updated weights for policy 0, policy_version 17488 (0.0017) |
| [2023-09-24 16:06:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8957952. Throughput: 0: 800.0, 1: 800.6. Samples: 1085449. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:06:20,293][13256] Avg episode reward: [(0, '4.690'), (1, '4.570')] |
| [2023-09-24 16:06:25,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 8990720. Throughput: 0: 809.4, 1: 809.3. Samples: 1095587. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:06:25,294][13256] Avg episode reward: [(0, '4.720'), (1, '4.970')] |
| [2023-09-24 16:06:30,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9023488. Throughput: 0: 808.5, 1: 806.7. Samples: 1104959. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:06:30,294][13256] Avg episode reward: [(0, '4.490'), (1, '4.420')] |
| [2023-09-24 16:06:30,950][14088] Updated weights for policy 1, policy_version 17632 (0.0017) |
| [2023-09-24 16:06:30,950][14087] Updated weights for policy 0, policy_version 17648 (0.0017) |
| [2023-09-24 16:06:35,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9056256. Throughput: 0: 805.2, 1: 807.2. Samples: 1109837. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:06:35,293][13256] Avg episode reward: [(0, '4.750'), (1, '4.460')] |
| [2023-09-24 16:06:40,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9089024. Throughput: 0: 807.8, 1: 807.7. Samples: 1119314. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:06:40,293][13256] Avg episode reward: [(0, '4.800'), (1, '4.610')] |
| [2023-09-24 16:06:43,720][14088] Updated weights for policy 1, policy_version 17792 (0.0017) |
| [2023-09-24 16:06:43,720][14087] Updated weights for policy 0, policy_version 17808 (0.0016) |
| [2023-09-24 16:06:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 9121792. Throughput: 0: 800.5, 1: 800.2. Samples: 1128802. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:06:45,294][13256] Avg episode reward: [(0, '4.950'), (1, '4.640')] |
| [2023-09-24 16:06:50,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 9154560. Throughput: 0: 805.1, 1: 804.7. Samples: 1133853. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:06:50,293][13256] Avg episode reward: [(0, '4.820'), (1, '4.490')] |
| [2023-09-24 16:06:50,302][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000017872_4575232.pth... |
| [2023-09-24 16:06:50,303][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000017888_4579328.pth... |
| [2023-09-24 16:06:50,335][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000014848_3801088.pth |
| [2023-09-24 16:06:50,338][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000014864_3805184.pth |
| [2023-09-24 16:06:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9187328. Throughput: 0: 805.3, 1: 804.9. Samples: 1143568. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:06:55,294][13256] Avg episode reward: [(0, '4.960'), (1, '4.400')] |
| [2023-09-24 16:06:56,274][14087] Updated weights for policy 0, policy_version 17968 (0.0015) |
| [2023-09-24 16:06:56,275][14088] Updated weights for policy 1, policy_version 17952 (0.0016) |
| [2023-09-24 16:07:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9220096. Throughput: 0: 802.4, 1: 803.2. Samples: 1153214. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:07:00,294][13256] Avg episode reward: [(0, '5.110'), (1, '4.450')] |
| [2023-09-24 16:07:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9252864. Throughput: 0: 809.4, 1: 808.5. Samples: 1158255. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:07:05,294][13256] Avg episode reward: [(0, '5.360'), (1, '4.640')] |
| [2023-09-24 16:07:05,302][13827] Saving new best policy, reward=5.360! |
| [2023-09-24 16:07:08,830][14088] Updated weights for policy 1, policy_version 18112 (0.0019) |
| [2023-09-24 16:07:08,830][14087] Updated weights for policy 0, policy_version 18128 (0.0017) |
| [2023-09-24 16:07:10,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9285632. Throughput: 0: 804.9, 1: 804.6. Samples: 1168013. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:07:10,294][13256] Avg episode reward: [(0, '5.430'), (1, '4.470')] |
| [2023-09-24 16:07:10,295][13827] Saving new best policy, reward=5.430! |
| [2023-09-24 16:07:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9318400. Throughput: 0: 806.1, 1: 808.3. Samples: 1177608. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:07:15,294][13256] Avg episode reward: [(0, '5.340'), (1, '4.700')] |
| [2023-09-24 16:07:20,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9351168. Throughput: 0: 809.7, 1: 807.7. Samples: 1182620. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:07:20,294][13256] Avg episode reward: [(0, '5.230'), (1, '4.940')] |
| [2023-09-24 16:07:21,494][14087] Updated weights for policy 0, policy_version 18288 (0.0018) |
| [2023-09-24 16:07:21,495][14088] Updated weights for policy 1, policy_version 18272 (0.0015) |
| [2023-09-24 16:07:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9383936. Throughput: 0: 809.9, 1: 809.8. Samples: 1192197. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:07:25,294][13256] Avg episode reward: [(0, '5.120'), (1, '5.020')] |
| [2023-09-24 16:07:30,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9408512. Throughput: 0: 815.1, 1: 814.8. Samples: 1202151. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:07:30,294][13256] Avg episode reward: [(0, '4.590'), (1, '5.040')] |
| [2023-09-24 16:07:34,227][14088] Updated weights for policy 1, policy_version 18432 (0.0015) |
| [2023-09-24 16:07:34,228][14087] Updated weights for policy 0, policy_version 18448 (0.0017) |
| [2023-09-24 16:07:35,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9441280. Throughput: 0: 808.6, 1: 808.9. Samples: 1206641. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:07:35,293][13256] Avg episode reward: [(0, '4.600'), (1, '5.100')] |
| [2023-09-24 16:07:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 9474048. Throughput: 0: 810.2, 1: 810.7. Samples: 1216512. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:07:40,294][13256] Avg episode reward: [(0, '4.480'), (1, '4.890')] |
| [2023-09-24 16:07:45,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 9506816. Throughput: 0: 806.0, 1: 805.6. Samples: 1225736. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:07:45,294][13256] Avg episode reward: [(0, '4.330'), (1, '4.760')] |
| [2023-09-24 16:07:47,368][14088] Updated weights for policy 1, policy_version 18592 (0.0014) |
| [2023-09-24 16:07:47,368][14087] Updated weights for policy 0, policy_version 18608 (0.0017) |
| [2023-09-24 16:07:50,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9539584. Throughput: 0: 800.9, 1: 801.4. Samples: 1230356. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:07:50,294][13256] Avg episode reward: [(0, '4.220'), (1, '4.590')] |
| [2023-09-24 16:07:55,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9572352. Throughput: 0: 800.0, 1: 800.2. Samples: 1240023. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:07:55,293][13256] Avg episode reward: [(0, '4.600'), (1, '4.800')] |
| [2023-09-24 16:07:59,935][14088] Updated weights for policy 1, policy_version 18752 (0.0016) |
| [2023-09-24 16:07:59,935][14087] Updated weights for policy 0, policy_version 18768 (0.0016) |
| [2023-09-24 16:08:00,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 9605120. Throughput: 0: 801.6, 1: 801.2. Samples: 1249736. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:08:00,294][13256] Avg episode reward: [(0, '4.470'), (1, '4.960')] |
| [2023-09-24 16:08:05,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9637888. Throughput: 0: 802.6, 1: 802.5. Samples: 1254849. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:08:05,294][13256] Avg episode reward: [(0, '4.660'), (1, '5.040')] |
| [2023-09-24 16:08:10,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9670656. Throughput: 0: 804.8, 1: 805.3. Samples: 1264653. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:08:10,293][13256] Avg episode reward: [(0, '4.780'), (1, '5.050')] |
| [2023-09-24 16:08:12,571][14087] Updated weights for policy 0, policy_version 18928 (0.0017) |
| [2023-09-24 16:08:12,572][14088] Updated weights for policy 1, policy_version 18912 (0.0015) |
| [2023-09-24 16:08:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9703424. Throughput: 0: 796.6, 1: 797.1. Samples: 1273867. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:08:15,294][13256] Avg episode reward: [(0, '5.140'), (1, '5.200')] |
| [2023-09-24 16:08:20,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 9736192. Throughput: 0: 803.1, 1: 803.3. Samples: 1278930. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:08:20,294][13256] Avg episode reward: [(0, '5.240'), (1, '5.090')] |
| [2023-09-24 16:08:25,293][13256] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6456.4). Total num frames: 9764864. Throughput: 0: 799.2, 1: 798.5. Samples: 1288408. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:08:25,294][13256] Avg episode reward: [(0, '5.220'), (1, '4.990')] |
| [2023-09-24 16:08:25,331][14087] Updated weights for policy 0, policy_version 19088 (0.0017) |
| [2023-09-24 16:08:25,332][14088] Updated weights for policy 1, policy_version 19072 (0.0018) |
| [2023-09-24 16:08:30,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9793536. Throughput: 0: 807.4, 1: 808.0. Samples: 1298432. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:08:30,294][13256] Avg episode reward: [(0, '5.060'), (1, '4.770')] |
| [2023-09-24 16:08:35,293][13256] Fps is (10 sec: 6143.9, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 9826304. Throughput: 0: 807.5, 1: 807.3. Samples: 1303022. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:08:35,294][13256] Avg episode reward: [(0, '4.880'), (1, '5.030')] |
| [2023-09-24 16:08:37,902][14087] Updated weights for policy 0, policy_version 19248 (0.0019) |
| [2023-09-24 16:08:37,902][14088] Updated weights for policy 1, policy_version 19232 (0.0017) |
| [2023-09-24 16:08:40,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9859072. Throughput: 0: 808.7, 1: 808.6. Samples: 1312800. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:08:40,294][13256] Avg episode reward: [(0, '4.960'), (1, '5.090')] |
| [2023-09-24 16:08:45,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9891840. Throughput: 0: 814.0, 1: 813.8. Samples: 1322987. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:08:45,294][13256] Avg episode reward: [(0, '4.920'), (1, '5.200')] |
| [2023-09-24 16:08:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9924608. Throughput: 0: 808.3, 1: 807.7. Samples: 1327571. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:08:50,294][13256] Avg episode reward: [(0, '4.650'), (1, '5.070')] |
| [2023-09-24 16:08:50,418][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000019408_4968448.pth... |
| [2023-09-24 16:08:50,434][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000019392_4964352.pth... |
| [2023-09-24 16:08:50,436][14088] Updated weights for policy 1, policy_version 19392 (0.0019) |
| [2023-09-24 16:08:50,436][14087] Updated weights for policy 0, policy_version 19408 (0.0017) |
| [2023-09-24 16:08:50,446][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000016384_4194304.pth |
| [2023-09-24 16:08:50,469][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000016368_4190208.pth |
| [2023-09-24 16:08:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 9957376. Throughput: 0: 807.6, 1: 807.9. Samples: 1337353. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:08:55,294][13256] Avg episode reward: [(0, '4.790'), (1, '5.090')] |
| [2023-09-24 16:09:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 9990144. Throughput: 0: 816.8, 1: 816.2. Samples: 1347353. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:09:00,294][13256] Avg episode reward: [(0, '4.790'), (1, '4.880')] |
| [2023-09-24 16:09:03,048][14088] Updated weights for policy 1, policy_version 19552 (0.0014) |
| [2023-09-24 16:09:03,048][14087] Updated weights for policy 0, policy_version 19568 (0.0018) |
| [2023-09-24 16:09:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 10022912. Throughput: 0: 810.9, 1: 810.6. Samples: 1351900. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:09:05,294][13256] Avg episode reward: [(0, '4.820'), (1, '4.770')] |
| [2023-09-24 16:09:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 10055680. Throughput: 0: 816.4, 1: 817.2. Samples: 1361920. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:09:10,294][13256] Avg episode reward: [(0, '4.780'), (1, '4.620')] |
| [2023-09-24 16:09:15,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 10088448. Throughput: 0: 816.1, 1: 815.4. Samples: 1371848. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:09:15,294][13256] Avg episode reward: [(0, '4.920'), (1, '4.670')] |
| [2023-09-24 16:09:15,543][14087] Updated weights for policy 0, policy_version 19728 (0.0017) |
| [2023-09-24 16:09:15,543][14088] Updated weights for policy 1, policy_version 19712 (0.0016) |
| [2023-09-24 16:09:20,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 10121216. Throughput: 0: 816.9, 1: 817.1. Samples: 1376554. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:09:20,294][13256] Avg episode reward: [(0, '4.760'), (1, '4.750')] |
| [2023-09-24 16:09:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6485.3, 300 sec: 6442.5). Total num frames: 10153984. Throughput: 0: 818.6, 1: 817.4. Samples: 1386420. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:09:25,294][13256] Avg episode reward: [(0, '4.760'), (1, '4.950')] |
| [2023-09-24 16:09:28,272][14088] Updated weights for policy 1, policy_version 19872 (0.0020) |
| [2023-09-24 16:09:28,272][14087] Updated weights for policy 0, policy_version 19888 (0.0019) |
| [2023-09-24 16:09:30,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 10186752. Throughput: 0: 810.5, 1: 810.8. Samples: 1395945. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) |
| [2023-09-24 16:09:30,294][13256] Avg episode reward: [(0, '4.610'), (1, '4.860')] |
| [2023-09-24 16:09:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 10219520. Throughput: 0: 813.5, 1: 814.5. Samples: 1400832. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) |
| [2023-09-24 16:09:35,294][13256] Avg episode reward: [(0, '4.410'), (1, '4.850')] |
| [2023-09-24 16:09:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 10252288. Throughput: 0: 813.3, 1: 812.7. Samples: 1410522. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) |
| [2023-09-24 16:09:40,293][13256] Avg episode reward: [(0, '4.410'), (1, '4.950')] |
| [2023-09-24 16:09:40,990][14087] Updated weights for policy 0, policy_version 20048 (0.0017) |
| [2023-09-24 16:09:40,990][14088] Updated weights for policy 1, policy_version 20032 (0.0021) |
| [2023-09-24 16:09:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 10285056. Throughput: 0: 804.6, 1: 805.4. Samples: 1419803. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:09:45,294][13256] Avg episode reward: [(0, '4.600'), (1, '4.940')] |
| [2023-09-24 16:09:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10317824. Throughput: 0: 811.6, 1: 811.2. Samples: 1424925. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:09:50,294][13256] Avg episode reward: [(0, '4.600'), (1, '5.090')] |
| [2023-09-24 16:09:53,743][14087] Updated weights for policy 0, policy_version 20208 (0.0014) |
| [2023-09-24 16:09:53,744][14088] Updated weights for policy 1, policy_version 20192 (0.0016) |
| [2023-09-24 16:09:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10350592. Throughput: 0: 806.5, 1: 805.4. Samples: 1434456. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:09:55,294][13256] Avg episode reward: [(0, '4.570'), (1, '4.990')] |
| [2023-09-24 16:10:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10383360. Throughput: 0: 802.9, 1: 803.1. Samples: 1444118. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:10:00,294][13256] Avg episode reward: [(0, '4.550'), (1, '5.030')] |
| [2023-09-24 16:10:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10416128. Throughput: 0: 806.3, 1: 805.8. Samples: 1449097. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:10:05,294][13256] Avg episode reward: [(0, '4.460'), (1, '4.970')] |
| [2023-09-24 16:10:06,336][14087] Updated weights for policy 0, policy_version 20368 (0.0017) |
| [2023-09-24 16:10:06,336][14088] Updated weights for policy 1, policy_version 20352 (0.0018) |
| [2023-09-24 16:10:10,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10448896. Throughput: 0: 803.1, 1: 805.0. Samples: 1458782. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:10:10,294][13256] Avg episode reward: [(0, '4.370'), (1, '4.880')] |
| [2023-09-24 16:10:15,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10481664. Throughput: 0: 805.2, 1: 805.3. Samples: 1468417. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:10:15,294][13256] Avg episode reward: [(0, '4.170'), (1, '4.800')] |
| [2023-09-24 16:10:18,978][14088] Updated weights for policy 1, policy_version 20512 (0.0018) |
| [2023-09-24 16:10:18,978][14087] Updated weights for policy 0, policy_version 20528 (0.0017) |
| [2023-09-24 16:10:20,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10514432. Throughput: 0: 806.4, 1: 805.8. Samples: 1473381. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:10:20,294][13256] Avg episode reward: [(0, '4.520'), (1, '4.930')] |
| [2023-09-24 16:10:25,293][13256] Fps is (10 sec: 6143.9, 60 sec: 6485.3, 300 sec: 6456.4). Total num frames: 10543104. Throughput: 0: 806.0, 1: 806.2. Samples: 1483068. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:10:25,294][13256] Avg episode reward: [(0, '4.540'), (1, '5.000')] |
| [2023-09-24 16:10:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10579968. Throughput: 0: 813.3, 1: 813.2. Samples: 1492992. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:10:30,294][13256] Avg episode reward: [(0, '4.470'), (1, '5.030')] |
| [2023-09-24 16:10:31,576][14087] Updated weights for policy 0, policy_version 20688 (0.0017) |
| [2023-09-24 16:10:31,576][14088] Updated weights for policy 1, policy_version 20672 (0.0015) |
| [2023-09-24 16:10:35,293][13256] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 10604544. Throughput: 0: 808.2, 1: 808.8. Samples: 1497690. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:10:35,294][13256] Avg episode reward: [(0, '4.610'), (1, '4.990')] |
| [2023-09-24 16:10:40,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 10637312. Throughput: 0: 809.7, 1: 810.4. Samples: 1507362. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:10:40,293][13256] Avg episode reward: [(0, '4.650'), (1, '4.950')] |
| [2023-09-24 16:10:44,161][14087] Updated weights for policy 0, policy_version 20848 (0.0017) |
| [2023-09-24 16:10:44,161][14088] Updated weights for policy 1, policy_version 20832 (0.0016) |
| [2023-09-24 16:10:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 10670080. Throughput: 0: 814.9, 1: 816.1. Samples: 1517513. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:10:45,294][13256] Avg episode reward: [(0, '4.640'), (1, '4.760')] |
| [2023-09-24 16:10:50,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10702848. Throughput: 0: 809.6, 1: 810.0. Samples: 1521977. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:10:50,293][13256] Avg episode reward: [(0, '4.750'), (1, '4.440')] |
| [2023-09-24 16:10:50,301][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000020912_5353472.pth... |
| [2023-09-24 16:10:50,301][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000020896_5349376.pth... |
| [2023-09-24 16:10:50,329][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000017888_4579328.pth |
| [2023-09-24 16:10:50,336][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000017872_4575232.pth |
| [2023-09-24 16:10:55,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10735616. Throughput: 0: 812.6, 1: 812.4. Samples: 1531904. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:10:55,294][13256] Avg episode reward: [(0, '4.790'), (1, '4.210')] |
| [2023-09-24 16:10:56,791][14087] Updated weights for policy 0, policy_version 21008 (0.0016) |
| [2023-09-24 16:10:56,791][14088] Updated weights for policy 1, policy_version 20992 (0.0018) |
| [2023-09-24 16:11:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10768384. Throughput: 0: 814.6, 1: 814.2. Samples: 1541712. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:11:00,294][13256] Avg episode reward: [(0, '4.720'), (1, '4.210')] |
| [2023-09-24 16:11:05,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10801152. Throughput: 0: 809.3, 1: 809.9. Samples: 1546245. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:11:05,294][13256] Avg episode reward: [(0, '4.600'), (1, '4.070')] |
| [2023-09-24 16:11:09,504][14087] Updated weights for policy 0, policy_version 21168 (0.0015) |
| [2023-09-24 16:11:09,505][14088] Updated weights for policy 1, policy_version 21152 (0.0018) |
| [2023-09-24 16:11:10,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 10833920. Throughput: 0: 813.6, 1: 813.5. Samples: 1556289. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:11:10,294][13256] Avg episode reward: [(0, '4.400'), (1, '4.240')] |
| [2023-09-24 16:11:15,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10866688. Throughput: 0: 810.2, 1: 810.0. Samples: 1565902. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:11:15,294][13256] Avg episode reward: [(0, '4.670'), (1, '4.240')] |
| [2023-09-24 16:11:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10899456. Throughput: 0: 812.3, 1: 812.7. Samples: 1570816. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:11:20,293][13256] Avg episode reward: [(0, '4.580'), (1, '4.510')] |
| [2023-09-24 16:11:22,226][14087] Updated weights for policy 0, policy_version 21328 (0.0016) |
| [2023-09-24 16:11:22,227][14088] Updated weights for policy 1, policy_version 21312 (0.0018) |
| [2023-09-24 16:11:25,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6485.3, 300 sec: 6470.3). Total num frames: 10932224. Throughput: 0: 811.5, 1: 811.5. Samples: 1580396. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:11:25,294][13256] Avg episode reward: [(0, '4.800'), (1, '4.750')] |
| [2023-09-24 16:11:30,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10964992. Throughput: 0: 807.0, 1: 806.3. Samples: 1590110. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:11:30,294][13256] Avg episode reward: [(0, '4.910'), (1, '4.850')] |
| [2023-09-24 16:11:34,787][14088] Updated weights for policy 1, policy_version 21472 (0.0017) |
| [2023-09-24 16:11:34,787][14087] Updated weights for policy 0, policy_version 21488 (0.0019) |
| [2023-09-24 16:11:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10997760. Throughput: 0: 812.9, 1: 812.7. Samples: 1595127. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:11:35,294][13256] Avg episode reward: [(0, '4.850'), (1, '4.870')] |
| [2023-09-24 16:11:40,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 11030528. Throughput: 0: 810.4, 1: 809.8. Samples: 1604814. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:11:40,294][13256] Avg episode reward: [(0, '4.930'), (1, '4.980')] |
| [2023-09-24 16:11:45,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 11063296. Throughput: 0: 804.8, 1: 804.9. Samples: 1614148. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:11:45,294][13256] Avg episode reward: [(0, '4.730'), (1, '4.840')] |
| [2023-09-24 16:11:47,515][14088] Updated weights for policy 1, policy_version 21632 (0.0016) |
| [2023-09-24 16:11:47,516][14087] Updated weights for policy 0, policy_version 21648 (0.0017) |
| [2023-09-24 16:11:50,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 11096064. Throughput: 0: 810.8, 1: 810.5. Samples: 1619206. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:11:50,294][13256] Avg episode reward: [(0, '4.540'), (1, '4.760')] |
| [2023-09-24 16:11:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 11128832. Throughput: 0: 806.6, 1: 806.7. Samples: 1628887. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:11:55,294][13256] Avg episode reward: [(0, '4.350'), (1, '4.810')] |
| [2023-09-24 16:12:00,157][14088] Updated weights for policy 1, policy_version 21792 (0.0015) |
| [2023-09-24 16:12:00,158][14087] Updated weights for policy 0, policy_version 21808 (0.0018) |
| [2023-09-24 16:12:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 11161600. Throughput: 0: 805.8, 1: 805.8. Samples: 1638423. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:00,294][13256] Avg episode reward: [(0, '4.160'), (1, '4.850')] |
| [2023-09-24 16:12:05,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 11194368. Throughput: 0: 806.0, 1: 805.9. Samples: 1643353. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:05,294][13256] Avg episode reward: [(0, '4.030'), (1, '4.720')] |
| [2023-09-24 16:12:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 11227136. Throughput: 0: 807.4, 1: 807.1. Samples: 1653047. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:10,294][13256] Avg episode reward: [(0, '4.140'), (1, '4.700')] |
| [2023-09-24 16:12:12,783][14088] Updated weights for policy 1, policy_version 21952 (0.0019) |
| [2023-09-24 16:12:12,783][14087] Updated weights for policy 0, policy_version 21968 (0.0019) |
| [2023-09-24 16:12:15,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 11251712. Throughput: 0: 809.6, 1: 809.6. Samples: 1662976. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:15,294][13256] Avg episode reward: [(0, '4.240'), (1, '4.660')] |
| [2023-09-24 16:12:20,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 11284480. Throughput: 0: 806.4, 1: 806.2. Samples: 1667695. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:20,294][13256] Avg episode reward: [(0, '4.410'), (1, '4.600')] |
| [2023-09-24 16:12:25,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 11317248. Throughput: 0: 805.2, 1: 805.9. Samples: 1677314. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:25,295][13256] Avg episode reward: [(0, '4.350'), (1, '4.590')] |
| [2023-09-24 16:12:25,471][14088] Updated weights for policy 1, policy_version 22112 (0.0017) |
| [2023-09-24 16:12:25,473][14087] Updated weights for policy 0, policy_version 22128 (0.0016) |
| [2023-09-24 16:12:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11350016. Throughput: 0: 812.6, 1: 812.9. Samples: 1687295. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:30,294][13256] Avg episode reward: [(0, '4.560'), (1, '4.710')] |
| [2023-09-24 16:12:35,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11382784. Throughput: 0: 806.9, 1: 806.7. Samples: 1691818. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:35,294][13256] Avg episode reward: [(0, '4.840'), (1, '4.590')] |
| [2023-09-24 16:12:38,072][14088] Updated weights for policy 1, policy_version 22272 (0.0017) |
| [2023-09-24 16:12:38,072][14087] Updated weights for policy 0, policy_version 22288 (0.0016) |
| [2023-09-24 16:12:40,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11415552. Throughput: 0: 810.9, 1: 811.3. Samples: 1701888. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:40,294][13256] Avg episode reward: [(0, '4.950'), (1, '4.640')] |
| [2023-09-24 16:12:45,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11448320. Throughput: 0: 811.7, 1: 811.3. Samples: 1711460. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:45,293][13256] Avg episode reward: [(0, '4.720'), (1, '4.640')] |
| [2023-09-24 16:12:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11481088. Throughput: 0: 809.6, 1: 809.7. Samples: 1716224. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:50,294][13256] Avg episode reward: [(0, '4.830'), (1, '4.610')] |
| [2023-09-24 16:12:50,303][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000022416_5738496.pth... |
| [2023-09-24 16:12:50,303][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000022432_5742592.pth... |
| [2023-09-24 16:12:50,332][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000019392_4964352.pth |
| [2023-09-24 16:12:50,338][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000019408_4968448.pth |
| [2023-09-24 16:12:50,874][14088] Updated weights for policy 1, policy_version 22432 (0.0017) |
| [2023-09-24 16:12:50,874][14087] Updated weights for policy 0, policy_version 22448 (0.0016) |
| [2023-09-24 16:12:55,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11513856. Throughput: 0: 810.6, 1: 809.0. Samples: 1725929. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:12:55,294][13256] Avg episode reward: [(0, '4.790'), (1, '4.530')] |
| [2023-09-24 16:13:00,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11546624. Throughput: 0: 796.4, 1: 796.4. Samples: 1734656. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:13:00,294][13256] Avg episode reward: [(0, '4.690'), (1, '4.590')] |
| [2023-09-24 16:13:04,112][14088] Updated weights for policy 1, policy_version 22592 (0.0019) |
| [2023-09-24 16:13:04,113][14087] Updated weights for policy 0, policy_version 22608 (0.0018) |
| [2023-09-24 16:13:05,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6442.5). Total num frames: 11571200. Throughput: 0: 797.2, 1: 797.2. Samples: 1739443. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:13:05,294][13256] Avg episode reward: [(0, '4.550'), (1, '4.600')] |
| [2023-09-24 16:13:10,293][13256] Fps is (10 sec: 5734.6, 60 sec: 6280.5, 300 sec: 6442.5). Total num frames: 11603968. Throughput: 0: 796.6, 1: 796.5. Samples: 1749004. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:13:10,294][13256] Avg episode reward: [(0, '4.470'), (1, '4.690')] |
| [2023-09-24 16:13:15,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 11636736. Throughput: 0: 797.1, 1: 796.0. Samples: 1758982. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:13:15,294][13256] Avg episode reward: [(0, '4.280'), (1, '4.650')] |
| [2023-09-24 16:13:16,787][14088] Updated weights for policy 1, policy_version 22752 (0.0019) |
| [2023-09-24 16:13:16,788][14087] Updated weights for policy 0, policy_version 22768 (0.0019) |
| [2023-09-24 16:13:20,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6456.4). Total num frames: 11669504. Throughput: 0: 795.4, 1: 795.5. Samples: 1763408. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:13:20,294][13256] Avg episode reward: [(0, '4.200'), (1, '4.770')] |
| [2023-09-24 16:13:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11702272. Throughput: 0: 796.0, 1: 795.2. Samples: 1773492. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:13:25,294][13256] Avg episode reward: [(0, '4.130'), (1, '4.880')] |
| [2023-09-24 16:13:29,447][14088] Updated weights for policy 1, policy_version 22912 (0.0018) |
| [2023-09-24 16:13:29,447][14087] Updated weights for policy 0, policy_version 22928 (0.0017) |
| [2023-09-24 16:13:30,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11735040. Throughput: 0: 796.4, 1: 796.7. Samples: 1783149. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:13:30,293][13256] Avg episode reward: [(0, '4.540'), (1, '4.900')] |
| [2023-09-24 16:13:35,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11767808. Throughput: 0: 796.4, 1: 796.4. Samples: 1787904. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:13:35,294][13256] Avg episode reward: [(0, '4.620'), (1, '4.920')] |
| [2023-09-24 16:13:40,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 11800576. Throughput: 0: 799.1, 1: 800.7. Samples: 1797917. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:13:40,294][13256] Avg episode reward: [(0, '4.730'), (1, '5.100')] |
| [2023-09-24 16:13:42,137][14088] Updated weights for policy 1, policy_version 23072 (0.0017) |
| [2023-09-24 16:13:42,137][14087] Updated weights for policy 0, policy_version 23088 (0.0017) |
| [2023-09-24 16:13:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 11833344. Throughput: 0: 804.8, 1: 804.3. Samples: 1807062. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:13:45,294][13256] Avg episode reward: [(0, '4.780'), (1, '5.150')] |
| [2023-09-24 16:13:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11866112. Throughput: 0: 804.8, 1: 804.8. Samples: 1811873. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:13:50,294][13256] Avg episode reward: [(0, '4.860'), (1, '5.120')] |
| [2023-09-24 16:13:55,138][14087] Updated weights for policy 0, policy_version 23248 (0.0015) |
| [2023-09-24 16:13:55,139][14088] Updated weights for policy 1, policy_version 23232 (0.0017) |
| [2023-09-24 16:13:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11898880. Throughput: 0: 804.0, 1: 803.3. Samples: 1821332. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:13:55,294][13256] Avg episode reward: [(0, '4.690'), (1, '5.110')] |
| [2023-09-24 16:14:00,293][13256] Fps is (10 sec: 6144.1, 60 sec: 6348.8, 300 sec: 6456.4). Total num frames: 11927552. Throughput: 0: 798.7, 1: 799.7. Samples: 1830912. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:14:00,294][13256] Avg episode reward: [(0, '4.440'), (1, '5.270')] |
| [2023-09-24 16:14:05,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 11956224. Throughput: 0: 802.5, 1: 802.6. Samples: 1835637. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:14:05,294][13256] Avg episode reward: [(0, '4.640'), (1, '5.440')] |
| [2023-09-24 16:14:05,456][13996] Saving new best policy, reward=5.440! |
| [2023-09-24 16:14:07,966][14087] Updated weights for policy 0, policy_version 23408 (0.0019) |
| [2023-09-24 16:14:07,967][14088] Updated weights for policy 1, policy_version 23392 (0.0019) |
| [2023-09-24 16:14:10,293][13256] Fps is (10 sec: 6144.1, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 11988992. Throughput: 0: 796.9, 1: 797.7. Samples: 1845248. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:14:10,294][13256] Avg episode reward: [(0, '4.760'), (1, '5.190')] |
| [2023-09-24 16:14:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12021760. Throughput: 0: 801.7, 1: 801.5. Samples: 1855291. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:14:15,294][13256] Avg episode reward: [(0, '4.740'), (1, '5.000')] |
| [2023-09-24 16:14:20,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12054528. Throughput: 0: 799.3, 1: 799.0. Samples: 1859825. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:14:20,294][13256] Avg episode reward: [(0, '4.880'), (1, '4.980')] |
| [2023-09-24 16:14:20,699][14087] Updated weights for policy 0, policy_version 23568 (0.0018) |
| [2023-09-24 16:14:20,700][14088] Updated weights for policy 1, policy_version 23552 (0.0019) |
| [2023-09-24 16:14:25,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12087296. Throughput: 0: 796.8, 1: 796.8. Samples: 1869629. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:14:25,294][13256] Avg episode reward: [(0, '4.870'), (1, '5.100')] |
| [2023-09-24 16:14:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 12120064. Throughput: 0: 799.3, 1: 800.1. Samples: 1879035. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:14:30,294][13256] Avg episode reward: [(0, '4.900'), (1, '5.080')] |
| [2023-09-24 16:14:33,535][14087] Updated weights for policy 0, policy_version 23728 (0.0017) |
| [2023-09-24 16:14:33,535][14088] Updated weights for policy 1, policy_version 23712 (0.0015) |
| [2023-09-24 16:14:35,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12152832. Throughput: 0: 802.5, 1: 800.7. Samples: 1884019. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:14:35,294][13256] Avg episode reward: [(0, '4.880'), (1, '5.030')] |
| [2023-09-24 16:14:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12185600. Throughput: 0: 800.9, 1: 801.8. Samples: 1893453. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:14:40,294][13256] Avg episode reward: [(0, '5.000'), (1, '5.050')] |
| [2023-09-24 16:14:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12218368. Throughput: 0: 798.3, 1: 797.8. Samples: 1902738. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:14:45,294][13256] Avg episode reward: [(0, '5.050'), (1, '5.300')] |
| [2023-09-24 16:14:46,329][14087] Updated weights for policy 0, policy_version 23888 (0.0016) |
| [2023-09-24 16:14:46,329][14088] Updated weights for policy 1, policy_version 23872 (0.0016) |
| [2023-09-24 16:14:50,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12251136. Throughput: 0: 802.0, 1: 801.4. Samples: 1907792. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:14:50,293][13256] Avg episode reward: [(0, '4.830'), (1, '5.310')] |
| [2023-09-24 16:14:50,301][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000023936_6127616.pth... |
| [2023-09-24 16:14:50,301][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000023920_6123520.pth... |
| [2023-09-24 16:14:50,331][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000020912_5353472.pth |
| [2023-09-24 16:14:50,332][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000020896_5349376.pth |
| [2023-09-24 16:14:55,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6414.7). Total num frames: 12275712. Throughput: 0: 798.0, 1: 797.5. Samples: 1917045. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:14:55,294][13256] Avg episode reward: [(0, '4.800'), (1, '5.170')] |
| [2023-09-24 16:14:59,142][14088] Updated weights for policy 1, policy_version 24032 (0.0017) |
| [2023-09-24 16:14:59,142][14087] Updated weights for policy 0, policy_version 24048 (0.0016) |
| [2023-09-24 16:15:00,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6348.8, 300 sec: 6414.8). Total num frames: 12308480. Throughput: 0: 798.2, 1: 798.4. Samples: 1927138. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:15:00,294][13256] Avg episode reward: [(0, '4.820'), (1, '5.100')] |
| [2023-09-24 16:15:05,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12341248. Throughput: 0: 795.8, 1: 795.7. Samples: 1931444. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:15:05,294][13256] Avg episode reward: [(0, '5.060'), (1, '4.870')] |
| [2023-09-24 16:15:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12374016. Throughput: 0: 798.1, 1: 798.9. Samples: 1941493. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:15:10,294][13256] Avg episode reward: [(0, '5.010'), (1, '4.720')] |
| [2023-09-24 16:15:11,948][14088] Updated weights for policy 1, policy_version 24192 (0.0018) |
| [2023-09-24 16:15:11,949][14087] Updated weights for policy 0, policy_version 24208 (0.0016) |
| [2023-09-24 16:15:15,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12406784. Throughput: 0: 801.7, 1: 801.1. Samples: 1951163. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:15:15,294][13256] Avg episode reward: [(0, '4.970'), (1, '4.530')] |
| [2023-09-24 16:15:20,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6428.6). Total num frames: 12439552. Throughput: 0: 796.9, 1: 799.3. Samples: 1955849. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:15:20,294][13256] Avg episode reward: [(0, '4.960'), (1, '4.630')] |
| [2023-09-24 16:15:24,565][14088] Updated weights for policy 1, policy_version 24352 (0.0012) |
| [2023-09-24 16:15:24,566][14087] Updated weights for policy 0, policy_version 24368 (0.0017) |
| [2023-09-24 16:15:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 12472320. Throughput: 0: 804.0, 1: 802.8. Samples: 1965759. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:15:25,294][13256] Avg episode reward: [(0, '5.190'), (1, '4.630')] |
| [2023-09-24 16:15:30,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12505088. Throughput: 0: 803.2, 1: 804.3. Samples: 1975077. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:15:30,294][13256] Avg episode reward: [(0, '4.960'), (1, '4.730')] |
| [2023-09-24 16:15:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12537856. Throughput: 0: 802.3, 1: 802.4. Samples: 1980001. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:15:35,294][13256] Avg episode reward: [(0, '5.050'), (1, '5.000')] |
| [2023-09-24 16:15:37,572][14088] Updated weights for policy 1, policy_version 24512 (0.0017) |
| [2023-09-24 16:15:37,574][14087] Updated weights for policy 0, policy_version 24528 (0.0018) |
| [2023-09-24 16:15:40,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12570624. Throughput: 0: 800.6, 1: 800.8. Samples: 1989107. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:15:40,294][13256] Avg episode reward: [(0, '5.350'), (1, '4.920')] |
| [2023-09-24 16:15:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12603392. Throughput: 0: 796.6, 1: 797.1. Samples: 1998856. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:15:45,294][13256] Avg episode reward: [(0, '5.410'), (1, '5.060')] |
| [2023-09-24 16:15:50,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 12627968. Throughput: 0: 804.7, 1: 804.1. Samples: 2003842. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:15:50,294][13256] Avg episode reward: [(0, '5.240'), (1, '5.010')] |
| [2023-09-24 16:15:50,371][14088] Updated weights for policy 1, policy_version 24672 (0.0020) |
| [2023-09-24 16:15:50,371][14087] Updated weights for policy 0, policy_version 24688 (0.0020) |
| [2023-09-24 16:15:55,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 12660736. Throughput: 0: 796.8, 1: 796.5. Samples: 2013190. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:15:55,294][13256] Avg episode reward: [(0, '5.100'), (1, '4.750')] |
| [2023-09-24 16:16:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12693504. Throughput: 0: 801.4, 1: 801.4. Samples: 2023291. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:16:00,294][13256] Avg episode reward: [(0, '4.710'), (1, '4.970')] |
| [2023-09-24 16:16:03,023][14088] Updated weights for policy 1, policy_version 24832 (0.0017) |
| [2023-09-24 16:16:03,023][14087] Updated weights for policy 0, policy_version 24848 (0.0016) |
| [2023-09-24 16:16:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12726272. Throughput: 0: 799.7, 1: 799.3. Samples: 2027805. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:16:05,294][13256] Avg episode reward: [(0, '4.720'), (1, '4.750')] |
| [2023-09-24 16:16:10,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 12759040. Throughput: 0: 798.9, 1: 799.9. Samples: 2037705. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:16:10,294][13256] Avg episode reward: [(0, '4.680'), (1, '4.860')] |
| [2023-09-24 16:16:15,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12791808. Throughput: 0: 802.5, 1: 801.5. Samples: 2047259. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:16:15,293][13256] Avg episode reward: [(0, '4.640'), (1, '4.810')] |
| [2023-09-24 16:16:15,801][14087] Updated weights for policy 0, policy_version 25008 (0.0016) |
| [2023-09-24 16:16:15,801][14088] Updated weights for policy 1, policy_version 24992 (0.0017) |
| [2023-09-24 16:16:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12824576. Throughput: 0: 800.6, 1: 800.0. Samples: 2052030. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:16:20,294][13256] Avg episode reward: [(0, '4.650'), (1, '4.780')] |
| [2023-09-24 16:16:25,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12857344. Throughput: 0: 805.8, 1: 805.8. Samples: 2061627. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:16:25,294][13256] Avg episode reward: [(0, '4.920'), (1, '4.680')] |
| [2023-09-24 16:16:28,610][14088] Updated weights for policy 1, policy_version 25152 (0.0016) |
| [2023-09-24 16:16:28,611][14087] Updated weights for policy 0, policy_version 25168 (0.0017) |
| [2023-09-24 16:16:30,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12890112. Throughput: 0: 802.7, 1: 803.2. Samples: 2071122. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:16:30,294][13256] Avg episode reward: [(0, '4.980'), (1, '4.830')] |
| [2023-09-24 16:16:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12922880. Throughput: 0: 803.4, 1: 803.6. Samples: 2076158. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:16:35,294][13256] Avg episode reward: [(0, '4.800'), (1, '4.770')] |
| [2023-09-24 16:16:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12955648. Throughput: 0: 806.9, 1: 807.0. Samples: 2085814. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:16:40,294][13256] Avg episode reward: [(0, '4.740'), (1, '4.990')] |
| [2023-09-24 16:16:41,230][14087] Updated weights for policy 0, policy_version 25328 (0.0015) |
| [2023-09-24 16:16:41,231][14088] Updated weights for policy 1, policy_version 25312 (0.0016) |
| [2023-09-24 16:16:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 12988416. Throughput: 0: 799.6, 1: 799.2. Samples: 2095238. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:16:45,294][13256] Avg episode reward: [(0, '4.520'), (1, '4.970')] |
| [2023-09-24 16:16:50,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 13021184. Throughput: 0: 805.8, 1: 805.4. Samples: 2100307. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:16:50,294][13256] Avg episode reward: [(0, '4.570'), (1, '5.020')] |
| [2023-09-24 16:16:50,301][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000025440_6512640.pth... |
| [2023-09-24 16:16:50,301][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000025424_6508544.pth... |
| [2023-09-24 16:16:50,331][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000022432_5742592.pth |
| [2023-09-24 16:16:50,343][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000022416_5738496.pth |
| [2023-09-24 16:16:53,951][14088] Updated weights for policy 1, policy_version 25472 (0.0016) |
| [2023-09-24 16:16:53,951][14087] Updated weights for policy 0, policy_version 25488 (0.0017) |
| [2023-09-24 16:16:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 13053952. Throughput: 0: 802.2, 1: 801.6. Samples: 2109877. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:16:55,294][13256] Avg episode reward: [(0, '4.490'), (1, '4.940')] |
| [2023-09-24 16:17:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6414.8). Total num frames: 13086720. Throughput: 0: 804.5, 1: 804.8. Samples: 2119680. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:17:00,293][13256] Avg episode reward: [(0, '4.500'), (1, '4.710')] |
| [2023-09-24 16:17:05,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6387.0). Total num frames: 13111296. Throughput: 0: 803.4, 1: 804.4. Samples: 2124384. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:17:05,294][13256] Avg episode reward: [(0, '4.570'), (1, '4.640')] |
| [2023-09-24 16:17:06,617][14088] Updated weights for policy 1, policy_version 25632 (0.0015) |
| [2023-09-24 16:17:06,617][14087] Updated weights for policy 0, policy_version 25648 (0.0018) |
| [2023-09-24 16:17:10,293][13256] Fps is (10 sec: 5734.2, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 13144064. Throughput: 0: 804.2, 1: 804.6. Samples: 2134023. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:17:10,294][13256] Avg episode reward: [(0, '4.300'), (1, '4.810')] |
| [2023-09-24 16:17:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 13176832. Throughput: 0: 805.4, 1: 806.0. Samples: 2143632. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:17:15,294][13256] Avg episode reward: [(0, '4.380'), (1, '5.000')] |
| [2023-09-24 16:17:19,469][14088] Updated weights for policy 1, policy_version 25792 (0.0018) |
| [2023-09-24 16:17:19,470][14087] Updated weights for policy 0, policy_version 25808 (0.0015) |
| [2023-09-24 16:17:20,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13209600. Throughput: 0: 801.8, 1: 802.7. Samples: 2148360. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:17:20,293][13256] Avg episode reward: [(0, '4.520'), (1, '4.910')] |
| [2023-09-24 16:17:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13242368. Throughput: 0: 803.0, 1: 802.6. Samples: 2158065. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:17:25,294][13256] Avg episode reward: [(0, '4.460'), (1, '5.310')] |
| [2023-09-24 16:17:30,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13275136. Throughput: 0: 805.0, 1: 806.3. Samples: 2167744. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:17:30,294][13256] Avg episode reward: [(0, '4.720'), (1, '5.430')] |
| [2023-09-24 16:17:32,275][14087] Updated weights for policy 0, policy_version 25968 (0.0016) |
| [2023-09-24 16:17:32,275][14088] Updated weights for policy 1, policy_version 25952 (0.0017) |
| [2023-09-24 16:17:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 13307904. Throughput: 0: 803.6, 1: 803.3. Samples: 2172616. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:17:35,294][13256] Avg episode reward: [(0, '4.660'), (1, '5.560')] |
| [2023-09-24 16:17:35,305][13996] Saving new best policy, reward=5.560! |
| [2023-09-24 16:17:40,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 13340672. Throughput: 0: 802.4, 1: 802.3. Samples: 2182090. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:17:40,294][13256] Avg episode reward: [(0, '4.770'), (1, '5.470')] |
| [2023-09-24 16:17:44,961][14088] Updated weights for policy 1, policy_version 26112 (0.0018) |
| [2023-09-24 16:17:44,961][14087] Updated weights for policy 0, policy_version 26128 (0.0015) |
| [2023-09-24 16:17:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13373440. Throughput: 0: 801.3, 1: 800.8. Samples: 2191775. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:17:45,294][13256] Avg episode reward: [(0, '4.670'), (1, '5.330')] |
| [2023-09-24 16:17:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 13406208. Throughput: 0: 802.0, 1: 802.4. Samples: 2196582. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:17:50,294][13256] Avg episode reward: [(0, '4.770'), (1, '5.450')] |
| [2023-09-24 16:17:55,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13438976. Throughput: 0: 801.2, 1: 800.8. Samples: 2206113. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:17:55,294][13256] Avg episode reward: [(0, '4.660'), (1, '5.320')] |
| [2023-09-24 16:17:57,749][14088] Updated weights for policy 1, policy_version 26272 (0.0015) |
| [2023-09-24 16:17:57,749][14087] Updated weights for policy 0, policy_version 26288 (0.0018) |
| [2023-09-24 16:18:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 13471744. Throughput: 0: 803.9, 1: 802.8. Samples: 2215936. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:18:00,294][13256] Avg episode reward: [(0, '4.810'), (1, '5.340')] |
| [2023-09-24 16:18:05,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 13496320. Throughput: 0: 804.2, 1: 804.2. Samples: 2220736. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:18:05,294][13256] Avg episode reward: [(0, '4.610'), (1, '5.440')] |
| [2023-09-24 16:18:10,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13529088. Throughput: 0: 803.6, 1: 803.4. Samples: 2230378. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:18:10,294][13256] Avg episode reward: [(0, '4.780'), (1, '5.660')] |
| [2023-09-24 16:18:10,401][13996] Saving new best policy, reward=5.660! |
| [2023-09-24 16:18:10,403][14087] Updated weights for policy 0, policy_version 26448 (0.0017) |
| [2023-09-24 16:18:10,403][14088] Updated weights for policy 1, policy_version 26432 (0.0017) |
| [2023-09-24 16:18:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 13561856. Throughput: 0: 808.2, 1: 806.5. Samples: 2240405. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:18:15,294][13256] Avg episode reward: [(0, '4.900'), (1, '5.610')] |
| [2023-09-24 16:18:20,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 13594624. Throughput: 0: 804.0, 1: 804.5. Samples: 2244999. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:18:20,294][13256] Avg episode reward: [(0, '4.950'), (1, '5.480')] |
| [2023-09-24 16:18:23,266][14088] Updated weights for policy 1, policy_version 26592 (0.0018) |
| [2023-09-24 16:18:23,267][14087] Updated weights for policy 0, policy_version 26608 (0.0016) |
| [2023-09-24 16:18:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 13627392. Throughput: 0: 806.3, 1: 806.2. Samples: 2254652. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 16:18:25,294][13256] Avg episode reward: [(0, '4.980'), (1, '5.480')] |
| [2023-09-24 16:18:30,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13660160. Throughput: 0: 805.9, 1: 805.8. Samples: 2264302. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 16:18:30,294][13256] Avg episode reward: [(0, '4.960'), (1, '5.480')] |
| [2023-09-24 16:18:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13692928. Throughput: 0: 806.6, 1: 806.7. Samples: 2269184. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) |
| [2023-09-24 16:18:35,294][13256] Avg episode reward: [(0, '4.910'), (1, '5.370')] |
| [2023-09-24 16:18:35,863][14088] Updated weights for policy 1, policy_version 26752 (0.0016) |
| [2023-09-24 16:18:35,864][14087] Updated weights for policy 0, policy_version 26768 (0.0017) |
| [2023-09-24 16:18:40,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13725696. Throughput: 0: 806.9, 1: 807.0. Samples: 2278739. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) |
| [2023-09-24 16:18:40,293][13256] Avg episode reward: [(0, '4.870'), (1, '5.400')] |
| [2023-09-24 16:18:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13758464. Throughput: 0: 803.0, 1: 802.4. Samples: 2288179. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) |
| [2023-09-24 16:18:45,294][13256] Avg episode reward: [(0, '4.860'), (1, '5.380')] |
| [2023-09-24 16:18:48,677][14087] Updated weights for policy 0, policy_version 26928 (0.0015) |
| [2023-09-24 16:18:48,678][14088] Updated weights for policy 1, policy_version 26912 (0.0017) |
| [2023-09-24 16:18:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13791232. Throughput: 0: 806.0, 1: 805.2. Samples: 2293242. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) |
| [2023-09-24 16:18:50,294][13256] Avg episode reward: [(0, '4.860'), (1, '5.380')] |
| [2023-09-24 16:18:50,302][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000026944_6897664.pth... |
| [2023-09-24 16:18:50,302][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000026928_6893568.pth... |
| [2023-09-24 16:18:50,330][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000023936_6127616.pth |
| [2023-09-24 16:18:50,338][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000023920_6123520.pth |
| [2023-09-24 16:18:55,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6428.6). Total num frames: 13824000. Throughput: 0: 802.7, 1: 802.5. Samples: 2302610. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) |
| [2023-09-24 16:18:55,294][13256] Avg episode reward: [(0, '4.970'), (1, '4.960')] |
| [2023-09-24 16:19:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13856768. Throughput: 0: 798.0, 1: 798.6. Samples: 2312255. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 16:19:00,294][13256] Avg episode reward: [(0, '4.960'), (1, '4.840')] |
| [2023-09-24 16:19:01,428][14088] Updated weights for policy 1, policy_version 27072 (0.0016) |
| [2023-09-24 16:19:01,428][14087] Updated weights for policy 0, policy_version 27088 (0.0017) |
| [2023-09-24 16:19:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 13889536. Throughput: 0: 803.1, 1: 803.1. Samples: 2317279. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 16:19:05,294][13256] Avg episode reward: [(0, '4.610'), (1, '4.960')] |
| [2023-09-24 16:19:10,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 13922304. Throughput: 0: 803.1, 1: 803.4. Samples: 2326943. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 16:19:10,294][13256] Avg episode reward: [(0, '4.550'), (1, '5.070')] |
| [2023-09-24 16:19:14,021][14088] Updated weights for policy 1, policy_version 27232 (0.0017) |
| [2023-09-24 16:19:14,021][14087] Updated weights for policy 0, policy_version 27248 (0.0018) |
| [2023-09-24 16:19:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 13955072. Throughput: 0: 804.8, 1: 805.5. Samples: 2336768. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 16:19:15,294][13256] Avg episode reward: [(0, '4.490'), (1, '5.110')] |
| [2023-09-24 16:19:20,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 13979648. Throughput: 0: 802.6, 1: 802.1. Samples: 2341397. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:19:20,294][13256] Avg episode reward: [(0, '4.620'), (1, '5.130')] |
| [2023-09-24 16:19:25,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 14012416. Throughput: 0: 804.2, 1: 804.3. Samples: 2351122. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:19:25,294][13256] Avg episode reward: [(0, '4.810'), (1, '5.260')] |
| [2023-09-24 16:19:26,743][14087] Updated weights for policy 0, policy_version 27408 (0.0016) |
| [2023-09-24 16:19:26,743][14088] Updated weights for policy 1, policy_version 27392 (0.0018) |
| [2023-09-24 16:19:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 14045184. Throughput: 0: 807.9, 1: 807.9. Samples: 2360889. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:19:30,294][13256] Avg episode reward: [(0, '5.010'), (1, '5.380')] |
| [2023-09-24 16:19:35,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 14077952. Throughput: 0: 801.8, 1: 802.6. Samples: 2365440. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
| [2023-09-24 16:19:35,294][13256] Avg episode reward: [(0, '4.940'), (1, '5.290')] |
| [2023-09-24 16:19:39,542][14088] Updated weights for policy 1, policy_version 27552 (0.0017) |
| [2023-09-24 16:19:39,542][14087] Updated weights for policy 0, policy_version 27568 (0.0017) |
| [2023-09-24 16:19:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 14110720. Throughput: 0: 808.3, 1: 808.9. Samples: 2375384. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) |
| [2023-09-24 16:19:40,293][13256] Avg episode reward: [(0, '4.760'), (1, '5.320')] |
| [2023-09-24 16:19:45,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 14143488. Throughput: 0: 809.1, 1: 808.7. Samples: 2385057. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) |
| [2023-09-24 16:19:45,294][13256] Avg episode reward: [(0, '4.800'), (1, '5.240')] |
| [2023-09-24 16:19:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14176256. Throughput: 0: 807.9, 1: 808.5. Samples: 2390016. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) |
| [2023-09-24 16:19:50,294][13256] Avg episode reward: [(0, '4.650'), (1, '4.980')] |
| [2023-09-24 16:19:52,165][14087] Updated weights for policy 0, policy_version 27728 (0.0016) |
| [2023-09-24 16:19:52,166][14088] Updated weights for policy 1, policy_version 27712 (0.0017) |
| [2023-09-24 16:19:55,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14209024. Throughput: 0: 807.7, 1: 807.6. Samples: 2399632. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:19:55,294][13256] Avg episode reward: [(0, '4.510'), (1, '4.950')] |
| [2023-09-24 16:20:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14241792. Throughput: 0: 806.4, 1: 805.6. Samples: 2409307. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:20:00,293][13256] Avg episode reward: [(0, '4.530'), (1, '5.110')] |
| [2023-09-24 16:20:04,786][14088] Updated weights for policy 1, policy_version 27872 (0.0017) |
| [2023-09-24 16:20:04,786][14087] Updated weights for policy 0, policy_version 27888 (0.0015) |
| [2023-09-24 16:20:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14274560. Throughput: 0: 810.2, 1: 809.9. Samples: 2414302. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:20:05,294][13256] Avg episode reward: [(0, '4.450'), (1, '5.160')] |
| [2023-09-24 16:20:10,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14307328. Throughput: 0: 810.5, 1: 811.0. Samples: 2424087. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:20:10,293][13256] Avg episode reward: [(0, '4.520'), (1, '5.080')] |
| [2023-09-24 16:20:15,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14340096. Throughput: 0: 808.8, 1: 809.2. Samples: 2433698. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:20:15,294][13256] Avg episode reward: [(0, '4.570'), (1, '5.170')] |
| [2023-09-24 16:20:17,348][14088] Updated weights for policy 1, policy_version 28032 (0.0018) |
| [2023-09-24 16:20:17,349][14087] Updated weights for policy 0, policy_version 28048 (0.0016) |
| [2023-09-24 16:20:20,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 14372864. Throughput: 0: 814.2, 1: 814.8. Samples: 2438745. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:20:20,294][13256] Avg episode reward: [(0, '4.670'), (1, '5.120')] |
| [2023-09-24 16:20:25,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 14405632. Throughput: 0: 811.0, 1: 810.3. Samples: 2448342. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:20:25,294][13256] Avg episode reward: [(0, '4.740'), (1, '5.110')] |
| [2023-09-24 16:20:30,005][14087] Updated weights for policy 0, policy_version 28208 (0.0016) |
| [2023-09-24 16:20:30,006][14088] Updated weights for policy 1, policy_version 28192 (0.0016) |
| [2023-09-24 16:20:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 14438400. Throughput: 0: 809.4, 1: 809.7. Samples: 2457916. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:20:30,294][13256] Avg episode reward: [(0, '4.430'), (1, '5.040')] |
| [2023-09-24 16:20:35,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 14471168. Throughput: 0: 810.7, 1: 810.4. Samples: 2462965. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:20:35,294][13256] Avg episode reward: [(0, '4.340'), (1, '5.150')] |
| [2023-09-24 16:20:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 14503936. Throughput: 0: 810.4, 1: 810.7. Samples: 2472582. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:20:40,294][13256] Avg episode reward: [(0, '4.200'), (1, '5.170')] |
| [2023-09-24 16:20:42,545][14088] Updated weights for policy 1, policy_version 28352 (0.0018) |
| [2023-09-24 16:20:42,545][14087] Updated weights for policy 0, policy_version 28368 (0.0016) |
| [2023-09-24 16:20:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14536704. Throughput: 0: 812.1, 1: 812.6. Samples: 2482419. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:20:45,294][13256] Avg episode reward: [(0, '4.480'), (1, '5.200')] |
| [2023-09-24 16:20:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14569472. Throughput: 0: 813.7, 1: 813.9. Samples: 2487544. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:20:50,294][13256] Avg episode reward: [(0, '4.670'), (1, '5.290')] |
| [2023-09-24 16:20:50,308][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000028448_7282688.pth... |
| [2023-09-24 16:20:50,308][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000028464_7286784.pth... |
| [2023-09-24 16:20:50,338][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000025440_6512640.pth |
| [2023-09-24 16:20:50,344][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000025424_6508544.pth |
| [2023-09-24 16:20:55,099][14088] Updated weights for policy 1, policy_version 28512 (0.0019) |
| [2023-09-24 16:20:55,099][14087] Updated weights for policy 0, policy_version 28528 (0.0019) |
| [2023-09-24 16:20:55,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14602240. Throughput: 0: 812.4, 1: 811.4. Samples: 2497155. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:20:55,293][13256] Avg episode reward: [(0, '4.690'), (1, '5.450')] |
| [2023-09-24 16:21:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14635008. Throughput: 0: 811.6, 1: 811.8. Samples: 2506752. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:21:00,294][13256] Avg episode reward: [(0, '4.870'), (1, '5.410')] |
| [2023-09-24 16:21:05,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14667776. Throughput: 0: 811.8, 1: 810.5. Samples: 2511748. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:21:05,294][13256] Avg episode reward: [(0, '5.000'), (1, '5.400')] |
| [2023-09-24 16:21:07,754][14087] Updated weights for policy 0, policy_version 28688 (0.0017) |
| [2023-09-24 16:21:07,754][14088] Updated weights for policy 1, policy_version 28672 (0.0016) |
| [2023-09-24 16:21:10,293][13256] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6456.4). Total num frames: 14696448. Throughput: 0: 811.8, 1: 812.5. Samples: 2521439. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:21:10,294][13256] Avg episode reward: [(0, '5.100'), (1, '5.330')] |
| [2023-09-24 16:21:15,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14725120. Throughput: 0: 815.4, 1: 815.8. Samples: 2531319. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:21:15,294][13256] Avg episode reward: [(0, '5.210'), (1, '5.290')] |
| [2023-09-24 16:21:20,293][13256] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14757888. Throughput: 0: 809.9, 1: 809.9. Samples: 2535856. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:21:20,294][13256] Avg episode reward: [(0, '4.930'), (1, '5.210')] |
| [2023-09-24 16:21:20,528][14088] Updated weights for policy 1, policy_version 28832 (0.0015) |
| [2023-09-24 16:21:20,529][14087] Updated weights for policy 0, policy_version 28848 (0.0016) |
| [2023-09-24 16:21:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 14790656. Throughput: 0: 811.8, 1: 812.2. Samples: 2545664. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:21:25,294][13256] Avg episode reward: [(0, '5.040'), (1, '4.940')] |
| [2023-09-24 16:21:30,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14823424. Throughput: 0: 808.3, 1: 808.5. Samples: 2555176. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:21:30,294][13256] Avg episode reward: [(0, '4.810'), (1, '4.780')] |
| [2023-09-24 16:21:33,290][14087] Updated weights for policy 0, policy_version 29008 (0.0015) |
| [2023-09-24 16:21:33,290][14088] Updated weights for policy 1, policy_version 28992 (0.0017) |
| [2023-09-24 16:21:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14856192. Throughput: 0: 804.8, 1: 805.4. Samples: 2560000. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:21:35,294][13256] Avg episode reward: [(0, '4.870'), (1, '4.600')] |
| [2023-09-24 16:21:40,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14888960. Throughput: 0: 805.7, 1: 805.8. Samples: 2569676. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:21:40,294][13256] Avg episode reward: [(0, '4.620'), (1, '4.530')] |
| [2023-09-24 16:21:45,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14921728. Throughput: 0: 806.2, 1: 805.8. Samples: 2579288. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:21:45,293][13256] Avg episode reward: [(0, '4.750'), (1, '4.530')] |
| [2023-09-24 16:21:46,014][14088] Updated weights for policy 1, policy_version 29152 (0.0017) |
| [2023-09-24 16:21:46,014][14087] Updated weights for policy 0, policy_version 29168 (0.0017) |
| [2023-09-24 16:21:50,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14954496. Throughput: 0: 804.8, 1: 806.1. Samples: 2584237. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:21:50,294][13256] Avg episode reward: [(0, '4.490'), (1, '4.490')] |
| [2023-09-24 16:21:55,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 14987264. Throughput: 0: 801.5, 1: 801.1. Samples: 2593556. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:21:55,294][13256] Avg episode reward: [(0, '4.430'), (1, '4.660')] |
| [2023-09-24 16:21:58,822][14088] Updated weights for policy 1, policy_version 29312 (0.0016) |
| [2023-09-24 16:21:58,822][14087] Updated weights for policy 0, policy_version 29328 (0.0017) |
| [2023-09-24 16:22:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 15020032. Throughput: 0: 796.7, 1: 796.8. Samples: 2603025. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:22:00,293][13256] Avg episode reward: [(0, '4.510'), (1, '4.300')] |
| [2023-09-24 16:22:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 15052800. Throughput: 0: 803.6, 1: 803.4. Samples: 2608174. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:22:05,294][13256] Avg episode reward: [(0, '4.610'), (1, '4.550')] |
| [2023-09-24 16:22:10,293][13256] Fps is (10 sec: 6143.9, 60 sec: 6417.1, 300 sec: 6456.4). Total num frames: 15081472. Throughput: 0: 801.0, 1: 801.3. Samples: 2617768. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:22:10,294][13256] Avg episode reward: [(0, '4.440'), (1, '4.540')] |
| [2023-09-24 16:22:11,621][14088] Updated weights for policy 1, policy_version 29472 (0.0018) |
| [2023-09-24 16:22:11,621][14087] Updated weights for policy 0, policy_version 29488 (0.0016) |
| [2023-09-24 16:22:15,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15110144. Throughput: 0: 802.2, 1: 801.6. Samples: 2627346. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:22:15,294][13256] Avg episode reward: [(0, '4.580'), (1, '5.020')] |
| [2023-09-24 16:22:20,293][13256] Fps is (10 sec: 6144.1, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15142912. Throughput: 0: 799.3, 1: 799.3. Samples: 2631938. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:22:20,293][13256] Avg episode reward: [(0, '4.350'), (1, '4.930')] |
| [2023-09-24 16:22:24,519][14088] Updated weights for policy 1, policy_version 29632 (0.0017) |
| [2023-09-24 16:22:24,520][14087] Updated weights for policy 0, policy_version 29648 (0.0018) |
| [2023-09-24 16:22:25,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15175680. Throughput: 0: 800.2, 1: 800.3. Samples: 2641701. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:22:25,293][13256] Avg episode reward: [(0, '4.340'), (1, '4.980')] |
| [2023-09-24 16:22:30,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 15208448. Throughput: 0: 796.4, 1: 795.5. Samples: 2650921. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:22:30,294][13256] Avg episode reward: [(0, '4.520'), (1, '4.870')] |
| [2023-09-24 16:22:35,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15241216. Throughput: 0: 794.4, 1: 793.7. Samples: 2655701. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:22:35,294][13256] Avg episode reward: [(0, '4.380'), (1, '4.900')] |
| [2023-09-24 16:22:37,578][14088] Updated weights for policy 1, policy_version 29792 (0.0019) |
| [2023-09-24 16:22:37,579][14087] Updated weights for policy 0, policy_version 29808 (0.0018) |
| [2023-09-24 16:22:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15273984. Throughput: 0: 794.4, 1: 794.7. Samples: 2665067. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:22:40,294][13256] Avg episode reward: [(0, '4.440'), (1, '4.760')] |
| [2023-09-24 16:22:45,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 15306752. Throughput: 0: 796.5, 1: 796.4. Samples: 2674707. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:22:45,294][13256] Avg episode reward: [(0, '4.590'), (1, '4.780')] |
| [2023-09-24 16:22:50,194][14088] Updated weights for policy 1, policy_version 29952 (0.0017) |
| [2023-09-24 16:22:50,195][14087] Updated weights for policy 0, policy_version 29968 (0.0016) |
| [2023-09-24 16:22:50,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15339520. Throughput: 0: 795.4, 1: 795.5. Samples: 2679765. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) |
| [2023-09-24 16:22:50,294][13256] Avg episode reward: [(0, '4.680'), (1, '4.930')] |
| [2023-09-24 16:22:50,305][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000029968_7671808.pth... |
| [2023-09-24 16:22:50,306][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000029952_7667712.pth... |
| [2023-09-24 16:22:50,336][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000026944_6897664.pth |
| [2023-09-24 16:22:50,338][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000026928_6893568.pth |
| [2023-09-24 16:22:55,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 15364096. Throughput: 0: 793.6, 1: 792.9. Samples: 2689161. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:22:55,294][13256] Avg episode reward: [(0, '4.770'), (1, '5.000')] |
| [2023-09-24 16:23:00,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6442.5). Total num frames: 15396864. Throughput: 0: 797.9, 1: 798.0. Samples: 2699165. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:23:00,294][13256] Avg episode reward: [(0, '4.550'), (1, '4.990')] |
| [2023-09-24 16:23:02,982][14087] Updated weights for policy 0, policy_version 30128 (0.0018) |
| [2023-09-24 16:23:02,982][14088] Updated weights for policy 1, policy_version 30112 (0.0018) |
| [2023-09-24 16:23:05,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6442.5). Total num frames: 15429632. Throughput: 0: 797.7, 1: 797.4. Samples: 2703720. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:23:05,293][13256] Avg episode reward: [(0, '4.580'), (1, '5.200')] |
| [2023-09-24 16:23:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6348.8, 300 sec: 6442.5). Total num frames: 15462400. Throughput: 0: 798.6, 1: 799.1. Samples: 2713600. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:23:10,294][13256] Avg episode reward: [(0, '4.730'), (1, '5.180')] |
| [2023-09-24 16:23:15,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15495168. Throughput: 0: 802.5, 1: 803.4. Samples: 2723190. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:23:15,294][13256] Avg episode reward: [(0, '4.460'), (1, '5.250')] |
| [2023-09-24 16:23:15,707][14088] Updated weights for policy 1, policy_version 30272 (0.0019) |
| [2023-09-24 16:23:15,707][14087] Updated weights for policy 0, policy_version 30288 (0.0017) |
| [2023-09-24 16:23:20,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15527936. Throughput: 0: 802.5, 1: 802.7. Samples: 2727936. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:23:20,294][13256] Avg episode reward: [(0, '4.460'), (1, '5.250')] |
| [2023-09-24 16:23:25,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15560704. Throughput: 0: 808.3, 1: 808.2. Samples: 2737811. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:23:25,294][13256] Avg episode reward: [(0, '4.520'), (1, '5.280')] |
| [2023-09-24 16:23:28,558][14087] Updated weights for policy 0, policy_version 30448 (0.0017) |
| [2023-09-24 16:23:28,559][14088] Updated weights for policy 1, policy_version 30432 (0.0019) |
| [2023-09-24 16:23:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15593472. Throughput: 0: 803.0, 1: 802.8. Samples: 2746970. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:23:30,294][13256] Avg episode reward: [(0, '4.460'), (1, '5.290')] |
| [2023-09-24 16:23:35,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15626240. Throughput: 0: 803.9, 1: 803.6. Samples: 2752101. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:23:35,294][13256] Avg episode reward: [(0, '4.420'), (1, '5.370')] |
| [2023-09-24 16:23:40,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 15659008. Throughput: 0: 808.1, 1: 807.6. Samples: 2761865. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:23:40,294][13256] Avg episode reward: [(0, '4.570'), (1, '5.290')] |
| [2023-09-24 16:23:41,098][14087] Updated weights for policy 0, policy_version 30608 (0.0017) |
| [2023-09-24 16:23:41,098][14088] Updated weights for policy 1, policy_version 30592 (0.0017) |
| [2023-09-24 16:23:45,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15691776. Throughput: 0: 805.3, 1: 805.5. Samples: 2771650. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:23:45,294][13256] Avg episode reward: [(0, '4.560'), (1, '5.480')] |
| [2023-09-24 16:23:50,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15724544. Throughput: 0: 811.1, 1: 811.5. Samples: 2776738. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) |
| [2023-09-24 16:23:50,294][13256] Avg episode reward: [(0, '4.610'), (1, '5.390')] |
| [2023-09-24 16:23:53,763][14087] Updated weights for policy 0, policy_version 30768 (0.0014) |
| [2023-09-24 16:23:53,764][14088] Updated weights for policy 1, policy_version 30752 (0.0018) |
| [2023-09-24 16:23:55,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 15757312. Throughput: 0: 805.5, 1: 805.0. Samples: 2786076. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:23:55,294][13256] Avg episode reward: [(0, '4.420'), (1, '5.610')] |
| [2023-09-24 16:24:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 15790080. Throughput: 0: 804.3, 1: 804.2. Samples: 2795573. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:24:00,294][13256] Avg episode reward: [(0, '4.420'), (1, '5.400')] |
| [2023-09-24 16:24:05,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 15822848. Throughput: 0: 808.3, 1: 806.8. Samples: 2800612. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:24:05,293][13256] Avg episode reward: [(0, '4.390'), (1, '5.360')] |
| [2023-09-24 16:24:06,470][14087] Updated weights for policy 0, policy_version 30928 (0.0015) |
| [2023-09-24 16:24:06,472][14088] Updated weights for policy 1, policy_version 30912 (0.0016) |
| [2023-09-24 16:24:10,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 15847424. Throughput: 0: 802.4, 1: 803.1. Samples: 2810058. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:24:10,294][13256] Avg episode reward: [(0, '4.630'), (1, '5.470')] |
| [2023-09-24 16:24:15,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15880192. Throughput: 0: 812.3, 1: 812.7. Samples: 2820096. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 16:24:15,294][13256] Avg episode reward: [(0, '4.750'), (1, '5.400')] |
| [2023-09-24 16:24:19,245][14087] Updated weights for policy 0, policy_version 31088 (0.0017) |
| [2023-09-24 16:24:19,245][14088] Updated weights for policy 1, policy_version 31072 (0.0016) |
| [2023-09-24 16:24:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15912960. Throughput: 0: 806.2, 1: 806.1. Samples: 2824655. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 16:24:20,294][13256] Avg episode reward: [(0, '4.760'), (1, '5.420')] |
| [2023-09-24 16:24:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 15945728. Throughput: 0: 805.8, 1: 806.8. Samples: 2834432. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 16:24:25,294][13256] Avg episode reward: [(0, '4.970'), (1, '5.270')] |
| [2023-09-24 16:24:30,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 15978496. Throughput: 0: 806.2, 1: 806.6. Samples: 2844226. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2023-09-24 16:24:30,294][13256] Avg episode reward: [(0, '4.900'), (1, '5.430')] |
| [2023-09-24 16:24:31,897][14087] Updated weights for policy 0, policy_version 31248 (0.0018) |
| [2023-09-24 16:24:31,897][14088] Updated weights for policy 1, policy_version 31232 (0.0018) |
| [2023-09-24 16:24:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 16011264. Throughput: 0: 800.7, 1: 800.4. Samples: 2848787. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:24:35,294][13256] Avg episode reward: [(0, '5.050'), (1, '5.260')] |
| [2023-09-24 16:24:40,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 16044032. Throughput: 0: 809.4, 1: 809.4. Samples: 2858921. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:24:40,293][13256] Avg episode reward: [(0, '5.030'), (1, '5.400')] |
| [2023-09-24 16:24:44,457][14087] Updated weights for policy 0, policy_version 31408 (0.0019) |
| [2023-09-24 16:24:44,457][14088] Updated weights for policy 1, policy_version 31392 (0.0018) |
| [2023-09-24 16:24:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 16076800. Throughput: 0: 811.4, 1: 811.0. Samples: 2868580. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:24:45,294][13256] Avg episode reward: [(0, '5.200'), (1, '5.160')] |
| [2023-09-24 16:24:50,293][13256] Fps is (10 sec: 6553.3, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 16109568. Throughput: 0: 807.4, 1: 808.9. Samples: 2873344. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:24:50,294][13256] Avg episode reward: [(0, '5.100'), (1, '5.330')] |
| [2023-09-24 16:24:50,303][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000031456_8052736.pth... |
| [2023-09-24 16:24:50,304][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000031472_8056832.pth... |
| [2023-09-24 16:24:50,334][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000028464_7286784.pth |
| [2023-09-24 16:24:50,335][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000028448_7282688.pth |
| [2023-09-24 16:24:55,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 16142336. Throughput: 0: 813.6, 1: 813.1. Samples: 2883260. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:24:55,294][13256] Avg episode reward: [(0, '5.030'), (1, '5.410')] |
| [2023-09-24 16:24:57,136][14087] Updated weights for policy 0, policy_version 31568 (0.0016) |
| [2023-09-24 16:24:57,136][14088] Updated weights for policy 1, policy_version 31552 (0.0016) |
| [2023-09-24 16:25:00,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 16175104. Throughput: 0: 807.5, 1: 806.9. Samples: 2892742. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:25:00,294][13256] Avg episode reward: [(0, '4.870'), (1, '5.420')] |
| [2023-09-24 16:25:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 16207872. Throughput: 0: 809.7, 1: 812.1. Samples: 2897634. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:25:05,294][13256] Avg episode reward: [(0, '5.160'), (1, '5.360')] |
| [2023-09-24 16:25:09,808][14088] Updated weights for policy 1, policy_version 31712 (0.0017) |
| [2023-09-24 16:25:09,809][14087] Updated weights for policy 0, policy_version 31728 (0.0017) |
| [2023-09-24 16:25:10,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 16240640. Throughput: 0: 810.2, 1: 809.8. Samples: 2907334. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:25:10,294][13256] Avg episode reward: [(0, '5.170'), (1, '5.430')] |
| [2023-09-24 16:25:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 16273408. Throughput: 0: 807.0, 1: 806.7. Samples: 2916843. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:25:15,294][13256] Avg episode reward: [(0, '5.170'), (1, '5.510')] |
| [2023-09-24 16:25:20,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 16306176. Throughput: 0: 811.9, 1: 811.9. Samples: 2921861. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:25:20,294][13256] Avg episode reward: [(0, '5.240'), (1, '5.260')] |
| [2023-09-24 16:25:22,532][14087] Updated weights for policy 0, policy_version 31888 (0.0015) |
| [2023-09-24 16:25:22,532][14088] Updated weights for policy 1, policy_version 31872 (0.0018) |
| [2023-09-24 16:25:25,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 16338944. Throughput: 0: 806.0, 1: 806.1. Samples: 2931466. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:25:25,294][13256] Avg episode reward: [(0, '5.040'), (1, '5.230')] |
| [2023-09-24 16:25:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 16371712. Throughput: 0: 803.4, 1: 804.3. Samples: 2940928. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:25:30,294][13256] Avg episode reward: [(0, '5.210'), (1, '5.360')] |
| [2023-09-24 16:25:35,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 16396288. Throughput: 0: 800.5, 1: 799.8. Samples: 2945359. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:25:35,294][13256] Avg episode reward: [(0, '4.870'), (1, '5.300')] |
| [2023-09-24 16:25:35,611][14087] Updated weights for policy 0, policy_version 32048 (0.0018) |
| [2023-09-24 16:25:35,612][14088] Updated weights for policy 1, policy_version 32032 (0.0017) |
| [2023-09-24 16:25:40,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 16429056. Throughput: 0: 799.1, 1: 799.6. Samples: 2955201. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:25:40,294][13256] Avg episode reward: [(0, '4.790'), (1, '5.170')] |
| [2023-09-24 16:25:45,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 16461824. Throughput: 0: 800.3, 1: 800.2. Samples: 2964765. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:25:45,294][13256] Avg episode reward: [(0, '4.790'), (1, '5.280')] |
| [2023-09-24 16:25:48,282][14088] Updated weights for policy 1, policy_version 32192 (0.0018) |
| [2023-09-24 16:25:48,283][14087] Updated weights for policy 0, policy_version 32208 (0.0017) |
| [2023-09-24 16:25:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 16494592. Throughput: 0: 800.4, 1: 798.9. Samples: 2969600. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:25:50,294][13256] Avg episode reward: [(0, '4.680'), (1, '5.330')] |
| [2023-09-24 16:25:55,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 16527360. Throughput: 0: 799.3, 1: 798.8. Samples: 2979248. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) |
| [2023-09-24 16:25:55,294][13256] Avg episode reward: [(0, '4.540'), (1, '5.360')] |
| [2023-09-24 16:26:00,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 16560128. Throughput: 0: 798.1, 1: 798.4. Samples: 2988687. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:26:00,293][13256] Avg episode reward: [(0, '4.660'), (1, '5.270')] |
| [2023-09-24 16:26:01,108][14088] Updated weights for policy 1, policy_version 32352 (0.0018) |
| [2023-09-24 16:26:01,108][14087] Updated weights for policy 0, policy_version 32368 (0.0017) |
| [2023-09-24 16:26:05,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6428.6). Total num frames: 16592896. Throughput: 0: 798.3, 1: 798.0. Samples: 2993693. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:26:05,294][13256] Avg episode reward: [(0, '4.660'), (1, '5.430')] |
| [2023-09-24 16:26:10,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 16625664. Throughput: 0: 795.6, 1: 794.4. Samples: 3003018. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:26:10,294][13256] Avg episode reward: [(0, '4.570'), (1, '5.460')] |
| [2023-09-24 16:26:14,009][14088] Updated weights for policy 1, policy_version 32512 (0.0017) |
| [2023-09-24 16:26:14,009][14087] Updated weights for policy 0, policy_version 32528 (0.0017) |
| [2023-09-24 16:26:15,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 16658432. Throughput: 0: 796.4, 1: 796.4. Samples: 3012608. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:26:15,294][13256] Avg episode reward: [(0, '4.560'), (1, '5.200')] |
| [2023-09-24 16:26:20,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 16683008. Throughput: 0: 799.1, 1: 799.3. Samples: 3017289. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:26:20,295][13256] Avg episode reward: [(0, '4.700'), (1, '5.160')] |
| [2023-09-24 16:26:25,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 16715776. Throughput: 0: 797.2, 1: 797.1. Samples: 3026944. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:26:25,294][13256] Avg episode reward: [(0, '4.670'), (1, '5.080')] |
| [2023-09-24 16:26:26,775][14088] Updated weights for policy 1, policy_version 32672 (0.0015) |
| [2023-09-24 16:26:26,776][14087] Updated weights for policy 0, policy_version 32688 (0.0017) |
| [2023-09-24 16:26:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 16748544. Throughput: 0: 800.7, 1: 800.7. Samples: 3036826. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:26:30,294][13256] Avg episode reward: [(0, '4.740'), (1, '4.990')] |
| [2023-09-24 16:26:35,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 16781312. Throughput: 0: 797.7, 1: 797.2. Samples: 3041374. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:26:35,294][13256] Avg episode reward: [(0, '4.910'), (1, '5.140')] |
| [2023-09-24 16:26:39,371][14088] Updated weights for policy 1, policy_version 32832 (0.0018) |
| [2023-09-24 16:26:39,371][14087] Updated weights for policy 0, policy_version 32848 (0.0016) |
| [2023-09-24 16:26:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 16814080. Throughput: 0: 802.6, 1: 803.5. Samples: 3051520. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:26:40,294][13256] Avg episode reward: [(0, '5.050'), (1, '5.110')] |
| [2023-09-24 16:26:45,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 16846848. Throughput: 0: 804.3, 1: 804.0. Samples: 3061060. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:26:45,294][13256] Avg episode reward: [(0, '5.140'), (1, '5.190')] |
| [2023-09-24 16:26:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 16879616. Throughput: 0: 801.5, 1: 802.1. Samples: 3065856. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:26:50,295][13256] Avg episode reward: [(0, '4.980'), (1, '5.360')] |
| [2023-09-24 16:26:50,308][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000032976_8441856.pth... |
| [2023-09-24 16:26:50,309][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000032960_8437760.pth... |
| [2023-09-24 16:26:50,344][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000029952_7667712.pth |
| [2023-09-24 16:26:50,347][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000029968_7671808.pth |
| [2023-09-24 16:26:52,082][14087] Updated weights for policy 0, policy_version 33008 (0.0017) |
| [2023-09-24 16:26:52,083][14088] Updated weights for policy 1, policy_version 32992 (0.0018) |
| [2023-09-24 16:26:55,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 16912384. Throughput: 0: 806.1, 1: 807.4. Samples: 3075625. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:26:55,294][13256] Avg episode reward: [(0, '4.980'), (1, '5.240')] |
| [2023-09-24 16:27:00,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 16945152. Throughput: 0: 807.4, 1: 807.0. Samples: 3085254. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:27:00,294][13256] Avg episode reward: [(0, '4.920'), (1, '5.330')] |
| [2023-09-24 16:27:04,710][14087] Updated weights for policy 0, policy_version 33168 (0.0017) |
| [2023-09-24 16:27:04,710][14088] Updated weights for policy 1, policy_version 33152 (0.0018) |
| [2023-09-24 16:27:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6428.6). Total num frames: 16977920. Throughput: 0: 810.8, 1: 810.8. Samples: 3090259. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:27:05,294][13256] Avg episode reward: [(0, '4.910'), (1, '5.110')] |
| [2023-09-24 16:27:10,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 17010688. Throughput: 0: 813.8, 1: 811.4. Samples: 3100082. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:27:10,294][13256] Avg episode reward: [(0, '4.560'), (1, '5.210')] |
| [2023-09-24 16:27:15,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17043456. Throughput: 0: 804.6, 1: 805.2. Samples: 3109265. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:27:15,294][13256] Avg episode reward: [(0, '4.680'), (1, '5.300')] |
| [2023-09-24 16:27:17,540][14088] Updated weights for policy 1, policy_version 33312 (0.0016) |
| [2023-09-24 16:27:17,540][14087] Updated weights for policy 0, policy_version 33328 (0.0016) |
| [2023-09-24 16:27:20,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 17076224. Throughput: 0: 809.5, 1: 808.0. Samples: 3114164. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:27:20,294][13256] Avg episode reward: [(0, '4.590'), (1, '5.110')] |
| [2023-09-24 16:27:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 17108992. Throughput: 0: 801.0, 1: 800.7. Samples: 3123594. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:27:25,294][13256] Avg episode reward: [(0, '4.690'), (1, '5.110')] |
| [2023-09-24 16:27:30,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17133568. Throughput: 0: 804.1, 1: 804.4. Samples: 3133440. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:27:30,294][13256] Avg episode reward: [(0, '4.630'), (1, '5.330')] |
| [2023-09-24 16:27:30,308][14088] Updated weights for policy 1, policy_version 33472 (0.0016) |
| [2023-09-24 16:27:30,308][14087] Updated weights for policy 0, policy_version 33488 (0.0017) |
| [2023-09-24 16:27:35,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.0, 300 sec: 6414.7). Total num frames: 17166336. Throughput: 0: 804.3, 1: 803.7. Samples: 3138215. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:27:35,294][13256] Avg episode reward: [(0, '4.680'), (1, '5.380')] |
| [2023-09-24 16:27:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17199104. Throughput: 0: 803.1, 1: 803.2. Samples: 3147908. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:27:40,294][13256] Avg episode reward: [(0, '4.830'), (1, '5.530')] |
| [2023-09-24 16:27:42,908][14088] Updated weights for policy 1, policy_version 33632 (0.0019) |
| [2023-09-24 16:27:42,908][14087] Updated weights for policy 0, policy_version 33648 (0.0017) |
| [2023-09-24 16:27:45,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6414.8). Total num frames: 17231872. Throughput: 0: 808.0, 1: 808.0. Samples: 3157972. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:27:45,294][13256] Avg episode reward: [(0, '4.840'), (1, '5.250')] |
| [2023-09-24 16:27:50,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17264640. Throughput: 0: 802.6, 1: 802.7. Samples: 3162498. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:27:50,294][13256] Avg episode reward: [(0, '4.910'), (1, '5.450')] |
| [2023-09-24 16:27:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17297408. Throughput: 0: 801.8, 1: 803.7. Samples: 3172328. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:27:55,294][13256] Avg episode reward: [(0, '4.780'), (1, '5.130')] |
| [2023-09-24 16:27:55,725][14088] Updated weights for policy 1, policy_version 33792 (0.0020) |
| [2023-09-24 16:27:55,725][14087] Updated weights for policy 0, policy_version 33808 (0.0020) |
| [2023-09-24 16:28:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17330176. Throughput: 0: 807.1, 1: 807.1. Samples: 3181905. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:28:00,294][13256] Avg episode reward: [(0, '4.950'), (1, '4.960')] |
| [2023-09-24 16:28:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17362944. Throughput: 0: 804.8, 1: 806.8. Samples: 3186688. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:28:05,294][13256] Avg episode reward: [(0, '4.830'), (1, '5.050')] |
| [2023-09-24 16:28:08,309][14088] Updated weights for policy 1, policy_version 33952 (0.0017) |
| [2023-09-24 16:28:08,310][14087] Updated weights for policy 0, policy_version 33968 (0.0017) |
| [2023-09-24 16:28:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17395712. Throughput: 0: 810.2, 1: 811.0. Samples: 3196546. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:28:10,294][13256] Avg episode reward: [(0, '4.780'), (1, '5.050')] |
| [2023-09-24 16:28:15,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17428480. Throughput: 0: 808.1, 1: 806.7. Samples: 3206107. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:28:15,293][13256] Avg episode reward: [(0, '4.580'), (1, '5.080')] |
| [2023-09-24 16:28:20,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17461248. Throughput: 0: 809.2, 1: 809.9. Samples: 3211076. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:28:20,294][13256] Avg episode reward: [(0, '4.370'), (1, '5.030')] |
| [2023-09-24 16:28:21,131][14087] Updated weights for policy 0, policy_version 34128 (0.0017) |
| [2023-09-24 16:28:21,131][14088] Updated weights for policy 1, policy_version 34112 (0.0018) |
| [2023-09-24 16:28:25,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17494016. Throughput: 0: 805.4, 1: 804.3. Samples: 3220344. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:28:25,294][13256] Avg episode reward: [(0, '4.510'), (1, '5.210')] |
| [2023-09-24 16:28:30,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 17526784. Throughput: 0: 796.9, 1: 797.1. Samples: 3229701. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:28:30,293][13256] Avg episode reward: [(0, '4.440'), (1, '5.320')] |
| [2023-09-24 16:28:34,166][14087] Updated weights for policy 0, policy_version 34288 (0.0016) |
| [2023-09-24 16:28:34,167][14088] Updated weights for policy 1, policy_version 34272 (0.0015) |
| [2023-09-24 16:28:35,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17551360. Throughput: 0: 799.9, 1: 799.9. Samples: 3234488. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:28:35,294][13256] Avg episode reward: [(0, '4.560'), (1, '5.120')] |
| [2023-09-24 16:28:40,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17584128. Throughput: 0: 796.4, 1: 797.0. Samples: 3244032. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 16:28:40,294][13256] Avg episode reward: [(0, '4.600'), (1, '5.240')] |
| [2023-09-24 16:28:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 17616896. Throughput: 0: 798.2, 1: 796.6. Samples: 3253675. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 16:28:45,294][13256] Avg episode reward: [(0, '4.730'), (1, '5.030')] |
| [2023-09-24 16:28:46,971][14088] Updated weights for policy 1, policy_version 34432 (0.0017) |
| [2023-09-24 16:28:46,971][14087] Updated weights for policy 0, policy_version 34448 (0.0017) |
| [2023-09-24 16:28:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17649664. Throughput: 0: 796.5, 1: 796.4. Samples: 3258371. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 16:28:50,294][13256] Avg episode reward: [(0, '4.740'), (1, '4.930')] |
| [2023-09-24 16:28:50,305][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000034480_8826880.pth... |
| [2023-09-24 16:28:50,305][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000034464_8822784.pth... |
| [2023-09-24 16:28:50,335][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000031472_8056832.pth |
| [2023-09-24 16:28:50,342][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000031456_8052736.pth |
| [2023-09-24 16:28:55,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 17682432. Throughput: 0: 795.6, 1: 794.7. Samples: 3268112. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 16:28:55,294][13256] Avg episode reward: [(0, '4.820'), (1, '4.960')] |
| [2023-09-24 16:28:59,720][14088] Updated weights for policy 1, policy_version 34592 (0.0013) |
| [2023-09-24 16:28:59,722][14087] Updated weights for policy 0, policy_version 34608 (0.0016) |
| [2023-09-24 16:29:00,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 17715200. Throughput: 0: 794.3, 1: 795.7. Samples: 3277657. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:29:00,294][13256] Avg episode reward: [(0, '4.800'), (1, '4.730')] |
| [2023-09-24 16:29:05,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17747968. Throughput: 0: 796.2, 1: 795.6. Samples: 3282706. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:29:05,294][13256] Avg episode reward: [(0, '5.010'), (1, '4.950')] |
| [2023-09-24 16:29:10,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17780736. Throughput: 0: 797.7, 1: 798.6. Samples: 3292177. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:29:10,294][13256] Avg episode reward: [(0, '4.850'), (1, '4.640')] |
| [2023-09-24 16:29:12,446][14088] Updated weights for policy 1, policy_version 34752 (0.0017) |
| [2023-09-24 16:29:12,446][14087] Updated weights for policy 0, policy_version 34768 (0.0017) |
| [2023-09-24 16:29:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 17813504. Throughput: 0: 799.2, 1: 798.6. Samples: 3301602. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:29:15,294][13256] Avg episode reward: [(0, '4.930'), (1, '4.740')] |
| [2023-09-24 16:29:20,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17846272. Throughput: 0: 802.7, 1: 802.6. Samples: 3306724. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:29:20,294][13256] Avg episode reward: [(0, '4.630'), (1, '4.950')] |
| [2023-09-24 16:29:25,160][14087] Updated weights for policy 0, policy_version 34928 (0.0017) |
| [2023-09-24 16:29:25,160][14088] Updated weights for policy 1, policy_version 34912 (0.0018) |
| [2023-09-24 16:29:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 17879040. Throughput: 0: 802.0, 1: 801.7. Samples: 3316201. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:29:25,294][13256] Avg episode reward: [(0, '4.630'), (1, '5.000')] |
| [2023-09-24 16:29:30,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 17911808. Throughput: 0: 802.4, 1: 804.1. Samples: 3325967. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:29:30,294][13256] Avg episode reward: [(0, '4.500'), (1, '4.960')] |
| [2023-09-24 16:29:35,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 17944576. Throughput: 0: 807.8, 1: 806.8. Samples: 3331028. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:29:35,293][13256] Avg episode reward: [(0, '4.660'), (1, '4.960')] |
| [2023-09-24 16:29:37,709][14087] Updated weights for policy 0, policy_version 35088 (0.0017) |
| [2023-09-24 16:29:37,709][14088] Updated weights for policy 1, policy_version 35072 (0.0018) |
| [2023-09-24 16:29:40,293][13256] Fps is (10 sec: 6144.1, 60 sec: 6485.3, 300 sec: 6428.6). Total num frames: 17973248. Throughput: 0: 805.1, 1: 804.9. Samples: 3340562. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:29:40,294][13256] Avg episode reward: [(0, '4.650'), (1, '5.210')] |
| [2023-09-24 16:29:45,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18001920. Throughput: 0: 809.3, 1: 807.9. Samples: 3350431. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:29:45,294][13256] Avg episode reward: [(0, '4.620'), (1, '4.780')] |
| [2023-09-24 16:29:50,293][13256] Fps is (10 sec: 6143.9, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 18034688. Throughput: 0: 803.9, 1: 804.2. Samples: 3355069. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:29:50,294][13256] Avg episode reward: [(0, '4.620'), (1, '4.890')] |
| [2023-09-24 16:29:50,491][14087] Updated weights for policy 0, policy_version 35248 (0.0015) |
| [2023-09-24 16:29:50,492][14088] Updated weights for policy 1, policy_version 35232 (0.0015) |
| [2023-09-24 16:29:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.7). Total num frames: 18067456. Throughput: 0: 807.4, 1: 807.8. Samples: 3364864. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:29:55,294][13256] Avg episode reward: [(0, '4.680'), (1, '4.790')] |
| [2023-09-24 16:30:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18100224. Throughput: 0: 812.4, 1: 812.2. Samples: 3374712. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:30:00,294][13256] Avg episode reward: [(0, '4.820'), (1, '5.100')] |
| [2023-09-24 16:30:03,224][14088] Updated weights for policy 1, policy_version 35392 (0.0016) |
| [2023-09-24 16:30:03,224][14087] Updated weights for policy 0, policy_version 35408 (0.0018) |
| [2023-09-24 16:30:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18132992. Throughput: 0: 805.1, 1: 805.6. Samples: 3379202. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) |
| [2023-09-24 16:30:05,294][13256] Avg episode reward: [(0, '4.860'), (1, '5.050')] |
| [2023-09-24 16:30:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18165760. Throughput: 0: 811.2, 1: 810.5. Samples: 3389179. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:30:10,294][13256] Avg episode reward: [(0, '4.670'), (1, '5.170')] |
| [2023-09-24 16:30:15,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18198528. Throughput: 0: 809.2, 1: 808.5. Samples: 3398767. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:30:15,294][13256] Avg episode reward: [(0, '4.850'), (1, '5.100')] |
| [2023-09-24 16:30:15,845][14088] Updated weights for policy 1, policy_version 35552 (0.0018) |
| [2023-09-24 16:30:15,845][14087] Updated weights for policy 0, policy_version 35568 (0.0016) |
| [2023-09-24 16:30:20,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18231296. Throughput: 0: 807.7, 1: 808.9. Samples: 3403776. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:30:20,294][13256] Avg episode reward: [(0, '4.880'), (1, '5.360')] |
| [2023-09-24 16:30:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18264064. Throughput: 0: 810.4, 1: 810.0. Samples: 3413481. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:30:25,294][13256] Avg episode reward: [(0, '4.880'), (1, '5.510')] |
| [2023-09-24 16:30:28,532][14088] Updated weights for policy 1, policy_version 35712 (0.0018) |
| [2023-09-24 16:30:28,533][14087] Updated weights for policy 0, policy_version 35728 (0.0016) |
| [2023-09-24 16:30:30,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 18296832. Throughput: 0: 805.8, 1: 806.9. Samples: 3423006. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:30:30,294][13256] Avg episode reward: [(0, '4.730'), (1, '5.280')] |
| [2023-09-24 16:30:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 18329600. Throughput: 0: 809.8, 1: 810.4. Samples: 3427974. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:30:35,294][13256] Avg episode reward: [(0, '4.700'), (1, '5.180')] |
| [2023-09-24 16:30:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6485.4, 300 sec: 6442.5). Total num frames: 18362368. Throughput: 0: 809.7, 1: 808.6. Samples: 3437687. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:30:40,293][13256] Avg episode reward: [(0, '4.620'), (1, '5.100')] |
| [2023-09-24 16:30:41,111][14088] Updated weights for policy 1, policy_version 35872 (0.0015) |
| [2023-09-24 16:30:41,111][14087] Updated weights for policy 0, policy_version 35888 (0.0017) |
| [2023-09-24 16:30:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 18395136. Throughput: 0: 805.8, 1: 806.8. Samples: 3447281. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:30:45,294][13256] Avg episode reward: [(0, '4.710'), (1, '5.410')] |
| [2023-09-24 16:30:50,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 18427904. Throughput: 0: 812.1, 1: 811.5. Samples: 3452266. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) |
| [2023-09-24 16:30:50,293][13256] Avg episode reward: [(0, '4.690'), (1, '5.030')] |
| [2023-09-24 16:30:50,302][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000035984_9211904.pth... |
| [2023-09-24 16:30:50,302][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000036000_9216000.pth... |
| [2023-09-24 16:30:50,332][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000032960_8437760.pth |
| [2023-09-24 16:30:50,340][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000032976_8441856.pth |
| [2023-09-24 16:30:53,910][14087] Updated weights for policy 0, policy_version 36048 (0.0017) |
| [2023-09-24 16:30:53,911][14088] Updated weights for policy 1, policy_version 36032 (0.0016) |
| [2023-09-24 16:30:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 18460672. Throughput: 0: 804.5, 1: 805.0. Samples: 3461606. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:30:55,294][13256] Avg episode reward: [(0, '4.920'), (1, '5.240')] |
| [2023-09-24 16:31:00,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 18493440. Throughput: 0: 806.2, 1: 807.0. Samples: 3471360. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:31:00,295][13256] Avg episode reward: [(0, '5.000'), (1, '5.170')] |
| [2023-09-24 16:31:05,293][13256] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6428.6). Total num frames: 18522112. Throughput: 0: 805.4, 1: 804.9. Samples: 3476238. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:31:05,294][13256] Avg episode reward: [(0, '5.040'), (1, '5.210')] |
| [2023-09-24 16:31:06,591][14088] Updated weights for policy 1, policy_version 36192 (0.0016) |
| [2023-09-24 16:31:06,591][14087] Updated weights for policy 0, policy_version 36208 (0.0014) |
| [2023-09-24 16:31:10,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18550784. Throughput: 0: 802.0, 1: 803.0. Samples: 3485704. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:31:10,294][13256] Avg episode reward: [(0, '5.210'), (1, '5.170')] |
| [2023-09-24 16:31:15,293][13256] Fps is (10 sec: 6144.1, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 18583552. Throughput: 0: 806.1, 1: 805.6. Samples: 3495531. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:31:15,293][13256] Avg episode reward: [(0, '4.980'), (1, '4.940')] |
| [2023-09-24 16:31:19,508][14088] Updated weights for policy 1, policy_version 36352 (0.0018) |
| [2023-09-24 16:31:19,508][14087] Updated weights for policy 0, policy_version 36368 (0.0017) |
| [2023-09-24 16:31:20,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 18616320. Throughput: 0: 800.8, 1: 800.5. Samples: 3500033. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:31:20,294][13256] Avg episode reward: [(0, '4.860'), (1, '4.590')] |
| [2023-09-24 16:31:25,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 18649088. Throughput: 0: 801.4, 1: 801.2. Samples: 3509807. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:31:25,294][13256] Avg episode reward: [(0, '5.010'), (1, '4.610')] |
| [2023-09-24 16:31:30,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 18681856. Throughput: 0: 798.8, 1: 798.3. Samples: 3519148. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:31:30,294][13256] Avg episode reward: [(0, '5.130'), (1, '4.460')] |
| [2023-09-24 16:31:32,384][14088] Updated weights for policy 1, policy_version 36512 (0.0017) |
| [2023-09-24 16:31:32,384][14087] Updated weights for policy 0, policy_version 36528 (0.0017) |
| [2023-09-24 16:31:35,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 18714624. Throughput: 0: 798.9, 1: 798.4. Samples: 3524147. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:31:35,293][13256] Avg episode reward: [(0, '5.000'), (1, '4.520')] |
| [2023-09-24 16:31:40,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 18747392. Throughput: 0: 798.8, 1: 798.6. Samples: 3533489. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:31:40,294][13256] Avg episode reward: [(0, '5.240'), (1, '4.480')] |
| [2023-09-24 16:31:45,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 18771968. Throughput: 0: 796.4, 1: 796.4. Samples: 3543040. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 16:31:45,294][13256] Avg episode reward: [(0, '5.500'), (1, '4.680')] |
| [2023-09-24 16:31:45,317][13827] Saving new best policy, reward=5.500! |
| [2023-09-24 16:31:45,322][14087] Updated weights for policy 0, policy_version 36688 (0.0018) |
| [2023-09-24 16:31:45,322][14088] Updated weights for policy 1, policy_version 36672 (0.0019) |
| [2023-09-24 16:31:50,293][13256] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6414.8). Total num frames: 18804736. Throughput: 0: 795.6, 1: 795.6. Samples: 3547844. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 16:31:50,294][13256] Avg episode reward: [(0, '5.780'), (1, '4.890')] |
| [2023-09-24 16:31:50,311][13827] Saving new best policy, reward=5.780! |
| [2023-09-24 16:31:55,293][13256] Fps is (10 sec: 6963.3, 60 sec: 6348.8, 300 sec: 6428.6). Total num frames: 18841600. Throughput: 0: 798.8, 1: 798.4. Samples: 3557579. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 16:31:55,294][13256] Avg episode reward: [(0, '5.620'), (1, '4.980')] |
| [2023-09-24 16:31:57,821][14088] Updated weights for policy 1, policy_version 36832 (0.0017) |
| [2023-09-24 16:31:57,822][14087] Updated weights for policy 0, policy_version 36848 (0.0018) |
| [2023-09-24 16:32:00,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6414.8). Total num frames: 18870272. Throughput: 0: 800.5, 1: 801.4. Samples: 3567616. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 16:32:00,294][13256] Avg episode reward: [(0, '5.700'), (1, '5.270')] |
| [2023-09-24 16:32:05,293][13256] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6414.8). Total num frames: 18903040. Throughput: 0: 801.5, 1: 801.1. Samples: 3572150. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) |
| [2023-09-24 16:32:05,294][13256] Avg episode reward: [(0, '5.590'), (1, '5.080')] |
| [2023-09-24 16:32:10,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18935808. Throughput: 0: 801.1, 1: 802.3. Samples: 3581961. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:32:10,294][13256] Avg episode reward: [(0, '5.440'), (1, '5.330')] |
| [2023-09-24 16:32:10,449][14088] Updated weights for policy 1, policy_version 36992 (0.0017) |
| [2023-09-24 16:32:10,450][14087] Updated weights for policy 0, policy_version 37008 (0.0016) |
| [2023-09-24 16:32:15,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 18968576. Throughput: 0: 809.6, 1: 810.0. Samples: 3592028. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:32:15,294][13256] Avg episode reward: [(0, '5.540'), (1, '5.350')] |
| [2023-09-24 16:32:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6414.8). Total num frames: 19001344. Throughput: 0: 805.6, 1: 806.3. Samples: 3596683. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:32:20,294][13256] Avg episode reward: [(0, '5.070'), (1, '5.330')] |
| [2023-09-24 16:32:22,999][14087] Updated weights for policy 0, policy_version 37168 (0.0018) |
| [2023-09-24 16:32:22,999][14088] Updated weights for policy 1, policy_version 37152 (0.0015) |
| [2023-09-24 16:32:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 19034112. Throughput: 0: 811.2, 1: 811.9. Samples: 3606528. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:32:25,294][13256] Avg episode reward: [(0, '5.130'), (1, '5.290')] |
| [2023-09-24 16:32:30,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 19066880. Throughput: 0: 816.8, 1: 816.3. Samples: 3616528. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) |
| [2023-09-24 16:32:30,294][13256] Avg episode reward: [(0, '4.910'), (1, '5.300')] |
| [2023-09-24 16:32:35,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 19099648. Throughput: 0: 813.6, 1: 813.6. Samples: 3621070. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:32:35,293][13256] Avg episode reward: [(0, '4.960'), (1, '5.210')] |
| [2023-09-24 16:32:35,672][14088] Updated weights for policy 1, policy_version 37312 (0.0019) |
| [2023-09-24 16:32:35,672][14087] Updated weights for policy 0, policy_version 37328 (0.0018) |
| [2023-09-24 16:32:40,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 19132416. Throughput: 0: 816.6, 1: 815.8. Samples: 3631037. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:32:40,294][13256] Avg episode reward: [(0, '5.010'), (1, '5.000')] |
| [2023-09-24 16:32:45,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 19165184. Throughput: 0: 809.8, 1: 809.1. Samples: 3640469. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:32:45,294][13256] Avg episode reward: [(0, '5.090'), (1, '4.990')] |
| [2023-09-24 16:32:48,372][14088] Updated weights for policy 1, policy_version 37472 (0.0017) |
| [2023-09-24 16:32:48,372][14087] Updated weights for policy 0, policy_version 37488 (0.0017) |
| [2023-09-24 16:32:50,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 19197952. Throughput: 0: 814.1, 1: 814.5. Samples: 3645440. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:32:50,294][13256] Avg episode reward: [(0, '5.100'), (1, '5.020')] |
| [2023-09-24 16:32:50,305][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000037488_9596928.pth... |
| [2023-09-24 16:32:50,305][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000037504_9601024.pth... |
| [2023-09-24 16:32:50,334][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000034464_8822784.pth |
| [2023-09-24 16:32:50,340][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000034480_8826880.pth |
| [2023-09-24 16:32:55,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6485.3, 300 sec: 6442.5). Total num frames: 19230720. Throughput: 0: 813.0, 1: 812.2. Samples: 3655095. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:32:55,294][13256] Avg episode reward: [(0, '5.080'), (1, '5.070')] |
| [2023-09-24 16:33:00,293][13256] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 19263488. Throughput: 0: 808.3, 1: 807.6. Samples: 3664744. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:33:00,293][13256] Avg episode reward: [(0, '4.850'), (1, '5.220')] |
| [2023-09-24 16:33:00,963][14087] Updated weights for policy 0, policy_version 37648 (0.0017) |
| [2023-09-24 16:33:00,963][14088] Updated weights for policy 1, policy_version 37632 (0.0018) |
| [2023-09-24 16:33:05,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 19296256. Throughput: 0: 813.7, 1: 813.4. Samples: 3669901. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:33:05,294][13256] Avg episode reward: [(0, '4.640'), (1, '5.180')] |
| [2023-09-24 16:33:10,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 19329024. Throughput: 0: 810.4, 1: 810.0. Samples: 3679446. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:33:10,294][13256] Avg episode reward: [(0, '4.890'), (1, '5.180')] |
| [2023-09-24 16:33:13,563][14087] Updated weights for policy 0, policy_version 37808 (0.0017) |
| [2023-09-24 16:33:13,564][14088] Updated weights for policy 1, policy_version 37792 (0.0017) |
| [2023-09-24 16:33:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 19361792. Throughput: 0: 806.2, 1: 806.4. Samples: 3689095. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:33:15,294][13256] Avg episode reward: [(0, '4.730'), (1, '5.260')] |
| [2023-09-24 16:33:20,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 19394560. Throughput: 0: 812.6, 1: 813.1. Samples: 3694224. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) |
| [2023-09-24 16:33:20,294][13256] Avg episode reward: [(0, '4.740'), (1, '5.390')] |
| [2023-09-24 16:33:25,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 19427328. Throughput: 0: 806.6, 1: 807.3. Samples: 3703666. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:33:25,294][13256] Avg episode reward: [(0, '4.550'), (1, '5.450')] |
| [2023-09-24 16:33:26,254][14088] Updated weights for policy 1, policy_version 37952 (0.0017) |
| [2023-09-24 16:33:26,254][14087] Updated weights for policy 0, policy_version 37968 (0.0018) |
| [2023-09-24 16:33:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 19460096. Throughput: 0: 809.0, 1: 809.8. Samples: 3713319. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:33:30,294][13256] Avg episode reward: [(0, '4.680'), (1, '5.240')] |
| [2023-09-24 16:33:35,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 19492864. Throughput: 0: 808.0, 1: 807.6. Samples: 3718141. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:33:35,293][13256] Avg episode reward: [(0, '4.630'), (1, '5.160')] |
| [2023-09-24 16:33:38,889][14088] Updated weights for policy 1, policy_version 38112 (0.0017) |
| [2023-09-24 16:33:38,890][14087] Updated weights for policy 0, policy_version 38128 (0.0016) |
| [2023-09-24 16:33:40,293][13256] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 19525632. Throughput: 0: 808.8, 1: 809.6. Samples: 3727923. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:33:40,294][13256] Avg episode reward: [(0, '4.790'), (1, '5.140')] |
| [2023-09-24 16:33:45,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 19558400. Throughput: 0: 809.2, 1: 810.0. Samples: 3737608. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:33:45,294][13256] Avg episode reward: [(0, '4.670'), (1, '4.900')] |
| [2023-09-24 16:33:50,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 19591168. Throughput: 0: 807.2, 1: 806.6. Samples: 3742524. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:33:50,294][13256] Avg episode reward: [(0, '4.760'), (1, '4.950')] |
| [2023-09-24 16:33:51,598][14087] Updated weights for policy 0, policy_version 38288 (0.0018) |
| [2023-09-24 16:33:51,598][14088] Updated weights for policy 1, policy_version 38272 (0.0018) |
| [2023-09-24 16:33:55,293][13256] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 19615744. Throughput: 0: 806.6, 1: 806.5. Samples: 3752034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:33:55,294][13256] Avg episode reward: [(0, '4.810'), (1, '5.050')] |
| [2023-09-24 16:34:00,293][13256] Fps is (10 sec: 5734.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 19648512. Throughput: 0: 811.9, 1: 812.2. Samples: 3762176. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:00,294][13256] Avg episode reward: [(0, '4.810'), (1, '4.840')] |
| [2023-09-24 16:34:04,149][14088] Updated weights for policy 1, policy_version 38432 (0.0019) |
| [2023-09-24 16:34:04,149][14087] Updated weights for policy 0, policy_version 38448 (0.0017) |
| [2023-09-24 16:34:05,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 19681280. Throughput: 0: 806.5, 1: 806.0. Samples: 3766784. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:05,294][13256] Avg episode reward: [(0, '5.010'), (1, '5.070')] |
| [2023-09-24 16:34:10,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 19714048. Throughput: 0: 810.6, 1: 810.6. Samples: 3776624. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:10,294][13256] Avg episode reward: [(0, '5.010'), (1, '5.090')] |
| [2023-09-24 16:34:15,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 19746816. Throughput: 0: 815.6, 1: 815.3. Samples: 3786710. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:15,294][13256] Avg episode reward: [(0, '4.900'), (1, '4.990')] |
| [2023-09-24 16:34:16,714][14087] Updated weights for policy 0, policy_version 38608 (0.0019) |
| [2023-09-24 16:34:16,715][14088] Updated weights for policy 1, policy_version 38592 (0.0019) |
| [2023-09-24 16:34:20,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 19779584. Throughput: 0: 811.7, 1: 811.4. Samples: 3791181. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:20,294][13256] Avg episode reward: [(0, '5.110'), (1, '4.690')] |
| [2023-09-24 16:34:25,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 19812352. Throughput: 0: 812.8, 1: 813.0. Samples: 3801088. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:25,294][13256] Avg episode reward: [(0, '4.870'), (1, '4.630')] |
| [2023-09-24 16:34:29,446][14088] Updated weights for policy 1, policy_version 38752 (0.0018) |
| [2023-09-24 16:34:29,446][14087] Updated weights for policy 0, policy_version 38768 (0.0017) |
| [2023-09-24 16:34:30,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 19845120. Throughput: 0: 812.1, 1: 811.8. Samples: 3810686. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:30,294][13256] Avg episode reward: [(0, '4.970'), (1, '4.570')] |
| [2023-09-24 16:34:35,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6456.4). Total num frames: 19877888. Throughput: 0: 809.4, 1: 810.6. Samples: 3815424. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:35,294][13256] Avg episode reward: [(0, '4.960'), (1, '4.400')] |
| [2023-09-24 16:34:40,293][13256] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 19910656. Throughput: 0: 813.6, 1: 812.6. Samples: 3825215. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:40,294][13256] Avg episode reward: [(0, '5.100'), (1, '4.630')] |
| [2023-09-24 16:34:42,215][14088] Updated weights for policy 1, policy_version 38912 (0.0015) |
| [2023-09-24 16:34:42,216][14087] Updated weights for policy 0, policy_version 38928 (0.0017) |
| [2023-09-24 16:34:45,293][13256] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 19943424. Throughput: 0: 805.4, 1: 805.2. Samples: 3834653. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:45,294][13256] Avg episode reward: [(0, '4.920'), (1, '4.760')] |
| [2023-09-24 16:34:50,293][13256] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 19976192. Throughput: 0: 810.4, 1: 810.6. Samples: 3839726. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) |
| [2023-09-24 16:34:50,294][13256] Avg episode reward: [(0, '5.050'), (1, '4.910')] |
| [2023-09-24 16:34:50,302][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000039008_9986048.pth... |
| [2023-09-24 16:34:50,302][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000039024_9990144.pth... |
| [2023-09-24 16:34:50,340][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000036000_9216000.pth |
| [2023-09-24 16:34:50,345][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000035984_9211904.pth |
| [2023-09-24 16:34:54,913][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000039072_10002432.pth... |
| [2023-09-24 16:34:54,913][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000039088_10006528.pth... |
| [2023-09-24 16:34:54,914][14087] Updated weights for policy 0, policy_version 39088 (0.0017) |
| [2023-09-24 16:34:54,914][14127] Stopping RolloutWorker_w6... |
| [2023-09-24 16:34:54,914][14126] Stopping RolloutWorker_w5... |
| [2023-09-24 16:34:54,914][14123] Stopping RolloutWorker_w4... |
| [2023-09-24 16:34:54,914][14088] Updated weights for policy 1, policy_version 39072 (0.0016) |
| [2023-09-24 16:34:54,914][14128] Stopping RolloutWorker_w7... |
| [2023-09-24 16:34:54,914][14124] Stopping RolloutWorker_w3... |
| [2023-09-24 16:34:54,914][14120] Stopping RolloutWorker_w0... |
| [2023-09-24 16:34:54,914][14125] Stopping RolloutWorker_w1... |
| [2023-09-24 16:34:54,914][14122] Stopping RolloutWorker_w2... |
| [2023-09-24 16:34:54,915][14127] Loop rollout_proc6_evt_loop terminating... |
| [2023-09-24 16:34:54,915][14123] Loop rollout_proc4_evt_loop terminating... |
| [2023-09-24 16:34:54,915][14126] Loop rollout_proc5_evt_loop terminating... |
| [2023-09-24 16:34:54,915][14120] Loop rollout_proc0_evt_loop terminating... |
| [2023-09-24 16:34:54,915][14128] Loop rollout_proc7_evt_loop terminating... |
| [2023-09-24 16:34:54,915][14125] Loop rollout_proc1_evt_loop terminating... |
| [2023-09-24 16:34:54,915][14124] Loop rollout_proc3_evt_loop terminating... |
| [2023-09-24 16:34:54,915][14122] Loop rollout_proc2_evt_loop terminating... |
| [2023-09-24 16:34:54,916][13256] Component RolloutWorker_w6 stopped! |
| [2023-09-24 16:34:54,917][13256] Component RolloutWorker_w5 stopped! |
| [2023-09-24 16:34:54,918][13256] Component RolloutWorker_w4 stopped! |
| [2023-09-24 16:34:54,919][13256] Component RolloutWorker_w3 stopped! |
| [2023-09-24 16:34:54,919][13256] Component RolloutWorker_w7 stopped! |
| [2023-09-24 16:34:54,920][13256] Component RolloutWorker_w2 stopped! |
| [2023-09-24 16:34:54,920][13256] Component RolloutWorker_w1 stopped! |
| [2023-09-24 16:34:54,921][13256] Component RolloutWorker_w0 stopped! |
| [2023-09-24 16:34:54,921][13256] Component Batcher_0 stopped! |
| [2023-09-24 16:34:54,925][13256] Component Batcher_1 stopped! |
| [2023-09-24 16:34:54,921][13827] Stopping Batcher_0... |
| [2023-09-24 16:34:54,937][14087] Weights refcount: 2 0 |
| [2023-09-24 16:34:54,938][14087] Stopping InferenceWorker_p0-w0... |
| [2023-09-24 16:34:54,938][14087] Loop inference_proc0-0_evt_loop terminating... |
| [2023-09-24 16:34:54,938][13256] Component InferenceWorker_p0-w0 stopped! |
| [2023-09-24 16:34:54,934][13996] Stopping Batcher_1... |
| [2023-09-24 16:34:54,944][14088] Weights refcount: 2 0 |
| [2023-09-24 16:34:54,944][13996] Loop batcher_evt_loop terminating... |
| [2023-09-24 16:34:54,945][13996] Removing ./train_atari/Berzerk/checkpoint_p1/checkpoint_000037488_9596928.pth |
| [2023-09-24 16:34:54,945][14088] Stopping InferenceWorker_p1-w0... |
| [2023-09-24 16:34:54,946][14088] Loop inference_proc1-0_evt_loop terminating... |
| [2023-09-24 16:34:54,946][13256] Component InferenceWorker_p1-w0 stopped! |
| [2023-09-24 16:34:54,949][13996] Saving ./train_atari/Berzerk/checkpoint_p1/checkpoint_000039072_10002432.pth... |
| [2023-09-24 16:34:54,945][13827] Loop batcher_evt_loop terminating... |
| [2023-09-24 16:34:54,958][13827] Removing ./train_atari/Berzerk/checkpoint_p0/checkpoint_000037504_9601024.pth |
| [2023-09-24 16:34:54,963][13827] Saving ./train_atari/Berzerk/checkpoint_p0/checkpoint_000039088_10006528.pth... |
| [2023-09-24 16:34:54,984][13996] Stopping LearnerWorker_p1... |
| [2023-09-24 16:34:54,984][13996] Loop learner_proc1_evt_loop terminating... |
| [2023-09-24 16:34:54,986][13256] Component LearnerWorker_p1 stopped! |
| [2023-09-24 16:34:55,019][13827] Stopping LearnerWorker_p0... |
| [2023-09-24 16:34:55,020][13827] Loop learner_proc0_evt_loop terminating... |
| [2023-09-24 16:34:55,020][13256] Component LearnerWorker_p0 stopped! |
| [2023-09-24 16:34:55,021][13256] Waiting for process learner_proc0 to stop... |
| [2023-09-24 16:34:55,766][13256] Waiting for process learner_proc1 to stop... |
| [2023-09-24 16:34:55,767][13256] Waiting for process inference_proc0-0 to join... |
| [2023-09-24 16:34:55,767][13256] Waiting for process inference_proc1-0 to join... |
| [2023-09-24 16:34:55,768][13256] Waiting for process rollout_proc0 to join... |
| [2023-09-24 16:34:55,768][13256] Waiting for process rollout_proc1 to join... |
| [2023-09-24 16:34:55,768][13256] Waiting for process rollout_proc2 to join... |
| [2023-09-24 16:34:55,769][13256] Waiting for process rollout_proc3 to join... |
| [2023-09-24 16:34:55,769][13256] Waiting for process rollout_proc4 to join... |
| [2023-09-24 16:34:55,770][13256] Waiting for process rollout_proc5 to join... |
| [2023-09-24 16:34:55,770][13256] Waiting for process rollout_proc6 to join... |
| [2023-09-24 16:34:55,771][13256] Waiting for process rollout_proc7 to join... |
| [2023-09-24 16:34:55,771][13256] Batcher 0 profile tree view: |
| batching: 16.2861, releasing_batches: 1.3364 |
| [2023-09-24 16:34:55,771][13256] Batcher 1 profile tree view: |
| batching: 16.3421, releasing_batches: 1.3540 |
| [2023-09-24 16:34:55,772][13256] InferenceWorker_p0-w0 profile tree view: |
wait_policy: 0.0001
  wait_policy_total: 501.0548
update_model: 28.1576
  weight_update: 0.0017
one_step: 0.0040
  handle_policy_step: 1712.0195
    deserialize: 51.1994, stack: 11.9417, obs_to_device_normalize: 413.3766, forward: 826.7731, send_messages: 72.2617
    prepare_outputs: 226.5313
      to_cpu: 113.4787
| [2023-09-24 16:34:55,772][13256] InferenceWorker_p1-w0 profile tree view: |
wait_policy: 0.0001
  wait_policy_total: 484.7367
update_model: 28.2374
  weight_update: 0.0016
one_step: 0.0037
  handle_policy_step: 1727.1135
    deserialize: 51.5490, stack: 12.2566, obs_to_device_normalize: 419.9343, forward: 829.6416, send_messages: 71.1474
    prepare_outputs: 230.9195
      to_cpu: 116.6916
| [2023-09-24 16:34:55,772][13256] Learner 0 profile tree view: |
misc: 0.0115, prepare_batch: 25.1406
train: 357.6501
  epoch_init: 0.0782, minibatch_init: 2.4569, losses_postprocess: 48.2564, kl_divergence: 4.1384, after_optimizer: 8.4481
  calculate_losses: 34.3369
    losses_init: 0.0742, forward_head: 10.9597, bptt_initial: 0.3359, bptt: 0.3553, tail: 7.8153, advantages_returns: 2.3333, losses: 9.7093
  update: 256.8421
    clip: 125.3540
| [2023-09-24 16:34:55,773][13256] Learner 1 profile tree view: |
misc: 0.0121, prepare_batch: 25.2927
train: 356.2741
  epoch_init: 0.0825, minibatch_init: 2.3259, losses_postprocess: 48.0735, kl_divergence: 4.1137, after_optimizer: 8.4996
  calculate_losses: 34.1863
    losses_init: 0.0774, forward_head: 10.9160, bptt_initial: 0.3370, bptt: 0.3509, tail: 7.8455, advantages_returns: 2.3153, losses: 9.6132
  update: 255.8876
    clip: 124.4357
| [2023-09-24 16:34:55,773][13256] RolloutWorker_w0 profile tree view: |
wait_for_trajectories: 0.3052, enqueue_policy_requests: 32.5896, env_step: 791.2354, overhead: 22.4213, complete_rollouts: 0.8138
save_policy_outputs: 40.9503
  split_output_tensors: 14.1845
| [2023-09-24 16:34:55,773][13256] RolloutWorker_w7 profile tree view: |
wait_for_trajectories: 0.3116, enqueue_policy_requests: 32.2810, env_step: 771.3971, overhead: 22.6228, complete_rollouts: 0.8253
save_policy_outputs: 41.3401
  split_output_tensors: 14.2515
| [2023-09-24 16:34:55,773][13256] Loop Runner_EvtLoop terminating... |
| [2023-09-24 16:34:55,774][13256] Runner profile tree view: |
| main_loop: 2401.4415 |
| [2023-09-24 16:34:55,774][13256] Collected {0: 10006528, 1: 10002432}, FPS: 6413.2 |