[03/28 22:27:42 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml
[03/28 22:27:42 TiTok]: Config:
experiment:
  project: stage2
  name: stage2
  output_dir: /mnt/books/train_stage2/order_32_stage2/
  max_train_examples: 1281167
  save_every: 10000
  eval_every: 1000000
  generate_every: 10000
  log_every: 100
  log_grad_norm_every: 1000
  resume: true
  init_weight: ckpt/OrderTok.bin
  logging_dir: /mnt/books/train_stage2/order_32_stage2/logs
model:
  vq_model:
    codebook_size: 4096
    token_size: 12
    use_l2_norm: true
    commitment_cost: 0.25
    vit_enc_model_size: large
    vit_dec_model_size: large
    vit_enc_patch_size: 16
    vit_dec_patch_size: 16
    num_latent_tokens: 32
    layers_x: 18
    layers_token: 2
    embedding_width: 1024
    width: 256
    finetune_decoder: true
    pretrained_tokenizer_weight: maskgit-vqgan-imagenet-f16-256.bin
losses:
  discriminator_start: 20000
  quantizer_weight: 0.0
  discriminator_factor: 1.0
  discriminator_weight: 0.01
  perceptual_loss: convnext_s
  perceptual_weight: 0.1
  reconstruction_loss: l2
  reconstruction_weight: 1.0
  lecam_regularization_weight: 0.001
dataset:
  params:
    train_shards_path_or_url: imagenet/imagenet1k-train-{0000..1023}.tar
    eval_shards_path_or_url: imagenet/imagenet1k-validation-{00..63}.tar
    num_workers_per_gpu: 12
  preprocessing:
    resize_shorter_edge: 256
    crop_size: 256
    random_crop: true
    random_flip: true
optimizer:
  name: adamw
  params:
    learning_rate: 0.0001
    discriminator_learning_rate: 0.0001
    beta1: 0.9
    beta2: 0.999
    weight_decay: 0.0001
lr_scheduler:
  scheduler: cosine
  params:
    learning_rate: ${optimizer.params.learning_rate}
    warmup_steps: 5000
    end_lr: 1.0e-05
training:
  gradient_accumulation_steps: 1
  per_gpu_batch_size: 36
  mixed_precision: fp16
  enable_tf32: true
  enable_wandb: true
  use_ema: true
  seed: 42
  max_train_steps: 500000
  num_generated_images: 2
  max_grad_norm: 1.0
config: configs/training/TiTok/stage2/titok_new.yaml
[03/28 22:28:02 TiTok]: Creating model and loss module.
[03/28 22:28:16 TiTok]: loading weight from ckpt/OrderTok.bin, msg:
[03/28 22:28:18 TiTok]: Creating optimizers.
[03/28 22:28:18 TiTok]: Creating lr_schedulers.
[03/28 22:28:18 TiTok]: Creating dataloaders.
[03/28 22:28:18 TiTok]: Creating evaluator.
[03/28 22:28:19 TiTok]: Preparing model, optimizer and dataloaders
[03/28 22:28:21 TiTok]: ***** Running training *****
[03/28 22:28:21 TiTok]:   Num training steps = 500000
[03/28 22:28:21 TiTok]:   Gradient Accumulation steps = 1
[03/28 22:28:21 TiTok]:   Instantaneous batch size per gpu = 36
[03/28 22:28:21 TiTok]:   Total train batch size (w. parallel, distributed & accumulation) = 288
[03/28 22:28:21 TiTok]: All globbed checkpoints are: []
[03/28 22:28:21 TiTok]: Training from scratch.
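The LR column in the step entries below follows the lr_scheduler block above: a warmup from zero to learning_rate over warmup_steps=5000 (hence 0.000002 at step 100 and 0.000100 at step 5000), then cosine decay toward end_lr=1.0e-05. A minimal sketch that reproduces the logged values, assuming linear warmup and a standard cosine shape; this is an illustration consistent with the log, not the repository's scheduler code:

import math

def lr_at(step, base_lr=1e-4, end_lr=1e-5, warmup=5000, max_steps=500000):
    # Linear ramp to base_lr, then cosine decay to end_lr (assumed shape).
    if step < warmup:
        return base_lr * step / warmup
    t = (step - warmup) / (max_steps - warmup)  # decay progress in [0, 1]
    return end_lr + 0.5 * (base_lr - end_lr) * (1.0 + math.cos(math.pi * t))

print(f"{lr_at(100):.6f}")   # 0.000002, matching the step-100 entry
print(f"{lr_at(5000):.6f}")  # 0.000100, warmup complete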
[03/28 22:29:07 TiTok]: Data (t): 0.0032, 91.03/s/gpu Batch (t): 0.3955 LR: 0.000002 Step: 100 Total Loss: 0.0519 Recon Loss: 0.0285
[03/28 22:29:47 TiTok]: Data (t): 0.0032, 90.76/s/gpu Batch (t): 0.3966 LR: 0.000004 Step: 200 Total Loss: 0.0454 Recon Loss: 0.0260
[03/28 22:30:36 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml
[03/28 22:30:57 TiTok]: Creating model and loss module.
[03/28 22:31:12 TiTok]: loading weight from ckpt/OrderTok.bin, msg:
[03/28 22:31:16 TiTok]: Creating optimizers.
[03/28 22:31:16 TiTok]: Creating lr_schedulers.
[03/28 22:31:16 TiTok]: Creating dataloaders.
[03/28 22:31:16 TiTok]: Creating evaluator.
[03/28 22:31:17 TiTok]: Preparing model, optimizer and dataloaders
[03/28 22:31:22 TiTok]: ***** Running training *****
[03/28 22:31:22 TiTok]:   Num training steps = 500000
[03/28 22:31:22 TiTok]:   Gradient Accumulation steps = 1
[03/28 22:31:22 TiTok]:   Instantaneous batch size per gpu = 36
[03/28 22:31:22 TiTok]:   Total train batch size (w. parallel, distributed & accumulation) = 288
[03/28 22:31:23 TiTok]: All globbed checkpoints are: []
[03/28 22:31:23 TiTok]: Training from scratch.
[03/28 22:32:53 TiTok]: Data (t): 0.0158, 56.19/s/gpu Batch (t): 0.6407 LR: 0.000002 Step: 100 Total Loss: 0.0531 Recon Loss: 0.0291
[03/28 22:34:05 TiTok]: Data (t): 0.0032, 81.92/s/gpu Batch (t): 0.4395 LR: 0.000004 Step: 200 Total Loss: 0.0448 Recon Loss: 0.0256
[03/28 22:34:46 TiTok]: Data (t): 0.0033, 89.74/s/gpu Batch (t): 0.4012 LR: 0.000006 Step: 300 Total Loss: 0.0396 Recon Loss: 0.0233
[03/28 22:35:25 TiTok]: Data (t): 0.0032, 89.98/s/gpu Batch (t): 0.4001 LR: 0.000008 Step: 400 Total Loss: 0.0390 Recon Loss: 0.0229
[03/28 22:36:05 TiTok]: Data (t): 0.0033, 90.31/s/gpu Batch (t): 0.3986 LR: 0.000010 Step: 500 Total Loss: 0.0422 Recon Loss: 0.0231
[03/28 22:36:45 TiTok]: Data (t): 0.0033, 90.37/s/gpu Batch (t): 0.3984 LR: 0.000012 Step: 600 Total Loss: 0.0367 Recon Loss: 0.0225
[03/28 22:37:25 TiTok]: Data (t): 0.0033, 90.22/s/gpu Batch (t): 0.3990 LR: 0.000014 Step: 700 Total Loss: 0.0377 Recon Loss: 0.0216
[03/28 22:38:05 TiTok]: Data (t): 0.0034, 90.43/s/gpu Batch (t): 0.3981 LR: 0.000016 Step: 800 Total Loss: 0.0369 Recon Loss: 0.0224
[03/28 22:38:45 TiTok]: Data (t): 0.0032, 90.49/s/gpu Batch (t): 0.3978 LR: 0.000018 Step: 900 Total Loss: 0.0368 Recon Loss: 0.0228
[03/28 22:39:26 TiTok]: Data (t): 0.0032, 78.36/s/gpu Batch (t): 0.4594 LR: 0.000020 Step: 1000 Total Loss: 0.0357 Recon Loss: 0.0211
[03/28 22:40:06 TiTok]: Data (t): 0.0032, 90.45/s/gpu Batch (t): 0.3980 LR: 0.000022 Step: 1100 Total Loss: 0.0347 Recon Loss: 0.0218
[03/28 22:40:46 TiTok]: Data (t): 0.0032, 90.59/s/gpu Batch (t): 0.3974 LR: 0.000024 Step: 1200 Total Loss: 0.0363 Recon Loss: 0.0231
[03/28 22:41:26 TiTok]: Data (t): 0.0032, 90.53/s/gpu Batch (t): 0.3977 LR: 0.000026 Step: 1300 Total Loss: 0.0363 Recon Loss: 0.0229
[03/28 22:42:06 TiTok]: Data (t): 0.0033, 90.37/s/gpu Batch (t): 0.3984 LR: 0.000028 Step: 1400 Total Loss: 0.0394 Recon Loss: 0.0241
[03/28 22:42:46 TiTok]: Data (t): 0.0032, 90.73/s/gpu Batch (t): 0.3968 LR: 0.000030 Step: 1500 Total Loss: 0.0363 Recon Loss: 0.0210
[03/28 22:43:26 TiTok]: Data (t): 0.0032, 90.51/s/gpu Batch (t): 0.3978 LR: 0.000032 Step: 1600 Total Loss: 0.0375 Recon Loss: 0.0226
[03/28 22:44:06 TiTok]: Data (t): 0.0033, 90.80/s/gpu Batch (t): 0.3965 LR: 0.000034 Step: 1700 Total Loss: 0.0375 Recon Loss: 0.0239
[03/28 22:44:45 TiTok]: Data (t): 0.0031, 90.79/s/gpu Batch (t): 0.3965 LR: 0.000036 Step: 1800 Total Loss: 0.0348 Recon Loss: 0.0224
[03/28 22:45:25 TiTok]: Data (t): 0.0031, 90.03/s/gpu Batch (t): 0.3998 LR: 0.000038 Step: 1900 Total Loss: 0.0368 Recon Loss: 0.0222
[03/28 22:46:05 TiTok]: Data (t): 0.0031, 77.65/s/gpu Batch (t): 0.4636 LR: 0.000040 Step: 2000 Total Loss: 0.0376 Recon Loss: 0.0226
[03/28 22:46:45 TiTok]: Data (t): 0.0032, 90.61/s/gpu Batch (t): 0.3973 LR: 0.000042 Step: 2100 Total Loss: 0.0345 Recon Loss: 0.0223
[03/28 22:47:25 TiTok]: Data (t): 0.0032, 90.75/s/gpu Batch (t): 0.3967 LR: 0.000044 Step: 2200 Total Loss: 0.0364 Recon Loss: 0.0221
[03/28 22:48:05 TiTok]: Data (t): 0.0032, 90.67/s/gpu Batch (t): 0.3970 LR: 0.000046 Step: 2300 Total Loss: 0.0345 Recon Loss: 0.0221
[03/28 22:48:44 TiTok]: Data (t): 0.0032, 90.44/s/gpu Batch (t): 0.3981 LR: 0.000048 Step: 2400 Total Loss: 0.0362 Recon Loss: 0.0223
[03/28 22:49:24 TiTok]: Data (t): 0.0032, 90.86/s/gpu Batch (t): 0.3962 LR: 0.000050 Step: 2500 Total Loss: 0.0348 Recon Loss: 0.0225
[03/28 22:50:04 TiTok]: Data (t): 0.0033, 90.78/s/gpu Batch (t): 0.3966 LR: 0.000052 Step: 2600 Total Loss: 0.0359 Recon Loss: 0.0221
[03/28 22:50:44 TiTok]: Data (t): 0.0032, 89.60/s/gpu Batch (t): 0.4018 LR: 0.000054 Step: 2700 Total Loss: 0.0339 Recon Loss: 0.0217
[03/28 22:51:23 TiTok]: Data (t): 0.0032, 90.87/s/gpu Batch (t): 0.3962 LR: 0.000056 Step: 2800 Total Loss: 0.0346 Recon Loss: 0.0225
[03/28 22:52:03 TiTok]: Data (t): 0.0033, 90.44/s/gpu Batch (t): 0.3981 LR: 0.000058 Step: 2900 Total Loss: 0.0352 Recon Loss: 0.0223
[03/28 22:52:43 TiTok]: Data (t): 0.0031, 79.04/s/gpu Batch (t): 0.4555 LR: 0.000060 Step: 3000 Total Loss: 0.0334 Recon Loss: 0.0215
[03/28 22:53:23 TiTok]: Data (t): 0.0033, 90.54/s/gpu Batch (t): 0.3976 LR: 0.000062 Step: 3100 Total Loss: 0.0351 Recon Loss: 0.0225
[03/28 22:54:03 TiTok]: Data (t): 0.0032, 90.54/s/gpu Batch (t): 0.3976 LR: 0.000064 Step: 3200 Total Loss: 0.0337 Recon Loss: 0.0216
[03/28 22:54:43 TiTok]: Data (t): 0.0031, 89.59/s/gpu Batch (t): 0.4019 LR: 0.000066 Step: 3300 Total Loss: 0.0337 Recon Loss: 0.0215
[03/28 22:55:24 TiTok]: Data (t): 0.0032, 90.28/s/gpu Batch (t): 0.3988 LR: 0.000068 Step: 3400 Total Loss: 0.0337 Recon Loss: 0.0216
[03/28 22:56:04 TiTok]: Data (t): 0.0032, 90.67/s/gpu Batch (t): 0.3971 LR: 0.000070 Step: 3500 Total Loss: 0.0365 Recon Loss: 0.0231
[03/28 22:56:43 TiTok]: Data (t): 0.0032, 90.55/s/gpu Batch (t): 0.3976 LR: 0.000072 Step: 3600 Total Loss: 0.0333 Recon Loss: 0.0213
[03/28 22:57:24 TiTok]: Data (t): 0.0031, 90.74/s/gpu Batch (t): 0.3967 LR: 0.000074 Step: 3700 Total Loss: 0.0352 Recon Loss: 0.0219
[03/28 22:58:03 TiTok]: Data (t): 0.0032, 90.57/s/gpu Batch (t): 0.3975 LR: 0.000076 Step: 3800 Total Loss: 0.0320 Recon Loss: 0.0210
[03/28 22:58:43 TiTok]: Data (t): 0.0033, 90.47/s/gpu Batch (t): 0.3979 LR: 0.000078 Step: 3900 Total Loss: 0.0367 Recon Loss: 0.0233
[03/28 22:59:23 TiTok]: Data (t): 0.0033, 79.18/s/gpu Batch (t): 0.4547 LR: 0.000080 Step: 4000 Total Loss: 0.0342 Recon Loss: 0.0216
[03/28 23:00:03 TiTok]: Data (t): 0.0032, 90.31/s/gpu Batch (t): 0.3986 LR: 0.000082 Step: 4100 Total Loss: 0.0357 Recon Loss: 0.0227
[03/28 23:00:43 TiTok]: Data (t): 0.0032, 90.10/s/gpu Batch (t): 0.3996 LR: 0.000084 Step: 4200 Total Loss: 0.0344 Recon Loss: 0.0217
[03/28 23:01:23 TiTok]: Data (t): 0.0033, 90.44/s/gpu Batch (t): 0.3981 LR: 0.000086 Step: 4300 Total Loss: 0.0350 Recon Loss: 0.0231
[03/28 23:02:03 TiTok]: Data (t): 0.0033, 90.47/s/gpu Batch (t): 0.3979 LR: 0.000088 Step: 4400 Total Loss: 0.0352 Recon Loss: 0.0219
[03/28 23:02:44 TiTok]: Data (t): 0.0033, 90.51/s/gpu Batch (t): 0.3977 LR: 0.000090 Step: 4500 Total Loss: 0.0353 Recon Loss: 0.0239
[03/28 23:03:25 TiTok]: Data (t): 0.0032, 90.24/s/gpu Batch (t): 0.3989 LR: 0.000092 Step: 4600 Total Loss: 0.0348 Recon Loss: 0.0221
[03/28 23:04:05 TiTok]: Data (t): 0.0032, 90.62/s/gpu Batch (t): 0.3973 LR: 0.000094 Step: 4700 Total Loss: 0.0342 Recon Loss: 0.0230
[03/28 23:04:45 TiTok]: Data (t): 0.0032, 90.69/s/gpu Batch (t): 0.3970 LR: 0.000096 Step: 4800 Total Loss: 0.0345 Recon Loss: 0.0222
[03/28 23:05:25 TiTok]: Data (t): 0.0032, 90.53/s/gpu Batch (t): 0.3977 LR: 0.000098 Step: 4900 Total Loss: 0.0368 Recon Loss: 0.0234
[03/28 23:06:05 TiTok]: Data (t): 0.0032, 79.22/s/gpu Batch (t): 0.4544 LR: 0.000100 Step: 5000 Total Loss: 0.0357 Recon Loss: 0.0228
[03/28 23:06:45 TiTok]: Data (t): 0.0032, 90.64/s/gpu Batch (t): 0.3972 LR: 0.000100 Step: 5100 Total Loss: 0.0345 Recon Loss: 0.0220
[03/28 23:07:25 TiTok]: Data (t): 0.0032, 90.55/s/gpu Batch (t): 0.3976 LR: 0.000100 Step: 5200 Total Loss: 0.0354 Recon Loss: 0.0227
[03/28 23:08:05 TiTok]: Data (t): 0.0033, 90.73/s/gpu Batch (t): 0.3968 LR: 0.000100 Step: 5300 Total Loss: 0.0333 Recon Loss: 0.0216
[03/28 23:08:45 TiTok]: Data (t): 0.0033, 90.20/s/gpu Batch (t): 0.3991 LR: 0.000100 Step: 5400 Total Loss: 0.0337 Recon Loss: 0.0223
[03/28 23:09:25 TiTok]: Data (t): 0.0032, 90.41/s/gpu Batch (t): 0.3982 LR: 0.000100 Step: 5500 Total Loss: 0.0336 Recon Loss: 0.0205
[03/28 23:10:05 TiTok]: Data (t): 0.0032, 90.39/s/gpu Batch (t): 0.3983 LR: 0.000100 Step: 5600 Total Loss: 0.0341 Recon Loss: 0.0221
[03/28 23:10:45 TiTok]: Data (t): 0.0031, 90.51/s/gpu Batch (t): 0.3977 LR: 0.000100 Step: 5700 Total Loss: 0.0347 Recon Loss: 0.0232
[03/28 23:11:25 TiTok]: Data (t): 0.0032, 90.51/s/gpu Batch (t): 0.3977 LR: 0.000100 Step: 5800 Total Loss: 0.0325 Recon Loss: 0.0211
[03/28 23:12:05 TiTok]: Data (t): 0.0032, 90.46/s/gpu Batch (t): 0.3980 LR: 0.000100 Step: 5900 Total Loss: 0.0336 Recon Loss: 0.0212
[03/28 23:12:45 TiTok]: Data (t): 0.0031, 77.48/s/gpu Batch (t): 0.4647 LR: 0.000100 Step: 6000 Total Loss: 0.0358 Recon Loss: 0.0225
[03/28 23:13:25 TiTok]: Data (t): 0.0031, 89.02/s/gpu Batch (t): 0.4044 LR: 0.000100 Step: 6100 Total Loss: 0.0355 Recon Loss: 0.0228
[03/28 23:14:05 TiTok]: Data (t): 0.0032, 90.59/s/gpu Batch (t): 0.3974 LR: 0.000100 Step: 6200 Total Loss: 0.0329 Recon Loss: 0.0209
[03/28 23:14:45 TiTok]: Data (t): 0.0032, 90.26/s/gpu Batch (t): 0.3988 LR: 0.000100 Step: 6300 Total Loss: 0.0357 Recon Loss: 0.0229
[03/28 23:15:25 TiTok]: Data (t): 0.0032, 90.35/s/gpu Batch (t): 0.3984 LR: 0.000100 Step: 6400 Total Loss: 0.0357 Recon Loss: 0.0230
[03/28 23:16:05 TiTok]: Data (t): 0.0032, 90.61/s/gpu Batch (t): 0.3973 LR: 0.000100 Step: 6500 Total Loss: 0.0356 Recon Loss: 0.0228
[03/28 23:16:45 TiTok]: Data (t): 0.0031, 90.47/s/gpu Batch (t): 0.3979 LR: 0.000100 Step: 6600 Total Loss: 0.0346 Recon Loss: 0.0220
[03/28 23:17:25 TiTok]: Data (t): 0.0032, 90.00/s/gpu Batch (t): 0.4000 LR: 0.000100 Step: 6700 Total Loss: 0.0334 Recon Loss: 0.0219
[03/28 23:18:05 TiTok]: Data (t): 0.0031, 90.19/s/gpu Batch (t): 0.3992 LR: 0.000100 Step: 6800 Total Loss: 0.0348 Recon Loss: 0.0225
[03/28 23:18:46 TiTok]: Data (t): 0.0032, 89.14/s/gpu Batch (t): 0.4039 LR: 0.000100 Step: 6900 Total Loss: 0.0348 Recon Loss: 0.0220
[03/28 23:19:26 TiTok]: Data (t): 0.0032, 78.11/s/gpu Batch (t): 0.4609 LR: 0.000100 Step: 7000 Total Loss: 0.0350 Recon Loss: 0.0228
[03/28 23:20:07 TiTok]: Data (t): 0.0033, 88.85/s/gpu Batch (t): 0.4052 LR: 0.000100 Step: 7100 Total Loss: 0.0363 Recon Loss: 0.0221
[03/28 23:20:48 TiTok]: Data (t): 0.0032, 89.07/s/gpu Batch (t): 0.4042 LR: 0.000100 Step: 7200 Total Loss: 0.0335 Recon Loss: 0.0218
[03/28 23:21:28 TiTok]: Data (t): 0.0032, 89.17/s/gpu Batch (t): 0.4037 LR: 0.000100 Step: 7300 Total Loss: 0.0341 Recon Loss: 0.0226
[03/28 23:22:08 TiTok]: Data (t): 0.0031, 89.03/s/gpu Batch (t): 0.4043 LR: 0.000100 Step: 7400 Total Loss: 0.0357 Recon Loss: 0.0218
[03/28 23:22:49 TiTok]: Data (t): 0.0032, 89.09/s/gpu Batch (t): 0.4041 LR: 0.000100 Step: 7500 Total Loss: 0.0336 Recon Loss: 0.0219
[03/28 23:23:30 TiTok]: Data (t): 0.0032, 89.00/s/gpu Batch (t): 0.4045 LR: 0.000100 Step: 7600 Total Loss: 0.0338 Recon Loss: 0.0222
[03/28 23:24:09 TiTok]: Data (t): 0.0032, 90.93/s/gpu Batch (t): 0.3959 LR: 0.000100 Step: 7700 Total Loss: 0.0359 Recon Loss: 0.0222
[03/28 23:24:49 TiTok]: Data (t): 0.0031, 90.46/s/gpu Batch (t): 0.3980 LR: 0.000100 Step: 7800 Total Loss: 0.0347 Recon Loss: 0.0226
[03/28 23:25:29 TiTok]: Data (t): 0.0031, 91.13/s/gpu Batch (t): 0.3950 LR: 0.000100 Step: 7900 Total Loss: 0.0366 Recon Loss: 0.0236
[03/28 23:26:09 TiTok]: Data (t): 0.0032, 79.45/s/gpu Batch (t): 0.4531 LR: 0.000100 Step: 8000 Total Loss: 0.0333 Recon Loss: 0.0214
[03/28 23:26:49 TiTok]: Data (t): 0.0031, 91.06/s/gpu Batch (t): 0.3953 LR: 0.000100 Step: 8100 Total Loss: 0.0363 Recon Loss: 0.0224
[03/28 23:27:29 TiTok]: Data (t): 0.0032, 90.18/s/gpu Batch (t): 0.3992 LR: 0.000100 Step: 8200 Total Loss: 0.0339 Recon Loss: 0.0219
[03/28 23:28:09 TiTok]: Data (t): 0.0033, 90.40/s/gpu Batch (t): 0.3982 LR: 0.000100 Step: 8300 Total Loss: 0.0326 Recon Loss: 0.0210
[03/28 23:28:49 TiTok]: Data (t): 0.0032, 90.30/s/gpu Batch (t): 0.3987 LR: 0.000100 Step: 8400 Total Loss: 0.0347 Recon Loss: 0.0227
[03/28 23:29:29 TiTok]: Data (t): 0.0031, 89.92/s/gpu Batch (t): 0.4003 LR: 0.000100 Step: 8500 Total Loss: 0.0328 Recon Loss: 0.0216
[03/28 23:30:09 TiTok]: Data (t): 0.0032, 90.46/s/gpu Batch (t): 0.3980 LR: 0.000100 Step: 8600 Total Loss: 0.0347 Recon Loss: 0.0230
[03/28 23:30:49 TiTok]: Data (t): 0.0031, 90.47/s/gpu Batch (t): 0.3979 LR: 0.000100 Step: 8700 Total Loss: 0.0341 Recon Loss: 0.0218
[03/28 23:31:29 TiTok]: Data (t): 0.0032, 90.22/s/gpu Batch (t): 0.3990 LR: 0.000100 Step: 8800 Total Loss: 0.0339 Recon Loss: 0.0224
[03/28 23:32:08 TiTok]: Data (t): 0.0030, 91.37/s/gpu Batch (t): 0.3940 LR: 0.000100 Step: 8900 Total Loss: 0.0366 Recon Loss: 0.0236
[03/28 23:32:50 TiTok]: Data (t): 0.0032, 78.92/s/gpu Batch (t): 0.4562 LR: 0.000100 Step: 9000 Total Loss: 0.0347 Recon Loss: 0.0222
[03/28 23:33:30 TiTok]: Data (t): 0.0035, 90.14/s/gpu Batch (t): 0.3994 LR: 0.000100 Step: 9100 Total Loss: 0.0322 Recon Loss: 0.0207
[03/28 23:34:10 TiTok]: Data (t): 0.0032, 88.63/s/gpu Batch (t): 0.4062 LR: 0.000100 Step: 9200 Total Loss: 0.0357 Recon Loss: 0.0234
[03/28 23:34:51 TiTok]: Data (t): 0.0033, 89.10/s/gpu Batch (t): 0.4040 LR: 0.000100 Step: 9300 Total Loss: 0.0331 Recon Loss: 0.0224
[03/28 23:35:31 TiTok]: Data (t): 0.0033, 90.46/s/gpu Batch (t): 0.3980 LR: 0.000100 Step: 9400 Total Loss: 0.0349 Recon Loss: 0.0228
[03/28 23:36:11 TiTok]: Data (t): 0.0032, 90.30/s/gpu Batch (t): 0.3987 LR: 0.000100 Step: 9500 Total Loss: 0.0351 Recon Loss: 0.0219
[03/28 23:36:51 TiTok]: Data (t): 0.0033, 90.20/s/gpu Batch (t): 0.3991 LR: 0.000100 Step: 9600 Total Loss: 0.0332 Recon Loss: 0.0224
[03/28 23:37:31 TiTok]: Data (t): 0.0035, 90.25/s/gpu Batch (t): 0.3989 LR: 0.000100 Step: 9700 Total Loss: 0.0343 Recon Loss: 0.0220
[03/28 23:38:11 TiTok]: Data (t): 0.0033, 90.40/s/gpu Batch (t): 0.3982 LR: 0.000100 Step: 9800 Total Loss: 0.0338 Recon Loss: 0.0227
[03/28 23:38:51 TiTok]: Data (t): 0.0032, 90.27/s/gpu Batch (t): 0.3988 LR: 0.000100 Step: 9900 Total Loss: 0.0342 Recon Loss: 0.0218
[03/28 23:39:31 TiTok]: Data (t): 0.0032, 79.41/s/gpu Batch (t): 0.4534 LR: 0.000100 Step: 10000 Total Loss: 0.0355 Recon Loss: 0.0231
[03/28 23:39:33 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-10000
[03/28 23:39:48 TiTok]: Reconstructing images...
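The throughput column is consistent with the config: the per-GPU rate is per_gpu_batch_size divided by the batch time, and the startup banner's total batch of 288 with per_gpu_batch_size=36 and gradient_accumulation_steps=1 implies an 8-GPU run. A quick check, using a representative entry from above:

per_gpu_batch = 36
batch_time = 0.3977                # a typical Batch (t) before step 20000
print(per_gpu_batch / batch_time)  # ~90.5 images/s/gpu, as logged
print(288 // per_gpu_batch)        # 8 GPUs behind the total batch of 288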
[03/28 23:40:30 TiTok]: Data (t): 0.0033, 90.43/s/gpu Batch (t): 0.3981 LR: 0.000100 Step: 10100 Total Loss: 0.0336 Recon Loss: 0.0229
[03/28 23:41:09 TiTok]: Data (t): 0.0034, 90.12/s/gpu Batch (t): 0.3995 LR: 0.000100 Step: 10200 Total Loss: 0.0361 Recon Loss: 0.0237
[03/28 23:41:50 TiTok]: Data (t): 0.0033, 90.43/s/gpu Batch (t): 0.3981 LR: 0.000100 Step: 10300 Total Loss: 0.0353 Recon Loss: 0.0232
[03/28 23:42:30 TiTok]: Data (t): 0.0033, 84.19/s/gpu Batch (t): 0.4276 LR: 0.000100 Step: 10400 Total Loss: 0.0370 Recon Loss: 0.0233
[03/28 23:43:10 TiTok]: Data (t): 0.0033, 89.99/s/gpu Batch (t): 0.4000 LR: 0.000100 Step: 10500 Total Loss: 0.0370 Recon Loss: 0.0228
[03/28 23:43:50 TiTok]: Data (t): 0.0033, 90.27/s/gpu Batch (t): 0.3988 LR: 0.000100 Step: 10600 Total Loss: 0.0331 Recon Loss: 0.0214
[03/28 23:44:30 TiTok]: Data (t): 0.0033, 90.38/s/gpu Batch (t): 0.3983 LR: 0.000100 Step: 10700 Total Loss: 0.0334 Recon Loss: 0.0215
[03/28 23:45:10 TiTok]: Data (t): 0.0034, 90.12/s/gpu Batch (t): 0.3995 LR: 0.000100 Step: 10800 Total Loss: 0.0350 Recon Loss: 0.0230
[03/28 23:45:50 TiTok]: Data (t): 0.0032, 89.69/s/gpu Batch (t): 0.4014 LR: 0.000100 Step: 10900 Total Loss: 0.0348 Recon Loss: 0.0228
[03/28 23:46:30 TiTok]: Data (t): 0.0033, 67.60/s/gpu Batch (t): 0.5325 LR: 0.000100 Step: 11000 Total Loss: 0.0342 Recon Loss: 0.0221
[03/28 23:47:10 TiTok]: Data (t): 0.0032, 90.46/s/gpu Batch (t): 0.3980 LR: 0.000100 Step: 11100 Total Loss: 0.0338 Recon Loss: 0.0222
[03/28 23:47:50 TiTok]: Data (t): 0.0032, 90.52/s/gpu Batch (t): 0.3977 LR: 0.000100 Step: 11200 Total Loss: 0.0350 Recon Loss: 0.0222
[03/28 23:48:30 TiTok]: Data (t): 0.0032, 90.50/s/gpu Batch (t): 0.3978 LR: 0.000100 Step: 11300 Total Loss: 0.0346 Recon Loss: 0.0231
[03/28 23:49:10 TiTok]: Data (t): 0.0032, 90.25/s/gpu Batch (t): 0.3989 LR: 0.000100 Step: 11400 Total Loss: 0.0351 Recon Loss: 0.0223
[03/28 23:49:50 TiTok]: Data (t): 0.0033, 90.37/s/gpu Batch (t): 0.3983 LR: 0.000100 Step: 11500 Total Loss: 0.0324 Recon Loss: 0.0209
[03/28 23:50:30 TiTok]: Data (t): 0.0033, 90.32/s/gpu Batch (t): 0.3986 LR: 0.000100 Step: 11600 Total Loss: 0.0329 Recon Loss: 0.0220
[03/28 23:51:10 TiTok]: Data (t): 0.0032, 90.56/s/gpu Batch (t): 0.3975 LR: 0.000100 Step: 11700 Total Loss: 0.0351 Recon Loss: 0.0225
[03/28 23:51:50 TiTok]: Data (t): 0.0034, 90.45/s/gpu Batch (t): 0.3980 LR: 0.000100 Step: 11800 Total Loss: 0.0332 Recon Loss: 0.0209
[03/28 23:52:30 TiTok]: Data (t): 0.0032, 90.71/s/gpu Batch (t): 0.3969 LR: 0.000100 Step: 11900 Total Loss: 0.0355 Recon Loss: 0.0236
[03/28 23:53:10 TiTok]: Data (t): 0.0032, 78.86/s/gpu Batch (t): 0.4565 LR: 0.000100 Step: 12000 Total Loss: 0.0331 Recon Loss: 0.0215
[03/28 23:53:50 TiTok]: Data (t): 0.0032, 90.44/s/gpu Batch (t): 0.3980 LR: 0.000100 Step: 12100 Total Loss: 0.0356 Recon Loss: 0.0230
[03/28 23:54:30 TiTok]: Data (t): 0.0032, 90.68/s/gpu Batch (t): 0.3970 LR: 0.000100 Step: 12200 Total Loss: 0.0367 Recon Loss: 0.0239
[03/28 23:55:10 TiTok]: Data (t): 0.0033, 88.88/s/gpu Batch (t): 0.4051 LR: 0.000100 Step: 12300 Total Loss: 0.0339 Recon Loss: 0.0215
[03/28 23:55:49 TiTok]: Data (t): 0.0031, 89.43/s/gpu Batch (t): 0.4025 LR: 0.000100 Step: 12400 Total Loss: 0.0337 Recon Loss: 0.0221
[03/28 23:56:29 TiTok]: Data (t): 0.0032, 90.88/s/gpu Batch (t): 0.3961 LR: 0.000100 Step: 12500 Total Loss: 0.0320 Recon Loss: 0.0217
[03/28 23:57:10 TiTok]: Data (t): 0.0032, 89.84/s/gpu Batch (t): 0.4007 LR: 0.000100 Step: 12600 Total Loss: 0.0349 Recon Loss: 0.0219
[03/28 23:57:50 TiTok]: Data (t): 0.0032, 90.91/s/gpu Batch (t): 0.3960 LR: 0.000100 Step: 12700 Total Loss: 0.0339 Recon Loss: 0.0220
[03/28 23:58:29 TiTok]: Data (t): 0.0032, 90.80/s/gpu Batch (t): 0.3965 LR: 0.000100 Step: 12800 Total Loss: 0.0343 Recon Loss: 0.0223
[03/28 23:59:09 TiTok]: Data (t): 0.0031, 91.04/s/gpu Batch (t): 0.3954 LR: 0.000100 Step: 12900 Total Loss: 0.0332 Recon Loss: 0.0214
[03/28 23:59:49 TiTok]: Data (t): 0.0031, 79.39/s/gpu Batch (t): 0.4534 LR: 0.000100 Step: 13000 Total Loss: 0.0335 Recon Loss: 0.0225
[03/29 00:00:29 TiTok]: Data (t): 0.0032, 91.12/s/gpu Batch (t): 0.3951 LR: 0.000100 Step: 13100 Total Loss: 0.0343 Recon Loss: 0.0228
[03/29 00:01:09 TiTok]: Data (t): 0.0033, 90.87/s/gpu Batch (t): 0.3962 LR: 0.000100 Step: 13200 Total Loss: 0.0353 Recon Loss: 0.0220
[03/29 00:01:48 TiTok]: Data (t): 0.0032, 90.92/s/gpu Batch (t): 0.3960 LR: 0.000100 Step: 13300 Total Loss: 0.0330 Recon Loss: 0.0217
[03/29 00:02:30 TiTok]: Data (t): 0.0032, 84.08/s/gpu Batch (t): 0.4282 LR: 0.000100 Step: 13400 Total Loss: 0.0344 Recon Loss: 0.0221
[03/29 00:03:10 TiTok]: Data (t): 0.0033, 91.13/s/gpu Batch (t): 0.3950 LR: 0.000100 Step: 13500 Total Loss: 0.0330 Recon Loss: 0.0217
[03/29 00:03:49 TiTok]: Data (t): 0.0031, 91.28/s/gpu Batch (t): 0.3944 LR: 0.000100 Step: 13600 Total Loss: 0.0342 Recon Loss: 0.0225
[03/29 00:04:29 TiTok]: Data (t): 0.0032, 91.38/s/gpu Batch (t): 0.3940 LR: 0.000100 Step: 13700 Total Loss: 0.0337 Recon Loss: 0.0221
[03/29 00:05:09 TiTok]: Data (t): 0.0032, 90.94/s/gpu Batch (t): 0.3959 LR: 0.000100 Step: 13800 Total Loss: 0.0341 Recon Loss: 0.0227
[03/29 00:05:48 TiTok]: Data (t): 0.0032, 91.51/s/gpu Batch (t): 0.3934 LR: 0.000100 Step: 13900 Total Loss: 0.0362 Recon Loss: 0.0233
[03/29 00:06:28 TiTok]: Data (t): 0.0032, 79.92/s/gpu Batch (t): 0.4504 LR: 0.000100 Step: 14000 Total Loss: 0.0352 Recon Loss: 0.0230
[03/29 00:07:08 TiTok]: Data (t): 0.0031, 91.06/s/gpu Batch (t): 0.3953 LR: 0.000100 Step: 14100 Total Loss: 0.0336 Recon Loss: 0.0220
[03/29 00:07:47 TiTok]: Data (t): 0.0033, 91.48/s/gpu Batch (t): 0.3935 LR: 0.000100 Step: 14200 Total Loss: 0.0346 Recon Loss: 0.0225
[03/29 00:08:27 TiTok]: Data (t): 0.0031, 91.42/s/gpu Batch (t): 0.3938 LR: 0.000100 Step: 14300 Total Loss: 0.0332 Recon Loss: 0.0218
[03/29 00:09:07 TiTok]: Data (t): 0.0032, 90.94/s/gpu Batch (t): 0.3959 LR: 0.000100 Step: 14400 Total Loss: 0.0363 Recon Loss: 0.0229
[03/29 00:09:46 TiTok]: Data (t): 0.0032, 91.03/s/gpu Batch (t): 0.3955 LR: 0.000100 Step: 14500 Total Loss: 0.0370 Recon Loss: 0.0225
[03/29 00:10:26 TiTok]: Data (t): 0.0032, 91.19/s/gpu Batch (t): 0.3948 LR: 0.000100 Step: 14600 Total Loss: 0.0328 Recon Loss: 0.0225
[03/29 00:11:06 TiTok]: Data (t): 0.0032, 90.90/s/gpu Batch (t): 0.3960 LR: 0.000100 Step: 14700 Total Loss: 0.0330 Recon Loss: 0.0213
[03/29 00:11:45 TiTok]: Data (t): 0.0033, 90.81/s/gpu Batch (t): 0.3964 LR: 0.000100 Step: 14800 Total Loss: 0.0342 Recon Loss: 0.0219
[03/29 00:12:26 TiTok]: Data (t): 0.0032, 91.17/s/gpu Batch (t): 0.3949 LR: 0.000100 Step: 14900 Total Loss: 0.0357 Recon Loss: 0.0230
[03/29 00:13:06 TiTok]: Data (t): 0.0032, 79.53/s/gpu Batch (t): 0.4527 LR: 0.000100 Step: 15000 Total Loss: 0.0329 Recon Loss: 0.0215
[03/29 00:13:46 TiTok]: Data (t): 0.0034, 90.73/s/gpu Batch (t): 0.3968 LR: 0.000100 Step: 15100 Total Loss: 0.0337 Recon Loss: 0.0231
[03/29 00:14:25 TiTok]: Data (t): 0.0032, 90.94/s/gpu Batch (t): 0.3959 LR: 0.000100 Step: 15200 Total Loss: 0.0332 Recon Loss: 0.0216
[03/29 00:15:05 TiTok]: Data (t): 0.0032, 91.25/s/gpu Batch (t): 0.3945 LR: 0.000100 Step: 15300 Total Loss: 0.0333 Recon Loss: 0.0221
[03/29 00:15:45 TiTok]: Data (t): 0.0031, 91.31/s/gpu Batch (t): 0.3942 LR: 0.000100 Step: 15400 Total Loss: 0.0340 Recon Loss: 0.0226
[03/29 00:16:24 TiTok]: Data (t): 0.0032, 90.86/s/gpu Batch (t): 0.3962 LR: 0.000100 Step: 15500 Total Loss: 0.0339 Recon Loss: 0.0225
[03/29 00:17:04 TiTok]: Data (t): 0.0032, 91.01/s/gpu Batch (t): 0.3956 LR: 0.000100 Step: 15600 Total Loss: 0.0335 Recon Loss: 0.0221
[03/29 00:17:44 TiTok]: Data (t): 0.0032, 89.87/s/gpu Batch (t): 0.4006 LR: 0.000100 Step: 15700 Total Loss: 0.0352 Recon Loss: 0.0224
[03/29 00:18:23 TiTok]: Data (t): 0.0032, 90.51/s/gpu Batch (t): 0.3978 LR: 0.000100 Step: 15800 Total Loss: 0.0336 Recon Loss: 0.0227
[03/29 00:19:03 TiTok]: Data (t): 0.0032, 90.46/s/gpu Batch (t): 0.3980 LR: 0.000100 Step: 15900 Total Loss: 0.0320 Recon Loss: 0.0211
[03/29 00:19:43 TiTok]: Data (t): 0.0031, 78.92/s/gpu Batch (t): 0.4561 LR: 0.000100 Step: 16000 Total Loss: 0.0339 Recon Loss: 0.0216
[03/29 00:20:23 TiTok]: Data (t): 0.0032, 91.08/s/gpu Batch (t): 0.3953 LR: 0.000100 Step: 16100 Total Loss: 0.0346 Recon Loss: 0.0227
[03/29 00:21:03 TiTok]: Data (t): 0.0032, 90.90/s/gpu Batch (t): 0.3960 LR: 0.000100 Step: 16200 Total Loss: 0.0314 Recon Loss: 0.0212
[03/29 00:21:42 TiTok]: Data (t): 0.0032, 90.75/s/gpu Batch (t): 0.3967 LR: 0.000100 Step: 16300 Total Loss: 0.0354 Recon Loss: 0.0226
[03/29 00:22:22 TiTok]: Data (t): 0.0031, 91.10/s/gpu Batch (t): 0.3952 LR: 0.000100 Step: 16400 Total Loss: 0.0333 Recon Loss: 0.0216
[03/29 00:23:02 TiTok]: Data (t): 0.0031, 91.05/s/gpu Batch (t): 0.3954 LR: 0.000100 Step: 16500 Total Loss: 0.0342 Recon Loss: 0.0227
[03/29 00:23:42 TiTok]: Data (t): 0.0032, 91.00/s/gpu Batch (t): 0.3956 LR: 0.000100 Step: 16600 Total Loss: 0.0349 Recon Loss: 0.0225
[03/29 00:24:22 TiTok]: Data (t): 0.0033, 84.18/s/gpu Batch (t): 0.4277 LR: 0.000100 Step: 16700 Total Loss: 0.0334 Recon Loss: 0.0219
[03/29 00:25:02 TiTok]: Data (t): 0.0032, 91.17/s/gpu Batch (t): 0.3949 LR: 0.000100 Step: 16800 Total Loss: 0.0331 Recon Loss: 0.0222
[03/29 00:25:41 TiTok]: Data (t): 0.0032, 91.04/s/gpu Batch (t): 0.3954 LR: 0.000100 Step: 16900 Total Loss: 0.0346 Recon Loss: 0.0224
[03/29 00:26:21 TiTok]: Data (t): 0.0032, 79.07/s/gpu Batch (t): 0.4553 LR: 0.000100 Step: 17000 Total Loss: 0.0355 Recon Loss: 0.0227
[03/29 00:27:01 TiTok]: Data (t): 0.0031, 90.73/s/gpu Batch (t): 0.3968 LR: 0.000100 Step: 17100 Total Loss: 0.0333 Recon Loss: 0.0218
[03/29 00:27:42 TiTok]: Data (t): 0.0032, 90.23/s/gpu Batch (t): 0.3990 LR: 0.000100 Step: 17200 Total Loss: 0.0350 Recon Loss: 0.0234
[03/29 00:28:22 TiTok]: Data (t): 0.0033, 90.54/s/gpu Batch (t): 0.3976 LR: 0.000100 Step: 17300 Total Loss: 0.0345 Recon Loss: 0.0231
[03/29 00:29:02 TiTok]: Data (t): 0.0032, 90.75/s/gpu Batch (t): 0.3967 LR: 0.000100 Step: 17400 Total Loss: 0.0334 Recon Loss: 0.0228
[03/29 00:29:41 TiTok]: Data (t): 0.0032, 90.98/s/gpu Batch (t): 0.3957 LR: 0.000100 Step: 17500 Total Loss: 0.0337 Recon Loss: 0.0223
[03/29 00:30:21 TiTok]: Data (t): 0.0033, 89.14/s/gpu Batch (t): 0.4039 LR: 0.000100 Step: 17600 Total Loss: 0.0353 Recon Loss: 0.0222
[03/29 00:31:01 TiTok]: Data (t): 0.0033, 90.67/s/gpu Batch (t): 0.3970 LR: 0.000100 Step: 17700 Total Loss: 0.0333 Recon Loss: 0.0227
[03/29 00:31:41 TiTok]: Data (t): 0.0032, 91.26/s/gpu Batch (t): 0.3945 LR: 0.000100 Step: 17800 Total Loss: 0.0341 Recon Loss: 0.0229
[03/29 00:32:22 TiTok]: Data (t): 0.0032, 90.74/s/gpu Batch (t): 0.3968 LR: 0.000100 Step: 17900 Total Loss: 0.0323 Recon Loss: 0.0213
[03/29 00:33:02 TiTok]: Data (t): 0.0032, 79.62/s/gpu Batch (t): 0.4522 LR: 0.000100 Step: 18000 Total Loss: 0.0322 Recon Loss: 0.0213
[03/29 00:33:42 TiTok]: Data (t): 0.0032, 90.18/s/gpu Batch (t): 0.3992 LR: 0.000100 Step: 18100 Total Loss: 0.0324 Recon Loss: 0.0219
[03/29 00:34:22 TiTok]: Data (t): 0.0032, 90.85/s/gpu Batch (t): 0.3962 LR: 0.000100 Step: 18200 Total Loss: 0.0342 Recon Loss: 0.0220
[03/29 00:35:02 TiTok]: Data (t): 0.0032, 90.98/s/gpu Batch (t): 0.3957 LR: 0.000100 Step: 18300 Total Loss: 0.0342 Recon Loss: 0.0223
[03/29 00:35:41 TiTok]: Data (t): 0.0032, 90.74/s/gpu Batch (t): 0.3967 LR: 0.000100 Step: 18400 Total Loss: 0.0360 Recon Loss: 0.0249
[03/29 00:36:21 TiTok]: Data (t): 0.0032, 90.62/s/gpu Batch (t): 0.3973 LR: 0.000100 Step: 18500 Total Loss: 0.0358 Recon Loss: 0.0232
[03/29 00:37:01 TiTok]: Data (t): 0.0056, 90.53/s/gpu Batch (t): 0.3977 LR: 0.000100 Step: 18600 Total Loss: 0.0340 Recon Loss: 0.0223
[03/29 00:37:41 TiTok]: Data (t): 0.0032, 90.30/s/gpu Batch (t): 0.3987 LR: 0.000100 Step: 18700 Total Loss: 0.0360 Recon Loss: 0.0231
[03/29 00:38:21 TiTok]: Data (t): 0.0031, 90.57/s/gpu Batch (t): 0.3975 LR: 0.000100 Step: 18800 Total Loss: 0.0341 Recon Loss: 0.0223
[03/29 00:39:00 TiTok]: Data (t): 0.0032, 90.45/s/gpu Batch (t): 0.3980 LR: 0.000100 Step: 18900 Total Loss: 0.0342 Recon Loss: 0.0227
[03/29 00:39:40 TiTok]: Data (t): 0.0032, 79.39/s/gpu Batch (t): 0.4535 LR: 0.000100 Step: 19000 Total Loss: 0.0333 Recon Loss: 0.0225
[03/29 00:40:21 TiTok]: Data (t): 0.0032, 90.71/s/gpu Batch (t): 0.3968 LR: 0.000100 Step: 19100 Total Loss: 0.0350 Recon Loss: 0.0225
[03/29 00:41:01 TiTok]: Data (t): 0.0032, 90.86/s/gpu Batch (t): 0.3962 LR: 0.000100 Step: 19200 Total Loss: 0.0349 Recon Loss: 0.0225
[03/29 00:41:40 TiTok]: Data (t): 0.0034, 89.93/s/gpu Batch (t): 0.4003 LR: 0.000100 Step: 19300 Total Loss: 0.0328 Recon Loss: 0.0220
[03/29 00:42:20 TiTok]: Data (t): 0.0033, 90.60/s/gpu Batch (t): 0.3973 LR: 0.000100 Step: 19400 Total Loss: 0.0318 Recon Loss: 0.0218
[03/29 00:43:02 TiTok]: Data (t): 0.0033, 90.79/s/gpu Batch (t): 0.3965 LR: 0.000100 Step: 19500 Total Loss: 0.0365 Recon Loss: 0.0239
[03/29 00:43:42 TiTok]: Data (t): 0.0032, 90.41/s/gpu Batch (t): 0.3982 LR: 0.000100 Step: 19600 Total Loss: 0.0347 Recon Loss: 0.0223
[03/29 00:44:22 TiTok]: Data (t): 0.0031, 90.86/s/gpu Batch (t): 0.3962 LR: 0.000100 Step: 19700 Total Loss: 0.0338 Recon Loss: 0.0230
[03/29 00:45:02 TiTok]: Data (t): 0.0032, 90.42/s/gpu Batch (t): 0.3981 LR: 0.000100 Step: 19800 Total Loss: 0.0329 Recon Loss: 0.0231
[03/29 00:45:42 TiTok]: Data (t): 0.0032, 88.18/s/gpu Batch (t): 0.4083 LR: 0.000100 Step: 19900 Total Loss: 0.0342 Recon Loss: 0.0222
[03/29 00:46:22 TiTok]: Data (t): 0.0033, 79.19/s/gpu Batch (t): 0.4546 LR: 0.000100 Step: 20000 Total Loss: 0.0333 Recon Loss: 0.0218
[03/29 00:46:24 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-20000
[03/29 00:46:38 TiTok]: Reconstructing images...
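From step 20100 onward the batch time rises from roughly 0.40 s to 0.57 s and both losses jump and become noisier; this coincides with discriminator_start: 20000 in the losses block, after which the adversarial terms are active. A sketch of how the generator-side total plausibly combines the configured weights; the function and variable names are illustrative, not the repository's:

RECON_W, PERCEP_W, DISC_W = 1.0, 0.1, 0.01  # from the losses block above

def generator_total(recon_l2, perceptual, g_adv, step, disc_start=20000):
    # quantizer_weight is 0.0 in this run, so no quantizer term appears here.
    total = RECON_W * recon_l2 + PERCEP_W * perceptual
    if step >= disc_start:
        total += DISC_W * g_adv  # scaled further by discriminator_factor (1.0)
    return total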
[03/29 00:47:43 TiTok]: Data (t): 0.0032, 63.02/s/gpu Batch (t): 0.5713 LR: 0.000100 Step: 20100 Total Loss: 0.0320 Recon Loss: 0.0224
[03/29 00:48:40 TiTok]: Data (t): 0.0032, 62.95/s/gpu Batch (t): 0.5719 LR: 0.000100 Step: 20200 Total Loss: 0.0420 Recon Loss: 0.0265
[03/29 00:49:37 TiTok]: Data (t): 0.0032, 63.19/s/gpu Batch (t): 0.5697 LR: 0.000100 Step: 20300 Total Loss: 0.0421 Recon Loss: 0.0263
[03/29 00:50:34 TiTok]: Data (t): 0.0033, 63.01/s/gpu Batch (t): 0.5713 LR: 0.000100 Step: 20400 Total Loss: 0.0409 Recon Loss: 0.0245
[03/29 00:51:31 TiTok]: Data (t): 0.0033, 62.84/s/gpu Batch (t): 0.5729 LR: 0.000100 Step: 20500 Total Loss: 0.0415 Recon Loss: 0.0248
[03/29 00:52:29 TiTok]: Data (t): 0.0034, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000100 Step: 20600 Total Loss: 0.0523 Recon Loss: 0.0310
[03/29 00:53:26 TiTok]: Data (t): 0.0033, 62.95/s/gpu Batch (t): 0.5719 LR: 0.000100 Step: 20700 Total Loss: 0.0473 Recon Loss: 0.0319
[03/29 00:54:24 TiTok]: Data (t): 0.0035, 62.82/s/gpu Batch (t): 0.5731 LR: 0.000100 Step: 20800 Total Loss: 0.0461 Recon Loss: 0.0262
[03/29 00:55:21 TiTok]: Data (t): 0.0033, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000100 Step: 20900 Total Loss: 0.0345 Recon Loss: 0.0257
[03/29 00:56:19 TiTok]: Data (t): 0.0033, 52.14/s/gpu Batch (t): 0.6904 LR: 0.000100 Step: 21000 Total Loss: 0.0383 Recon Loss: 0.0279
[03/29 00:57:16 TiTok]: Data (t): 0.0033, 62.94/s/gpu Batch (t): 0.5719 LR: 0.000100 Step: 21100 Total Loss: 0.0293 Recon Loss: 0.0263
[03/29 00:58:14 TiTok]: Data (t): 0.0034, 63.02/s/gpu Batch (t): 0.5712 LR: 0.000100 Step: 21200 Total Loss: 0.0320 Recon Loss: 0.0246
[03/29 00:59:11 TiTok]: Data (t): 0.0033, 62.99/s/gpu Batch (t): 0.5716 LR: 0.000100 Step: 21300 Total Loss: 0.0409 Recon Loss: 0.0275
[03/29 01:00:08 TiTok]: Data (t): 0.0032, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000100 Step: 21400 Total Loss: 0.0396 Recon Loss: 0.0240
[03/29 01:01:06 TiTok]: Data (t): 0.0034, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000100 Step: 21500 Total Loss: 0.0474 Recon Loss: 0.0298
[03/29 01:02:03 TiTok]: Data (t): 0.0034, 62.87/s/gpu Batch (t): 0.5726 LR: 0.000100 Step: 21600 Total Loss: 0.0482 Recon Loss: 0.0314
[03/29 01:03:00 TiTok]: Data (t): 0.0033, 63.02/s/gpu Batch (t): 0.5712 LR: 0.000100 Step: 21700 Total Loss: 0.0498 Recon Loss: 0.0305
[03/29 01:03:59 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000100 Step: 21800 Total Loss: 0.0490 Recon Loss: 0.0278
[03/29 01:04:56 TiTok]: Data (t): 0.0033, 63.04/s/gpu Batch (t): 0.5710 LR: 0.000100 Step: 21900 Total Loss: 0.0428 Recon Loss: 0.0340
[03/29 01:05:54 TiTok]: Data (t): 0.0034, 56.86/s/gpu Batch (t): 0.6332 LR: 0.000100 Step: 22000 Total Loss: 0.0425 Recon Loss: 0.0286
[03/29 01:06:51 TiTok]: Data (t): 0.0034, 62.89/s/gpu Batch (t): 0.5724 LR: 0.000100 Step: 22100 Total Loss: 0.0417 Recon Loss: 0.0265
[03/29 01:07:48 TiTok]: Data (t): 0.0034, 62.70/s/gpu Batch (t): 0.5742 LR: 0.000100 Step: 22200 Total Loss: 0.0439 Recon Loss: 0.0282
[03/29 01:08:47 TiTok]: Data (t): 0.0033, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000100 Step: 22300 Total Loss: 0.0387 Recon Loss: 0.0270
[03/29 01:09:44 TiTok]: Data (t): 0.0032, 63.05/s/gpu Batch (t): 0.5710 LR: 0.000100 Step: 22400 Total Loss: 0.0382 Recon Loss: 0.0275
[03/29 01:10:42 TiTok]: Data (t): 0.0033, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000100 Step: 22500 Total Loss: 0.0426 Recon Loss: 0.0319
[03/29 01:11:39 TiTok]: Data (t): 0.0032, 63.11/s/gpu Batch (t): 0.5705 LR: 0.000100 Step: 22600 Total Loss: 0.0483 Recon Loss: 0.0301
[03/29 01:12:36 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000100 Step: 22700 Total Loss: 0.0481 Recon Loss: 0.0303
[03/29 01:13:33 TiTok]: Data (t): 0.0032, 63.18/s/gpu Batch (t): 0.5698 LR: 0.000100 Step: 22800 Total Loss: 0.0342 Recon Loss: 0.0235
[03/29 01:14:30 TiTok]: Data (t): 0.0033, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000100 Step: 22900 Total Loss: 0.0324 Recon Loss: 0.0234
[03/29 01:15:27 TiTok]: Data (t): 0.0032, 56.23/s/gpu Batch (t): 0.6402 LR: 0.000100 Step: 23000 Total Loss: 0.0417 Recon Loss: 0.0261
[03/29 01:16:25 TiTok]: Data (t): 0.0032, 63.03/s/gpu Batch (t): 0.5712 LR: 0.000100 Step: 23100 Total Loss: 0.0491 Recon Loss: 0.0319
[03/29 01:17:22 TiTok]: Data (t): 0.0033, 62.80/s/gpu Batch (t): 0.5733 LR: 0.000100 Step: 23200 Total Loss: 0.0422 Recon Loss: 0.0263
[03/29 01:18:20 TiTok]: Data (t): 0.0034, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000100 Step: 23300 Total Loss: 0.0434 Recon Loss: 0.0267
[03/29 01:19:18 TiTok]: Data (t): 0.0035, 62.96/s/gpu Batch (t): 0.5718 LR: 0.000100 Step: 23400 Total Loss: 0.0426 Recon Loss: 0.0284
[03/29 01:20:15 TiTok]: Data (t): 0.0033, 63.07/s/gpu Batch (t): 0.5708 LR: 0.000100 Step: 23500 Total Loss: 0.0401 Recon Loss: 0.0266
[03/29 01:21:12 TiTok]: Data (t): 0.0032, 62.93/s/gpu Batch (t): 0.5721 LR: 0.000100 Step: 23600 Total Loss: 0.0405 Recon Loss: 0.0242
[03/29 01:22:09 TiTok]: Data (t): 0.0032, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000100 Step: 23700 Total Loss: 0.0330 Recon Loss: 0.0237
[03/29 01:23:07 TiTok]: Data (t): 0.0033, 63.00/s/gpu Batch (t): 0.5715 LR: 0.000100 Step: 23800 Total Loss: 0.0451 Recon Loss: 0.0276
[03/29 01:24:04 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5731 LR: 0.000100 Step: 23900 Total Loss: 0.0432 Recon Loss: 0.0274
[03/29 01:25:01 TiTok]: Data (t): 0.0033, 56.68/s/gpu Batch (t): 0.6351 LR: 0.000100 Step: 24000 Total Loss: 0.0377 Recon Loss: 0.0268
[03/29 01:26:00 TiTok]: Data (t): 0.0032, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000100 Step: 24100 Total Loss: 0.0442 Recon Loss: 0.0289
[03/29 01:26:58 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000100 Step: 24200 Total Loss: 0.0389 Recon Loss: 0.0243
[03/29 01:27:55 TiTok]: Data (t): 0.0032, 62.91/s/gpu Batch (t): 0.5722 LR: 0.000100 Step: 24300 Total Loss: 0.0476 Recon Loss: 0.0296
[03/29 01:28:52 TiTok]: Data (t): 0.0033, 62.81/s/gpu Batch (t): 0.5731 LR: 0.000100 Step: 24400 Total Loss: 0.0451 Recon Loss: 0.0285
[03/29 01:29:50 TiTok]: Data (t): 0.0034, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000100 Step: 24500 Total Loss: 0.0485 Recon Loss: 0.0303
[03/29 01:30:47 TiTok]: Data (t): 0.0033, 62.78/s/gpu Batch (t): 0.5735 LR: 0.000100 Step: 24600 Total Loss: 0.0384 Recon Loss: 0.0242
[03/29 01:31:44 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000100 Step: 24700 Total Loss: 0.0466 Recon Loss: 0.0281
[03/29 01:32:42 TiTok]: Data (t): 0.0033, 62.93/s/gpu Batch (t): 0.5721 LR: 0.000100 Step: 24800 Total Loss: 0.0401 Recon Loss: 0.0260
[03/29 01:33:39 TiTok]: Data (t): 0.0033, 62.96/s/gpu Batch (t): 0.5718 LR: 0.000100 Step: 24900 Total Loss: 0.0453 Recon Loss: 0.0292
[03/29 01:34:36 TiTok]: Data (t): 0.0032, 56.81/s/gpu Batch (t): 0.6337 LR: 0.000100 Step: 25000 Total Loss: 0.0640 Recon Loss: 0.0412
[03/29 01:35:34 TiTok]: Data (t): 0.0034, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000100 Step: 25100 Total Loss: 0.0429 Recon Loss: 0.0281
[03/29 01:36:31 TiTok]: Data (t): 0.0033, 63.00/s/gpu Batch (t): 0.5714 LR: 0.000100 Step: 25200 Total Loss: 0.0447 Recon Loss: 0.0264
[03/29 01:37:29 TiTok]: Data (t): 0.0034, 62.79/s/gpu Batch (t): 0.5733 LR: 0.000100 Step: 25300 Total Loss: 0.0426 Recon Loss: 0.0269
[03/29 01:38:26 TiTok]: Data (t): 0.0033, 62.96/s/gpu Batch (t): 0.5718 LR: 0.000100 Step: 25400 Total Loss: 0.0399 Recon Loss: 0.0235
[03/29 01:39:24 TiTok]: Data (t): 0.0032, 63.02/s/gpu Batch (t): 0.5713 LR: 0.000100 Step: 25500 Total Loss: 0.0425 Recon Loss: 0.0267
[03/29 01:40:21 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000100 Step: 25600 Total Loss: 0.0433 Recon Loss: 0.0303
[03/29 01:41:18 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000100 Step: 25700 Total Loss: 0.0402 Recon Loss: 0.0283
[03/29 01:42:15 TiTok]: Data (t): 0.0033, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000100 Step: 25800 Total Loss: 0.0444 Recon Loss: 0.0298
[03/29 01:43:13 TiTok]: Data (t): 0.0032, 63.04/s/gpu Batch (t): 0.5710 LR: 0.000100 Step: 25900 Total Loss: 0.0420 Recon Loss: 0.0252
[03/29 01:44:10 TiTok]: Data (t): 0.0034, 56.93/s/gpu Batch (t): 0.6324 LR: 0.000100 Step: 26000 Total Loss: 0.0453 Recon Loss: 0.0307
[03/29 01:45:08 TiTok]: Data (t): 0.0033, 62.87/s/gpu Batch (t): 0.5726 LR: 0.000100 Step: 26100 Total Loss: 0.0512 Recon Loss: 0.0325
[03/29 01:46:05 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000100 Step: 26200 Total Loss: 0.0416 Recon Loss: 0.0276
[03/29 01:47:02 TiTok]: Data (t): 0.0033, 63.09/s/gpu Batch (t): 0.5707 LR: 0.000100 Step: 26300 Total Loss: 0.0460 Recon Loss: 0.0299
[03/29 01:48:01 TiTok]: Data (t): 0.0033, 62.93/s/gpu Batch (t): 0.5721 LR: 0.000100 Step: 26400 Total Loss: 0.0419 Recon Loss: 0.0273
[03/29 01:48:59 TiTok]: Data (t): 0.0034, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000100 Step: 26500 Total Loss: 0.0459 Recon Loss: 0.0289
[03/29 01:49:56 TiTok]: Data (t): 0.0032, 62.92/s/gpu Batch (t): 0.5722 LR: 0.000100 Step: 26600 Total Loss: 0.0404 Recon Loss: 0.0275
[03/29 01:50:53 TiTok]: Data (t): 0.0033, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000100 Step: 26700 Total Loss: 0.0477 Recon Loss: 0.0313
[03/29 01:51:52 TiTok]: Data (t): 0.0034, 63.31/s/gpu Batch (t): 0.5686 LR: 0.000100 Step: 26800 Total Loss: 0.0363 Recon Loss: 0.0246
[03/29 01:52:49 TiTok]: Data (t): 0.0032, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000100 Step: 26900 Total Loss: 0.0432 Recon Loss: 0.0261
[03/29 01:53:46 TiTok]: Data (t): 0.0033, 56.81/s/gpu Batch (t): 0.6337 LR: 0.000100 Step: 27000 Total Loss: 0.0427 Recon Loss: 0.0277
[03/29 01:54:43 TiTok]: Data (t): 0.0032, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000100 Step: 27100 Total Loss: 0.0380 Recon Loss: 0.0267
[03/29 01:55:40 TiTok]: Data (t): 0.0033, 62.96/s/gpu Batch (t): 0.5717 LR: 0.000100 Step: 27200 Total Loss: 0.0433 Recon Loss: 0.0282
[03/29 01:56:38 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000100 Step: 27300 Total Loss: 0.0477 Recon Loss: 0.0310
[03/29 01:57:35 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000100 Step: 27400 Total Loss: 0.0437 Recon Loss: 0.0273
[03/29 01:58:32 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000100 Step: 27500 Total Loss: 0.0476 Recon Loss: 0.0302
[03/29 01:59:29 TiTok]: Data (t): 0.0032, 63.29/s/gpu Batch (t): 0.5688 LR: 0.000100 Step: 27600 Total Loss: 0.0524 Recon Loss: 0.0348
[03/29 02:00:26 TiTok]: Data (t): 0.0032, 62.91/s/gpu Batch (t): 0.5722 LR: 0.000100 Step: 27700 Total Loss: 0.0473 Recon Loss: 0.0307
[03/29 02:01:24 TiTok]: Data (t): 0.0031, 63.07/s/gpu Batch (t): 0.5708 LR: 0.000100 Step: 27800 Total Loss: 0.0420 Recon Loss: 0.0281
[03/29 02:02:21 TiTok]: Data (t): 0.0032, 63.34/s/gpu Batch (t): 0.5684 LR: 0.000100 Step: 27900 Total Loss: 0.0431 Recon Loss: 0.0298
[03/29 02:03:18 TiTok]: Data (t): 0.0031, 57.15/s/gpu Batch (t): 0.6299 LR: 0.000100 Step: 28000 Total Loss: 0.0458 Recon Loss: 0.0334
[03/29 02:04:15 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000100 Step: 28100 Total Loss: 0.0473 Recon Loss: 0.0294
[03/29 02:05:12 TiTok]: Data (t): 0.0032, 63.14/s/gpu Batch (t): 0.5702 LR: 0.000100 Step: 28200 Total Loss: 0.0451 Recon Loss: 0.0286
[03/29 02:06:09 TiTok]: Data (t): 0.0032, 62.94/s/gpu Batch (t): 0.5720 LR: 0.000100 Step: 28300 Total Loss: 0.0424 Recon Loss: 0.0289
[03/29 02:07:06 TiTok]: Data (t): 0.0033, 62.99/s/gpu Batch (t): 0.5715 LR: 0.000100 Step: 28400 Total Loss: 0.0438 Recon Loss: 0.0290
[03/29 02:08:04 TiTok]: Data (t): 0.0033, 62.99/s/gpu Batch (t): 0.5715 LR: 0.000100 Step: 28500 Total Loss: 0.0442 Recon Loss: 0.0287
[03/29 02:09:02 TiTok]: Data (t): 0.0033, 62.96/s/gpu Batch (t): 0.5718 LR: 0.000099 Step: 28600 Total Loss: 0.0369 Recon Loss: 0.0248
[03/29 02:10:00 TiTok]: Data (t): 0.0033, 63.14/s/gpu Batch (t): 0.5701 LR: 0.000099 Step: 28700 Total Loss: 0.0436 Recon Loss: 0.0295
[03/29 02:10:57 TiTok]: Data (t): 0.0033, 63.02/s/gpu Batch (t): 0.5713 LR: 0.000099 Step: 28800 Total Loss: 0.0515 Recon Loss: 0.0347
[03/29 02:11:54 TiTok]: Data (t): 0.0032, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000099 Step: 28900 Total Loss: 0.0442 Recon Loss: 0.0291
[03/29 02:12:51 TiTok]: Data (t): 0.0033, 57.12/s/gpu Batch (t): 0.6303 LR: 0.000099 Step: 29000 Total Loss: 0.0368 Recon Loss: 0.0264
[03/29 02:13:49 TiTok]: Data (t): 0.0033, 63.13/s/gpu Batch (t): 0.5703 LR: 0.000099 Step: 29100 Total Loss: 0.0420 Recon Loss: 0.0286
[03/29 02:14:46 TiTok]: Data (t): 0.0032, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000099 Step: 29200 Total Loss: 0.0453 Recon Loss: 0.0258
[03/29 02:15:43 TiTok]: Data (t): 0.0033, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000099 Step: 29300 Total Loss: 0.0358 Recon Loss: 0.0255
[03/29 02:16:40 TiTok]: Data (t): 0.0032, 63.05/s/gpu Batch (t): 0.5710 LR: 0.000099 Step: 29400 Total Loss: 0.0442 Recon Loss: 0.0307
[03/29 02:17:38 TiTok]: Data (t): 0.0033, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000099 Step: 29500 Total Loss: 0.0411 Recon Loss: 0.0268
[03/29 02:18:35 TiTok]: Data (t): 0.0032, 62.99/s/gpu Batch (t): 0.5715 LR: 0.000099 Step: 29600 Total Loss: 0.0458 Recon Loss: 0.0307
[03/29 02:19:32 TiTok]: Data (t): 0.0031, 62.78/s/gpu Batch (t): 0.5734 LR: 0.000099 Step: 29700 Total Loss: 0.0442 Recon Loss: 0.0275
[03/29 02:20:29 TiTok]: Data (t): 0.0032, 62.95/s/gpu Batch (t): 0.5719 LR: 0.000099 Step: 29800 Total Loss: 0.0393 Recon Loss: 0.0297
[03/29 02:21:27 TiTok]: Data (t): 0.0033, 62.68/s/gpu Batch (t): 0.5743 LR: 0.000099 Step: 29900 Total Loss: 0.0429 Recon Loss: 0.0294
[03/29 02:22:24 TiTok]: Data (t): 0.0031, 57.21/s/gpu Batch (t): 0.6292 LR: 0.000099 Step: 30000 Total Loss: 0.0424 Recon Loss: 0.0278
[03/29 02:22:26 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-30000
[03/29 02:22:40 TiTok]: Reconstructing images...
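Around step 28600 the displayed LR ticks from 0.000100 down to 0.000099, which is where the cosine decay first becomes visible at six decimal places. Reusing the lr_at sketch from after the config dump (so the same caveat about the assumed schedule shape applies):

# t = (28600 - 5000) / 495000 ≈ 0.0477
# lr ≈ 1e-5 + 0.5 * 9e-5 * (1 + cos(pi * 0.0477)) ≈ 9.95e-5
print(f"{lr_at(28600):.6f}")  # 0.000099, matching the entries above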
[03/29 02:23:38 TiTok]: Data (t): 0.0033, 63.24/s/gpu Batch (t): 0.5693 LR: 0.000099 Step: 30100 Total Loss: 0.0440 Recon Loss: 0.0308 [03/29 02:24:35 TiTok]: Data (t): 0.0032, 63.02/s/gpu Batch (t): 0.5712 LR: 0.000099 Step: 30200 Total Loss: 0.0468 Recon Loss: 0.0334 [03/29 02:25:32 TiTok]: Data (t): 0.0033, 63.03/s/gpu Batch (t): 0.5712 LR: 0.000099 Step: 30300 Total Loss: 0.0437 Recon Loss: 0.0307 [03/29 02:26:30 TiTok]: Data (t): 0.0033, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000099 Step: 30400 Total Loss: 0.0397 Recon Loss: 0.0268 [03/29 02:27:27 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000099 Step: 30500 Total Loss: 0.0416 Recon Loss: 0.0274 [03/29 02:28:24 TiTok]: Data (t): 0.0032, 62.77/s/gpu Batch (t): 0.5736 LR: 0.000099 Step: 30600 Total Loss: 0.0409 Recon Loss: 0.0286 [03/29 02:29:21 TiTok]: Data (t): 0.0032, 63.00/s/gpu Batch (t): 0.5714 LR: 0.000099 Step: 30700 Total Loss: 0.0424 Recon Loss: 0.0302 [03/29 02:30:18 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000099 Step: 30800 Total Loss: 0.0434 Recon Loss: 0.0289 [03/29 02:31:17 TiTok]: Data (t): 0.0033, 61.74/s/gpu Batch (t): 0.5831 LR: 0.000099 Step: 30900 Total Loss: 0.0411 Recon Loss: 0.0296 [03/29 02:32:14 TiTok]: Data (t): 0.0032, 52.29/s/gpu Batch (t): 0.6885 LR: 0.000099 Step: 31000 Total Loss: 0.0402 Recon Loss: 0.0284 [03/29 02:33:12 TiTok]: Data (t): 0.0031, 61.92/s/gpu Batch (t): 0.5814 LR: 0.000099 Step: 31100 Total Loss: 0.0428 Recon Loss: 0.0299 [03/29 02:34:10 TiTok]: Data (t): 0.0033, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000099 Step: 31200 Total Loss: 0.0428 Recon Loss: 0.0287 [03/29 02:35:08 TiTok]: Data (t): 0.0031, 63.10/s/gpu Batch (t): 0.5705 LR: 0.000099 Step: 31300 Total Loss: 0.0400 Recon Loss: 0.0275 [03/29 02:36:05 TiTok]: Data (t): 0.0031, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000099 Step: 31400 Total Loss: 0.0435 Recon Loss: 0.0276 [03/29 02:37:02 TiTok]: Data (t): 0.0031, 63.01/s/gpu Batch (t): 0.5713 LR: 0.000099 Step: 31500 Total Loss: 0.0442 Recon Loss: 0.0311 [03/29 02:38:00 TiTok]: Data (t): 0.0033, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000099 Step: 31600 Total Loss: 0.0448 Recon Loss: 0.0269 [03/29 02:38:59 TiTok]: Data (t): 0.0032, 59.83/s/gpu Batch (t): 0.6017 LR: 0.000099 Step: 31700 Total Loss: 0.0360 Recon Loss: 0.0256 [03/29 02:39:58 TiTok]: Data (t): 0.0031, 59.72/s/gpu Batch (t): 0.6029 LR: 0.000099 Step: 31800 Total Loss: 0.0437 Recon Loss: 0.0269 [03/29 02:40:56 TiTok]: Data (t): 0.0033, 62.93/s/gpu Batch (t): 0.5721 LR: 0.000099 Step: 31900 Total Loss: 0.0430 Recon Loss: 0.0279 [03/29 02:41:55 TiTok]: Data (t): 0.0031, 57.27/s/gpu Batch (t): 0.6286 LR: 0.000099 Step: 32000 Total Loss: 0.0397 Recon Loss: 0.0266 [03/29 02:42:54 TiTok]: Data (t): 0.0033, 59.09/s/gpu Batch (t): 0.6093 LR: 0.000099 Step: 32100 Total Loss: 0.0412 Recon Loss: 0.0264 [03/29 02:43:53 TiTok]: Data (t): 0.0032, 60.05/s/gpu Batch (t): 0.5995 LR: 0.000099 Step: 32200 Total Loss: 0.0433 Recon Loss: 0.0304 [03/29 02:44:52 TiTok]: Data (t): 0.0031, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000099 Step: 32300 Total Loss: 0.0433 Recon Loss: 0.0298 [03/29 02:45:50 TiTok]: Data (t): 0.0032, 62.86/s/gpu Batch (t): 0.5727 LR: 0.000099 Step: 32400 Total Loss: 0.0468 Recon Loss: 0.0310 [03/29 02:46:49 TiTok]: Data (t): 0.0032, 59.83/s/gpu Batch (t): 0.6017 LR: 0.000099 Step: 32500 Total Loss: 0.0394 Recon Loss: 0.0241 [03/29 02:47:48 TiTok]: Data (t): 0.0031, 63.26/s/gpu Batch (t): 0.5691 LR: 0.000099 Step: 32600 Total Loss: 0.0430 Recon Loss: 0.0273 [03/29 02:48:47 TiTok]: Data (t): 0.0032, 
63.10/s/gpu Batch (t): 0.5705 LR: 0.000099 Step: 32700 Total Loss: 0.0460 Recon Loss: 0.0282 [03/29 02:49:45 TiTok]: Data (t): 0.0031, 55.34/s/gpu Batch (t): 0.6505 LR: 0.000099 Step: 32800 Total Loss: 0.0422 Recon Loss: 0.0299 [03/29 02:50:44 TiTok]: Data (t): 0.0032, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000099 Step: 32900 Total Loss: 0.0472 Recon Loss: 0.0342 [03/29 02:51:42 TiTok]: Data (t): 0.0031, 57.19/s/gpu Batch (t): 0.6295 LR: 0.000099 Step: 33000 Total Loss: 0.0452 Recon Loss: 0.0301 [03/29 02:52:41 TiTok]: Data (t): 0.0031, 63.36/s/gpu Batch (t): 0.5682 LR: 0.000099 Step: 33100 Total Loss: 0.0439 Recon Loss: 0.0312 [03/29 02:53:41 TiTok]: Data (t): 0.0032, 63.23/s/gpu Batch (t): 0.5693 LR: 0.000099 Step: 33200 Total Loss: 0.0433 Recon Loss: 0.0300 [03/29 02:54:40 TiTok]: Data (t): 0.0032, 59.89/s/gpu Batch (t): 0.6011 LR: 0.000099 Step: 33300 Total Loss: 0.0415 Recon Loss: 0.0309 [03/29 02:55:39 TiTok]: Data (t): 0.0033, 63.14/s/gpu Batch (t): 0.5702 LR: 0.000099 Step: 33400 Total Loss: 0.0406 Recon Loss: 0.0278 [03/29 02:56:37 TiTok]: Data (t): 0.0032, 60.18/s/gpu Batch (t): 0.5982 LR: 0.000099 Step: 33500 Total Loss: 0.0390 Recon Loss: 0.0278 [03/29 02:57:36 TiTok]: Data (t): 0.0033, 60.15/s/gpu Batch (t): 0.5985 LR: 0.000099 Step: 33600 Total Loss: 0.0433 Recon Loss: 0.0304 [03/29 02:58:34 TiTok]: Data (t): 0.0032, 60.08/s/gpu Batch (t): 0.5992 LR: 0.000099 Step: 33700 Total Loss: 0.0412 Recon Loss: 0.0300 [03/29 02:59:33 TiTok]: Data (t): 0.0033, 59.74/s/gpu Batch (t): 0.6027 LR: 0.000099 Step: 33800 Total Loss: 0.0426 Recon Loss: 0.0288 [03/29 03:00:31 TiTok]: Data (t): 0.0032, 59.59/s/gpu Batch (t): 0.6041 LR: 0.000099 Step: 33900 Total Loss: 0.0422 Recon Loss: 0.0249 [03/29 03:01:30 TiTok]: Data (t): 0.0032, 54.77/s/gpu Batch (t): 0.6573 LR: 0.000099 Step: 34000 Total Loss: 0.0422 Recon Loss: 0.0257 [03/29 03:02:29 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000099 Step: 34100 Total Loss: 0.0443 Recon Loss: 0.0296 [03/29 03:03:28 TiTok]: Data (t): 0.0033, 59.97/s/gpu Batch (t): 0.6003 LR: 0.000099 Step: 34200 Total Loss: 0.0408 Recon Loss: 0.0269 [03/29 03:04:26 TiTok]: Data (t): 0.0032, 59.61/s/gpu Batch (t): 0.6039 LR: 0.000099 Step: 34300 Total Loss: 0.0406 Recon Loss: 0.0291 [03/29 03:05:26 TiTok]: Data (t): 0.0032, 63.25/s/gpu Batch (t): 0.5692 LR: 0.000099 Step: 34400 Total Loss: 0.0433 Recon Loss: 0.0297 [03/29 03:06:23 TiTok]: Data (t): 0.0033, 59.81/s/gpu Batch (t): 0.6019 LR: 0.000099 Step: 34500 Total Loss: 0.0395 Recon Loss: 0.0255 [03/29 03:07:23 TiTok]: Data (t): 0.0032, 63.26/s/gpu Batch (t): 0.5691 LR: 0.000099 Step: 34600 Total Loss: 0.0460 Recon Loss: 0.0297 [03/29 03:08:20 TiTok]: Data (t): 0.0032, 63.30/s/gpu Batch (t): 0.5687 LR: 0.000099 Step: 34700 Total Loss: 0.0434 Recon Loss: 0.0283 [03/29 03:09:19 TiTok]: Data (t): 0.0032, 60.26/s/gpu Batch (t): 0.5974 LR: 0.000099 Step: 34800 Total Loss: 0.0429 Recon Loss: 0.0286 [03/29 03:10:19 TiTok]: Data (t): 0.0033, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000099 Step: 34900 Total Loss: 0.0437 Recon Loss: 0.0293 [03/29 03:11:16 TiTok]: Data (t): 0.0035, 56.88/s/gpu Batch (t): 0.6329 LR: 0.000099 Step: 35000 Total Loss: 0.0402 Recon Loss: 0.0274 [03/29 03:12:15 TiTok]: Data (t): 0.0032, 59.99/s/gpu Batch (t): 0.6001 LR: 0.000099 Step: 35100 Total Loss: 0.0434 Recon Loss: 0.0294 [03/29 03:13:15 TiTok]: Data (t): 0.0033, 60.06/s/gpu Batch (t): 0.5994 LR: 0.000099 Step: 35200 Total Loss: 0.0444 Recon Loss: 0.0294 [03/29 03:14:14 TiTok]: Data (t): 0.0033, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000099 
Step: 35300 Total Loss: 0.0355 Recon Loss: 0.0256 [03/29 03:15:12 TiTok]: Data (t): 0.0033, 42.87/s/gpu Batch (t): 0.8398 LR: 0.000099 Step: 35400 Total Loss: 0.0447 Recon Loss: 0.0302 [03/29 03:16:10 TiTok]: Data (t): 0.0034, 59.89/s/gpu Batch (t): 0.6011 LR: 0.000099 Step: 35500 Total Loss: 0.0448 Recon Loss: 0.0297 [03/29 03:17:10 TiTok]: Data (t): 0.0034, 60.15/s/gpu Batch (t): 0.5985 LR: 0.000099 Step: 35600 Total Loss: 0.0537 Recon Loss: 0.0390 [03/29 03:18:11 TiTok]: Data (t): 0.0033, 60.00/s/gpu Batch (t): 0.6000 LR: 0.000099 Step: 35700 Total Loss: 0.0419 Recon Loss: 0.0275 [03/29 03:19:11 TiTok]: Data (t): 0.0032, 60.07/s/gpu Batch (t): 0.5993 LR: 0.000099 Step: 35800 Total Loss: 0.0455 Recon Loss: 0.0291 [03/29 03:20:09 TiTok]: Data (t): 0.0034, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000099 Step: 35900 Total Loss: 0.0415 Recon Loss: 0.0292 [03/29 03:21:07 TiTok]: Data (t): 0.0032, 56.93/s/gpu Batch (t): 0.6324 LR: 0.000099 Step: 36000 Total Loss: 0.0422 Recon Loss: 0.0279 [03/29 03:22:04 TiTok]: Data (t): 0.0032, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000099 Step: 36100 Total Loss: 0.0410 Recon Loss: 0.0270 [03/29 03:23:01 TiTok]: Data (t): 0.0033, 62.95/s/gpu Batch (t): 0.5718 LR: 0.000099 Step: 36200 Total Loss: 0.0483 Recon Loss: 0.0306 [03/29 03:23:58 TiTok]: Data (t): 0.0033, 63.10/s/gpu Batch (t): 0.5706 LR: 0.000099 Step: 36300 Total Loss: 0.0448 Recon Loss: 0.0311 [03/29 03:24:58 TiTok]: Data (t): 0.0032, 60.03/s/gpu Batch (t): 0.5997 LR: 0.000099 Step: 36400 Total Loss: 0.0504 Recon Loss: 0.0342 [03/29 03:25:58 TiTok]: Data (t): 0.0033, 59.65/s/gpu Batch (t): 0.6035 LR: 0.000099 Step: 36500 Total Loss: 0.0456 Recon Loss: 0.0304 [03/29 03:26:58 TiTok]: Data (t): 0.0032, 60.15/s/gpu Batch (t): 0.5985 LR: 0.000099 Step: 36600 Total Loss: 0.0423 Recon Loss: 0.0285 [03/29 03:27:58 TiTok]: Data (t): 0.0033, 60.11/s/gpu Batch (t): 0.5989 LR: 0.000099 Step: 36700 Total Loss: 0.0415 Recon Loss: 0.0280 [03/29 03:28:58 TiTok]: Data (t): 0.0032, 60.05/s/gpu Batch (t): 0.5995 LR: 0.000099 Step: 36800 Total Loss: 0.0444 Recon Loss: 0.0300 [03/29 03:29:58 TiTok]: Data (t): 0.0033, 60.16/s/gpu Batch (t): 0.5984 LR: 0.000099 Step: 36900 Total Loss: 0.0468 Recon Loss: 0.0310 [03/29 03:30:58 TiTok]: Data (t): 0.0032, 54.39/s/gpu Batch (t): 0.6619 LR: 0.000099 Step: 37000 Total Loss: 0.0472 Recon Loss: 0.0304 [03/29 03:31:59 TiTok]: Data (t): 0.0033, 60.11/s/gpu Batch (t): 0.5989 LR: 0.000099 Step: 37100 Total Loss: 0.0455 Recon Loss: 0.0325 [03/29 03:32:58 TiTok]: Data (t): 0.0032, 60.23/s/gpu Batch (t): 0.5977 LR: 0.000099 Step: 37200 Total Loss: 0.0437 Recon Loss: 0.0318 [03/29 03:33:58 TiTok]: Data (t): 0.0032, 60.00/s/gpu Batch (t): 0.6000 LR: 0.000099 Step: 37300 Total Loss: 0.0414 Recon Loss: 0.0272 [03/29 03:34:58 TiTok]: Data (t): 0.0033, 59.81/s/gpu Batch (t): 0.6020 LR: 0.000099 Step: 37400 Total Loss: 0.0389 Recon Loss: 0.0247 [03/29 03:35:57 TiTok]: Data (t): 0.0032, 62.84/s/gpu Batch (t): 0.5729 LR: 0.000099 Step: 37500 Total Loss: 0.0449 Recon Loss: 0.0302 [03/29 03:36:54 TiTok]: Data (t): 0.0032, 62.94/s/gpu Batch (t): 0.5719 LR: 0.000099 Step: 37600 Total Loss: 0.0462 Recon Loss: 0.0318 [03/29 03:37:52 TiTok]: Data (t): 0.0032, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000099 Step: 37700 Total Loss: 0.0418 Recon Loss: 0.0249 [03/29 03:38:50 TiTok]: Data (t): 0.0033, 62.92/s/gpu Batch (t): 0.5721 LR: 0.000099 Step: 37800 Total Loss: 0.0460 Recon Loss: 0.0294 [03/29 03:39:47 TiTok]: Data (t): 0.0032, 62.96/s/gpu Batch (t): 0.5718 LR: 0.000099 Step: 37900 Total Loss: 0.0423 Recon Loss: 
0.0262 [03/29 03:40:45 TiTok]: Data (t): 0.0033, 53.72/s/gpu Batch (t): 0.6702 LR: 0.000099 Step: 38000 Total Loss: 0.0342 Recon Loss: 0.0251 [03/29 03:41:42 TiTok]: Data (t): 0.0034, 62.92/s/gpu Batch (t): 0.5721 LR: 0.000099 Step: 38100 Total Loss: 0.0366 Recon Loss: 0.0249 [03/29 03:42:39 TiTok]: Data (t): 0.0033, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000099 Step: 38200 Total Loss: 0.0457 Recon Loss: 0.0294 [03/29 03:43:37 TiTok]: Data (t): 0.0033, 62.67/s/gpu Batch (t): 0.5745 LR: 0.000099 Step: 38300 Total Loss: 0.0469 Recon Loss: 0.0310 [03/29 03:44:34 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000099 Step: 38400 Total Loss: 0.0439 Recon Loss: 0.0282 [03/29 03:45:31 TiTok]: Data (t): 0.0033, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000099 Step: 38500 Total Loss: 0.0415 Recon Loss: 0.0288 [03/29 03:46:28 TiTok]: Data (t): 0.0032, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000099 Step: 38600 Total Loss: 0.0438 Recon Loss: 0.0277 [03/29 03:47:25 TiTok]: Data (t): 0.0031, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000099 Step: 38700 Total Loss: 0.0446 Recon Loss: 0.0285 [03/29 03:48:22 TiTok]: Data (t): 0.0031, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000099 Step: 38800 Total Loss: 0.0426 Recon Loss: 0.0266 [03/29 03:49:20 TiTok]: Data (t): 0.0033, 63.14/s/gpu Batch (t): 0.5701 LR: 0.000099 Step: 38900 Total Loss: 0.0413 Recon Loss: 0.0269 [03/29 03:50:17 TiTok]: Data (t): 0.0033, 57.22/s/gpu Batch (t): 0.6291 LR: 0.000099 Step: 39000 Total Loss: 0.0445 Recon Loss: 0.0294 [03/29 03:51:14 TiTok]: Data (t): 0.0032, 63.33/s/gpu Batch (t): 0.5685 LR: 0.000099 Step: 39100 Total Loss: 0.0447 Recon Loss: 0.0305 [03/29 03:52:11 TiTok]: Data (t): 0.0032, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000099 Step: 39200 Total Loss: 0.0459 Recon Loss: 0.0324 [03/29 03:53:08 TiTok]: Data (t): 0.0034, 62.92/s/gpu Batch (t): 0.5722 LR: 0.000099 Step: 39300 Total Loss: 0.0439 Recon Loss: 0.0291 [03/29 03:54:06 TiTok]: Data (t): 0.0034, 62.71/s/gpu Batch (t): 0.5741 LR: 0.000099 Step: 39400 Total Loss: 0.0434 Recon Loss: 0.0291 [03/29 03:55:03 TiTok]: Data (t): 0.0033, 62.91/s/gpu Batch (t): 0.5722 LR: 0.000099 Step: 39500 Total Loss: 0.0412 Recon Loss: 0.0283 [03/29 03:56:01 TiTok]: Data (t): 0.0033, 62.79/s/gpu Batch (t): 0.5734 LR: 0.000099 Step: 39600 Total Loss: 0.0450 Recon Loss: 0.0302 [03/29 03:56:58 TiTok]: Data (t): 0.0032, 63.04/s/gpu Batch (t): 0.5711 LR: 0.000099 Step: 39700 Total Loss: 0.0441 Recon Loss: 0.0304 [03/29 03:57:56 TiTok]: Data (t): 0.0033, 63.03/s/gpu Batch (t): 0.5712 LR: 0.000099 Step: 39800 Total Loss: 0.0472 Recon Loss: 0.0329 [03/29 03:58:54 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000099 Step: 39900 Total Loss: 0.0458 Recon Loss: 0.0311 [03/29 03:59:53 TiTok]: Data (t): 0.0033, 53.96/s/gpu Batch (t): 0.6671 LR: 0.000099 Step: 40000 Total Loss: 0.0362 Recon Loss: 0.0263 [03/29 03:59:55 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-40000 [03/29 04:00:09 TiTok]: Reconstructing images... 
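Each metrics record above has a fixed shape — timestamp, data-loading time (Data (t)), per-GPU throughput, batch time (Batch (t)), LR, step, total loss, reconstruction loss — so the run can be monitored by scraping the log. A minimal parsing sketch, assuming exactly the format shown; the train.log filename is illustrative:

```python
import re

# Matches the tail of each record, e.g.
# "Batch (t): 0.6671 LR: 0.000099 Step: 40000 Total Loss: 0.0362 Recon Loss: 0.0263"
RECORD = re.compile(
    r"Batch \(t\):\s+(?P<batch_t>[\d.]+)\s+"
    r"LR:\s+(?P<lr>[\d.]+)\s+"
    r"Step:\s+(?P<step>\d+)\s+"
    r"Total Loss:\s+(?P<total>[\d.]+)\s+"
    r"Recon Loss:\s+(?P<recon>[\d.]+)"
)

def parse_log(path):
    """Yield (step, lr, batch_time, total_loss, recon_loss) per record."""
    with open(path) as f:
        text = f.read()
    for m in RECORD.finditer(text):
        yield (int(m["step"]), float(m["lr"]), float(m["batch_t"]),
               float(m["total"]), float(m["recon"]))

# Usage, e.g. to plot the loss curve:
#   steps, lrs, batch_ts, totals, recons = zip(*parse_log("train.log"))
```

Using \s+ between fields keeps the pattern robust to the line wrapping introduced when the console output was captured.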
[03/29 04:01:09 TiTok]: Data (t): 0.0034, 62.94/s/gpu Batch (t): 0.5720 LR: 0.000099 Step: 40100 Total Loss: 0.0429 Recon Loss: 0.0280 [03/29 04:02:06 TiTok]: Data (t): 0.0033, 63.00/s/gpu Batch (t): 0.5714 LR: 0.000099 Step: 40200 Total Loss: 0.0396 Recon Loss: 0.0259 [03/29 04:03:03 TiTok]: Data (t): 0.0033, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000099 Step: 40300 Total Loss: 0.0409 Recon Loss: 0.0281 [03/29 04:04:01 TiTok]: Data (t): 0.0033, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000099 Step: 40400 Total Loss: 0.0437 Recon Loss: 0.0296 [03/29 04:04:58 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000099 Step: 40500 Total Loss: 0.0429 Recon Loss: 0.0278 [03/29 04:05:55 TiTok]: Data (t): 0.0032, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000099 Step: 40600 Total Loss: 0.0449 Recon Loss: 0.0311 [03/29 04:06:53 TiTok]: Data (t): 0.0033, 62.81/s/gpu Batch (t): 0.5731 LR: 0.000099 Step: 40700 Total Loss: 0.0382 Recon Loss: 0.0251 [03/29 04:07:50 TiTok]: Data (t): 0.0033, 62.94/s/gpu Batch (t): 0.5720 LR: 0.000099 Step: 40800 Total Loss: 0.0414 Recon Loss: 0.0282 [03/29 04:08:48 TiTok]: Data (t): 0.0033, 60.27/s/gpu Batch (t): 0.5973 LR: 0.000099 Step: 40900 Total Loss: 0.0479 Recon Loss: 0.0331 [03/29 04:09:46 TiTok]: Data (t): 0.0032, 56.72/s/gpu Batch (t): 0.6347 LR: 0.000099 Step: 41000 Total Loss: 0.0472 Recon Loss: 0.0327 [03/29 04:10:43 TiTok]: Data (t): 0.0033, 62.78/s/gpu Batch (t): 0.5734 LR: 0.000099 Step: 41100 Total Loss: 0.0469 Recon Loss: 0.0301 [03/29 04:11:40 TiTok]: Data (t): 0.0034, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000099 Step: 41200 Total Loss: 0.0418 Recon Loss: 0.0273 [03/29 04:12:38 TiTok]: Data (t): 0.0033, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000099 Step: 41300 Total Loss: 0.0473 Recon Loss: 0.0321 [03/29 04:13:35 TiTok]: Data (t): 0.0033, 62.01/s/gpu Batch (t): 0.5805 LR: 0.000099 Step: 41400 Total Loss: 0.0463 Recon Loss: 0.0320 [03/29 04:14:32 TiTok]: Data (t): 0.0032, 63.00/s/gpu Batch (t): 0.5714 LR: 0.000099 Step: 41500 Total Loss: 0.0456 Recon Loss: 0.0318 [03/29 04:15:29 TiTok]: Data (t): 0.0034, 62.11/s/gpu Batch (t): 0.5796 LR: 0.000099 Step: 41600 Total Loss: 0.0461 Recon Loss: 0.0312 [03/29 04:16:27 TiTok]: Data (t): 0.0033, 62.87/s/gpu Batch (t): 0.5726 LR: 0.000099 Step: 41700 Total Loss: 0.0455 Recon Loss: 0.0317 [03/29 04:17:24 TiTok]: Data (t): 0.0034, 60.02/s/gpu Batch (t): 0.5998 LR: 0.000099 Step: 41800 Total Loss: 0.0410 Recon Loss: 0.0275 [03/29 04:18:22 TiTok]: Data (t): 0.0032, 62.89/s/gpu Batch (t): 0.5724 LR: 0.000099 Step: 41900 Total Loss: 0.0424 Recon Loss: 0.0295 [03/29 04:19:19 TiTok]: Data (t): 0.0034, 51.18/s/gpu Batch (t): 0.7034 LR: 0.000099 Step: 42000 Total Loss: 0.0460 Recon Loss: 0.0310 [03/29 04:20:17 TiTok]: Data (t): 0.0035, 62.86/s/gpu Batch (t): 0.5727 LR: 0.000099 Step: 42100 Total Loss: 0.0416 Recon Loss: 0.0275 [03/29 04:21:14 TiTok]: Data (t): 0.0033, 62.95/s/gpu Batch (t): 0.5719 LR: 0.000099 Step: 42200 Total Loss: 0.0447 Recon Loss: 0.0290 [03/29 04:22:13 TiTok]: Data (t): 0.0035, 62.87/s/gpu Batch (t): 0.5726 LR: 0.000099 Step: 42300 Total Loss: 0.0406 Recon Loss: 0.0290 [03/29 04:23:11 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000099 Step: 42400 Total Loss: 0.0451 Recon Loss: 0.0303 [03/29 04:24:08 TiTok]: Data (t): 0.0032, 63.25/s/gpu Batch (t): 0.5692 LR: 0.000099 Step: 42500 Total Loss: 0.0416 Recon Loss: 0.0291 [03/29 04:25:05 TiTok]: Data (t): 0.0032, 63.24/s/gpu Batch (t): 0.5693 LR: 0.000099 Step: 42600 Total Loss: 0.0404 Recon Loss: 0.0282 [03/29 04:26:02 TiTok]: Data (t): 0.0032, 
63.08/s/gpu Batch (t): 0.5707 LR: 0.000099 Step: 42700 Total Loss: 0.0395 Recon Loss: 0.0282 [03/29 04:27:00 TiTok]: Data (t): 0.0033, 63.19/s/gpu Batch (t): 0.5697 LR: 0.000099 Step: 42800 Total Loss: 0.0420 Recon Loss: 0.0293 [03/29 04:27:57 TiTok]: Data (t): 0.0033, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000099 Step: 42900 Total Loss: 0.0417 Recon Loss: 0.0278 [03/29 04:28:55 TiTok]: Data (t): 0.0032, 57.17/s/gpu Batch (t): 0.6297 LR: 0.000099 Step: 43000 Total Loss: 0.0466 Recon Loss: 0.0339 [03/29 04:29:52 TiTok]: Data (t): 0.0031, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000099 Step: 43100 Total Loss: 0.0476 Recon Loss: 0.0336 [03/29 04:30:49 TiTok]: Data (t): 0.0032, 63.18/s/gpu Batch (t): 0.5698 LR: 0.000099 Step: 43200 Total Loss: 0.0431 Recon Loss: 0.0285 [03/29 04:31:46 TiTok]: Data (t): 0.0031, 63.06/s/gpu Batch (t): 0.5708 LR: 0.000099 Step: 43300 Total Loss: 0.0435 Recon Loss: 0.0291 [03/29 04:32:44 TiTok]: Data (t): 0.0032, 63.20/s/gpu Batch (t): 0.5697 LR: 0.000099 Step: 43400 Total Loss: 0.0409 Recon Loss: 0.0286 [03/29 04:33:41 TiTok]: Data (t): 0.0032, 63.23/s/gpu Batch (t): 0.5694 LR: 0.000099 Step: 43500 Total Loss: 0.0390 Recon Loss: 0.0263 [03/29 04:34:38 TiTok]: Data (t): 0.0032, 63.32/s/gpu Batch (t): 0.5686 LR: 0.000099 Step: 43600 Total Loss: 0.0371 Recon Loss: 0.0268 [03/29 04:35:35 TiTok]: Data (t): 0.0032, 63.36/s/gpu Batch (t): 0.5681 LR: 0.000099 Step: 43700 Total Loss: 0.0433 Recon Loss: 0.0289 [03/29 04:36:32 TiTok]: Data (t): 0.0032, 63.31/s/gpu Batch (t): 0.5687 LR: 0.000099 Step: 43800 Total Loss: 0.0391 Recon Loss: 0.0268 [03/29 04:37:29 TiTok]: Data (t): 0.0032, 63.19/s/gpu Batch (t): 0.5697 LR: 0.000099 Step: 43900 Total Loss: 0.0430 Recon Loss: 0.0288 [03/29 04:38:26 TiTok]: Data (t): 0.0032, 57.21/s/gpu Batch (t): 0.6292 LR: 0.000099 Step: 44000 Total Loss: 0.0458 Recon Loss: 0.0310 [03/29 04:39:23 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000099 Step: 44100 Total Loss: 0.0408 Recon Loss: 0.0270 [03/29 04:40:21 TiTok]: Data (t): 0.0032, 63.02/s/gpu Batch (t): 0.5712 LR: 0.000099 Step: 44200 Total Loss: 0.0447 Recon Loss: 0.0299 [03/29 04:41:18 TiTok]: Data (t): 0.0033, 63.20/s/gpu Batch (t): 0.5696 LR: 0.000099 Step: 44300 Total Loss: 0.0408 Recon Loss: 0.0277 [03/29 04:42:15 TiTok]: Data (t): 0.0033, 63.28/s/gpu Batch (t): 0.5689 LR: 0.000099 Step: 44400 Total Loss: 0.0412 Recon Loss: 0.0281 [03/29 04:43:13 TiTok]: Data (t): 0.0035, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000099 Step: 44500 Total Loss: 0.0459 Recon Loss: 0.0299 [03/29 04:44:12 TiTok]: Data (t): 0.0032, 63.18/s/gpu Batch (t): 0.5698 LR: 0.000099 Step: 44600 Total Loss: 0.0440 Recon Loss: 0.0294 [03/29 04:45:09 TiTok]: Data (t): 0.0032, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000099 Step: 44700 Total Loss: 0.0419 Recon Loss: 0.0280 [03/29 04:46:07 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000099 Step: 44800 Total Loss: 0.0464 Recon Loss: 0.0316 [03/29 04:47:04 TiTok]: Data (t): 0.0032, 63.03/s/gpu Batch (t): 0.5711 LR: 0.000099 Step: 44900 Total Loss: 0.0442 Recon Loss: 0.0284 [03/29 04:48:01 TiTok]: Data (t): 0.0032, 57.05/s/gpu Batch (t): 0.6310 LR: 0.000099 Step: 45000 Total Loss: 0.0387 Recon Loss: 0.0269 [03/29 04:48:58 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000099 Step: 45100 Total Loss: 0.0425 Recon Loss: 0.0296 [03/29 04:49:56 TiTok]: Data (t): 0.0032, 63.28/s/gpu Batch (t): 0.5689 LR: 0.000099 Step: 45200 Total Loss: 0.0468 Recon Loss: 0.0321 [03/29 04:50:53 TiTok]: Data (t): 0.0033, 63.10/s/gpu Batch (t): 0.5706 LR: 0.000099 
Step: 45300 Total Loss: 0.0470 Recon Loss: 0.0329 [03/29 04:51:50 TiTok]: Data (t): 0.0033, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000099 Step: 45400 Total Loss: 0.0426 Recon Loss: 0.0291 [03/29 04:52:47 TiTok]: Data (t): 0.0031, 63.21/s/gpu Batch (t): 0.5695 LR: 0.000099 Step: 45500 Total Loss: 0.0375 Recon Loss: 0.0261 [03/29 04:53:45 TiTok]: Data (t): 0.0032, 63.21/s/gpu Batch (t): 0.5695 LR: 0.000099 Step: 45600 Total Loss: 0.0400 Recon Loss: 0.0265 [03/29 04:54:42 TiTok]: Data (t): 0.0033, 63.11/s/gpu Batch (t): 0.5705 LR: 0.000099 Step: 45700 Total Loss: 0.0409 Recon Loss: 0.0264 [03/29 04:55:39 TiTok]: Data (t): 0.0031, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000099 Step: 45800 Total Loss: 0.0418 Recon Loss: 0.0293 [03/29 04:56:36 TiTok]: Data (t): 0.0033, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000098 Step: 45900 Total Loss: 0.0431 Recon Loss: 0.0297 [03/29 04:57:33 TiTok]: Data (t): 0.0032, 57.12/s/gpu Batch (t): 0.6303 LR: 0.000098 Step: 46000 Total Loss: 0.0435 Recon Loss: 0.0291 [03/29 04:58:30 TiTok]: Data (t): 0.0032, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000098 Step: 46100 Total Loss: 0.0408 Recon Loss: 0.0276 [03/29 04:59:28 TiTok]: Data (t): 0.0032, 63.05/s/gpu Batch (t): 0.5709 LR: 0.000098 Step: 46200 Total Loss: 0.0428 Recon Loss: 0.0282 [03/29 05:00:25 TiTok]: Data (t): 0.0033, 63.02/s/gpu Batch (t): 0.5712 LR: 0.000098 Step: 46300 Total Loss: 0.0420 Recon Loss: 0.0290 [03/29 05:01:22 TiTok]: Data (t): 0.0033, 63.18/s/gpu Batch (t): 0.5698 LR: 0.000098 Step: 46400 Total Loss: 0.0445 Recon Loss: 0.0317 [03/29 05:02:19 TiTok]: Data (t): 0.0033, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000098 Step: 46500 Total Loss: 0.0422 Recon Loss: 0.0294 [03/29 05:03:16 TiTok]: Data (t): 0.0032, 63.10/s/gpu Batch (t): 0.5705 LR: 0.000098 Step: 46600 Total Loss: 0.0418 Recon Loss: 0.0281 [03/29 05:04:14 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000098 Step: 46700 Total Loss: 0.0467 Recon Loss: 0.0311 [03/29 05:05:12 TiTok]: Data (t): 0.0034, 63.20/s/gpu Batch (t): 0.5696 LR: 0.000098 Step: 46800 Total Loss: 0.0436 Recon Loss: 0.0284 [03/29 05:06:10 TiTok]: Data (t): 0.0033, 63.03/s/gpu Batch (t): 0.5711 LR: 0.000098 Step: 46900 Total Loss: 0.0431 Recon Loss: 0.0290 [03/29 05:07:07 TiTok]: Data (t): 0.0032, 57.01/s/gpu Batch (t): 0.6315 LR: 0.000098 Step: 47000 Total Loss: 0.0437 Recon Loss: 0.0307 [03/29 05:08:04 TiTok]: Data (t): 0.0032, 63.07/s/gpu Batch (t): 0.5708 LR: 0.000098 Step: 47100 Total Loss: 0.0408 Recon Loss: 0.0300 [03/29 05:09:01 TiTok]: Data (t): 0.0033, 63.01/s/gpu Batch (t): 0.5714 LR: 0.000098 Step: 47200 Total Loss: 0.0400 Recon Loss: 0.0280 [03/29 05:09:58 TiTok]: Data (t): 0.0032, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000098 Step: 47300 Total Loss: 0.0438 Recon Loss: 0.0287 [03/29 05:10:56 TiTok]: Data (t): 0.0033, 63.01/s/gpu Batch (t): 0.5713 LR: 0.000098 Step: 47400 Total Loss: 0.0412 Recon Loss: 0.0292 [03/29 05:11:53 TiTok]: Data (t): 0.0032, 63.25/s/gpu Batch (t): 0.5692 LR: 0.000098 Step: 47500 Total Loss: 0.0431 Recon Loss: 0.0297 [03/29 05:12:50 TiTok]: Data (t): 0.0031, 63.16/s/gpu Batch (t): 0.5700 LR: 0.000098 Step: 47600 Total Loss: 0.0429 Recon Loss: 0.0277 [03/29 05:13:47 TiTok]: Data (t): 0.0031, 63.21/s/gpu Batch (t): 0.5695 LR: 0.000098 Step: 47700 Total Loss: 0.0414 Recon Loss: 0.0266 [03/29 05:14:44 TiTok]: Data (t): 0.0032, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000098 Step: 47800 Total Loss: 0.0432 Recon Loss: 0.0295 [03/29 05:15:41 TiTok]: Data (t): 0.0032, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000098 Step: 47900 Total Loss: 0.0398 Recon Loss: 
0.0278 [03/29 05:16:39 TiTok]: Data (t): 0.0032, 57.22/s/gpu Batch (t): 0.6291 LR: 0.000098 Step: 48000 Total Loss: 0.0443 Recon Loss: 0.0302 [03/29 05:17:36 TiTok]: Data (t): 0.0032, 63.29/s/gpu Batch (t): 0.5688 LR: 0.000098 Step: 48100 Total Loss: 0.0410 Recon Loss: 0.0280 [03/29 05:18:33 TiTok]: Data (t): 0.0032, 63.21/s/gpu Batch (t): 0.5696 LR: 0.000098 Step: 48200 Total Loss: 0.0446 Recon Loss: 0.0311 [03/29 05:19:30 TiTok]: Data (t): 0.0031, 63.05/s/gpu Batch (t): 0.5710 LR: 0.000098 Step: 48300 Total Loss: 0.0439 Recon Loss: 0.0287 [03/29 05:20:27 TiTok]: Data (t): 0.0032, 63.14/s/gpu Batch (t): 0.5702 LR: 0.000098 Step: 48400 Total Loss: 0.0420 Recon Loss: 0.0274 [03/29 05:21:24 TiTok]: Data (t): 0.0032, 63.17/s/gpu Batch (t): 0.5698 LR: 0.000098 Step: 48500 Total Loss: 0.0420 Recon Loss: 0.0275 [03/29 05:22:22 TiTok]: Data (t): 0.0032, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000098 Step: 48600 Total Loss: 0.0443 Recon Loss: 0.0323 [03/29 05:23:19 TiTok]: Data (t): 0.0031, 63.22/s/gpu Batch (t): 0.5694 LR: 0.000098 Step: 48700 Total Loss: 0.0412 Recon Loss: 0.0273 [03/29 05:24:16 TiTok]: Data (t): 0.0032, 63.16/s/gpu Batch (t): 0.5699 LR: 0.000098 Step: 48800 Total Loss: 0.0442 Recon Loss: 0.0305 [03/29 05:25:13 TiTok]: Data (t): 0.0032, 59.91/s/gpu Batch (t): 0.6009 LR: 0.000098 Step: 48900 Total Loss: 0.0414 Recon Loss: 0.0279 [03/29 05:26:12 TiTok]: Data (t): 0.0032, 56.99/s/gpu Batch (t): 0.6317 LR: 0.000098 Step: 49000 Total Loss: 0.0418 Recon Loss: 0.0294 [03/29 05:27:11 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000098 Step: 49100 Total Loss: 0.0434 Recon Loss: 0.0290 [03/29 05:28:08 TiTok]: Data (t): 0.0031, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000098 Step: 49200 Total Loss: 0.0439 Recon Loss: 0.0301 [03/29 05:29:05 TiTok]: Data (t): 0.0032, 62.91/s/gpu Batch (t): 0.5722 LR: 0.000098 Step: 49300 Total Loss: 0.0431 Recon Loss: 0.0285 [03/29 05:30:03 TiTok]: Data (t): 0.0033, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000098 Step: 49400 Total Loss: 0.0415 Recon Loss: 0.0292 [03/29 05:31:00 TiTok]: Data (t): 0.0033, 63.16/s/gpu Batch (t): 0.5700 LR: 0.000098 Step: 49500 Total Loss: 0.0420 Recon Loss: 0.0282 [03/29 05:31:57 TiTok]: Data (t): 0.0033, 63.14/s/gpu Batch (t): 0.5702 LR: 0.000098 Step: 49600 Total Loss: 0.0430 Recon Loss: 0.0306 [03/29 05:32:54 TiTok]: Data (t): 0.0031, 63.18/s/gpu Batch (t): 0.5698 LR: 0.000098 Step: 49700 Total Loss: 0.0410 Recon Loss: 0.0295 [03/29 05:33:51 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000098 Step: 49800 Total Loss: 0.0411 Recon Loss: 0.0278 [03/29 05:34:48 TiTok]: Data (t): 0.0032, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000098 Step: 49900 Total Loss: 0.0441 Recon Loss: 0.0288 [03/29 05:35:46 TiTok]: Data (t): 0.0032, 57.00/s/gpu Batch (t): 0.6316 LR: 0.000098 Step: 50000 Total Loss: 0.0480 Recon Loss: 0.0319 [03/29 05:35:48 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-50000 [03/29 05:36:01 TiTok]: Reconstructing images... 
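The throughput figures are internally consistent with the config: 36 images per GPU over a ~0.57 s batch is ~63 images/s/gpu, exactly what the records report, and the logged total batch of 288 implies 8 GPUs. Note also the slower batches at every step divisible by 1000 (Batch (t) ≈ 0.63–0.70 s vs the ≈0.57 s baseline), which line up with log_grad_norm_every: 1000 and presumably reflect the extra grad-norm computation on those steps. Back-of-the-envelope, with the values above:

```python
# Consistency check on the logged throughput (values from the config
# dump and the records above).
per_gpu_batch = 36        # per_gpu_batch_size
total_batch   = 288       # logged total train batch size
batch_t       = 0.57      # typical Batch (t), seconds
max_steps     = 500_000   # max_train_steps

print(total_batch // per_gpu_batch)        # 8 GPUs
print(round(per_gpu_batch / batch_t, 1))   # ~63.2 images/s/gpu, as logged
print(round(total_batch / batch_t))        # ~505 images/s aggregate
print(round(max_steps * batch_t / 3600))   # ~79 hours for the full schedule
```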
[03/29 05:36:59 TiTok]: Data (t): 0.0033, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000098 Step: 50100 Total Loss: 0.0405 Recon Loss: 0.0291 [03/29 05:37:57 TiTok]: Data (t): 0.0032, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000098 Step: 50200 Total Loss: 0.0420 Recon Loss: 0.0285 [03/29 05:38:54 TiTok]: Data (t): 0.0032, 62.93/s/gpu Batch (t): 0.5721 LR: 0.000098 Step: 50300 Total Loss: 0.0443 Recon Loss: 0.0313 [03/29 05:39:51 TiTok]: Data (t): 0.0033, 63.20/s/gpu Batch (t): 0.5696 LR: 0.000098 Step: 50400 Total Loss: 0.0405 Recon Loss: 0.0270 [03/29 05:40:48 TiTok]: Data (t): 0.0032, 63.13/s/gpu Batch (t): 0.5703 LR: 0.000098 Step: 50500 Total Loss: 0.0407 Recon Loss: 0.0269 [03/29 05:41:45 TiTok]: Data (t): 0.0032, 59.21/s/gpu Batch (t): 0.6080 LR: 0.000098 Step: 50600 Total Loss: 0.0442 Recon Loss: 0.0312 [03/29 05:42:42 TiTok]: Data (t): 0.0032, 63.23/s/gpu Batch (t): 0.5694 LR: 0.000098 Step: 50700 Total Loss: 0.0410 Recon Loss: 0.0294 [03/29 05:43:39 TiTok]: Data (t): 0.0033, 63.12/s/gpu Batch (t): 0.5704 LR: 0.000098 Step: 50800 Total Loss: 0.0426 Recon Loss: 0.0285 [03/29 05:44:36 TiTok]: Data (t): 0.0031, 63.16/s/gpu Batch (t): 0.5700 LR: 0.000098 Step: 50900 Total Loss: 0.0454 Recon Loss: 0.0298 [03/29 05:45:33 TiTok]: Data (t): 0.0032, 57.43/s/gpu Batch (t): 0.6268 LR: 0.000098 Step: 51000 Total Loss: 0.0451 Recon Loss: 0.0312 [03/29 05:46:30 TiTok]: Data (t): 0.0032, 62.92/s/gpu Batch (t): 0.5722 LR: 0.000098 Step: 51100 Total Loss: 0.0447 Recon Loss: 0.0301 [03/29 05:47:28 TiTok]: Data (t): 0.0032, 63.30/s/gpu Batch (t): 0.5687 LR: 0.000098 Step: 51200 Total Loss: 0.0427 Recon Loss: 0.0294 [03/29 05:48:25 TiTok]: Data (t): 0.0032, 63.19/s/gpu Batch (t): 0.5697 LR: 0.000098 Step: 51300 Total Loss: 0.0423 Recon Loss: 0.0274 [03/29 05:49:23 TiTok]: Data (t): 0.0032, 63.30/s/gpu Batch (t): 0.5687 LR: 0.000098 Step: 51400 Total Loss: 0.0428 Recon Loss: 0.0303 [03/29 05:50:20 TiTok]: Data (t): 0.0032, 63.26/s/gpu Batch (t): 0.5691 LR: 0.000098 Step: 51500 Total Loss: 0.0440 Recon Loss: 0.0305 [03/29 05:51:17 TiTok]: Data (t): 0.0031, 63.19/s/gpu Batch (t): 0.5697 LR: 0.000098 Step: 51600 Total Loss: 0.0425 Recon Loss: 0.0288 [03/29 05:52:15 TiTok]: Data (t): 0.0032, 63.20/s/gpu Batch (t): 0.5696 LR: 0.000098 Step: 51700 Total Loss: 0.0414 Recon Loss: 0.0271 [03/29 05:53:12 TiTok]: Data (t): 0.0031, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000098 Step: 51800 Total Loss: 0.0413 Recon Loss: 0.0278 [03/29 05:54:09 TiTok]: Data (t): 0.0032, 63.13/s/gpu Batch (t): 0.5702 LR: 0.000098 Step: 51900 Total Loss: 0.0428 Recon Loss: 0.0294 [03/29 05:55:06 TiTok]: Data (t): 0.0032, 57.53/s/gpu Batch (t): 0.6258 LR: 0.000098 Step: 52000 Total Loss: 0.0410 Recon Loss: 0.0281 [03/29 05:56:03 TiTok]: Data (t): 0.0033, 63.27/s/gpu Batch (t): 0.5690 LR: 0.000098 Step: 52100 Total Loss: 0.0410 Recon Loss: 0.0285 [03/29 05:57:00 TiTok]: Data (t): 0.0032, 63.22/s/gpu Batch (t): 0.5694 LR: 0.000098 Step: 52200 Total Loss: 0.0401 Recon Loss: 0.0271 [03/29 05:57:58 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000098 Step: 52300 Total Loss: 0.0463 Recon Loss: 0.0308 [03/29 05:58:55 TiTok]: Data (t): 0.0033, 62.78/s/gpu Batch (t): 0.5734 LR: 0.000098 Step: 52400 Total Loss: 0.0428 Recon Loss: 0.0298 [03/29 05:59:52 TiTok]: Data (t): 0.0033, 62.80/s/gpu Batch (t): 0.5732 LR: 0.000098 Step: 52500 Total Loss: 0.0423 Recon Loss: 0.0283 [03/29 06:00:50 TiTok]: Data (t): 0.0033, 63.22/s/gpu Batch (t): 0.5694 LR: 0.000098 Step: 52600 Total Loss: 0.0435 Recon Loss: 0.0300 [03/29 06:01:47 TiTok]: Data (t): 0.0033, 
63.27/s/gpu Batch (t): 0.5690 LR: 0.000098 Step: 52700 Total Loss: 0.0430 Recon Loss: 0.0300 [03/29 06:02:44 TiTok]: Data (t): 0.0031, 63.31/s/gpu Batch (t): 0.5686 LR: 0.000098 Step: 52800 Total Loss: 0.0411 Recon Loss: 0.0286 [03/29 06:03:41 TiTok]: Data (t): 0.0032, 63.34/s/gpu Batch (t): 0.5684 LR: 0.000098 Step: 52900 Total Loss: 0.0382 Recon Loss: 0.0276 [03/29 06:04:38 TiTok]: Data (t): 0.0033, 56.73/s/gpu Batch (t): 0.6345 LR: 0.000098 Step: 53000 Total Loss: 0.0459 Recon Loss: 0.0316 [03/29 06:05:35 TiTok]: Data (t): 0.0032, 63.00/s/gpu Batch (t): 0.5715 LR: 0.000098 Step: 53100 Total Loss: 0.0412 Recon Loss: 0.0291 [03/29 06:06:32 TiTok]: Data (t): 0.0033, 63.29/s/gpu Batch (t): 0.5688 LR: 0.000098 Step: 53200 Total Loss: 0.0401 Recon Loss: 0.0288 [03/29 06:07:30 TiTok]: Data (t): 0.0032, 62.96/s/gpu Batch (t): 0.5718 LR: 0.000098 Step: 53300 Total Loss: 0.0442 Recon Loss: 0.0296 [03/29 06:08:27 TiTok]: Data (t): 0.0033, 63.33/s/gpu Batch (t): 0.5685 LR: 0.000098 Step: 53400 Total Loss: 0.0401 Recon Loss: 0.0275 [03/29 06:09:25 TiTok]: Data (t): 0.0032, 62.99/s/gpu Batch (t): 0.5715 LR: 0.000098 Step: 53500 Total Loss: 0.0437 Recon Loss: 0.0292 [03/29 06:10:23 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000098 Step: 53600 Total Loss: 0.0429 Recon Loss: 0.0314 [03/29 06:11:22 TiTok]: Data (t): 0.0033, 63.23/s/gpu Batch (t): 0.5694 LR: 0.000098 Step: 53700 Total Loss: 0.0442 Recon Loss: 0.0286 [03/29 06:12:19 TiTok]: Data (t): 0.0032, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000098 Step: 53800 Total Loss: 0.0444 Recon Loss: 0.0300 [03/29 06:13:16 TiTok]: Data (t): 0.0032, 63.07/s/gpu Batch (t): 0.5708 LR: 0.000098 Step: 53900 Total Loss: 0.0446 Recon Loss: 0.0307 [03/29 06:14:14 TiTok]: Data (t): 0.0033, 54.04/s/gpu Batch (t): 0.6661 LR: 0.000098 Step: 54000 Total Loss: 0.0465 Recon Loss: 0.0306 [03/29 06:15:11 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000098 Step: 54100 Total Loss: 0.0402 Recon Loss: 0.0282 [03/29 06:16:08 TiTok]: Data (t): 0.0032, 63.22/s/gpu Batch (t): 0.5695 LR: 0.000098 Step: 54200 Total Loss: 0.0461 Recon Loss: 0.0307 [03/29 06:17:05 TiTok]: Data (t): 0.0032, 63.00/s/gpu Batch (t): 0.5714 LR: 0.000098 Step: 54300 Total Loss: 0.0409 Recon Loss: 0.0271 [03/29 06:18:02 TiTok]: Data (t): 0.0033, 63.25/s/gpu Batch (t): 0.5691 LR: 0.000098 Step: 54400 Total Loss: 0.0406 Recon Loss: 0.0268 [03/29 06:19:00 TiTok]: Data (t): 0.0033, 62.93/s/gpu Batch (t): 0.5720 LR: 0.000098 Step: 54500 Total Loss: 0.0473 Recon Loss: 0.0324 [03/29 06:19:57 TiTok]: Data (t): 0.0032, 63.26/s/gpu Batch (t): 0.5691 LR: 0.000098 Step: 54600 Total Loss: 0.0409 Recon Loss: 0.0284 [03/29 06:20:54 TiTok]: Data (t): 0.0032, 63.20/s/gpu Batch (t): 0.5696 LR: 0.000098 Step: 54700 Total Loss: 0.0421 Recon Loss: 0.0305 [03/29 06:21:51 TiTok]: Data (t): 0.0032, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000098 Step: 54800 Total Loss: 0.0418 Recon Loss: 0.0292 [03/29 06:22:48 TiTok]: Data (t): 0.0032, 63.11/s/gpu Batch (t): 0.5705 LR: 0.000098 Step: 54900 Total Loss: 0.0431 Recon Loss: 0.0282 [03/29 06:23:45 TiTok]: Data (t): 0.0032, 56.44/s/gpu Batch (t): 0.6378 LR: 0.000098 Step: 55000 Total Loss: 0.0435 Recon Loss: 0.0293 [03/29 06:24:43 TiTok]: Data (t): 0.0033, 62.94/s/gpu Batch (t): 0.5720 LR: 0.000098 Step: 55100 Total Loss: 0.0397 Recon Loss: 0.0271 [03/29 06:25:40 TiTok]: Data (t): 0.0032, 63.02/s/gpu Batch (t): 0.5713 LR: 0.000098 Step: 55200 Total Loss: 0.0449 Recon Loss: 0.0288 [03/29 06:26:37 TiTok]: Data (t): 0.0032, 63.00/s/gpu Batch (t): 0.5714 LR: 0.000098 
Step: 55300 Total Loss: 0.0438 Recon Loss: 0.0293 [03/29 06:27:34 TiTok]: Data (t): 0.0032, 63.28/s/gpu Batch (t): 0.5689 LR: 0.000098 Step: 55400 Total Loss: 0.0414 Recon Loss: 0.0277 [03/29 06:28:31 TiTok]: Data (t): 0.0032, 62.89/s/gpu Batch (t): 0.5724 LR: 0.000098 Step: 55500 Total Loss: 0.0486 Recon Loss: 0.0325 [03/29 06:29:28 TiTok]: Data (t): 0.0032, 62.93/s/gpu Batch (t): 0.5720 LR: 0.000098 Step: 55600 Total Loss: 0.0394 Recon Loss: 0.0266 [03/29 06:30:26 TiTok]: Data (t): 0.0031, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000098 Step: 55700 Total Loss: 0.0427 Recon Loss: 0.0302 [03/29 06:31:23 TiTok]: Data (t): 0.0033, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000098 Step: 55800 Total Loss: 0.0431 Recon Loss: 0.0294 [03/29 06:32:21 TiTok]: Data (t): 0.0032, 59.87/s/gpu Batch (t): 0.6013 LR: 0.000098 Step: 55900 Total Loss: 0.0438 Recon Loss: 0.0295 [03/29 06:33:19 TiTok]: Data (t): 0.0032, 57.21/s/gpu Batch (t): 0.6293 LR: 0.000098 Step: 56000 Total Loss: 0.0413 Recon Loss: 0.0297 [03/29 06:34:16 TiTok]: Data (t): 0.0033, 63.35/s/gpu Batch (t): 0.5683 LR: 0.000098 Step: 56100 Total Loss: 0.0404 Recon Loss: 0.0275 [03/29 06:35:13 TiTok]: Data (t): 0.0033, 63.25/s/gpu Batch (t): 0.5692 LR: 0.000098 Step: 56200 Total Loss: 0.0459 Recon Loss: 0.0316 [03/29 06:36:10 TiTok]: Data (t): 0.0032, 63.03/s/gpu Batch (t): 0.5712 LR: 0.000098 Step: 56300 Total Loss: 0.0446 Recon Loss: 0.0297 [03/29 06:37:07 TiTok]: Data (t): 0.0032, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000098 Step: 56400 Total Loss: 0.0400 Recon Loss: 0.0272 [03/29 06:38:05 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000098 Step: 56500 Total Loss: 0.0412 Recon Loss: 0.0273 [03/29 06:39:02 TiTok]: Data (t): 0.0032, 63.07/s/gpu Batch (t): 0.5708 LR: 0.000098 Step: 56600 Total Loss: 0.0421 Recon Loss: 0.0287 [03/29 06:39:59 TiTok]: Data (t): 0.0033, 63.01/s/gpu Batch (t): 0.5713 LR: 0.000098 Step: 56700 Total Loss: 0.0475 Recon Loss: 0.0307 [03/29 06:40:57 TiTok]: Data (t): 0.0032, 63.13/s/gpu Batch (t): 0.5703 LR: 0.000098 Step: 56800 Total Loss: 0.0457 Recon Loss: 0.0307 [03/29 06:41:54 TiTok]: Data (t): 0.0032, 63.01/s/gpu Batch (t): 0.5713 LR: 0.000098 Step: 56900 Total Loss: 0.0427 Recon Loss: 0.0298 [03/29 06:42:51 TiTok]: Data (t): 0.0033, 56.95/s/gpu Batch (t): 0.6321 LR: 0.000098 Step: 57000 Total Loss: 0.0436 Recon Loss: 0.0288 [03/29 06:43:49 TiTok]: Data (t): 0.0032, 62.92/s/gpu Batch (t): 0.5722 LR: 0.000098 Step: 57100 Total Loss: 0.0417 Recon Loss: 0.0303 [03/29 06:44:46 TiTok]: Data (t): 0.0032, 63.07/s/gpu Batch (t): 0.5708 LR: 0.000098 Step: 57200 Total Loss: 0.0443 Recon Loss: 0.0287 [03/29 06:45:44 TiTok]: Data (t): 0.0034, 63.02/s/gpu Batch (t): 0.5712 LR: 0.000098 Step: 57300 Total Loss: 0.0428 Recon Loss: 0.0305 [03/29 06:46:41 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000098 Step: 57400 Total Loss: 0.0402 Recon Loss: 0.0292 [03/29 06:47:38 TiTok]: Data (t): 0.0032, 63.19/s/gpu Batch (t): 0.5697 LR: 0.000098 Step: 57500 Total Loss: 0.0423 Recon Loss: 0.0284 [03/29 06:48:35 TiTok]: Data (t): 0.0032, 63.16/s/gpu Batch (t): 0.5700 LR: 0.000098 Step: 57600 Total Loss: 0.0440 Recon Loss: 0.0299 [03/29 06:49:32 TiTok]: Data (t): 0.0032, 63.10/s/gpu Batch (t): 0.5705 LR: 0.000098 Step: 57700 Total Loss: 0.0383 Recon Loss: 0.0271 [03/29 06:50:29 TiTok]: Data (t): 0.0032, 63.01/s/gpu Batch (t): 0.5714 LR: 0.000097 Step: 57800 Total Loss: 0.0434 Recon Loss: 0.0286 [03/29 06:51:28 TiTok]: Data (t): 0.0031, 63.16/s/gpu Batch (t): 0.5700 LR: 0.000097 Step: 57900 Total Loss: 0.0403 Recon Loss: 
0.0273 [03/29 06:52:25 TiTok]: Data (t): 0.0032, 57.28/s/gpu Batch (t): 0.6285 LR: 0.000097 Step: 58000 Total Loss: 0.0407 Recon Loss: 0.0276 [03/29 06:53:23 TiTok]: Data (t): 0.0033, 63.14/s/gpu Batch (t): 0.5702 LR: 0.000097 Step: 58100 Total Loss: 0.0423 Recon Loss: 0.0278 [03/29 06:54:21 TiTok]: Data (t): 0.0033, 62.09/s/gpu Batch (t): 0.5798 LR: 0.000097 Step: 58200 Total Loss: 0.0429 Recon Loss: 0.0298 [03/29 06:55:18 TiTok]: Data (t): 0.0032, 63.28/s/gpu Batch (t): 0.5689 LR: 0.000097 Step: 58300 Total Loss: 0.0402 Recon Loss: 0.0270 [03/29 06:56:15 TiTok]: Data (t): 0.0033, 63.19/s/gpu Batch (t): 0.5697 LR: 0.000097 Step: 58400 Total Loss: 0.0474 Recon Loss: 0.0314 [03/29 06:57:12 TiTok]: Data (t): 0.0034, 62.89/s/gpu Batch (t): 0.5724 LR: 0.000097 Step: 58500 Total Loss: 0.0423 Recon Loss: 0.0285 [03/29 06:58:10 TiTok]: Data (t): 0.0033, 61.78/s/gpu Batch (t): 0.5827 LR: 0.000097 Step: 58600 Total Loss: 0.0417 Recon Loss: 0.0294 [03/29 06:59:07 TiTok]: Data (t): 0.0033, 63.29/s/gpu Batch (t): 0.5688 LR: 0.000097 Step: 58700 Total Loss: 0.0440 Recon Loss: 0.0304 [03/29 07:00:04 TiTok]: Data (t): 0.0033, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000097 Step: 58800 Total Loss: 0.0402 Recon Loss: 0.0276 [03/29 07:01:01 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000097 Step: 58900 Total Loss: 0.0417 Recon Loss: 0.0277 [03/29 07:01:58 TiTok]: Data (t): 0.0032, 56.95/s/gpu Batch (t): 0.6321 LR: 0.000097 Step: 59000 Total Loss: 0.0433 Recon Loss: 0.0292 [03/29 07:02:55 TiTok]: Data (t): 0.0032, 63.30/s/gpu Batch (t): 0.5687 LR: 0.000097 Step: 59100 Total Loss: 0.0412 Recon Loss: 0.0275 [03/29 07:03:53 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000097 Step: 59200 Total Loss: 0.0434 Recon Loss: 0.0291 [03/29 07:04:50 TiTok]: Data (t): 0.0032, 63.27/s/gpu Batch (t): 0.5690 LR: 0.000097 Step: 59300 Total Loss: 0.0421 Recon Loss: 0.0284 [03/29 07:05:47 TiTok]: Data (t): 0.0032, 63.19/s/gpu Batch (t): 0.5697 LR: 0.000097 Step: 59400 Total Loss: 0.0386 Recon Loss: 0.0285 [03/29 07:06:44 TiTok]: Data (t): 0.0032, 63.21/s/gpu Batch (t): 0.5695 LR: 0.000097 Step: 59500 Total Loss: 0.0432 Recon Loss: 0.0306 [03/29 07:07:41 TiTok]: Data (t): 0.0031, 63.14/s/gpu Batch (t): 0.5701 LR: 0.000097 Step: 59600 Total Loss: 0.0419 Recon Loss: 0.0298 [03/29 07:08:38 TiTok]: Data (t): 0.0033, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000097 Step: 59700 Total Loss: 0.0432 Recon Loss: 0.0301 [03/29 07:09:35 TiTok]: Data (t): 0.0033, 63.32/s/gpu Batch (t): 0.5685 LR: 0.000097 Step: 59800 Total Loss: 0.0467 Recon Loss: 0.0297 [03/29 07:10:32 TiTok]: Data (t): 0.0032, 63.13/s/gpu Batch (t): 0.5703 LR: 0.000097 Step: 59900 Total Loss: 0.0406 Recon Loss: 0.0286 [03/29 07:11:30 TiTok]: Data (t): 0.0033, 56.62/s/gpu Batch (t): 0.6358 LR: 0.000097 Step: 60000 Total Loss: 0.0405 Recon Loss: 0.0280 [03/29 07:11:32 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-60000 [03/29 07:11:45 TiTok]: Reconstructing images... 
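The displayed LR creeps down as expected — 0.000099 around step 33k, 0.000098 from step ~45.9k, 0.000097 from step ~57.8k — consistent with the configured schedule: 5,000 warmup steps to 1e-4, then cosine decay to end_lr 1e-5 at 500,000 steps. A sketch of that rule (the trainer's exact implementation may differ slightly, e.g. in off-by-one handling):

```python
import math

def cosine_lr(step, base_lr=1e-4, end_lr=1e-5,
              warmup_steps=5_000, max_steps=500_000):
    """Linear warmup to base_lr, then cosine decay to end_lr."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return end_lr + 0.5 * (base_lr - end_lr) * (1 + math.cos(math.pi * progress))

print(f"{cosine_lr(33_000):.6f}")   # 0.000099, matches the log near step 33k
print(f"{cosine_lr(58_000):.6f}")   # 0.000097, matches the log near step 58k
print(f"{cosine_lr(500_000):.6f}")  # 0.000010, the configured end_lr
```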
[03/29 07:12:43 TiTok]: Data (t): 0.0033, 63.02/s/gpu Batch (t): 0.5713 LR: 0.000097 Step: 60100 Total Loss: 0.0431 Recon Loss: 0.0287 [03/29 07:13:40 TiTok]: Data (t): 0.0032, 63.22/s/gpu Batch (t): 0.5695 LR: 0.000097 Step: 60200 Total Loss: 0.0436 Recon Loss: 0.0282 [03/29 07:14:38 TiTok]: Data (t): 0.0033, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000097 Step: 60300 Total Loss: 0.0424 Recon Loss: 0.0294 [03/29 07:15:35 TiTok]: Data (t): 0.0031, 63.25/s/gpu Batch (t): 0.5692 LR: 0.000097 Step: 60400 Total Loss: 0.0443 Recon Loss: 0.0305 [03/29 07:16:33 TiTok]: Data (t): 0.0032, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000097 Step: 60500 Total Loss: 0.0465 Recon Loss: 0.0306 [03/29 07:17:30 TiTok]: Data (t): 0.0031, 63.25/s/gpu Batch (t): 0.5691 LR: 0.000097 Step: 60600 Total Loss: 0.0433 Recon Loss: 0.0291 [03/29 07:18:28 TiTok]: Data (t): 0.0033, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000097 Step: 60700 Total Loss: 0.0407 Recon Loss: 0.0289 [03/29 07:19:25 TiTok]: Data (t): 0.0032, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000097 Step: 60800 Total Loss: 0.0434 Recon Loss: 0.0279 [03/29 07:20:22 TiTok]: Data (t): 0.0032, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000097 Step: 60900 Total Loss: 0.0430 Recon Loss: 0.0287 [03/29 07:21:19 TiTok]: Data (t): 0.0032, 52.53/s/gpu Batch (t): 0.6853 LR: 0.000097 Step: 61000 Total Loss: 0.0422 Recon Loss: 0.0292 [03/29 07:22:16 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000097 Step: 61100 Total Loss: 0.0417 Recon Loss: 0.0271 [03/29 07:23:13 TiTok]: Data (t): 0.0032, 63.21/s/gpu Batch (t): 0.5695 LR: 0.000097 Step: 61200 Total Loss: 0.0437 Recon Loss: 0.0302 [03/29 07:24:11 TiTok]: Data (t): 0.0032, 63.05/s/gpu Batch (t): 0.5710 LR: 0.000097 Step: 61300 Total Loss: 0.0422 Recon Loss: 0.0286 [03/29 07:25:08 TiTok]: Data (t): 0.0033, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000097 Step: 61400 Total Loss: 0.0419 Recon Loss: 0.0282 [03/29 07:26:05 TiTok]: Data (t): 0.0032, 63.07/s/gpu Batch (t): 0.5708 LR: 0.000097 Step: 61500 Total Loss: 0.0422 Recon Loss: 0.0285 [03/29 07:27:02 TiTok]: Data (t): 0.0031, 63.12/s/gpu Batch (t): 0.5704 LR: 0.000097 Step: 61600 Total Loss: 0.0438 Recon Loss: 0.0292 [03/29 07:27:59 TiTok]: Data (t): 0.0032, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000097 Step: 61700 Total Loss: 0.0423 Recon Loss: 0.0285 [03/29 07:28:56 TiTok]: Data (t): 0.0032, 63.24/s/gpu Batch (t): 0.5693 LR: 0.000097 Step: 61800 Total Loss: 0.0447 Recon Loss: 0.0290 [03/29 07:29:53 TiTok]: Data (t): 0.0032, 63.05/s/gpu Batch (t): 0.5710 LR: 0.000097 Step: 61900 Total Loss: 0.0409 Recon Loss: 0.0272 [03/29 07:30:51 TiTok]: Data (t): 0.0032, 57.21/s/gpu Batch (t): 0.6292 LR: 0.000097 Step: 62000 Total Loss: 0.0406 Recon Loss: 0.0312 [03/29 07:31:48 TiTok]: Data (t): 0.0033, 63.25/s/gpu Batch (t): 0.5691 LR: 0.000097 Step: 62100 Total Loss: 0.0366 Recon Loss: 0.0258 [03/29 07:32:45 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000097 Step: 62200 Total Loss: 0.0425 Recon Loss: 0.0286 [03/29 07:33:42 TiTok]: Data (t): 0.0032, 62.99/s/gpu Batch (t): 0.5715 LR: 0.000097 Step: 62300 Total Loss: 0.0439 Recon Loss: 0.0298 [03/29 07:34:40 TiTok]: Data (t): 0.0032, 63.13/s/gpu Batch (t): 0.5703 LR: 0.000097 Step: 62400 Total Loss: 0.0428 Recon Loss: 0.0299 [03/29 07:35:37 TiTok]: Data (t): 0.0032, 61.79/s/gpu Batch (t): 0.5826 LR: 0.000097 Step: 62500 Total Loss: 0.0462 Recon Loss: 0.0304 [03/29 07:36:35 TiTok]: Data (t): 0.0032, 62.90/s/gpu Batch (t): 0.5723 LR: 0.000097 Step: 62600 Total Loss: 0.0412 Recon Loss: 0.0283 [03/29 07:37:33 TiTok]: Data (t): 0.0034, 
62.83/s/gpu Batch (t): 0.5729 LR: 0.000097 Step: 62700 Total Loss: 0.0437 Recon Loss: 0.0287 [03/29 07:38:31 TiTok]: Data (t): 0.0035, 62.99/s/gpu Batch (t): 0.5715 LR: 0.000097 Step: 62800 Total Loss: 0.0463 Recon Loss: 0.0308 [03/29 07:39:28 TiTok]: Data (t): 0.0032, 63.04/s/gpu Batch (t): 0.5710 LR: 0.000097 Step: 62900 Total Loss: 0.0414 Recon Loss: 0.0290 [03/29 07:40:26 TiTok]: Data (t): 0.0033, 57.16/s/gpu Batch (t): 0.6298 LR: 0.000097 Step: 63000 Total Loss: 0.0442 Recon Loss: 0.0291 [03/29 07:41:23 TiTok]: Data (t): 0.0032, 58.76/s/gpu Batch (t): 0.6127 LR: 0.000097 Step: 63100 Total Loss: 0.0407 Recon Loss: 0.0277 [03/29 07:42:20 TiTok]: Data (t): 0.0032, 63.11/s/gpu Batch (t): 0.5705 LR: 0.000097 Step: 63200 Total Loss: 0.0392 Recon Loss: 0.0283 [03/29 07:43:17 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000097 Step: 63300 Total Loss: 0.0376 Recon Loss: 0.0257 [03/29 07:44:15 TiTok]: Data (t): 0.0031, 63.21/s/gpu Batch (t): 0.5695 LR: 0.000097 Step: 63400 Total Loss: 0.0413 Recon Loss: 0.0286 [03/29 07:45:12 TiTok]: Data (t): 0.0032, 63.10/s/gpu Batch (t): 0.5706 LR: 0.000097 Step: 63500 Total Loss: 0.0420 Recon Loss: 0.0280 [03/29 07:46:09 TiTok]: Data (t): 0.0032, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000097 Step: 63600 Total Loss: 0.0432 Recon Loss: 0.0290 [03/29 07:47:06 TiTok]: Data (t): 0.0031, 63.18/s/gpu Batch (t): 0.5698 LR: 0.000097 Step: 63700 Total Loss: 0.0399 Recon Loss: 0.0286 [03/29 07:48:03 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000097 Step: 63800 Total Loss: 0.0462 Recon Loss: 0.0295 [03/29 07:49:01 TiTok]: Data (t): 0.0033, 63.10/s/gpu Batch (t): 0.5705 LR: 0.000097 Step: 63900 Total Loss: 0.0464 Recon Loss: 0.0317 [03/29 07:49:58 TiTok]: Data (t): 0.0032, 57.19/s/gpu Batch (t): 0.6294 LR: 0.000097 Step: 64000 Total Loss: 0.0395 Recon Loss: 0.0269 [03/29 07:50:55 TiTok]: Data (t): 0.0032, 63.22/s/gpu Batch (t): 0.5694 LR: 0.000097 Step: 64100 Total Loss: 0.0427 Recon Loss: 0.0294 [03/29 07:51:52 TiTok]: Data (t): 0.0035, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000097 Step: 64200 Total Loss: 0.0435 Recon Loss: 0.0313 [03/29 07:52:50 TiTok]: Data (t): 0.0034, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000097 Step: 64300 Total Loss: 0.0422 Recon Loss: 0.0310 [03/29 07:53:47 TiTok]: Data (t): 0.0033, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000097 Step: 64400 Total Loss: 0.0397 Recon Loss: 0.0284 [03/29 07:54:44 TiTok]: Data (t): 0.0033, 62.93/s/gpu Batch (t): 0.5720 LR: 0.000097 Step: 64500 Total Loss: 0.0413 Recon Loss: 0.0288 [03/29 07:55:41 TiTok]: Data (t): 0.0033, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000097 Step: 64600 Total Loss: 0.0416 Recon Loss: 0.0288 [03/29 07:56:39 TiTok]: Data (t): 0.0032, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000097 Step: 64700 Total Loss: 0.0415 Recon Loss: 0.0282 [03/29 07:57:36 TiTok]: Data (t): 0.0032, 62.68/s/gpu Batch (t): 0.5744 LR: 0.000097 Step: 64800 Total Loss: 0.0441 Recon Loss: 0.0301 [03/29 07:58:34 TiTok]: Data (t): 0.0033, 62.69/s/gpu Batch (t): 0.5742 LR: 0.000097 Step: 64900 Total Loss: 0.0403 Recon Loss: 0.0288 [03/29 07:59:34 TiTok]: Data (t): 0.0032, 55.83/s/gpu Batch (t): 0.6448 LR: 0.000097 Step: 65000 Total Loss: 0.0410 Recon Loss: 0.0281 [03/29 08:00:31 TiTok]: Data (t): 0.0033, 62.77/s/gpu Batch (t): 0.5735 LR: 0.000097 Step: 65100 Total Loss: 0.0411 Recon Loss: 0.0283 [03/29 08:01:28 TiTok]: Data (t): 0.0033, 62.19/s/gpu Batch (t): 0.5788 LR: 0.000097 Step: 65200 Total Loss: 0.0415 Recon Loss: 0.0287 [03/29 08:02:26 TiTok]: Data (t): 0.0032, 63.01/s/gpu Batch (t): 0.5713 LR: 0.000097 
Step: 65300 Total Loss: 0.0412 Recon Loss: 0.0312 [03/29 08:03:23 TiTok]: Data (t): 0.0034, 62.90/s/gpu Batch (t): 0.5723 LR: 0.000097 Step: 65400 Total Loss: 0.0409 Recon Loss: 0.0278 [03/29 08:04:20 TiTok]: Data (t): 0.0033, 62.93/s/gpu Batch (t): 0.5720 LR: 0.000097 Step: 65500 Total Loss: 0.0423 Recon Loss: 0.0290 [03/29 08:05:18 TiTok]: Data (t): 0.0032, 62.91/s/gpu Batch (t): 0.5722 LR: 0.000097 Step: 65600 Total Loss: 0.0443 Recon Loss: 0.0293 [03/29 08:06:15 TiTok]: Data (t): 0.0032, 63.22/s/gpu Batch (t): 0.5694 LR: 0.000097 Step: 65700 Total Loss: 0.0398 Recon Loss: 0.0280 [03/29 08:07:12 TiTok]: Data (t): 0.0033, 63.04/s/gpu Batch (t): 0.5710 LR: 0.000097 Step: 65800 Total Loss: 0.0465 Recon Loss: 0.0300 [03/29 08:08:09 TiTok]: Data (t): 0.0033, 62.92/s/gpu Batch (t): 0.5722 LR: 0.000097 Step: 65900 Total Loss: 0.0425 Recon Loss: 0.0294 [03/29 08:09:06 TiTok]: Data (t): 0.0032, 56.94/s/gpu Batch (t): 0.6323 LR: 0.000097 Step: 66000 Total Loss: 0.0441 Recon Loss: 0.0295 [03/29 08:10:03 TiTok]: Data (t): 0.0032, 63.21/s/gpu Batch (t): 0.5696 LR: 0.000097 Step: 66100 Total Loss: 0.0407 Recon Loss: 0.0275 [03/29 08:11:00 TiTok]: Data (t): 0.0033, 63.03/s/gpu Batch (t): 0.5711 LR: 0.000097 Step: 66200 Total Loss: 0.0424 Recon Loss: 0.0276 [03/29 08:11:57 TiTok]: Data (t): 0.0032, 63.29/s/gpu Batch (t): 0.5688 LR: 0.000097 Step: 66300 Total Loss: 0.0423 Recon Loss: 0.0290 [03/29 08:12:54 TiTok]: Data (t): 0.0032, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000097 Step: 66400 Total Loss: 0.0423 Recon Loss: 0.0283 [03/29 08:13:51 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000097 Step: 66500 Total Loss: 0.0414 Recon Loss: 0.0265 [03/29 08:14:48 TiTok]: Data (t): 0.0032, 63.01/s/gpu Batch (t): 0.5713 LR: 0.000097 Step: 66600 Total Loss: 0.0412 Recon Loss: 0.0276 [03/29 08:15:46 TiTok]: Data (t): 0.0032, 63.12/s/gpu Batch (t): 0.5704 LR: 0.000097 Step: 66700 Total Loss: 0.0412 Recon Loss: 0.0287 [03/29 08:16:44 TiTok]: Data (t): 0.0032, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000097 Step: 66800 Total Loss: 0.0432 Recon Loss: 0.0306 [03/29 08:17:41 TiTok]: Data (t): 0.0032, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000097 Step: 66900 Total Loss: 0.0381 Recon Loss: 0.0275 [03/29 08:18:38 TiTok]: Data (t): 0.0032, 56.16/s/gpu Batch (t): 0.6410 LR: 0.000097 Step: 67000 Total Loss: 0.0414 Recon Loss: 0.0272 [03/29 08:19:35 TiTok]: Data (t): 0.0031, 63.14/s/gpu Batch (t): 0.5701 LR: 0.000097 Step: 67100 Total Loss: 0.0503 Recon Loss: 0.0340 [03/29 08:20:33 TiTok]: Data (t): 0.0033, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000097 Step: 67200 Total Loss: 0.0460 Recon Loss: 0.0298 [03/29 08:21:32 TiTok]: Data (t): 0.0032, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000097 Step: 67300 Total Loss: 0.0396 Recon Loss: 0.0260 [03/29 08:22:29 TiTok]: Data (t): 0.0032, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000097 Step: 67400 Total Loss: 0.0408 Recon Loss: 0.0292 [03/29 08:23:26 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000097 Step: 67500 Total Loss: 0.0424 Recon Loss: 0.0296 [03/29 08:24:23 TiTok]: Data (t): 0.0032, 63.21/s/gpu Batch (t): 0.5695 LR: 0.000096 Step: 67600 Total Loss: 0.0441 Recon Loss: 0.0304 [03/29 08:25:20 TiTok]: Data (t): 0.0033, 63.02/s/gpu Batch (t): 0.5712 LR: 0.000096 Step: 67700 Total Loss: 0.0433 Recon Loss: 0.0293 [03/29 08:26:18 TiTok]: Data (t): 0.0031, 63.05/s/gpu Batch (t): 0.5710 LR: 0.000096 Step: 67800 Total Loss: 0.0401 Recon Loss: 0.0282 [03/29 08:27:15 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000096 Step: 67900 Total Loss: 0.0416 Recon Loss: 
0.0283 [03/29 08:28:12 TiTok]: Data (t): 0.0031, 57.12/s/gpu Batch (t): 0.6302 LR: 0.000096 Step: 68000 Total Loss: 0.0401 Recon Loss: 0.0274 [03/29 08:29:09 TiTok]: Data (t): 0.0031, 63.03/s/gpu Batch (t): 0.5711 LR: 0.000096 Step: 68100 Total Loss: 0.0389 Recon Loss: 0.0267 [03/29 08:30:07 TiTok]: Data (t): 0.0031, 63.05/s/gpu Batch (t): 0.5710 LR: 0.000096 Step: 68200 Total Loss: 0.0417 Recon Loss: 0.0287 [03/29 08:31:04 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000096 Step: 68300 Total Loss: 0.0411 Recon Loss: 0.0285 [03/29 08:32:01 TiTok]: Data (t): 0.0032, 63.10/s/gpu Batch (t): 0.5705 LR: 0.000096 Step: 68400 Total Loss: 0.0444 Recon Loss: 0.0305 [03/29 08:32:58 TiTok]: Data (t): 0.0032, 63.23/s/gpu Batch (t): 0.5693 LR: 0.000096 Step: 68500 Total Loss: 0.0434 Recon Loss: 0.0288 [03/29 08:33:55 TiTok]: Data (t): 0.0032, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000096 Step: 68600 Total Loss: 0.0450 Recon Loss: 0.0321 [03/29 08:34:52 TiTok]: Data (t): 0.0033, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000096 Step: 68700 Total Loss: 0.0437 Recon Loss: 0.0296 [03/29 08:35:50 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000096 Step: 68800 Total Loss: 0.0405 Recon Loss: 0.0272 [03/29 08:36:47 TiTok]: Data (t): 0.0032, 63.21/s/gpu Batch (t): 0.5696 LR: 0.000096 Step: 68900 Total Loss: 0.0402 Recon Loss: 0.0274 [03/29 08:37:44 TiTok]: Data (t): 0.0032, 57.21/s/gpu Batch (t): 0.6293 LR: 0.000096 Step: 69000 Total Loss: 0.0410 Recon Loss: 0.0290 [03/29 08:38:41 TiTok]: Data (t): 0.0032, 61.97/s/gpu Batch (t): 0.5809 LR: 0.000096 Step: 69100 Total Loss: 0.0398 Recon Loss: 0.0255 [03/29 08:39:38 TiTok]: Data (t): 0.0032, 63.16/s/gpu Batch (t): 0.5700 LR: 0.000096 Step: 69200 Total Loss: 0.0425 Recon Loss: 0.0299 [03/29 08:40:35 TiTok]: Data (t): 0.0031, 63.25/s/gpu Batch (t): 0.5692 LR: 0.000096 Step: 69300 Total Loss: 0.0409 Recon Loss: 0.0278 [03/29 08:41:33 TiTok]: Data (t): 0.0032, 63.16/s/gpu Batch (t): 0.5700 LR: 0.000096 Step: 69400 Total Loss: 0.0453 Recon Loss: 0.0305 [03/29 08:42:30 TiTok]: Data (t): 0.0033, 62.86/s/gpu Batch (t): 0.5727 LR: 0.000096 Step: 69500 Total Loss: 0.0404 Recon Loss: 0.0298 [03/29 08:43:29 TiTok]: Data (t): 0.0035, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000096 Step: 69600 Total Loss: 0.0418 Recon Loss: 0.0287 [03/29 08:44:26 TiTok]: Data (t): 0.0033, 62.90/s/gpu Batch (t): 0.5723 LR: 0.000096 Step: 69700 Total Loss: 0.0417 Recon Loss: 0.0272 [03/29 08:45:24 TiTok]: Data (t): 0.0032, 62.80/s/gpu Batch (t): 0.5733 LR: 0.000096 Step: 69800 Total Loss: 0.0426 Recon Loss: 0.0292 [03/29 08:46:21 TiTok]: Data (t): 0.0032, 63.22/s/gpu Batch (t): 0.5694 LR: 0.000096 Step: 69900 Total Loss: 0.0411 Recon Loss: 0.0293 [03/29 08:47:18 TiTok]: Data (t): 0.0031, 57.08/s/gpu Batch (t): 0.6307 LR: 0.000096 Step: 70000 Total Loss: 0.0423 Recon Loss: 0.0297 [03/29 08:47:20 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-70000 [03/29 08:47:34 TiTok]: Reconstructing images... 
[03/29 08:48:32 TiTok]: Data (t): 0.0032, 63.25/s/gpu Batch (t): 0.5692 LR: 0.000096 Step: 70100 Total Loss: 0.0481 Recon Loss: 0.0330 [03/29 08:49:29 TiTok]: Data (t): 0.0032, 63.18/s/gpu Batch (t): 0.5698 LR: 0.000096 Step: 70200 Total Loss: 0.0385 Recon Loss: 0.0270 [03/29 08:50:26 TiTok]: Data (t): 0.0031, 63.34/s/gpu Batch (t): 0.5683 LR: 0.000096 Step: 70300 Total Loss: 0.0419 Recon Loss: 0.0302 [03/29 08:51:23 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5700 LR: 0.000096 Step: 70400 Total Loss: 0.0425 Recon Loss: 0.0304 [03/29 08:52:20 TiTok]: Data (t): 0.0033, 63.22/s/gpu Batch (t): 0.5694 LR: 0.000096 Step: 70500 Total Loss: 0.0419 Recon Loss: 0.0285 [03/29 08:53:17 TiTok]: Data (t): 0.0031, 63.24/s/gpu Batch (t): 0.5692 LR: 0.000096 Step: 70600 Total Loss: 0.0361 Recon Loss: 0.0260 [03/29 08:54:14 TiTok]: Data (t): 0.0033, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000096 Step: 70700 Total Loss: 0.0414 Recon Loss: 0.0285 [03/29 08:55:11 TiTok]: Data (t): 0.0032, 63.23/s/gpu Batch (t): 0.5694 LR: 0.000096 Step: 70800 Total Loss: 0.0453 Recon Loss: 0.0314 [03/29 08:56:08 TiTok]: Data (t): 0.0032, 63.33/s/gpu Batch (t): 0.5684 LR: 0.000096 Step: 70900 Total Loss: 0.0430 Recon Loss: 0.0307 [03/29 08:57:05 TiTok]: Data (t): 0.0032, 52.69/s/gpu Batch (t): 0.6832 LR: 0.000096 Step: 71000 Total Loss: 0.0420 Recon Loss: 0.0284 [03/29 08:58:02 TiTok]: Data (t): 0.0032, 63.34/s/gpu Batch (t): 0.5683 LR: 0.000096 Step: 71100 Total Loss: 0.0406 Recon Loss: 0.0289 [03/29 08:58:59 TiTok]: Data (t): 0.0032, 63.24/s/gpu Batch (t): 0.5693 LR: 0.000096 Step: 71200 Total Loss: 0.0404 Recon Loss: 0.0293 [03/29 08:59:58 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000096 Step: 71300 Total Loss: 0.0466 Recon Loss: 0.0323 [03/29 09:00:56 TiTok]: Data (t): 0.0032, 63.01/s/gpu Batch (t): 0.5714 LR: 0.000096 Step: 71400 Total Loss: 0.0410 Recon Loss: 0.0278 [03/29 09:01:53 TiTok]: Data (t): 0.0033, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000096 Step: 71500 Total Loss: 0.0419 Recon Loss: 0.0298 [03/29 09:02:51 TiTok]: Data (t): 0.0033, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000096 Step: 71600 Total Loss: 0.0418 Recon Loss: 0.0302 [03/29 09:03:48 TiTok]: Data (t): 0.0032, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000096 Step: 71700 Total Loss: 0.0392 Recon Loss: 0.0280 [03/29 09:04:46 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000096 Step: 71800 Total Loss: 0.0418 Recon Loss: 0.0298 [03/29 09:05:44 TiTok]: Data (t): 0.0033, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000096 Step: 71900 Total Loss: 0.0434 Recon Loss: 0.0280 [03/29 09:06:41 TiTok]: Data (t): 0.0066, 55.99/s/gpu Batch (t): 0.6430 LR: 0.000096 Step: 72000 Total Loss: 0.0404 Recon Loss: 0.0278 [03/29 09:07:39 TiTok]: Data (t): 0.0033, 63.12/s/gpu Batch (t): 0.5704 LR: 0.000096 Step: 72100 Total Loss: 0.0424 Recon Loss: 0.0293 [03/29 09:08:36 TiTok]: Data (t): 0.0032, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000096 Step: 72200 Total Loss: 0.0435 Recon Loss: 0.0291 [03/29 09:09:33 TiTok]: Data (t): 0.0032, 63.10/s/gpu Batch (t): 0.5705 LR: 0.000096 Step: 72300 Total Loss: 0.0419 Recon Loss: 0.0299 [03/29 09:10:30 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000096 Step: 72400 Total Loss: 0.0419 Recon Loss: 0.0294 [03/29 09:11:27 TiTok]: Data (t): 0.0031, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000096 Step: 72500 Total Loss: 0.0387 Recon Loss: 0.0309 [03/29 09:12:24 TiTok]: Data (t): 0.0032, 62.95/s/gpu Batch (t): 0.5719 LR: 0.000096 Step: 72600 Total Loss: 0.0434 Recon Loss: 0.0292 [03/29 09:13:22 TiTok]: Data (t): 0.0032, 
62.86/s/gpu Batch (t): 0.5727 LR: 0.000096 Step: 72700 Total Loss: 0.0388 Recon Loss: 0.0260 [03/29 09:14:19 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000096 Step: 72800 Total Loss: 0.0483 Recon Loss: 0.0319 [03/29 09:15:16 TiTok]: Data (t): 0.0031, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000096 Step: 72900 Total Loss: 0.0454 Recon Loss: 0.0306 [03/29 09:16:13 TiTok]: Data (t): 0.0031, 56.64/s/gpu Batch (t): 0.6356 LR: 0.000096 Step: 73000 Total Loss: 0.0437 Recon Loss: 0.0280 [03/29 09:17:10 TiTok]: Data (t): 0.0032, 62.99/s/gpu Batch (t): 0.5716 LR: 0.000096 Step: 73100 Total Loss: 0.0442 Recon Loss: 0.0302 [03/29 09:18:07 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000096 Step: 73200 Total Loss: 0.0422 Recon Loss: 0.0297 [03/29 09:19:05 TiTok]: Data (t): 0.0032, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000096 Step: 73300 Total Loss: 0.0417 Recon Loss: 0.0293 [03/29 09:20:02 TiTok]: Data (t): 0.0032, 63.03/s/gpu Batch (t): 0.5711 LR: 0.000096 Step: 73400 Total Loss: 0.0402 Recon Loss: 0.0283 [03/29 09:20:59 TiTok]: Data (t): 0.0033, 62.98/s/gpu Batch (t): 0.5716 LR: 0.000096 Step: 73500 Total Loss: 0.0406 Recon Loss: 0.0293 [03/29 09:21:56 TiTok]: Data (t): 0.0031, 63.14/s/gpu Batch (t): 0.5702 LR: 0.000096 Step: 73600 Total Loss: 0.0409 Recon Loss: 0.0276 [03/29 09:22:54 TiTok]: Data (t): 0.0033, 63.00/s/gpu Batch (t): 0.5714 LR: 0.000096 Step: 73700 Total Loss: 0.0454 Recon Loss: 0.0308 [03/29 09:23:51 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5708 LR: 0.000096 Step: 73800 Total Loss: 0.0434 Recon Loss: 0.0296 [03/29 09:24:48 TiTok]: Data (t): 0.0031, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000096 Step: 73900 Total Loss: 0.0408 Recon Loss: 0.0284 [03/29 09:25:46 TiTok]: Data (t): 0.0034, 57.22/s/gpu Batch (t): 0.6291 LR: 0.000096 Step: 74000 Total Loss: 0.0437 Recon Loss: 0.0295 [03/29 09:26:44 TiTok]: Data (t): 0.0032, 59.47/s/gpu Batch (t): 0.6053 LR: 0.000096 Step: 74100 Total Loss: 0.0417 Recon Loss: 0.0281 [03/29 09:27:41 TiTok]: Data (t): 0.0032, 63.19/s/gpu Batch (t): 0.5697 LR: 0.000096 Step: 74200 Total Loss: 0.0420 Recon Loss: 0.0292 [03/29 09:28:39 TiTok]: Data (t): 0.0032, 63.23/s/gpu Batch (t): 0.5694 LR: 0.000096 Step: 74300 Total Loss: 0.0416 Recon Loss: 0.0292 [03/29 09:29:36 TiTok]: Data (t): 0.0031, 63.01/s/gpu Batch (t): 0.5714 LR: 0.000096 Step: 74400 Total Loss: 0.0460 Recon Loss: 0.0303 [03/29 09:30:33 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000096 Step: 74500 Total Loss: 0.0393 Recon Loss: 0.0277 [03/29 09:31:30 TiTok]: Data (t): 0.0033, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000096 Step: 74600 Total Loss: 0.0416 Recon Loss: 0.0252 [03/29 09:32:27 TiTok]: Data (t): 0.0032, 63.13/s/gpu Batch (t): 0.5702 LR: 0.000096 Step: 74700 Total Loss: 0.0412 Recon Loss: 0.0286 [03/29 09:33:24 TiTok]: Data (t): 0.0032, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000096 Step: 74800 Total Loss: 0.0402 Recon Loss: 0.0278 [03/29 09:34:22 TiTok]: Data (t): 0.0032, 63.11/s/gpu Batch (t): 0.5705 LR: 0.000096 Step: 74900 Total Loss: 0.0395 Recon Loss: 0.0280 [03/29 09:35:19 TiTok]: Data (t): 0.0034, 56.67/s/gpu Batch (t): 0.6353 LR: 0.000096 Step: 75000 Total Loss: 0.0397 Recon Loss: 0.0279 [03/29 09:36:16 TiTok]: Data (t): 0.0032, 62.97/s/gpu Batch (t): 0.5717 LR: 0.000096 Step: 75100 Total Loss: 0.0416 Recon Loss: 0.0288 [03/29 09:37:13 TiTok]: Data (t): 0.0033, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000096 Step: 75200 Total Loss: 0.0390 Recon Loss: 0.0276 [03/29 09:38:10 TiTok]: Data (t): 0.0034, 63.03/s/gpu Batch (t): 0.5711 LR: 0.000096 
Step: 75300 Total Loss: 0.0405 Recon Loss: 0.0279 [03/29 09:39:07 TiTok]: Data (t): 0.0058, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000096 Step: 75400 Total Loss: 0.0449 Recon Loss: 0.0303 [03/29 09:40:05 TiTok]: Data (t): 0.0033, 62.67/s/gpu Batch (t): 0.5744 LR: 0.000096 Step: 75500 Total Loss: 0.0407 Recon Loss: 0.0285 [03/29 09:41:02 TiTok]: Data (t): 0.0033, 61.79/s/gpu Batch (t): 0.5826 LR: 0.000096 Step: 75600 Total Loss: 0.0468 Recon Loss: 0.0303 [03/29 09:42:01 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000096 Step: 75700 Total Loss: 0.0382 Recon Loss: 0.0283 [03/29 09:42:58 TiTok]: Data (t): 0.0031, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000096 Step: 75800 Total Loss: 0.0451 Recon Loss: 0.0316 [03/29 09:43:56 TiTok]: Data (t): 0.0032, 63.06/s/gpu Batch (t): 0.5709 LR: 0.000096 Step: 75900 Total Loss: 0.0395 Recon Loss: 0.0277 [03/29 09:44:53 TiTok]: Data (t): 0.0032, 56.92/s/gpu Batch (t): 0.6325 LR: 0.000096 Step: 76000 Total Loss: 0.0455 Recon Loss: 0.0295 [03/29 09:45:50 TiTok]: Data (t): 0.0031, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000096 Step: 76100 Total Loss: 0.0412 Recon Loss: 0.0282 [03/29 09:46:48 TiTok]: Data (t): 0.0032, 63.09/s/gpu Batch (t): 0.5706 LR: 0.000095 Step: 76200 Total Loss: 0.0434 Recon Loss: 0.0287 [03/29 09:47:45 TiTok]: Data (t): 0.0032, 63.15/s/gpu Batch (t): 0.5701 LR: 0.000095 Step: 76300 Total Loss: 0.0434 Recon Loss: 0.0298 [03/29 09:48:44 TiTok]: Data (t): 0.0031, 63.17/s/gpu Batch (t): 0.5699 LR: 0.000095 Step: 76400 Total Loss: 0.0398 Recon Loss: 0.0279 [03/29 09:49:41 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000095 Step: 76500 Total Loss: 0.0411 Recon Loss: 0.0301 [03/29 09:50:38 TiTok]: Data (t): 0.0032, 63.02/s/gpu Batch (t): 0.5712 LR: 0.000095 Step: 76600 Total Loss: 0.0411 Recon Loss: 0.0290 [03/29 09:51:35 TiTok]: Data (t): 0.0033, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000095 Step: 76700 Total Loss: 0.0411 Recon Loss: 0.0283 [03/29 09:52:33 TiTok]: Data (t): 0.0033, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000095 Step: 76800 Total Loss: 0.0409 Recon Loss: 0.0285 [03/29 09:53:30 TiTok]: Data (t): 0.0034, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000095 Step: 76900 Total Loss: 0.0430 Recon Loss: 0.0293 [03/29 09:54:28 TiTok]: Data (t): 0.0032, 56.98/s/gpu Batch (t): 0.6318 LR: 0.000095 Step: 77000 Total Loss: 0.0417 Recon Loss: 0.0290 [03/29 09:55:25 TiTok]: Data (t): 0.0032, 62.93/s/gpu Batch (t): 0.5721 LR: 0.000095 Step: 77100 Total Loss: 0.0404 Recon Loss: 0.0281 [03/29 09:56:23 TiTok]: Data (t): 0.0034, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000095 Step: 77200 Total Loss: 0.0433 Recon Loss: 0.0261 [03/29 09:57:20 TiTok]: Data (t): 0.0031, 63.03/s/gpu Batch (t): 0.5711 LR: 0.000095 Step: 77300 Total Loss: 0.0439 Recon Loss: 0.0300 [03/29 09:58:17 TiTok]: Data (t): 0.0033, 63.12/s/gpu Batch (t): 0.5703 LR: 0.000095 Step: 77400 Total Loss: 0.0411 Recon Loss: 0.0295 [03/29 09:59:14 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000095 Step: 77500 Total Loss: 0.0438 Recon Loss: 0.0311 [03/29 10:00:11 TiTok]: Data (t): 0.0032, 63.16/s/gpu Batch (t): 0.5700 LR: 0.000095 Step: 77600 Total Loss: 0.0400 Recon Loss: 0.0282 [03/29 10:01:08 TiTok]: Data (t): 0.0033, 63.10/s/gpu Batch (t): 0.5705 LR: 0.000095 Step: 77700 Total Loss: 0.0442 Recon Loss: 0.0323 [03/29 10:02:06 TiTok]: Data (t): 0.0031, 63.11/s/gpu Batch (t): 0.5704 LR: 0.000095 Step: 77800 Total Loss: 0.0418 Recon Loss: 0.0289 [03/29 10:03:03 TiTok]: Data (t): 0.0031, 62.28/s/gpu Batch (t): 0.5781 LR: 0.000095 Step: 77900 Total Loss: 0.0416 Recon Loss: 
0.0304 [03/29 10:04:00 TiTok]: Data (t): 0.0032, 57.07/s/gpu Batch (t): 0.6308 LR: 0.000095 Step: 78000 Total Loss: 0.0408 Recon Loss: 0.0284 [03/29 10:04:57 TiTok]: Data (t): 0.0032, 63.23/s/gpu Batch (t): 0.5694 LR: 0.000095 Step: 78100 Total Loss: 0.0428 Recon Loss: 0.0295 [03/29 10:05:55 TiTok]: Data (t): 0.0032, 63.10/s/gpu Batch (t): 0.5705 LR: 0.000095 Step: 78200 Total Loss: 0.0407 Recon Loss: 0.0280 [03/29 10:06:52 TiTok]: Data (t): 0.0032, 63.20/s/gpu Batch (t): 0.5696 LR: 0.000095 Step: 78300 Total Loss: 0.0417 Recon Loss: 0.0292 [03/29 10:07:49 TiTok]: Data (t): 0.0032, 63.24/s/gpu Batch (t): 0.5692 LR: 0.000095 Step: 78400 Total Loss: 0.0442 Recon Loss: 0.0296 [03/29 10:08:46 TiTok]: Data (t): 0.0032, 63.23/s/gpu Batch (t): 0.5694 LR: 0.000095 Step: 78500 Total Loss: 0.0412 Recon Loss: 0.0304 [03/29 10:09:43 TiTok]: Data (t): 0.0032, 43.30/s/gpu Batch (t): 0.8315 LR: 0.000095 Step: 78600 Total Loss: 0.0410 Recon Loss: 0.0278 [03/29 10:10:42 TiTok]: Data (t): 0.0033, 63.02/s/gpu Batch (t): 0.5713 LR: 0.000095 Step: 78700 Total Loss: 0.0422 Recon Loss: 0.0305 [03/29 10:11:39 TiTok]: Data (t): 0.0032, 62.71/s/gpu Batch (t): 0.5741 LR: 0.000095 Step: 78800 Total Loss: 0.0421 Recon Loss: 0.0293 [03/29 10:12:36 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000095 Step: 78900 Total Loss: 0.0433 Recon Loss: 0.0295 [03/29 10:13:33 TiTok]: Data (t): 0.0031, 57.09/s/gpu Batch (t): 0.6306 LR: 0.000095 Step: 79000 Total Loss: 0.0439 Recon Loss: 0.0290 [03/29 10:14:30 TiTok]: Data (t): 0.0032, 63.12/s/gpu Batch (t): 0.5704 LR: 0.000095 Step: 79100 Total Loss: 0.0467 Recon Loss: 0.0324 [03/29 10:15:27 TiTok]: Data (t): 0.0054, 59.86/s/gpu Batch (t): 0.6014 LR: 0.000095 Step: 79200 Total Loss: 0.0415 Recon Loss: 0.0288 [03/29 10:16:25 TiTok]: Data (t): 0.0032, 62.93/s/gpu Batch (t): 0.5721 LR: 0.000095 Step: 79300 Total Loss: 0.0454 Recon Loss: 0.0313 [03/29 10:17:22 TiTok]: Data (t): 0.0032, 63.08/s/gpu Batch (t): 0.5707 LR: 0.000095 Step: 79400 Total Loss: 0.0421 Recon Loss: 0.0293 [03/29 10:18:19 TiTok]: Data (t): 0.0032, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000095 Step: 79500 Total Loss: 0.0395 Recon Loss: 0.0285 [03/29 10:19:16 TiTok]: Data (t): 0.0031, 63.03/s/gpu Batch (t): 0.5712 LR: 0.000095 Step: 79600 Total Loss: 0.0460 Recon Loss: 0.0313 [03/29 10:20:13 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000095 Step: 79700 Total Loss: 0.0438 Recon Loss: 0.0288 [03/29 10:21:11 TiTok]: Data (t): 0.0032, 62.90/s/gpu Batch (t): 0.5724 LR: 0.000095 Step: 79800 Total Loss: 0.0417 Recon Loss: 0.0288 [03/29 10:22:08 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000095 Step: 79900 Total Loss: 0.0436 Recon Loss: 0.0287 [03/29 10:23:05 TiTok]: Data (t): 0.0031, 56.94/s/gpu Batch (t): 0.6322 LR: 0.000095 Step: 80000 Total Loss: 0.0437 Recon Loss: 0.0311 [03/29 10:23:07 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-80000 [03/29 10:23:21 TiTok]: Reconstructing images... 
[03/29 10:59:14 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml [03/29 10:59:14 TiTok]: Config: [identical to the previous config dump; omitted] [03/29 10:59:34 TiTok]: Creating model and loss module. [03/29 11:00:08 TiTok]: loading weight from ckpt/OrderTok.bin, msg: [03/29 11:00:12 TiTok]: Creating optimizers. [03/29 11:00:12 TiTok]: Creating lr_schedulers. [03/29 11:00:12 TiTok]: Creating dataloaders. [03/29 11:00:12 TiTok]: Creating evaluator. [03/29 11:00:13 TiTok]: Preparing model, optimizer and dataloaders [03/29 11:00:16 TiTok]: ***** Running training ***** [03/29 11:00:16 TiTok]: Num training steps = 500000 [03/29 11:00:16 TiTok]: Gradient Accumulation steps = 1 [03/29 11:00:16 TiTok]: Instantaneous batch size per gpu = 36 [03/29 11:00:16 TiTok]: Total train batch size (w. 
parallel, distributed & accumulation) = 288 [03/29 11:00:16 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-60000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-50000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-80000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-30000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-10000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-20000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-40000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-70000'] [03/29 11:00:16 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-80000 [03/29 11:00:41 TiTok]: Resuming at global_step 80000 [03/29 11:02:00 TiTok]: Data (t): 0.0034, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000095 Step: 80100 Total Loss: 0.0391 Recon Loss: 0.0267 [03/29 11:02:58 TiTok]: Data (t): 0.0079, 61.74/s/gpu Batch (t): 0.5831 LR: 0.000095 Step: 80200 Total Loss: 0.0381 Recon Loss: 0.0292 [03/29 11:03:55 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000095 Step: 80300 Total Loss: 0.0431 Recon Loss: 0.0299 [03/29 11:04:53 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000095 Step: 80400 Total Loss: 0.0434 Recon Loss: 0.0290 [03/29 11:05:52 TiTok]: Data (t): 0.0032, 61.48/s/gpu Batch (t): 0.5856 LR: 0.000095 Step: 80500 Total Loss: 0.0381 Recon Loss: 0.0269 [03/29 11:06:51 TiTok]: Data (t): 0.0032, 61.24/s/gpu Batch (t): 0.5879 LR: 0.000095 Step: 80600 Total Loss: 0.0422 Recon Loss: 0.0290 [03/29 11:07:49 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000095 Step: 80700 Total Loss: 0.0413 Recon Loss: 0.0281 [03/29 11:08:47 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000095 Step: 80800 Total Loss: 0.0417 Recon Loss: 0.0290 [03/29 11:09:44 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000095 Step: 80900 Total Loss: 0.0407 Recon Loss: 0.0284 [03/29 11:10:42 TiTok]: Data (t): 0.0032, 55.75/s/gpu Batch (t): 0.6457 LR: 0.000095 Step: 81000 Total Loss: 0.0437 Recon Loss: 0.0286 [03/29 11:11:40 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000095 Step: 81100 Total Loss: 0.0452 Recon Loss: 0.0318 [03/29 11:12:38 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000095 Step: 81200 Total Loss: 0.0422 Recon Loss: 0.0281 [03/29 11:13:36 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000095 Step: 81300 Total Loss: 0.0418 Recon Loss: 0.0304 [03/29 11:14:33 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000095 Step: 81400 Total Loss: 0.0446 Recon Loss: 0.0302 [03/29 11:15:32 TiTok]: Data (t): 0.0033, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000095 Step: 81500 Total Loss: 0.0398 Recon Loss: 0.0271 [03/29 11:16:29 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000095 Step: 81600 Total Loss: 0.0403 Recon Loss: 0.0292 [03/29 11:17:27 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000095 Step: 81700 Total Loss: 0.0430 Recon Loss: 0.0292 [03/29 11:18:25 TiTok]: Data (t): 0.0032, 62.18/s/gpu Batch (t): 0.5790 LR: 0.000095 Step: 81800 Total Loss: 0.0406 Recon Loss: 0.0298 [03/29 11:19:23 TiTok]: Data (t): 0.0033, 61.98/s/gpu Batch (t): 0.5809 LR: 0.000095 Step: 81900 Total Loss: 0.0418 Recon Loss: 0.0285 [03/29 11:20:21 TiTok]: Data (t): 0.0032, 51.15/s/gpu Batch (t): 0.7038 LR: 0.000095 Step: 82000 Total Loss: 0.0393 Recon Loss: 0.0267 [03/29 11:21:19 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000095 Step: 82100 Total Loss: 0.0420 
Recon Loss: 0.0287 [03/29 11:22:16 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000095 Step: 82200 Total Loss: 0.0426 Recon Loss: 0.0285 [03/29 11:23:14 TiTok]: Data (t): 0.0032, 62.79/s/gpu Batch (t): 0.5733 LR: 0.000095 Step: 82300 Total Loss: 0.0434 Recon Loss: 0.0307 [03/29 11:24:11 TiTok]: Data (t): 0.0032, 62.70/s/gpu Batch (t): 0.5742 LR: 0.000095 Step: 82400 Total Loss: 0.0435 Recon Loss: 0.0276 [03/29 11:25:09 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000095 Step: 82500 Total Loss: 0.0402 Recon Loss: 0.0282 [03/29 11:26:06 TiTok]: Data (t): 0.0031, 62.66/s/gpu Batch (t): 0.5745 LR: 0.000095 Step: 82600 Total Loss: 0.0411 Recon Loss: 0.0278 [03/29 11:27:04 TiTok]: Data (t): 0.0033, 62.67/s/gpu Batch (t): 0.5745 LR: 0.000095 Step: 82700 Total Loss: 0.0434 Recon Loss: 0.0295 [03/29 11:28:02 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000095 Step: 82800 Total Loss: 0.0424 Recon Loss: 0.0287 [03/29 11:28:59 TiTok]: Data (t): 0.0032, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000095 Step: 82900 Total Loss: 0.0420 Recon Loss: 0.0311 [03/29 11:29:57 TiTok]: Data (t): 0.0032, 55.99/s/gpu Batch (t): 0.6430 LR: 0.000095 Step: 83000 Total Loss: 0.0409 Recon Loss: 0.0272 [03/29 11:30:55 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000095 Step: 83100 Total Loss: 0.0414 Recon Loss: 0.0287 [03/29 11:31:53 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000095 Step: 83200 Total Loss: 0.0382 Recon Loss: 0.0280 [03/29 11:32:50 TiTok]: Data (t): 0.0037, 61.93/s/gpu Batch (t): 0.5813 LR: 0.000095 Step: 83300 Total Loss: 0.0435 Recon Loss: 0.0299 [03/29 11:33:48 TiTok]: Data (t): 0.0031, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000095 Step: 83400 Total Loss: 0.0441 Recon Loss: 0.0294 [03/29 11:34:46 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000095 Step: 83500 Total Loss: 0.0386 Recon Loss: 0.0265 [03/29 11:35:43 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000095 Step: 83600 Total Loss: 0.0403 Recon Loss: 0.0276 [03/29 11:36:41 TiTok]: Data (t): 0.0032, 62.10/s/gpu Batch (t): 0.5797 LR: 0.000095 Step: 83700 Total Loss: 0.0408 Recon Loss: 0.0289 [03/29 11:37:39 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000094 Step: 83800 Total Loss: 0.0417 Recon Loss: 0.0283 [03/29 11:38:37 TiTok]: Data (t): 0.0033, 62.12/s/gpu Batch (t): 0.5796 LR: 0.000094 Step: 83900 Total Loss: 0.0458 Recon Loss: 0.0300 [03/29 11:39:35 TiTok]: Data (t): 0.0032, 56.61/s/gpu Batch (t): 0.6360 LR: 0.000094 Step: 84000 Total Loss: 0.0418 Recon Loss: 0.0281 [03/29 11:40:32 TiTok]: Data (t): 0.0033, 62.17/s/gpu Batch (t): 0.5791 LR: 0.000094 Step: 84100 Total Loss: 0.0381 Recon Loss: 0.0291 [03/29 11:41:30 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000094 Step: 84200 Total Loss: 0.0422 Recon Loss: 0.0292 [03/29 11:42:28 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000094 Step: 84300 Total Loss: 0.0434 Recon Loss: 0.0306 [03/29 11:43:25 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000094 Step: 84400 Total Loss: 0.0363 Recon Loss: 0.0257 [03/29 11:44:25 TiTok]: Data (t): 0.0034, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000094 Step: 84500 Total Loss: 0.0430 Recon Loss: 0.0316 [03/29 11:45:23 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000094 Step: 84600 Total Loss: 0.0385 Recon Loss: 0.0272 [03/29 11:46:21 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000094 Step: 84700 Total Loss: 0.0443 Recon Loss: 0.0314 [03/29 11:47:18 TiTok]: 
Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000094 Step: 84800 Total Loss: 0.0440 Recon Loss: 0.0281 [03/29 11:48:16 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000094 Step: 84900 Total Loss: 0.0385 Recon Loss: 0.0269 [03/29 11:49:14 TiTok]: Data (t): 0.0033, 56.59/s/gpu Batch (t): 0.6361 LR: 0.000094 Step: 85000 Total Loss: 0.0427 Recon Loss: 0.0298 [03/29 11:50:12 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000094 Step: 85100 Total Loss: 0.0439 Recon Loss: 0.0290 [03/29 11:51:10 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000094 Step: 85200 Total Loss: 0.0421 Recon Loss: 0.0292 [03/29 11:52:08 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000094 Step: 85300 Total Loss: 0.0400 Recon Loss: 0.0280 [03/29 11:53:06 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5773 LR: 0.000094 Step: 85400 Total Loss: 0.0415 Recon Loss: 0.0280 [03/29 11:54:04 TiTok]: Data (t): 0.0035, 62.10/s/gpu Batch (t): 0.5797 LR: 0.000094 Step: 85500 Total Loss: 0.0403 Recon Loss: 0.0271 [03/29 11:55:02 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000094 Step: 85600 Total Loss: 0.0408 Recon Loss: 0.0296 [03/29 11:55:59 TiTok]: Data (t): 0.0032, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000094 Step: 85700 Total Loss: 0.0433 Recon Loss: 0.0291 [03/29 11:56:57 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000094 Step: 85800 Total Loss: 0.0415 Recon Loss: 0.0279 [03/29 11:57:55 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000094 Step: 85900 Total Loss: 0.0447 Recon Loss: 0.0278 [03/29 11:58:53 TiTok]: Data (t): 0.0033, 56.61/s/gpu Batch (t): 0.6359 LR: 0.000094 Step: 86000 Total Loss: 0.0404 Recon Loss: 0.0260 [03/29 11:59:50 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000094 Step: 86100 Total Loss: 0.0450 Recon Loss: 0.0311 [03/29 12:00:48 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5759 LR: 0.000094 Step: 86200 Total Loss: 0.0404 Recon Loss: 0.0285 [03/29 12:01:46 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000094 Step: 86300 Total Loss: 0.0447 Recon Loss: 0.0300 [03/29 12:02:44 TiTok]: Data (t): 0.0032, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000094 Step: 86400 Total Loss: 0.0396 Recon Loss: 0.0275 [03/29 12:03:42 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5759 LR: 0.000094 Step: 86500 Total Loss: 0.0406 Recon Loss: 0.0294 [03/29 12:04:39 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000094 Step: 86600 Total Loss: 0.0407 Recon Loss: 0.0278 [03/29 12:05:37 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000094 Step: 86700 Total Loss: 0.0406 Recon Loss: 0.0288 [03/29 12:06:35 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000094 Step: 86800 Total Loss: 0.0428 Recon Loss: 0.0291 [03/29 12:07:33 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000094 Step: 86900 Total Loss: 0.0448 Recon Loss: 0.0309 [03/29 12:08:30 TiTok]: Data (t): 0.0033, 56.64/s/gpu Batch (t): 0.6356 LR: 0.000094 Step: 87000 Total Loss: 0.0422 Recon Loss: 0.0298 [03/29 12:09:28 TiTok]: Data (t): 0.0032, 61.91/s/gpu Batch (t): 0.5815 LR: 0.000094 Step: 87100 Total Loss: 0.0445 Recon Loss: 0.0289 [03/29 12:10:26 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000094 Step: 87200 Total Loss: 0.0416 Recon Loss: 0.0278 [03/29 12:11:24 TiTok]: Data (t): 0.0032, 62.00/s/gpu Batch (t): 0.5806 LR: 0.000094 Step: 87300 Total Loss: 0.0393 Recon Loss: 0.0289 [03/29 12:12:22 TiTok]: Data (t): 0.0035, 62.56/s/gpu Batch (t): 
0.5755 LR: 0.000094 Step: 87400 Total Loss: 0.0420 Recon Loss: 0.0303 [03/29 12:13:20 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000094 Step: 87500 Total Loss: 0.0384 Recon Loss: 0.0253 [03/29 12:14:17 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5760 LR: 0.000094 Step: 87600 Total Loss: 0.0429 Recon Loss: 0.0282 [03/29 12:15:15 TiTok]: Data (t): 0.0032, 59.55/s/gpu Batch (t): 0.6045 LR: 0.000094 Step: 87700 Total Loss: 0.0422 Recon Loss: 0.0283 [03/29 12:16:13 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000094 Step: 87800 Total Loss: 0.0413 Recon Loss: 0.0282 [03/29 12:17:11 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000094 Step: 87900 Total Loss: 0.0441 Recon Loss: 0.0301 [03/29 12:18:09 TiTok]: Data (t): 0.0033, 56.67/s/gpu Batch (t): 0.6353 LR: 0.000094 Step: 88000 Total Loss: 0.0407 Recon Loss: 0.0296 [03/29 12:19:07 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000094 Step: 88100 Total Loss: 0.0436 Recon Loss: 0.0302 [03/29 12:20:04 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000094 Step: 88200 Total Loss: 0.0420 Recon Loss: 0.0286 [03/29 12:21:02 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000094 Step: 88300 Total Loss: 0.0405 Recon Loss: 0.0280 [03/29 12:22:00 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000094 Step: 88400 Total Loss: 0.0443 Recon Loss: 0.0288 [03/29 12:22:58 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000094 Step: 88500 Total Loss: 0.0442 Recon Loss: 0.0297 [03/29 12:23:55 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000094 Step: 88600 Total Loss: 0.0448 Recon Loss: 0.0308 [03/29 12:24:53 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000094 Step: 88700 Total Loss: 0.0404 Recon Loss: 0.0291 [03/29 12:25:51 TiTok]: Data (t): 0.0034, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000094 Step: 88800 Total Loss: 0.0403 Recon Loss: 0.0264 [03/29 12:26:49 TiTok]: Data (t): 0.0032, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000094 Step: 88900 Total Loss: 0.0404 Recon Loss: 0.0297 [03/29 12:27:48 TiTok]: Data (t): 0.0032, 56.15/s/gpu Batch (t): 0.6412 LR: 0.000094 Step: 89000 Total Loss: 0.0401 Recon Loss: 0.0272 [03/29 12:28:46 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000094 Step: 89100 Total Loss: 0.0487 Recon Loss: 0.0313 [03/29 12:29:44 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000094 Step: 89200 Total Loss: 0.0405 Recon Loss: 0.0281 [03/29 12:30:42 TiTok]: Data (t): 0.0034, 62.18/s/gpu Batch (t): 0.5790 LR: 0.000094 Step: 89300 Total Loss: 0.0436 Recon Loss: 0.0301 [03/29 12:31:39 TiTok]: Data (t): 0.0034, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000094 Step: 89400 Total Loss: 0.0410 Recon Loss: 0.0274 [03/29 12:32:37 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000094 Step: 89500 Total Loss: 0.0394 Recon Loss: 0.0273 [03/29 12:33:36 TiTok]: Data (t): 0.0032, 61.80/s/gpu Batch (t): 0.5826 LR: 0.000094 Step: 89600 Total Loss: 0.0420 Recon Loss: 0.0291 [03/29 12:34:34 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000094 Step: 89700 Total Loss: 0.0420 Recon Loss: 0.0295 [03/29 12:35:31 TiTok]: Data (t): 0.0032, 61.09/s/gpu Batch (t): 0.5893 LR: 0.000094 Step: 89800 Total Loss: 0.0400 Recon Loss: 0.0298 [03/29 12:36:29 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000094 Step: 89900 Total Loss: 0.0435 Recon Loss: 0.0294 [03/29 12:37:27 TiTok]: Data (t): 0.0032, 56.41/s/gpu Batch (t): 0.6382 LR: 0.000094 Step: 90000 Total Loss: 
0.0415 Recon Loss: 0.0284 [03/29 12:37:30 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-90000 [03/29 12:37:47 TiTok]: Reconstructing images... [03/29 12:38:46 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000094 Step: 90100 Total Loss: 0.0410 Recon Loss: 0.0274 [03/29 12:39:44 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000094 Step: 90200 Total Loss: 0.0394 Recon Loss: 0.0266 [03/29 12:40:42 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000094 Step: 90300 Total Loss: 0.0424 Recon Loss: 0.0290 [03/29 12:41:39 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000094 Step: 90400 Total Loss: 0.0423 Recon Loss: 0.0287 [03/29 12:42:37 TiTok]: Data (t): 0.0032, 61.76/s/gpu Batch (t): 0.5829 LR: 0.000094 Step: 90500 Total Loss: 0.0408 Recon Loss: 0.0272 [03/29 12:43:35 TiTok]: Data (t): 0.0032, 61.84/s/gpu Batch (t): 0.5822 LR: 0.000094 Step: 90600 Total Loss: 0.0403 Recon Loss: 0.0278 [03/29 12:44:33 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000094 Step: 90700 Total Loss: 0.0409 Recon Loss: 0.0284 [03/29 12:45:31 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000093 Step: 90800 Total Loss: 0.0438 Recon Loss: 0.0288 [03/29 12:46:29 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000093 Step: 90900 Total Loss: 0.0396 Recon Loss: 0.0287 [03/29 12:47:27 TiTok]: Data (t): 0.0032, 51.05/s/gpu Batch (t): 0.7051 LR: 0.000093 Step: 91000 Total Loss: 0.0390 Recon Loss: 0.0270 [03/29 12:48:25 TiTok]: Data (t): 0.0032, 61.68/s/gpu Batch (t): 0.5837 LR: 0.000093 Step: 91100 Total Loss: 0.0426 Recon Loss: 0.0278 [03/29 12:49:23 TiTok]: Data (t): 0.0033, 60.65/s/gpu Batch (t): 0.5936 LR: 0.000093 Step: 91200 Total Loss: 0.0428 Recon Loss: 0.0289 [03/29 12:50:22 TiTok]: Data (t): 0.0033, 61.66/s/gpu Batch (t): 0.5838 LR: 0.000093 Step: 91300 Total Loss: 0.0405 Recon Loss: 0.0289 [03/29 12:51:20 TiTok]: Data (t): 0.0034, 61.68/s/gpu Batch (t): 0.5836 LR: 0.000093 Step: 91400 Total Loss: 0.0440 Recon Loss: 0.0292 [03/29 12:52:19 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000093 Step: 91500 Total Loss: 0.0401 Recon Loss: 0.0279 [03/29 12:53:17 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000093 Step: 91600 Total Loss: 0.0461 Recon Loss: 0.0332 [03/29 12:54:15 TiTok]: Data (t): 0.0033, 61.62/s/gpu Batch (t): 0.5842 LR: 0.000093 Step: 91700 Total Loss: 0.0368 Recon Loss: 0.0255 [03/29 12:55:13 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000093 Step: 91800 Total Loss: 0.0422 Recon Loss: 0.0305 [03/29 12:56:10 TiTok]: Data (t): 0.0034, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000093 Step: 91900 Total Loss: 0.0426 Recon Loss: 0.0303 [03/29 12:57:08 TiTok]: Data (t): 0.0032, 56.84/s/gpu Batch (t): 0.6334 LR: 0.000093 Step: 92000 Total Loss: 0.0420 Recon Loss: 0.0277 [03/29 12:58:06 TiTok]: Data (t): 0.0033, 62.67/s/gpu Batch (t): 0.5745 LR: 0.000093 Step: 92100 Total Loss: 0.0423 Recon Loss: 0.0282 [03/29 12:59:04 TiTok]: Data (t): 0.0034, 62.59/s/gpu Batch (t): 0.5751 LR: 0.000093 Step: 92200 Total Loss: 0.0411 Recon Loss: 0.0295 [03/29 13:00:01 TiTok]: Data (t): 0.0034, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000093 Step: 92300 Total Loss: 0.0433 Recon Loss: 0.0291 [03/29 13:00:59 TiTok]: Data (t): 0.0033, 61.82/s/gpu Batch (t): 0.5824 LR: 0.000093 Step: 92400 Total Loss: 0.0390 Recon Loss: 0.0272 [03/29 13:01:58 TiTok]: Data (t): 0.0033, 61.81/s/gpu Batch (t): 0.5825 LR: 0.000093 Step: 92500 Total Loss: 0.0394 Recon Loss: 0.0292 [03/29 
13:02:56 TiTok]: Data (t): 0.0033, 62.02/s/gpu Batch (t): 0.5804 LR: 0.000093 Step: 92600 Total Loss: 0.0420 Recon Loss: 0.0290 [03/29 13:03:54 TiTok]: Data (t): 0.0033, 61.82/s/gpu Batch (t): 0.5823 LR: 0.000093 Step: 92700 Total Loss: 0.0452 Recon Loss: 0.0312 [03/29 13:04:52 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000093 Step: 92800 Total Loss: 0.0390 Recon Loss: 0.0284 [03/29 13:05:50 TiTok]: Data (t): 0.0034, 61.40/s/gpu Batch (t): 0.5863 LR: 0.000093 Step: 92900 Total Loss: 0.0434 Recon Loss: 0.0310 [03/29 13:06:47 TiTok]: Data (t): 0.0033, 56.58/s/gpu Batch (t): 0.6362 LR: 0.000093 Step: 93000 Total Loss: 0.0417 Recon Loss: 0.0293 [03/29 13:07:45 TiTok]: Data (t): 0.0034, 62.69/s/gpu Batch (t): 0.5743 LR: 0.000093 Step: 93100 Total Loss: 0.0412 Recon Loss: 0.0276 [03/29 13:08:43 TiTok]: Data (t): 0.0034, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000093 Step: 93200 Total Loss: 0.0392 Recon Loss: 0.0286 [03/29 13:09:41 TiTok]: Data (t): 0.0035, 60.59/s/gpu Batch (t): 0.5942 LR: 0.000093 Step: 93300 Total Loss: 0.0393 Recon Loss: 0.0275 [03/29 13:10:39 TiTok]: Data (t): 0.0034, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000093 Step: 93400 Total Loss: 0.0415 Recon Loss: 0.0284 [03/29 13:11:37 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000093 Step: 93500 Total Loss: 0.0440 Recon Loss: 0.0312 [03/29 13:12:35 TiTok]: Data (t): 0.0034, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000093 Step: 93600 Total Loss: 0.0379 Recon Loss: 0.0263 [03/29 13:13:33 TiTok]: Data (t): 0.0033, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000093 Step: 93700 Total Loss: 0.0409 Recon Loss: 0.0277 [03/29 13:14:31 TiTok]: Data (t): 0.0033, 61.06/s/gpu Batch (t): 0.5896 LR: 0.000093 Step: 93800 Total Loss: 0.0407 Recon Loss: 0.0281 [03/29 13:15:29 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000093 Step: 93900 Total Loss: 0.0426 Recon Loss: 0.0321 [03/29 13:16:27 TiTok]: Data (t): 0.0033, 56.45/s/gpu Batch (t): 0.6377 LR: 0.000093 Step: 94000 Total Loss: 0.0427 Recon Loss: 0.0295 [03/29 13:17:25 TiTok]: Data (t): 0.0034, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000093 Step: 94100 Total Loss: 0.0421 Recon Loss: 0.0271 [03/29 13:18:23 TiTok]: Data (t): 0.0034, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000093 Step: 94200 Total Loss: 0.0420 Recon Loss: 0.0301 [03/29 13:19:21 TiTok]: Data (t): 0.0034, 61.86/s/gpu Batch (t): 0.5819 LR: 0.000093 Step: 94300 Total Loss: 0.0395 Recon Loss: 0.0263 [03/29 13:20:19 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000093 Step: 94400 Total Loss: 0.0413 Recon Loss: 0.0283 [03/29 13:21:17 TiTok]: Data (t): 0.0033, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000093 Step: 94500 Total Loss: 0.0416 Recon Loss: 0.0318 [03/29 13:22:15 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000093 Step: 94600 Total Loss: 0.0421 Recon Loss: 0.0279 [03/29 13:23:12 TiTok]: Data (t): 0.0034, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000093 Step: 94700 Total Loss: 0.0416 Recon Loss: 0.0293 [03/29 13:24:10 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000093 Step: 94800 Total Loss: 0.0407 Recon Loss: 0.0283 [03/29 13:25:08 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000093 Step: 94900 Total Loss: 0.0417 Recon Loss: 0.0285 [03/29 13:26:06 TiTok]: Data (t): 0.0032, 56.49/s/gpu Batch (t): 0.6373 LR: 0.000093 Step: 95000 Total Loss: 0.0409 Recon Loss: 0.0274 [03/29 13:27:04 TiTok]: Data (t): 0.0035, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000093 Step: 95100 Total Loss: 0.0469 Recon Loss: 0.0307 [03/29 13:28:01 TiTok]: Data (t): 0.0033, 
62.40/s/gpu Batch (t): 0.5769 LR: 0.000093 Step: 95200 Total Loss: 0.0442 Recon Loss: 0.0271 [03/29 13:28:59 TiTok]: Data (t): 0.0033, 61.25/s/gpu Batch (t): 0.5878 LR: 0.000093 Step: 95300 Total Loss: 0.0400 Recon Loss: 0.0277 [03/29 13:29:57 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000093 Step: 95400 Total Loss: 0.0446 Recon Loss: 0.0302 [03/29 13:30:55 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000093 Step: 95500 Total Loss: 0.0408 Recon Loss: 0.0302 [03/29 13:31:52 TiTok]: Data (t): 0.0034, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000093 Step: 95600 Total Loss: 0.0419 Recon Loss: 0.0297 [03/29 13:32:50 TiTok]: Data (t): 0.0035, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000093 Step: 95700 Total Loss: 0.0446 Recon Loss: 0.0302 [03/29 13:33:48 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000093 Step: 95800 Total Loss: 0.0429 Recon Loss: 0.0291 [03/29 13:34:46 TiTok]: Data (t): 0.0034, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000093 Step: 95900 Total Loss: 0.0402 Recon Loss: 0.0284 [03/29 13:35:43 TiTok]: Data (t): 0.0034, 56.43/s/gpu Batch (t): 0.6380 LR: 0.000093 Step: 96000 Total Loss: 0.0417 Recon Loss: 0.0278 [03/29 13:36:41 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000093 Step: 96100 Total Loss: 0.0389 Recon Loss: 0.0267 [03/29 13:37:39 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000093 Step: 96200 Total Loss: 0.0420 Recon Loss: 0.0300 [03/29 13:38:37 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000093 Step: 96300 Total Loss: 0.0398 Recon Loss: 0.0277 [03/29 13:39:36 TiTok]: Data (t): 0.0034, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000093 Step: 96400 Total Loss: 0.0426 Recon Loss: 0.0292 [03/29 13:40:34 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000093 Step: 96500 Total Loss: 0.0421 Recon Loss: 0.0301 [03/29 13:41:32 TiTok]: Data (t): 0.0035, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000093 Step: 96600 Total Loss: 0.0389 Recon Loss: 0.0281 [03/29 13:42:29 TiTok]: Data (t): 0.0033, 61.63/s/gpu Batch (t): 0.5841 LR: 0.000093 Step: 96700 Total Loss: 0.0392 Recon Loss: 0.0290 [03/29 13:43:27 TiTok]: Data (t): 0.0034, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000093 Step: 96800 Total Loss: 0.0434 Recon Loss: 0.0292 [03/29 13:44:25 TiTok]: Data (t): 0.0034, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000093 Step: 96900 Total Loss: 0.0425 Recon Loss: 0.0294 [03/29 13:45:23 TiTok]: Data (t): 0.0033, 56.67/s/gpu Batch (t): 0.6352 LR: 0.000093 Step: 97000 Total Loss: 0.0452 Recon Loss: 0.0289 [03/29 13:46:21 TiTok]: Data (t): 0.0035, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000093 Step: 97100 Total Loss: 0.0465 Recon Loss: 0.0300 [03/29 13:47:18 TiTok]: Data (t): 0.0034, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000093 Step: 97200 Total Loss: 0.0408 Recon Loss: 0.0292 [03/29 13:48:16 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000093 Step: 97300 Total Loss: 0.0424 Recon Loss: 0.0282 [03/29 13:49:14 TiTok]: Data (t): 0.0034, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000092 Step: 97400 Total Loss: 0.0392 Recon Loss: 0.0310 [03/29 13:50:11 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000092 Step: 97500 Total Loss: 0.0403 Recon Loss: 0.0297 [03/29 13:51:09 TiTok]: Data (t): 0.0033, 61.63/s/gpu Batch (t): 0.5842 LR: 0.000092 Step: 97600 Total Loss: 0.0421 Recon Loss: 0.0282 [03/29 13:52:07 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000092 Step: 97700 Total Loss: 0.0412 Recon Loss: 0.0284 [03/29 13:53:05 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5731 LR: 0.000092 
Step: 97800 Total Loss: 0.0454 Recon Loss: 0.0301 [03/29 13:54:04 TiTok]: Data (t): 0.0032, 62.12/s/gpu Batch (t): 0.5795 LR: 0.000092 Step: 97900 Total Loss: 0.0393 Recon Loss: 0.0288 [03/29 13:55:02 TiTok]: Data (t): 0.0034, 56.49/s/gpu Batch (t): 0.6373 LR: 0.000092 Step: 98000 Total Loss: 0.0430 Recon Loss: 0.0286 [03/29 13:56:00 TiTok]: Data (t): 0.0034, 62.20/s/gpu Batch (t): 0.5787 LR: 0.000092 Step: 98100 Total Loss: 0.0400 Recon Loss: 0.0285 [03/29 13:56:58 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000092 Step: 98200 Total Loss: 0.0427 Recon Loss: 0.0283 [03/29 13:57:55 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000092 Step: 98300 Total Loss: 0.0384 Recon Loss: 0.0284 [03/29 13:58:54 TiTok]: Data (t): 0.0034, 58.54/s/gpu Batch (t): 0.6150 LR: 0.000092 Step: 98400 Total Loss: 0.0448 Recon Loss: 0.0288 [03/29 13:59:52 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000092 Step: 98500 Total Loss: 0.0378 Recon Loss: 0.0274 [03/29 14:00:49 TiTok]: Data (t): 0.0034, 61.67/s/gpu Batch (t): 0.5838 LR: 0.000092 Step: 98600 Total Loss: 0.0424 Recon Loss: 0.0296 [03/29 14:01:48 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000092 Step: 98700 Total Loss: 0.0409 Recon Loss: 0.0290 [03/29 14:02:46 TiTok]: Data (t): 0.0033, 59.39/s/gpu Batch (t): 0.6062 LR: 0.000092 Step: 98800 Total Loss: 0.0426 Recon Loss: 0.0295 [03/29 14:03:44 TiTok]: Data (t): 0.0034, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000092 Step: 98900 Total Loss: 0.0410 Recon Loss: 0.0273 [03/29 14:04:42 TiTok]: Data (t): 0.0033, 56.50/s/gpu Batch (t): 0.6372 LR: 0.000092 Step: 99000 Total Loss: 0.0441 Recon Loss: 0.0303 [03/29 14:05:39 TiTok]: Data (t): 0.0034, 62.06/s/gpu Batch (t): 0.5800 LR: 0.000092 Step: 99100 Total Loss: 0.0380 Recon Loss: 0.0275 [03/29 14:06:37 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000092 Step: 99200 Total Loss: 0.0419 Recon Loss: 0.0288 [03/29 14:07:35 TiTok]: Data (t): 0.0033, 61.10/s/gpu Batch (t): 0.5892 LR: 0.000092 Step: 99300 Total Loss: 0.0414 Recon Loss: 0.0284 [03/29 14:08:34 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5773 LR: 0.000092 Step: 99400 Total Loss: 0.0424 Recon Loss: 0.0286 [03/29 14:09:31 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000092 Step: 99500 Total Loss: 0.0447 Recon Loss: 0.0302 [03/29 14:10:29 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000092 Step: 99600 Total Loss: 0.0422 Recon Loss: 0.0299 [03/29 14:11:27 TiTok]: Data (t): 0.0033, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000092 Step: 99700 Total Loss: 0.0428 Recon Loss: 0.0293 [03/29 14:12:25 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000092 Step: 99800 Total Loss: 0.0421 Recon Loss: 0.0282 [03/29 14:13:23 TiTok]: Data (t): 0.0034, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000092 Step: 99900 Total Loss: 0.0421 Recon Loss: 0.0273 [03/29 14:14:21 TiTok]: Data (t): 0.0035, 56.09/s/gpu Batch (t): 0.6418 LR: 0.000092 Step: 100000 Total Loss: 0.0391 Recon Loss: 0.0281 [03/29 14:14:24 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-100000 [03/29 14:15:16 TiTok]: Reconstructing images... 
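The LR column tracks the configured cosine schedule (warmup_steps 5000, base learning_rate 1e-4, end_lr 1e-5, max_train_steps 500000): 0.000096 near step 75000, 0.000092 at step 100000 above. A standard cosine-with-warmup formula, assumed here rather than taken from the codebase, reproduces the logged values:

    import math

    def cosine_lr(step, base_lr=1e-4, end_lr=1e-5, warmup=5_000, max_steps=500_000):
        # Assumed standard form: linear warmup, then cosine decay to end_lr.
        if step < warmup:
            return base_lr * step / warmup
        progress = (step - warmup) / (max_steps - warmup)
        return end_lr + 0.5 * (base_lr - end_lr) * (1.0 + math.cos(math.pi * progress))

    for s in (75_000, 100_000, 110_000):
        print(s, f"{cosine_lr(s):.6f}")  # 0.000096, 0.000092, 0.000090: matches the LR column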
[03/29 14:16:15 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000092 Step: 100100 Total Loss: 0.0399 Recon Loss: 0.0297 [03/29 14:17:13 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000092 Step: 100200 Total Loss: 0.0438 Recon Loss: 0.0282 [03/29 14:18:11 TiTok]: Data (t): 0.0033, 62.08/s/gpu Batch (t): 0.5799 LR: 0.000092 Step: 100300 Total Loss: 0.0401 Recon Loss: 0.0282 [03/29 14:19:09 TiTok]: Data (t): 0.0033, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000092 Step: 100400 Total Loss: 0.0448 Recon Loss: 0.0290 [03/29 14:20:07 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000092 Step: 100500 Total Loss: 0.0404 Recon Loss: 0.0296 [03/29 14:21:05 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000092 Step: 100600 Total Loss: 0.0397 Recon Loss: 0.0292 [03/29 14:22:03 TiTok]: Data (t): 0.0035, 62.04/s/gpu Batch (t): 0.5803 LR: 0.000092 Step: 100700 Total Loss: 0.0425 Recon Loss: 0.0290 [03/29 14:23:01 TiTok]: Data (t): 0.0034, 61.75/s/gpu Batch (t): 0.5830 LR: 0.000092 Step: 100800 Total Loss: 0.0405 Recon Loss: 0.0263 [03/29 14:24:00 TiTok]: Data (t): 0.0035, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000092 Step: 100900 Total Loss: 0.0412 Recon Loss: 0.0294 [03/29 14:24:58 TiTok]: Data (t): 0.0033, 51.93/s/gpu Batch (t): 0.6932 LR: 0.000092 Step: 101000 Total Loss: 0.0401 Recon Loss: 0.0287 [03/29 14:25:55 TiTok]: Data (t): 0.0033, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000092 Step: 101100 Total Loss: 0.0402 Recon Loss: 0.0274 [03/29 14:26:53 TiTok]: Data (t): 0.0034, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000092 Step: 101200 Total Loss: 0.0487 Recon Loss: 0.0312 [03/29 14:27:51 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000092 Step: 101300 Total Loss: 0.0401 Recon Loss: 0.0270 [03/29 14:28:49 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000092 Step: 101400 Total Loss: 0.0389 Recon Loss: 0.0286 [03/29 14:29:47 TiTok]: Data (t): 0.0033, 62.26/s/gpu Batch (t): 0.5783 LR: 0.000092 Step: 101500 Total Loss: 0.0433 Recon Loss: 0.0274 [03/29 14:30:45 TiTok]: Data (t): 0.0034, 62.20/s/gpu Batch (t): 0.5787 LR: 0.000092 Step: 101600 Total Loss: 0.0424 Recon Loss: 0.0287 [03/29 14:31:43 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000092 Step: 101700 Total Loss: 0.0419 Recon Loss: 0.0298 [03/29 14:32:41 TiTok]: Data (t): 0.0035, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000092 Step: 101800 Total Loss: 0.0413 Recon Loss: 0.0281 [03/29 14:33:38 TiTok]: Data (t): 0.0035, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000092 Step: 101900 Total Loss: 0.0428 Recon Loss: 0.0301 [03/29 14:34:36 TiTok]: Data (t): 0.0035, 56.40/s/gpu Batch (t): 0.6383 LR: 0.000092 Step: 102000 Total Loss: 0.0382 Recon Loss: 0.0271 [03/29 14:35:34 TiTok]: Data (t): 0.0034, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000092 Step: 102100 Total Loss: 0.0419 Recon Loss: 0.0287 [03/29 14:36:32 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000092 Step: 102200 Total Loss: 0.0407 Recon Loss: 0.0274 [03/29 14:37:32 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000092 Step: 102300 Total Loss: 0.0399 Recon Loss: 0.0277 [03/29 14:38:30 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5774 LR: 0.000092 Step: 102400 Total Loss: 0.0383 Recon Loss: 0.0294 [03/29 14:39:28 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000092 Step: 102500 Total Loss: 0.0449 Recon Loss: 0.0303 [03/29 14:40:26 TiTok]: Data (t): 0.0034, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000092 Step: 102600 Total Loss: 0.0402 Recon Loss: 0.0299 [03/29 14:41:24 
TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000092 Step: 102700 Total Loss: 0.0417 Recon Loss: 0.0289 [03/29 14:42:22 TiTok]: Data (t): 0.0034, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000092 Step: 102800 Total Loss: 0.0381 Recon Loss: 0.0269 [03/29 14:43:19 TiTok]: Data (t): 0.0034, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000092 Step: 102900 Total Loss: 0.0439 Recon Loss: 0.0301 [03/29 14:44:18 TiTok]: Data (t): 0.0034, 56.54/s/gpu Batch (t): 0.6367 LR: 0.000092 Step: 103000 Total Loss: 0.0410 Recon Loss: 0.0275 [03/29 14:45:15 TiTok]: Data (t): 0.0034, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000092 Step: 103100 Total Loss: 0.0441 Recon Loss: 0.0291 [03/29 14:46:14 TiTok]: Data (t): 0.0032, 59.13/s/gpu Batch (t): 0.6089 LR: 0.000092 Step: 103200 Total Loss: 0.0390 Recon Loss: 0.0277 [03/29 14:47:12 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000092 Step: 103300 Total Loss: 0.0399 Recon Loss: 0.0271 [03/29 14:48:10 TiTok]: Data (t): 0.0033, 61.06/s/gpu Batch (t): 0.5896 LR: 0.000092 Step: 103400 Total Loss: 0.0401 Recon Loss: 0.0292 [03/29 14:49:08 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000092 Step: 103500 Total Loss: 0.0425 Recon Loss: 0.0292 [03/29 14:50:06 TiTok]: Data (t): 0.0035, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000091 Step: 103600 Total Loss: 0.0408 Recon Loss: 0.0299 [03/29 14:51:04 TiTok]: Data (t): 0.0035, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000091 Step: 103700 Total Loss: 0.0384 Recon Loss: 0.0273 [03/29 14:52:02 TiTok]: Data (t): 0.0034, 61.48/s/gpu Batch (t): 0.5856 LR: 0.000091 Step: 103800 Total Loss: 0.0437 Recon Loss: 0.0318 [03/29 14:53:00 TiTok]: Data (t): 0.0033, 61.62/s/gpu Batch (t): 0.5842 LR: 0.000091 Step: 103900 Total Loss: 0.0386 Recon Loss: 0.0275 [03/29 14:53:59 TiTok]: Data (t): 0.0033, 55.82/s/gpu Batch (t): 0.6449 LR: 0.000091 Step: 104000 Total Loss: 0.0377 Recon Loss: 0.0282 [03/29 14:54:58 TiTok]: Data (t): 0.0034, 61.58/s/gpu Batch (t): 0.5846 LR: 0.000091 Step: 104100 Total Loss: 0.0383 Recon Loss: 0.0280 [03/29 14:55:56 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000091 Step: 104200 Total Loss: 0.0424 Recon Loss: 0.0300 [03/29 14:56:54 TiTok]: Data (t): 0.0033, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000091 Step: 104300 Total Loss: 0.0419 Recon Loss: 0.0300 [03/29 14:57:52 TiTok]: Data (t): 0.0034, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000091 Step: 104400 Total Loss: 0.0395 Recon Loss: 0.0274 [03/29 14:58:49 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000091 Step: 104500 Total Loss: 0.0409 Recon Loss: 0.0287 [03/29 14:59:47 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000091 Step: 104600 Total Loss: 0.0406 Recon Loss: 0.0311 [03/29 15:00:45 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000091 Step: 104700 Total Loss: 0.0432 Recon Loss: 0.0290 [03/29 15:01:43 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000091 Step: 104800 Total Loss: 0.0405 Recon Loss: 0.0268 [03/29 15:02:41 TiTok]: Data (t): 0.0034, 61.11/s/gpu Batch (t): 0.5891 LR: 0.000091 Step: 104900 Total Loss: 0.0399 Recon Loss: 0.0284 [03/29 15:03:38 TiTok]: Data (t): 0.0033, 56.55/s/gpu Batch (t): 0.6367 LR: 0.000091 Step: 105000 Total Loss: 0.0427 Recon Loss: 0.0285 [03/29 15:04:36 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000091 Step: 105100 Total Loss: 0.0394 Recon Loss: 0.0273 [03/29 15:05:34 TiTok]: Data (t): 0.0033, 59.31/s/gpu Batch (t): 0.6069 LR: 0.000091 Step: 105200 Total Loss: 0.0408 Recon Loss: 0.0295 [03/29 15:06:32 TiTok]: Data (t): 
0.0051, 58.67/s/gpu Batch (t): 0.6136 LR: 0.000091 Step: 105300 Total Loss: 0.0420 Recon Loss: 0.0290 [03/29 15:07:30 TiTok]: Data (t): 0.0031, 62.17/s/gpu Batch (t): 0.5791 LR: 0.000091 Step: 105400 Total Loss: 0.0407 Recon Loss: 0.0280 [03/29 15:08:29 TiTok]: Data (t): 0.0034, 62.00/s/gpu Batch (t): 0.5807 LR: 0.000091 Step: 105500 Total Loss: 0.0441 Recon Loss: 0.0303 [03/29 15:09:26 TiTok]: Data (t): 0.0034, 61.82/s/gpu Batch (t): 0.5823 LR: 0.000091 Step: 105600 Total Loss: 0.0397 Recon Loss: 0.0284 [03/29 15:10:24 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000091 Step: 105700 Total Loss: 0.0379 Recon Loss: 0.0256 [03/29 15:11:22 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000091 Step: 105800 Total Loss: 0.0405 Recon Loss: 0.0290 [03/29 15:12:20 TiTok]: Data (t): 0.0034, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000091 Step: 105900 Total Loss: 0.0418 Recon Loss: 0.0277 [03/29 15:13:18 TiTok]: Data (t): 0.0033, 56.47/s/gpu Batch (t): 0.6375 LR: 0.000091 Step: 106000 Total Loss: 0.0369 Recon Loss: 0.0271 [03/29 15:14:16 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000091 Step: 106100 Total Loss: 0.0407 Recon Loss: 0.0294 [03/29 15:15:13 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000091 Step: 106200 Total Loss: 0.0435 Recon Loss: 0.0297 [03/29 15:16:11 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000091 Step: 106300 Total Loss: 0.0397 Recon Loss: 0.0297 [03/29 15:17:09 TiTok]: Data (t): 0.0034, 61.90/s/gpu Batch (t): 0.5816 LR: 0.000091 Step: 106400 Total Loss: 0.0398 Recon Loss: 0.0283 [03/29 15:18:07 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000091 Step: 106500 Total Loss: 0.0398 Recon Loss: 0.0280 [03/29 15:19:04 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000091 Step: 106600 Total Loss: 0.0429 Recon Loss: 0.0286 [03/29 15:20:02 TiTok]: Data (t): 0.0034, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000091 Step: 106700 Total Loss: 0.0390 Recon Loss: 0.0287 [03/29 15:21:02 TiTok]: Data (t): 0.0034, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000091 Step: 106800 Total Loss: 0.0419 Recon Loss: 0.0286 [03/29 15:22:00 TiTok]: Data (t): 0.0032, 61.44/s/gpu Batch (t): 0.5859 LR: 0.000091 Step: 106900 Total Loss: 0.0423 Recon Loss: 0.0306 [03/29 15:22:58 TiTok]: Data (t): 0.0032, 55.92/s/gpu Batch (t): 0.6438 LR: 0.000091 Step: 107000 Total Loss: 0.0391 Recon Loss: 0.0290 [03/29 15:23:56 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000091 Step: 107100 Total Loss: 0.0410 Recon Loss: 0.0276 [03/29 15:24:54 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000091 Step: 107200 Total Loss: 0.0404 Recon Loss: 0.0280 [03/29 15:25:52 TiTok]: Data (t): 0.0032, 61.88/s/gpu Batch (t): 0.5818 LR: 0.000091 Step: 107300 Total Loss: 0.0428 Recon Loss: 0.0307 [03/29 15:26:49 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000091 Step: 107400 Total Loss: 0.0397 Recon Loss: 0.0285 [03/29 15:27:47 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000091 Step: 107500 Total Loss: 0.0386 Recon Loss: 0.0266 [03/29 15:28:45 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000091 Step: 107600 Total Loss: 0.0445 Recon Loss: 0.0297 [03/29 15:29:43 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000091 Step: 107700 Total Loss: 0.0399 Recon Loss: 0.0290 [03/29 15:30:42 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000091 Step: 107800 Total Loss: 0.0416 Recon Loss: 0.0286 [03/29 15:31:40 TiTok]: Data (t): 0.0033, 
62.32/s/gpu Batch (t): 0.5777 LR: 0.000091 Step: 107900 Total Loss: 0.0416 Recon Loss: 0.0289 [03/29 15:32:38 TiTok]: Data (t): 0.0033, 56.62/s/gpu Batch (t): 0.6358 LR: 0.000091 Step: 108000 Total Loss: 0.0426 Recon Loss: 0.0265 [03/29 15:33:36 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000091 Step: 108100 Total Loss: 0.0405 Recon Loss: 0.0277 [03/29 15:34:33 TiTok]: Data (t): 0.0032, 62.12/s/gpu Batch (t): 0.5796 LR: 0.000091 Step: 108200 Total Loss: 0.0482 Recon Loss: 0.0319 [03/29 15:35:31 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000091 Step: 108300 Total Loss: 0.0407 Recon Loss: 0.0284 [03/29 15:36:29 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000091 Step: 108400 Total Loss: 0.0397 Recon Loss: 0.0267 [03/29 15:37:27 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000091 Step: 108500 Total Loss: 0.0407 Recon Loss: 0.0281 [03/29 15:38:25 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000091 Step: 108600 Total Loss: 0.0410 Recon Loss: 0.0295 [03/29 15:39:22 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000091 Step: 108700 Total Loss: 0.0369 Recon Loss: 0.0282 [03/29 15:40:20 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000091 Step: 108800 Total Loss: 0.0416 Recon Loss: 0.0291 [03/29 15:41:18 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000091 Step: 108900 Total Loss: 0.0417 Recon Loss: 0.0290 [03/29 15:42:16 TiTok]: Data (t): 0.0033, 56.49/s/gpu Batch (t): 0.6372 LR: 0.000091 Step: 109000 Total Loss: 0.0402 Recon Loss: 0.0283 [03/29 15:43:14 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000091 Step: 109100 Total Loss: 0.0434 Recon Loss: 0.0299 [03/29 15:44:11 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000091 Step: 109200 Total Loss: 0.0399 Recon Loss: 0.0277 [03/29 15:45:09 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000091 Step: 109300 Total Loss: 0.0418 Recon Loss: 0.0287 [03/29 15:46:07 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000090 Step: 109400 Total Loss: 0.0375 Recon Loss: 0.0279 [03/29 15:47:05 TiTok]: Data (t): 0.0032, 62.02/s/gpu Batch (t): 0.5805 LR: 0.000090 Step: 109500 Total Loss: 0.0433 Recon Loss: 0.0307 [03/29 15:48:03 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000090 Step: 109600 Total Loss: 0.0397 Recon Loss: 0.0272 [03/29 15:49:01 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000090 Step: 109700 Total Loss: 0.0435 Recon Loss: 0.0316 [03/29 15:49:58 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000090 Step: 109800 Total Loss: 0.0438 Recon Loss: 0.0286 [03/29 15:50:56 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000090 Step: 109900 Total Loss: 0.0401 Recon Loss: 0.0280 [03/29 15:51:54 TiTok]: Data (t): 0.0032, 56.63/s/gpu Batch (t): 0.6357 LR: 0.000090 Step: 110000 Total Loss: 0.0427 Recon Loss: 0.0286 [03/29 15:52:06 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-110000 [03/29 19:15:11 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml [03/29 19:15:11 TiTok]: Config: experiment: project: stage2 name: stage2 output_dir: /mnt/books/train_stage2/order_32_stage2/ max_train_examples: 1281167 save_every: 10000 eval_every: 1000000 generate_every: 10000 log_every: 100 log_grad_norm_every: 1000 resume: true init_weight: /mnt/books/train_stage2/order_32_stage2/checkpoint-80000/ema_model logging_dir: /mnt/books/train_stage2/order_32_stage2/logs model: 
[remaining fields identical to the initial config dump; omitted] [03/29 19:15:28 TiTok]: Creating model and loss module. [03/29 19:16:19 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml [03/29 19:16:19 TiTok]: Config: [identical to the initial config dump, now without init_weight; omitted] [03/29 19:16:37 TiTok]: Creating model and loss module. [03/29 19:16:45 TiTok]: Creating optimizers. [03/29 19:16:45 TiTok]: Creating lr_schedulers. [03/29 19:16:45 TiTok]: Creating dataloaders. [03/29 19:16:45 TiTok]: Creating evaluator. 
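The 19:15:11 attempt points init_weight at checkpoint-80000/ema_model; that file exists because use_ema: true keeps an exponential moving average of the weights alongside the live ones and saves both in every checkpoint. A minimal sketch of the usual EMA update; the decay constant is an assumption, not a value recorded in this log:

    def ema_update(ema_params, model_params, decay=0.999):
        # Classic EMA of model weights; 0.999 is a typical decay, assumed here.
        for name in ema_params:
            ema_params[name] = decay * ema_params[name] + (1.0 - decay) * model_params[name]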
[03/29 19:16:46 TiTok]: Preparing model, optimizer and dataloaders [03/29 19:16:47 TiTok]: ***** Running training ***** [03/29 19:16:47 TiTok]: Num training steps = 500000 [03/29 19:16:47 TiTok]: Gradient Accumulation steps = 1 [03/29 19:16:47 TiTok]: Instantaneous batch size per gpu = 36 [03/29 19:16:47 TiTok]: Total train batch size (w. parallel, distributed & accumulation) = 288 [03/29 19:16:47 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-60000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-50000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-80000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-30000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-100000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-10000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-110000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-90000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-20000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-40000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-70000'] [03/29 19:16:47 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-110000 [03/29 19:20:41 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml [03/29 19:20:41 TiTok]: Config: [identical to the 19:16:19 dump; omitted] [03/29 19:20:58 TiTok]: Creating model and loss module. [03/29 19:21:06 TiTok]: Creating optimizers. [03/29 19:21:06 TiTok]: Creating lr_schedulers. [03/29 19:21:06 TiTok]: Creating dataloaders. [03/29 19:21:06 TiTok]: Creating evaluator. 
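The globbed checkpoint list above is unordered, yet the loader picks checkpoint-110000, the highest step, so resumption evidently selects by numeric suffix rather than glob order. A sketch under that assumption (the function name is hypothetical):

    import glob
    import os

    def latest_checkpoint(output_dir):
        # Pick the checkpoint-<step> directory with the largest step number.
        paths = glob.glob(os.path.join(output_dir, "checkpoint-*"))
        if not paths:
            return None  # corresponds to "Training from scratch."
        return max(paths, key=lambda p: int(p.rsplit("-", 1)[-1]))

At 19:26:12 below, checkpoint-110000 no longer appears in the glob and the same selection falls back to checkpoint-100000, which is where training actually resumes.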
[03/29 19:21:06 TiTok]: Preparing model, optimizer and dataloaders [03/29 19:21:08 TiTok]: ***** Running training ***** [03/29 19:21:08 TiTok]: Num training steps = 500000 [03/29 19:21:08 TiTok]: Gradient Accumulation steps = 1 [03/29 19:21:08 TiTok]: Instantaneous batch size per gpu = 36 [03/29 19:21:08 TiTok]: Total train batch size (w. parallel, distributed & accumulation) = 288 [03/29 19:21:08 TiTok]: All globbed checkpoints are: [same eleven checkpoints as listed at 19:16:47; omitted] [03/29 19:21:08 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-110000 [03/29 19:25:44 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml [03/29 19:25:44 TiTok]: Config: [identical to the 19:16:19 dump; omitted] [03/29 19:26:01 TiTok]: Creating model and loss module. [03/29 19:26:10 TiTok]: Creating optimizers. [03/29 19:26:10 TiTok]: Creating lr_schedulers. [03/29 19:26:10 TiTok]: Creating dataloaders. [03/29 19:26:10 TiTok]: Creating evaluator. 
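The recurring throughput figures are internally consistent and fix the scale of the run: 36 images per GPU over a ~0.577 s step gives the logged ~62.4/s/gpu, and the total batch of 288 implies 8 GPUs, or about 4450 steps per ImageNet epoch. Worked out below with numbers taken from the log itself, not from the training code:

    per_gpu_batch = 36        # "Instantaneous batch size per gpu = 36"
    batch_time_s = 0.577      # typical "Batch (t)" value
    total_batch = 288         # "Total train batch size ... = 288"
    train_images = 1_281_167  # max_train_examples

    print(per_gpu_batch / batch_time_s)  # ~62.4 images/s/gpu, matches the log
    print(total_batch // per_gpu_batch)  # 8 GPUs
    print(train_images / total_batch)    # ~4449 steps per epoch
    print(3600 / batch_time_s)           # ~6240 steps/hour, i.e. ~100 steps per 58 s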
[03/29 19:26:10 TiTok]: Preparing model, optimizer and dataloaders [03/29 19:26:12 TiTok]: ***** Running training ***** [03/29 19:26:12 TiTok]:  Num training steps = 500000 [03/29 19:26:12 TiTok]:  Gradient Accumulation steps = 1 [03/29 19:26:12 TiTok]:  Instantaneous batch size per gpu = 36 [03/29 19:26:12 TiTok]:  Total train batch size (w. parallel, distributed & accumulation) = 288 [03/29 19:26:12 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-60000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-50000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-80000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-30000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-100000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-10000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-90000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-20000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-40000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-70000'] [03/29 19:26:12 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-100000 [03/29 19:26:27 TiTok]: Resuming at global_step 100000 [03/29 19:27:42 TiTok]: Data (t): 0.0033, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000092 Step: 100100 Total Loss: 0.0398 Recon Loss: 0.0288 [03/29 19:28:40 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000092 Step: 100200 Total Loss: 0.0423 Recon Loss: 0.0285 [03/29 19:29:38 TiTok]: Data (t): 0.0032, 62.11/s/gpu Batch (t): 0.5796 LR: 0.000092 Step: 100300 Total Loss: 0.0392 Recon Loss: 0.0269 [03/29 19:30:36 TiTok]: Data (t): 0.0032, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000092 Step: 100400 Total Loss: 0.0418 Recon Loss: 0.0291 [03/29 19:31:34 TiTok]: Data (t): 0.0031, 61.86/s/gpu Batch (t): 0.5820 LR: 0.000092 Step: 100500 Total Loss: 0.0379 Recon Loss: 0.0261 [03/29 19:32:32 TiTok]: Data (t): 0.0031, 61.89/s/gpu Batch (t): 0.5817 LR: 0.000092 Step: 100600 Total Loss: 0.0431 Recon Loss: 0.0292 [03/29 19:33:30 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000092 Step: 100700 Total Loss: 0.0418 Recon Loss: 0.0282 [03/29 19:34:27 TiTok]: Data (t): 0.0031, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000092 Step: 100800 Total Loss: 0.0415 Recon Loss: 0.0266 [03/29 19:35:25 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000092 Step: 100900 Total Loss: 0.0421 Recon Loss: 0.0277 [03/29 19:36:23 TiTok]: Data (t): 0.0032, 55.39/s/gpu Batch (t): 0.6499 LR: 0.000092 Step: 101000 Total Loss: 0.0423 Recon Loss: 0.0304 [03/29 19:37:21 TiTok]: Data (t): 0.0031, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000092 Step: 101100 Total Loss: 0.0398 Recon Loss: 0.0263 [03/29 19:38:18 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000092 Step: 101200 Total Loss: 0.0416 Recon Loss: 0.0266 [03/29 19:39:16 TiTok]: Data (t): 0.0031, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000092 Step: 101300 Total Loss: 0.0432 Recon Loss: 0.0303 [03/29 19:40:14 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000092 Step: 101400 Total Loss: 0.0453 Recon Loss: 0.0308 [03/29 19:41:11 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000092 Step: 101500 Total Loss: 0.0407 Recon Loss: 0.0263 [03/29 19:42:09 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000092 Step: 101600 Total Loss: 0.0410 Recon Loss: 0.0286 [03/29 19:43:07 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000092 Step: 101700 Total Loss: 0.0398 Recon Loss: 0.0277 [03/29 19:44:05 TiTok]: Data (t): 
0.0032, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000092 Step: 101800 Total Loss: 0.0436 Recon Loss: 0.0291 [03/29 19:45:02 TiTok]: Data (t): 0.0031, 61.81/s/gpu Batch (t): 0.5825 LR: 0.000092 Step: 101900 Total Loss: 0.0401 Recon Loss: 0.0286 [03/29 19:46:00 TiTok]: Data (t): 0.0033, 53.44/s/gpu Batch (t): 0.6737 LR: 0.000092 Step: 102000 Total Loss: 0.0405 Recon Loss: 0.0296 [03/29 19:46:58 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000092 Step: 102100 Total Loss: 0.0421 Recon Loss: 0.0281 [03/29 19:47:56 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000092 Step: 102200 Total Loss: 0.0415 Recon Loss: 0.0283 [03/29 19:48:54 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000092 Step: 102300 Total Loss: 0.0417 Recon Loss: 0.0299 [03/29 19:49:52 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000092 Step: 102400 Total Loss: 0.0428 Recon Loss: 0.0300 [03/29 19:50:50 TiTok]: Data (t): 0.0034, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000092 Step: 102500 Total Loss: 0.0377 Recon Loss: 0.0269 [03/29 19:51:47 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000092 Step: 102600 Total Loss: 0.0427 Recon Loss: 0.0290 [03/29 19:52:45 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000092 Step: 102700 Total Loss: 0.0384 Recon Loss: 0.0270 [03/29 19:53:43 TiTok]: Data (t): 0.0032, 59.61/s/gpu Batch (t): 0.6039 LR: 0.000092 Step: 102800 Total Loss: 0.0451 Recon Loss: 0.0304 [03/29 19:54:41 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000092 Step: 102900 Total Loss: 0.0390 Recon Loss: 0.0282 [03/29 19:55:39 TiTok]: Data (t): 0.0034, 56.70/s/gpu Batch (t): 0.6349 LR: 0.000092 Step: 103000 Total Loss: 0.0390 Recon Loss: 0.0277 [03/29 19:56:36 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000092 Step: 103100 Total Loss: 0.0383 Recon Loss: 0.0277 [03/29 19:57:34 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000092 Step: 103200 Total Loss: 0.0398 Recon Loss: 0.0294 [03/29 19:58:32 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000092 Step: 103300 Total Loss: 0.0403 Recon Loss: 0.0287 [03/29 19:59:29 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000092 Step: 103400 Total Loss: 0.0378 Recon Loss: 0.0271 [03/29 20:00:27 TiTok]: Data (t): 0.0032, 61.95/s/gpu Batch (t): 0.5811 LR: 0.000092 Step: 103500 Total Loss: 0.0401 Recon Loss: 0.0291 [03/29 20:01:25 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000091 Step: 103600 Total Loss: 0.0425 Recon Loss: 0.0283 [03/29 20:02:22 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000091 Step: 103700 Total Loss: 0.0388 Recon Loss: 0.0283 [03/29 20:03:20 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000091 Step: 103800 Total Loss: 0.0428 Recon Loss: 0.0318 [03/29 20:04:18 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000091 Step: 103900 Total Loss: 0.0428 Recon Loss: 0.0289 [03/29 20:05:15 TiTok]: Data (t): 0.0031, 56.77/s/gpu Batch (t): 0.6342 LR: 0.000091 Step: 104000 Total Loss: 0.0398 Recon Loss: 0.0270 [03/29 20:06:13 TiTok]: Data (t): 0.0031, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000091 Step: 104100 Total Loss: 0.0384 Recon Loss: 0.0269 [03/29 20:07:11 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000091 Step: 104200 Total Loss: 0.0429 Recon Loss: 0.0288 [03/29 20:08:09 TiTok]: Data (t): 0.0050, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000091 Step: 104300 Total Loss: 0.0395 Recon Loss: 0.0271 [03/29 20:09:07 TiTok]: Data (t): 0.0032, 
58.14/s/gpu Batch (t): 0.6192 LR: 0.000091 Step: 104400 Total Loss: 0.0483 Recon Loss: 0.0341 [03/29 20:10:06 TiTok]: Data (t): 0.0033, 59.54/s/gpu Batch (t): 0.6046 LR: 0.000091 Step: 104500 Total Loss: 0.0433 Recon Loss: 0.0293 [03/29 20:11:04 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000091 Step: 104600 Total Loss: 0.0398 Recon Loss: 0.0272 [03/29 20:12:02 TiTok]: Data (t): 0.0034, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000091 Step: 104700 Total Loss: 0.0437 Recon Loss: 0.0305 [03/29 20:13:00 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000091 Step: 104800 Total Loss: 0.0403 Recon Loss: 0.0292 [03/29 20:13:58 TiTok]: Data (t): 0.0031, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000091 Step: 104900 Total Loss: 0.0417 Recon Loss: 0.0282 [03/29 20:14:56 TiTok]: Data (t): 0.0033, 56.89/s/gpu Batch (t): 0.6328 LR: 0.000091 Step: 105000 Total Loss: 0.0442 Recon Loss: 0.0292 [03/29 20:15:54 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000091 Step: 105100 Total Loss: 0.0416 Recon Loss: 0.0291 [03/29 20:16:52 TiTok]: Data (t): 0.0031, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000091 Step: 105200 Total Loss: 0.0403 Recon Loss: 0.0293 [03/29 20:17:49 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000091 Step: 105300 Total Loss: 0.0414 Recon Loss: 0.0288 [03/29 20:18:47 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000091 Step: 105400 Total Loss: 0.0417 Recon Loss: 0.0280 [03/29 20:19:45 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000091 Step: 105500 Total Loss: 0.0426 Recon Loss: 0.0286 [03/29 20:20:43 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000091 Step: 105600 Total Loss: 0.0392 Recon Loss: 0.0266 [03/29 20:21:40 TiTok]: Data (t): 0.0034, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000091 Step: 105700 Total Loss: 0.0403 Recon Loss: 0.0289 [03/29 20:22:38 TiTok]: Data (t): 0.0032, 59.10/s/gpu Batch (t): 0.6091 LR: 0.000091 Step: 105800 Total Loss: 0.0404 Recon Loss: 0.0284 [03/29 20:23:36 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5751 LR: 0.000091 Step: 105900 Total Loss: 0.0428 Recon Loss: 0.0285 [03/29 20:24:34 TiTok]: Data (t): 0.0033, 56.85/s/gpu Batch (t): 0.6332 LR: 0.000091 Step: 106000 Total Loss: 0.0406 Recon Loss: 0.0299 [03/29 20:25:31 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000091 Step: 106100 Total Loss: 0.0418 Recon Loss: 0.0291 [03/29 20:26:29 TiTok]: Data (t): 0.0033, 62.10/s/gpu Batch (t): 0.5797 LR: 0.000091 Step: 106200 Total Loss: 0.0424 Recon Loss: 0.0290 [03/29 20:27:27 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000091 Step: 106300 Total Loss: 0.0425 Recon Loss: 0.0290 [03/29 20:28:25 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000091 Step: 106400 Total Loss: 0.0404 Recon Loss: 0.0279 [03/29 20:29:23 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000091 Step: 106500 Total Loss: 0.0412 Recon Loss: 0.0285 [03/29 20:30:20 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000091 Step: 106600 Total Loss: 0.0435 Recon Loss: 0.0295 [03/29 20:31:18 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000091 Step: 106700 Total Loss: 0.0433 Recon Loss: 0.0270 [03/29 20:32:16 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000091 Step: 106800 Total Loss: 0.0425 Recon Loss: 0.0282 [03/29 20:33:14 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000091 Step: 106900 Total Loss: 0.0398 Recon Loss: 0.0286 [03/29 20:34:11 TiTok]: Data (t): 0.0032, 56.91/s/gpu Batch 
(t): 0.6326 LR: 0.000091 Step: 107000 Total Loss: 0.0424 Recon Loss: 0.0298 [03/29 20:35:09 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000091 Step: 107100 Total Loss: 0.0408 Recon Loss: 0.0272 [03/29 20:36:07 TiTok]: Data (t): 0.0031, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000091 Step: 107200 Total Loss: 0.0411 Recon Loss: 0.0290 [03/29 20:37:05 TiTok]: Data (t): 0.0031, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000091 Step: 107300 Total Loss: 0.0419 Recon Loss: 0.0297 [03/29 20:38:03 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000091 Step: 107400 Total Loss: 0.0423 Recon Loss: 0.0288 [03/29 20:39:01 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000091 Step: 107500 Total Loss: 0.0377 Recon Loss: 0.0287 [03/29 20:39:59 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000091 Step: 107600 Total Loss: 0.0414 Recon Loss: 0.0289 [03/29 20:40:56 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000091 Step: 107700 Total Loss: 0.0424 Recon Loss: 0.0271 [03/29 20:41:54 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000091 Step: 107800 Total Loss: 0.0434 Recon Loss: 0.0297 [03/29 20:42:52 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000091 Step: 107900 Total Loss: 0.0409 Recon Loss: 0.0292 [03/29 20:43:50 TiTok]: Data (t): 0.0033, 56.59/s/gpu Batch (t): 0.6361 LR: 0.000091 Step: 108000 Total Loss: 0.0424 Recon Loss: 0.0304 [03/29 20:44:47 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000091 Step: 108100 Total Loss: 0.0423 Recon Loss: 0.0298 [03/29 20:45:46 TiTok]: Data (t): 0.0034, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000091 Step: 108200 Total Loss: 0.0407 Recon Loss: 0.0273 [03/29 20:46:44 TiTok]: Data (t): 0.0032, 59.51/s/gpu Batch (t): 0.6049 LR: 0.000091 Step: 108300 Total Loss: 0.0414 Recon Loss: 0.0277 [03/29 20:47:42 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000091 Step: 108400 Total Loss: 0.0431 Recon Loss: 0.0296 [03/29 20:48:40 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000091 Step: 108500 Total Loss: 0.0429 Recon Loss: 0.0291 [03/29 20:49:38 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000091 Step: 108600 Total Loss: 0.0414 Recon Loss: 0.0287 [03/29 20:50:35 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000091 Step: 108700 Total Loss: 0.0441 Recon Loss: 0.0282 [03/29 20:51:33 TiTok]: Data (t): 0.0032, 61.47/s/gpu Batch (t): 0.5856 LR: 0.000091 Step: 108800 Total Loss: 0.0449 Recon Loss: 0.0321 [03/29 20:52:31 TiTok]: Data (t): 0.0032, 62.80/s/gpu Batch (t): 0.5733 LR: 0.000091 Step: 108900 Total Loss: 0.0416 Recon Loss: 0.0290 [03/29 20:53:30 TiTok]: Data (t): 0.0033, 56.50/s/gpu Batch (t): 0.6372 LR: 0.000091 Step: 109000 Total Loss: 0.0396 Recon Loss: 0.0282 [03/29 20:54:28 TiTok]: Data (t): 0.0033, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000091 Step: 109100 Total Loss: 0.0429 Recon Loss: 0.0275 [03/29 20:55:27 TiTok]: Data (t): 0.0033, 61.27/s/gpu Batch (t): 0.5876 LR: 0.000091 Step: 109200 Total Loss: 0.0423 Recon Loss: 0.0300 [03/29 20:56:25 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000091 Step: 109300 Total Loss: 0.0425 Recon Loss: 0.0301 [03/29 20:57:23 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000090 Step: 109400 Total Loss: 0.0415 Recon Loss: 0.0284 [03/29 20:58:20 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000090 Step: 109500 Total Loss: 0.0425 Recon Loss: 0.0288 [03/29 20:59:19 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5763 LR: 
0.000090 Step: 109600 Total Loss: 0.0368 Recon Loss: 0.0245 [03/29 21:00:17 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000090 Step: 109700 Total Loss: 0.0401 Recon Loss: 0.0303 [03/29 21:01:14 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000090 Step: 109800 Total Loss: 0.0386 Recon Loss: 0.0274 [03/29 21:02:12 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000090 Step: 109900 Total Loss: 0.0420 Recon Loss: 0.0297 [03/29 21:03:10 TiTok]: Data (t): 0.0034, 56.57/s/gpu Batch (t): 0.6364 LR: 0.000090 Step: 110000 Total Loss: 0.0421 Recon Loss: 0.0295 [03/29 21:03:12 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-110000 [03/29 21:03:26 TiTok]: Reconstructing images... [03/29 21:04:25 TiTok]: Data (t): 0.0034, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000090 Step: 110100 Total Loss: 0.0436 Recon Loss: 0.0295 [03/29 21:05:22 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000090 Step: 110200 Total Loss: 0.0450 Recon Loss: 0.0320 [03/29 21:06:20 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000090 Step: 110300 Total Loss: 0.0392 Recon Loss: 0.0272 [03/29 21:07:18 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000090 Step: 110400 Total Loss: 0.0467 Recon Loss: 0.0311 [03/29 21:08:16 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000090 Step: 110500 Total Loss: 0.0373 Recon Loss: 0.0267 [03/29 21:09:14 TiTok]: Data (t): 0.0033, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000090 Step: 110600 Total Loss: 0.0401 Recon Loss: 0.0282 [03/29 21:10:12 TiTok]: Data (t): 0.0034, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000090 Step: 110700 Total Loss: 0.0405 Recon Loss: 0.0285 [03/29 21:11:09 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000090 Step: 110800 Total Loss: 0.0427 Recon Loss: 0.0292 [03/29 21:12:07 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000090 Step: 110900 Total Loss: 0.0429 Recon Loss: 0.0290 [03/29 21:13:05 TiTok]: Data (t): 0.0032, 52.06/s/gpu Batch (t): 0.6915 LR: 0.000090 Step: 111000 Total Loss: 0.0404 Recon Loss: 0.0295 [03/29 21:14:03 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000090 Step: 111100 Total Loss: 0.0400 Recon Loss: 0.0274 [03/29 21:15:01 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000090 Step: 111200 Total Loss: 0.0413 Recon Loss: 0.0279 [03/29 21:15:59 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000090 Step: 111300 Total Loss: 0.0416 Recon Loss: 0.0279 [03/29 21:16:56 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000090 Step: 111400 Total Loss: 0.0417 Recon Loss: 0.0301 [03/29 21:17:54 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000090 Step: 111500 Total Loss: 0.0407 Recon Loss: 0.0275 [03/29 21:18:52 TiTok]: Data (t): 0.0032, 61.97/s/gpu Batch (t): 0.5809 LR: 0.000090 Step: 111600 Total Loss: 0.0416 Recon Loss: 0.0289 [03/29 21:19:50 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000090 Step: 111700 Total Loss: 0.0423 Recon Loss: 0.0292 [03/29 21:20:47 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000090 Step: 111800 Total Loss: 0.0393 Recon Loss: 0.0291 [03/29 21:21:45 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000090 Step: 111900 Total Loss: 0.0402 Recon Loss: 0.0273 [03/29 21:22:43 TiTok]: Data (t): 0.0033, 56.23/s/gpu Batch (t): 0.6402 LR: 0.000090 Step: 112000 Total Loss: 0.0435 Recon Loss: 0.0313 [03/29 21:23:41 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000090 
Step: 112100 Total Loss: 0.0427 Recon Loss: 0.0293 [03/29 21:24:39 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000090 Step: 112200 Total Loss: 0.0399 Recon Loss: 0.0273 [03/29 21:25:36 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000090 Step: 112300 Total Loss: 0.0393 Recon Loss: 0.0289 [03/29 21:26:34 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000090 Step: 112400 Total Loss: 0.0420 Recon Loss: 0.0283 [03/29 21:27:32 TiTok]: Data (t): 0.0033, 61.98/s/gpu Batch (t): 0.5808 LR: 0.000090 Step: 112500 Total Loss: 0.0405 Recon Loss: 0.0298 [03/29 21:28:30 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000090 Step: 112600 Total Loss: 0.0393 Recon Loss: 0.0294 [03/29 21:29:28 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000090 Step: 112700 Total Loss: 0.0392 Recon Loss: 0.0277 [03/29 21:30:25 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000090 Step: 112800 Total Loss: 0.0390 Recon Loss: 0.0283 [03/29 21:31:23 TiTok]: Data (t): 0.0034, 62.19/s/gpu Batch (t): 0.5788 LR: 0.000090 Step: 112900 Total Loss: 0.0426 Recon Loss: 0.0287 [03/29 21:32:21 TiTok]: Data (t): 0.0032, 56.73/s/gpu Batch (t): 0.6346 LR: 0.000090 Step: 113000 Total Loss: 0.0395 Recon Loss: 0.0273 [03/29 21:33:19 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000090 Step: 113100 Total Loss: 0.0411 Recon Loss: 0.0292 [03/29 21:34:16 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000090 Step: 113200 Total Loss: 0.0419 Recon Loss: 0.0290 [03/29 21:35:14 TiTok]: Data (t): 0.0033, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000090 Step: 113300 Total Loss: 0.0424 Recon Loss: 0.0283 [03/29 21:36:13 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000090 Step: 113400 Total Loss: 0.0412 Recon Loss: 0.0288 [03/29 21:37:11 TiTok]: Data (t): 0.0034, 61.51/s/gpu Batch (t): 0.5852 LR: 0.000090 Step: 113500 Total Loss: 0.0389 Recon Loss: 0.0283 [03/29 21:38:09 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000090 Step: 113600 Total Loss: 0.0386 Recon Loss: 0.0261 [03/29 21:39:07 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000090 Step: 113700 Total Loss: 0.0407 Recon Loss: 0.0274 [03/29 21:40:05 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000090 Step: 113800 Total Loss: 0.0370 Recon Loss: 0.0287 [03/29 21:41:03 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000090 Step: 113900 Total Loss: 0.0401 Recon Loss: 0.0294 [03/29 21:42:00 TiTok]: Data (t): 0.0033, 56.66/s/gpu Batch (t): 0.6354 LR: 0.000090 Step: 114000 Total Loss: 0.0413 Recon Loss: 0.0301 [03/29 21:42:58 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000090 Step: 114100 Total Loss: 0.0416 Recon Loss: 0.0303 [03/29 21:43:56 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000090 Step: 114200 Total Loss: 0.0420 Recon Loss: 0.0287 [03/29 21:44:54 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000090 Step: 114300 Total Loss: 0.0392 Recon Loss: 0.0274 [03/29 21:45:52 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000090 Step: 114400 Total Loss: 0.0442 Recon Loss: 0.0289 [03/29 21:46:50 TiTok]: Data (t): 0.0032, 61.57/s/gpu Batch (t): 0.5847 LR: 0.000090 Step: 114500 Total Loss: 0.0425 Recon Loss: 0.0282 [03/29 21:47:47 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000090 Step: 114600 Total Loss: 0.0410 Recon Loss: 0.0285 [03/29 21:48:45 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000090 Step: 114700 Total 
Loss: 0.0441 Recon Loss: 0.0306 [03/29 21:49:43 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000090 Step: 114800 Total Loss: 0.0369 Recon Loss: 0.0268 [03/29 21:50:40 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000090 Step: 114900 Total Loss: 0.0412 Recon Loss: 0.0286 [03/29 21:51:38 TiTok]: Data (t): 0.0032, 56.71/s/gpu Batch (t): 0.6349 LR: 0.000089 Step: 115000 Total Loss: 0.0386 Recon Loss: 0.0279 [03/29 21:52:36 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000089 Step: 115100 Total Loss: 0.0427 Recon Loss: 0.0303 [03/29 21:53:34 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000089 Step: 115200 Total Loss: 0.0389 Recon Loss: 0.0272 [03/29 21:54:31 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000089 Step: 115300 Total Loss: 0.0377 Recon Loss: 0.0288 [03/29 21:55:29 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000089 Step: 115400 Total Loss: 0.0407 Recon Loss: 0.0296 [03/29 21:56:27 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000089 Step: 115500 Total Loss: 0.0464 Recon Loss: 0.0312 [03/29 21:57:25 TiTok]: Data (t): 0.0033, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000089 Step: 115600 Total Loss: 0.0405 Recon Loss: 0.0265 [03/29 21:58:22 TiTok]: Data (t): 0.0032, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000089 Step: 115700 Total Loss: 0.0414 Recon Loss: 0.0289 [03/29 21:59:20 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000089 Step: 115800 Total Loss: 0.0398 Recon Loss: 0.0290 [03/29 22:00:18 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000089 Step: 115900 Total Loss: 0.0428 Recon Loss: 0.0313 [03/29 22:01:16 TiTok]: Data (t): 0.0033, 56.75/s/gpu Batch (t): 0.6343 LR: 0.000089 Step: 116000 Total Loss: 0.0410 Recon Loss: 0.0287 [03/29 22:02:14 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000089 Step: 116100 Total Loss: 0.0431 Recon Loss: 0.0293 [03/29 22:03:11 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000089 Step: 116200 Total Loss: 0.0401 Recon Loss: 0.0282 [03/29 22:04:09 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000089 Step: 116300 Total Loss: 0.0434 Recon Loss: 0.0291 [03/29 22:05:07 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000089 Step: 116400 Total Loss: 0.0418 Recon Loss: 0.0288 [03/29 22:06:05 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000089 Step: 116500 Total Loss: 0.0416 Recon Loss: 0.0294 [03/29 22:07:03 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000089 Step: 116600 Total Loss: 0.0413 Recon Loss: 0.0289 [03/29 22:08:00 TiTok]: Data (t): 0.0032, 62.02/s/gpu Batch (t): 0.5805 LR: 0.000089 Step: 116700 Total Loss: 0.0386 Recon Loss: 0.0276 [03/29 22:08:58 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000089 Step: 116800 Total Loss: 0.0424 Recon Loss: 0.0278 [03/29 22:09:56 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000089 Step: 116900 Total Loss: 0.0460 Recon Loss: 0.0299 [03/29 22:10:54 TiTok]: Data (t): 0.0033, 56.74/s/gpu Batch (t): 0.6345 LR: 0.000089 Step: 117000 Total Loss: 0.0393 Recon Loss: 0.0262 [03/29 22:11:51 TiTok]: Data (t): 0.0033, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000089 Step: 117100 Total Loss: 0.0408 Recon Loss: 0.0282 [03/29 22:12:49 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000089 Step: 117200 Total Loss: 0.0423 Recon Loss: 0.0291 [03/29 22:13:47 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000089 Step: 117300 Total Loss: 0.0387 Recon 
Loss: 0.0275 [03/29 22:14:45 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000089 Step: 117400 Total Loss: 0.0390 Recon Loss: 0.0291 [03/29 22:15:43 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000089 Step: 117500 Total Loss: 0.0413 Recon Loss: 0.0295 [03/29 22:16:40 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000089 Step: 117600 Total Loss: 0.0363 Recon Loss: 0.0250 [03/29 22:17:38 TiTok]: Data (t): 0.0033, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000089 Step: 117700 Total Loss: 0.0410 Recon Loss: 0.0278 [03/29 22:18:36 TiTok]: Data (t): 0.0033, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000089 Step: 117800 Total Loss: 0.0417 Recon Loss: 0.0281 [03/29 22:19:35 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5760 LR: 0.000089 Step: 117900 Total Loss: 0.0378 Recon Loss: 0.0288 [03/29 22:20:33 TiTok]: Data (t): 0.0032, 54.31/s/gpu Batch (t): 0.6629 LR: 0.000089 Step: 118000 Total Loss: 0.0394 Recon Loss: 0.0280 [03/29 22:21:31 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000089 Step: 118100 Total Loss: 0.0431 Recon Loss: 0.0298 [03/29 22:22:29 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000089 Step: 118200 Total Loss: 0.0394 Recon Loss: 0.0291 [03/29 22:23:26 TiTok]: Data (t): 0.0031, 59.58/s/gpu Batch (t): 0.6042 LR: 0.000089 Step: 118300 Total Loss: 0.0408 Recon Loss: 0.0286 [03/29 22:24:25 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000089 Step: 118400 Total Loss: 0.0430 Recon Loss: 0.0295 [03/29 22:25:22 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000089 Step: 118500 Total Loss: 0.0424 Recon Loss: 0.0299 [03/29 22:26:20 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000089 Step: 118600 Total Loss: 0.0424 Recon Loss: 0.0301 [03/29 22:27:18 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000089 Step: 118700 Total Loss: 0.0422 Recon Loss: 0.0290 [03/29 22:28:16 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000089 Step: 118800 Total Loss: 0.0451 Recon Loss: 0.0286 [03/29 22:29:14 TiTok]: Data (t): 0.0031, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000089 Step: 118900 Total Loss: 0.0406 Recon Loss: 0.0287 [03/29 22:30:11 TiTok]: Data (t): 0.0033, 56.64/s/gpu Batch (t): 0.6356 LR: 0.000089 Step: 119000 Total Loss: 0.0421 Recon Loss: 0.0304 [03/29 22:31:09 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000089 Step: 119100 Total Loss: 0.0405 Recon Loss: 0.0279 [03/29 22:32:07 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000089 Step: 119200 Total Loss: 0.0396 Recon Loss: 0.0284 [03/29 22:33:05 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000089 Step: 119300 Total Loss: 0.0393 Recon Loss: 0.0287 [03/29 22:34:02 TiTok]: Data (t): 0.0033, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000089 Step: 119400 Total Loss: 0.0415 Recon Loss: 0.0277 [03/29 22:35:00 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000089 Step: 119500 Total Loss: 0.0391 Recon Loss: 0.0288 [03/29 22:35:58 TiTok]: Data (t): 0.0033, 61.99/s/gpu Batch (t): 0.5808 LR: 0.000089 Step: 119600 Total Loss: 0.0420 Recon Loss: 0.0290 [03/29 22:36:55 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000089 Step: 119700 Total Loss: 0.0416 Recon Loss: 0.0283 [03/29 22:37:53 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000089 Step: 119800 Total Loss: 0.0411 Recon Loss: 0.0278 [03/29 22:38:50 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000089 Step: 119900 Total Loss: 0.0411 Recon Loss: 0.0283 
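By step 119900 the logged LR has ticked down from 0.000092 at resume to 0.000089, consistent with the cosine schedule in the config (learning_rate 1e-4, warmup_steps 5000, end_lr 1e-5, max_train_steps 500000). A sketch that reproduces the logged values, assuming linear warmup followed by cosine decay from the end of warmup to max_train_steps (the exact implementation may differ in details such as where the decay window starts):

```python
import math

def lr_at(step, base_lr=1e-4, end_lr=1e-5, warmup=5_000, max_steps=500_000):
    # Linear warmup, then cosine decay from base_lr down to end_lr.
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (max_steps - warmup)
    return end_lr + 0.5 * (base_lr - end_lr) * (1 + math.cos(math.pi * progress))

print(f"{lr_at(100_000):.6f}")  # 0.000092 -- matches the log at resume
print(f"{lr_at(120_000):.6f}")  # 0.000089 -- matches the log at step 120000 below
```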
[03/29 22:39:48 TiTok]: Data (t): 0.0032, 56.62/s/gpu Batch (t): 0.6358 LR: 0.000089 Step: 120000 Total Loss: 0.0405 Recon Loss: 0.0279 [03/29 22:39:50 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-120000 [03/29 22:40:04 TiTok]: Reconstructing images... [03/29 22:41:02 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000089 Step: 120100 Total Loss: 0.0385 Recon Loss: 0.0281 [03/29 22:42:00 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000089 Step: 120200 Total Loss: 0.0405 Recon Loss: 0.0299 [03/29 22:42:57 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000088 Step: 120300 Total Loss: 0.0394 Recon Loss: 0.0266 [03/29 22:43:55 TiTok]: Data (t): 0.0032, 57.50/s/gpu Batch (t): 0.6261 LR: 0.000088 Step: 120400 Total Loss: 0.0417 Recon Loss: 0.0284 [03/29 22:44:53 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000088 Step: 120500 Total Loss: 0.0409 Recon Loss: 0.0283 [03/29 22:45:51 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000088 Step: 120600 Total Loss: 0.0414 Recon Loss: 0.0287 [03/29 22:46:48 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000088 Step: 120700 Total Loss: 0.0427 Recon Loss: 0.0304 [03/29 22:47:46 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000088 Step: 120800 Total Loss: 0.0382 Recon Loss: 0.0282 [03/29 22:48:44 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000088 Step: 120900 Total Loss: 0.0418 Recon Loss: 0.0288 [03/29 22:49:43 TiTok]: Data (t): 0.0033, 52.20/s/gpu Batch (t): 0.6896 LR: 0.000088 Step: 121000 Total Loss: 0.0408 Recon Loss: 0.0280 [03/29 22:50:40 TiTok]: Data (t): 0.0033, 62.18/s/gpu Batch (t): 0.5789 LR: 0.000088 Step: 121100 Total Loss: 0.0443 Recon Loss: 0.0312 [03/29 22:51:38 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000088 Step: 121200 Total Loss: 0.0407 Recon Loss: 0.0273 [03/29 22:52:35 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000088 Step: 121300 Total Loss: 0.0423 Recon Loss: 0.0318 [03/29 22:53:33 TiTok]: Data (t): 0.0034, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000088 Step: 121400 Total Loss: 0.0393 Recon Loss: 0.0274 [03/29 22:54:31 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000088 Step: 121500 Total Loss: 0.0408 Recon Loss: 0.0285 [03/29 22:55:28 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000088 Step: 121600 Total Loss: 0.0397 Recon Loss: 0.0283 [03/29 22:56:26 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000088 Step: 121700 Total Loss: 0.0406 Recon Loss: 0.0283 [03/29 22:57:24 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000088 Step: 121800 Total Loss: 0.0402 Recon Loss: 0.0290 [03/29 22:58:21 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000088 Step: 121900 Total Loss: 0.0413 Recon Loss: 0.0278 [03/29 22:59:19 TiTok]: Data (t): 0.0032, 56.70/s/gpu Batch (t): 0.6350 LR: 0.000088 Step: 122000 Total Loss: 0.0445 Recon Loss: 0.0307 [03/29 23:00:17 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000088 Step: 122100 Total Loss: 0.0389 Recon Loss: 0.0276 [03/29 23:01:15 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000088 Step: 122200 Total Loss: 0.0401 Recon Loss: 0.0282 [03/29 23:02:13 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000088 Step: 122300 Total Loss: 0.0424 Recon Loss: 0.0282 [03/29 23:03:11 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000088 Step: 122400 Total Loss: 0.0415 Recon Loss: 0.0279 [03/29 
23:04:09 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000088 Step: 122500 Total Loss: 0.0421 Recon Loss: 0.0291 [03/29 23:05:07 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000088 Step: 122600 Total Loss: 0.0393 Recon Loss: 0.0279 [03/29 23:06:04 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000088 Step: 122700 Total Loss: 0.0394 Recon Loss: 0.0298 [03/29 23:07:02 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5749 LR: 0.000088 Step: 122800 Total Loss: 0.0414 Recon Loss: 0.0291 [03/29 23:08:00 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000088 Step: 122900 Total Loss: 0.0396 Recon Loss: 0.0288 [03/29 23:08:57 TiTok]: Data (t): 0.0044, 56.61/s/gpu Batch (t): 0.6359 LR: 0.000088 Step: 123000 Total Loss: 0.0430 Recon Loss: 0.0298 [03/29 23:09:55 TiTok]: Data (t): 0.0032, 61.61/s/gpu Batch (t): 0.5843 LR: 0.000088 Step: 123100 Total Loss: 0.0433 Recon Loss: 0.0308 [03/29 23:10:53 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000088 Step: 123200 Total Loss: 0.0407 Recon Loss: 0.0290 [03/29 23:11:52 TiTok]: Data (t): 0.0033, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000088 Step: 123300 Total Loss: 0.0396 Recon Loss: 0.0285 [03/29 23:12:49 TiTok]: Data (t): 0.0032, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000088 Step: 123400 Total Loss: 0.0384 Recon Loss: 0.0291 [03/29 23:13:47 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000088 Step: 123500 Total Loss: 0.0412 Recon Loss: 0.0283 [03/29 23:14:45 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5751 LR: 0.000088 Step: 123600 Total Loss: 0.0423 Recon Loss: 0.0286 [03/29 23:15:42 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000088 Step: 123700 Total Loss: 0.0391 Recon Loss: 0.0281 [03/29 23:16:40 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000088 Step: 123800 Total Loss: 0.0437 Recon Loss: 0.0299 [03/29 23:17:37 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000088 Step: 123900 Total Loss: 0.0408 Recon Loss: 0.0278 [03/29 23:18:35 TiTok]: Data (t): 0.0032, 56.73/s/gpu Batch (t): 0.6345 LR: 0.000088 Step: 124000 Total Loss: 0.0413 Recon Loss: 0.0291 [03/29 23:19:33 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000088 Step: 124100 Total Loss: 0.0425 Recon Loss: 0.0280 [03/29 23:20:30 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000088 Step: 124200 Total Loss: 0.0406 Recon Loss: 0.0292 [03/29 23:21:28 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000088 Step: 124300 Total Loss: 0.0460 Recon Loss: 0.0302 [03/29 23:22:25 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000088 Step: 124400 Total Loss: 0.0408 Recon Loss: 0.0274 [03/29 23:23:23 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000088 Step: 124500 Total Loss: 0.0414 Recon Loss: 0.0281 [03/29 23:24:21 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000088 Step: 124600 Total Loss: 0.0433 Recon Loss: 0.0305 [03/29 23:25:18 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000088 Step: 124700 Total Loss: 0.0378 Recon Loss: 0.0270 [03/29 23:26:16 TiTok]: Data (t): 0.0034, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000088 Step: 124800 Total Loss: 0.0425 Recon Loss: 0.0282 [03/29 23:27:14 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000088 Step: 124900 Total Loss: 0.0381 Recon Loss: 0.0276 [03/29 23:28:12 TiTok]: Data (t): 0.0033, 56.52/s/gpu Batch (t): 0.6370 LR: 0.000088 Step: 125000 Total Loss: 0.0397 Recon Loss: 0.0282 [03/29 23:29:09 TiTok]: 
Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000088 Step: 125100 Total Loss: 0.0425 Recon Loss: 0.0270 [03/29 23:30:07 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000088 Step: 125200 Total Loss: 0.0424 Recon Loss: 0.0296 [03/29 23:31:05 TiTok]: Data (t): 0.0032, 62.11/s/gpu Batch (t): 0.5796 LR: 0.000088 Step: 125300 Total Loss: 0.0400 Recon Loss: 0.0273 [03/29 23:32:03 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000088 Step: 125400 Total Loss: 0.0388 Recon Loss: 0.0281 [03/29 23:33:01 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000087 Step: 125500 Total Loss: 0.0395 Recon Loss: 0.0282 [03/29 23:33:59 TiTok]: Data (t): 0.0033, 62.05/s/gpu Batch (t): 0.5802 LR: 0.000087 Step: 125600 Total Loss: 0.0413 Recon Loss: 0.0291 [03/29 23:34:57 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000087 Step: 125700 Total Loss: 0.0402 Recon Loss: 0.0277 [03/29 23:35:54 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000087 Step: 125800 Total Loss: 0.0436 Recon Loss: 0.0292 [03/29 23:36:52 TiTok]: Data (t): 0.0032, 62.16/s/gpu Batch (t): 0.5791 LR: 0.000087 Step: 125900 Total Loss: 0.0438 Recon Loss: 0.0288 [03/29 23:37:50 TiTok]: Data (t): 0.0032, 56.91/s/gpu Batch (t): 0.6326 LR: 0.000087 Step: 126000 Total Loss: 0.0414 Recon Loss: 0.0307 [03/29 23:38:48 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000087 Step: 126100 Total Loss: 0.0400 Recon Loss: 0.0283 [03/29 23:39:45 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000087 Step: 126200 Total Loss: 0.0420 Recon Loss: 0.0285 [03/29 23:40:43 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000087 Step: 126300 Total Loss: 0.0400 Recon Loss: 0.0276 [03/29 23:41:41 TiTok]: Data (t): 0.0034, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000087 Step: 126400 Total Loss: 0.0451 Recon Loss: 0.0310 [03/29 23:42:38 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000087 Step: 126500 Total Loss: 0.0403 Recon Loss: 0.0288 [03/29 23:43:36 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000087 Step: 126600 Total Loss: 0.0364 Recon Loss: 0.0265 [03/29 23:44:34 TiTok]: Data (t): 0.0033, 62.78/s/gpu Batch (t): 0.5734 LR: 0.000087 Step: 126700 Total Loss: 0.0446 Recon Loss: 0.0295 [03/29 23:45:33 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000087 Step: 126800 Total Loss: 0.0387 Recon Loss: 0.0274 [03/29 23:46:31 TiTok]: Data (t): 0.0032, 61.67/s/gpu Batch (t): 0.5837 LR: 0.000087 Step: 126900 Total Loss: 0.0395 Recon Loss: 0.0285 [03/29 23:47:29 TiTok]: Data (t): 0.0032, 56.38/s/gpu Batch (t): 0.6385 LR: 0.000087 Step: 127000 Total Loss: 0.0417 Recon Loss: 0.0283 [03/29 23:48:27 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000087 Step: 127100 Total Loss: 0.0412 Recon Loss: 0.0274 [03/29 23:49:24 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000087 Step: 127200 Total Loss: 0.0406 Recon Loss: 0.0283 [03/29 23:50:22 TiTok]: Data (t): 0.0034, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000087 Step: 127300 Total Loss: 0.0449 Recon Loss: 0.0302 [03/29 23:51:20 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000087 Step: 127400 Total Loss: 0.0422 Recon Loss: 0.0304 [03/29 23:52:18 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000087 Step: 127500 Total Loss: 0.0428 Recon Loss: 0.0293 [03/29 23:53:15 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000087 Step: 127600 Total Loss: 0.0416 Recon Loss: 0.0284 [03/29 23:54:13 TiTok]: Data (t): 0.0032, 
62.57/s/gpu Batch (t): 0.5754 LR: 0.000087 Step: 127700 Total Loss: 0.0417 Recon Loss: 0.0298 [03/29 23:55:12 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000087 Step: 127800 Total Loss: 0.0384 Recon Loss: 0.0267 [03/29 23:56:10 TiTok]: Data (t): 0.0034, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000087 Step: 127900 Total Loss: 0.0402 Recon Loss: 0.0291 [03/29 23:57:08 TiTok]: Data (t): 0.0034, 56.23/s/gpu Batch (t): 0.6402 LR: 0.000087 Step: 128000 Total Loss: 0.0393 Recon Loss: 0.0274 [03/29 23:58:06 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000087 Step: 128100 Total Loss: 0.0405 Recon Loss: 0.0298 [03/29 23:59:03 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000087 Step: 128200 Total Loss: 0.0390 Recon Loss: 0.0280 [03/30 00:00:01 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000087 Step: 128300 Total Loss: 0.0421 Recon Loss: 0.0308 [03/30 00:00:59 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000087 Step: 128400 Total Loss: 0.0441 Recon Loss: 0.0299 [03/30 00:01:56 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000087 Step: 128500 Total Loss: 0.0419 Recon Loss: 0.0292 [03/30 00:02:54 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000087 Step: 128600 Total Loss: 0.0383 Recon Loss: 0.0266 [03/30 00:03:52 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000087 Step: 128700 Total Loss: 0.0397 Recon Loss: 0.0282 [03/30 00:04:49 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000087 Step: 128800 Total Loss: 0.0465 Recon Loss: 0.0310 [03/30 00:05:47 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000087 Step: 128900 Total Loss: 0.0398 Recon Loss: 0.0267 [03/30 00:06:45 TiTok]: Data (t): 0.0031, 56.74/s/gpu Batch (t): 0.6345 LR: 0.000087 Step: 129000 Total Loss: 0.0388 Recon Loss: 0.0297 [03/30 00:07:43 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000087 Step: 129100 Total Loss: 0.0408 Recon Loss: 0.0281 [03/30 00:08:41 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000087 Step: 129200 Total Loss: 0.0373 Recon Loss: 0.0267 [03/30 00:09:38 TiTok]: Data (t): 0.0032, 58.58/s/gpu Batch (t): 0.6146 LR: 0.000087 Step: 129300 Total Loss: 0.0425 Recon Loss: 0.0291 [03/30 00:10:36 TiTok]: Data (t): 0.0033, 61.50/s/gpu Batch (t): 0.5854 LR: 0.000087 Step: 129400 Total Loss: 0.0401 Recon Loss: 0.0264 [03/30 00:11:34 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000087 Step: 129500 Total Loss: 0.0421 Recon Loss: 0.0284 [03/30 00:12:32 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000087 Step: 129600 Total Loss: 0.0410 Recon Loss: 0.0289 [03/30 00:13:30 TiTok]: Data (t): 0.0033, 61.94/s/gpu Batch (t): 0.5812 LR: 0.000087 Step: 129700 Total Loss: 0.0409 Recon Loss: 0.0296 [03/30 00:14:28 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000087 Step: 129800 Total Loss: 0.0373 Recon Loss: 0.0273 [03/30 00:15:26 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000087 Step: 129900 Total Loss: 0.0403 Recon Loss: 0.0296 [03/30 00:16:23 TiTok]: Data (t): 0.0033, 56.76/s/gpu Batch (t): 0.6342 LR: 0.000087 Step: 130000 Total Loss: 0.0395 Recon Loss: 0.0272 [03/30 00:16:26 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-130000 [03/30 00:16:39 TiTok]: Reconstructing images... 
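Throughout this stretch, Batch (t) holds near 0.576 s, and the throughput column is simply per_gpu_batch_size divided by the batch time: 36 / 0.576 ≈ 62.5 images/s/GPU, or ≈ 500 images/s across the total batch of 288 (8 GPUs). The slower 0.60-0.69 s entries recur at exactly the steps divisible by 1000, which lines up with log_grad_norm_every: 1000, so they are plausibly the extra cost of grad-norm logging rather than input stalls; Data (t) stays near 3 ms throughout. A quick check of the arithmetic:

```python
per_gpu_batch = 36               # training.per_gpu_batch_size
num_gpus = 288 // per_gpu_batch  # 8, from the total train batch size banner
batch_t = 0.5760                 # a typical Batch (t), in seconds

print(per_gpu_batch / batch_t)             # ~62.5 images/s/gpu, as logged
print(per_gpu_batch * num_gpus / batch_t)  # ~500 images/s across all GPUs
```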
[03/30 00:17:38 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000087 Step: 130100 Total Loss: 0.0407 Recon Loss: 0.0281 [03/30 00:18:36 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000087 Step: 130200 Total Loss: 0.0416 Recon Loss: 0.0296 [03/30 00:19:34 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000087 Step: 130300 Total Loss: 0.0425 Recon Loss: 0.0289 [03/30 00:20:31 TiTok]: Data (t): 0.0032, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000087 Step: 130400 Total Loss: 0.0389 Recon Loss: 0.0287 [03/30 00:21:29 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000086 Step: 130500 Total Loss: 0.0441 Recon Loss: 0.0322 [03/30 00:22:27 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000086 Step: 130600 Total Loss: 0.0393 Recon Loss: 0.0292 [03/30 00:23:25 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000086 Step: 130700 Total Loss: 0.0425 Recon Loss: 0.0314 [03/30 00:24:23 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000086 Step: 130800 Total Loss: 0.0399 Recon Loss: 0.0282 [03/30 00:25:20 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000086 Step: 130900 Total Loss: 0.0405 Recon Loss: 0.0272 [03/30 00:26:18 TiTok]: Data (t): 0.0032, 52.06/s/gpu Batch (t): 0.6915 LR: 0.000086 Step: 131000 Total Loss: 0.0395 Recon Loss: 0.0276 [03/30 00:27:16 TiTok]: Data (t): 0.0032, 62.09/s/gpu Batch (t): 0.5798 LR: 0.000086 Step: 131100 Total Loss: 0.0439 Recon Loss: 0.0315 [03/30 00:28:15 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000086 Step: 131200 Total Loss: 0.0419 Recon Loss: 0.0286 [03/30 00:29:13 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000086 Step: 131300 Total Loss: 0.0438 Recon Loss: 0.0304 [03/30 00:30:11 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000086 Step: 131400 Total Loss: 0.0417 Recon Loss: 0.0294 [03/30 00:31:09 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000086 Step: 131500 Total Loss: 0.0388 Recon Loss: 0.0275 [03/30 00:32:07 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000086 Step: 131600 Total Loss: 0.0394 Recon Loss: 0.0286 [03/30 00:33:04 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000086 Step: 131700 Total Loss: 0.0408 Recon Loss: 0.0304 [03/30 00:34:02 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000086 Step: 131800 Total Loss: 0.0402 Recon Loss: 0.0276 [03/30 00:35:00 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000086 Step: 131900 Total Loss: 0.0404 Recon Loss: 0.0299 [03/30 00:35:58 TiTok]: Data (t): 0.0033, 56.58/s/gpu Batch (t): 0.6362 LR: 0.000086 Step: 132000 Total Loss: 0.0391 Recon Loss: 0.0280 [03/30 00:36:55 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000086 Step: 132100 Total Loss: 0.0424 Recon Loss: 0.0285 [03/30 00:37:53 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000086 Step: 132200 Total Loss: 0.0407 Recon Loss: 0.0280 [03/30 00:38:51 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000086 Step: 132300 Total Loss: 0.0430 Recon Loss: 0.0298 [03/30 00:39:49 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000086 Step: 132400 Total Loss: 0.0366 Recon Loss: 0.0267 [03/30 00:40:47 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000086 Step: 132500 Total Loss: 0.0381 Recon Loss: 0.0291 [03/30 00:41:45 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000086 Step: 132600 Total Loss: 0.0390 Recon Loss: 0.0290 [03/30 00:42:43 
TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000086 Step: 132700 Total Loss: 0.0423 Recon Loss: 0.0292 [03/30 00:43:40 TiTok]: Data (t): 0.0034, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000086 Step: 132800 Total Loss: 0.0413 Recon Loss: 0.0293 [03/30 00:44:38 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000086 Step: 132900 Total Loss: 0.0398 Recon Loss: 0.0291 [03/30 00:45:36 TiTok]: Data (t): 0.0032, 56.63/s/gpu Batch (t): 0.6357 LR: 0.000086 Step: 133000 Total Loss: 0.0378 Recon Loss: 0.0281 [03/30 00:46:33 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000086 Step: 133100 Total Loss: 0.0385 Recon Loss: 0.0288 [03/30 00:47:31 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000086 Step: 133200 Total Loss: 0.0420 Recon Loss: 0.0287 [03/30 00:48:29 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000086 Step: 133300 Total Loss: 0.0409 Recon Loss: 0.0294 [03/30 00:49:27 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000086 Step: 133400 Total Loss: 0.0435 Recon Loss: 0.0290 [03/30 00:50:25 TiTok]: Data (t): 0.0035, 61.19/s/gpu Batch (t): 0.5883 LR: 0.000086 Step: 133500 Total Loss: 0.0398 Recon Loss: 0.0281 [03/30 00:51:23 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000086 Step: 133600 Total Loss: 0.0402 Recon Loss: 0.0293 [03/30 00:52:20 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000086 Step: 133700 Total Loss: 0.0414 Recon Loss: 0.0300 [03/30 00:53:18 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5784 LR: 0.000086 Step: 133800 Total Loss: 0.0370 Recon Loss: 0.0264 [03/30 00:54:16 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000086 Step: 133900 Total Loss: 0.0415 Recon Loss: 0.0301 [03/30 00:55:14 TiTok]: Data (t): 0.0032, 56.69/s/gpu Batch (t): 0.6350 LR: 0.000086 Step: 134000 Total Loss: 0.0393 Recon Loss: 0.0290 [03/30 00:56:11 TiTok]: Data (t): 0.0032, 62.08/s/gpu Batch (t): 0.5799 LR: 0.000086 Step: 134100 Total Loss: 0.0397 Recon Loss: 0.0288 [03/30 00:57:09 TiTok]: Data (t): 0.0034, 61.87/s/gpu Batch (t): 0.5818 LR: 0.000086 Step: 134200 Total Loss: 0.0409 Recon Loss: 0.0295 [03/30 00:58:07 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000086 Step: 134300 Total Loss: 0.0442 Recon Loss: 0.0302 [03/30 00:59:04 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000086 Step: 134400 Total Loss: 0.0390 Recon Loss: 0.0277 [03/30 01:00:02 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000086 Step: 134500 Total Loss: 0.0407 Recon Loss: 0.0301 [03/30 01:01:00 TiTok]: Data (t): 0.0031, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000086 Step: 134600 Total Loss: 0.0423 Recon Loss: 0.0293 [03/30 01:01:59 TiTok]: Data (t): 0.0032, 61.98/s/gpu Batch (t): 0.5809 LR: 0.000086 Step: 134700 Total Loss: 0.0403 Recon Loss: 0.0286 [03/30 01:02:57 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000086 Step: 134800 Total Loss: 0.0422 Recon Loss: 0.0292 [03/30 01:03:55 TiTok]: Data (t): 0.0032, 61.98/s/gpu Batch (t): 0.5808 LR: 0.000086 Step: 134900 Total Loss: 0.0405 Recon Loss: 0.0278 [03/30 01:04:53 TiTok]: Data (t): 0.0032, 56.37/s/gpu Batch (t): 0.6387 LR: 0.000086 Step: 135000 Total Loss: 0.0397 Recon Loss: 0.0280 [03/30 01:05:51 TiTok]: Data (t): 0.0033, 62.14/s/gpu Batch (t): 0.5794 LR: 0.000086 Step: 135100 Total Loss: 0.0405 Recon Loss: 0.0284 [03/30 01:06:48 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000086 Step: 135200 Total Loss: 0.0418 Recon Loss: 0.0288 [03/30 01:07:47 TiTok]: Data (t): 
0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000085 Step: 135300 Total Loss: 0.0410 Recon Loss: 0.0280 [03/30 01:08:45 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000085 Step: 135400 Total Loss: 0.0403 Recon Loss: 0.0275 [03/30 01:09:43 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000085 Step: 135500 Total Loss: 0.0415 Recon Loss: 0.0299 [03/30 01:10:40 TiTok]: Data (t): 0.0031, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000085 Step: 135600 Total Loss: 0.0395 Recon Loss: 0.0283 [03/30 01:11:39 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000085 Step: 135700 Total Loss: 0.0386 Recon Loss: 0.0263 [03/30 01:12:37 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000085 Step: 135800 Total Loss: 0.0430 Recon Loss: 0.0289 [03/30 01:13:35 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000085 Step: 135900 Total Loss: 0.0415 Recon Loss: 0.0311 [03/30 01:14:33 TiTok]: Data (t): 0.0033, 56.55/s/gpu Batch (t): 0.6367 LR: 0.000085 Step: 136000 Total Loss: 0.0439 Recon Loss: 0.0300 [03/30 01:15:30 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000085 Step: 136100 Total Loss: 0.0386 Recon Loss: 0.0272 [03/30 01:16:28 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000085 Step: 136200 Total Loss: 0.0424 Recon Loss: 0.0285 [03/30 01:17:26 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000085 Step: 136300 Total Loss: 0.0432 Recon Loss: 0.0292 [03/30 01:18:24 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000085 Step: 136400 Total Loss: 0.0413 Recon Loss: 0.0300 [03/30 01:19:21 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000085 Step: 136500 Total Loss: 0.0382 Recon Loss: 0.0273 [03/30 01:20:19 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000085 Step: 136600 Total Loss: 0.0413 Recon Loss: 0.0273 [03/30 01:21:17 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000085 Step: 136700 Total Loss: 0.0410 Recon Loss: 0.0284 [03/30 01:22:15 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000085 Step: 136800 Total Loss: 0.0409 Recon Loss: 0.0292 [03/30 01:23:13 TiTok]: Data (t): 0.0034, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000085 Step: 136900 Total Loss: 0.0399 Recon Loss: 0.0279 [03/30 01:24:11 TiTok]: Data (t): 0.0034, 56.33/s/gpu Batch (t): 0.6391 LR: 0.000085 Step: 137000 Total Loss: 0.0426 Recon Loss: 0.0302 [03/30 01:25:09 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000085 Step: 137100 Total Loss: 0.0394 Recon Loss: 0.0288 [03/30 01:26:06 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000085 Step: 137200 Total Loss: 0.0375 Recon Loss: 0.0280 [03/30 01:27:04 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000085 Step: 137300 Total Loss: 0.0383 Recon Loss: 0.0270 [03/30 01:28:02 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000085 Step: 137400 Total Loss: 0.0412 Recon Loss: 0.0287 [03/30 01:28:59 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000085 Step: 137500 Total Loss: 0.0435 Recon Loss: 0.0298 [03/30 01:29:57 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000085 Step: 137600 Total Loss: 0.0455 Recon Loss: 0.0317 [03/30 01:30:55 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000085 Step: 137700 Total Loss: 0.0393 Recon Loss: 0.0279 [03/30 01:31:53 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000085 Step: 137800 Total Loss: 0.0407 Recon Loss: 0.0289 [03/30 01:32:51 TiTok]: Data (t): 0.0032, 
62.29/s/gpu Batch (t): 0.5779 LR: 0.000085 Step: 137900 Total Loss: 0.0414 Recon Loss: 0.0279 [03/30 01:33:48 TiTok]: Data (t): 0.0032, 56.74/s/gpu Batch (t): 0.6345 LR: 0.000085 Step: 138000 Total Loss: 0.0398 Recon Loss: 0.0277 [03/30 01:34:46 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000085 Step: 138100 Total Loss: 0.0404 Recon Loss: 0.0285 [03/30 01:35:44 TiTok]: Data (t): 0.0031, 58.34/s/gpu Batch (t): 0.6171 LR: 0.000085 Step: 138200 Total Loss: 0.0433 Recon Loss: 0.0307 [03/30 01:36:42 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000085 Step: 138300 Total Loss: 0.0397 Recon Loss: 0.0288 [03/30 01:37:39 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000085 Step: 138400 Total Loss: 0.0398 Recon Loss: 0.0281 [03/30 01:38:37 TiTok]: Data (t): 0.0032, 59.40/s/gpu Batch (t): 0.6061 LR: 0.000085 Step: 138500 Total Loss: 0.0431 Recon Loss: 0.0284 [03/30 01:39:35 TiTok]: Data (t): 0.0031, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000085 Step: 138600 Total Loss: 0.0434 Recon Loss: 0.0306 [03/30 01:40:33 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000085 Step: 138700 Total Loss: 0.0373 Recon Loss: 0.0271 [03/30 01:41:31 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000085 Step: 138800 Total Loss: 0.0433 Recon Loss: 0.0292 [03/30 01:42:29 TiTok]: Data (t): 0.0033, 61.48/s/gpu Batch (t): 0.5856 LR: 0.000085 Step: 138900 Total Loss: 0.0364 Recon Loss: 0.0275 [03/30 01:43:27 TiTok]: Data (t): 0.0033, 54.18/s/gpu Batch (t): 0.6644 LR: 0.000085 Step: 139000 Total Loss: 0.0409 Recon Loss: 0.0318 [03/30 01:44:25 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5791 LR: 0.000085 Step: 139100 Total Loss: 0.0387 Recon Loss: 0.0275 [03/30 01:45:23 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5774 LR: 0.000085 Step: 139200 Total Loss: 0.0427 Recon Loss: 0.0285 [03/30 01:46:21 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000085 Step: 139300 Total Loss: 0.0396 Recon Loss: 0.0282 [03/30 01:47:19 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000085 Step: 139400 Total Loss: 0.0412 Recon Loss: 0.0276 [03/30 01:48:17 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000085 Step: 139500 Total Loss: 0.0428 Recon Loss: 0.0293 [03/30 01:49:15 TiTok]: Data (t): 0.0033, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000085 Step: 139600 Total Loss: 0.0395 Recon Loss: 0.0275 [03/30 01:50:12 TiTok]: Data (t): 0.0033, 62.05/s/gpu Batch (t): 0.5802 LR: 0.000085 Step: 139700 Total Loss: 0.0391 Recon Loss: 0.0285 [03/30 01:51:10 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000085 Step: 139800 Total Loss: 0.0388 Recon Loss: 0.0283 [03/30 01:52:08 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000085 Step: 139900 Total Loss: 0.0402 Recon Loss: 0.0276 [03/30 01:53:06 TiTok]: Data (t): 0.0034, 56.66/s/gpu Batch (t): 0.6354 LR: 0.000084 Step: 140000 Total Loss: 0.0408 Recon Loss: 0.0288 [03/30 01:53:08 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-140000 [03/30 01:53:22 TiTok]: Reconstructing images... 
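Checkpoints and the "Reconstructing images..." pass land on the same 10000-step cadence, matching save_every and generate_every in the config; with eval_every set to 1000000, no eval fires inside this window. The cadence reduces to a pair of modulo checks; a self-contained sketch with stand-in functions (stand-ins for whatever state the real trainer actually persists and generates):

```python
SAVE_EVERY = 10_000      # experiment.save_every
GENERATE_EVERY = 10_000  # experiment.generate_every

def save_state(step):
    # Stand-in for the real checkpoint write.
    print(f"Saved state to .../checkpoint-{step}")

def reconstruct_images(step):
    # Stand-in for the periodic reconstruction pass.
    print("Reconstructing images...")

def on_step_end(step):
    if step % SAVE_EVERY == 0:
        save_state(step)
    if step % GENERATE_EVERY == 0:
        reconstruct_images(step)

on_step_end(140_000)  # reproduces the pair of log lines above
```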
[03/30 01:54:21 TiTok]: Data (t): 0.0034, 61.43/s/gpu Batch (t): 0.5860 LR: 0.000084 Step: 140100 Total Loss: 0.0390 Recon Loss: 0.0294 [03/30 01:55:20 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000084 Step: 140200 Total Loss: 0.0375 Recon Loss: 0.0272 [03/30 01:56:18 TiTok]: Data (t): 0.0052, 59.03/s/gpu Batch (t): 0.6099 LR: 0.000084 Step: 140300 Total Loss: 0.0407 Recon Loss: 0.0288 [03/30 01:57:16 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000084 Step: 140400 Total Loss: 0.0402 Recon Loss: 0.0294 [03/30 01:58:14 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000084 Step: 140500 Total Loss: 0.0416 Recon Loss: 0.0287 [03/30 01:59:11 TiTok]: Data (t): 0.0034, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000084 Step: 140600 Total Loss: 0.0399 Recon Loss: 0.0278 [03/30 02:00:09 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000084 Step: 140700 Total Loss: 0.0437 Recon Loss: 0.0280 [03/30 02:01:07 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000084 Step: 140800 Total Loss: 0.0391 Recon Loss: 0.0275 [03/30 02:02:05 TiTok]: Data (t): 0.0034, 61.56/s/gpu Batch (t): 0.5848 LR: 0.000084 Step: 140900 Total Loss: 0.0414 Recon Loss: 0.0296 [03/30 02:03:03 TiTok]: Data (t): 0.0034, 52.36/s/gpu Batch (t): 0.6875 LR: 0.000084 Step: 141000 Total Loss: 0.0404 Recon Loss: 0.0289 [03/30 02:04:01 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000084 Step: 141100 Total Loss: 0.0378 Recon Loss: 0.0263 [03/30 02:04:58 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000084 Step: 141200 Total Loss: 0.0423 Recon Loss: 0.0279 [03/30 02:05:56 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000084 Step: 141300 Total Loss: 0.0446 Recon Loss: 0.0303 [03/30 02:06:54 TiTok]: Data (t): 0.0034, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000084 Step: 141400 Total Loss: 0.0406 Recon Loss: 0.0279 [03/30 02:07:53 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000084 Step: 141500 Total Loss: 0.0387 Recon Loss: 0.0253 [03/30 02:08:50 TiTok]: Data (t): 0.0032, 62.11/s/gpu Batch (t): 0.5796 LR: 0.000084 Step: 141600 Total Loss: 0.0397 Recon Loss: 0.0287 [03/30 02:09:48 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000084 Step: 141700 Total Loss: 0.0418 Recon Loss: 0.0279 [03/30 02:10:46 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000084 Step: 141800 Total Loss: 0.0405 Recon Loss: 0.0273 [03/30 02:11:44 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000084 Step: 141900 Total Loss: 0.0396 Recon Loss: 0.0299 [03/30 02:12:42 TiTok]: Data (t): 0.0032, 56.68/s/gpu Batch (t): 0.6351 LR: 0.000084 Step: 142000 Total Loss: 0.0397 Recon Loss: 0.0275 [03/30 02:13:39 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000084 Step: 142100 Total Loss: 0.0406 Recon Loss: 0.0287 [03/30 02:14:37 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000084 Step: 142200 Total Loss: 0.0415 Recon Loss: 0.0284 [03/30 02:15:35 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000084 Step: 142300 Total Loss: 0.0388 Recon Loss: 0.0290 [03/30 02:16:33 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000084 Step: 142400 Total Loss: 0.0422 Recon Loss: 0.0283 [03/30 02:17:30 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000084 Step: 142500 Total Loss: 0.0426 Recon Loss: 0.0313 [03/30 02:18:28 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000084 Step: 142600 Total Loss: 0.0374 Recon Loss: 0.0283 [03/30 02:19:26 
TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000084 Step: 142700 Total Loss: 0.0399 Recon Loss: 0.0293
[03/30 02:20:23 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000084 Step: 142800 Total Loss: 0.0371 Recon Loss: 0.0280
[03/30 02:21:21 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000084 Step: 142900 Total Loss: 0.0394 Recon Loss: 0.0287
[03/30 02:22:19 TiTok]: Data (t): 0.0032, 56.61/s/gpu Batch (t): 0.6359 LR: 0.000084 Step: 143000 Total Loss: 0.0391 Recon Loss: 0.0296
[03/30 02:23:16 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000084 Step: 143100 Total Loss: 0.0416 Recon Loss: 0.0280
[03/30 02:24:14 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000084 Step: 143200 Total Loss: 0.0408 Recon Loss: 0.0277
[03/30 02:25:12 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000084 Step: 143300 Total Loss: 0.0425 Recon Loss: 0.0293
[03/30 02:26:10 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000084 Step: 143400 Total Loss: 0.0403 Recon Loss: 0.0273
[03/30 02:27:08 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000084 Step: 143500 Total Loss: 0.0416 Recon Loss: 0.0295
[03/30 02:28:06 TiTok]: Data (t): 0.0037, 61.73/s/gpu Batch (t): 0.5831 LR: 0.000084 Step: 143600 Total Loss: 0.0360 Recon Loss: 0.0259
[03/30 02:29:03 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5774 LR: 0.000084 Step: 143700 Total Loss: 0.0405 Recon Loss: 0.0289
[03/30 02:30:02 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000084 Step: 143800 Total Loss: 0.0425 Recon Loss: 0.0301
[03/30 02:31:00 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000084 Step: 143900 Total Loss: 0.0444 Recon Loss: 0.0298
[03/30 02:31:58 TiTok]: Data (t): 0.0031, 56.73/s/gpu Batch (t): 0.6346 LR: 0.000084 Step: 144000 Total Loss: 0.0424 Recon Loss: 0.0302
[03/30 02:32:56 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000084 Step: 144100 Total Loss: 0.0388 Recon Loss: 0.0273
[03/30 02:33:54 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000084 Step: 144200 Total Loss: 0.0407 Recon Loss: 0.0274
[03/30 02:34:51 TiTok]: Data (t): 0.0031, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000084 Step: 144300 Total Loss: 0.0407 Recon Loss: 0.0291
[03/30 02:35:49 TiTok]: Data (t): 0.0033, 62.01/s/gpu Batch (t): 0.5806 LR: 0.000084 Step: 144400 Total Loss: 0.0375 Recon Loss: 0.0269
[03/30 02:36:47 TiTok]: Data (t): 0.0032, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000084 Step: 144500 Total Loss: 0.0402 Recon Loss: 0.0279
[03/30 02:37:46 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000083 Step: 144600 Total Loss: 0.0417 Recon Loss: 0.0299
[03/30 02:38:44 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000083 Step: 144700 Total Loss: 0.0396 Recon Loss: 0.0278
[03/30 02:39:42 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5784 LR: 0.000083 Step: 144800 Total Loss: 0.0416 Recon Loss: 0.0288
[03/30 02:40:40 TiTok]: Data (t): 0.0033, 64.83/s/gpu Batch (t): 0.5553 LR: 0.000083 Step: 144900 Total Loss: 0.0413 Recon Loss: 0.0282
[03/30 02:41:38 TiTok]: Data (t): 0.0031, 56.56/s/gpu Batch (t): 0.6365 LR: 0.000083 Step: 145000 Total Loss: 0.0416 Recon Loss: 0.0266
[03/30 02:42:36 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000083 Step: 145100 Total Loss: 0.0428 Recon Loss: 0.0309
[03/30 02:43:34 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000083 Step: 145200 Total Loss: 0.0391 Recon Loss: 0.0264
[03/30 02:44:32 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000083 Step: 145300 Total Loss: 0.0428 Recon Loss: 0.0306
[03/30 02:45:29 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000083 Step: 145400 Total Loss: 0.0409 Recon Loss: 0.0282
[03/30 02:46:27 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000083 Step: 145500 Total Loss: 0.0424 Recon Loss: 0.0283
[03/30 02:47:25 TiTok]: Data (t): 0.0032, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000083 Step: 145600 Total Loss: 0.0396 Recon Loss: 0.0278
[03/30 02:48:23 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000083 Step: 145700 Total Loss: 0.0392 Recon Loss: 0.0294
[03/30 02:49:21 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000083 Step: 145800 Total Loss: 0.0432 Recon Loss: 0.0302
[03/30 02:50:19 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000083 Step: 145900 Total Loss: 0.0401 Recon Loss: 0.0266
[03/30 02:51:17 TiTok]: Data (t): 0.0032, 56.38/s/gpu Batch (t): 0.6386 LR: 0.000083 Step: 146000 Total Loss: 0.0396 Recon Loss: 0.0282
[03/30 02:52:16 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000083 Step: 146100 Total Loss: 0.0416 Recon Loss: 0.0287
[03/30 02:53:14 TiTok]: Data (t): 0.0031, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000083 Step: 146200 Total Loss: 0.0397 Recon Loss: 0.0286
[03/30 02:54:12 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000083 Step: 146300 Total Loss: 0.0419 Recon Loss: 0.0295
[03/30 02:55:10 TiTok]: Data (t): 0.0031, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000083 Step: 146400 Total Loss: 0.0403 Recon Loss: 0.0293
[03/30 02:56:07 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000083 Step: 146500 Total Loss: 0.0414 Recon Loss: 0.0269
[03/30 02:57:05 TiTok]: Data (t): 0.0031, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000083 Step: 146600 Total Loss: 0.0395 Recon Loss: 0.0276
[03/30 02:58:03 TiTok]: Data (t): 0.0031, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000083 Step: 146700 Total Loss: 0.0400 Recon Loss: 0.0280
[03/30 02:59:01 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000083 Step: 146800 Total Loss: 0.0402 Recon Loss: 0.0282
[03/30 02:59:59 TiTok]: Data (t): 0.0031, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000083 Step: 146900 Total Loss: 0.0432 Recon Loss: 0.0293
[03/30 03:00:57 TiTok]: Data (t): 0.0031, 56.49/s/gpu Batch (t): 0.6372 LR: 0.000083 Step: 147000 Total Loss: 0.0415 Recon Loss: 0.0291
[03/30 03:01:55 TiTok]: Data (t): 0.0031, 61.46/s/gpu Batch (t): 0.5858 LR: 0.000083 Step: 147100 Total Loss: 0.0442 Recon Loss: 0.0291
[03/30 03:02:53 TiTok]: Data (t): 0.0031, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000083 Step: 147200 Total Loss: 0.0432 Recon Loss: 0.0289
[03/30 03:03:51 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000083 Step: 147300 Total Loss: 0.0385 Recon Loss: 0.0283
[03/30 03:04:48 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000083 Step: 147400 Total Loss: 0.0389 Recon Loss: 0.0280
[03/30 03:05:46 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000083 Step: 147500 Total Loss: 0.0365 Recon Loss: 0.0280
[03/30 03:06:44 TiTok]: Data (t): 0.0032, 61.97/s/gpu Batch (t): 0.5809 LR: 0.000083 Step: 147600 Total Loss: 0.0395 Recon Loss: 0.0296
[03/30 03:07:42 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000083 Step: 147700 Total Loss: 0.0395 Recon Loss: 0.0277
[03/30 03:08:40 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000083 Step: 147800 Total Loss: 0.0414 Recon Loss: 0.0299
[03/30 03:09:37 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000083 Step: 147900 Total Loss: 0.0375 Recon Loss: 0.0260
[03/30 03:10:35 TiTok]: Data (t): 0.0031, 56.71/s/gpu Batch (t): 0.6349 LR: 0.000083 Step: 148000 Total Loss: 0.0406 Recon Loss: 0.0287
[03/30 03:11:33 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000083 Step: 148100 Total Loss: 0.0385 Recon Loss: 0.0273
[03/30 03:12:30 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000083 Step: 148200 Total Loss: 0.0390 Recon Loss: 0.0292
[03/30 03:13:29 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000083 Step: 148300 Total Loss: 0.0402 Recon Loss: 0.0285
[03/30 03:14:27 TiTok]: Data (t): 0.0032, 60.09/s/gpu Batch (t): 0.5991 LR: 0.000083 Step: 148400 Total Loss: 0.0391 Recon Loss: 0.0284
[03/30 03:15:25 TiTok]: Data (t): 0.0033, 62.02/s/gpu Batch (t): 0.5804 LR: 0.000083 Step: 148500 Total Loss: 0.0423 Recon Loss: 0.0282
[03/30 03:16:23 TiTok]: Data (t): 0.0032, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000083 Step: 148600 Total Loss: 0.0395 Recon Loss: 0.0272
[03/30 03:17:21 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000083 Step: 148700 Total Loss: 0.0404 Recon Loss: 0.0271
[03/30 03:18:19 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000083 Step: 148800 Total Loss: 0.0385 Recon Loss: 0.0268
[03/30 03:19:17 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000083 Step: 148900 Total Loss: 0.0432 Recon Loss: 0.0295
[03/30 03:20:16 TiTok]: Data (t): 0.0032, 56.54/s/gpu Batch (t): 0.6367 LR: 0.000083 Step: 149000 Total Loss: 0.0380 Recon Loss: 0.0270
[03/30 03:21:14 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000082 Step: 149100 Total Loss: 0.0433 Recon Loss: 0.0294
[03/30 03:22:11 TiTok]: Data (t): 0.0032, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000082 Step: 149200 Total Loss: 0.0401 Recon Loss: 0.0270
[03/30 03:23:09 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000082 Step: 149300 Total Loss: 0.0399 Recon Loss: 0.0281
[03/30 03:24:07 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000082 Step: 149400 Total Loss: 0.0394 Recon Loss: 0.0285
[03/30 03:25:05 TiTok]: Data (t): 0.0031, 59.27/s/gpu Batch (t): 0.6074 LR: 0.000082 Step: 149500 Total Loss: 0.0392 Recon Loss: 0.0300
[03/30 03:26:03 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000082 Step: 149600 Total Loss: 0.0408 Recon Loss: 0.0292
[03/30 03:27:01 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000082 Step: 149700 Total Loss: 0.0397 Recon Loss: 0.0282
[03/30 03:27:58 TiTok]: Data (t): 0.0106, 61.27/s/gpu Batch (t): 0.5875 LR: 0.000082 Step: 149800 Total Loss: 0.0380 Recon Loss: 0.0274
[03/30 03:28:56 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000082 Step: 149900 Total Loss: 0.0411 Recon Loss: 0.0267
[03/30 03:29:55 TiTok]: Data (t): 0.0031, 56.91/s/gpu Batch (t): 0.6326 LR: 0.000082 Step: 150000 Total Loss: 0.0396 Recon Loss: 0.0265
[03/30 03:29:57 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-150000
[03/30 03:30:10 TiTok]: Reconstructing images...
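Note: the logged throughput and batch time are self-consistent: multiplying images/s/gpu by Batch (t) recovers a fixed per-GPU step of ~36 images, so the slower entries that recur every 1,000 steps (e.g. steps 143000, 144000, 145000 above) are longer batches from periodic extra work such as grad-norm logging, not a change in batch size. A minimal sketch of the arithmetic, with the (rate, batch time) pairs copied from entries above:

    # Throughput (img/s/gpu) x batch time (s) should give a constant images-per-step.
    pairs = [(62.39, 0.5770), (56.73, 0.6346), (62.49, 0.5761), (56.91, 0.6326)]
    for rate, batch_t in pairs:
        print(f"{rate:.2f} img/s * {batch_t:.4f} s = {rate * batch_t:.1f} images/gpu/step")
    # All four products land at ~36.0.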
[03/30 03:31:09 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000082 Step: 150100 Total Loss: 0.0418 Recon Loss: 0.0299
[03/30 03:32:06 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000082 Step: 150200 Total Loss: 0.0385 Recon Loss: 0.0282
[03/30 03:33:04 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000082 Step: 150300 Total Loss: 0.0416 Recon Loss: 0.0308
[03/30 03:34:02 TiTok]: Data (t): 0.0031, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000082 Step: 150400 Total Loss: 0.0394 Recon Loss: 0.0284
[03/30 03:35:00 TiTok]: Data (t): 0.0032, 62.12/s/gpu Batch (t): 0.5795 LR: 0.000082 Step: 150500 Total Loss: 0.0409 Recon Loss: 0.0298
[03/30 03:35:59 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000082 Step: 150600 Total Loss: 0.0377 Recon Loss: 0.0266
[03/30 03:36:56 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000082 Step: 150700 Total Loss: 0.0443 Recon Loss: 0.0289
[03/30 03:37:54 TiTok]: Data (t): 0.0031, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000082 Step: 150800 Total Loss: 0.0460 Recon Loss: 0.0321
[03/30 03:38:52 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000082 Step: 150900 Total Loss: 0.0410 Recon Loss: 0.0277
[03/30 03:39:50 TiTok]: Data (t): 0.0031, 52.08/s/gpu Batch (t): 0.6913 LR: 0.000082 Step: 151000 Total Loss: 0.0414 Recon Loss: 0.0278
[03/30 03:40:48 TiTok]: Data (t): 0.0031, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000082 Step: 151100 Total Loss: 0.0433 Recon Loss: 0.0292
[03/30 03:41:46 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000082 Step: 151200 Total Loss: 0.0385 Recon Loss: 0.0288
[03/30 03:42:43 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000082 Step: 151300 Total Loss: 0.0391 Recon Loss: 0.0271
[03/30 03:43:41 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000082 Step: 151400 Total Loss: 0.0445 Recon Loss: 0.0297
[03/30 03:44:39 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000082 Step: 151500 Total Loss: 0.0412 Recon Loss: 0.0296
[03/30 03:45:37 TiTok]: Data (t): 0.0030, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000082 Step: 151600 Total Loss: 0.0375 Recon Loss: 0.0293
[03/30 03:46:35 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000082 Step: 151700 Total Loss: 0.0398 Recon Loss: 0.0292
[03/30 03:47:33 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000082 Step: 151800 Total Loss: 0.0401 Recon Loss: 0.0291
[03/30 03:48:30 TiTok]: Data (t): 0.0031, 61.82/s/gpu Batch (t): 0.5824 LR: 0.000082 Step: 151900 Total Loss: 0.0379 Recon Loss: 0.0274
[03/30 03:49:28 TiTok]: Data (t): 0.0031, 56.54/s/gpu Batch (t): 0.6367 LR: 0.000082 Step: 152000 Total Loss: 0.0399 Recon Loss: 0.0277
[03/30 03:50:26 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000082 Step: 152100 Total Loss: 0.0407 Recon Loss: 0.0285
[03/30 03:51:24 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000082 Step: 152200 Total Loss: 0.0420 Recon Loss: 0.0313
[03/30 03:52:22 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000082 Step: 152300 Total Loss: 0.0410 Recon Loss: 0.0296
[03/30 03:53:20 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000082 Step: 152400 Total Loss: 0.0378 Recon Loss: 0.0289
[03/30 03:54:17 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000082 Step: 152500 Total Loss: 0.0428 Recon Loss: 0.0324
[03/30 03:55:15 TiTok]: Data (t): 0.0031, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000082 Step: 152600 Total Loss: 0.0424 Recon Loss: 0.0304
[03/30 03:56:13 TiTok]: Data (t): 0.0033, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000082 Step: 152700 Total Loss: 0.0396 Recon Loss: 0.0274
[03/30 03:57:10 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000082 Step: 152800 Total Loss: 0.0402 Recon Loss: 0.0289
[03/30 03:58:10 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000082 Step: 152900 Total Loss: 0.0397 Recon Loss: 0.0277
[03/30 03:59:07 TiTok]: Data (t): 0.0031, 56.69/s/gpu Batch (t): 0.6350 LR: 0.000082 Step: 153000 Total Loss: 0.0396 Recon Loss: 0.0287
[03/30 04:00:05 TiTok]: Data (t): 0.0031, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000082 Step: 153100 Total Loss: 0.0400 Recon Loss: 0.0292
[03/30 04:01:03 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000082 Step: 153200 Total Loss: 0.0390 Recon Loss: 0.0282
[03/30 04:02:00 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000082 Step: 153300 Total Loss: 0.0395 Recon Loss: 0.0284
[03/30 04:02:58 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000081 Step: 153400 Total Loss: 0.0426 Recon Loss: 0.0297
[03/30 04:03:57 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000081 Step: 153500 Total Loss: 0.0431 Recon Loss: 0.0307
[03/30 04:04:55 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000081 Step: 153600 Total Loss: 0.0420 Recon Loss: 0.0293
[03/30 04:05:53 TiTok]: Data (t): 0.0031, 62.19/s/gpu Batch (t): 0.5788 LR: 0.000081 Step: 153700 Total Loss: 0.0376 Recon Loss: 0.0280
[03/30 04:06:51 TiTok]: Data (t): 0.0033, 62.11/s/gpu Batch (t): 0.5796 LR: 0.000081 Step: 153800 Total Loss: 0.0407 Recon Loss: 0.0275
[03/30 04:07:49 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000081 Step: 153900 Total Loss: 0.0389 Recon Loss: 0.0287
[03/30 04:08:47 TiTok]: Data (t): 0.0032, 56.59/s/gpu Batch (t): 0.6362 LR: 0.000081 Step: 154000 Total Loss: 0.0400 Recon Loss: 0.0290
[03/30 04:09:44 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000081 Step: 154100 Total Loss: 0.0379 Recon Loss: 0.0277
[03/30 04:10:42 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000081 Step: 154200 Total Loss: 0.0383 Recon Loss: 0.0287
[03/30 04:11:40 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000081 Step: 154300 Total Loss: 0.0391 Recon Loss: 0.0289
[03/30 04:12:37 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000081 Step: 154400 Total Loss: 0.0415 Recon Loss: 0.0277
[03/30 04:13:35 TiTok]: Data (t): 0.0032, 62.19/s/gpu Batch (t): 0.5788 LR: 0.000081 Step: 154500 Total Loss: 0.0419 Recon Loss: 0.0283
[03/30 04:14:33 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000081 Step: 154600 Total Loss: 0.0395 Recon Loss: 0.0291
[03/30 04:15:30 TiTok]: Data (t): 0.0032, 62.12/s/gpu Batch (t): 0.5795 LR: 0.000081 Step: 154700 Total Loss: 0.0384 Recon Loss: 0.0277
[03/30 04:16:28 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000081 Step: 154800 Total Loss: 0.0396 Recon Loss: 0.0287
[03/30 04:17:26 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000081 Step: 154900 Total Loss: 0.0407 Recon Loss: 0.0283
[03/30 04:18:24 TiTok]: Data (t): 0.0032, 56.93/s/gpu Batch (t): 0.6324 LR: 0.000081 Step: 155000 Total Loss: 0.0409 Recon Loss: 0.0293
[03/30 04:19:21 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000081 Step: 155100 Total Loss: 0.0395 Recon Loss: 0.0294
[03/30 04:20:20 TiTok]: Data (t): 0.0031, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000081 Step: 155200 Total Loss: 0.0384 Recon Loss: 0.0282
[03/30 04:21:18 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000081 Step: 155300 Total Loss: 0.0402 Recon Loss: 0.0268
[03/30 04:22:16 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000081 Step: 155400 Total Loss: 0.0409 Recon Loss: 0.0283
[03/30 04:23:14 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000081 Step: 155500 Total Loss: 0.0370 Recon Loss: 0.0269
[03/30 04:24:11 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000081 Step: 155600 Total Loss: 0.0386 Recon Loss: 0.0264
[03/30 04:25:09 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000081 Step: 155700 Total Loss: 0.0380 Recon Loss: 0.0267
[03/30 04:26:07 TiTok]: Data (t): 0.0031, 61.94/s/gpu Batch (t): 0.5812 LR: 0.000081 Step: 155800 Total Loss: 0.0400 Recon Loss: 0.0273
[03/30 04:27:05 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000081 Step: 155900 Total Loss: 0.0407 Recon Loss: 0.0297
[03/30 04:28:02 TiTok]: Data (t): 0.0032, 56.83/s/gpu Batch (t): 0.6335 LR: 0.000081 Step: 156000 Total Loss: 0.0404 Recon Loss: 0.0293
[03/30 04:29:00 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000081 Step: 156100 Total Loss: 0.0394 Recon Loss: 0.0283
[03/30 04:29:58 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000081 Step: 156200 Total Loss: 0.0371 Recon Loss: 0.0261
[03/30 04:30:55 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000081 Step: 156300 Total Loss: 0.0410 Recon Loss: 0.0280
[03/30 04:31:53 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000081 Step: 156400 Total Loss: 0.0378 Recon Loss: 0.0289
[03/30 04:32:51 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000081 Step: 156500 Total Loss: 0.0396 Recon Loss: 0.0291
[03/30 04:33:48 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000081 Step: 156600 Total Loss: 0.0382 Recon Loss: 0.0269
[03/30 04:34:46 TiTok]: Data (t): 0.0031, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000081 Step: 156700 Total Loss: 0.0384 Recon Loss: 0.0274
[03/30 04:35:44 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000081 Step: 156800 Total Loss: 0.0420 Recon Loss: 0.0315
[03/30 04:36:42 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000081 Step: 156900 Total Loss: 0.0391 Recon Loss: 0.0288
[03/30 04:37:39 TiTok]: Data (t): 0.0031, 56.73/s/gpu Batch (t): 0.6346 LR: 0.000081 Step: 157000 Total Loss: 0.0398 Recon Loss: 0.0280
[03/30 04:38:37 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000081 Step: 157100 Total Loss: 0.0413 Recon Loss: 0.0280
[03/30 04:39:35 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000081 Step: 157200 Total Loss: 0.0404 Recon Loss: 0.0264
[03/30 04:40:32 TiTok]: Data (t): 0.0031, 62.66/s/gpu Batch (t): 0.5746 LR: 0.000081 Step: 157300 Total Loss: 0.0400 Recon Loss: 0.0286
[03/30 04:41:30 TiTok]: Data (t): 0.0031, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000081 Step: 157400 Total Loss: 0.0406 Recon Loss: 0.0292
[03/30 04:42:29 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000081 Step: 157500 Total Loss: 0.0405 Recon Loss: 0.0294
[03/30 04:43:27 TiTok]: Data (t): 0.0032, 62.69/s/gpu Batch (t): 0.5743 LR: 0.000081 Step: 157600 Total Loss: 0.0398 Recon Loss: 0.0275
[03/30 04:44:24 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000080 Step: 157700 Total Loss: 0.0393 Recon Loss: 0.0281
[03/30 04:45:22 TiTok]: Data (t): 0.0033, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000080 Step: 157800 Total Loss: 0.0389 Recon Loss: 0.0276
[03/30 04:46:21 TiTok]: Data (t): 0.0032, 59.22/s/gpu Batch (t): 0.6079 LR: 0.000080 Step: 157900 Total Loss: 0.0411 Recon Loss: 0.0294
[03/30 04:47:19 TiTok]: Data (t): 0.0033, 56.80/s/gpu Batch (t): 0.6338 LR: 0.000080 Step: 158000 Total Loss: 0.0412 Recon Loss: 0.0301
[03/30 04:48:16 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000080 Step: 158100 Total Loss: 0.0422 Recon Loss: 0.0272
[03/30 04:49:14 TiTok]: Data (t): 0.0031, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000080 Step: 158200 Total Loss: 0.0377 Recon Loss: 0.0255
[03/30 04:50:12 TiTok]: Data (t): 0.0032, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000080 Step: 158300 Total Loss: 0.0381 Recon Loss: 0.0264
[03/30 04:51:10 TiTok]: Data (t): 0.0033, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000080 Step: 158400 Total Loss: 0.0389 Recon Loss: 0.0288
[03/30 04:52:08 TiTok]: Data (t): 0.0031, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000080 Step: 158500 Total Loss: 0.0389 Recon Loss: 0.0284
[03/30 04:53:06 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000080 Step: 158600 Total Loss: 0.0409 Recon Loss: 0.0279
[03/30 04:54:03 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000080 Step: 158700 Total Loss: 0.0400 Recon Loss: 0.0269
[03/30 04:55:01 TiTok]: Data (t): 0.0031, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000080 Step: 158800 Total Loss: 0.0410 Recon Loss: 0.0300
[03/30 04:55:59 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000080 Step: 158900 Total Loss: 0.0393 Recon Loss: 0.0277
[03/30 04:56:56 TiTok]: Data (t): 0.0032, 56.75/s/gpu Batch (t): 0.6344 LR: 0.000080 Step: 159000 Total Loss: 0.0398 Recon Loss: 0.0296
[03/30 04:57:54 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000080 Step: 159100 Total Loss: 0.0396 Recon Loss: 0.0283
[03/30 04:58:52 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000080 Step: 159200 Total Loss: 0.0404 Recon Loss: 0.0284
[03/30 04:59:49 TiTok]: Data (t): 0.0031, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000080 Step: 159300 Total Loss: 0.0380 Recon Loss: 0.0265
[03/30 05:00:47 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000080 Step: 159400 Total Loss: 0.0381 Recon Loss: 0.0282
[03/30 05:01:45 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000080 Step: 159500 Total Loss: 0.0385 Recon Loss: 0.0281
[03/30 05:02:42 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000080 Step: 159600 Total Loss: 0.0387 Recon Loss: 0.0283
[03/30 05:03:41 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000080 Step: 159700 Total Loss: 0.0396 Recon Loss: 0.0301
[03/30 05:04:39 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000080 Step: 159800 Total Loss: 0.0386 Recon Loss: 0.0273
[03/30 05:05:36 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000080 Step: 159900 Total Loss: 0.0398 Recon Loss: 0.0276
[03/30 05:06:34 TiTok]: Data (t): 0.0031, 56.73/s/gpu Batch (t): 0.6345 LR: 0.000080 Step: 160000 Total Loss: 0.0404 Recon Loss: 0.0280
[03/30 05:06:36 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-160000
[03/30 05:06:50 TiTok]: Reconstructing images...
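Note: entries in this format are easy to scrape for plotting. A short illustrative parser in Python (the parse_log helper and the choice of fields are ours, not part of the training code):

    import re

    # Pull (step, total loss, recon loss) triples out of raw log text.
    PATTERN = re.compile(r"Step: (\d+) Total Loss: ([\d.]+) Recon Loss: ([\d.]+)")

    def parse_log(text):
        return [(int(s), float(t), float(r)) for s, t, r in PATTERN.findall(text)]

    # Usage: points = parse_log(open("train.log").read())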
[03/30 05:07:48 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000080 Step: 160100 Total Loss: 0.0410 Recon Loss: 0.0297
[03/30 05:08:46 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000080 Step: 160200 Total Loss: 0.0393 Recon Loss: 0.0283
[03/30 05:09:44 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000080 Step: 160300 Total Loss: 0.0376 Recon Loss: 0.0271
[03/30 05:10:41 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000080 Step: 160400 Total Loss: 0.0372 Recon Loss: 0.0282
[03/30 05:11:39 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000080 Step: 160500 Total Loss: 0.0405 Recon Loss: 0.0257
[03/30 05:12:37 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000080 Step: 160600 Total Loss: 0.0383 Recon Loss: 0.0275
[03/30 05:13:34 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000080 Step: 160700 Total Loss: 0.0379 Recon Loss: 0.0258
[03/30 05:14:32 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000080 Step: 160800 Total Loss: 0.0389 Recon Loss: 0.0276
[03/30 05:15:30 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000080 Step: 160900 Total Loss: 0.0403 Recon Loss: 0.0294
[03/30 05:16:28 TiTok]: Data (t): 0.0032, 56.84/s/gpu Batch (t): 0.6334 LR: 0.000080 Step: 161000 Total Loss: 0.0429 Recon Loss: 0.0305
[03/30 05:17:26 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000080 Step: 161100 Total Loss: 0.0407 Recon Loss: 0.0273
[03/30 05:18:24 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000080 Step: 161200 Total Loss: 0.0371 Recon Loss: 0.0270
[03/30 05:19:21 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000080 Step: 161300 Total Loss: 0.0426 Recon Loss: 0.0295
[03/30 05:20:19 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000080 Step: 161400 Total Loss: 0.0446 Recon Loss: 0.0312
[03/30 05:21:17 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000080 Step: 161500 Total Loss: 0.0417 Recon Loss: 0.0299
[03/30 05:22:14 TiTok]: Data (t): 0.0031, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000080 Step: 161600 Total Loss: 0.0398 Recon Loss: 0.0285
[03/30 05:23:12 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000080 Step: 161700 Total Loss: 0.0406 Recon Loss: 0.0277
[03/30 05:24:10 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000080 Step: 161800 Total Loss: 0.0421 Recon Loss: 0.0276
[03/30 05:25:08 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000079 Step: 161900 Total Loss: 0.0391 Recon Loss: 0.0276
[03/30 05:26:07 TiTok]: Data (t): 0.0032, 52.15/s/gpu Batch (t): 0.6904 LR: 0.000079 Step: 162000 Total Loss: 0.0413 Recon Loss: 0.0281
[03/30 05:27:05 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000079 Step: 162100 Total Loss: 0.0385 Recon Loss: 0.0277
[03/30 05:28:03 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000079 Step: 162200 Total Loss: 0.0394 Recon Loss: 0.0277
[03/30 05:29:01 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000079 Step: 162300 Total Loss: 0.0397 Recon Loss: 0.0288
[03/30 05:30:00 TiTok]: Data (t): 0.0031, 58.35/s/gpu Batch (t): 0.6170 LR: 0.000079 Step: 162400 Total Loss: 0.0385 Recon Loss: 0.0280
[03/30 05:30:58 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000079 Step: 162500 Total Loss: 0.0400 Recon Loss: 0.0269
[03/30 05:31:56 TiTok]: Data (t): 0.0033, 62.15/s/gpu Batch (t): 0.5793 LR: 0.000079 Step: 162600 Total Loss: 0.0422 Recon Loss: 0.0289
[03/30 05:32:54 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5760 LR: 0.000079 Step: 162700 Total Loss: 0.0407 Recon Loss: 0.0289
[03/30 05:33:51 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000079 Step: 162800 Total Loss: 0.0392 Recon Loss: 0.0297
[03/30 05:34:49 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000079 Step: 162900 Total Loss: 0.0390 Recon Loss: 0.0288
[03/30 05:35:47 TiTok]: Data (t): 0.0031, 56.68/s/gpu Batch (t): 0.6352 LR: 0.000079 Step: 163000 Total Loss: 0.0399 Recon Loss: 0.0254
[03/30 05:36:45 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000079 Step: 163100 Total Loss: 0.0381 Recon Loss: 0.0272
[03/30 05:37:42 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000079 Step: 163200 Total Loss: 0.0382 Recon Loss: 0.0285
[03/30 05:38:40 TiTok]: Data (t): 0.0031, 61.98/s/gpu Batch (t): 0.5808 LR: 0.000079 Step: 163300 Total Loss: 0.0414 Recon Loss: 0.0279
[03/30 05:39:38 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000079 Step: 163400 Total Loss: 0.0387 Recon Loss: 0.0284
[03/30 05:40:36 TiTok]: Data (t): 0.0031, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000079 Step: 163500 Total Loss: 0.0415 Recon Loss: 0.0292
[03/30 05:41:33 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000079 Step: 163600 Total Loss: 0.0404 Recon Loss: 0.0288
[03/30 05:42:31 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000079 Step: 163700 Total Loss: 0.0423 Recon Loss: 0.0283
[03/30 05:43:29 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000079 Step: 163800 Total Loss: 0.0419 Recon Loss: 0.0284
[03/30 05:44:27 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000079 Step: 163900 Total Loss: 0.0393 Recon Loss: 0.0286
[03/30 05:45:24 TiTok]: Data (t): 0.0032, 56.63/s/gpu Batch (t): 0.6357 LR: 0.000079 Step: 164000 Total Loss: 0.0377 Recon Loss: 0.0281
[03/30 05:46:22 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000079 Step: 164100 Total Loss: 0.0394 Recon Loss: 0.0296
[03/30 05:47:20 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000079 Step: 164200 Total Loss: 0.0415 Recon Loss: 0.0291
[03/30 05:48:19 TiTok]: Data (t): 0.0033, 61.24/s/gpu Batch (t): 0.5879 LR: 0.000079 Step: 164300 Total Loss: 0.0376 Recon Loss: 0.0267
[03/30 05:49:16 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000079 Step: 164400 Total Loss: 0.0406 Recon Loss: 0.0282
[03/30 05:50:14 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000079 Step: 164500 Total Loss: 0.0414 Recon Loss: 0.0303
[03/30 05:51:12 TiTok]: Data (t): 0.0031, 61.47/s/gpu Batch (t): 0.5857 LR: 0.000079 Step: 164600 Total Loss: 0.0414 Recon Loss: 0.0287
[03/30 05:52:10 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000079 Step: 164700 Total Loss: 0.0364 Recon Loss: 0.0268
[03/30 05:53:08 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000079 Step: 164800 Total Loss: 0.0378 Recon Loss: 0.0281
[03/30 05:54:05 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000079 Step: 164900 Total Loss: 0.0391 Recon Loss: 0.0282
[03/30 05:55:03 TiTok]: Data (t): 0.0031, 56.62/s/gpu Batch (t): 0.6358 LR: 0.000079 Step: 165000 Total Loss: 0.0383 Recon Loss: 0.0271
[03/30 05:56:01 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000079 Step: 165100 Total Loss: 0.0418 Recon Loss: 0.0280
[03/30 05:56:59 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000079 Step: 165200 Total Loss: 0.0423 Recon Loss: 0.0281
[03/30 05:57:56 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000079 Step: 165300 Total Loss: 0.0393 Recon Loss: 0.0276
[03/30 05:58:54 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000079 Step: 165400 Total Loss: 0.0401 Recon Loss: 0.0286
[03/30 05:59:52 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000079 Step: 165500 Total Loss: 0.0387 Recon Loss: 0.0268
[03/30 06:00:49 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000079 Step: 165600 Total Loss: 0.0397 Recon Loss: 0.0280
[03/30 06:01:47 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000079 Step: 165700 Total Loss: 0.0415 Recon Loss: 0.0284
[03/30 06:02:45 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000079 Step: 165800 Total Loss: 0.0418 Recon Loss: 0.0295
[03/30 06:03:43 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000079 Step: 165900 Total Loss: 0.0414 Recon Loss: 0.0280
[03/30 06:04:40 TiTok]: Data (t): 0.0032, 56.93/s/gpu Batch (t): 0.6324 LR: 0.000079 Step: 166000 Total Loss: 0.0407 Recon Loss: 0.0289
[03/30 06:05:38 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000078 Step: 166100 Total Loss: 0.0394 Recon Loss: 0.0268
[03/30 06:06:36 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000078 Step: 166200 Total Loss: 0.0396 Recon Loss: 0.0288
[03/30 06:07:34 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000078 Step: 166300 Total Loss: 0.0401 Recon Loss: 0.0276
[03/30 06:08:31 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000078 Step: 166400 Total Loss: 0.0384 Recon Loss: 0.0288
[03/30 06:09:29 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000078 Step: 166500 Total Loss: 0.0395 Recon Loss: 0.0291
[03/30 06:10:28 TiTok]: Data (t): 0.0032, 61.99/s/gpu Batch (t): 0.5807 LR: 0.000078 Step: 166600 Total Loss: 0.0403 Recon Loss: 0.0290
[03/30 06:11:25 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000078 Step: 166700 Total Loss: 0.0406 Recon Loss: 0.0279
[03/30 06:12:25 TiTok]: Data (t): 0.0031, 59.47/s/gpu Batch (t): 0.6053 LR: 0.000078 Step: 166800 Total Loss: 0.0398 Recon Loss: 0.0299
[03/30 06:13:23 TiTok]: Data (t): 0.0031, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000078 Step: 166900 Total Loss: 0.0393 Recon Loss: 0.0268
[03/30 06:14:21 TiTok]: Data (t): 0.0032, 56.31/s/gpu Batch (t): 0.6393 LR: 0.000078 Step: 167000 Total Loss: 0.0400 Recon Loss: 0.0277
[03/30 06:15:19 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000078 Step: 167100 Total Loss: 0.0392 Recon Loss: 0.0290
[03/30 06:16:17 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000078 Step: 167200 Total Loss: 0.0379 Recon Loss: 0.0294
[03/30 06:17:15 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000078 Step: 167300 Total Loss: 0.0370 Recon Loss: 0.0261
[03/30 06:18:13 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000078 Step: 167400 Total Loss: 0.0367 Recon Loss: 0.0272
[03/30 06:19:10 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000078 Step: 167500 Total Loss: 0.0369 Recon Loss: 0.0284
[03/30 06:20:08 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000078 Step: 167600 Total Loss: 0.0399 Recon Loss: 0.0294
[03/30 06:21:06 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000078 Step: 167700 Total Loss: 0.0413 Recon Loss: 0.0294
[03/30 06:22:04 TiTok]: Data (t): 0.0031, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000078 Step: 167800 Total Loss: 0.0403 Recon Loss: 0.0284
[03/30 06:23:01 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000078 Step: 167900 Total Loss: 0.0407 Recon Loss: 0.0288
[03/30 06:23:59 TiTok]: Data (t): 0.0032, 56.61/s/gpu Batch (t): 0.6359 LR: 0.000078 Step: 168000 Total Loss: 0.0399 Recon Loss: 0.0288
[03/30 06:24:57 TiTok]: Data (t): 0.0031, 61.56/s/gpu Batch (t): 0.5848 LR: 0.000078 Step: 168100 Total Loss: 0.0402 Recon Loss: 0.0269
[03/30 06:25:54 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000078 Step: 168200 Total Loss: 0.0398 Recon Loss: 0.0263
[03/30 06:26:52 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000078 Step: 168300 Total Loss: 0.0387 Recon Loss: 0.0265
[03/30 06:27:50 TiTok]: Data (t): 0.0031, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000078 Step: 168400 Total Loss: 0.0405 Recon Loss: 0.0306
[03/30 06:28:47 TiTok]: Data (t): 0.0031, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000078 Step: 168500 Total Loss: 0.0398 Recon Loss: 0.0280
[03/30 06:29:45 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000078 Step: 168600 Total Loss: 0.0391 Recon Loss: 0.0275
[03/30 06:30:43 TiTok]: Data (t): 0.0031, 62.61/s/gpu Batch (t): 0.5749 LR: 0.000078 Step: 168700 Total Loss: 0.0378 Recon Loss: 0.0276
[03/30 06:31:41 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000078 Step: 168800 Total Loss: 0.0388 Recon Loss: 0.0279
[03/30 06:32:39 TiTok]: Data (t): 0.0031, 62.67/s/gpu Batch (t): 0.5745 LR: 0.000078 Step: 168900 Total Loss: 0.0384 Recon Loss: 0.0279
[03/30 06:33:37 TiTok]: Data (t): 0.0032, 56.85/s/gpu Batch (t): 0.6333 LR: 0.000078 Step: 169000 Total Loss: 0.0408 Recon Loss: 0.0285
[03/30 06:34:34 TiTok]: Data (t): 0.0031, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000078 Step: 169100 Total Loss: 0.0369 Recon Loss: 0.0274
[03/30 06:35:32 TiTok]: Data (t): 0.0031, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000078 Step: 169200 Total Loss: 0.0388 Recon Loss: 0.0273
[03/30 06:36:30 TiTok]: Data (t): 0.0031, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000078 Step: 169300 Total Loss: 0.0421 Recon Loss: 0.0284
[03/30 06:37:27 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000078 Step: 169400 Total Loss: 0.0388 Recon Loss: 0.0275
[03/30 06:38:25 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000078 Step: 169500 Total Loss: 0.0381 Recon Loss: 0.0279
[03/30 06:39:22 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000078 Step: 169600 Total Loss: 0.0395 Recon Loss: 0.0291
[03/30 06:40:20 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000078 Step: 169700 Total Loss: 0.0405 Recon Loss: 0.0291
[03/30 06:41:18 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000078 Step: 169800 Total Loss: 0.0386 Recon Loss: 0.0272
[03/30 06:42:15 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000078 Step: 169900 Total Loss: 0.0403 Recon Loss: 0.0289
[03/30 06:43:13 TiTok]: Data (t): 0.0031, 56.69/s/gpu Batch (t): 0.6350 LR: 0.000078 Step: 170000 Total Loss: 0.0379 Recon Loss: 0.0289
[03/30 06:43:16 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-170000
[03/30 06:43:29 TiTok]: Reconstructing images...
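Note: the slow LR decay across these entries (0.000084 near step 142700 down to 0.000078 here) is what the run's cosine schedule predicts. A sketch of that schedule, assuming linear warmup over 5,000 steps and cosine decay from the configured peak LR 1e-4 to end LR 1e-5 over 500,000 total steps:

    import math

    # Cosine decay with linear warmup; constants assumed from this run's config.
    PEAK_LR, END_LR = 1e-4, 1e-5
    WARMUP, MAX_STEPS = 5_000, 500_000

    def lr_at(step):
        if step < WARMUP:
            return PEAK_LR * step / WARMUP
        progress = (step - WARMUP) / (MAX_STEPS - WARMUP)
        return END_LR + 0.5 * (PEAK_LR - END_LR) * (1 + math.cos(math.pi * progress))

    print(f"{lr_at(142700):.6f}")  # ~0.000084, as logged near step 142700
    print(f"{lr_at(190000):.6f}")  # ~0.000072, as logged at step 190000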
[03/30 06:44:28 TiTok]: Data (t): 0.0031, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000078 Step: 170100 Total Loss: 0.0382 Recon Loss: 0.0270
[03/30 06:45:25 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000077 Step: 170200 Total Loss: 0.0388 Recon Loss: 0.0286
[03/30 06:46:23 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000077 Step: 170300 Total Loss: 0.0409 Recon Loss: 0.0284
[03/30 06:47:21 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000077 Step: 170400 Total Loss: 0.0390 Recon Loss: 0.0267
[03/30 06:48:18 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000077 Step: 170500 Total Loss: 0.0411 Recon Loss: 0.0276
[03/30 06:49:16 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000077 Step: 170600 Total Loss: 0.0426 Recon Loss: 0.0285
[03/30 06:50:14 TiTok]: Data (t): 0.0031, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000077 Step: 170700 Total Loss: 0.0408 Recon Loss: 0.0288
[03/30 06:51:11 TiTok]: Data (t): 0.0031, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000077 Step: 170800 Total Loss: 0.0410 Recon Loss: 0.0272
[03/30 06:52:09 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000077 Step: 170900 Total Loss: 0.0359 Recon Loss: 0.0266
[03/30 06:53:07 TiTok]: Data (t): 0.0031, 52.27/s/gpu Batch (t): 0.6887 LR: 0.000077 Step: 171000 Total Loss: 0.0378 Recon Loss: 0.0285
[03/30 06:54:06 TiTok]: Data (t): 0.0031, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000077 Step: 171100 Total Loss: 0.0398 Recon Loss: 0.0271
[03/30 06:55:04 TiTok]: Data (t): 0.0031, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000077 Step: 171200 Total Loss: 0.0413 Recon Loss: 0.0312
[03/30 06:56:03 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000077 Step: 171300 Total Loss: 0.0407 Recon Loss: 0.0307
[03/30 06:57:01 TiTok]: Data (t): 0.0034, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000077 Step: 171400 Total Loss: 0.0415 Recon Loss: 0.0284
[03/30 06:57:58 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000077 Step: 171500 Total Loss: 0.0396 Recon Loss: 0.0276
[03/30 06:58:56 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000077 Step: 171600 Total Loss: 0.0408 Recon Loss: 0.0272
[03/30 06:59:54 TiTok]: Data (t): 0.0033, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000077 Step: 171700 Total Loss: 0.0446 Recon Loss: 0.0294
[03/30 07:00:51 TiTok]: Data (t): 0.0034, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000077 Step: 171800 Total Loss: 0.0419 Recon Loss: 0.0277
[03/30 07:01:49 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000077 Step: 171900 Total Loss: 0.0395 Recon Loss: 0.0276
[03/30 07:02:47 TiTok]: Data (t): 0.0032, 56.11/s/gpu Batch (t): 0.6416 LR: 0.000077 Step: 172000 Total Loss: 0.0389 Recon Loss: 0.0265
[03/30 07:03:45 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000077 Step: 172100 Total Loss: 0.0378 Recon Loss: 0.0280
[03/30 07:04:42 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000077 Step: 172200 Total Loss: 0.0416 Recon Loss: 0.0287
[03/30 07:05:40 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000077 Step: 172300 Total Loss: 0.0388 Recon Loss: 0.0273
[03/30 07:06:38 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000077 Step: 172400 Total Loss: 0.0426 Recon Loss: 0.0313
[03/30 07:07:35 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000077 Step: 172500 Total Loss: 0.0411 Recon Loss: 0.0279
[03/30 07:08:33 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000077 Step: 172600 Total Loss: 0.0409 Recon Loss: 0.0286
[03/30 07:09:31 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000077 Step: 172700 Total Loss: 0.0402 Recon Loss: 0.0294
[03/30 07:10:29 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000077 Step: 172800 Total Loss: 0.0377 Recon Loss: 0.0275
[03/30 07:11:26 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000077 Step: 172900 Total Loss: 0.0419 Recon Loss: 0.0278
[03/30 07:12:24 TiTok]: Data (t): 0.0032, 56.69/s/gpu Batch (t): 0.6351 LR: 0.000077 Step: 173000 Total Loss: 0.0400 Recon Loss: 0.0302
[03/30 07:13:22 TiTok]: Data (t): 0.0034, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000077 Step: 173100 Total Loss: 0.0373 Recon Loss: 0.0267
[03/30 07:14:20 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000077 Step: 173200 Total Loss: 0.0389 Recon Loss: 0.0282
[03/30 07:15:17 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000077 Step: 173300 Total Loss: 0.0413 Recon Loss: 0.0287
[03/30 07:16:17 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000077 Step: 173400 Total Loss: 0.0409 Recon Loss: 0.0286
[03/30 07:17:14 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000077 Step: 173500 Total Loss: 0.0396 Recon Loss: 0.0281
[03/30 07:18:12 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000077 Step: 173600 Total Loss: 0.0422 Recon Loss: 0.0314
[03/30 07:19:09 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000077 Step: 173700 Total Loss: 0.0411 Recon Loss: 0.0272
[03/30 07:20:07 TiTok]: Data (t): 0.0031, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000077 Step: 173800 Total Loss: 0.0411 Recon Loss: 0.0289
[03/30 07:21:05 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000077 Step: 173900 Total Loss: 0.0385 Recon Loss: 0.0273
[03/30 07:22:02 TiTok]: Data (t): 0.0031, 56.79/s/gpu Batch (t): 0.6340 LR: 0.000077 Step: 174000 Total Loss: 0.0402 Recon Loss: 0.0294
[03/30 07:23:00 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000077 Step: 174100 Total Loss: 0.0405 Recon Loss: 0.0287
[03/30 07:23:58 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000076 Step: 174200 Total Loss: 0.0416 Recon Loss: 0.0298
[03/30 07:24:56 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000076 Step: 174300 Total Loss: 0.0376 Recon Loss: 0.0274
[03/30 07:25:53 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000076 Step: 174400 Total Loss: 0.0407 Recon Loss: 0.0292
[03/30 07:26:51 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5760 LR: 0.000076 Step: 174500 Total Loss: 0.0420 Recon Loss: 0.0292
[03/30 07:27:49 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000076 Step: 174600 Total Loss: 0.0386 Recon Loss: 0.0284
[03/30 07:28:46 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000076 Step: 174700 Total Loss: 0.0379 Recon Loss: 0.0274
[03/30 07:29:44 TiTok]: Data (t): 0.0031, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000076 Step: 174800 Total Loss: 0.0396 Recon Loss: 0.0270
[03/30 07:30:42 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000076 Step: 174900 Total Loss: 0.0412 Recon Loss: 0.0298
[03/30 07:31:40 TiTok]: Data (t): 0.0032, 56.75/s/gpu Batch (t): 0.6343 LR: 0.000076 Step: 175000 Total Loss: 0.0388 Recon Loss: 0.0265
[03/30 07:32:38 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000076 Step: 175100 Total Loss: 0.0376 Recon Loss: 0.0264
[03/30 07:33:35 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5759 LR: 0.000076 Step: 175200 Total Loss: 0.0388 Recon Loss: 0.0287
[03/30 07:34:33 TiTok]: Data (t): 0.0031, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000076 Step: 175300 Total Loss: 0.0410 Recon Loss: 0.0283
[03/30 07:35:31 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000076 Step: 175400 Total Loss: 0.0402 Recon Loss: 0.0295
[03/30 07:36:28 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000076 Step: 175500 Total Loss: 0.0426 Recon Loss: 0.0286
[03/30 07:37:26 TiTok]: Data (t): 0.0031, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000076 Step: 175600 Total Loss: 0.0410 Recon Loss: 0.0300
[03/30 07:38:26 TiTok]: Data (t): 0.0032, 58.25/s/gpu Batch (t): 0.6180 LR: 0.000076 Step: 175700 Total Loss: 0.0426 Recon Loss: 0.0286
[03/30 07:39:24 TiTok]: Data (t): 0.0033, 59.80/s/gpu Batch (t): 0.6021 LR: 0.000076 Step: 175800 Total Loss: 0.0391 Recon Loss: 0.0271
[03/30 07:40:22 TiTok]: Data (t): 0.0031, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000076 Step: 175900 Total Loss: 0.0372 Recon Loss: 0.0273
[03/30 07:41:20 TiTok]: Data (t): 0.0031, 56.75/s/gpu Batch (t): 0.6344 LR: 0.000076 Step: 176000 Total Loss: 0.0404 Recon Loss: 0.0268
[03/30 07:42:18 TiTok]: Data (t): 0.0031, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000076 Step: 176100 Total Loss: 0.0412 Recon Loss: 0.0293
[03/30 07:43:15 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000076 Step: 176200 Total Loss: 0.0394 Recon Loss: 0.0286
[03/30 07:44:13 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000076 Step: 176300 Total Loss: 0.0403 Recon Loss: 0.0292
[03/30 07:45:11 TiTok]: Data (t): 0.0032, 61.43/s/gpu Batch (t): 0.5861 LR: 0.000076 Step: 176400 Total Loss: 0.0403 Recon Loss: 0.0296
[03/30 07:46:08 TiTok]: Data (t): 0.0031, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000076 Step: 176500 Total Loss: 0.0409 Recon Loss: 0.0292
[03/30 07:47:06 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000076 Step: 176600 Total Loss: 0.0421 Recon Loss: 0.0299
[03/30 07:48:04 TiTok]: Data (t): 0.0031, 58.66/s/gpu Batch (t): 0.6137 LR: 0.000076 Step: 176700 Total Loss: 0.0389 Recon Loss: 0.0280
[03/30 07:49:01 TiTok]: Data (t): 0.0033, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000076 Step: 176800 Total Loss: 0.0394 Recon Loss: 0.0276
[03/30 07:49:59 TiTok]: Data (t): 0.0031, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000076 Step: 176900 Total Loss: 0.0407 Recon Loss: 0.0277
[03/30 07:50:57 TiTok]: Data (t): 0.0031, 56.91/s/gpu Batch (t): 0.6326 LR: 0.000076 Step: 177000 Total Loss: 0.0390 Recon Loss: 0.0281
[03/30 07:51:55 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000076 Step: 177100 Total Loss: 0.0405 Recon Loss: 0.0293
[03/30 07:52:52 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000076 Step: 177200 Total Loss: 0.0403 Recon Loss: 0.0288
[03/30 07:53:50 TiTok]: Data (t): 0.0031, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000076 Step: 177300 Total Loss: 0.0422 Recon Loss: 0.0300
[03/30 07:54:48 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5760 LR: 0.000076 Step: 177400 Total Loss: 0.0417 Recon Loss: 0.0298
[03/30 07:55:45 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000076 Step: 177500 Total Loss: 0.0384 Recon Loss: 0.0277
[03/30 07:56:43 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000076 Step: 177600 Total Loss: 0.0401 Recon Loss: 0.0288
[03/30 07:57:41 TiTok]: Data (t): 0.0031, 61.69/s/gpu Batch (t): 0.5835 LR: 0.000076 Step: 177700 Total Loss: 0.0397 Recon Loss: 0.0268
[03/30 07:58:39 TiTok]: Data (t): 0.0031, 61.96/s/gpu Batch (t): 0.5810 LR: 0.000076 Step: 177800 Total Loss: 0.0414 Recon Loss: 0.0275
[03/30 07:59:38 TiTok]: Data (t): 0.0031, 62.00/s/gpu Batch (t): 0.5807 LR: 0.000076 Step: 177900 Total Loss: 0.0406 Recon Loss: 0.0290
[03/30 08:00:37 TiTok]: Data (t): 0.0031, 56.46/s/gpu Batch (t): 0.6376 LR: 0.000076 Step: 178000 Total Loss: 0.0430 Recon Loss: 0.0297
[03/30 08:01:35 TiTok]: Data (t): 0.0031, 62.05/s/gpu Batch (t): 0.5802 LR: 0.000075 Step: 178100 Total Loss: 0.0389 Recon Loss: 0.0294
[03/30 08:02:33 TiTok]: Data (t): 0.0031, 61.65/s/gpu Batch (t): 0.5839 LR: 0.000075 Step: 178200 Total Loss: 0.0385 Recon Loss: 0.0271
[03/30 08:03:32 TiTok]: Data (t): 0.0034, 61.54/s/gpu Batch (t): 0.5849 LR: 0.000075 Step: 178300 Total Loss: 0.0399 Recon Loss: 0.0293
[03/30 08:04:30 TiTok]: Data (t): 0.0032, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000075 Step: 178400 Total Loss: 0.0401 Recon Loss: 0.0303
[03/30 08:05:28 TiTok]: Data (t): 0.0031, 62.11/s/gpu Batch (t): 0.5796 LR: 0.000075 Step: 178500 Total Loss: 0.0387 Recon Loss: 0.0289
[03/30 08:06:26 TiTok]: Data (t): 0.0031, 62.00/s/gpu Batch (t): 0.5807 LR: 0.000075 Step: 178600 Total Loss: 0.0447 Recon Loss: 0.0298
[03/30 08:07:24 TiTok]: Data (t): 0.0031, 61.96/s/gpu Batch (t): 0.5810 LR: 0.000075 Step: 178700 Total Loss: 0.0418 Recon Loss: 0.0271
[03/30 08:08:22 TiTok]: Data (t): 0.0031, 62.05/s/gpu Batch (t): 0.5802 LR: 0.000075 Step: 178800 Total Loss: 0.0388 Recon Loss: 0.0274
[03/30 08:09:21 TiTok]: Data (t): 0.0031, 59.26/s/gpu Batch (t): 0.6075 LR: 0.000075 Step: 178900 Total Loss: 0.0398 Recon Loss: 0.0291
[03/30 08:10:19 TiTok]: Data (t): 0.0031, 56.48/s/gpu Batch (t): 0.6374 LR: 0.000075 Step: 179000 Total Loss: 0.0399 Recon Loss: 0.0279
[03/30 08:11:17 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000075 Step: 179100 Total Loss: 0.0387 Recon Loss: 0.0264
[03/30 08:12:15 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000075 Step: 179200 Total Loss: 0.0374 Recon Loss: 0.0266
[03/30 08:13:13 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000075 Step: 179300 Total Loss: 0.0392 Recon Loss: 0.0275
[03/30 08:14:10 TiTok]: Data (t): 0.0031, 62.34/s/gpu Batch (t): 0.5774 LR: 0.000075 Step: 179400 Total Loss: 0.0391 Recon Loss: 0.0279
[03/30 08:15:08 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000075 Step: 179500 Total Loss: 0.0373 Recon Loss: 0.0276
[03/30 08:16:06 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000075 Step: 179600 Total Loss: 0.0373 Recon Loss: 0.0277
[03/30 08:17:04 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000075 Step: 179700 Total Loss: 0.0409 Recon Loss: 0.0294
[03/30 08:18:02 TiTok]: Data (t): 0.0031, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000075 Step: 179800 Total Loss: 0.0407 Recon Loss: 0.0282
[03/30 08:19:00 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000075 Step: 179900 Total Loss: 0.0411 Recon Loss: 0.0295
[03/30 08:19:57 TiTok]: Data (t): 0.0032, 56.38/s/gpu Batch (t): 0.6385 LR: 0.000075 Step: 180000 Total Loss: 0.0393 Recon Loss: 0.0281
[03/30 08:20:00 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-180000
[03/30 08:20:13 TiTok]: Reconstructing images...
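Note: at this cadence Total Loss is noisy (it swings between roughly 0.036 and 0.047 across this span with no visible trend), so a trailing average reads better than individual entries. A minimal smoothing sketch over a loss series from a parser like the one above:

    def running_mean(values, window=50):
        # Trailing average over the last `window` readings (one reading per 100 steps).
        out, acc = [], 0.0
        for i, v in enumerate(values):
            acc += v
            if i >= window:
                acc -= values[i - window]
            out.append(acc / min(i + 1, window))
        return out

    # Smoothed this way, Total Loss sits near 0.040 throughout steps 142700-192600,
    # i.e. the objective appears to have plateaued at this point in training.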
[03/30 08:21:12 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000075 Step: 180100 Total Loss: 0.0418 Recon Loss: 0.0278 [03/30 08:22:12 TiTok]: Data (t): 0.0031, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000075 Step: 180200 Total Loss: 0.0370 Recon Loss: 0.0279 [03/30 08:23:10 TiTok]: Data (t): 0.0030, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000075 Step: 180300 Total Loss: 0.0378 Recon Loss: 0.0258 [03/30 08:24:08 TiTok]: Data (t): 0.0032, 61.74/s/gpu Batch (t): 0.5831 LR: 0.000075 Step: 180400 Total Loss: 0.0398 Recon Loss: 0.0283 [03/30 08:25:06 TiTok]: Data (t): 0.0031, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000075 Step: 180500 Total Loss: 0.0398 Recon Loss: 0.0287 [03/30 08:26:04 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000075 Step: 180600 Total Loss: 0.0387 Recon Loss: 0.0279 [03/30 08:27:02 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000075 Step: 180700 Total Loss: 0.0400 Recon Loss: 0.0287 [03/30 08:27:59 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000075 Step: 180800 Total Loss: 0.0392 Recon Loss: 0.0268 [03/30 08:28:57 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000075 Step: 180900 Total Loss: 0.0431 Recon Loss: 0.0300 [03/30 08:29:55 TiTok]: Data (t): 0.0031, 51.02/s/gpu Batch (t): 0.7056 LR: 0.000075 Step: 181000 Total Loss: 0.0405 Recon Loss: 0.0288 [03/30 08:30:53 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000075 Step: 181100 Total Loss: 0.0386 Recon Loss: 0.0275 [03/30 08:31:51 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000075 Step: 181200 Total Loss: 0.0397 Recon Loss: 0.0280 [03/30 08:32:49 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000075 Step: 181300 Total Loss: 0.0385 Recon Loss: 0.0287 [03/30 08:33:46 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000075 Step: 181400 Total Loss: 0.0371 Recon Loss: 0.0281 [03/30 08:34:44 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000075 Step: 181500 Total Loss: 0.0370 Recon Loss: 0.0265 [03/30 08:35:42 TiTok]: Data (t): 0.0030, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000075 Step: 181600 Total Loss: 0.0378 Recon Loss: 0.0274 [03/30 08:36:40 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000075 Step: 181700 Total Loss: 0.0410 Recon Loss: 0.0290 [03/30 08:37:37 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000075 Step: 181800 Total Loss: 0.0401 Recon Loss: 0.0285 [03/30 08:38:35 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000075 Step: 181900 Total Loss: 0.0422 Recon Loss: 0.0283 [03/30 08:39:33 TiTok]: Data (t): 0.0031, 56.80/s/gpu Batch (t): 0.6338 LR: 0.000075 Step: 182000 Total Loss: 0.0399 Recon Loss: 0.0298 [03/30 08:40:31 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000074 Step: 182100 Total Loss: 0.0395 Recon Loss: 0.0280 [03/30 08:41:29 TiTok]: Data (t): 0.0031, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000074 Step: 182200 Total Loss: 0.0372 Recon Loss: 0.0267 [03/30 08:42:26 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000074 Step: 182300 Total Loss: 0.0374 Recon Loss: 0.0271 [03/30 08:43:24 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000074 Step: 182400 Total Loss: 0.0426 Recon Loss: 0.0284 [03/30 08:44:23 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000074 Step: 182500 Total Loss: 0.0411 Recon Loss: 0.0293 [03/30 08:45:21 TiTok]: Data (t): 0.0065, 61.97/s/gpu Batch (t): 0.5809 LR: 0.000074 Step: 182600 Total Loss: 0.0377 Recon Loss: 0.0259 [03/30 08:46:18 
TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000074 Step: 182700 Total Loss: 0.0378 Recon Loss: 0.0255 [03/30 08:47:16 TiTok]: Data (t): 0.0031, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000074 Step: 182800 Total Loss: 0.0394 Recon Loss: 0.0267 [03/30 08:48:14 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000074 Step: 182900 Total Loss: 0.0397 Recon Loss: 0.0282 [03/30 08:49:12 TiTok]: Data (t): 0.0032, 56.86/s/gpu Batch (t): 0.6331 LR: 0.000074 Step: 183000 Total Loss: 0.0367 Recon Loss: 0.0262 [03/30 08:50:09 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000074 Step: 183100 Total Loss: 0.0401 Recon Loss: 0.0285 [03/30 08:51:07 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000074 Step: 183200 Total Loss: 0.0361 Recon Loss: 0.0273 [03/30 08:52:05 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000074 Step: 183300 Total Loss: 0.0372 Recon Loss: 0.0274 [03/30 08:53:02 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000074 Step: 183400 Total Loss: 0.0399 Recon Loss: 0.0282 [03/30 08:54:00 TiTok]: Data (t): 0.0031, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000074 Step: 183500 Total Loss: 0.0404 Recon Loss: 0.0288 [03/30 08:54:58 TiTok]: Data (t): 0.0030, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000074 Step: 183600 Total Loss: 0.0399 Recon Loss: 0.0280 [03/30 08:55:55 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000074 Step: 183700 Total Loss: 0.0411 Recon Loss: 0.0303 [03/30 08:56:53 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000074 Step: 183800 Total Loss: 0.0404 Recon Loss: 0.0296 [03/30 08:57:51 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000074 Step: 183900 Total Loss: 0.0411 Recon Loss: 0.0277 [03/30 08:58:48 TiTok]: Data (t): 0.0033, 56.83/s/gpu Batch (t): 0.6334 LR: 0.000074 Step: 184000 Total Loss: 0.0380 Recon Loss: 0.0275 [03/30 08:59:46 TiTok]: Data (t): 0.0032, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000074 Step: 184100 Total Loss: 0.0417 Recon Loss: 0.0285 [03/30 09:00:44 TiTok]: Data (t): 0.0032, 59.66/s/gpu Batch (t): 0.6035 LR: 0.000074 Step: 184200 Total Loss: 0.0381 Recon Loss: 0.0262 [03/30 09:01:43 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000074 Step: 184300 Total Loss: 0.0383 Recon Loss: 0.0275 [03/30 09:02:41 TiTok]: Data (t): 0.0033, 62.66/s/gpu Batch (t): 0.5745 LR: 0.000074 Step: 184400 Total Loss: 0.0371 Recon Loss: 0.0271 [03/30 09:03:39 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000074 Step: 184500 Total Loss: 0.0378 Recon Loss: 0.0279 [03/30 09:04:37 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000074 Step: 184600 Total Loss: 0.0380 Recon Loss: 0.0271 [03/30 09:05:35 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000074 Step: 184700 Total Loss: 0.0401 Recon Loss: 0.0279 [03/30 09:06:34 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000074 Step: 184800 Total Loss: 0.0381 Recon Loss: 0.0285 [03/30 09:07:32 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000074 Step: 184900 Total Loss: 0.0403 Recon Loss: 0.0291 [03/30 09:08:30 TiTok]: Data (t): 0.0034, 56.46/s/gpu Batch (t): 0.6376 LR: 0.000074 Step: 185000 Total Loss: 0.0404 Recon Loss: 0.0275 [03/30 09:09:28 TiTok]: Data (t): 0.0033, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000074 Step: 185100 Total Loss: 0.0389 Recon Loss: 0.0265 [03/30 09:10:26 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000074 Step: 185200 Total Loss: 0.0395 Recon Loss: 0.0281 [03/30 09:11:24 TiTok]: Data (t): 
0.0031, 61.86/s/gpu Batch (t): 0.5820 LR: 0.000074 Step: 185300 Total Loss: 0.0406 Recon Loss: 0.0264 [03/30 09:12:22 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5791 LR: 0.000074 Step: 185400 Total Loss: 0.0435 Recon Loss: 0.0303 [03/30 09:13:20 TiTok]: Data (t): 0.0033, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000074 Step: 185500 Total Loss: 0.0469 Recon Loss: 0.0309 [03/30 09:14:18 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000074 Step: 185600 Total Loss: 0.0412 Recon Loss: 0.0276 [03/30 09:15:16 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000074 Step: 185700 Total Loss: 0.0384 Recon Loss: 0.0273 [03/30 09:16:13 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000074 Step: 185800 Total Loss: 0.0387 Recon Loss: 0.0276 [03/30 09:17:11 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000073 Step: 185900 Total Loss: 0.0396 Recon Loss: 0.0261 [03/30 09:18:09 TiTok]: Data (t): 0.0032, 55.57/s/gpu Batch (t): 0.6479 LR: 0.000073 Step: 186000 Total Loss: 0.0369 Recon Loss: 0.0277 [03/30 09:19:07 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000073 Step: 186100 Total Loss: 0.0416 Recon Loss: 0.0286 [03/30 09:20:04 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000073 Step: 186200 Total Loss: 0.0402 Recon Loss: 0.0280 [03/30 09:21:02 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000073 Step: 186300 Total Loss: 0.0397 Recon Loss: 0.0261 [03/30 09:21:59 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000073 Step: 186400 Total Loss: 0.0400 Recon Loss: 0.0300 [03/30 09:22:57 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000073 Step: 186500 Total Loss: 0.0420 Recon Loss: 0.0279 [03/30 09:23:55 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000073 Step: 186600 Total Loss: 0.0412 Recon Loss: 0.0289 [03/30 09:24:53 TiTok]: Data (t): 0.0033, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000073 Step: 186700 Total Loss: 0.0372 Recon Loss: 0.0285 [03/30 09:25:51 TiTok]: Data (t): 0.0032, 62.18/s/gpu Batch (t): 0.5789 LR: 0.000073 Step: 186800 Total Loss: 0.0392 Recon Loss: 0.0298 [03/30 09:26:49 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000073 Step: 186900 Total Loss: 0.0399 Recon Loss: 0.0291 [03/30 09:27:47 TiTok]: Data (t): 0.0032, 56.63/s/gpu Batch (t): 0.6358 LR: 0.000073 Step: 187000 Total Loss: 0.0394 Recon Loss: 0.0294 [03/30 09:28:47 TiTok]: Data (t): 0.0034, 62.02/s/gpu Batch (t): 0.5804 LR: 0.000073 Step: 187100 Total Loss: 0.0397 Recon Loss: 0.0275 [03/30 09:29:45 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000073 Step: 187200 Total Loss: 0.0408 Recon Loss: 0.0278 [03/30 09:30:43 TiTok]: Data (t): 0.0032, 62.02/s/gpu Batch (t): 0.5804 LR: 0.000073 Step: 187300 Total Loss: 0.0370 Recon Loss: 0.0266 [03/30 09:31:41 TiTok]: Data (t): 0.0033, 58.64/s/gpu Batch (t): 0.6139 LR: 0.000073 Step: 187400 Total Loss: 0.0389 Recon Loss: 0.0258 [03/30 09:32:39 TiTok]: Data (t): 0.0034, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000073 Step: 187500 Total Loss: 0.0347 Recon Loss: 0.0241 [03/30 09:33:37 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000073 Step: 187600 Total Loss: 0.0384 Recon Loss: 0.0277 [03/30 09:34:35 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000073 Step: 187700 Total Loss: 0.0395 Recon Loss: 0.0280 [03/30 09:35:32 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000073 Step: 187800 Total Loss: 0.0381 Recon Loss: 0.0273 [03/30 09:36:30 TiTok]: Data (t): 0.0033, 
62.11/s/gpu Batch (t): 0.5796 LR: 0.000073 Step: 187900 Total Loss: 0.0406 Recon Loss: 0.0288 [03/30 09:37:28 TiTok]: Data (t): 0.0032, 56.92/s/gpu Batch (t): 0.6325 LR: 0.000073 Step: 188000 Total Loss: 0.0397 Recon Loss: 0.0312 [03/30 09:38:26 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000073 Step: 188100 Total Loss: 0.0400 Recon Loss: 0.0296 [03/30 09:39:24 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000073 Step: 188200 Total Loss: 0.0392 Recon Loss: 0.0282 [03/30 09:40:21 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000073 Step: 188300 Total Loss: 0.0402 Recon Loss: 0.0286 [03/30 09:41:19 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000073 Step: 188400 Total Loss: 0.0399 Recon Loss: 0.0281 [03/30 09:42:17 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000073 Step: 188500 Total Loss: 0.0414 Recon Loss: 0.0287 [03/30 09:43:15 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000073 Step: 188600 Total Loss: 0.0381 Recon Loss: 0.0275 [03/30 09:44:13 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000073 Step: 188700 Total Loss: 0.0388 Recon Loss: 0.0269 [03/30 09:45:11 TiTok]: Data (t): 0.0034, 41.39/s/gpu Batch (t): 0.8698 LR: 0.000073 Step: 188800 Total Loss: 0.0399 Recon Loss: 0.0300 [03/30 09:46:09 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000073 Step: 188900 Total Loss: 0.0388 Recon Loss: 0.0274 [03/30 09:47:06 TiTok]: Data (t): 0.0033, 56.62/s/gpu Batch (t): 0.6359 LR: 0.000073 Step: 189000 Total Loss: 0.0423 Recon Loss: 0.0307 [03/30 09:48:06 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000073 Step: 189100 Total Loss: 0.0401 Recon Loss: 0.0279 [03/30 09:49:04 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000073 Step: 189200 Total Loss: 0.0403 Recon Loss: 0.0290 [03/30 09:50:02 TiTok]: Data (t): 0.0034, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000073 Step: 189300 Total Loss: 0.0391 Recon Loss: 0.0276 [03/30 09:51:01 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000073 Step: 189400 Total Loss: 0.0422 Recon Loss: 0.0288 [03/30 09:51:59 TiTok]: Data (t): 0.0034, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000073 Step: 189500 Total Loss: 0.0403 Recon Loss: 0.0306 [03/30 09:52:57 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000073 Step: 189600 Total Loss: 0.0429 Recon Loss: 0.0305 [03/30 09:53:55 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000072 Step: 189700 Total Loss: 0.0415 Recon Loss: 0.0281 [03/30 09:54:52 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000072 Step: 189800 Total Loss: 0.0387 Recon Loss: 0.0274 [03/30 09:55:50 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000072 Step: 189900 Total Loss: 0.0380 Recon Loss: 0.0284 [03/30 09:56:48 TiTok]: Data (t): 0.0033, 56.77/s/gpu Batch (t): 0.6341 LR: 0.000072 Step: 190000 Total Loss: 0.0403 Recon Loss: 0.0289 [03/30 09:56:50 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-190000 [03/30 09:57:04 TiTok]: Reconstructing images... 
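
A quick consistency check on the throughput fields around this checkpoint: 62.5 images/s/gpu times the ~0.5766 s batch time is ~36 images per GPU per step, so the two columns agree with each other, and the ~58 s spacing between successive 100-step log lines matches as well. Data (t) stays near 0.003 s throughout, so the loader is not the bottleneck. Assuming the run's 500k-step horizon, a rough ETA follows directly; `eta_hours` below is a hypothetical helper, not part of the training code:

```python
# Hypothetical helper: estimate remaining wall-clock time from the logged
# steady-state batch time. Defaults are read off the log above; the 500k
# max-steps horizon is an assumption about this run's configuration.
def eta_hours(current_step: int, max_steps: int = 500_000,
              batch_time_s: float = 0.5766) -> float:
    """Hours of training left, ignoring checkpoint/eval pauses."""
    return (max_steps - current_step) * batch_time_s / 3600.0

print(f"{eta_hours(190_000):.1f} h")  # ~49.7 h at the checkpoint above
```
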
[03/30 09:58:02 TiTok]: Data (t): 0.0034, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000072 Step: 190100 Total Loss: 0.0394 Recon Loss: 0.0276 [03/30 09:59:00 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000072 Step: 190200 Total Loss: 0.0402 Recon Loss: 0.0284 [03/30 09:59:58 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000072 Step: 190300 Total Loss: 0.0411 Recon Loss: 0.0290 [03/30 10:00:55 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000072 Step: 190400 Total Loss: 0.0379 Recon Loss: 0.0269 [03/30 10:01:53 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000072 Step: 190500 Total Loss: 0.0412 Recon Loss: 0.0316 [03/30 10:02:51 TiTok]: Data (t): 0.0033, 61.95/s/gpu Batch (t): 0.5811 LR: 0.000072 Step: 190600 Total Loss: 0.0402 Recon Loss: 0.0286 [03/30 10:03:49 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000072 Step: 190700 Total Loss: 0.0396 Recon Loss: 0.0287 [03/30 10:04:47 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000072 Step: 190800 Total Loss: 0.0392 Recon Loss: 0.0279 [03/30 10:05:44 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000072 Step: 190900 Total Loss: 0.0425 Recon Loss: 0.0297 [03/30 10:06:43 TiTok]: Data (t): 0.0034, 52.04/s/gpu Batch (t): 0.6918 LR: 0.000072 Step: 191000 Total Loss: 0.0415 Recon Loss: 0.0286 [03/30 10:07:41 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000072 Step: 191100 Total Loss: 0.0391 Recon Loss: 0.0288 [03/30 10:08:38 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000072 Step: 191200 Total Loss: 0.0402 Recon Loss: 0.0276 [03/30 10:09:36 TiTok]: Data (t): 0.0031, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000072 Step: 191300 Total Loss: 0.0395 Recon Loss: 0.0296 [03/30 10:10:34 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000072 Step: 191400 Total Loss: 0.0408 Recon Loss: 0.0284 [03/30 10:11:31 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000072 Step: 191500 Total Loss: 0.0404 Recon Loss: 0.0292 [03/30 10:12:31 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000072 Step: 191600 Total Loss: 0.0414 Recon Loss: 0.0289 [03/30 10:13:29 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000072 Step: 191700 Total Loss: 0.0379 Recon Loss: 0.0281 [03/30 10:14:26 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000072 Step: 191800 Total Loss: 0.0394 Recon Loss: 0.0304 [03/30 10:15:24 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000072 Step: 191900 Total Loss: 0.0417 Recon Loss: 0.0302 [03/30 10:16:22 TiTok]: Data (t): 0.0033, 56.68/s/gpu Batch (t): 0.6352 LR: 0.000072 Step: 192000 Total Loss: 0.0411 Recon Loss: 0.0299 [03/30 10:17:20 TiTok]: Data (t): 0.0034, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000072 Step: 192100 Total Loss: 0.0381 Recon Loss: 0.0276 [03/30 10:18:17 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000072 Step: 192200 Total Loss: 0.0384 Recon Loss: 0.0270 [03/30 10:19:15 TiTok]: Data (t): 0.0031, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000072 Step: 192300 Total Loss: 0.0383 Recon Loss: 0.0257 [03/30 10:20:13 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000072 Step: 192400 Total Loss: 0.0389 Recon Loss: 0.0281 [03/30 10:21:11 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000072 Step: 192500 Total Loss: 0.0409 Recon Loss: 0.0280 [03/30 10:22:09 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000072 Step: 192600 Total Loss: 0.0399 Recon Loss: 0.0285 [03/30 10:23:06 
TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5785 LR: 0.000072 Step: 192700 Total Loss: 0.0394 Recon Loss: 0.0293 [03/30 10:24:04 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000072 Step: 192800 Total Loss: 0.0388 Recon Loss: 0.0279 [03/30 10:25:02 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000072 Step: 192900 Total Loss: 0.0391 Recon Loss: 0.0278 [03/30 10:26:00 TiTok]: Data (t): 0.0032, 56.81/s/gpu Batch (t): 0.6337 LR: 0.000072 Step: 193000 Total Loss: 0.0424 Recon Loss: 0.0294 [03/30 10:26:58 TiTok]: Data (t): 0.0033, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000072 Step: 193100 Total Loss: 0.0417 Recon Loss: 0.0294 [03/30 10:27:55 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000072 Step: 193200 Total Loss: 0.0415 Recon Loss: 0.0275 [03/30 10:28:53 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000072 Step: 193300 Total Loss: 0.0410 Recon Loss: 0.0285 [03/30 10:29:51 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000072 Step: 193400 Total Loss: 0.0392 Recon Loss: 0.0278 [03/30 10:30:50 TiTok]: Data (t): 0.0032, 58.77/s/gpu Batch (t): 0.6125 LR: 0.000071 Step: 193500 Total Loss: 0.0403 Recon Loss: 0.0283 [03/30 10:31:48 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000071 Step: 193600 Total Loss: 0.0408 Recon Loss: 0.0274 [03/30 10:32:46 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000071 Step: 193700 Total Loss: 0.0401 Recon Loss: 0.0280 [03/30 10:33:44 TiTok]: Data (t): 0.0031, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000071 Step: 193800 Total Loss: 0.0406 Recon Loss: 0.0281 [03/30 10:34:43 TiTok]: Data (t): 0.0033, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000071 Step: 193900 Total Loss: 0.0413 Recon Loss: 0.0290 [03/30 10:35:41 TiTok]: Data (t): 0.0033, 56.79/s/gpu Batch (t): 0.6339 LR: 0.000071 Step: 194000 Total Loss: 0.0394 Recon Loss: 0.0288 [03/30 10:36:39 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000071 Step: 194100 Total Loss: 0.0383 Recon Loss: 0.0257 [03/30 10:37:37 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000071 Step: 194200 Total Loss: 0.0405 Recon Loss: 0.0267 [03/30 10:38:34 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000071 Step: 194300 Total Loss: 0.0384 Recon Loss: 0.0279 [03/30 10:39:32 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000071 Step: 194400 Total Loss: 0.0404 Recon Loss: 0.0282 [03/30 10:40:29 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000071 Step: 194500 Total Loss: 0.0403 Recon Loss: 0.0295 [03/30 10:41:27 TiTok]: Data (t): 0.0033, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000071 Step: 194600 Total Loss: 0.0400 Recon Loss: 0.0282 [03/30 10:42:25 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000071 Step: 194700 Total Loss: 0.0416 Recon Loss: 0.0287 [03/30 10:43:23 TiTok]: Data (t): 0.0034, 62.15/s/gpu Batch (t): 0.5793 LR: 0.000071 Step: 194800 Total Loss: 0.0388 Recon Loss: 0.0281 [03/30 10:44:20 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000071 Step: 194900 Total Loss: 0.0403 Recon Loss: 0.0287 [03/30 10:45:18 TiTok]: Data (t): 0.0032, 56.65/s/gpu Batch (t): 0.6355 LR: 0.000071 Step: 195000 Total Loss: 0.0395 Recon Loss: 0.0273 [03/30 10:46:16 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000071 Step: 195100 Total Loss: 0.0387 Recon Loss: 0.0269 [03/30 10:47:13 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000071 Step: 195200 Total Loss: 0.0371 Recon Loss: 0.0246 [03/30 10:48:11 TiTok]: Data (t): 
0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000071 Step: 195300 Total Loss: 0.0373 Recon Loss: 0.0274 [03/30 10:49:09 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000071 Step: 195400 Total Loss: 0.0405 Recon Loss: 0.0281 [03/30 10:50:07 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000071 Step: 195500 Total Loss: 0.0388 Recon Loss: 0.0284 [03/30 10:51:04 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000071 Step: 195600 Total Loss: 0.0374 Recon Loss: 0.0273 [03/30 10:52:02 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000071 Step: 195700 Total Loss: 0.0421 Recon Loss: 0.0286 [03/30 10:53:00 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000071 Step: 195800 Total Loss: 0.0366 Recon Loss: 0.0249 [03/30 10:53:58 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000071 Step: 195900 Total Loss: 0.0380 Recon Loss: 0.0249 [03/30 10:54:55 TiTok]: Data (t): 0.0032, 56.64/s/gpu Batch (t): 0.6355 LR: 0.000071 Step: 196000 Total Loss: 0.0388 Recon Loss: 0.0280 [03/30 10:55:53 TiTok]: Data (t): 0.0034, 47.24/s/gpu Batch (t): 0.7621 LR: 0.000071 Step: 196100 Total Loss: 0.0404 Recon Loss: 0.0307 [03/30 10:56:52 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000071 Step: 196200 Total Loss: 0.0372 Recon Loss: 0.0276 [03/30 10:57:50 TiTok]: Data (t): 0.0032, 62.20/s/gpu Batch (t): 0.5787 LR: 0.000071 Step: 196300 Total Loss: 0.0385 Recon Loss: 0.0259 [03/30 10:58:47 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000071 Step: 196400 Total Loss: 0.0397 Recon Loss: 0.0281 [03/30 10:59:45 TiTok]: Data (t): 0.0054, 59.06/s/gpu Batch (t): 0.6096 LR: 0.000071 Step: 196500 Total Loss: 0.0418 Recon Loss: 0.0296 [03/30 11:00:43 TiTok]: Data (t): 0.0034, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000071 Step: 196600 Total Loss: 0.0415 Recon Loss: 0.0288 [03/30 11:01:41 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000071 Step: 196700 Total Loss: 0.0385 Recon Loss: 0.0286 [03/30 11:02:38 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000071 Step: 196800 Total Loss: 0.0386 Recon Loss: 0.0289 [03/30 11:03:36 TiTok]: Data (t): 0.0032, 62.03/s/gpu Batch (t): 0.5804 LR: 0.000071 Step: 196900 Total Loss: 0.0407 Recon Loss: 0.0275 [03/30 11:04:34 TiTok]: Data (t): 0.0032, 56.75/s/gpu Batch (t): 0.6344 LR: 0.000071 Step: 197000 Total Loss: 0.0401 Recon Loss: 0.0280 [03/30 11:05:32 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000071 Step: 197100 Total Loss: 0.0398 Recon Loss: 0.0285 [03/30 11:06:30 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000071 Step: 197200 Total Loss: 0.0390 Recon Loss: 0.0260 [03/30 11:07:28 TiTok]: Data (t): 0.0036, 60.47/s/gpu Batch (t): 0.5954 LR: 0.000070 Step: 197300 Total Loss: 0.0422 Recon Loss: 0.0308 [03/30 11:08:26 TiTok]: Data (t): 0.0033, 61.99/s/gpu Batch (t): 0.5808 LR: 0.000070 Step: 197400 Total Loss: 0.0402 Recon Loss: 0.0295 [03/30 11:09:24 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000070 Step: 197500 Total Loss: 0.0419 Recon Loss: 0.0272 [03/30 11:10:22 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000070 Step: 197600 Total Loss: 0.0406 Recon Loss: 0.0279 [03/30 11:11:20 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000070 Step: 197700 Total Loss: 0.0385 Recon Loss: 0.0269 [03/30 11:12:18 TiTok]: Data (t): 0.0034, 62.11/s/gpu Batch (t): 0.5796 LR: 0.000070 Step: 197800 Total Loss: 0.0422 Recon Loss: 0.0286 [03/30 11:13:16 TiTok]: Data (t): 0.0032, 
62.43/s/gpu Batch (t): 0.5766 LR: 0.000070 Step: 197900 Total Loss: 0.0377 Recon Loss: 0.0269 [03/30 11:14:15 TiTok]: Data (t): 0.0033, 56.68/s/gpu Batch (t): 0.6352 LR: 0.000070 Step: 198000 Total Loss: 0.0392 Recon Loss: 0.0281 [03/30 11:15:13 TiTok]: Data (t): 0.0033, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000070 Step: 198100 Total Loss: 0.0367 Recon Loss: 0.0280 [03/30 11:16:11 TiTok]: Data (t): 0.0033, 61.75/s/gpu Batch (t): 0.5830 LR: 0.000070 Step: 198200 Total Loss: 0.0361 Recon Loss: 0.0283 [03/30 11:17:09 TiTok]: Data (t): 0.0034, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000070 Step: 198300 Total Loss: 0.0401 Recon Loss: 0.0291 [03/30 11:18:07 TiTok]: Data (t): 0.0033, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000070 Step: 198400 Total Loss: 0.0395 Recon Loss: 0.0283 [03/30 11:19:07 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000070 Step: 198500 Total Loss: 0.0397 Recon Loss: 0.0281 [03/30 11:20:04 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000070 Step: 198600 Total Loss: 0.0407 Recon Loss: 0.0279 [03/30 11:21:02 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000070 Step: 198700 Total Loss: 0.0423 Recon Loss: 0.0279 [03/30 11:22:00 TiTok]: Data (t): 0.0033, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000070 Step: 198800 Total Loss: 0.0382 Recon Loss: 0.0275 [03/30 11:22:58 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000070 Step: 198900 Total Loss: 0.0354 Recon Loss: 0.0262 [03/30 11:23:56 TiTok]: Data (t): 0.0034, 56.42/s/gpu Batch (t): 0.6381 LR: 0.000070 Step: 199000 Total Loss: 0.0377 Recon Loss: 0.0267 [03/30 11:24:54 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000070 Step: 199100 Total Loss: 0.0327 Recon Loss: 0.0244 [03/30 11:25:52 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000070 Step: 199200 Total Loss: 0.0399 Recon Loss: 0.0274 [03/30 11:26:49 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000070 Step: 199300 Total Loss: 0.0395 Recon Loss: 0.0276 [03/30 11:27:47 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000070 Step: 199400 Total Loss: 0.0419 Recon Loss: 0.0288 [03/30 11:28:45 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000070 Step: 199500 Total Loss: 0.0429 Recon Loss: 0.0315 [03/30 11:29:43 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000070 Step: 199600 Total Loss: 0.0384 Recon Loss: 0.0289 [03/30 11:30:40 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000070 Step: 199700 Total Loss: 0.0379 Recon Loss: 0.0245 [03/30 11:31:38 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000070 Step: 199800 Total Loss: 0.0373 Recon Loss: 0.0256 [03/30 11:32:36 TiTok]: Data (t): 0.0031, 61.76/s/gpu Batch (t): 0.5829 LR: 0.000070 Step: 199900 Total Loss: 0.0422 Recon Loss: 0.0293 [03/30 11:33:34 TiTok]: Data (t): 0.0032, 56.78/s/gpu Batch (t): 0.6340 LR: 0.000070 Step: 200000 Total Loss: 0.0377 Recon Loss: 0.0277 [03/30 11:33:36 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-200000 [03/30 11:33:50 TiTok]: Reconstructing images... 
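
The LR column moves only from 0.000074 to 0.000070 across this ~17k-step stretch, i.e., the schedule is in its slow mid-run phase. Assuming the usual linear-warmup-then-half-cosine form (5k warmup steps, 1e-4 peak, 1e-5 floor, 500k total steps, per this run's configuration; the exact implementation may differ), the printed values are reproduced to their displayed precision:

```latex
\eta(t) = \eta_{\min}
        + \tfrac{1}{2}\bigl(\eta_{\max}-\eta_{\min}\bigr)
          \Bigl(1 + \cos\pi\,\tfrac{t - t_w}{T - t_w}\Bigr),
\qquad t \ge t_w .
```

At t = 200,000 the fraction is 195,000/495,000 ≈ 0.394, so η ≈ 1e-5 + 4.5e-5 · (1 + cos 0.394π) ≈ 7.0e-5, matching the logged LR of 0.000070 at the step-200000 checkpoint above.
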
[03/30 11:34:48 TiTok]: Data (t): 0.0033, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000070 Step: 200100 Total Loss: 0.0388 Recon Loss: 0.0283 [03/30 11:35:46 TiTok]: Data (t): 0.0031, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000070 Step: 200200 Total Loss: 0.0397 Recon Loss: 0.0279 [03/30 11:36:43 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000070 Step: 200300 Total Loss: 0.0378 Recon Loss: 0.0277 [03/30 11:37:41 TiTok]: Data (t): 0.0033, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000070 Step: 200400 Total Loss: 0.0406 Recon Loss: 0.0302 [03/30 11:38:39 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000070 Step: 200500 Total Loss: 0.0402 Recon Loss: 0.0270 [03/30 11:39:37 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000070 Step: 200600 Total Loss: 0.0415 Recon Loss: 0.0285 [03/30 11:40:36 TiTok]: Data (t): 0.0033, 42.86/s/gpu Batch (t): 0.8400 LR: 0.000070 Step: 200700 Total Loss: 0.0419 Recon Loss: 0.0313 [03/30 11:41:34 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000070 Step: 200800 Total Loss: 0.0359 Recon Loss: 0.0257 [03/30 11:42:32 TiTok]: Data (t): 0.0033, 62.00/s/gpu Batch (t): 0.5807 LR: 0.000070 Step: 200900 Total Loss: 0.0390 Recon Loss: 0.0277 [03/30 11:43:30 TiTok]: Data (t): 0.0033, 52.49/s/gpu Batch (t): 0.6859 LR: 0.000069 Step: 201000 Total Loss: 0.0384 Recon Loss: 0.0276 [03/30 11:44:27 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000069 Step: 201100 Total Loss: 0.0399 Recon Loss: 0.0267 [03/30 11:45:25 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000069 Step: 201200 Total Loss: 0.0426 Recon Loss: 0.0273 [03/30 11:46:23 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000069 Step: 201300 Total Loss: 0.0416 Recon Loss: 0.0280 [03/30 11:47:20 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000069 Step: 201400 Total Loss: 0.0403 Recon Loss: 0.0292 [03/30 11:48:18 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000069 Step: 201500 Total Loss: 0.0401 Recon Loss: 0.0285 [03/30 11:49:16 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000069 Step: 201600 Total Loss: 0.0392 Recon Loss: 0.0281 [03/30 11:50:14 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000069 Step: 201700 Total Loss: 0.0381 Recon Loss: 0.0280 [03/30 11:51:12 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000069 Step: 201800 Total Loss: 0.0371 Recon Loss: 0.0260 [03/30 11:52:09 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000069 Step: 201900 Total Loss: 0.0384 Recon Loss: 0.0281 [03/30 11:53:07 TiTok]: Data (t): 0.0032, 56.81/s/gpu Batch (t): 0.6336 LR: 0.000069 Step: 202000 Total Loss: 0.0364 Recon Loss: 0.0278 [03/30 11:54:05 TiTok]: Data (t): 0.0035, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000069 Step: 202100 Total Loss: 0.0387 Recon Loss: 0.0268 [03/30 11:55:03 TiTok]: Data (t): 0.0033, 61.93/s/gpu Batch (t): 0.5813 LR: 0.000069 Step: 202200 Total Loss: 0.0406 Recon Loss: 0.0288 [03/30 11:56:00 TiTok]: Data (t): 0.0032, 61.64/s/gpu Batch (t): 0.5840 LR: 0.000069 Step: 202300 Total Loss: 0.0388 Recon Loss: 0.0279 [03/30 11:56:59 TiTok]: Data (t): 0.0033, 62.15/s/gpu Batch (t): 0.5793 LR: 0.000069 Step: 202400 Total Loss: 0.0382 Recon Loss: 0.0274 [03/30 11:57:58 TiTok]: Data (t): 0.0034, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000069 Step: 202500 Total Loss: 0.0403 Recon Loss: 0.0271 [03/30 11:58:56 TiTok]: Data (t): 0.0034, 60.61/s/gpu Batch (t): 0.5939 LR: 0.000069 Step: 202600 Total Loss: 0.0421 Recon Loss: 0.0283 [03/30 11:59:54 
TiTok]: Data (t): 0.0032, 61.53/s/gpu Batch (t): 0.5851 LR: 0.000069 Step: 202700 Total Loss: 0.0387 Recon Loss: 0.0285 [03/30 12:00:52 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000069 Step: 202800 Total Loss: 0.0377 Recon Loss: 0.0283 [03/30 12:01:49 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5782 LR: 0.000069 Step: 202900 Total Loss: 0.0411 Recon Loss: 0.0275 [03/30 12:02:49 TiTok]: Data (t): 0.0033, 56.72/s/gpu Batch (t): 0.6347 LR: 0.000069 Step: 203000 Total Loss: 0.0398 Recon Loss: 0.0283 [03/30 12:03:47 TiTok]: Data (t): 0.0033, 61.20/s/gpu Batch (t): 0.5882 LR: 0.000069 Step: 203100 Total Loss: 0.0416 Recon Loss: 0.0288 [03/30 12:04:44 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000069 Step: 203200 Total Loss: 0.0382 Recon Loss: 0.0255 [03/30 12:05:42 TiTok]: Data (t): 0.0033, 62.18/s/gpu Batch (t): 0.5790 LR: 0.000069 Step: 203300 Total Loss: 0.0383 Recon Loss: 0.0266 [03/30 12:06:40 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000069 Step: 203400 Total Loss: 0.0385 Recon Loss: 0.0286 [03/30 12:07:38 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000069 Step: 203500 Total Loss: 0.0391 Recon Loss: 0.0287 [03/30 12:08:35 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5759 LR: 0.000069 Step: 203600 Total Loss: 0.0399 Recon Loss: 0.0291 [03/30 12:09:33 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000069 Step: 203700 Total Loss: 0.0413 Recon Loss: 0.0292 [03/30 12:10:31 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000069 Step: 203800 Total Loss: 0.0388 Recon Loss: 0.0272 [03/30 12:11:28 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000069 Step: 203900 Total Loss: 0.0392 Recon Loss: 0.0267 [03/30 12:12:26 TiTok]: Data (t): 0.0033, 54.47/s/gpu Batch (t): 0.6609 LR: 0.000069 Step: 204000 Total Loss: 0.0386 Recon Loss: 0.0260 [03/30 12:13:24 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000069 Step: 204100 Total Loss: 0.0373 Recon Loss: 0.0270 [03/30 12:14:22 TiTok]: Data (t): 0.0033, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000069 Step: 204200 Total Loss: 0.0413 Recon Loss: 0.0296 [03/30 12:15:19 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000069 Step: 204300 Total Loss: 0.0434 Recon Loss: 0.0290 [03/30 12:16:17 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000069 Step: 204400 Total Loss: 0.0389 Recon Loss: 0.0268 [03/30 12:17:15 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000069 Step: 204500 Total Loss: 0.0376 Recon Loss: 0.0268 [03/30 12:18:12 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5750 LR: 0.000069 Step: 204600 Total Loss: 0.0400 Recon Loss: 0.0288 [03/30 12:19:10 TiTok]: Data (t): 0.0034, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000068 Step: 204700 Total Loss: 0.0396 Recon Loss: 0.0265 [03/30 12:20:08 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000068 Step: 204800 Total Loss: 0.0398 Recon Loss: 0.0281 [03/30 12:21:06 TiTok]: Data (t): 0.0033, 61.48/s/gpu Batch (t): 0.5856 LR: 0.000068 Step: 204900 Total Loss: 0.0384 Recon Loss: 0.0278 [03/30 12:22:03 TiTok]: Data (t): 0.0032, 56.77/s/gpu Batch (t): 0.6342 LR: 0.000068 Step: 205000 Total Loss: 0.0423 Recon Loss: 0.0287 [03/30 12:23:01 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000068 Step: 205100 Total Loss: 0.0402 Recon Loss: 0.0290 [03/30 12:23:59 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000068 Step: 205200 Total Loss: 0.0403 Recon Loss: 0.0282 [03/30 12:24:58 TiTok]: Data (t): 
0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000068 Step: 205300 Total Loss: 0.0386 Recon Loss: 0.0285 [03/30 12:25:55 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000068 Step: 205400 Total Loss: 0.0402 Recon Loss: 0.0271 [03/30 12:26:53 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000068 Step: 205500 Total Loss: 0.0403 Recon Loss: 0.0272 [03/30 12:27:51 TiTok]: Data (t): 0.0032, 58.82/s/gpu Batch (t): 0.6121 LR: 0.000068 Step: 205600 Total Loss: 0.0382 Recon Loss: 0.0284 [03/30 12:28:49 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000068 Step: 205700 Total Loss: 0.0381 Recon Loss: 0.0271 [03/30 12:29:46 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000068 Step: 205800 Total Loss: 0.0377 Recon Loss: 0.0287 [03/30 12:30:44 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000068 Step: 205900 Total Loss: 0.0403 Recon Loss: 0.0277 [03/30 12:31:42 TiTok]: Data (t): 0.0033, 56.78/s/gpu Batch (t): 0.6340 LR: 0.000068 Step: 206000 Total Loss: 0.0417 Recon Loss: 0.0285 [03/30 12:32:39 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000068 Step: 206100 Total Loss: 0.0379 Recon Loss: 0.0269 [03/30 12:33:37 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000068 Step: 206200 Total Loss: 0.0384 Recon Loss: 0.0277 [03/30 12:34:35 TiTok]: Data (t): 0.0032, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000068 Step: 206300 Total Loss: 0.0368 Recon Loss: 0.0260 [03/30 12:35:33 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000068 Step: 206400 Total Loss: 0.0383 Recon Loss: 0.0255 [03/30 12:36:30 TiTok]: Data (t): 0.0031, 62.71/s/gpu Batch (t): 0.5741 LR: 0.000068 Step: 206500 Total Loss: 0.0415 Recon Loss: 0.0295 [03/30 12:37:28 TiTok]: Data (t): 0.0033, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000068 Step: 206600 Total Loss: 0.0382 Recon Loss: 0.0279 [03/30 12:38:25 TiTok]: Data (t): 0.0031, 61.37/s/gpu Batch (t): 0.5866 LR: 0.000068 Step: 206700 Total Loss: 0.0411 Recon Loss: 0.0306 [03/30 12:39:23 TiTok]: Data (t): 0.0033, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000068 Step: 206800 Total Loss: 0.0407 Recon Loss: 0.0273 [03/30 12:40:22 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000068 Step: 206900 Total Loss: 0.0399 Recon Loss: 0.0264 [03/30 12:41:20 TiTok]: Data (t): 0.0032, 53.77/s/gpu Batch (t): 0.6696 LR: 0.000068 Step: 207000 Total Loss: 0.0397 Recon Loss: 0.0285 [03/30 12:42:18 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000068 Step: 207100 Total Loss: 0.0387 Recon Loss: 0.0286 [03/30 12:43:16 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000068 Step: 207200 Total Loss: 0.0390 Recon Loss: 0.0285 [03/30 12:44:13 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000068 Step: 207300 Total Loss: 0.0377 Recon Loss: 0.0276 [03/30 12:45:11 TiTok]: Data (t): 0.0034, 62.87/s/gpu Batch (t): 0.5726 LR: 0.000068 Step: 207400 Total Loss: 0.0422 Recon Loss: 0.0285 [03/30 12:46:09 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000068 Step: 207500 Total Loss: 0.0397 Recon Loss: 0.0292 [03/30 12:47:09 TiTok]: Data (t): 0.0033, 56.32/s/gpu Batch (t): 0.6392 LR: 0.000068 Step: 207600 Total Loss: 0.0409 Recon Loss: 0.0282 [03/30 12:48:07 TiTok]: Data (t): 0.0034, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000068 Step: 207700 Total Loss: 0.0394 Recon Loss: 0.0263 [03/30 12:49:04 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000068 Step: 207800 Total Loss: 0.0408 Recon Loss: 0.0298 [03/30 12:50:02 TiTok]: Data (t): 0.0033, 
61.63/s/gpu Batch (t): 0.5841 LR: 0.000068 Step: 207900 Total Loss: 0.0363 Recon Loss: 0.0281 [03/30 12:51:00 TiTok]: Data (t): 0.0032, 56.85/s/gpu Batch (t): 0.6333 LR: 0.000068 Step: 208000 Total Loss: 0.0373 Recon Loss: 0.0273 [03/30 12:51:58 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000068 Step: 208100 Total Loss: 0.0371 Recon Loss: 0.0261 [03/30 12:52:56 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000068 Step: 208200 Total Loss: 0.0402 Recon Loss: 0.0289 [03/30 12:53:54 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000067 Step: 208300 Total Loss: 0.0391 Recon Loss: 0.0294 [03/30 12:54:51 TiTok]: Data (t): 0.0032, 62.12/s/gpu Batch (t): 0.5795 LR: 0.000067 Step: 208400 Total Loss: 0.0419 Recon Loss: 0.0294 [03/30 12:55:49 TiTok]: Data (t): 0.0031, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000067 Step: 208500 Total Loss: 0.0395 Recon Loss: 0.0293 [03/30 12:56:47 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000067 Step: 208600 Total Loss: 0.0377 Recon Loss: 0.0282 [03/30 12:57:45 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000067 Step: 208700 Total Loss: 0.0418 Recon Loss: 0.0313 [03/30 12:58:43 TiTok]: Data (t): 0.0033, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000067 Step: 208800 Total Loss: 0.0374 Recon Loss: 0.0279 [03/30 12:59:40 TiTok]: Data (t): 0.0031, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000067 Step: 208900 Total Loss: 0.0400 Recon Loss: 0.0274 [03/30 13:00:39 TiTok]: Data (t): 0.0032, 56.55/s/gpu Batch (t): 0.6366 LR: 0.000067 Step: 209000 Total Loss: 0.0426 Recon Loss: 0.0267 [03/30 13:01:36 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000067 Step: 209100 Total Loss: 0.0394 Recon Loss: 0.0256 [03/30 13:02:34 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000067 Step: 209200 Total Loss: 0.0432 Recon Loss: 0.0301 [03/30 13:03:32 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000067 Step: 209300 Total Loss: 0.0399 Recon Loss: 0.0276 [03/30 13:04:30 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000067 Step: 209400 Total Loss: 0.0385 Recon Loss: 0.0259 [03/30 13:05:27 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000067 Step: 209500 Total Loss: 0.0366 Recon Loss: 0.0258 [03/30 13:06:25 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000067 Step: 209600 Total Loss: 0.0398 Recon Loss: 0.0276 [03/30 13:07:23 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000067 Step: 209700 Total Loss: 0.0412 Recon Loss: 0.0277 [03/30 13:08:21 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000067 Step: 209800 Total Loss: 0.0404 Recon Loss: 0.0290 [03/30 13:09:19 TiTok]: Data (t): 0.0034, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000067 Step: 209900 Total Loss: 0.0400 Recon Loss: 0.0279 [03/30 13:10:17 TiTok]: Data (t): 0.0031, 57.04/s/gpu Batch (t): 0.6311 LR: 0.000067 Step: 210000 Total Loss: 0.0390 Recon Loss: 0.0273 [03/30 13:10:19 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-210000 [03/30 13:10:33 TiTok]: Reconstructing images... 
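
These per-step records are regular enough to scrape for plotting. A minimal sketch, assuming the exact field layout shown in this log ("train.log" is a placeholder path and `parse_log` a hypothetical helper):

```python
import re

# One tuple per logged record, e.g.
# "... LR: 0.000067 Step: 210000 Total Loss: 0.0390 Recon Loss: 0.0273"
PATTERN = re.compile(
    r"LR:\s*([\d.]+)\s+Step:\s*(\d+)\s+"
    r"Total Loss:\s*([\d.]+)\s+Recon Loss:\s*([\d.]+)"
)

def parse_log(path):
    """Yield (step, lr, total_loss, recon_loss) from a log in this format."""
    with open(path) as f:
        for m in PATTERN.finditer(f.read()):
            lr, step, total, recon = m.groups()
            yield int(step), float(lr), float(total), float(recon)

records = list(parse_log("train.log"))  # placeholder path
```

Since `\s+` also matches newlines, records that happen to wrap across lines still parse.
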
[03/30 13:11:31 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000067 Step: 210100 Total Loss: 0.0394 Recon Loss: 0.0283 [03/30 13:12:29 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000067 Step: 210200 Total Loss: 0.0387 Recon Loss: 0.0288 [03/30 13:13:27 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000067 Step: 210300 Total Loss: 0.0354 Recon Loss: 0.0247 [03/30 13:14:25 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000067 Step: 210400 Total Loss: 0.0378 Recon Loss: 0.0262 [03/30 13:15:22 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000067 Step: 210500 Total Loss: 0.0416 Recon Loss: 0.0284 [03/30 13:16:20 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000067 Step: 210600 Total Loss: 0.0392 Recon Loss: 0.0283 [03/30 13:17:18 TiTok]: Data (t): 0.0032, 61.92/s/gpu Batch (t): 0.5814 LR: 0.000067 Step: 210700 Total Loss: 0.0384 Recon Loss: 0.0267 [03/30 13:18:16 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000067 Step: 210800 Total Loss: 0.0370 Recon Loss: 0.0272 [03/30 13:19:13 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000067 Step: 210900 Total Loss: 0.0377 Recon Loss: 0.0272 [03/30 13:20:11 TiTok]: Data (t): 0.0033, 52.69/s/gpu Batch (t): 0.6832 LR: 0.000067 Step: 211000 Total Loss: 0.0379 Recon Loss: 0.0280 [03/30 13:21:09 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000067 Step: 211100 Total Loss: 0.0372 Recon Loss: 0.0255 [03/30 13:22:07 TiTok]: Data (t): 0.0034, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000067 Step: 211200 Total Loss: 0.0373 Recon Loss: 0.0267 [03/30 13:23:04 TiTok]: Data (t): 0.0032, 62.86/s/gpu Batch (t): 0.5727 LR: 0.000067 Step: 211300 Total Loss: 0.0383 Recon Loss: 0.0275 [03/30 13:24:04 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000067 Step: 211400 Total Loss: 0.0393 Recon Loss: 0.0287 [03/30 13:25:02 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000067 Step: 211500 Total Loss: 0.0399 Recon Loss: 0.0274 [03/30 13:26:00 TiTok]: Data (t): 0.0033, 62.16/s/gpu Batch (t): 0.5791 LR: 0.000067 Step: 211600 Total Loss: 0.0370 Recon Loss: 0.0276 [03/30 13:26:57 TiTok]: Data (t): 0.0033, 61.64/s/gpu Batch (t): 0.5841 LR: 0.000067 Step: 211700 Total Loss: 0.0394 Recon Loss: 0.0279 [03/30 13:27:55 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000067 Step: 211800 Total Loss: 0.0406 Recon Loss: 0.0267 [03/30 13:28:54 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000067 Step: 211900 Total Loss: 0.0390 Recon Loss: 0.0278 [03/30 13:29:52 TiTok]: Data (t): 0.0033, 56.48/s/gpu Batch (t): 0.6373 LR: 0.000066 Step: 212000 Total Loss: 0.0363 Recon Loss: 0.0259 [03/30 13:30:51 TiTok]: Data (t): 0.0031, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000066 Step: 212100 Total Loss: 0.0407 Recon Loss: 0.0288 [03/30 13:31:49 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000066 Step: 212200 Total Loss: 0.0368 Recon Loss: 0.0276 [03/30 13:32:47 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000066 Step: 212300 Total Loss: 0.0389 Recon Loss: 0.0275 [03/30 13:33:44 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000066 Step: 212400 Total Loss: 0.0391 Recon Loss: 0.0278 [03/30 13:34:42 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000066 Step: 212500 Total Loss: 0.0423 Recon Loss: 0.0287 [03/30 13:35:40 TiTok]: Data (t): 0.0031, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000066 Step: 212600 Total Loss: 0.0388 Recon Loss: 0.0268 [03/30 13:36:38 
TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000066 Step: 212700 Total Loss: 0.0394 Recon Loss: 0.0275 [03/30 13:37:36 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000066 Step: 212800 Total Loss: 0.0399 Recon Loss: 0.0275 [03/30 13:38:33 TiTok]: Data (t): 0.0032, 62.08/s/gpu Batch (t): 0.5799 LR: 0.000066 Step: 212900 Total Loss: 0.0367 Recon Loss: 0.0258 [03/30 13:39:31 TiTok]: Data (t): 0.0032, 56.74/s/gpu Batch (t): 0.6345 LR: 0.000066 Step: 213000 Total Loss: 0.0372 Recon Loss: 0.0279 [03/30 13:40:29 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000066 Step: 213100 Total Loss: 0.0422 Recon Loss: 0.0300 [03/30 13:41:26 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000066 Step: 213200 Total Loss: 0.0371 Recon Loss: 0.0259 [03/30 13:42:24 TiTok]: Data (t): 0.0032, 59.65/s/gpu Batch (t): 0.6035 LR: 0.000066 Step: 213300 Total Loss: 0.0351 Recon Loss: 0.0264 [03/30 13:43:22 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000066 Step: 213400 Total Loss: 0.0406 Recon Loss: 0.0275 [03/30 13:44:20 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000066 Step: 213500 Total Loss: 0.0388 Recon Loss: 0.0284 [03/30 13:45:18 TiTok]: Data (t): 0.0031, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000066 Step: 213600 Total Loss: 0.0398 Recon Loss: 0.0260 [03/30 13:46:16 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000066 Step: 213700 Total Loss: 0.0399 Recon Loss: 0.0281 [03/30 13:47:13 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000066 Step: 213800 Total Loss: 0.0393 Recon Loss: 0.0274 [03/30 13:48:11 TiTok]: Data (t): 0.0032, 62.13/s/gpu Batch (t): 0.5795 LR: 0.000066 Step: 213900 Total Loss: 0.0364 Recon Loss: 0.0261 [03/30 13:49:09 TiTok]: Data (t): 0.0033, 56.53/s/gpu Batch (t): 0.6368 LR: 0.000066 Step: 214000 Total Loss: 0.0376 Recon Loss: 0.0286 [03/30 13:50:07 TiTok]: Data (t): 0.0034, 58.86/s/gpu Batch (t): 0.6116 LR: 0.000066 Step: 214100 Total Loss: 0.0394 Recon Loss: 0.0277 [03/30 13:51:04 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000066 Step: 214200 Total Loss: 0.0382 Recon Loss: 0.0263 [03/30 13:52:02 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000066 Step: 214300 Total Loss: 0.0369 Recon Loss: 0.0271 [03/30 13:53:01 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000066 Step: 214400 Total Loss: 0.0397 Recon Loss: 0.0279 [03/30 13:53:59 TiTok]: Data (t): 0.0033, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000066 Step: 214500 Total Loss: 0.0380 Recon Loss: 0.0265 [03/30 13:54:57 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000066 Step: 214600 Total Loss: 0.0398 Recon Loss: 0.0276 [03/30 13:55:55 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000066 Step: 214700 Total Loss: 0.0373 Recon Loss: 0.0265 [03/30 13:56:53 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5750 LR: 0.000066 Step: 214800 Total Loss: 0.0390 Recon Loss: 0.0285 [03/30 13:57:50 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000066 Step: 214900 Total Loss: 0.0383 Recon Loss: 0.0272 [03/30 13:58:48 TiTok]: Data (t): 0.0032, 56.90/s/gpu Batch (t): 0.6327 LR: 0.000066 Step: 215000 Total Loss: 0.0385 Recon Loss: 0.0289 [03/30 13:59:46 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000066 Step: 215100 Total Loss: 0.0370 Recon Loss: 0.0267 [03/30 14:00:44 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000066 Step: 215200 Total Loss: 0.0381 Recon Loss: 0.0284 [03/30 14:01:42 TiTok]: Data (t): 
0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000066 Step: 215300 Total Loss: 0.0383 Recon Loss: 0.0300 [03/30 14:02:39 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000066 Step: 215400 Total Loss: 0.0397 Recon Loss: 0.0283 [03/30 14:03:37 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000066 Step: 215500 Total Loss: 0.0395 Recon Loss: 0.0283 [03/30 14:04:35 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000065 Step: 215600 Total Loss: 0.0399 Recon Loss: 0.0279 [03/30 14:05:33 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000065 Step: 215700 Total Loss: 0.0413 Recon Loss: 0.0296 [03/30 14:06:31 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000065 Step: 215800 Total Loss: 0.0406 Recon Loss: 0.0279 [03/30 14:07:29 TiTok]: Data (t): 0.0032, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000065 Step: 215900 Total Loss: 0.0399 Recon Loss: 0.0292 [03/30 14:08:28 TiTok]: Data (t): 0.0033, 56.74/s/gpu Batch (t): 0.6344 LR: 0.000065 Step: 216000 Total Loss: 0.0398 Recon Loss: 0.0287 [03/30 14:09:26 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000065 Step: 216100 Total Loss: 0.0388 Recon Loss: 0.0272 [03/30 14:10:23 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000065 Step: 216200 Total Loss: 0.0390 Recon Loss: 0.0288 [03/30 14:11:21 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000065 Step: 216300 Total Loss: 0.0376 Recon Loss: 0.0268 [03/30 14:12:19 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000065 Step: 216400 Total Loss: 0.0393 Recon Loss: 0.0290 [03/30 14:13:17 TiTok]: Data (t): 0.0033, 62.09/s/gpu Batch (t): 0.5798 LR: 0.000065 Step: 216500 Total Loss: 0.0363 Recon Loss: 0.0256 [03/30 14:14:15 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000065 Step: 216600 Total Loss: 0.0391 Recon Loss: 0.0278 [03/30 14:15:14 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000065 Step: 216700 Total Loss: 0.0381 Recon Loss: 0.0278 [03/30 14:16:12 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000065 Step: 216800 Total Loss: 0.0385 Recon Loss: 0.0267 [03/30 14:17:09 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000065 Step: 216900 Total Loss: 0.0394 Recon Loss: 0.0272 [03/30 14:18:07 TiTok]: Data (t): 0.0032, 56.71/s/gpu Batch (t): 0.6348 LR: 0.000065 Step: 217000 Total Loss: 0.0385 Recon Loss: 0.0274 [03/30 14:19:05 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000065 Step: 217100 Total Loss: 0.0382 Recon Loss: 0.0269 [03/30 14:20:03 TiTok]: Data (t): 0.0033, 62.00/s/gpu Batch (t): 0.5807 LR: 0.000065 Step: 217200 Total Loss: 0.0398 Recon Loss: 0.0279 [03/30 14:21:01 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000065 Step: 217300 Total Loss: 0.0385 Recon Loss: 0.0255 [03/30 14:21:59 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000065 Step: 217400 Total Loss: 0.0383 Recon Loss: 0.0279 [03/30 14:22:56 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000065 Step: 217500 Total Loss: 0.0405 Recon Loss: 0.0292 [03/30 14:23:54 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000065 Step: 217600 Total Loss: 0.0385 Recon Loss: 0.0298 [03/30 14:24:52 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000065 Step: 217700 Total Loss: 0.0407 Recon Loss: 0.0295 [03/30 14:25:50 TiTok]: Data (t): 0.0045, 58.34/s/gpu Batch (t): 0.6170 LR: 0.000065 Step: 217800 Total Loss: 0.0383 Recon Loss: 0.0274 [03/30 14:26:47 TiTok]: Data (t): 0.0032, 
62.46/s/gpu Batch (t): 0.5764 LR: 0.000065 Step: 217900 Total Loss: 0.0394 Recon Loss: 0.0279 [03/30 14:27:45 TiTok]: Data (t): 0.0032, 56.65/s/gpu Batch (t): 0.6354 LR: 0.000065 Step: 218000 Total Loss: 0.0383 Recon Loss: 0.0271 [03/30 14:28:43 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000065 Step: 218100 Total Loss: 0.0380 Recon Loss: 0.0274 [03/30 14:29:41 TiTok]: Data (t): 0.0031, 61.39/s/gpu Batch (t): 0.5864 LR: 0.000065 Step: 218200 Total Loss: 0.0390 Recon Loss: 0.0286 [03/30 14:30:38 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000065 Step: 218300 Total Loss: 0.0403 Recon Loss: 0.0283 [03/30 14:31:36 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000065 Step: 218400 Total Loss: 0.0400 Recon Loss: 0.0293 [03/30 14:32:34 TiTok]: Data (t): 0.0031, 62.34/s/gpu Batch (t): 0.5774 LR: 0.000065 Step: 218500 Total Loss: 0.0411 Recon Loss: 0.0280 [03/30 14:33:32 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000065 Step: 218600 Total Loss: 0.0397 Recon Loss: 0.0288 [03/30 14:34:30 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000065 Step: 218700 Total Loss: 0.0394 Recon Loss: 0.0274 [03/30 14:35:28 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000065 Step: 218800 Total Loss: 0.0371 Recon Loss: 0.0260 [03/30 14:36:26 TiTok]: Data (t): 0.0031, 61.91/s/gpu Batch (t): 0.5815 LR: 0.000065 Step: 218900 Total Loss: 0.0401 Recon Loss: 0.0278 [03/30 14:37:25 TiTok]: Data (t): 0.0032, 56.56/s/gpu Batch (t): 0.6364 LR: 0.000065 Step: 219000 Total Loss: 0.0402 Recon Loss: 0.0293 [03/30 14:38:23 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000065 Step: 219100 Total Loss: 0.0396 Recon Loss: 0.0280 [03/30 14:39:21 TiTok]: Data (t): 0.0033, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000064 Step: 219200 Total Loss: 0.0425 Recon Loss: 0.0284 [03/30 14:40:18 TiTok]: Data (t): 0.0031, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000064 Step: 219300 Total Loss: 0.0399 Recon Loss: 0.0282 [03/30 14:41:16 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000064 Step: 219400 Total Loss: 0.0371 Recon Loss: 0.0279 [03/30 14:42:14 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000064 Step: 219500 Total Loss: 0.0399 Recon Loss: 0.0275 [03/30 14:43:12 TiTok]: Data (t): 0.0032, 60.52/s/gpu Batch (t): 0.5949 LR: 0.000064 Step: 219600 Total Loss: 0.0380 Recon Loss: 0.0255 [03/30 14:44:09 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000064 Step: 219700 Total Loss: 0.0379 Recon Loss: 0.0268 [03/30 14:45:07 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000064 Step: 219800 Total Loss: 0.0403 Recon Loss: 0.0273 [03/30 14:46:05 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000064 Step: 219900 Total Loss: 0.0398 Recon Loss: 0.0276 [03/30 14:47:03 TiTok]: Data (t): 0.0032, 56.61/s/gpu Batch (t): 0.6360 LR: 0.000064 Step: 220000 Total Loss: 0.0383 Recon Loss: 0.0267 [03/30 14:47:05 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-220000 [03/30 14:47:19 TiTok]: Reconstructing images... 
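
A timing pattern worth flagging: the batch time sits near 0.577 s on most lines but rises to roughly 0.63-0.70 s on almost every step that is a multiple of 1,000 (183,000 through 220,000 above). That cadence is consistent with this run's periodic every-1,000-step grad-norm logging rather than the input pipeline, since Data (t) never leaves the ~0.003 s range; the few off-grid spikes (e.g., 0.8698 s at step 188,800) look like ordinary I/O or straggler noise. A quick check over the raw log text, reusing this log's field layout (the helper name is hypothetical):

```python
import re
from statistics import mean

# Pair each record's batch time with its step number (format as in this log).
REC = re.compile(r"Batch \(t\):\s*([\d.]+).*?Step:\s*(\d+)", re.S)

def batch_time_by_cadence(text: str):
    """Mean batch time on the 1,000-step grid vs. all other logged steps."""
    on_grid, off_grid = [], []
    for m in REC.finditer(text):
        t, step = float(m.group(1)), int(m.group(2))
        (on_grid if step % 1000 == 0 else off_grid).append(t)
    return mean(on_grid), mean(off_grid)
```
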
[03/30 14:48:17 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000064 Step: 220100 Total Loss: 0.0359 Recon Loss: 0.0272 [03/30 14:49:15 TiTok]: Data (t): 0.0031, 62.83/s/gpu Batch (t): 0.5730 LR: 0.000064 Step: 220200 Total Loss: 0.0374 Recon Loss: 0.0295 [03/30 14:50:14 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000064 Step: 220300 Total Loss: 0.0372 Recon Loss: 0.0276 [03/30 14:51:12 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000064 Step: 220400 Total Loss: 0.0390 Recon Loss: 0.0288 [03/30 14:52:10 TiTok]: Data (t): 0.0032, 62.04/s/gpu Batch (t): 0.5803 LR: 0.000064 Step: 220500 Total Loss: 0.0378 Recon Loss: 0.0292 [03/30 14:53:08 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000064 Step: 220600 Total Loss: 0.0377 Recon Loss: 0.0282 [03/30 14:54:06 TiTok]: Data (t): 0.0031, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000064 Step: 220700 Total Loss: 0.0390 Recon Loss: 0.0276 [03/30 14:55:03 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000064 Step: 220800 Total Loss: 0.0399 Recon Loss: 0.0297 [03/30 14:56:01 TiTok]: Data (t): 0.0032, 61.22/s/gpu Batch (t): 0.5880 LR: 0.000064 Step: 220900 Total Loss: 0.0380 Recon Loss: 0.0285 [03/30 14:56:59 TiTok]: Data (t): 0.0031, 56.76/s/gpu Batch (t): 0.6343 LR: 0.000064 Step: 221000 Total Loss: 0.0376 Recon Loss: 0.0274 [03/30 14:57:57 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000064 Step: 221100 Total Loss: 0.0386 Recon Loss: 0.0280 [03/30 14:58:56 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000064 Step: 221200 Total Loss: 0.0373 Recon Loss: 0.0256 [03/30 14:59:54 TiTok]: Data (t): 0.0031, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000064 Step: 221300 Total Loss: 0.0374 Recon Loss: 0.0273 [03/30 15:00:52 TiTok]: Data (t): 0.0031, 61.97/s/gpu Batch (t): 0.5809 LR: 0.000064 Step: 221400 Total Loss: 0.0378 Recon Loss: 0.0270 [03/30 15:01:50 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000064 Step: 221500 Total Loss: 0.0380 Recon Loss: 0.0282 [03/30 15:02:48 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000064 Step: 221600 Total Loss: 0.0385 Recon Loss: 0.0269 [03/30 15:03:46 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000064 Step: 221700 Total Loss: 0.0387 Recon Loss: 0.0284 [03/30 15:04:44 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000064 Step: 221800 Total Loss: 0.0394 Recon Loss: 0.0272 [03/30 15:05:42 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000064 Step: 221900 Total Loss: 0.0411 Recon Loss: 0.0286 [03/30 15:06:40 TiTok]: Data (t): 0.0033, 51.30/s/gpu Batch (t): 0.7018 LR: 0.000064 Step: 222000 Total Loss: 0.0374 Recon Loss: 0.0268 [03/30 15:07:37 TiTok]: Data (t): 0.0031, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000064 Step: 222100 Total Loss: 0.0376 Recon Loss: 0.0271 [03/30 15:08:36 TiTok]: Data (t): 0.0051, 61.27/s/gpu Batch (t): 0.5875 LR: 0.000064 Step: 222200 Total Loss: 0.0401 Recon Loss: 0.0292 [03/30 15:09:34 TiTok]: Data (t): 0.0031, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000064 Step: 222300 Total Loss: 0.0392 Recon Loss: 0.0284 [03/30 15:10:31 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000064 Step: 222400 Total Loss: 0.0412 Recon Loss: 0.0286 [03/30 15:11:29 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000064 Step: 222500 Total Loss: 0.0377 Recon Loss: 0.0263 [03/30 15:12:27 TiTok]: Data (t): 0.0031, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000064 Step: 222600 Total Loss: 0.0399 Recon Loss: 0.0288 [03/30 15:13:25 
TiTok]: Data (t): 0.0031, 62.13/s/gpu Batch (t): 0.5795 LR: 0.000064 Step: 222700 Total Loss: 0.0412 Recon Loss: 0.0306 [03/30 15:14:23 TiTok]: Data (t): 0.0033, 61.95/s/gpu Batch (t): 0.5811 LR: 0.000063 Step: 222800 Total Loss: 0.0395 Recon Loss: 0.0276 [03/30 15:15:21 TiTok]: Data (t): 0.0031, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000063 Step: 222900 Total Loss: 0.0394 Recon Loss: 0.0279 [03/30 15:16:19 TiTok]: Data (t): 0.0032, 56.48/s/gpu Batch (t): 0.6374 LR: 0.000063 Step: 223000 Total Loss: 0.0381 Recon Loss: 0.0277 [03/30 15:17:17 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000063 Step: 223100 Total Loss: 0.0380 Recon Loss: 0.0264 [03/30 15:18:15 TiTok]: Data (t): 0.0032, 62.03/s/gpu Batch (t): 0.5804 LR: 0.000063 Step: 223200 Total Loss: 0.0381 Recon Loss: 0.0287 [03/30 15:19:13 TiTok]: Data (t): 0.0032, 62.10/s/gpu Batch (t): 0.5797 LR: 0.000063 Step: 223300 Total Loss: 0.0393 Recon Loss: 0.0270 [03/30 15:20:11 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000063 Step: 223400 Total Loss: 0.0389 Recon Loss: 0.0275 [03/30 15:21:10 TiTok]: Data (t): 0.0052, 57.32/s/gpu Batch (t): 0.6280 LR: 0.000063 Step: 223500 Total Loss: 0.0384 Recon Loss: 0.0269 [03/30 15:22:08 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000063 Step: 223600 Total Loss: 0.0408 Recon Loss: 0.0288 [03/30 15:23:06 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000063 Step: 223700 Total Loss: 0.0415 Recon Loss: 0.0295 [03/30 15:24:04 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000063 Step: 223800 Total Loss: 0.0396 Recon Loss: 0.0291 [03/30 15:25:02 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000063 Step: 223900 Total Loss: 0.0407 Recon Loss: 0.0296 [03/30 15:25:59 TiTok]: Data (t): 0.0032, 56.30/s/gpu Batch (t): 0.6394 LR: 0.000063 Step: 224000 Total Loss: 0.0372 Recon Loss: 0.0280 [03/30 15:26:57 TiTok]: Data (t): 0.0031, 62.28/s/gpu Batch (t): 0.5781 LR: 0.000063 Step: 224100 Total Loss: 0.0404 Recon Loss: 0.0284 [03/30 15:27:55 TiTok]: Data (t): 0.0031, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000063 Step: 224200 Total Loss: 0.0404 Recon Loss: 0.0292 [03/30 15:28:53 TiTok]: Data (t): 0.0031, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000063 Step: 224300 Total Loss: 0.0380 Recon Loss: 0.0275 [03/30 15:29:51 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000063 Step: 224400 Total Loss: 0.0399 Recon Loss: 0.0279 [03/30 15:30:48 TiTok]: Data (t): 0.0031, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000063 Step: 224500 Total Loss: 0.0381 Recon Loss: 0.0277 [03/30 15:31:46 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000063 Step: 224600 Total Loss: 0.0406 Recon Loss: 0.0290 [03/30 15:32:46 TiTok]: Data (t): 0.0032, 58.45/s/gpu Batch (t): 0.6159 LR: 0.000063 Step: 224700 Total Loss: 0.0381 Recon Loss: 0.0282 [03/30 15:33:44 TiTok]: Data (t): 0.0031, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000063 Step: 224800 Total Loss: 0.0410 Recon Loss: 0.0296 [03/30 15:34:42 TiTok]: Data (t): 0.0031, 59.17/s/gpu Batch (t): 0.6084 LR: 0.000063 Step: 224900 Total Loss: 0.0383 Recon Loss: 0.0282 [03/30 15:35:40 TiTok]: Data (t): 0.0032, 56.61/s/gpu Batch (t): 0.6359 LR: 0.000063 Step: 225000 Total Loss: 0.0395 Recon Loss: 0.0287 [03/30 15:36:38 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000063 Step: 225100 Total Loss: 0.0388 Recon Loss: 0.0276 [03/30 15:37:36 TiTok]: Data (t): 0.0031, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000063 Step: 225200 Total Loss: 0.0365 Recon Loss: 0.0271 [03/30 15:38:34 TiTok]: Data (t): 
0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000063 Step: 225300 Total Loss: 0.0406 Recon Loss: 0.0305 [03/30 15:39:31 TiTok]: Data (t): 0.0031, 62.26/s/gpu Batch (t): 0.5783 LR: 0.000063 Step: 225400 Total Loss: 0.0380 Recon Loss: 0.0272 [03/30 15:40:29 TiTok]: Data (t): 0.0032, 58.06/s/gpu Batch (t): 0.6200 LR: 0.000063 Step: 225500 Total Loss: 0.0396 Recon Loss: 0.0281 [03/30 15:41:28 TiTok]: Data (t): 0.0032, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000063 Step: 225600 Total Loss: 0.0390 Recon Loss: 0.0275 [03/30 15:42:26 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000063 Step: 225700 Total Loss: 0.0388 Recon Loss: 0.0278 [03/30 15:43:26 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000063 Step: 225800 Total Loss: 0.0398 Recon Loss: 0.0300 [03/30 15:44:24 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000063 Step: 225900 Total Loss: 0.0377 Recon Loss: 0.0271 [03/30 15:45:22 TiTok]: Data (t): 0.0032, 56.69/s/gpu Batch (t): 0.6350 LR: 0.000063 Step: 226000 Total Loss: 0.0398 Recon Loss: 0.0287 [03/30 15:46:20 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000063 Step: 226100 Total Loss: 0.0357 Recon Loss: 0.0273 [03/30 15:47:17 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000063 Step: 226200 Total Loss: 0.0390 Recon Loss: 0.0275 [03/30 15:48:15 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000062 Step: 226300 Total Loss: 0.0362 Recon Loss: 0.0258 [03/30 15:49:13 TiTok]: Data (t): 0.0034, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000062 Step: 226400 Total Loss: 0.0363 Recon Loss: 0.0263 [03/30 15:50:11 TiTok]: Data (t): 0.0034, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000062 Step: 226500 Total Loss: 0.0387 Recon Loss: 0.0279 [03/30 15:51:08 TiTok]: Data (t): 0.0034, 62.13/s/gpu Batch (t): 0.5795 LR: 0.000062 Step: 226600 Total Loss: 0.0363 Recon Loss: 0.0256 [03/30 15:52:06 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000062 Step: 226700 Total Loss: 0.0387 Recon Loss: 0.0283 [03/30 15:53:04 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000062 Step: 226800 Total Loss: 0.0385 Recon Loss: 0.0269 [03/30 15:54:02 TiTok]: Data (t): 0.0034, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000062 Step: 226900 Total Loss: 0.0370 Recon Loss: 0.0272 [03/30 15:54:59 TiTok]: Data (t): 0.0032, 56.63/s/gpu Batch (t): 0.6358 LR: 0.000062 Step: 227000 Total Loss: 0.0393 Recon Loss: 0.0272 [03/30 15:55:57 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000062 Step: 227100 Total Loss: 0.0395 Recon Loss: 0.0286 [03/30 15:56:55 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000062 Step: 227200 Total Loss: 0.0409 Recon Loss: 0.0303 [03/30 15:57:53 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000062 Step: 227300 Total Loss: 0.0390 Recon Loss: 0.0280 [03/30 15:58:50 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000062 Step: 227400 Total Loss: 0.0378 Recon Loss: 0.0279 [03/30 15:59:48 TiTok]: Data (t): 0.0034, 61.56/s/gpu Batch (t): 0.5848 LR: 0.000062 Step: 227500 Total Loss: 0.0393 Recon Loss: 0.0282 [03/30 16:00:46 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000062 Step: 227600 Total Loss: 0.0376 Recon Loss: 0.0283 [03/30 16:01:44 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000062 Step: 227700 Total Loss: 0.0404 Recon Loss: 0.0281 [03/30 16:02:42 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000062 Step: 227800 Total Loss: 0.0399 Recon Loss: 0.0269 [03/30 16:03:39 TiTok]: Data (t): 0.0032, 
62.35/s/gpu Batch (t): 0.5774 LR: 0.000062 Step: 227900 Total Loss: 0.0402 Recon Loss: 0.0280 [03/30 16:04:37 TiTok]: Data (t): 0.0031, 55.97/s/gpu Batch (t): 0.6432 LR: 0.000062 Step: 228000 Total Loss: 0.0395 Recon Loss: 0.0293 [03/30 16:05:36 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000062 Step: 228100 Total Loss: 0.0398 Recon Loss: 0.0279 [03/30 16:06:34 TiTok]: Data (t): 0.0056, 58.86/s/gpu Batch (t): 0.6116 LR: 0.000062 Step: 228200 Total Loss: 0.0377 Recon Loss: 0.0277 [03/30 16:07:32 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000062 Step: 228300 Total Loss: 0.0417 Recon Loss: 0.0296 [03/30 16:08:30 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000062 Step: 228400 Total Loss: 0.0403 Recon Loss: 0.0287 [03/30 16:09:28 TiTok]: Data (t): 0.0034, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000062 Step: 228500 Total Loss: 0.0393 Recon Loss: 0.0285 [03/30 16:10:26 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000062 Step: 228600 Total Loss: 0.0363 Recon Loss: 0.0275 [03/30 16:11:24 TiTok]: Data (t): 0.0031, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000062 Step: 228700 Total Loss: 0.0386 Recon Loss: 0.0273 [03/30 16:12:22 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000062 Step: 228800 Total Loss: 0.0398 Recon Loss: 0.0277 [03/30 16:13:20 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000062 Step: 228900 Total Loss: 0.0384 Recon Loss: 0.0298 [03/30 16:14:17 TiTok]: Data (t): 0.0032, 56.52/s/gpu Batch (t): 0.6369 LR: 0.000062 Step: 229000 Total Loss: 0.0394 Recon Loss: 0.0272 [03/30 16:15:15 TiTok]: Data (t): 0.0032, 62.90/s/gpu Batch (t): 0.5723 LR: 0.000062 Step: 229100 Total Loss: 0.0368 Recon Loss: 0.0264 [03/30 16:16:14 TiTok]: Data (t): 0.0031, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000062 Step: 229200 Total Loss: 0.0387 Recon Loss: 0.0279 [03/30 16:17:12 TiTok]: Data (t): 0.0031, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000062 Step: 229300 Total Loss: 0.0386 Recon Loss: 0.0261 [03/30 16:18:10 TiTok]: Data (t): 0.0032, 61.54/s/gpu Batch (t): 0.5850 LR: 0.000062 Step: 229400 Total Loss: 0.0387 Recon Loss: 0.0284 [03/30 16:19:08 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000062 Step: 229500 Total Loss: 0.0390 Recon Loss: 0.0277 [03/30 16:20:06 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000062 Step: 229600 Total Loss: 0.0377 Recon Loss: 0.0280 [03/30 16:21:04 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5760 LR: 0.000062 Step: 229700 Total Loss: 0.0378 Recon Loss: 0.0260 [03/30 16:22:02 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000062 Step: 229800 Total Loss: 0.0382 Recon Loss: 0.0258 [03/30 16:22:59 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000061 Step: 229900 Total Loss: 0.0402 Recon Loss: 0.0286 [03/30 16:23:57 TiTok]: Data (t): 0.0032, 56.57/s/gpu Batch (t): 0.6364 LR: 0.000061 Step: 230000 Total Loss: 0.0403 Recon Loss: 0.0274 [03/30 16:23:59 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-230000 [03/30 16:24:13 TiTok]: Reconstructing images... 
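
The raw Total Loss has bounced in roughly the 0.033-0.047 band for this entire stretch (0.0327 at step 199,100; 0.0469 at step 185,500), so any residual improvement is below the per-100-step noise. An exponential moving average over the parsed records makes the slow drift visible; a minimal sketch reusing `parse_log`/`records` from the earlier snippet:

```python
def ema(values, beta: float = 0.98):
    """Bias-corrected exponential moving average of a noisy loss curve."""
    out, m = [], 0.0
    for i, v in enumerate(values, start=1):
        m = beta * m + (1.0 - beta) * v
        out.append(m / (1.0 - beta ** i))  # correct the zero-init bias
    return out

steps = [r[0] for r in records]
smoothed = ema([r[2] for r in records])  # index 2 = Total Loss
```
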
[03/30 16:25:11 TiTok]: Data (t): 0.0031, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000061 Step: 230100 Total Loss: 0.0396 Recon Loss: 0.0286
[03/30 16:26:09 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000061 Step: 230200 Total Loss: 0.0376 Recon Loss: 0.0284
[03/30 16:27:08 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000061 Step: 230300 Total Loss: 0.0389 Recon Loss: 0.0261
[03/30 16:28:07 TiTok]: Data (t): 0.0031, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000061 Step: 230400 Total Loss: 0.0373 Recon Loss: 0.0270
[03/30 16:29:05 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000061 Step: 230500 Total Loss: 0.0383 Recon Loss: 0.0293
[03/30 16:30:02 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000061 Step: 230600 Total Loss: 0.0373 Recon Loss: 0.0269
[03/30 16:31:00 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000061 Step: 230700 Total Loss: 0.0420 Recon Loss: 0.0299
[03/30 16:31:58 TiTok]: Data (t): 0.0031, 61.48/s/gpu Batch (t): 0.5855 LR: 0.000061 Step: 230800 Total Loss: 0.0393 Recon Loss: 0.0289
[03/30 16:32:56 TiTok]: Data (t): 0.0031, 62.12/s/gpu Batch (t): 0.5795 LR: 0.000061 Step: 230900 Total Loss: 0.0387 Recon Loss: 0.0280
[03/30 16:33:54 TiTok]: Data (t): 0.0031, 56.63/s/gpu Batch (t): 0.6357 LR: 0.000061 Step: 231000 Total Loss: 0.0396 Recon Loss: 0.0296
[03/30 16:34:52 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000061 Step: 231100 Total Loss: 0.0395 Recon Loss: 0.0267
[03/30 16:35:49 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000061 Step: 231200 Total Loss: 0.0395 Recon Loss: 0.0266
[03/30 16:36:47 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000061 Step: 231300 Total Loss: 0.0398 Recon Loss: 0.0273
[03/30 16:37:45 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000061 Step: 231400 Total Loss: 0.0390 Recon Loss: 0.0280
[03/30 16:38:42 TiTok]: Data (t): 0.0032, 61.28/s/gpu Batch (t): 0.5875 LR: 0.000061 Step: 231500 Total Loss: 0.0398 Recon Loss: 0.0280
[03/30 16:39:41 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000061 Step: 231600 Total Loss: 0.0409 Recon Loss: 0.0301
[03/30 16:40:38 TiTok]: Data (t): 0.0033, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000061 Step: 231700 Total Loss: 0.0405 Recon Loss: 0.0288
[03/30 16:41:36 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000061 Step: 231800 Total Loss: 0.0379 Recon Loss: 0.0289
[03/30 16:42:34 TiTok]: Data (t): 0.0033, 62.67/s/gpu Batch (t): 0.5745 LR: 0.000061 Step: 231900 Total Loss: 0.0370 Recon Loss: 0.0278
[03/30 16:43:32 TiTok]: Data (t): 0.0033, 56.82/s/gpu Batch (t): 0.6335 LR: 0.000061 Step: 232000 Total Loss: 0.0384 Recon Loss: 0.0278
[03/30 16:44:29 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000061 Step: 232100 Total Loss: 0.0374 Recon Loss: 0.0247
[03/30 16:45:27 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000061 Step: 232200 Total Loss: 0.0380 Recon Loss: 0.0261
[03/30 16:46:25 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000061 Step: 232300 Total Loss: 0.0380 Recon Loss: 0.0280
[03/30 16:47:23 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000061 Step: 232400 Total Loss: 0.0397 Recon Loss: 0.0277
[03/30 16:48:21 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000061 Step: 232500 Total Loss: 0.0382 Recon Loss: 0.0285
[03/30 16:49:20 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000061 Step: 232600 Total Loss: 0.0397 Recon Loss: 0.0270
[03/30 16:50:18 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000061 Step: 232700 Total Loss: 0.0406 Recon Loss: 0.0286
[03/30 16:51:16 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000061 Step: 232800 Total Loss: 0.0382 Recon Loss: 0.0279
[03/30 16:52:14 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000061 Step: 232900 Total Loss: 0.0392 Recon Loss: 0.0277
[03/30 16:53:11 TiTok]: Data (t): 0.0032, 56.23/s/gpu Batch (t): 0.6402 LR: 0.000061 Step: 233000 Total Loss: 0.0381 Recon Loss: 0.0268
[03/30 16:54:09 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000061 Step: 233100 Total Loss: 0.0380 Recon Loss: 0.0269
[03/30 16:55:07 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000061 Step: 233200 Total Loss: 0.0352 Recon Loss: 0.0258
[03/30 16:56:05 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000061 Step: 233300 Total Loss: 0.0380 Recon Loss: 0.0269
[03/30 16:57:02 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000060 Step: 233400 Total Loss: 0.0398 Recon Loss: 0.0290
[03/30 16:58:00 TiTok]: Data (t): 0.0032, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000060 Step: 233500 Total Loss: 0.0386 Recon Loss: 0.0281
[03/30 16:59:00 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000060 Step: 233600 Total Loss: 0.0392 Recon Loss: 0.0278
[03/30 16:59:58 TiTok]: Data (t): 0.0031, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000060 Step: 233700 Total Loss: 0.0377 Recon Loss: 0.0286
[03/30 17:00:56 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000060 Step: 233800 Total Loss: 0.0406 Recon Loss: 0.0306
[03/30 17:01:53 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000060 Step: 233900 Total Loss: 0.0442 Recon Loss: 0.0315
[03/30 17:02:51 TiTok]: Data (t): 0.0032, 56.54/s/gpu Batch (t): 0.6367 LR: 0.000060 Step: 234000 Total Loss: 0.0383 Recon Loss: 0.0273
[03/30 17:03:49 TiTok]: Data (t): 0.0031, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000060 Step: 234100 Total Loss: 0.0395 Recon Loss: 0.0288
[03/30 17:04:47 TiTok]: Data (t): 0.0032, 61.50/s/gpu Batch (t): 0.5854 LR: 0.000060 Step: 234200 Total Loss: 0.0388 Recon Loss: 0.0261
[03/30 17:05:45 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000060 Step: 234300 Total Loss: 0.0393 Recon Loss: 0.0285
[03/30 17:06:43 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5782 LR: 0.000060 Step: 234400 Total Loss: 0.0363 Recon Loss: 0.0268
[03/30 17:07:41 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000060 Step: 234500 Total Loss: 0.0346 Recon Loss: 0.0267
[03/30 17:08:39 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000060 Step: 234600 Total Loss: 0.0378 Recon Loss: 0.0268
[03/30 17:09:36 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000060 Step: 234700 Total Loss: 0.0395 Recon Loss: 0.0283
[03/30 17:10:35 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000060 Step: 234800 Total Loss: 0.0374 Recon Loss: 0.0273
[03/30 17:11:34 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000060 Step: 234900 Total Loss: 0.0357 Recon Loss: 0.0264
[03/30 17:12:31 TiTok]: Data (t): 0.0031, 56.39/s/gpu Batch (t): 0.6384 LR: 0.000060 Step: 235000 Total Loss: 0.0363 Recon Loss: 0.0266
[03/30 17:13:30 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000060 Step: 235100 Total Loss: 0.0381 Recon Loss: 0.0271
[03/30 17:14:27 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000060 Step: 235200 Total Loss: 0.0404 Recon Loss: 0.0278
[03/30 17:15:25 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000060 Step: 235300 Total Loss: 0.0394 Recon Loss: 0.0276
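In these lines, Data (t) is the time spent waiting on the dataloader and Batch (t) the full step time; the images/s/gpu figure is simply the per-GPU batch size divided by the step time, so the throughput numbers can be checked directly against the config (the 8-GPU count is an inference from 288 / 36, not something the log prints):

# Hedged sketch: reproduce the logged throughput from the config values.
per_gpu_batch = 36          # training.per_gpu_batch_size
total_batch = 288           # from the "Total train batch size" line
n_gpus = total_batch // per_gpu_batch        # -> 8 (assumed plain data parallel)
batch_time = 0.5766         # seconds, a typical Batch (t) above

per_gpu_rate = per_gpu_batch / batch_time    # -> 62.44 images/s/gpu, as logged
global_rate = total_batch / batch_time       # -> ~499 images/s overall
steps_per_epoch = 1281167 / total_batch      # max_train_examples / total batch
print(f"{per_gpu_rate:.2f} img/s/gpu, {global_rate:.0f} img/s, "
      f"{steps_per_epoch:.0f} steps/epoch")  # 62.44 img/s/gpu, 499 img/s, 4449 steps/epoch

At roughly 0.58 s per step, the 100-step logging interval works out to the ~58 s spacing of the timestamps above.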
[03/30 17:16:23 TiTok]: Data (t): 0.0031, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000060 Step: 235400 Total Loss: 0.0392 Recon Loss: 0.0269
[03/30 17:17:21 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000060 Step: 235500 Total Loss: 0.0403 Recon Loss: 0.0278
[03/30 17:18:18 TiTok]: Data (t): 0.0032, 61.22/s/gpu Batch (t): 0.5880 LR: 0.000060 Step: 235600 Total Loss: 0.0388 Recon Loss: 0.0277
[03/30 17:19:16 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000060 Step: 235700 Total Loss: 0.0402 Recon Loss: 0.0306
[03/30 17:20:14 TiTok]: Data (t): 0.0033, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000060 Step: 235800 Total Loss: 0.0401 Recon Loss: 0.0276
[03/30 17:21:11 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000060 Step: 235900 Total Loss: 0.0390 Recon Loss: 0.0286
[03/30 17:22:09 TiTok]: Data (t): 0.0032, 56.68/s/gpu Batch (t): 0.6351 LR: 0.000060 Step: 236000 Total Loss: 0.0386 Recon Loss: 0.0278
[03/30 17:23:07 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000060 Step: 236100 Total Loss: 0.0391 Recon Loss: 0.0278
[03/30 17:24:05 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000060 Step: 236200 Total Loss: 0.0365 Recon Loss: 0.0264
[03/30 17:25:02 TiTok]: Data (t): 0.0031, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000060 Step: 236300 Total Loss: 0.0378 Recon Loss: 0.0262
[03/30 17:26:00 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000060 Step: 236400 Total Loss: 0.0382 Recon Loss: 0.0282
[03/30 17:26:58 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000060 Step: 236500 Total Loss: 0.0363 Recon Loss: 0.0275
[03/30 17:27:56 TiTok]: Data (t): 0.0031, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000060 Step: 236600 Total Loss: 0.0393 Recon Loss: 0.0278
[03/30 17:28:54 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000060 Step: 236700 Total Loss: 0.0380 Recon Loss: 0.0269
[03/30 17:29:51 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000060 Step: 236800 Total Loss: 0.0389 Recon Loss: 0.0269
[03/30 17:30:49 TiTok]: Data (t): 0.0031, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000059 Step: 236900 Total Loss: 0.0389 Recon Loss: 0.0269
[03/30 17:31:47 TiTok]: Data (t): 0.0031, 56.88/s/gpu Batch (t): 0.6329 LR: 0.000059 Step: 237000 Total Loss: 0.0377 Recon Loss: 0.0265
[03/30 17:32:45 TiTok]: Data (t): 0.0031, 59.92/s/gpu Batch (t): 0.6008 LR: 0.000059 Step: 237100 Total Loss: 0.0389 Recon Loss: 0.0294
[03/30 17:33:44 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000059 Step: 237200 Total Loss: 0.0361 Recon Loss: 0.0283
[03/30 17:34:26 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml
[03/30 17:34:26 TiTok]: Config:
experiment:
  project: stage2
  name: stage2
  output_dir: /mnt/books/train_stage2/order_32_stage2/
  max_train_examples: 1281167
  save_every: 10000
  eval_every: 1000000
  generate_every: 10000
  log_every: 100
  log_grad_norm_every: 1000
  resume: true
  logging_dir: /mnt/books/train_stage2/order_32_stage2/logs
model:
  vq_model:
    codebook_size: 4096
    token_size: 12
    use_l2_norm: true
    commitment_cost: 0.25
    vit_enc_model_size: large
    vit_dec_model_size: large
    vit_enc_patch_size: 16
    vit_dec_patch_size: 16
    num_latent_tokens: 32
    layers_x: 18
    layers_token: 2
    embedding_width: 1024
    width: 256
    finetune_decoder: true
    pretrained_tokenizer_weight: maskgit-vqgan-imagenet-f16-256.bin
losses:
  discriminator_start: 20000
  quantizer_weight: 0.0
  discriminator_factor: 1.0
  discriminator_weight: 0.02
  perceptual_loss: convnext_s
  perceptual_weight: 0.1
  reconstruction_loss: l2
  reconstruction_weight: 1.0
  lecam_regularization_weight: 0.001
dataset:
  params:
    train_shards_path_or_url: imagenet/imagenet1k-train-{0000..1023}.tar
    eval_shards_path_or_url: imagenet/imagenet1k-validation-{00..63}.tar
    num_workers_per_gpu: 12
  preprocessing:
    resize_shorter_edge: 256
    crop_size: 256
    random_crop: true
    random_flip: true
optimizer:
  name: adamw
  params:
    learning_rate: 0.0001
    discriminator_learning_rate: 0.0001
    beta1: 0.9
    beta2: 0.999
    weight_decay: 0.0001
lr_scheduler:
  scheduler: cosine
  params:
    learning_rate: ${optimizer.params.learning_rate}
    warmup_steps: 5000
    end_lr: 1.0e-05
training:
  gradient_accumulation_steps: 1
  per_gpu_batch_size: 36
  mixed_precision: fp16
  enable_tf32: true
  enable_wandb: true
  use_ema: true
  seed: 42
  max_train_steps: 500000
  num_generated_images: 2
  max_grad_norm: 1.0
config: configs/training/TiTok/stage2/titok_new.yaml
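This dump differs from the run's original configuration in one field: discriminator_weight is now 0.02 rather than 0.01. Going by the loss keys, the objective is presumably the usual VQGAN-style weighted sum; the sketch below is an assumption based on those keys, not TiTok's actual loss module (lecam_regularization_weight is omitted since LeCam regularization normally enters the discriminator's own loss):

# Hedged sketch of how the weights in the dump above plausibly combine.
# All float arguments are stand-ins for losses computed elsewhere.
RECON_W, PERCEPTUAL_W = 1.0, 0.1
DISC_FACTOR, DISC_W = 1.0, 0.02   # discriminator_weight was 0.01 before this restart
QUANT_W = 0.0                     # quantizer_weight: commitment term disabled here
DISC_START = 20_000               # discriminator_start

def generator_loss(l2: float, perceptual: float, g_adv: float,
                   quant: float, step: int) -> float:
    # The adversarial term only participates once the discriminator is active.
    adv = DISC_FACTOR * DISC_W * g_adv if step >= DISC_START else 0.0
    return RECON_W * l2 + PERCEPTUAL_W * perceptual + QUANT_W * quant + adv

Doubling the adversarial weight is consistent with what follows the resume below: Total Loss jumps into the 0.03-0.06 range and becomes noisier while Recon Loss stays near its previous level.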
[03/30 17:34:43 TiTok]: Creating model and loss module.
[03/30 17:34:50 TiTok]: Creating optimizers.
[03/30 17:34:50 TiTok]: Creating lr_schedulers.
[03/30 17:34:50 TiTok]: Creating dataloaders.
[03/30 17:34:50 TiTok]: Creating evaluator.
[03/30 17:34:51 TiTok]: Preparing model, optimizer and dataloaders
[03/30 17:34:52 TiTok]: ***** Running training *****
[03/30 17:34:52 TiTok]:  Num training steps = 500000
[03/30 17:34:52 TiTok]:  Gradient Accumulation steps = 1
[03/30 17:34:52 TiTok]:  Instantaneous batch size per gpu = 36
[03/30 17:34:52 TiTok]:  Total train batch size (w. parallel, distributed & accumulation) = 288
[03/30 17:34:52 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-200000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-190000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-180000']
[03/30 17:34:52 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-200000
[03/30 17:35:05 TiTok]: Resuming at global_step 200000
[03/30 17:36:14 TiTok]: Data (t): 0.0031, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000070 Step: 200100 Total Loss: 0.0406 Recon Loss: 0.0259
[03/30 17:37:12 TiTok]: Data (t): 0.0032, 62.95/s/gpu Batch (t): 0.5719 LR: 0.000070 Step: 200200 Total Loss: 0.0552 Recon Loss: 0.0327
[03/30 17:38:09 TiTok]: Data (t): 0.0032, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000070 Step: 200300 Total Loss: 0.0327 Recon Loss: 0.0248
[03/30 17:39:06 TiTok]: Data (t): 0.0031, 62.93/s/gpu Batch (t): 0.5720 LR: 0.000070 Step: 200400 Total Loss: 0.0541 Recon Loss: 0.0390
[03/30 17:40:04 TiTok]: Data (t): 0.0032, 62.84/s/gpu Batch (t): 0.5729 LR: 0.000070 Step: 200500 Total Loss: 0.0492 Recon Loss: 0.0282
[03/30 17:41:02 TiTok]: Data (t): 0.0033, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000070 Step: 200600 Total Loss: 0.0353 Recon Loss: 0.0258
[03/30 17:41:59 TiTok]: Data (t): 0.0032, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000070 Step: 200700 Total Loss: 0.0445 Recon Loss: 0.0352
[03/30 17:42:57 TiTok]: Data (t): 0.0032, 62.92/s/gpu Batch (t): 0.5721 LR: 0.000070 Step: 200800 Total Loss: 0.0408 Recon Loss: 0.0267
[03/30 17:43:54 TiTok]: Data (t): 0.0032, 62.83/s/gpu Batch (t): 0.5730 LR: 0.000070 Step: 200900 Total Loss: 0.0558 Recon Loss: 0.0371
[03/30 17:44:52 TiTok]: Data (t): 0.0032, 56.71/s/gpu Batch (t): 0.6348 LR: 0.000069 Step: 201000 Total Loss: 0.0579 Recon Loss: 0.0334
[03/30 17:45:49 TiTok]: Data (t): 0.0032, 62.74/s/gpu Batch (t): 0.5738 LR: 0.000069 Step: 201100 Total Loss: 0.0550 Recon Loss: 0.0335
[03/30 17:46:47 TiTok]: Data (t): 0.0032, 62.83/s/gpu Batch (t): 0.5730 LR: 0.000069 Step: 201200 Total Loss: 0.0434 Recon Loss: 0.0259
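Note the learning rate: it resumes at 0.000070 because the scheduler is rebuilt from the loaded global step, and both that value and the 0.000063 seen near step 225300 in the earlier segment drop out of the cosine schedule in the config. A quick check, assuming a standard linear-warmup cosine decay to end_lr (the exact curve TiTok uses may differ slightly):

import math

# Config values: peak LR 1e-4, end_lr 1e-5, warmup_steps 5000, max_train_steps 500000.
PEAK, END, WARMUP, TOTAL = 1e-4, 1e-5, 5_000, 500_000

def lr_at(step: int) -> float:
    if step < WARMUP:                        # linear warmup from 0 to the peak
        return PEAK * step / WARMUP
    t = (step - WARMUP) / (TOTAL - WARMUP)   # progress through the cosine phase
    return END + 0.5 * (PEAK - END) * (1.0 + math.cos(math.pi * t))

print(f"{lr_at(200100):.6f}")  # 0.000070, matching the first line after the resume
print(f"{lr_at(225300):.6f}")  # 0.000063, matching the earlier segment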
[03/30 17:47:44 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 201300 Total Loss: 0.0352 Recon Loss: 0.0256
[03/30 17:48:42 TiTok]: Data (t): 0.0032, 62.94/s/gpu Batch (t): 0.5720 LR: 0.000069 Step: 201400 Total Loss: 0.0540 Recon Loss: 0.0343
[03/30 17:49:39 TiTok]: Data (t): 0.0033, 59.95/s/gpu Batch (t): 0.6005 LR: 0.000069 Step: 201500 Total Loss: 0.0333 Recon Loss: 0.0261
[03/30 17:50:36 TiTok]: Data (t): 0.0032, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000069 Step: 201600 Total Loss: 0.0526 Recon Loss: 0.0366
[03/30 17:51:34 TiTok]: Data (t): 0.0069, 61.96/s/gpu Batch (t): 0.5810 LR: 0.000069 Step: 201700 Total Loss: 0.0510 Recon Loss: 0.0313
[03/30 17:52:31 TiTok]: Data (t): 0.0032, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000069 Step: 201800 Total Loss: 0.0368 Recon Loss: 0.0275
[03/30 17:53:29 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000069 Step: 201900 Total Loss: 0.0434 Recon Loss: 0.0325
[03/30 17:54:26 TiTok]: Data (t): 0.0031, 56.60/s/gpu Batch (t): 0.6360 LR: 0.000069 Step: 202000 Total Loss: 0.0558 Recon Loss: 0.0337
[03/30 17:55:24 TiTok]: Data (t): 0.0031, 62.77/s/gpu Batch (t): 0.5735 LR: 0.000069 Step: 202100 Total Loss: 0.0423 Recon Loss: 0.0274
[03/30 17:56:21 TiTok]: Data (t): 0.0032, 62.70/s/gpu Batch (t): 0.5741 LR: 0.000069 Step: 202200 Total Loss: 0.0457 Recon Loss: 0.0342
[03/30 17:57:18 TiTok]: Data (t): 0.0032, 62.84/s/gpu Batch (t): 0.5729 LR: 0.000069 Step: 202300 Total Loss: 0.0516 Recon Loss: 0.0287
[03/30 17:58:16 TiTok]: Data (t): 0.0032, 62.84/s/gpu Batch (t): 0.5729 LR: 0.000069 Step: 202400 Total Loss: 0.0312 Recon Loss: 0.0265
[03/30 17:59:13 TiTok]: Data (t): 0.0032, 62.82/s/gpu Batch (t): 0.5730 LR: 0.000069 Step: 202500 Total Loss: 0.0568 Recon Loss: 0.0333
[03/30 18:00:11 TiTok]: Data (t): 0.0032, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000069 Step: 202600 Total Loss: 0.0430 Recon Loss: 0.0316
[03/30 18:01:08 TiTok]: Data (t): 0.0031, 62.83/s/gpu Batch (t): 0.5730 LR: 0.000069 Step: 202700 Total Loss: 0.0444 Recon Loss: 0.0259
[03/30 18:02:06 TiTok]: Data (t): 0.0032, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000069 Step: 202800 Total Loss: 0.0416 Recon Loss: 0.0347
[03/30 18:03:03 TiTok]: Data (t): 0.0032, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000069 Step: 202900 Total Loss: 0.0498 Recon Loss: 0.0293
[03/30 18:04:01 TiTok]: Data (t): 0.0032, 57.09/s/gpu Batch (t): 0.6306 LR: 0.000069 Step: 203000 Total Loss: 0.0397 Recon Loss: 0.0252
[03/30 18:04:58 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000069 Step: 203100 Total Loss: 0.0328 Recon Loss: 0.0268
[03/30 18:05:56 TiTok]: Data (t): 0.0032, 62.87/s/gpu Batch (t): 0.5727 LR: 0.000069 Step: 203200 Total Loss: 0.0550 Recon Loss: 0.0317
[03/30 18:06:53 TiTok]: Data (t): 0.0032, 62.85/s/gpu Batch (t): 0.5727 LR: 0.000069 Step: 203300 Total Loss: 0.0377 Recon Loss: 0.0275
[03/30 18:07:50 TiTok]: Data (t): 0.0032, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000069 Step: 203400 Total Loss: 0.0530 Recon Loss: 0.0304
[03/30 18:08:48 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 203500 Total Loss: 0.0341 Recon Loss: 0.0286
[03/30 18:09:45 TiTok]: Data (t): 0.0031, 62.80/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 203600 Total Loss: 0.0556 Recon Loss: 0.0312
[03/30 18:10:43 TiTok]: Data (t): 0.0031, 62.86/s/gpu Batch (t): 0.5727 LR: 0.000069 Step: 203700 Total Loss: 0.0424 Recon Loss: 0.0269
[03/30 18:11:40 TiTok]: Data (t): 0.0039, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000069 Step: 203800 Total Loss: 0.0603 Recon Loss: 0.0353
[03/30 18:12:38 TiTok]: Data (t): 0.0032, 62.83/s/gpu Batch (t): 0.5730 LR: 0.000069 Step: 203900 Total Loss: 0.0395 Recon Loss: 0.0275
[03/30 18:13:36 TiTok]: Data (t): 0.0034, 56.61/s/gpu Batch (t): 0.6360 LR: 0.000069 Step: 204000 Total Loss: 0.0480 Recon Loss: 0.0278
[03/30 18:14:33 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000069 Step: 204100 Total Loss: 0.0469 Recon Loss: 0.0270
[03/30 18:15:31 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000069 Step: 204200 Total Loss: 0.0500 Recon Loss: 0.0285
[03/30 18:16:29 TiTok]: Data (t): 0.0033, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000069 Step: 204300 Total Loss: 0.0413 Recon Loss: 0.0266
[03/30 18:17:27 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000069 Step: 204400 Total Loss: 0.0560 Recon Loss: 0.0377
[03/30 18:18:26 TiTok]: Data (t): 0.0033, 61.97/s/gpu Batch (t): 0.5810 LR: 0.000069 Step: 204500 Total Loss: 0.0570 Recon Loss: 0.0333
[03/30 18:19:25 TiTok]: Data (t): 0.0033, 62.78/s/gpu Batch (t): 0.5734 LR: 0.000069 Step: 204600 Total Loss: 0.0428 Recon Loss: 0.0334
[03/30 18:20:22 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000068 Step: 204700 Total Loss: 0.0567 Recon Loss: 0.0323
[03/30 18:21:20 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000068 Step: 204800 Total Loss: 0.0445 Recon Loss: 0.0255
[03/30 18:22:18 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000068 Step: 204900 Total Loss: 0.0479 Recon Loss: 0.0361
[03/30 18:23:15 TiTok]: Data (t): 0.0031, 56.93/s/gpu Batch (t): 0.6323 LR: 0.000068 Step: 205000 Total Loss: 0.0607 Recon Loss: 0.0353
[03/30 18:24:13 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000068 Step: 205100 Total Loss: 0.0535 Recon Loss: 0.0314
[03/30 18:25:11 TiTok]: Data (t): 0.0033, 61.28/s/gpu Batch (t): 0.5874 LR: 0.000068 Step: 205200 Total Loss: 0.0408 Recon Loss: 0.0256
[03/30 18:26:09 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000068 Step: 205300 Total Loss: 0.0404 Recon Loss: 0.0264
[03/30 18:27:07 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000068 Step: 205400 Total Loss: 0.0414 Recon Loss: 0.0323
[03/30 18:28:05 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000068 Step: 205500 Total Loss: 0.0410 Recon Loss: 0.0278
[03/30 18:29:03 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000068 Step: 205600 Total Loss: 0.0560 Recon Loss: 0.0316
[03/30 18:30:00 TiTok]: Data (t): 0.0032, 61.88/s/gpu Batch (t): 0.5818 LR: 0.000068 Step: 205700 Total Loss: 0.0380 Recon Loss: 0.0257
[03/30 18:30:58 TiTok]: Data (t): 0.0032, 62.67/s/gpu Batch (t): 0.5745 LR: 0.000068 Step: 205800 Total Loss: 0.0544 Recon Loss: 0.0360
[03/30 18:31:56 TiTok]: Data (t): 0.0032, 62.66/s/gpu Batch (t): 0.5746 LR: 0.000068 Step: 205900 Total Loss: 0.0526 Recon Loss: 0.0288
[03/30 18:32:53 TiTok]: Data (t): 0.0032, 56.34/s/gpu Batch (t): 0.6390 LR: 0.000068 Step: 206000 Total Loss: 0.0383 Recon Loss: 0.0258
[03/30 18:33:51 TiTok]: Data (t): 0.0032, 61.45/s/gpu Batch (t): 0.5858 LR: 0.000068 Step: 206100 Total Loss: 0.0538 Recon Loss: 0.0294
[03/30 18:34:48 TiTok]: Data (t): 0.0031, 62.74/s/gpu Batch (t): 0.5738 LR: 0.000068 Step: 206200 Total Loss: 0.0481 Recon Loss: 0.0279
[03/30 18:35:46 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000068 Step: 206300 Total Loss: 0.0462 Recon Loss: 0.0311
[03/30 18:36:43 TiTok]: Data (t): 0.0032, 62.71/s/gpu Batch (t): 0.5741 LR: 0.000068 Step: 206400 Total Loss: 0.0464 Recon Loss: 0.0277
[03/30 18:37:41 TiTok]: Data (t): 0.0031, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000068 Step: 206500 Total Loss: 0.0447 Recon Loss: 0.0324
[03/30 18:38:38 TiTok]: Data (t): 0.0032, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000068 Step: 206600 Total Loss: 0.0556 Recon Loss: 0.0342
[03/30 18:39:36 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000068 Step: 206700 Total Loss: 0.0501 Recon Loss: 0.0285
[03/30 18:40:33 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000068 Step: 206800 Total Loss: 0.0401 Recon Loss: 0.0264
[03/30 18:41:31 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000068 Step: 206900 Total Loss: 0.0399 Recon Loss: 0.0281
[03/30 18:42:29 TiTok]: Data (t): 0.0032, 57.05/s/gpu Batch (t): 0.6311 LR: 0.000068 Step: 207000 Total Loss: 0.0541 Recon Loss: 0.0309
[03/30 18:43:26 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000068 Step: 207100 Total Loss: 0.0376 Recon Loss: 0.0276
[03/30 18:44:24 TiTok]: Data (t): 0.0031, 62.67/s/gpu Batch (t): 0.5745 LR: 0.000068 Step: 207200 Total Loss: 0.0463 Recon Loss: 0.0321
[03/30 18:45:22 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000068 Step: 207300 Total Loss: 0.0535 Recon Loss: 0.0305
[03/30 18:46:20 TiTok]: Data (t): 0.0033, 61.61/s/gpu Batch (t): 0.5843 LR: 0.000068 Step: 207400 Total Loss: 0.0398 Recon Loss: 0.0292
[03/30 18:47:17 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000068 Step: 207500 Total Loss: 0.0449 Recon Loss: 0.0296
[03/30 18:48:16 TiTok]: Data (t): 0.0058, 58.08/s/gpu Batch (t): 0.6198 LR: 0.000068 Step: 207600 Total Loss: 0.0458 Recon Loss: 0.0305
[03/30 18:49:14 TiTok]: Data (t): 0.0031, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000068 Step: 207700 Total Loss: 0.0314 Recon Loss: 0.0265
[03/30 18:50:12 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000068 Step: 207800 Total Loss: 0.0587 Recon Loss: 0.0348
[03/30 18:51:10 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000068 Step: 207900 Total Loss: 0.0477 Recon Loss: 0.0291
[03/30 18:52:08 TiTok]: Data (t): 0.0032, 56.66/s/gpu Batch (t): 0.6354 LR: 0.000068 Step: 208000 Total Loss: 0.0540 Recon Loss: 0.0310
[03/30 18:53:06 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000068 Step: 208100 Total Loss: 0.0331 Recon Loss: 0.0269
[03/30 18:54:03 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000068 Step: 208200 Total Loss: 0.0589 Recon Loss: 0.0344
[03/30 18:55:01 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000067 Step: 208300 Total Loss: 0.0463 Recon Loss: 0.0273
[03/30 18:55:59 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000067 Step: 208400 Total Loss: 0.0512 Recon Loss: 0.0328
[03/30 18:56:57 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000067 Step: 208500 Total Loss: 0.0424 Recon Loss: 0.0268
[03/30 18:57:54 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000067 Step: 208600 Total Loss: 0.0425 Recon Loss: 0.0312
[03/30 18:58:52 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000067 Step: 208700 Total Loss: 0.0497 Recon Loss: 0.0277
[03/30 18:59:50 TiTok]: Data (t): 0.0031, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000067 Step: 208800 Total Loss: 0.0455 Recon Loss: 0.0329
[03/30 19:00:47 TiTok]: Data (t): 0.0030, 63.07/s/gpu Batch (t): 0.5708 LR: 0.000067 Step: 208900 Total Loss: 0.0466 Recon Loss: 0.0275
[03/30 19:01:47 TiTok]: Data (t): 0.0032, 53.53/s/gpu Batch (t): 0.6726 LR: 0.000067 Step: 209000 Total Loss: 0.0334 Recon Loss: 0.0305
[03/30 19:02:45 TiTok]: Data (t): 0.0033, 61.74/s/gpu Batch (t): 0.5831 LR: 0.000067 Step: 209100 Total Loss: 0.0512 Recon Loss: 0.0296
[03/30 19:03:42 TiTok]: Data (t): 0.0031, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000067 Step: 209200 Total Loss: 0.0535 Recon Loss: 0.0319
[03/30 19:04:40 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000067 Step: 209300 Total Loss: 0.0448 Recon Loss: 0.0301
[03/30 19:05:38 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000067 Step: 209400 Total Loss: 0.0485 Recon Loss: 0.0265
[03/30 19:06:36 TiTok]: Data (t): 0.0031, 62.03/s/gpu Batch (t): 0.5803 LR: 0.000067 Step: 209500 Total Loss: 0.0343 Recon Loss: 0.0284
[03/30 19:07:34 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000067 Step: 209600 Total Loss: 0.0507 Recon Loss: 0.0281
[03/30 19:08:32 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000067 Step: 209700 Total Loss: 0.0373 Recon Loss: 0.0299
[03/30 19:09:30 TiTok]: Data (t): 0.0032, 61.88/s/gpu Batch (t): 0.5818 LR: 0.000067 Step: 209800 Total Loss: 0.0536 Recon Loss: 0.0316
[03/30 19:10:27 TiTok]: Data (t): 0.0032, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000067 Step: 209900 Total Loss: 0.0365 Recon Loss: 0.0262
[03/30 19:11:25 TiTok]: Data (t): 0.0031, 56.84/s/gpu Batch (t): 0.6333 LR: 0.000067 Step: 210000 Total Loss: 0.0560 Recon Loss: 0.0352
[03/30 19:11:27 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-210000
[03/30 19:11:41 TiTok]: Reconstructing images...
[03/30 19:12:40 TiTok]: Data (t): 0.0033, 61.80/s/gpu Batch (t): 0.5825 LR: 0.000067 Step: 210100 Total Loss: 0.0465 Recon Loss: 0.0266
[03/30 19:13:37 TiTok]: Data (t): 0.0040, 62.67/s/gpu Batch (t): 0.5745 LR: 0.000067 Step: 210200 Total Loss: 0.0474 Recon Loss: 0.0311
[03/30 19:14:35 TiTok]: Data (t): 0.0032, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000067 Step: 210300 Total Loss: 0.0350 Recon Loss: 0.0310
[03/30 19:15:33 TiTok]: Data (t): 0.0032, 62.67/s/gpu Batch (t): 0.5744 LR: 0.000067 Step: 210400 Total Loss: 0.0506 Recon Loss: 0.0286
[03/30 19:16:30 TiTok]: Data (t): 0.0033, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000067 Step: 210500 Total Loss: 0.0435 Recon Loss: 0.0292
[03/30 19:17:28 TiTok]: Data (t): 0.0032, 62.67/s/gpu Batch (t): 0.5744 LR: 0.000067 Step: 210600 Total Loss: 0.0515 Recon Loss: 0.0295
[03/30 19:18:25 TiTok]: Data (t): 0.0032, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000067 Step: 210700 Total Loss: 0.0350 Recon Loss: 0.0291
[03/30 19:19:23 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000067 Step: 210800 Total Loss: 0.0406 Recon Loss: 0.0245
[03/30 19:20:20 TiTok]: Data (t): 0.0032, 62.70/s/gpu Batch (t): 0.5742 LR: 0.000067 Step: 210900 Total Loss: 0.0469 Recon Loss: 0.0309
[03/30 19:21:19 TiTok]: Data (t): 0.0032, 50.83/s/gpu Batch (t): 0.7082 LR: 0.000067 Step: 211000 Total Loss: 0.0476 Recon Loss: 0.0266
[03/30 19:22:17 TiTok]: Data (t): 0.0031, 62.69/s/gpu Batch (t): 0.5743 LR: 0.000067 Step: 211100 Total Loss: 0.0550 Recon Loss: 0.0334
[03/30 19:23:15 TiTok]: Data (t): 0.0035, 62.71/s/gpu Batch (t): 0.5741 LR: 0.000067 Step: 211200 Total Loss: 0.0450 Recon Loss: 0.0256
[03/30 19:24:12 TiTok]: Data (t): 0.0031, 65.26/s/gpu Batch (t): 0.5517 LR: 0.000067 Step: 211300 Total Loss: 0.0390 Recon Loss: 0.0309
[03/30 19:25:10 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000067 Step: 211400 Total Loss: 0.0539 Recon Loss: 0.0306
[03/30 19:26:08 TiTok]: Data (t): 0.0032, 62.68/s/gpu Batch (t): 0.5744 LR: 0.000067 Step: 211500 Total Loss: 0.0317 Recon Loss: 0.0276
[03/30 19:27:05 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000067 Step: 211600 Total Loss: 0.0566 Recon Loss: 0.0316
[03/30 19:28:03 TiTok]: Data (t): 0.0032, 62.69/s/gpu Batch (t): 0.5743 LR: 0.000067 Step: 211700 Total Loss: 0.0414 Recon Loss: 0.0255
[03/30 19:29:00 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000067 Step: 211800 Total Loss: 0.0452 Recon Loss: 0.0306
[03/30 19:29:58 TiTok]: Data (t): 0.0032, 62.71/s/gpu Batch (t): 0.5741 LR: 0.000067 Step: 211900 Total Loss: 0.0465 Recon Loss: 0.0255
[03/30 19:30:56 TiTok]: Data (t): 0.0031, 57.04/s/gpu Batch (t): 0.6312 LR: 0.000066 Step: 212000 Total Loss: 0.0431 Recon Loss: 0.0313
[03/30 19:31:53 TiTok]: Data (t): 0.0032, 62.80/s/gpu Batch (t): 0.5732 LR: 0.000066 Step: 212100 Total Loss: 0.0474 Recon Loss: 0.0279
[03/30 19:32:51 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000066 Step: 212200 Total Loss: 0.0398 Recon Loss: 0.0274
[03/30 19:33:49 TiTok]: Data (t): 0.0032, 62.13/s/gpu Batch (t): 0.5795 LR: 0.000066 Step: 212300 Total Loss: 0.0537 Recon Loss: 0.0324
[03/30 19:34:47 TiTok]: Data (t): 0.0032, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000066 Step: 212400 Total Loss: 0.0381 Recon Loss: 0.0268
[03/30 19:35:45 TiTok]: Data (t): 0.0033, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000066 Step: 212500 Total Loss: 0.0457 Recon Loss: 0.0363
[03/30 19:36:43 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000066 Step: 212600 Total Loss: 0.0544 Recon Loss: 0.0285
[03/30 19:37:41 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000066 Step: 212700 Total Loss: 0.0344 Recon Loss: 0.0265
[03/30 19:38:38 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000066 Step: 212800 Total Loss: 0.0524 Recon Loss: 0.0282
[03/30 19:39:36 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000066 Step: 212900 Total Loss: 0.0308 Recon Loss: 0.0269
[03/30 19:40:34 TiTok]: Data (t): 0.0032, 55.51/s/gpu Batch (t): 0.6486 LR: 0.000066 Step: 213000 Total Loss: 0.0472 Recon Loss: 0.0337
[03/30 19:41:32 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000066 Step: 213100 Total Loss: 0.0383 Recon Loss: 0.0288
[03/30 19:42:30 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000066 Step: 213200 Total Loss: 0.0516 Recon Loss: 0.0280
[03/30 19:43:28 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000066 Step: 213300 Total Loss: 0.0344 Recon Loss: 0.0291
[03/30 19:44:28 TiTok]: Data (t): 0.0034, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000066 Step: 213400 Total Loss: 0.0491 Recon Loss: 0.0315
[03/30 19:45:26 TiTok]: Data (t): 0.0034, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000066 Step: 213500 Total Loss: 0.0388 Recon Loss: 0.0279
[03/30 19:46:24 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000066 Step: 213600 Total Loss: 0.0369 Recon Loss: 0.0264
[03/30 19:47:22 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000066 Step: 213700 Total Loss: 0.0498 Recon Loss: 0.0382
[03/30 19:48:19 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000066 Step: 213800 Total Loss: 0.0547 Recon Loss: 0.0332
[03/30 19:49:17 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000066 Step: 213900 Total Loss: 0.0507 Recon Loss: 0.0289
[03/30 19:50:15 TiTok]: Data (t): 0.0032, 56.43/s/gpu Batch (t): 0.6380 LR: 0.000066 Step: 214000 Total Loss: 0.0428 Recon Loss: 0.0252
[03/30 19:51:13 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5759 LR: 0.000066 Step: 214100 Total Loss: 0.0341 Recon Loss: 0.0275
[03/30 19:52:11 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000066 Step: 214200 Total Loss: 0.0474 Recon Loss: 0.0279
[03/30 19:53:09 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000066 Step: 214300 Total Loss: 0.0381 Recon Loss: 0.0301
[03/30 19:54:06 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000066 Step: 214400 Total Loss: 0.0534 Recon Loss: 0.0305
[03/30 19:55:04 TiTok]: Data (t): 0.0034, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000066 Step: 214500 Total Loss: 0.0382 Recon Loss: 0.0266
[03/30 19:56:03 TiTok]: Data (t): 0.0032, 59.53/s/gpu Batch (t): 0.6047 LR: 0.000066 Step: 214600 Total Loss: 0.0542 Recon Loss: 0.0331
[03/30 19:57:01 TiTok]: Data (t): 0.0034, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000066 Step: 214700 Total Loss: 0.0492 Recon Loss: 0.0283
[03/30 19:57:58 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000066 Step: 214800 Total Loss: 0.0434 Recon Loss: 0.0312
[03/30 19:58:56 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000066 Step: 214900 Total Loss: 0.0362 Recon Loss: 0.0281
[03/30 19:59:54 TiTok]: Data (t): 0.0033, 56.72/s/gpu Batch (t): 0.6347 LR: 0.000066 Step: 215000 Total Loss: 0.0474 Recon Loss: 0.0359
[03/30 20:00:51 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000066 Step: 215100 Total Loss: 0.0437 Recon Loss: 0.0274
[03/30 20:01:49 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000066 Step: 215200 Total Loss: 0.0526 Recon Loss: 0.0298
[03/30 20:02:47 TiTok]: Data (t): 0.0033, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000066 Step: 215300 Total Loss: 0.0330 Recon Loss: 0.0259
[03/30 20:03:44 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000066 Step: 215400 Total Loss: 0.0403 Recon Loss: 0.0355
[03/30 20:04:42 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000066 Step: 215500 Total Loss: 0.0373 Recon Loss: 0.0265
[03/30 20:05:40 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000065 Step: 215600 Total Loss: 0.0533 Recon Loss: 0.0331
[03/30 20:06:38 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000065 Step: 215700 Total Loss: 0.0444 Recon Loss: 0.0305
[03/30 20:07:35 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000065 Step: 215800 Total Loss: 0.0540 Recon Loss: 0.0317
[03/30 20:08:33 TiTok]: Data (t): 0.0033, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000065 Step: 215900 Total Loss: 0.0483 Recon Loss: 0.0261
[03/30 20:09:31 TiTok]: Data (t): 0.0034, 56.05/s/gpu Batch (t): 0.6423 LR: 0.000065 Step: 216000 Total Loss: 0.0328 Recon Loss: 0.0272
[03/30 20:10:29 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000065 Step: 216100 Total Loss: 0.0522 Recon Loss: 0.0307
[03/30 20:11:27 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000065 Step: 216200 Total Loss: 0.0495 Recon Loss: 0.0342
[03/30 20:12:25 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000065 Step: 216300 Total Loss: 0.0548 Recon Loss: 0.0316
[03/30 20:13:23 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000065 Step: 216400 Total Loss: 0.0461 Recon Loss: 0.0270
[03/30 20:14:21 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000065 Step: 216500 Total Loss: 0.0342 Recon Loss: 0.0300
[03/30 20:15:19 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000065 Step: 216600 Total Loss: 0.0559 Recon Loss: 0.0313
[03/30 20:16:17 TiTok]: Data (t): 0.0036, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000065 Step: 216700 Total Loss: 0.0396 Recon Loss: 0.0242
[03/30 20:17:14 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000065 Step: 216800 Total Loss: 0.0435 Recon Loss: 0.0319
[03/30 20:18:12 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000065 Step: 216900 Total Loss: 0.0522 Recon Loss: 0.0293
[03/30 20:19:10 TiTok]: Data (t): 0.0032, 56.51/s/gpu Batch (t): 0.6370 LR: 0.000065 Step: 217000 Total Loss: 0.0429 Recon Loss: 0.0260
[03/30 20:20:08 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000065 Step: 217100 Total Loss: 0.0487 Recon Loss: 0.0343
[03/30 20:21:06 TiTok]: Data (t): 0.0032, 59.52/s/gpu Batch (t): 0.6048 LR: 0.000065 Step: 217200 Total Loss: 0.0494 Recon Loss: 0.0266
[03/30 20:22:04 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000065 Step: 217300 Total Loss: 0.0367 Recon Loss: 0.0286
[03/30 20:23:02 TiTok]: Data (t): 0.0033, 58.28/s/gpu Batch (t): 0.6177 LR: 0.000065 Step: 217400 Total Loss: 0.0551 Recon Loss: 0.0326
[03/30 20:23:59 TiTok]: Data (t): 0.0032, 61.86/s/gpu Batch (t): 0.5819 LR: 0.000065 Step: 217500 Total Loss: 0.0457 Recon Loss: 0.0272
[03/30 20:24:57 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000065 Step: 217600 Total Loss: 0.0355 Recon Loss: 0.0317
[03/30 20:25:55 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000065 Step: 217700 Total Loss: 0.0527 Recon Loss: 0.0296
[03/30 20:26:53 TiTok]: Data (t): 0.0031, 62.80/s/gpu Batch (t): 0.5733 LR: 0.000065 Step: 217800 Total Loss: 0.0443 Recon Loss: 0.0272
[03/30 20:27:52 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000065 Step: 217900 Total Loss: 0.0431 Recon Loss: 0.0314
[03/30 20:28:50 TiTok]: Data (t): 0.0032, 56.26/s/gpu Batch (t): 0.6399 LR: 0.000065 Step: 218000 Total Loss: 0.0567 Recon Loss: 0.0331
[03/30 20:29:48 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5750 LR: 0.000065 Step: 218100 Total Loss: 0.0472 Recon Loss: 0.0274
[03/30 20:30:45 TiTok]: Data (t): 0.0032, 58.56/s/gpu Batch (t): 0.6148 LR: 0.000065 Step: 218200 Total Loss: 0.0417 Recon Loss: 0.0321
[03/30 20:31:43 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000065 Step: 218300 Total Loss: 0.0459 Recon Loss: 0.0285
[03/30 20:32:41 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000065 Step: 218400 Total Loss: 0.0355 Recon Loss: 0.0280
[03/30 20:33:39 TiTok]: Data (t): 0.0048, 61.52/s/gpu Batch (t): 0.5852 LR: 0.000065 Step: 218500 Total Loss: 0.0537 Recon Loss: 0.0330
[03/30 20:34:37 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000065 Step: 218600 Total Loss: 0.0461 Recon Loss: 0.0254
[03/30 20:35:35 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000065 Step: 218700 Total Loss: 0.0351 Recon Loss: 0.0308
[03/30 20:36:33 TiTok]: Data (t): 0.0055, 61.06/s/gpu Batch (t): 0.5896 LR: 0.000065 Step: 218800 Total Loss: 0.0538 Recon Loss: 0.0328
[03/30 20:37:30 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000065 Step: 218900 Total Loss: 0.0504 Recon Loss: 0.0282
[03/30 20:38:29 TiTok]: Data (t): 0.0033, 56.58/s/gpu Batch (t): 0.6363 LR: 0.000065 Step: 219000 Total Loss: 0.0442 Recon Loss: 0.0306
[03/30 20:39:26 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000065 Step: 219100 Total Loss: 0.0471 Recon Loss: 0.0260
[03/30 20:40:24 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000064 Step: 219200 Total Loss: 0.0482 Recon Loss: 0.0311
[03/30 20:41:22 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000064 Step: 219300 Total Loss: 0.0494 Recon Loss: 0.0278
[03/30 20:42:20 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000064 Step: 219400 Total Loss: 0.0363 Recon Loss: 0.0257
[03/30 20:43:17 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000064 Step: 219500 Total Loss: 0.0471 Recon Loss: 0.0308
[03/30 20:44:15 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000064 Step: 219600 Total Loss: 0.0583 Recon Loss: 0.0326
[03/30 20:45:13 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000064 Step: 219700 Total Loss: 0.0381 Recon Loss: 0.0262
[03/30 20:46:11 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000064 Step: 219800 Total Loss: 0.0442 Recon Loss: 0.0340
[03/30 20:47:08 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000064 Step: 219900 Total Loss: 0.0462 Recon Loss: 0.0287
[03/30 20:48:06 TiTok]: Data (t): 0.0032, 56.78/s/gpu Batch (t): 0.6340 LR: 0.000064 Step: 220000 Total Loss: 0.0462 Recon Loss: 0.0352
[03/30 20:48:08 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-220000
[03/30 20:48:22 TiTok]: Reconstructing images...
[03/30 20:49:20 TiTok]: Data (t): 0.0033, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000064 Step: 220100 Total Loss: 0.0508 Recon Loss: 0.0295
[03/30 20:50:18 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000064 Step: 220200 Total Loss: 0.0433 Recon Loss: 0.0329
[03/30 20:51:15 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000064 Step: 220300 Total Loss: 0.0532 Recon Loss: 0.0318
[03/30 20:52:13 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000064 Step: 220400 Total Loss: 0.0374 Recon Loss: 0.0261
[03/30 20:53:11 TiTok]: Data (t): 0.0033, 62.18/s/gpu Batch (t): 0.5790 LR: 0.000064 Step: 220500 Total Loss: 0.0557 Recon Loss: 0.0320
[03/30 20:54:08 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000064 Step: 220600 Total Loss: 0.0399 Recon Loss: 0.0292
[03/30 20:55:06 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000064 Step: 220700 Total Loss: 0.0531 Recon Loss: 0.0321
[03/30 20:56:04 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000064 Step: 220800 Total Loss: 0.0409 Recon Loss: 0.0276
[03/30 20:57:01 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000064 Step: 220900 Total Loss: 0.0500 Recon Loss: 0.0309
[03/30 20:58:00 TiTok]: Data (t): 0.0033, 50.49/s/gpu Batch (t): 0.7130 LR: 0.000064 Step: 221000 Total Loss: 0.0498 Recon Loss: 0.0264
[03/30 20:58:58 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000064 Step: 221100 Total Loss: 0.0365 Recon Loss: 0.0258
[03/30 20:59:55 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000064 Step: 221200 Total Loss: 0.0474 Recon Loss: 0.0327
[03/30 21:00:53 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000064 Step: 221300 Total Loss: 0.0517 Recon Loss: 0.0283
[03/30 21:01:51 TiTok]: Data (t): 0.0033, 61.85/s/gpu Batch (t): 0.5821 LR: 0.000064 Step: 221400 Total Loss: 0.0439 Recon Loss: 0.0266
[03/30 21:02:49 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000064 Step: 221500 Total Loss: 0.0375 Recon Loss: 0.0309
[03/30 21:03:47 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000064 Step: 221600 Total Loss: 0.0461 Recon Loss: 0.0293
[03/30 21:04:44 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000064 Step: 221700 Total Loss: 0.0410 Recon Loss: 0.0276
[03/30 21:05:42 TiTok]: Data (t): 0.0032, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000064 Step: 221800 Total Loss: 0.0523 Recon Loss: 0.0321
[03/30 21:06:40 TiTok]: Data (t): 0.0032, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000064 Step: 221900 Total Loss: 0.0492 Recon Loss: 0.0265
[03/30 21:07:38 TiTok]: Data (t): 0.0032, 56.44/s/gpu Batch (t): 0.6378 LR: 0.000064 Step: 222000 Total Loss: 0.0343 Recon Loss: 0.0272
[03/30 21:08:36 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000064 Step: 222100 Total Loss: 0.0581 Recon Loss: 0.0336
[03/30 21:09:34 TiTok]: Data (t): 0.0033, 62.28/s/gpu Batch (t): 0.5781 LR: 0.000064 Step: 222200 Total Loss: 0.0405 Recon Loss: 0.0291
[03/30 21:10:33 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000064 Step: 222300 Total Loss: 0.0485 Recon Loss: 0.0268
[03/30 21:11:31 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000064 Step: 222400 Total Loss: 0.0341 Recon Loss: 0.0273
[03/30 21:12:29 TiTok]: Data (t): 0.0034, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000064 Step: 222500 Total Loss: 0.0558 Recon Loss: 0.0316
[03/30 21:13:27 TiTok]: Data (t): 0.0033, 59.54/s/gpu Batch (t): 0.6047 LR: 0.000064 Step: 222600 Total Loss: 0.0490 Recon Loss: 0.0273
[03/30 21:14:25 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000064 Step: 222700 Total Loss: 0.0349 Recon Loss: 0.0304
[03/30 21:15:23 TiTok]: Data (t): 0.0032, 61.23/s/gpu Batch (t): 0.5880 LR: 0.000063 Step: 222800 Total Loss: 0.0536 Recon Loss: 0.0304
[03/30 21:16:21 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000063 Step: 222900 Total Loss: 0.0448 Recon Loss: 0.0268
[03/30 21:17:19 TiTok]: Data (t): 0.0033, 53.28/s/gpu Batch (t): 0.6756 LR: 0.000063 Step: 223000 Total Loss: 0.0342 Recon Loss: 0.0299
[03/30 21:18:17 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000063 Step: 223100 Total Loss: 0.0532 Recon Loss: 0.0314
[03/30 21:19:15 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000063 Step: 223200 Total Loss: 0.0385 Recon Loss: 0.0280
[03/30 21:20:13 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000063 Step: 223300 Total Loss: 0.0519 Recon Loss: 0.0274
[03/30 21:21:11 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000063 Step: 223400 Total Loss: 0.0398 Recon Loss: 0.0290
[03/30 21:22:09 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000063 Step: 223500 Total Loss: 0.0458 Recon Loss: 0.0294
[03/30 21:23:07 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000063 Step: 223600 Total Loss: 0.0450 Recon Loss: 0.0322
[03/30 21:24:05 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000063 Step: 223700 Total Loss: 0.0503 Recon Loss: 0.0283
[03/30 21:25:02 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000063 Step: 223800 Total Loss: 0.0482 Recon Loss: 0.0324
[03/30 21:26:00 TiTok]: Data (t): 0.0034, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000063 Step: 223900 Total Loss: 0.0552 Recon Loss: 0.0296
[03/30 21:26:58 TiTok]: Data (t): 0.0033, 56.43/s/gpu Batch (t): 0.6380 LR: 0.000063 Step: 224000 Total Loss: 0.0493 Recon Loss: 0.0260
[03/30 21:27:56 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000063 Step: 224100 Total Loss: 0.0413 Recon Loss: 0.0269
[03/30 21:28:54 TiTok]: Data (t): 0.0035, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000063 Step: 224200 Total Loss: 0.0516 Recon Loss: 0.0352
[03/30 21:29:51 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000063 Step: 224300 Total Loss: 0.0486 Recon Loss: 0.0265
[03/30 21:30:49 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000063 Step: 224400 Total Loss: 0.0317 Recon Loss: 0.0269
[03/30 21:31:47 TiTok]: Data (t): 0.0033, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000063 Step: 224500 Total Loss: 0.0418 Recon Loss: 0.0317
[03/30 21:32:45 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000063 Step: 224600 Total Loss: 0.0553 Recon Loss: 0.0296
[03/30 21:33:43 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000063 Step: 224700 Total Loss: 0.0442 Recon Loss: 0.0274
[03/30 21:34:41 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000063 Step: 224800 Total Loss: 0.0549 Recon Loss: 0.0305
[03/30 21:35:39 TiTok]: Data (t): 0.0033, 61.94/s/gpu Batch (t): 0.5813 LR: 0.000063 Step: 224900 Total Loss: 0.0485 Recon Loss: 0.0308
[03/30 21:36:37 TiTok]: Data (t): 0.0033, 56.48/s/gpu Batch (t): 0.6373 LR: 0.000063 Step: 225000 Total Loss: 0.0369 Recon Loss: 0.0305
[03/30 21:37:35 TiTok]: Data (t): 0.0032, 61.94/s/gpu Batch (t): 0.5812 LR: 0.000063 Step: 225100 Total Loss: 0.0486 Recon Loss: 0.0327
[03/30 21:38:33 TiTok]: Data (t): 0.0032, 62.18/s/gpu Batch (t): 0.5790 LR: 0.000063 Step: 225200 Total Loss: 0.0458 Recon Loss: 0.0270
[03/30 21:39:31 TiTok]: Data (t): 0.0033, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000063 Step: 225300 Total Loss: 0.0450 Recon Loss: 0.0262
[03/30 21:40:28 TiTok]: Data (t): 0.0033, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000063 Step: 225400 Total Loss: 0.0499 Recon Loss: 0.0278
[03/30 21:41:26 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000063 Step: 225500 Total Loss: 0.0451 Recon Loss: 0.0282
[03/30 21:42:25 TiTok]: Data (t): 0.0034, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000063 Step: 225600 Total Loss: 0.0462 Recon Loss: 0.0305
[03/30 21:43:23 TiTok]: Data (t): 0.0033, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000063 Step: 225700 Total Loss: 0.0530 Recon Loss: 0.0294
[03/30 21:44:21 TiTok]: Data (t): 0.0033, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000063 Step: 225800 Total Loss: 0.0440 Recon Loss: 0.0276
[03/30 21:45:19 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000063 Step: 225900 Total Loss: 0.0374 Recon Loss: 0.0300
[03/30 21:46:17 TiTok]: Data (t): 0.0034, 56.36/s/gpu Batch (t): 0.6388 LR: 0.000063 Step: 226000 Total Loss: 0.0523 Recon Loss: 0.0311
[03/30 21:47:15 TiTok]: Data (t): 0.0034, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000063 Step: 226100 Total Loss: 0.0403 Recon Loss: 0.0274
[03/30 21:48:12 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000063 Step: 226200 Total Loss: 0.0525 Recon Loss: 0.0344
[03/30 21:49:10 TiTok]: Data (t): 0.0033, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000062 Step: 226300 Total Loss: 0.0415 Recon Loss: 0.0264
[03/30 21:50:08 TiTok]: Data (t): 0.0031, 62.17/s/gpu Batch (t): 0.5791 LR: 0.000062 Step: 226400 Total Loss: 0.0478 Recon Loss: 0.0294
[03/30 21:51:06 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000062 Step: 226500 Total Loss: 0.0412 Recon Loss: 0.0256
[03/30 21:52:04 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000062 Step: 226600 Total Loss: 0.0411 Recon Loss: 0.0313
[03/30 21:53:02 TiTok]: Data (t): 0.0033, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000062 Step: 226700 Total Loss: 0.0550 Recon Loss: 0.0307
[03/30 21:54:01 TiTok]: Data (t): 0.0032, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000062 Step: 226800 Total Loss: 0.0479 Recon Loss: 0.0276
[03/30 21:54:59 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000062 Step: 226900 Total Loss: 0.0354 Recon Loss: 0.0295
[03/30 21:55:57 TiTok]: Data (t): 0.0032, 56.25/s/gpu Batch (t): 0.6400 LR: 0.000062 Step: 227000 Total Loss: 0.0529 Recon Loss: 0.0339
[03/30 21:56:55 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5783 LR: 0.000062 Step: 227100 Total Loss: 0.0347 Recon Loss: 0.0282
[03/30 21:57:53 TiTok]: Data (t): 0.0032, 62.19/s/gpu Batch (t): 0.5788 LR: 0.000062 Step: 227200 Total Loss: 0.0432 Recon Loss: 0.0331
[03/30 21:58:51 TiTok]: Data (t): 0.0033, 62.05/s/gpu Batch (t): 0.5802 LR: 0.000062 Step: 227300 Total Loss: 0.0532 Recon Loss: 0.0304
[03/30 21:59:49 TiTok]: Data (t): 0.0032, 62.04/s/gpu Batch (t): 0.5802 LR: 0.000062 Step: 227400 Total Loss: 0.0428 Recon Loss: 0.0273
[03/30 22:00:47 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5784 LR: 0.000062 Step: 227500 Total Loss: 0.0484 Recon Loss: 0.0315
[03/30 22:01:45 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000062 Step: 227600 Total Loss: 0.0542 Recon Loss: 0.0301
[03/30 22:02:43 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000062 Step: 227700 Total Loss: 0.0354 Recon Loss: 0.0285
[03/30 22:03:41 TiTok]: Data (t): 0.0033, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000062 Step: 227800 Total Loss: 0.0501 Recon Loss: 0.0323
[03/30 22:04:39 TiTok]: Data (t): 0.0032, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000062 Step: 227900 Total Loss: 0.0501 Recon Loss: 0.0280
[03/30 22:05:37 TiTok]: Data (t): 0.0059, 53.90/s/gpu Batch (t): 0.6679 LR: 0.000062 Step: 228000 Total Loss: 0.0405 Recon Loss: 0.0323
[03/30 22:06:35 TiTok]: Data (t): 0.0034, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000062 Step: 228100 Total Loss: 0.0474 Recon Loss: 0.0278
[03/30 22:07:33 TiTok]: Data (t): 0.0033, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000062 Step: 228200 Total Loss: 0.0410 Recon Loss: 0.0311
[03/30 22:08:31 TiTok]: Data (t): 0.0034, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000062 Step: 228300 Total Loss: 0.0551 Recon Loss: 0.0328
[03/30 22:09:28 TiTok]: Data (t): 0.0034, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000062 Step: 228400 Total Loss: 0.0501 Recon Loss: 0.0271
[03/30 22:10:26 TiTok]: Data (t): 0.0032, 61.91/s/gpu Batch (t): 0.5815 LR: 0.000062 Step: 228500 Total Loss: 0.0430 Recon Loss: 0.0261
[03/30 22:11:24 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000062 Step: 228600 Total Loss: 0.0345 Recon Loss: 0.0274
[03/30 22:12:22 TiTok]: Data (t): 0.0033, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000062 Step: 228700 Total Loss: 0.0572 Recon Loss: 0.0320
[03/30 22:13:20 TiTok]: Data (t): 0.0034, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000062 Step: 228800 Total Loss: 0.0410 Recon Loss: 0.0261
[03/30 22:14:18 TiTok]: Data (t): 0.0033, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000062 Step: 228900 Total Loss: 0.0349 Recon Loss: 0.0288
[03/30 22:15:16 TiTok]: Data (t): 0.0033, 56.44/s/gpu Batch (t): 0.6378 LR: 0.000062 Step: 229000 Total Loss: 0.0534 Recon Loss: 0.0333
[03/30 22:16:13 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000062 Step: 229100 Total Loss: 0.0477 Recon Loss: 0.0267
[03/30 22:17:11 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000062 Step: 229200 Total Loss: 0.0305 Recon Loss: 0.0263
[03/30 22:18:09 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000062 Step: 229300 Total Loss: 0.0423 Recon Loss: 0.0324
[03/30 22:19:07 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000062 Step: 229400 Total Loss: 0.0539 Recon Loss: 0.0315
[03/30 22:20:05 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000062 Step: 229500 Total Loss: 0.0498 Recon Loss: 0.0268
[03/30 22:21:03 TiTok]: Data (t): 0.0033, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000062 Step: 229600 Total Loss: 0.0313 Recon Loss: 0.0285
[03/30 22:22:01 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000062 Step: 229700 Total Loss: 0.0422 Recon Loss: 0.0314
[03/30 22:22:59 TiTok]: Data (t): 0.0054, 58.62/s/gpu Batch (t): 0.6141 LR: 0.000062 Step: 229800 Total Loss: 0.0490 Recon Loss: 0.0341
[03/30 22:23:57 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000061 Step: 229900 Total Loss: 0.0457 Recon Loss: 0.0269
[03/30 22:24:55 TiTok]: Data (t): 0.0033, 56.35/s/gpu Batch (t): 0.6388 LR: 0.000061 Step: 230000 Total Loss: 0.0518 Recon Loss: 0.0324
[03/30 22:24:57 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-230000
[03/30 22:25:11 TiTok]: Reconstructing images...
[03/30 22:26:10 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000061 Step: 230100 Total Loss: 0.0457 Recon Loss: 0.0252
[03/30 22:27:08 TiTok]: Data (t): 0.0033, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000061 Step: 230200 Total Loss: 0.0390 Recon Loss: 0.0322
[03/30 22:28:06 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000061 Step: 230300 Total Loss: 0.0541 Recon Loss: 0.0312
[03/30 22:29:03 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000061 Step: 230400 Total Loss: 0.0406 Recon Loss: 0.0273
[03/30 22:30:01 TiTok]: Data (t): 0.0034, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000061 Step: 230500 Total Loss: 0.0373 Recon Loss: 0.0287
[03/30 22:33:16 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml
[03/30 22:33:34 TiTok]: Creating model and loss module.
[03/30 22:33:40 TiTok]: Creating optimizers.
[03/30 22:33:40 TiTok]: Creating lr_schedulers.
[03/30 22:33:40 TiTok]: Creating dataloaders.
[03/30 22:33:40 TiTok]: Creating evaluator.
[03/30 22:33:40 TiTok]: Preparing model, optimizer and dataloaders
[03/30 22:33:41 TiTok]: ***** Running training *****
[03/30 22:33:41 TiTok]:  Num training steps = 500000
[03/30 22:33:41 TiTok]:  Gradient Accumulation steps = 1
[03/30 22:33:41 TiTok]:  Instantaneous batch size per gpu = 36
[03/30 22:33:41 TiTok]:  Total train batch size (w. parallel, distributed & accumulation) = 288
[03/30 22:33:41 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-70000']
[03/30 22:33:41 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-70000
[03/30 22:40:15 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml
[03/30 22:40:32 TiTok]: Creating model and loss module.
[03/30 22:40:42 TiTok]: Creating optimizers.
[03/30 22:40:42 TiTok]: Creating lr_schedulers.
[03/30 22:40:42 TiTok]: Creating dataloaders.
[03/30 22:40:42 TiTok]: Creating evaluator.
[03/30 22:40:42 TiTok]: Preparing model, optimizer and dataloaders
[03/30 22:40:43 TiTok]: ***** Running training *****
[03/30 22:40:43 TiTok]:  Num training steps = 500000
[03/30 22:40:43 TiTok]:  Gradient Accumulation steps = 1
[03/30 22:40:43 TiTok]:  Instantaneous batch size per gpu = 36
[03/30 22:40:43 TiTok]:  Total train batch size (w. parallel, distributed & accumulation) = 288
[03/30 22:40:43 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-70000']
[03/30 22:40:43 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-70000
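Both of these restart attempts glob only checkpoint-70000 and stall after loading it; the attempt below sees checkpoint-200000 again and resumes from there (the log does not say why the newer directories were briefly invisible). Resume logic of this shape typically lists checkpoint-* directories in output_dir and takes the highest step; a sketch under that assumption, not TiTok's actual code:

import glob
import os
import re

def latest_checkpoint(output_dir: str):
    # Mirrors the "All globbed checkpoints are: [...]" / "Load checkpoint
    # from ..." pair: collect checkpoint-<step> dirs, pick the largest step.
    found = []
    for path in glob.glob(os.path.join(output_dir, "checkpoint-*")):
        m = re.fullmatch(r"checkpoint-(\d+)", os.path.basename(path))
        if m:
            found.append((int(m.group(1)), path))
    return max(found)[1] if found else None  # None -> "Training from scratch."

# latest_checkpoint("/mnt/books/train_stage2/order_32_stage2/")
# -> ".../checkpoint-200000" once that directory is visible again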
parallel, distributed & accumulation) = 288 [03/30 22:40:43 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-70000'] [03/30 22:40:43 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-70000 [03/30 22:58:51 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml [03/30 22:58:51 TiTok]: Config: experiment: project: stage2 name: stage2 output_dir: /mnt/books/train_stage2/order_32_stage2/ max_train_examples: 1281167 save_every: 10000 eval_every: 1000000 generate_every: 10000 log_every: 100 log_grad_norm_every: 1000 resume: true logging_dir: /mnt/books/train_stage2/order_32_stage2/logs model: vq_model: codebook_size: 4096 token_size: 12 use_l2_norm: true commitment_cost: 0.25 vit_enc_model_size: large vit_dec_model_size: large vit_enc_patch_size: 16 vit_dec_patch_size: 16 num_latent_tokens: 32 layers_x: 18 layers_token: 2 embedding_width: 1024 width: 256 finetune_decoder: true pretrained_tokenizer_weight: maskgit-vqgan-imagenet-f16-256.bin losses: discriminator_start: 20000 quantizer_weight: 0.0 discriminator_factor: 1.0 discriminator_weight: 0.02 perceptual_loss: convnext_s perceptual_weight: 0.1 reconstruction_loss: l2 reconstruction_weight: 1.0 lecam_regularization_weight: 0.001 dataset: params: train_shards_path_or_url: imagenet/imagenet1k-train-{0000..1023}.tar eval_shards_path_or_url: imagenet/imagenet1k-validation-{00..63}.tar num_workers_per_gpu: 12 preprocessing: resize_shorter_edge: 256 crop_size: 256 random_crop: true random_flip: true optimizer: name: adamw params: learning_rate: 0.0001 discriminator_learning_rate: 0.0001 beta1: 0.9 beta2: 0.999 weight_decay: 0.0001 lr_scheduler: scheduler: cosine params: learning_rate: ${optimizer.params.learning_rate} warmup_steps: 5000 end_lr: 1.0e-05 training: gradient_accumulation_steps: 1 per_gpu_batch_size: 36 mixed_precision: fp16 enable_tf32: true enable_wandb: true use_ema: true seed: 42 max_train_steps: 500000 num_generated_images: 2 max_grad_norm: 1.0 config: configs/training/TiTok/stage2/titok_new.yaml [03/30 22:59:07 TiTok]: Creating model and loss module. [03/30 22:59:16 TiTok]: Creating optimizers. [03/30 22:59:16 TiTok]: Creating lr_schedulers. [03/30 22:59:16 TiTok]: Creating dataloaders. [03/30 22:59:16 TiTok]: Creating evaluator. [03/30 22:59:16 TiTok]: Preparing model, optimizer and dataloaders [03/30 22:59:18 TiTok]: ***** Running training ***** [03/30 22:59:18 TiTok]:  Num training steps = 500000 [03/30 22:59:18 TiTok]:  Gradient Accumulation steps = 1 [03/30 22:59:18 TiTok]:  Instantaneous batch size per gpu = 36 [03/30 22:59:18 TiTok]:  Total train batch size (w. 
parallel, distributed & accumulation) = 288 [03/30 22:59:18 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-200000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-70000'] [03/30 22:59:18 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-200000 [03/30 22:59:34 TiTok]: Resuming at global_step 200000 [03/30 23:00:45 TiTok]: Data (t): 0.0032, 62.70/s/gpu Batch (t): 0.5741 LR: 0.000070 Step: 200100 Total Loss: 0.0579 Recon Loss: 0.0378 [03/30 23:01:42 TiTok]: Data (t): 0.0031, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000070 Step: 200200 Total Loss: 0.0808 Recon Loss: 0.0469 [03/30 23:02:37 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml [03/30 23:02:37 TiTok]: Config: experiment: project: stage2 name: stage2 output_dir: /mnt/books/train_stage2/order_32_stage2/ max_train_examples: 1281167 save_every: 10000 eval_every: 1000000 generate_every: 10000 log_every: 100 log_grad_norm_every: 1000 resume: true logging_dir: /mnt/books/train_stage2/order_32_stage2/logs model: vq_model: codebook_size: 4096 token_size: 12 use_l2_norm: true commitment_cost: 0.25 vit_enc_model_size: large vit_dec_model_size: large vit_enc_patch_size: 16 vit_dec_patch_size: 16 num_latent_tokens: 32 layers_x: 18 layers_token: 2 embedding_width: 1024 width: 256 finetune_decoder: true pretrained_tokenizer_weight: maskgit-vqgan-imagenet-f16-256.bin losses: discriminator_start: 20000 quantizer_weight: 0.0 discriminator_factor: 1.0 discriminator_weight: 0.01 perceptual_loss: convnext_s perceptual_weight: 0.1 reconstruction_loss: l2 reconstruction_weight: 1.0 lecam_regularization_weight: 0.001 dataset: params: train_shards_path_or_url: imagenet/imagenet1k-train-{0000..1023}.tar eval_shards_path_or_url: imagenet/imagenet1k-validation-{00..63}.tar num_workers_per_gpu: 12 preprocessing: resize_shorter_edge: 256 crop_size: 256 random_crop: true random_flip: true optimizer: name: adamw params: learning_rate: 0.0001 discriminator_learning_rate: 0.0001 beta1: 0.9 beta2: 0.999 weight_decay: 0.0001 lr_scheduler: scheduler: cosine params: learning_rate: ${optimizer.params.learning_rate} warmup_steps: 5000 end_lr: 1.0e-05 training: gradient_accumulation_steps: 1 per_gpu_batch_size: 36 mixed_precision: fp16 enable_tf32: true enable_wandb: true use_ema: true seed: 42 max_train_steps: 500000 num_generated_images: 2 max_grad_norm: 1.0 config: configs/training/TiTok/stage2/titok_new.yaml [03/30 23:02:54 TiTok]: Creating model and loss module. [03/30 23:03:02 TiTok]: Creating optimizers. [03/30 23:03:02 TiTok]: Creating lr_schedulers. [03/30 23:03:02 TiTok]: Creating dataloaders. [03/30 23:03:02 TiTok]: Creating evaluator. [03/30 23:03:03 TiTok]: Preparing model, optimizer and dataloaders [03/30 23:03:04 TiTok]: ***** Running training ***** [03/30 23:03:04 TiTok]:  Num training steps = 500000 [03/30 23:03:04 TiTok]:  Gradient Accumulation steps = 1 [03/30 23:03:04 TiTok]:  Instantaneous batch size per gpu = 36 [03/30 23:03:04 TiTok]:  Total train batch size (w. 
parallel, distributed & accumulation) = 288 [03/30 23:03:04 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-200000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-70000'] [03/30 23:03:04 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-200000 [03/30 23:03:17 TiTok]: Resuming at global_step 200000 [03/30 23:04:26 TiTok]: Data (t): 0.0031, 62.84/s/gpu Batch (t): 0.5729 LR: 0.000070 Step: 200100 Total Loss: 0.0602 Recon Loss: 0.0388 [03/30 23:05:24 TiTok]: Data (t): 0.0031, 62.93/s/gpu Batch (t): 0.5720 LR: 0.000070 Step: 200200 Total Loss: 0.0562 Recon Loss: 0.0367 [03/30 23:06:21 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000070 Step: 200300 Total Loss: 0.0576 Recon Loss: 0.0367 [03/30 23:07:19 TiTok]: Data (t): 0.0032, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000070 Step: 200400 Total Loss: 0.0547 Recon Loss: 0.0363 [03/30 23:08:17 TiTok]: Data (t): 0.0032, 62.88/s/gpu Batch (t): 0.5726 LR: 0.000070 Step: 200500 Total Loss: 0.0544 Recon Loss: 0.0358 [03/30 23:09:14 TiTok]: Data (t): 0.0031, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000070 Step: 200600 Total Loss: 0.0547 Recon Loss: 0.0373 [03/30 23:10:12 TiTok]: Data (t): 0.0032, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000070 Step: 200700 Total Loss: 0.0532 Recon Loss: 0.0357 [03/30 23:11:09 TiTok]: Data (t): 0.0031, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000070 Step: 200800 Total Loss: 0.0504 Recon Loss: 0.0335 [03/30 23:12:07 TiTok]: Data (t): 0.0032, 62.78/s/gpu Batch (t): 0.5734 LR: 0.000070 Step: 200900 Total Loss: 0.0545 Recon Loss: 0.0367 [03/30 23:13:04 TiTok]: Data (t): 0.0031, 56.73/s/gpu Batch (t): 0.6346 LR: 0.000069 Step: 201000 Total Loss: 0.0548 Recon Loss: 0.0359 [03/30 23:14:02 TiTok]: Data (t): 0.0032, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000069 Step: 201100 Total Loss: 0.0535 Recon Loss: 0.0349 [03/30 23:14:59 TiTok]: Data (t): 0.0032, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000069 Step: 201200 Total Loss: 0.0540 Recon Loss: 0.0390 [03/30 23:15:57 TiTok]: Data (t): 0.0032, 62.79/s/gpu Batch (t): 0.5734 LR: 0.000069 Step: 201300 Total Loss: 0.0530 Recon Loss: 0.0356 [03/30 23:16:54 TiTok]: Data (t): 0.0031, 62.78/s/gpu Batch (t): 0.5734 LR: 0.000069 Step: 201400 Total Loss: 0.0561 Recon Loss: 0.0376 [03/30 23:17:51 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000069 Step: 201500 Total Loss: 0.0560 Recon Loss: 0.0370 [03/30 23:18:49 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 201600 Total Loss: 0.0553 Recon Loss: 0.0369 [03/30 23:19:47 TiTok]: Data (t): 0.0033, 61.94/s/gpu Batch (t): 0.5812 LR: 0.000069 Step: 201700 Total Loss: 0.0550 Recon Loss: 0.0361 [03/30 23:20:44 TiTok]: Data (t): 0.0031, 62.94/s/gpu Batch (t): 0.5720 LR: 0.000069 Step: 201800 Total Loss: 0.0516 Recon Loss: 0.0369 [03/30 23:21:42 TiTok]: Data (t): 0.0032, 59.46/s/gpu Batch (t): 0.6054 LR: 0.000069 Step: 201900 Total Loss: 0.0538 Recon Loss: 0.0350 [03/30 23:22:39 TiTok]: Data (t): 0.0032, 51.32/s/gpu Batch (t): 0.7015 LR: 0.000069 Step: 202000 Total Loss: 0.0521 Recon Loss: 0.0324 [03/30 23:23:36 TiTok]: Data (t): 0.0031, 62.76/s/gpu Batch (t): 0.5736 LR: 0.000069 Step: 202100 Total Loss: 0.0494 Recon Loss: 0.0340 [03/30 23:24:34 TiTok]: Data (t): 0.0031, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000069 Step: 202200 Total Loss: 0.0514 Recon Loss: 0.0369 [03/30 23:25:31 TiTok]: Data (t): 0.0031, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 202300 Total Loss: 0.0530 Recon Loss: 0.0370 [03/30 23:26:29 TiTok]: Data (t): 0.0032, 
62.89/s/gpu Batch (t): 0.5724 LR: 0.000069 Step: 202400 Total Loss: 0.0539 Recon Loss: 0.0367 [03/30 23:27:26 TiTok]: Data (t): 0.0031, 62.87/s/gpu Batch (t): 0.5726 LR: 0.000069 Step: 202500 Total Loss: 0.0562 Recon Loss: 0.0353 [03/30 23:28:23 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000069 Step: 202600 Total Loss: 0.0506 Recon Loss: 0.0346 [03/30 23:29:21 TiTok]: Data (t): 0.0031, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000069 Step: 202700 Total Loss: 0.0509 Recon Loss: 0.0338 [03/30 23:30:19 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 202800 Total Loss: 0.0530 Recon Loss: 0.0342 [03/30 23:31:17 TiTok]: Data (t): 0.0031, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 202900 Total Loss: 0.0533 Recon Loss: 0.0363 [03/30 23:32:14 TiTok]: Data (t): 0.0031, 56.94/s/gpu Batch (t): 0.6323 LR: 0.000069 Step: 203000 Total Loss: 0.0517 Recon Loss: 0.0370 [03/30 23:33:12 TiTok]: Data (t): 0.0031, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000069 Step: 203100 Total Loss: 0.0527 Recon Loss: 0.0343 [03/30 23:34:10 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 203200 Total Loss: 0.0470 Recon Loss: 0.0323 [03/30 23:35:07 TiTok]: Data (t): 0.0031, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 203300 Total Loss: 0.0531 Recon Loss: 0.0349 [03/30 23:36:05 TiTok]: Data (t): 0.0031, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000069 Step: 203400 Total Loss: 0.0533 Recon Loss: 0.0345 [03/30 23:37:02 TiTok]: Data (t): 0.0031, 62.79/s/gpu Batch (t): 0.5733 LR: 0.000069 Step: 203500 Total Loss: 0.0504 Recon Loss: 0.0345 [03/30 23:38:00 TiTok]: Data (t): 0.0032, 62.71/s/gpu Batch (t): 0.5741 LR: 0.000069 Step: 203600 Total Loss: 0.0519 Recon Loss: 0.0340 [03/30 23:38:57 TiTok]: Data (t): 0.0031, 62.77/s/gpu Batch (t): 0.5735 LR: 0.000069 Step: 203700 Total Loss: 0.0547 Recon Loss: 0.0352 [03/30 23:39:55 TiTok]: Data (t): 0.0032, 62.90/s/gpu Batch (t): 0.5723 LR: 0.000069 Step: 203800 Total Loss: 0.0520 Recon Loss: 0.0336 [03/30 23:40:52 TiTok]: Data (t): 0.0032, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000069 Step: 203900 Total Loss: 0.0518 Recon Loss: 0.0355 [03/30 23:41:49 TiTok]: Data (t): 0.0031, 56.90/s/gpu Batch (t): 0.6327 LR: 0.000069 Step: 204000 Total Loss: 0.0512 Recon Loss: 0.0349 [03/30 23:42:47 TiTok]: Data (t): 0.0031, 62.80/s/gpu Batch (t): 0.5732 LR: 0.000069 Step: 204100 Total Loss: 0.0485 Recon Loss: 0.0336 [03/30 23:43:44 TiTok]: Data (t): 0.0031, 62.74/s/gpu Batch (t): 0.5738 LR: 0.000069 Step: 204200 Total Loss: 0.0500 Recon Loss: 0.0353 [03/30 23:44:42 TiTok]: Data (t): 0.0032, 62.91/s/gpu Batch (t): 0.5723 LR: 0.000069 Step: 204300 Total Loss: 0.0519 Recon Loss: 0.0344 [03/30 23:45:39 TiTok]: Data (t): 0.0032, 61.92/s/gpu Batch (t): 0.5814 LR: 0.000069 Step: 204400 Total Loss: 0.0532 Recon Loss: 0.0343 [03/30 23:46:38 TiTok]: Data (t): 0.0049, 58.63/s/gpu Batch (t): 0.6140 LR: 0.000069 Step: 204500 Total Loss: 0.0508 Recon Loss: 0.0327 [03/30 23:47:36 TiTok]: Data (t): 0.0031, 62.74/s/gpu Batch (t): 0.5738 LR: 0.000069 Step: 204600 Total Loss: 0.0518 Recon Loss: 0.0345 [03/30 23:48:33 TiTok]: Data (t): 0.0031, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000068 Step: 204700 Total Loss: 0.0494 Recon Loss: 0.0326 [03/30 23:49:31 TiTok]: Data (t): 0.0032, 62.03/s/gpu Batch (t): 0.5803 LR: 0.000068 Step: 204800 Total Loss: 0.0507 Recon Loss: 0.0348 [03/30 23:50:29 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000068 Step: 204900 Total Loss: 0.0502 Recon Loss: 0.0356 [03/30 23:51:26 TiTok]: Data (t): 0.0031, 56.94/s/gpu Batch 
(t): 0.6322 LR: 0.000068 Step: 205000 Total Loss: 0.0502 Recon Loss: 0.0341 [03/30 23:52:24 TiTok]: Data (t): 0.0030, 62.81/s/gpu Batch (t): 0.5731 LR: 0.000068 Step: 205100 Total Loss: 0.0514 Recon Loss: 0.0347 [03/30 23:53:22 TiTok]: Data (t): 0.0031, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000068 Step: 205200 Total Loss: 0.0513 Recon Loss: 0.0341 [03/30 23:54:19 TiTok]: Data (t): 0.0031, 62.82/s/gpu Batch (t): 0.5731 LR: 0.000068 Step: 205300 Total Loss: 0.0514 Recon Loss: 0.0362 [03/30 23:55:17 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000068 Step: 205400 Total Loss: 0.0501 Recon Loss: 0.0342 [03/30 23:56:14 TiTok]: Data (t): 0.0031, 62.88/s/gpu Batch (t): 0.5725 LR: 0.000068 Step: 205500 Total Loss: 0.0522 Recon Loss: 0.0353 [03/30 23:57:12 TiTok]: Data (t): 0.0032, 62.76/s/gpu Batch (t): 0.5737 LR: 0.000068 Step: 205600 Total Loss: 0.0512 Recon Loss: 0.0340 [03/30 23:58:09 TiTok]: Data (t): 0.0032, 62.82/s/gpu Batch (t): 0.5730 LR: 0.000068 Step: 205700 Total Loss: 0.0540 Recon Loss: 0.0362 [03/30 23:59:07 TiTok]: Data (t): 0.0031, 62.80/s/gpu Batch (t): 0.5732 LR: 0.000068 Step: 205800 Total Loss: 0.0548 Recon Loss: 0.0362 [03/31 00:00:04 TiTok]: Data (t): 0.0031, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000068 Step: 205900 Total Loss: 0.0524 Recon Loss: 0.0353 [03/31 00:01:02 TiTok]: Data (t): 0.0031, 56.58/s/gpu Batch (t): 0.6363 LR: 0.000068 Step: 206000 Total Loss: 0.0515 Recon Loss: 0.0342 [03/31 00:01:59 TiTok]: Data (t): 0.0031, 62.79/s/gpu Batch (t): 0.5734 LR: 0.000068 Step: 206100 Total Loss: 0.0545 Recon Loss: 0.0373 [03/31 00:02:57 TiTok]: Data (t): 0.0032, 62.12/s/gpu Batch (t): 0.5795 LR: 0.000068 Step: 206200 Total Loss: 0.0534 Recon Loss: 0.0368 [03/31 00:03:54 TiTok]: Data (t): 0.0031, 62.80/s/gpu Batch (t): 0.5732 LR: 0.000068 Step: 206300 Total Loss: 0.0535 Recon Loss: 0.0388 [03/31 00:04:52 TiTok]: Data (t): 0.0031, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000068 Step: 206400 Total Loss: 0.0498 Recon Loss: 0.0340 [03/31 00:05:49 TiTok]: Data (t): 0.0031, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000068 Step: 206500 Total Loss: 0.0517 Recon Loss: 0.0368 [03/31 00:06:47 TiTok]: Data (t): 0.0033, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000068 Step: 206600 Total Loss: 0.0504 Recon Loss: 0.0369 [03/31 00:07:44 TiTok]: Data (t): 0.0031, 62.79/s/gpu Batch (t): 0.5733 LR: 0.000068 Step: 206700 Total Loss: 0.0492 Recon Loss: 0.0334 [03/31 00:08:42 TiTok]: Data (t): 0.0031, 58.63/s/gpu Batch (t): 0.6141 LR: 0.000068 Step: 206800 Total Loss: 0.0503 Recon Loss: 0.0345 [03/31 00:09:40 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000068 Step: 206900 Total Loss: 0.0521 Recon Loss: 0.0342 [03/31 00:10:37 TiTok]: Data (t): 0.0031, 57.27/s/gpu Batch (t): 0.6286 LR: 0.000068 Step: 207000 Total Loss: 0.0524 Recon Loss: 0.0359 [03/31 00:11:35 TiTok]: Data (t): 0.0032, 62.79/s/gpu Batch (t): 0.5734 LR: 0.000068 Step: 207100 Total Loss: 0.0516 Recon Loss: 0.0347 [03/31 00:12:32 TiTok]: Data (t): 0.0032, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000068 Step: 207200 Total Loss: 0.0513 Recon Loss: 0.0349 [03/31 00:13:30 TiTok]: Data (t): 0.0032, 62.84/s/gpu Batch (t): 0.5729 LR: 0.000068 Step: 207300 Total Loss: 0.0495 Recon Loss: 0.0356 [03/31 00:14:27 TiTok]: Data (t): 0.0031, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000068 Step: 207400 Total Loss: 0.0508 Recon Loss: 0.0366 [03/31 00:15:25 TiTok]: Data (t): 0.0031, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000068 Step: 207500 Total Loss: 0.0498 Recon Loss: 0.0349 [03/31 00:16:22 TiTok]: Data (t): 0.0031, 62.83/s/gpu Batch (t): 0.5730 LR: 
0.000068 Step: 207600 Total Loss: 0.0499 Recon Loss: 0.0338 [03/31 00:17:19 TiTok]: Data (t): 0.0033, 62.99/s/gpu Batch (t): 0.5715 LR: 0.000068 Step: 207700 Total Loss: 0.0513 Recon Loss: 0.0338 [03/31 00:18:17 TiTok]: Data (t): 0.0032, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000068 Step: 207800 Total Loss: 0.0509 Recon Loss: 0.0344 [03/31 00:19:14 TiTok]: Data (t): 0.0031, 62.82/s/gpu Batch (t): 0.5731 LR: 0.000068 Step: 207900 Total Loss: 0.0514 Recon Loss: 0.0333 [03/31 00:20:12 TiTok]: Data (t): 0.0032, 56.93/s/gpu Batch (t): 0.6324 LR: 0.000068 Step: 208000 Total Loss: 0.0493 Recon Loss: 0.0349 [03/31 00:21:09 TiTok]: Data (t): 0.0032, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000068 Step: 208100 Total Loss: 0.0478 Recon Loss: 0.0338 [03/31 00:22:07 TiTok]: Data (t): 0.0032, 62.68/s/gpu Batch (t): 0.5743 LR: 0.000068 Step: 208200 Total Loss: 0.0495 Recon Loss: 0.0347 [03/31 00:23:04 TiTok]: Data (t): 0.0032, 62.69/s/gpu Batch (t): 0.5742 LR: 0.000067 Step: 208300 Total Loss: 0.0512 Recon Loss: 0.0340 [03/31 00:24:01 TiTok]: Data (t): 0.0032, 62.67/s/gpu Batch (t): 0.5744 LR: 0.000067 Step: 208400 Total Loss: 0.0504 Recon Loss: 0.0349 [03/31 00:24:59 TiTok]: Data (t): 0.0031, 62.86/s/gpu Batch (t): 0.5727 LR: 0.000067 Step: 208500 Total Loss: 0.0530 Recon Loss: 0.0364 [03/31 00:25:56 TiTok]: Data (t): 0.0031, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000067 Step: 208600 Total Loss: 0.0509 Recon Loss: 0.0348 [03/31 00:26:54 TiTok]: Data (t): 0.0031, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000067 Step: 208700 Total Loss: 0.0521 Recon Loss: 0.0354 [03/31 00:27:51 TiTok]: Data (t): 0.0031, 62.82/s/gpu Batch (t): 0.5731 LR: 0.000067 Step: 208800 Total Loss: 0.0514 Recon Loss: 0.0343 [03/31 00:28:48 TiTok]: Data (t): 0.0030, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000067 Step: 208900 Total Loss: 0.0511 Recon Loss: 0.0346 [03/31 00:29:48 TiTok]: Data (t): 0.0032, 56.37/s/gpu Batch (t): 0.6386 LR: 0.000067 Step: 209000 Total Loss: 0.0519 Recon Loss: 0.0343 [03/31 00:30:45 TiTok]: Data (t): 0.0031, 62.83/s/gpu Batch (t): 0.5730 LR: 0.000067 Step: 209100 Total Loss: 0.0516 Recon Loss: 0.0350 [03/31 00:31:43 TiTok]: Data (t): 0.0031, 62.80/s/gpu Batch (t): 0.5732 LR: 0.000067 Step: 209200 Total Loss: 0.0485 Recon Loss: 0.0355 [03/31 00:32:41 TiTok]: Data (t): 0.0031, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000067 Step: 209300 Total Loss: 0.0499 Recon Loss: 0.0337 [03/31 00:33:38 TiTok]: Data (t): 0.0031, 62.93/s/gpu Batch (t): 0.5721 LR: 0.000067 Step: 209400 Total Loss: 0.0502 Recon Loss: 0.0335 [03/31 00:34:36 TiTok]: Data (t): 0.0031, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000067 Step: 209500 Total Loss: 0.0497 Recon Loss: 0.0337 [03/31 00:35:34 TiTok]: Data (t): 0.0032, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000067 Step: 209600 Total Loss: 0.0542 Recon Loss: 0.0339 [03/31 00:36:31 TiTok]: Data (t): 0.0031, 59.09/s/gpu Batch (t): 0.6092 LR: 0.000067 Step: 209700 Total Loss: 0.0531 Recon Loss: 0.0371 [03/31 00:37:29 TiTok]: Data (t): 0.0031, 62.81/s/gpu Batch (t): 0.5732 LR: 0.000067 Step: 209800 Total Loss: 0.0495 Recon Loss: 0.0337 [03/31 00:38:27 TiTok]: Data (t): 0.0032, 62.85/s/gpu Batch (t): 0.5728 LR: 0.000067 Step: 209900 Total Loss: 0.0496 Recon Loss: 0.0345 [03/31 00:39:24 TiTok]: Data (t): 0.0031, 57.00/s/gpu Batch (t): 0.6315 LR: 0.000067 Step: 210000 Total Loss: 0.0529 Recon Loss: 0.0344 [03/31 00:39:26 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-210000 [03/31 00:39:44 TiTok]: Reconstructing images... 
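The throughput fields here are internally consistent: the per-GPU rate is per_gpu_batch_size divided by the batch time, and a total batch size of 288 with 36 per GPU implies 8 GPUs. The periodic dips to ~0.63 s batches at every round-thousand step (208000, 209000, 210000, ...) line up with log_grad_norm_every: 1000. A quick arithmetic check using representative values copied from the lines above:

```python
# Sanity check of the logged throughput fields (values copied from
# the log; nothing here is measured independently).
per_gpu_batch_size = 36   # training.per_gpu_batch_size
batch_t = 0.5775          # a typical "Batch (t)" in seconds

print(per_gpu_batch_size / batch_t)        # ~62.3, the logged images/s/gpu
world_size = 288 // per_gpu_batch_size     # total batch 288 -> 8 GPUs
print(world_size * per_gpu_batch_size / batch_t)  # ~499 images/s overall
```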
[03/31 00:40:42 TiTok]: Data (t): 0.0032, 61.57/s/gpu Batch (t): 0.5847 LR: 0.000067 Step: 210100 Total Loss: 0.0514 Recon Loss: 0.0349 [03/31 00:41:49 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml [03/31 00:41:49 TiTok]: Config: experiment: project: stage2 name: stage2 output_dir: /mnt/books/train_stage2/order_32_stage2/ max_train_examples: 1281167 save_every: 10000 eval_every: 1000000 generate_every: 10000 log_every: 100 log_grad_norm_every: 1000 resume: true logging_dir: /mnt/books/train_stage2/order_32_stage2/logs model: vq_model: codebook_size: 4096 token_size: 12 use_l2_norm: true commitment_cost: 0.25 vit_enc_model_size: large vit_dec_model_size: large vit_enc_patch_size: 16 vit_dec_patch_size: 16 num_latent_tokens: 32 layers_x: 18 layers_token: 2 embedding_width: 1024 width: 256 finetune_decoder: true pretrained_tokenizer_weight: maskgit-vqgan-imagenet-f16-256.bin losses: discriminator_start: 20000 quantizer_weight: 0.0 discriminator_factor: 1.0 discriminator_weight: 0.01 perceptual_loss: convnext_s perceptual_weight: 0.1 reconstruction_loss: l2 reconstruction_weight: 1.0 lecam_regularization_weight: 0.001 dataset: params: train_shards_path_or_url: imagenet/imagenet1k-train-{0000..1023}.tar eval_shards_path_or_url: imagenet/imagenet1k-validation-{00..63}.tar num_workers_per_gpu: 12 preprocessing: resize_shorter_edge: 256 crop_size: 256 random_crop: true random_flip: true optimizer: name: adamw params: learning_rate: 0.0001 discriminator_learning_rate: 0.0001 beta1: 0.9 beta2: 0.999 weight_decay: 0.0001 lr_scheduler: scheduler: cosine params: learning_rate: ${optimizer.params.learning_rate} warmup_steps: 5000 end_lr: 1.0e-05 training: gradient_accumulation_steps: 1 per_gpu_batch_size: 36 mixed_precision: fp16 enable_tf32: true enable_wandb: true use_ema: true seed: 42 max_train_steps: 500000 num_generated_images: 2 max_grad_norm: 1.0 config: configs/training/TiTok/stage2/titok_new.yaml [03/31 00:42:05 TiTok]: Creating model and loss module. [03/31 00:42:13 TiTok]: Creating optimizers. [03/31 00:42:13 TiTok]: Creating lr_schedulers. [03/31 00:42:13 TiTok]: Creating dataloaders. [03/31 00:42:13 TiTok]: Creating evaluator. [03/31 00:42:13 TiTok]: Preparing model, optimizer and dataloaders [03/31 00:42:15 TiTok]: ***** Running training ***** [03/31 00:42:15 TiTok]:  Num training steps = 500000 [03/31 00:42:15 TiTok]:  Gradient Accumulation steps = 1 [03/31 00:42:15 TiTok]:  Instantaneous batch size per gpu = 36 [03/31 00:42:15 TiTok]:  Total train batch size (w. 
parallel, distributed & accumulation) = 288 [03/31 00:42:15 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-200000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-230000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-210000'] [03/31 00:42:15 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-230000 [03/31 00:42:28 TiTok]: Resuming at global_step 230000 [03/31 00:43:36 TiTok]: Data (t): 0.0031, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000061 Step: 230100 Total Loss: 0.0402 Recon Loss: 0.0284 [03/31 00:44:34 TiTok]: Data (t): 0.0031, 62.68/s/gpu Batch (t): 0.5744 LR: 0.000061 Step: 230200 Total Loss: 0.0392 Recon Loss: 0.0296 [03/31 00:45:31 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000061 Step: 230300 Total Loss: 0.0390 Recon Loss: 0.0283 [03/31 00:46:29 TiTok]: Data (t): 0.0032, 62.66/s/gpu Batch (t): 0.5746 LR: 0.000061 Step: 230400 Total Loss: 0.0374 Recon Loss: 0.0263 [03/31 00:47:27 TiTok]: Data (t): 0.0031, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000061 Step: 230500 Total Loss: 0.0390 Recon Loss: 0.0277 [03/31 00:48:24 TiTok]: Data (t): 0.0031, 62.67/s/gpu Batch (t): 0.5744 LR: 0.000061 Step: 230600 Total Loss: 0.0382 Recon Loss: 0.0278 [03/31 00:49:22 TiTok]: Data (t): 0.0032, 62.76/s/gpu Batch (t): 0.5736 LR: 0.000061 Step: 230700 Total Loss: 0.0407 Recon Loss: 0.0290 [03/31 00:50:19 TiTok]: Data (t): 0.0031, 62.67/s/gpu Batch (t): 0.5745 LR: 0.000061 Step: 230800 Total Loss: 0.0405 Recon Loss: 0.0289 [03/31 00:51:17 TiTok]: Data (t): 0.0031, 61.77/s/gpu Batch (t): 0.5828 LR: 0.000061 Step: 230900 Total Loss: 0.0402 Recon Loss: 0.0291 [03/31 00:52:15 TiTok]: Data (t): 0.0031, 56.81/s/gpu Batch (t): 0.6336 LR: 0.000061 Step: 231000 Total Loss: 0.0397 Recon Loss: 0.0290 [03/31 00:53:12 TiTok]: Data (t): 0.0031, 62.76/s/gpu Batch (t): 0.5736 LR: 0.000061 Step: 231100 Total Loss: 0.0374 Recon Loss: 0.0276 [03/31 00:54:10 TiTok]: Data (t): 0.0031, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000061 Step: 231200 Total Loss: 0.0378 Recon Loss: 0.0270 [03/31 00:55:07 TiTok]: Data (t): 0.0031, 62.84/s/gpu Batch (t): 0.5729 LR: 0.000061 Step: 231300 Total Loss: 0.0390 Recon Loss: 0.0281 [03/31 00:56:05 TiTok]: Data (t): 0.0033, 62.25/s/gpu Batch (t): 0.5784 LR: 0.000061 Step: 231400 Total Loss: 0.0395 Recon Loss: 0.0289 [03/31 00:57:03 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000061 Step: 231500 Total Loss: 0.0386 Recon Loss: 0.0290 [03/31 00:58:01 TiTok]: Data (t): 0.0033, 61.74/s/gpu Batch (t): 0.5831 LR: 0.000061 Step: 231600 Total Loss: 0.0376 Recon Loss: 0.0272 [03/31 00:58:59 TiTok]: Data (t): 0.0032, 62.02/s/gpu Batch (t): 0.5804 LR: 0.000061 Step: 231700 Total Loss: 0.0377 Recon Loss: 0.0263 [03/31 00:59:58 TiTok]: Data (t): 0.0034, 61.83/s/gpu Batch (t): 0.5822 LR: 0.000061 Step: 231800 Total Loss: 0.0375 Recon Loss: 0.0258 [03/31 01:00:56 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000061 Step: 231900 Total Loss: 0.0407 Recon Loss: 0.0279 [03/31 01:01:54 TiTok]: Data (t): 0.0033, 55.71/s/gpu Batch (t): 0.6462 LR: 0.000061 Step: 232000 Total Loss: 0.0420 Recon Loss: 0.0303 [03/31 01:02:52 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000061 Step: 232100 Total Loss: 0.0377 Recon Loss: 0.0281 [03/31 01:03:50 TiTok]: Data (t): 0.0033, 62.07/s/gpu Batch (t): 0.5799 LR: 0.000061 Step: 232200 Total Loss: 0.0401 Recon Loss: 0.0280 [03/31 01:04:48 TiTok]: Data (t): 0.0032, 61.48/s/gpu Batch (t): 0.5856 LR: 0.000061 Step: 232300 Total Loss: 0.0393 Recon 
Loss: 0.0276 [03/31 01:05:46 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000061 Step: 232400 Total Loss: 0.0404 Recon Loss: 0.0292 [03/31 01:06:44 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000061 Step: 232500 Total Loss: 0.0396 Recon Loss: 0.0285 [03/31 01:07:42 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000061 Step: 232600 Total Loss: 0.0380 Recon Loss: 0.0263 [03/31 01:08:40 TiTok]: Data (t): 0.0033, 62.17/s/gpu Batch (t): 0.5791 LR: 0.000061 Step: 232700 Total Loss: 0.0409 Recon Loss: 0.0286 [03/31 01:09:39 TiTok]: Data (t): 0.0032, 61.86/s/gpu Batch (t): 0.5820 LR: 0.000061 Step: 232800 Total Loss: 0.0405 Recon Loss: 0.0281 [03/31 01:10:37 TiTok]: Data (t): 0.0031, 62.28/s/gpu Batch (t): 0.5781 LR: 0.000061 Step: 232900 Total Loss: 0.0350 Recon Loss: 0.0264 [03/31 01:11:35 TiTok]: Data (t): 0.0032, 56.68/s/gpu Batch (t): 0.6351 LR: 0.000061 Step: 233000 Total Loss: 0.0383 Recon Loss: 0.0283 [03/31 01:12:33 TiTok]: Data (t): 0.0033, 62.18/s/gpu Batch (t): 0.5790 LR: 0.000061 Step: 233100 Total Loss: 0.0386 Recon Loss: 0.0275 [03/31 01:13:31 TiTok]: Data (t): 0.0034, 62.05/s/gpu Batch (t): 0.5802 LR: 0.000061 Step: 233200 Total Loss: 0.0403 Recon Loss: 0.0280 [03/31 01:14:29 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000061 Step: 233300 Total Loss: 0.0377 Recon Loss: 0.0284 [03/31 01:15:26 TiTok]: Data (t): 0.0032, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000060 Step: 233400 Total Loss: 0.0365 Recon Loss: 0.0277 [03/31 01:16:24 TiTok]: Data (t): 0.0033, 61.23/s/gpu Batch (t): 0.5880 LR: 0.000060 Step: 233500 Total Loss: 0.0374 Recon Loss: 0.0272 [03/31 01:17:22 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000060 Step: 233600 Total Loss: 0.0410 Recon Loss: 0.0283 [03/31 01:18:21 TiTok]: Data (t): 0.0034, 61.96/s/gpu Batch (t): 0.5810 LR: 0.000060 Step: 233700 Total Loss: 0.0405 Recon Loss: 0.0270 [03/31 01:19:19 TiTok]: Data (t): 0.0033, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000060 Step: 233800 Total Loss: 0.0391 Recon Loss: 0.0277 [03/31 01:20:16 TiTok]: Data (t): 0.0034, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000060 Step: 233900 Total Loss: 0.0412 Recon Loss: 0.0278 [03/31 01:21:14 TiTok]: Data (t): 0.0033, 56.62/s/gpu Batch (t): 0.6358 LR: 0.000060 Step: 234000 Total Loss: 0.0387 Recon Loss: 0.0280 [03/31 01:22:12 TiTok]: Data (t): 0.0034, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000060 Step: 234100 Total Loss: 0.0391 Recon Loss: 0.0287 [03/31 01:23:10 TiTok]: Data (t): 0.0033, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000060 Step: 234200 Total Loss: 0.0397 Recon Loss: 0.0288 [03/31 01:24:08 TiTok]: Data (t): 0.0032, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000060 Step: 234300 Total Loss: 0.0404 Recon Loss: 0.0284 [03/31 01:25:06 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000060 Step: 234400 Total Loss: 0.0373 Recon Loss: 0.0272 [03/31 01:26:05 TiTok]: Data (t): 0.0031, 58.97/s/gpu Batch (t): 0.6105 LR: 0.000060 Step: 234500 Total Loss: 0.0400 Recon Loss: 0.0271 [03/31 01:27:04 TiTok]: Data (t): 0.0031, 61.15/s/gpu Batch (t): 0.5887 LR: 0.000060 Step: 234600 Total Loss: 0.0405 Recon Loss: 0.0284 [03/31 01:28:02 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000060 Step: 234700 Total Loss: 0.0405 Recon Loss: 0.0285 [03/31 01:28:59 TiTok]: Data (t): 0.0031, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000060 Step: 234800 Total Loss: 0.0371 Recon Loss: 0.0263 [03/31 01:29:57 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000060 Step: 234900 Total Loss: 0.0368 Recon Loss: 0.0270 
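Total Loss runs roughly 0.010 above Recon Loss throughout this stretch; with the configured weights, that gap is the perceptual and adversarial share (reconstruction_weight: 1.0, perceptual_weight: 0.1, discriminator_factor: 1.0 times discriminator_weight: 0.01, active since discriminator_start: 20000; quantizer_weight: 0.0, so the commitment term contributes nothing). A hypothetical decomposition with invented but plausible component values — only the weights come from the config; the actual loss module is not shown in this log, and the LeCam term normally applies on the discriminator side:

```python
# Hypothetical assembly of the logged "Total Loss" from the config's
# generator-side weights. The perceptual and adversarial inputs below
# are invented for illustration.
def generator_total(recon, perceptual, g_adv,
                    recon_w=1.0, perc_w=0.1, disc_factor=1.0, disc_w=0.01):
    return recon_w * recon + perc_w * perceptual + disc_factor * disc_w * g_adv

# Step 234900 logs total 0.0368 with recon 0.0270; e.g. perceptual ~0.09
# and g_adv ~0.08 would close the ~0.0098 gap exactly.
print(generator_total(0.0270, 0.09, 0.08))  # 0.0368
```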
[03/31 01:30:55 TiTok]: Data (t): 0.0031, 56.97/s/gpu Batch (t): 0.6319 LR: 0.000060 Step: 235000 Total Loss: 0.0376 Recon Loss: 0.0275 [03/31 01:31:53 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000060 Step: 235100 Total Loss: 0.0380 Recon Loss: 0.0268 [03/31 01:32:51 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000060 Step: 235200 Total Loss: 0.0410 Recon Loss: 0.0289 [03/31 01:33:49 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000060 Step: 235300 Total Loss: 0.0384 Recon Loss: 0.0293 [03/31 01:34:46 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000060 Step: 235400 Total Loss: 0.0376 Recon Loss: 0.0255 [03/31 01:35:44 TiTok]: Data (t): 0.0031, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000060 Step: 235500 Total Loss: 0.0371 Recon Loss: 0.0274 [03/31 01:36:42 TiTok]: Data (t): 0.0031, 61.73/s/gpu Batch (t): 0.5832 LR: 0.000060 Step: 235600 Total Loss: 0.0395 Recon Loss: 0.0270 [03/31 01:37:39 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000060 Step: 235700 Total Loss: 0.0385 Recon Loss: 0.0288 [03/31 01:38:37 TiTok]: Data (t): 0.0031, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000060 Step: 235800 Total Loss: 0.0422 Recon Loss: 0.0278 [03/31 01:39:35 TiTok]: Data (t): 0.0031, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000060 Step: 235900 Total Loss: 0.0362 Recon Loss: 0.0275 [03/31 01:40:33 TiTok]: Data (t): 0.0031, 56.98/s/gpu Batch (t): 0.6318 LR: 0.000060 Step: 236000 Total Loss: 0.0414 Recon Loss: 0.0291 [03/31 01:41:31 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000060 Step: 236100 Total Loss: 0.0372 Recon Loss: 0.0278 [03/31 01:42:28 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000060 Step: 236200 Total Loss: 0.0377 Recon Loss: 0.0268 [03/31 01:43:26 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000060 Step: 236300 Total Loss: 0.0393 Recon Loss: 0.0266 [03/31 01:44:24 TiTok]: Data (t): 0.0031, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000060 Step: 236400 Total Loss: 0.0348 Recon Loss: 0.0257 [03/31 01:45:22 TiTok]: Data (t): 0.0032, 61.65/s/gpu Batch (t): 0.5839 LR: 0.000060 Step: 236500 Total Loss: 0.0388 Recon Loss: 0.0273 [03/31 01:46:20 TiTok]: Data (t): 0.0035, 61.89/s/gpu Batch (t): 0.5816 LR: 0.000060 Step: 236600 Total Loss: 0.0379 Recon Loss: 0.0268 [03/31 01:47:18 TiTok]: Data (t): 0.0033, 62.03/s/gpu Batch (t): 0.5803 LR: 0.000060 Step: 236700 Total Loss: 0.0354 Recon Loss: 0.0258 [03/31 01:48:16 TiTok]: Data (t): 0.0033, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000060 Step: 236800 Total Loss: 0.0375 Recon Loss: 0.0267 [03/31 01:49:14 TiTok]: Data (t): 0.0032, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000059 Step: 236900 Total Loss: 0.0383 Recon Loss: 0.0274 [03/31 01:50:12 TiTok]: Data (t): 0.0033, 56.93/s/gpu Batch (t): 0.6323 LR: 0.000059 Step: 237000 Total Loss: 0.0379 Recon Loss: 0.0275 [03/31 01:51:09 TiTok]: Data (t): 0.0032, 61.15/s/gpu Batch (t): 0.5888 LR: 0.000059 Step: 237100 Total Loss: 0.0372 Recon Loss: 0.0274 [03/31 01:52:08 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000059 Step: 237200 Total Loss: 0.0394 Recon Loss: 0.0277 [03/31 01:53:06 TiTok]: Data (t): 0.0034, 61.89/s/gpu Batch (t): 0.5817 LR: 0.000059 Step: 237300 Total Loss: 0.0417 Recon Loss: 0.0294 [03/31 01:54:04 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000059 Step: 237400 Total Loss: 0.0356 Recon Loss: 0.0265 [03/31 01:55:02 TiTok]: Data (t): 0.0034, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000059 Step: 237500 Total Loss: 0.0395 Recon Loss: 0.0293 [03/31 01:56:00 
TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000059 Step: 237600 Total Loss: 0.0363 Recon Loss: 0.0267 [03/31 01:56:57 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000059 Step: 237700 Total Loss: 0.0402 Recon Loss: 0.0282 [03/31 01:57:55 TiTok]: Data (t): 0.0034, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000059 Step: 237800 Total Loss: 0.0396 Recon Loss: 0.0291 [03/31 01:58:53 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000059 Step: 237900 Total Loss: 0.0400 Recon Loss: 0.0294 [03/31 01:59:51 TiTok]: Data (t): 0.0032, 56.76/s/gpu Batch (t): 0.6342 LR: 0.000059 Step: 238000 Total Loss: 0.0388 Recon Loss: 0.0282 [03/31 02:00:49 TiTok]: Data (t): 0.0034, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000059 Step: 238100 Total Loss: 0.0386 Recon Loss: 0.0281 [03/31 02:01:47 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000059 Step: 238200 Total Loss: 0.0391 Recon Loss: 0.0283 [03/31 02:02:44 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000059 Step: 238300 Total Loss: 0.0370 Recon Loss: 0.0263 [03/31 02:03:42 TiTok]: Data (t): 0.0031, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000059 Step: 238400 Total Loss: 0.0389 Recon Loss: 0.0277 [03/31 02:04:40 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000059 Step: 238500 Total Loss: 0.0376 Recon Loss: 0.0284 [03/31 02:05:38 TiTok]: Data (t): 0.0033, 60.91/s/gpu Batch (t): 0.5911 LR: 0.000059 Step: 238600 Total Loss: 0.0422 Recon Loss: 0.0288 [03/31 02:06:36 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000059 Step: 238700 Total Loss: 0.0409 Recon Loss: 0.0273 [03/31 02:07:34 TiTok]: Data (t): 0.0031, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000059 Step: 238800 Total Loss: 0.0396 Recon Loss: 0.0289 [03/31 02:08:32 TiTok]: Data (t): 0.0031, 62.76/s/gpu Batch (t): 0.5736 LR: 0.000059 Step: 238900 Total Loss: 0.0370 Recon Loss: 0.0276 [03/31 02:09:32 TiTok]: Data (t): 0.0032, 56.70/s/gpu Batch (t): 0.6349 LR: 0.000059 Step: 239000 Total Loss: 0.0369 Recon Loss: 0.0267 [03/31 02:10:30 TiTok]: Data (t): 0.0034, 61.84/s/gpu Batch (t): 0.5821 LR: 0.000059 Step: 239100 Total Loss: 0.0396 Recon Loss: 0.0269 [03/31 02:11:28 TiTok]: Data (t): 0.0032, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000059 Step: 239200 Total Loss: 0.0400 Recon Loss: 0.0286 [03/31 02:12:26 TiTok]: Data (t): 0.0032, 61.97/s/gpu Batch (t): 0.5809 LR: 0.000059 Step: 239300 Total Loss: 0.0411 Recon Loss: 0.0280 [03/31 02:13:24 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000059 Step: 239400 Total Loss: 0.0399 Recon Loss: 0.0285 [03/31 02:14:22 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000059 Step: 239500 Total Loss: 0.0411 Recon Loss: 0.0286 [03/31 02:15:20 TiTok]: Data (t): 0.0034, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000059 Step: 239600 Total Loss: 0.0377 Recon Loss: 0.0271 [03/31 02:16:19 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000059 Step: 239700 Total Loss: 0.0376 Recon Loss: 0.0279 [03/31 02:17:16 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000059 Step: 239800 Total Loss: 0.0383 Recon Loss: 0.0285 [03/31 02:18:15 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000059 Step: 239900 Total Loss: 0.0387 Recon Loss: 0.0269 [03/31 02:19:13 TiTok]: Data (t): 0.0033, 56.67/s/gpu Batch (t): 0.6352 LR: 0.000059 Step: 240000 Total Loss: 0.0366 Recon Loss: 0.0261 [03/31 02:19:16 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-240000 [03/31 02:20:11 TiTok]: Reconstructing images... 
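Every restart in this log follows the same resume pattern: glob the checkpoint directories, load the one with the highest step, and continue from its global_step. A behavioral sketch inferred from those messages (not the project's actual code):

```python
from pathlib import Path

def find_resume_checkpoint(output_dir):
    """Mimic the resume behavior implied by the log messages."""
    ckpts = list(Path(output_dir).glob("checkpoint-*"))
    print(f"All globbed checkpoints are: {[str(p) for p in ckpts]}")
    if not ckpts:
        print("Training from scratch.")
        return None, 0
    # Highest global step wins.
    latest = max(ckpts, key=lambda p: int(p.name.rsplit("-", 1)[-1]))
    print(f"Load checkpoint from {latest}")
    step = int(latest.name.rsplit("-", 1)[-1])
    print(f"Resuming at global_step {step}")
    return latest, step
```

Picking the maximum step also explains the 22:33 and 22:40 restarts that loaded checkpoint-70000: at that moment only checkpoint-70000 appeared in the glob, apparently because the newer checkpoints were not yet visible, and the run only got back to step 200000 once checkpoint-200000 showed up in the 22:59 listing.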
[03/31 02:21:10 TiTok]: Data (t): 0.0034, 62.02/s/gpu Batch (t): 0.5804 LR: 0.000059 Step: 240100 Total Loss: 0.0383 Recon Loss: 0.0276 [03/31 02:22:08 TiTok]: Data (t): 0.0035, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000059 Step: 240200 Total Loss: 0.0388 Recon Loss: 0.0276 [03/31 02:23:06 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000059 Step: 240300 Total Loss: 0.0378 Recon Loss: 0.0271 [03/31 02:24:04 TiTok]: Data (t): 0.0033, 62.18/s/gpu Batch (t): 0.5789 LR: 0.000059 Step: 240400 Total Loss: 0.0394 Recon Loss: 0.0252 [03/31 02:25:02 TiTok]: Data (t): 0.0034, 62.08/s/gpu Batch (t): 0.5799 LR: 0.000058 Step: 240500 Total Loss: 0.0401 Recon Loss: 0.0289 [03/31 02:26:00 TiTok]: Data (t): 0.0035, 61.83/s/gpu Batch (t): 0.5822 LR: 0.000058 Step: 240600 Total Loss: 0.0391 Recon Loss: 0.0283 [03/31 02:26:58 TiTok]: Data (t): 0.0034, 61.63/s/gpu Batch (t): 0.5841 LR: 0.000058 Step: 240700 Total Loss: 0.0347 Recon Loss: 0.0269 [03/31 02:27:56 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000058 Step: 240800 Total Loss: 0.0364 Recon Loss: 0.0265 [03/31 02:28:54 TiTok]: Data (t): 0.0033, 62.26/s/gpu Batch (t): 0.5783 LR: 0.000058 Step: 240900 Total Loss: 0.0384 Recon Loss: 0.0269 [03/31 02:29:52 TiTok]: Data (t): 0.0032, 48.65/s/gpu Batch (t): 0.7399 LR: 0.000058 Step: 241000 Total Loss: 0.0380 Recon Loss: 0.0283 [03/31 02:30:51 TiTok]: Data (t): 0.0034, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000058 Step: 241100 Total Loss: 0.0391 Recon Loss: 0.0271 [03/31 02:31:49 TiTok]: Data (t): 0.0034, 61.78/s/gpu Batch (t): 0.5827 LR: 0.000058 Step: 241200 Total Loss: 0.0389 Recon Loss: 0.0298 [03/31 02:32:47 TiTok]: Data (t): 0.0034, 62.14/s/gpu Batch (t): 0.5794 LR: 0.000058 Step: 241300 Total Loss: 0.0389 Recon Loss: 0.0283 [03/31 02:33:45 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000058 Step: 241400 Total Loss: 0.0386 Recon Loss: 0.0275 [03/31 02:34:43 TiTok]: Data (t): 0.0033, 61.33/s/gpu Batch (t): 0.5870 LR: 0.000058 Step: 241500 Total Loss: 0.0395 Recon Loss: 0.0278 [03/31 02:35:41 TiTok]: Data (t): 0.0032, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000058 Step: 241600 Total Loss: 0.0402 Recon Loss: 0.0275 [03/31 02:36:39 TiTok]: Data (t): 0.0032, 62.19/s/gpu Batch (t): 0.5788 LR: 0.000058 Step: 241700 Total Loss: 0.0388 Recon Loss: 0.0278 [03/31 02:37:37 TiTok]: Data (t): 0.0032, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000058 Step: 241800 Total Loss: 0.0362 Recon Loss: 0.0256 [03/31 02:38:36 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000058 Step: 241900 Total Loss: 0.0392 Recon Loss: 0.0283 [03/31 02:39:34 TiTok]: Data (t): 0.0033, 56.59/s/gpu Batch (t): 0.6361 LR: 0.000058 Step: 242000 Total Loss: 0.0367 Recon Loss: 0.0264 [03/31 02:40:32 TiTok]: Data (t): 0.0033, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000058 Step: 242100 Total Loss: 0.0397 Recon Loss: 0.0283 [03/31 02:41:30 TiTok]: Data (t): 0.0033, 62.08/s/gpu Batch (t): 0.5799 LR: 0.000058 Step: 242200 Total Loss: 0.0377 Recon Loss: 0.0279 [03/31 02:42:28 TiTok]: Data (t): 0.0033, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000058 Step: 242300 Total Loss: 0.0395 Recon Loss: 0.0262 [03/31 02:43:26 TiTok]: Data (t): 0.0034, 61.99/s/gpu Batch (t): 0.5808 LR: 0.000058 Step: 242400 Total Loss: 0.0390 Recon Loss: 0.0290 [03/31 02:44:24 TiTok]: Data (t): 0.0033, 62.14/s/gpu Batch (t): 0.5794 LR: 0.000058 Step: 242500 Total Loss: 0.0357 Recon Loss: 0.0253 [03/31 02:45:22 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000058 Step: 242600 Total Loss: 0.0359 Recon Loss: 0.0268 [03/31 02:46:20 
TiTok]: Data (t): 0.0033, 62.10/s/gpu Batch (t): 0.5797 LR: 0.000058 Step: 242700 Total Loss: 0.0371 Recon Loss: 0.0271 [03/31 02:47:19 TiTok]: Data (t): 0.0034, 62.03/s/gpu Batch (t): 0.5803 LR: 0.000058 Step: 242800 Total Loss: 0.0401 Recon Loss: 0.0279 [03/31 02:48:17 TiTok]: Data (t): 0.0033, 61.10/s/gpu Batch (t): 0.5892 LR: 0.000058 Step: 242900 Total Loss: 0.0386 Recon Loss: 0.0272 [03/31 02:49:15 TiTok]: Data (t): 0.0033, 56.18/s/gpu Batch (t): 0.6408 LR: 0.000058 Step: 243000 Total Loss: 0.0393 Recon Loss: 0.0276 [03/31 02:50:13 TiTok]: Data (t): 0.0032, 61.95/s/gpu Batch (t): 0.5812 LR: 0.000058 Step: 243100 Total Loss: 0.0392 Recon Loss: 0.0283 [03/31 02:51:11 TiTok]: Data (t): 0.0033, 61.87/s/gpu Batch (t): 0.5819 LR: 0.000058 Step: 243200 Total Loss: 0.0371 Recon Loss: 0.0271 [03/31 02:52:10 TiTok]: Data (t): 0.0033, 61.93/s/gpu Batch (t): 0.5813 LR: 0.000058 Step: 243300 Total Loss: 0.0376 Recon Loss: 0.0284 [03/31 02:53:09 TiTok]: Data (t): 0.0033, 62.04/s/gpu Batch (t): 0.5803 LR: 0.000058 Step: 243400 Total Loss: 0.0409 Recon Loss: 0.0301 [03/31 02:54:07 TiTok]: Data (t): 0.0033, 62.27/s/gpu Batch (t): 0.5782 LR: 0.000058 Step: 243500 Total Loss: 0.0398 Recon Loss: 0.0287 [03/31 02:55:05 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000058 Step: 243600 Total Loss: 0.0382 Recon Loss: 0.0270 [03/31 02:56:03 TiTok]: Data (t): 0.0033, 61.94/s/gpu Batch (t): 0.5812 LR: 0.000058 Step: 243700 Total Loss: 0.0379 Recon Loss: 0.0268 [03/31 02:57:01 TiTok]: Data (t): 0.0034, 61.94/s/gpu Batch (t): 0.5812 LR: 0.000058 Step: 243800 Total Loss: 0.0384 Recon Loss: 0.0276 [03/31 02:58:00 TiTok]: Data (t): 0.0034, 61.97/s/gpu Batch (t): 0.5809 LR: 0.000058 Step: 243900 Total Loss: 0.0383 Recon Loss: 0.0282 [03/31 02:58:58 TiTok]: Data (t): 0.0032, 56.40/s/gpu Batch (t): 0.6383 LR: 0.000057 Step: 244000 Total Loss: 0.0379 Recon Loss: 0.0274 [03/31 02:59:56 TiTok]: Data (t): 0.0032, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000057 Step: 244100 Total Loss: 0.0381 Recon Loss: 0.0296 [03/31 03:00:54 TiTok]: Data (t): 0.0033, 62.10/s/gpu Batch (t): 0.5797 LR: 0.000057 Step: 244200 Total Loss: 0.0385 Recon Loss: 0.0290 [03/31 03:01:53 TiTok]: Data (t): 0.0032, 62.10/s/gpu Batch (t): 0.5797 LR: 0.000057 Step: 244300 Total Loss: 0.0404 Recon Loss: 0.0288 [03/31 03:02:51 TiTok]: Data (t): 0.0035, 61.10/s/gpu Batch (t): 0.5892 LR: 0.000057 Step: 244400 Total Loss: 0.0364 Recon Loss: 0.0254 [03/31 03:03:49 TiTok]: Data (t): 0.0033, 61.61/s/gpu Batch (t): 0.5843 LR: 0.000057 Step: 244500 Total Loss: 0.0385 Recon Loss: 0.0272 [03/31 03:04:47 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000057 Step: 244600 Total Loss: 0.0401 Recon Loss: 0.0292 [03/31 03:05:45 TiTok]: Data (t): 0.0032, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000057 Step: 244700 Total Loss: 0.0373 Recon Loss: 0.0260 [03/31 03:06:43 TiTok]: Data (t): 0.0033, 62.16/s/gpu Batch (t): 0.5791 LR: 0.000057 Step: 244800 Total Loss: 0.0395 Recon Loss: 0.0281 [03/31 03:07:41 TiTok]: Data (t): 0.0033, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000057 Step: 244900 Total Loss: 0.0404 Recon Loss: 0.0287 [03/31 03:08:39 TiTok]: Data (t): 0.0034, 56.27/s/gpu Batch (t): 0.6398 LR: 0.000057 Step: 245000 Total Loss: 0.0382 Recon Loss: 0.0274 [03/31 03:09:37 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000057 Step: 245100 Total Loss: 0.0397 Recon Loss: 0.0277 [03/31 03:10:35 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000057 Step: 245200 Total Loss: 0.0404 Recon Loss: 0.0287 [03/31 03:11:33 TiTok]: Data (t): 
0.0033, 61.36/s/gpu Batch (t): 0.5867 LR: 0.000057 Step: 245300 Total Loss: 0.0406 Recon Loss: 0.0283 [03/31 03:12:31 TiTok]: Data (t): 0.0033, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000057 Step: 245400 Total Loss: 0.0406 Recon Loss: 0.0293 [03/31 03:13:29 TiTok]: Data (t): 0.0034, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000057 Step: 245500 Total Loss: 0.0362 Recon Loss: 0.0268 [03/31 03:14:27 TiTok]: Data (t): 0.0036, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000057 Step: 245600 Total Loss: 0.0379 Recon Loss: 0.0285 [03/31 03:15:25 TiTok]: Data (t): 0.0032, 62.18/s/gpu Batch (t): 0.5789 LR: 0.000057 Step: 245700 Total Loss: 0.0378 Recon Loss: 0.0268 [03/31 03:16:23 TiTok]: Data (t): 0.0034, 62.13/s/gpu Batch (t): 0.5795 LR: 0.000057 Step: 245800 Total Loss: 0.0397 Recon Loss: 0.0277 [03/31 03:17:21 TiTok]: Data (t): 0.0033, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000057 Step: 245900 Total Loss: 0.0381 Recon Loss: 0.0274 [03/31 03:18:19 TiTok]: Data (t): 0.0032, 56.22/s/gpu Batch (t): 0.6403 LR: 0.000057 Step: 246000 Total Loss: 0.0376 Recon Loss: 0.0282 [03/31 03:19:17 TiTok]: Data (t): 0.0034, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000057 Step: 246100 Total Loss: 0.0402 Recon Loss: 0.0289 [03/31 03:20:15 TiTok]: Data (t): 0.0033, 62.01/s/gpu Batch (t): 0.5806 LR: 0.000057 Step: 246200 Total Loss: 0.0390 Recon Loss: 0.0287 [03/31 03:21:14 TiTok]: Data (t): 0.0032, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000057 Step: 246300 Total Loss: 0.0392 Recon Loss: 0.0277 [03/31 03:22:13 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000057 Step: 246400 Total Loss: 0.0381 Recon Loss: 0.0286 [03/31 03:23:11 TiTok]: Data (t): 0.0035, 61.64/s/gpu Batch (t): 0.5840 LR: 0.000057 Step: 246500 Total Loss: 0.0395 Recon Loss: 0.0281 [03/31 03:24:09 TiTok]: Data (t): 0.0034, 62.13/s/gpu Batch (t): 0.5795 LR: 0.000057 Step: 246600 Total Loss: 0.0381 Recon Loss: 0.0281 [03/31 03:25:07 TiTok]: Data (t): 0.0034, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000057 Step: 246700 Total Loss: 0.0402 Recon Loss: 0.0289 [03/31 03:26:05 TiTok]: Data (t): 0.0032, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000057 Step: 246800 Total Loss: 0.0404 Recon Loss: 0.0286 [03/31 03:27:03 TiTok]: Data (t): 0.0033, 62.10/s/gpu Batch (t): 0.5797 LR: 0.000057 Step: 246900 Total Loss: 0.0390 Recon Loss: 0.0277 [03/31 03:28:01 TiTok]: Data (t): 0.0033, 56.51/s/gpu Batch (t): 0.6370 LR: 0.000057 Step: 247000 Total Loss: 0.0382 Recon Loss: 0.0284 [03/31 03:28:59 TiTok]: Data (t): 0.0033, 62.16/s/gpu Batch (t): 0.5791 LR: 0.000057 Step: 247100 Total Loss: 0.0398 Recon Loss: 0.0289 [03/31 03:29:57 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000057 Step: 247200 Total Loss: 0.0397 Recon Loss: 0.0277 [03/31 03:30:55 TiTok]: Data (t): 0.0032, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000057 Step: 247300 Total Loss: 0.0396 Recon Loss: 0.0275 [03/31 03:31:53 TiTok]: Data (t): 0.0032, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000057 Step: 247400 Total Loss: 0.0377 Recon Loss: 0.0289 [03/31 03:32:51 TiTok]: Data (t): 0.0033, 62.15/s/gpu Batch (t): 0.5793 LR: 0.000056 Step: 247500 Total Loss: 0.0365 Recon Loss: 0.0275 [03/31 03:33:49 TiTok]: Data (t): 0.0033, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000056 Step: 247600 Total Loss: 0.0379 Recon Loss: 0.0268 [03/31 03:34:47 TiTok]: Data (t): 0.0032, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000056 Step: 247700 Total Loss: 0.0375 Recon Loss: 0.0271 [03/31 03:35:45 TiTok]: Data (t): 0.0031, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000056 Step: 247800 Total Loss: 0.0386 Recon Loss: 0.0291 [03/31 03:36:44 TiTok]: Data (t): 0.0032, 
62.22/s/gpu Batch (t): 0.5786 LR: 0.000056 Step: 247900 Total Loss: 0.0380 Recon Loss: 0.0272 [03/31 03:37:42 TiTok]: Data (t): 0.0034, 56.35/s/gpu Batch (t): 0.6389 LR: 0.000056 Step: 248000 Total Loss: 0.0372 Recon Loss: 0.0278 [03/31 03:38:41 TiTok]: Data (t): 0.0033, 61.76/s/gpu Batch (t): 0.5829 LR: 0.000056 Step: 248100 Total Loss: 0.0403 Recon Loss: 0.0286 [03/31 03:39:39 TiTok]: Data (t): 0.0032, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000056 Step: 248200 Total Loss: 0.0381 Recon Loss: 0.0277 [03/31 03:40:37 TiTok]: Data (t): 0.0035, 61.86/s/gpu Batch (t): 0.5820 LR: 0.000056 Step: 248300 Total Loss: 0.0404 Recon Loss: 0.0284 [03/31 03:41:35 TiTok]: Data (t): 0.0032, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000056 Step: 248400 Total Loss: 0.0364 Recon Loss: 0.0268 [03/31 03:42:33 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000056 Step: 248500 Total Loss: 0.0388 Recon Loss: 0.0294 [03/31 03:43:31 TiTok]: Data (t): 0.0032, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000056 Step: 248600 Total Loss: 0.0380 Recon Loss: 0.0274 [03/31 03:44:30 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000056 Step: 248700 Total Loss: 0.0378 Recon Loss: 0.0268 [03/31 03:45:28 TiTok]: Data (t): 0.0032, 61.53/s/gpu Batch (t): 0.5851 LR: 0.000056 Step: 248800 Total Loss: 0.0394 Recon Loss: 0.0266 [03/31 03:46:26 TiTok]: Data (t): 0.0033, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000056 Step: 248900 Total Loss: 0.0403 Recon Loss: 0.0283 [03/31 03:47:24 TiTok]: Data (t): 0.0033, 56.33/s/gpu Batch (t): 0.6391 LR: 0.000056 Step: 249000 Total Loss: 0.0400 Recon Loss: 0.0278 [03/31 03:48:22 TiTok]: Data (t): 0.0034, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000056 Step: 249100 Total Loss: 0.0402 Recon Loss: 0.0284 [03/31 03:49:20 TiTok]: Data (t): 0.0033, 58.70/s/gpu Batch (t): 0.6133 LR: 0.000056 Step: 249200 Total Loss: 0.0403 Recon Loss: 0.0295 [03/31 03:50:18 TiTok]: Data (t): 0.0034, 61.72/s/gpu Batch (t): 0.5833 LR: 0.000056 Step: 249300 Total Loss: 0.0410 Recon Loss: 0.0294 [03/31 03:51:16 TiTok]: Data (t): 0.0033, 62.18/s/gpu Batch (t): 0.5789 LR: 0.000056 Step: 249400 Total Loss: 0.0379 Recon Loss: 0.0261 [03/31 03:52:14 TiTok]: Data (t): 0.0133, 61.14/s/gpu Batch (t): 0.5888 LR: 0.000056 Step: 249500 Total Loss: 0.0388 Recon Loss: 0.0284 [03/31 03:53:12 TiTok]: Data (t): 0.0033, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000056 Step: 249600 Total Loss: 0.0369 Recon Loss: 0.0275 [03/31 03:54:10 TiTok]: Data (t): 0.0033, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000056 Step: 249700 Total Loss: 0.0371 Recon Loss: 0.0278 [03/31 03:55:08 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5782 LR: 0.000056 Step: 249800 Total Loss: 0.0407 Recon Loss: 0.0283 [03/31 03:56:06 TiTok]: Data (t): 0.0033, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000056 Step: 249900 Total Loss: 0.0386 Recon Loss: 0.0288 [03/31 03:57:04 TiTok]: Data (t): 0.0033, 56.37/s/gpu Batch (t): 0.6386 LR: 0.000056 Step: 250000 Total Loss: 0.0381 Recon Loss: 0.0263 [03/31 03:57:18 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-250000 [03/31 08:56:00 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml
[03/31 09:02:00 TiTok]: Saving config to /mnt/books/train_stage2/order_32_stage2/config.yaml [03/31 09:02:00 TiTok]: Config: experiment: project: stage2 name: stage2 output_dir: /mnt/books/train_stage2/order_32_stage2/ max_train_examples: 1281167 save_every: 10000 eval_every: 1000000 generate_every: 10000 log_every: 100 log_grad_norm_every: 1000 resume: true logging_dir: /mnt/books/train_stage2/order_32_stage2/logs model: vq_model: codebook_size: 4096 token_size: 12 use_l2_norm: true commitment_cost: 0.25 vit_enc_model_size: large vit_dec_model_size: large vit_enc_patch_size: 16 vit_dec_patch_size: 16 num_latent_tokens: 32 layers_x: 18 layers_token: 2 embedding_width: 1024 width: 256 finetune_decoder: true pretrained_tokenizer_weight: maskgit-vqgan-imagenet-f16-256.bin losses: discriminator_start: 20000 quantizer_weight: 0.0 discriminator_factor: 1.0 discriminator_weight: 0.01 perceptual_loss: convnext_s perceptual_weight: 0.1 reconstruction_loss: l2 reconstruction_weight: 1.0 lecam_regularization_weight: 0.001 dataset: params: train_shards_path_or_url: imagenet/imagenet1k-train-{0000..1023}.tar eval_shards_path_or_url: imagenet/imagenet1k-validation-{00..63}.tar num_workers_per_gpu: 12 preprocessing: resize_shorter_edge: 256 crop_size: 256 random_crop: true random_flip: true optimizer: name: adamw params: learning_rate: 0.0001 discriminator_learning_rate: 0.0001 beta1: 0.9 beta2: 0.999 weight_decay: 0.0001 lr_scheduler: scheduler: cosine params: learning_rate: ${optimizer.params.learning_rate} warmup_steps: 5000 end_lr: 1.0e-05 training: gradient_accumulation_steps: 1 per_gpu_batch_size: 36 mixed_precision: fp16 enable_tf32: true enable_wandb: true use_ema: true seed: 42 max_train_steps: 500000 num_generated_images: 2 max_grad_norm: 1.0 config: configs/training/TiTok/stage2/titok_new.yaml [03/31 09:02:16 TiTok]: Creating model and loss module. [03/31 09:02:24 TiTok]: Creating optimizers. [03/31 09:02:24 TiTok]: Creating lr_schedulers. [03/31 09:02:24 TiTok]: Creating dataloaders. [03/31 09:02:24 TiTok]: Creating evaluator. [03/31 09:02:24 TiTok]: Preparing model, optimizer and dataloaders [03/31 09:02:26 TiTok]: ***** Running training ***** [03/31 09:02:26 TiTok]:  Num training steps = 500000 [03/31 09:02:26 TiTok]:  Gradient Accumulation steps = 1 [03/31 09:02:26 TiTok]:  Instantaneous batch size per gpu = 36 [03/31 09:02:26 TiTok]:  Total train batch size (w.
parallel, distributed & accumulation) = 288 [03/31 09:02:26 TiTok]: All globbed checkpoints are: ['/mnt/books/train_stage2/order_32_stage2/checkpoint-240000', '/mnt/books/train_stage2/order_32_stage2/checkpoint-210000'] [03/31 09:02:26 TiTok]: Load checkpoint from /mnt/books/train_stage2/order_32_stage2/checkpoint-240000 [03/31 09:02:38 TiTok]: Resuming at global_step 240000 [03/31 09:03:49 TiTok]: Data (t): 0.0034, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000059 Step: 240100 Total Loss: 0.0379 Recon Loss: 0.0271 [03/31 09:04:47 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000059 Step: 240200 Total Loss: 0.0375 Recon Loss: 0.0262 [03/31 09:05:44 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000059 Step: 240300 Total Loss: 0.0406 Recon Loss: 0.0281 [03/31 09:06:42 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000059 Step: 240400 Total Loss: 0.0364 Recon Loss: 0.0258 [03/31 09:07:40 TiTok]: Data (t): 0.0033, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000058 Step: 240500 Total Loss: 0.0376 Recon Loss: 0.0266 [03/31 09:08:38 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000058 Step: 240600 Total Loss: 0.0384 Recon Loss: 0.0279 [03/31 09:09:36 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000058 Step: 240700 Total Loss: 0.0380 Recon Loss: 0.0283 [03/31 09:10:34 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000058 Step: 240800 Total Loss: 0.0395 Recon Loss: 0.0277 [03/31 09:11:32 TiTok]: Data (t): 0.0034, 61.51/s/gpu Batch (t): 0.5853 LR: 0.000058 Step: 240900 Total Loss: 0.0416 Recon Loss: 0.0299 [03/31 09:12:29 TiTok]: Data (t): 0.0032, 56.54/s/gpu Batch (t): 0.6368 LR: 0.000058 Step: 241000 Total Loss: 0.0384 Recon Loss: 0.0280 [03/31 09:13:27 TiTok]: Data (t): 0.0033, 62.04/s/gpu Batch (t): 0.5803 LR: 0.000058 Step: 241100 Total Loss: 0.0400 Recon Loss: 0.0285 [03/31 09:14:25 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000058 Step: 241200 Total Loss: 0.0400 Recon Loss: 0.0274 [03/31 09:15:23 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000058 Step: 241300 Total Loss: 0.0388 Recon Loss: 0.0276 [03/31 09:16:20 TiTok]: Data (t): 0.0033, 60.15/s/gpu Batch (t): 0.5985 LR: 0.000058 Step: 241400 Total Loss: 0.0399 Recon Loss: 0.0279 [03/31 09:17:18 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000058 Step: 241500 Total Loss: 0.0404 Recon Loss: 0.0305 [03/31 09:18:16 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5751 LR: 0.000058 Step: 241600 Total Loss: 0.0379 Recon Loss: 0.0276 [03/31 09:19:13 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000058 Step: 241700 Total Loss: 0.0378 Recon Loss: 0.0273 [03/31 09:20:11 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000058 Step: 241800 Total Loss: 0.0393 Recon Loss: 0.0287 [03/31 09:21:09 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000058 Step: 241900 Total Loss: 0.0387 Recon Loss: 0.0278 [03/31 09:22:06 TiTok]: Data (t): 0.0032, 54.52/s/gpu Batch (t): 0.6603 LR: 0.000058 Step: 242000 Total Loss: 0.0402 Recon Loss: 0.0273 [03/31 09:23:04 TiTok]: Data (t): 0.0031, 62.26/s/gpu Batch (t): 0.5783 LR: 0.000058 Step: 242100 Total Loss: 0.0365 Recon Loss: 0.0284 [03/31 09:24:02 TiTok]: Data (t): 0.0034, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000058 Step: 242200 Total Loss: 0.0379 Recon Loss: 0.0275 [03/31 09:24:59 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000058 Step: 242300 Total Loss: 0.0388 Recon Loss: 0.0279 [03/31 09:25:57 TiTok]: Data (t): 0.0035, 
62.05/s/gpu Batch (t): 0.5802 LR: 0.000058 Step: 242400 Total Loss: 0.0417 Recon Loss: 0.0279 [03/31 09:26:55 TiTok]: Data (t): 0.0032, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000058 Step: 242500 Total Loss: 0.0410 Recon Loss: 0.0282 [03/31 09:27:52 TiTok]: Data (t): 0.0035, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000058 Step: 242600 Total Loss: 0.0351 Recon Loss: 0.0264 [03/31 09:28:50 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000058 Step: 242700 Total Loss: 0.0384 Recon Loss: 0.0288 [03/31 09:29:48 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000058 Step: 242800 Total Loss: 0.0407 Recon Loss: 0.0305 [03/31 09:30:46 TiTok]: Data (t): 0.0032, 62.66/s/gpu Batch (t): 0.5745 LR: 0.000058 Step: 242900 Total Loss: 0.0390 Recon Loss: 0.0283 [03/31 09:31:44 TiTok]: Data (t): 0.0033, 56.78/s/gpu Batch (t): 0.6340 LR: 0.000058 Step: 243000 Total Loss: 0.0381 Recon Loss: 0.0266 [03/31 09:32:41 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000058 Step: 243100 Total Loss: 0.0385 Recon Loss: 0.0265 [03/31 09:33:39 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000058 Step: 243200 Total Loss: 0.0381 Recon Loss: 0.0266 [03/31 09:34:36 TiTok]: Data (t): 0.0033, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000058 Step: 243300 Total Loss: 0.0409 Recon Loss: 0.0286 [03/31 09:35:34 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000058 Step: 243400 Total Loss: 0.0388 Recon Loss: 0.0276 [03/31 09:36:32 TiTok]: Data (t): 0.0031, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000058 Step: 243500 Total Loss: 0.0390 Recon Loss: 0.0274 [03/31 09:37:30 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000058 Step: 243600 Total Loss: 0.0387 Recon Loss: 0.0279 [03/31 09:38:27 TiTok]: Data (t): 0.0033, 62.60/s/gpu Batch (t): 0.5750 LR: 0.000058 Step: 243700 Total Loss: 0.0381 Recon Loss: 0.0280 [03/31 09:39:25 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000058 Step: 243800 Total Loss: 0.0387 Recon Loss: 0.0273 [03/31 09:40:23 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000058 Step: 243900 Total Loss: 0.0380 Recon Loss: 0.0271 [03/31 09:41:20 TiTok]: Data (t): 0.0033, 56.71/s/gpu Batch (t): 0.6348 LR: 0.000057 Step: 244000 Total Loss: 0.0394 Recon Loss: 0.0290 [03/31 09:42:18 TiTok]: Data (t): 0.0032, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000057 Step: 244100 Total Loss: 0.0400 Recon Loss: 0.0287 [03/31 09:43:16 TiTok]: Data (t): 0.0032, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000057 Step: 244200 Total Loss: 0.0408 Recon Loss: 0.0296 [03/31 09:44:13 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000057 Step: 244300 Total Loss: 0.0381 Recon Loss: 0.0269 [03/31 09:45:11 TiTok]: Data (t): 0.0034, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000057 Step: 244400 Total Loss: 0.0399 Recon Loss: 0.0284 [03/31 09:46:10 TiTok]: Data (t): 0.0034, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000057 Step: 244500 Total Loss: 0.0394 Recon Loss: 0.0276 [03/31 09:47:08 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000057 Step: 244600 Total Loss: 0.0400 Recon Loss: 0.0281 [03/31 09:48:05 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000057 Step: 244700 Total Loss: 0.0425 Recon Loss: 0.0292 [03/31 09:49:03 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000057 Step: 244800 Total Loss: 0.0397 Recon Loss: 0.0284 [03/31 09:50:01 TiTok]: Data (t): 0.0034, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000057 Step: 244900 Total Loss: 0.0375 Recon Loss: 0.0255 [03/31 09:50:58 TiTok]: Data (t): 0.0032, 56.96/s/gpu Batch 
[03/31 09:50:58 TiTok]: Data (t): 0.0032, 56.96/s/gpu Batch (t): 0.6320 LR: 0.000057 Step: 245000 Total Loss: 0.0379 Recon Loss: 0.0270
[03/31 09:51:57 TiTok]: Data (t): 0.0034, 62.00/s/gpu Batch (t): 0.5806 LR: 0.000057 Step: 245100 Total Loss: 0.0361 Recon Loss: 0.0252
[03/31 09:52:54 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000057 Step: 245200 Total Loss: 0.0376 Recon Loss: 0.0278
[03/31 09:53:52 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000057 Step: 245300 Total Loss: 0.0390 Recon Loss: 0.0286
[03/31 09:54:50 TiTok]: Data (t): 0.0035, 62.13/s/gpu Batch (t): 0.5795 LR: 0.000057 Step: 245400 Total Loss: 0.0403 Recon Loss: 0.0279
[03/31 09:55:48 TiTok]: Data (t): 0.0034, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000057 Step: 245500 Total Loss: 0.0400 Recon Loss: 0.0278
[03/31 09:56:45 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000057 Step: 245600 Total Loss: 0.0380 Recon Loss: 0.0277
[03/31 09:57:43 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000057 Step: 245700 Total Loss: 0.0400 Recon Loss: 0.0292
[03/31 09:58:41 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000057 Step: 245800 Total Loss: 0.0393 Recon Loss: 0.0294
[03/31 09:59:39 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000057 Step: 245900 Total Loss: 0.0369 Recon Loss: 0.0285
[03/31 10:00:36 TiTok]: Data (t): 0.0032, 57.00/s/gpu Batch (t): 0.6316 LR: 0.000057 Step: 246000 Total Loss: 0.0382 Recon Loss: 0.0281
[03/31 10:01:34 TiTok]: Data (t): 0.0034, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000057 Step: 246100 Total Loss: 0.0401 Recon Loss: 0.0299
[03/31 10:02:32 TiTok]: Data (t): 0.0035, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000057 Step: 246200 Total Loss: 0.0391 Recon Loss: 0.0282
[03/31 10:03:29 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000057 Step: 246300 Total Loss: 0.0375 Recon Loss: 0.0270
[03/31 10:04:27 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000057 Step: 246400 Total Loss: 0.0380 Recon Loss: 0.0272
[03/31 10:05:25 TiTok]: Data (t): 0.0034, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000057 Step: 246500 Total Loss: 0.0393 Recon Loss: 0.0275
[03/31 10:06:23 TiTok]: Data (t): 0.0034, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000057 Step: 246600 Total Loss: 0.0358 Recon Loss: 0.0257
[03/31 10:07:20 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000057 Step: 246700 Total Loss: 0.0394 Recon Loss: 0.0280
[03/31 10:08:18 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000057 Step: 246800 Total Loss: 0.0396 Recon Loss: 0.0287
[03/31 10:09:15 TiTok]: Data (t): 0.0032, 62.64/s/gpu Batch (t): 0.5748 LR: 0.000057 Step: 246900 Total Loss: 0.0391 Recon Loss: 0.0281
[03/31 10:10:13 TiTok]: Data (t): 0.0036, 54.68/s/gpu Batch (t): 0.6583 LR: 0.000057 Step: 247000 Total Loss: 0.0387 Recon Loss: 0.0283
[03/31 10:11:11 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000057 Step: 247100 Total Loss: 0.0393 Recon Loss: 0.0267
[03/31 10:12:09 TiTok]: Data (t): 0.0034, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000057 Step: 247200 Total Loss: 0.0388 Recon Loss: 0.0281
[03/31 10:13:07 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000057 Step: 247300 Total Loss: 0.0395 Recon Loss: 0.0277
[03/31 10:14:05 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000057 Step: 247400 Total Loss: 0.0379 Recon Loss: 0.0282
[03/31 10:15:03 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000056 Step: 247500 Total Loss: 0.0378 Recon Loss: 0.0283
[03/31 10:16:00 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000056 Step: 247600 Total Loss: 0.0399 Recon Loss: 0.0284
[03/31 10:16:58 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000056 Step: 247700 Total Loss: 0.0388 Recon Loss: 0.0269
[03/31 10:17:56 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5759 LR: 0.000056 Step: 247800 Total Loss: 0.0386 Recon Loss: 0.0291
[03/31 10:18:53 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000056 Step: 247900 Total Loss: 0.0410 Recon Loss: 0.0294
[03/31 10:19:52 TiTok]: Data (t): 0.0032, 54.03/s/gpu Batch (t): 0.6664 LR: 0.000056 Step: 248000 Total Loss: 0.0377 Recon Loss: 0.0283
[03/31 10:20:50 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000056 Step: 248100 Total Loss: 0.0364 Recon Loss: 0.0283
[03/31 10:21:47 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000056 Step: 248200 Total Loss: 0.0371 Recon Loss: 0.0272
[03/31 10:22:45 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000056 Step: 248300 Total Loss: 0.0382 Recon Loss: 0.0267
[03/31 10:23:43 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000056 Step: 248400 Total Loss: 0.0383 Recon Loss: 0.0263
[03/31 10:24:41 TiTok]: Data (t): 0.0034, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000056 Step: 248500 Total Loss: 0.0393 Recon Loss: 0.0278
[03/31 10:25:39 TiTok]: Data (t): 0.0032, 61.80/s/gpu Batch (t): 0.5825 LR: 0.000056 Step: 248600 Total Loss: 0.0397 Recon Loss: 0.0278
[03/31 10:26:37 TiTok]: Data (t): 0.0033, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000056 Step: 248700 Total Loss: 0.0390 Recon Loss: 0.0265
[03/31 10:27:35 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5791 LR: 0.000056 Step: 248800 Total Loss: 0.0381 Recon Loss: 0.0275
[03/31 10:28:33 TiTok]: Data (t): 0.0032, 62.74/s/gpu Batch (t): 0.5738 LR: 0.000056 Step: 248900 Total Loss: 0.0382 Recon Loss: 0.0273
[03/31 10:29:33 TiTok]: Data (t): 0.0032, 56.72/s/gpu Batch (t): 0.6346 LR: 0.000056 Step: 249000 Total Loss: 0.0418 Recon Loss: 0.0294
[03/31 10:30:31 TiTok]: Data (t): 0.0034, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000056 Step: 249100 Total Loss: 0.0404 Recon Loss: 0.0283
[03/31 10:31:28 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000056 Step: 249200 Total Loss: 0.0368 Recon Loss: 0.0282
[03/31 10:32:26 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000056 Step: 249300 Total Loss: 0.0375 Recon Loss: 0.0284
[03/31 10:33:24 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000056 Step: 249400 Total Loss: 0.0373 Recon Loss: 0.0264
[03/31 10:34:22 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000056 Step: 249500 Total Loss: 0.0374 Recon Loss: 0.0267
[03/31 10:35:20 TiTok]: Data (t): 0.0033, 62.02/s/gpu Batch (t): 0.5805 LR: 0.000056 Step: 249600 Total Loss: 0.0370 Recon Loss: 0.0260
[03/31 10:36:18 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5759 LR: 0.000056 Step: 249700 Total Loss: 0.0382 Recon Loss: 0.0280
[03/31 10:37:16 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000056 Step: 249800 Total Loss: 0.0415 Recon Loss: 0.0286
[03/31 10:38:13 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000056 Step: 249900 Total Loss: 0.0397 Recon Loss: 0.0281
[03/31 10:39:11 TiTok]: Data (t): 0.0032, 56.80/s/gpu Batch (t): 0.6339 LR: 0.000056 Step: 250000 Total Loss: 0.0369 Recon Loss: 0.0285
[03/31 10:39:13 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-250000
[03/31 10:39:27 TiTok]: Reconstructing images...
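The per-100-step records above are easiest to inspect after parsing them into arrays. A minimal sketch, assuming the log has been saved to a file (train.log below is a hypothetical path); the regular expression follows the record format used throughout this log:

    import re

    # Matches the tail of each record, e.g.
    # "... LR: 0.000056 Step: 250000 Total Loss: 0.0369 Recon Loss: 0.0285"
    RECORD = re.compile(
        r"LR: (?P<lr>[\d.]+) Step: (?P<step>\d+) "
        r"Total Loss: (?P<total>[\d.]+) Recon Loss: (?P<recon>[\d.]+)"
    )

    steps, lrs, totals, recons = [], [], [], []
    with open("train.log") as f:  # hypothetical path to this log file
        for m in RECORD.finditer(f.read()):
            steps.append(int(m["step"]))
            lrs.append(float(m["lr"]))
            totals.append(float(m["total"]))
            recons.append(float(m["recon"]))

    print(f"parsed {len(steps)} records, last step {steps[-1]}")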
[03/31 10:40:26 TiTok]: Data (t): 0.0034, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000056 Step: 250100 Total Loss: 0.0387 Recon Loss: 0.0266
[03/31 10:41:23 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000056 Step: 250200 Total Loss: 0.0402 Recon Loss: 0.0287
[03/31 10:42:21 TiTok]: Data (t): 0.0034, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000056 Step: 250300 Total Loss: 0.0387 Recon Loss: 0.0273
[03/31 10:43:19 TiTok]: Data (t): 0.0036, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000056 Step: 250400 Total Loss: 0.0397 Recon Loss: 0.0285
[03/31 10:44:16 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000056 Step: 250500 Total Loss: 0.0363 Recon Loss: 0.0280
[03/31 10:45:14 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000056 Step: 250600 Total Loss: 0.0372 Recon Loss: 0.0269
[03/31 10:46:12 TiTok]: Data (t): 0.0032, 62.66/s/gpu Batch (t): 0.5746 LR: 0.000056 Step: 250700 Total Loss: 0.0388 Recon Loss: 0.0280
[03/31 10:47:09 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000056 Step: 250800 Total Loss: 0.0380 Recon Loss: 0.0268
[03/31 10:48:07 TiTok]: Data (t): 0.0032, 62.66/s/gpu Batch (t): 0.5745 LR: 0.000056 Step: 250900 Total Loss: 0.0382 Recon Loss: 0.0269
[03/31 10:49:05 TiTok]: Data (t): 0.0032, 52.19/s/gpu Batch (t): 0.6899 LR: 0.000055 Step: 251000 Total Loss: 0.0394 Recon Loss: 0.0285
[03/31 10:50:02 TiTok]: Data (t): 0.0033, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000055 Step: 251100 Total Loss: 0.0404 Recon Loss: 0.0288
[03/31 10:51:00 TiTok]: Data (t): 0.0034, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000055 Step: 251200 Total Loss: 0.0406 Recon Loss: 0.0287
[03/31 10:51:57 TiTok]: Data (t): 0.0034, 62.49/s/gpu Batch (t): 0.5760 LR: 0.000055 Step: 251300 Total Loss: 0.0389 Recon Loss: 0.0290
[03/31 10:52:55 TiTok]: Data (t): 0.0034, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000055 Step: 251400 Total Loss: 0.0403 Recon Loss: 0.0282
[03/31 10:53:53 TiTok]: Data (t): 0.0034, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000055 Step: 251500 Total Loss: 0.0416 Recon Loss: 0.0299
[03/31 10:54:50 TiTok]: Data (t): 0.0033, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000055 Step: 251600 Total Loss: 0.0412 Recon Loss: 0.0287
[03/31 10:55:48 TiTok]: Data (t): 0.0032, 62.71/s/gpu Batch (t): 0.5740 LR: 0.000055 Step: 251700 Total Loss: 0.0379 Recon Loss: 0.0280
[03/31 10:56:45 TiTok]: Data (t): 0.0032, 62.78/s/gpu Batch (t): 0.5734 LR: 0.000055 Step: 251800 Total Loss: 0.0369 Recon Loss: 0.0273
[03/31 10:57:43 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000055 Step: 251900 Total Loss: 0.0378 Recon Loss: 0.0270
[03/31 10:58:41 TiTok]: Data (t): 0.0033, 56.93/s/gpu Batch (t): 0.6324 LR: 0.000055 Step: 252000 Total Loss: 0.0376 Recon Loss: 0.0281
[03/31 10:59:39 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000055 Step: 252100 Total Loss: 0.0381 Recon Loss: 0.0273
[03/31 11:00:36 TiTok]: Data (t): 0.0032, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000055 Step: 252200 Total Loss: 0.0376 Recon Loss: 0.0287
[03/31 11:01:33 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000055 Step: 252300 Total Loss: 0.0394 Recon Loss: 0.0287
[03/31 11:02:31 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000055 Step: 252400 Total Loss: 0.0390 Recon Loss: 0.0277
[03/31 11:03:29 TiTok]: Data (t): 0.0033, 62.03/s/gpu Batch (t): 0.5803 LR: 0.000055 Step: 252500 Total Loss: 0.0387 Recon Loss: 0.0276
[03/31 11:04:26 TiTok]: Data (t): 0.0034, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000055 Step: 252600 Total Loss: 0.0388 Recon Loss: 0.0274
[03/31 11:05:24 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000055 Step: 252700 Total Loss: 0.0373 Recon Loss: 0.0272
[03/31 11:06:22 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000055 Step: 252800 Total Loss: 0.0378 Recon Loss: 0.0301
[03/31 11:07:20 TiTok]: Data (t): 0.0034, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000055 Step: 252900 Total Loss: 0.0388 Recon Loss: 0.0286
[03/31 11:08:18 TiTok]: Data (t): 0.0033, 56.66/s/gpu Batch (t): 0.6354 LR: 0.000055 Step: 253000 Total Loss: 0.0410 Recon Loss: 0.0308
[03/31 11:09:15 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000055 Step: 253100 Total Loss: 0.0378 Recon Loss: 0.0280
[03/31 11:10:13 TiTok]: Data (t): 0.0034, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000055 Step: 253200 Total Loss: 0.0390 Recon Loss: 0.0276
[03/31 11:11:11 TiTok]: Data (t): 0.0034, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000055 Step: 253300 Total Loss: 0.0401 Recon Loss: 0.0290
[03/31 11:12:10 TiTok]: Data (t): 0.0035, 61.72/s/gpu Batch (t): 0.5833 LR: 0.000055 Step: 253400 Total Loss: 0.0376 Recon Loss: 0.0267
[03/31 11:13:08 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000055 Step: 253500 Total Loss: 0.0343 Recon Loss: 0.0261
[03/31 11:14:06 TiTok]: Data (t): 0.0035, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000055 Step: 253600 Total Loss: 0.0379 Recon Loss: 0.0264
[03/31 11:15:04 TiTok]: Data (t): 0.0033, 62.04/s/gpu Batch (t): 0.5803 LR: 0.000055 Step: 253700 Total Loss: 0.0385 Recon Loss: 0.0275
[03/31 11:16:02 TiTok]: Data (t): 0.0036, 61.10/s/gpu Batch (t): 0.5892 LR: 0.000055 Step: 253800 Total Loss: 0.0363 Recon Loss: 0.0262
[03/31 11:17:01 TiTok]: Data (t): 0.0035, 61.93/s/gpu Batch (t): 0.5813 LR: 0.000055 Step: 253900 Total Loss: 0.0386 Recon Loss: 0.0271
[03/31 11:17:59 TiTok]: Data (t): 0.0036, 56.42/s/gpu Batch (t): 0.6380 LR: 0.000055 Step: 254000 Total Loss: 0.0372 Recon Loss: 0.0283
[03/31 11:18:56 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000055 Step: 254100 Total Loss: 0.0376 Recon Loss: 0.0263
[03/31 11:19:55 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000055 Step: 254200 Total Loss: 0.0387 Recon Loss: 0.0268
[03/31 11:20:53 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000055 Step: 254300 Total Loss: 0.0387 Recon Loss: 0.0292
[03/31 11:21:50 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000055 Step: 254400 Total Loss: 0.0383 Recon Loss: 0.0285
[03/31 11:22:48 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000054 Step: 254500 Total Loss: 0.0374 Recon Loss: 0.0267
[03/31 11:23:46 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000054 Step: 254600 Total Loss: 0.0372 Recon Loss: 0.0264
[03/31 11:24:44 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000054 Step: 254700 Total Loss: 0.0405 Recon Loss: 0.0292
[03/31 11:25:42 TiTok]: Data (t): 0.0052, 58.56/s/gpu Batch (t): 0.6148 LR: 0.000054 Step: 254800 Total Loss: 0.0370 Recon Loss: 0.0275
[03/31 11:26:40 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000054 Step: 254900 Total Loss: 0.0398 Recon Loss: 0.0292
[03/31 11:27:38 TiTok]: Data (t): 0.0033, 56.66/s/gpu Batch (t): 0.6354 LR: 0.000054 Step: 255000 Total Loss: 0.0377 Recon Loss: 0.0256
[03/31 11:28:36 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000054 Step: 255100 Total Loss: 0.0364 Recon Loss: 0.0262
[03/31 11:29:34 TiTok]: Data (t): 0.0032, 62.08/s/gpu Batch (t): 0.5799 LR: 0.000054 Step: 255200 Total Loss: 0.0380 Recon Loss: 0.0270
[03/31 11:30:32 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000054 Step: 255300 Total Loss: 0.0379 Recon Loss: 0.0281
[03/31 11:31:30 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000054 Step: 255400 Total Loss: 0.0355 Recon Loss: 0.0259
[03/31 11:32:28 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000054 Step: 255500 Total Loss: 0.0370 Recon Loss: 0.0275
[03/31 11:33:26 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000054 Step: 255600 Total Loss: 0.0370 Recon Loss: 0.0260
[03/31 11:34:23 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000054 Step: 255700 Total Loss: 0.0388 Recon Loss: 0.0269
[03/31 11:35:21 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000054 Step: 255800 Total Loss: 0.0396 Recon Loss: 0.0287
[03/31 11:36:19 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000054 Step: 255900 Total Loss: 0.0358 Recon Loss: 0.0287
[03/31 11:37:17 TiTok]: Data (t): 0.0032, 56.52/s/gpu Batch (t): 0.6370 LR: 0.000054 Step: 256000 Total Loss: 0.0391 Recon Loss: 0.0274
[03/31 11:38:15 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000054 Step: 256100 Total Loss: 0.0368 Recon Loss: 0.0285
[03/31 11:39:13 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000054 Step: 256200 Total Loss: 0.0375 Recon Loss: 0.0265
[03/31 11:40:10 TiTok]: Data (t): 0.0032, 62.10/s/gpu Batch (t): 0.5797 LR: 0.000054 Step: 256300 Total Loss: 0.0382 Recon Loss: 0.0287
[03/31 11:41:09 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000054 Step: 256400 Total Loss: 0.0397 Recon Loss: 0.0290
[03/31 11:42:07 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000054 Step: 256500 Total Loss: 0.0410 Recon Loss: 0.0293
[03/31 11:43:05 TiTok]: Data (t): 0.0033, 61.93/s/gpu Batch (t): 0.5813 LR: 0.000054 Step: 256600 Total Loss: 0.0415 Recon Loss: 0.0280
[03/31 11:44:03 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000054 Step: 256700 Total Loss: 0.0386 Recon Loss: 0.0279
[03/31 11:45:00 TiTok]: Data (t): 0.0031, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000054 Step: 256800 Total Loss: 0.0376 Recon Loss: 0.0284
[03/31 11:45:58 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000054 Step: 256900 Total Loss: 0.0390 Recon Loss: 0.0282
[03/31 11:46:56 TiTok]: Data (t): 0.0032, 56.91/s/gpu Batch (t): 0.6325 LR: 0.000054 Step: 257000 Total Loss: 0.0396 Recon Loss: 0.0268
[03/31 11:47:54 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000054 Step: 257100 Total Loss: 0.0393 Recon Loss: 0.0289
[03/31 11:48:52 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000054 Step: 257200 Total Loss: 0.0401 Recon Loss: 0.0278
[03/31 11:49:49 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000054 Step: 257300 Total Loss: 0.0367 Recon Loss: 0.0269
[03/31 11:50:47 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000054 Step: 257400 Total Loss: 0.0391 Recon Loss: 0.0289
[03/31 11:51:45 TiTok]: Data (t): 0.0032, 61.85/s/gpu Batch (t): 0.5821 LR: 0.000054 Step: 257500 Total Loss: 0.0380 Recon Loss: 0.0269
[03/31 11:52:43 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000054 Step: 257600 Total Loss: 0.0422 Recon Loss: 0.0295
[03/31 11:53:41 TiTok]: Data (t): 0.0032, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000054 Step: 257700 Total Loss: 0.0392 Recon Loss: 0.0286
[03/31 11:54:39 TiTok]: Data (t): 0.0031, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000054 Step: 257800 Total Loss: 0.0396 Recon Loss: 0.0291
[03/31 11:55:38 TiTok]: Data (t): 0.0031, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000054 Step: 257900 Total Loss: 0.0360 Recon Loss: 0.0265
[03/31 11:56:36 TiTok]: Data (t): 0.0033, 56.72/s/gpu Batch (t): 0.6347 LR: 0.000053 Step: 258000 Total Loss: 0.0374 Recon Loss: 0.0278
[03/31 11:57:34 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000053 Step: 258100 Total Loss: 0.0375 Recon Loss: 0.0282
[03/31 11:58:32 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000053 Step: 258200 Total Loss: 0.0375 Recon Loss: 0.0283
[03/31 11:59:30 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000053 Step: 258300 Total Loss: 0.0373 Recon Loss: 0.0278
[03/31 12:00:28 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000053 Step: 258400 Total Loss: 0.0371 Recon Loss: 0.0273
[03/31 12:01:26 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000053 Step: 258500 Total Loss: 0.0377 Recon Loss: 0.0267
[03/31 12:02:24 TiTok]: Data (t): 0.0033, 61.40/s/gpu Batch (t): 0.5863 LR: 0.000053 Step: 258600 Total Loss: 0.0349 Recon Loss: 0.0252
[03/31 12:03:23 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000053 Step: 258700 Total Loss: 0.0399 Recon Loss: 0.0276
[03/31 12:04:21 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000053 Step: 258800 Total Loss: 0.0391 Recon Loss: 0.0282
[03/31 12:05:19 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000053 Step: 258900 Total Loss: 0.0366 Recon Loss: 0.0254
[03/31 12:06:16 TiTok]: Data (t): 0.0032, 56.40/s/gpu Batch (t): 0.6383 LR: 0.000053 Step: 259000 Total Loss: 0.0400 Recon Loss: 0.0271
[03/31 12:07:14 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000053 Step: 259100 Total Loss: 0.0379 Recon Loss: 0.0279
[03/31 12:08:12 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000053 Step: 259200 Total Loss: 0.0378 Recon Loss: 0.0269
[03/31 12:09:09 TiTok]: Data (t): 0.0032, 62.70/s/gpu Batch (t): 0.5741 LR: 0.000053 Step: 259300 Total Loss: 0.0360 Recon Loss: 0.0261
[03/31 12:10:07 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000053 Step: 259400 Total Loss: 0.0389 Recon Loss: 0.0282
[03/31 12:11:05 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000053 Step: 259500 Total Loss: 0.0370 Recon Loss: 0.0267
[03/31 12:12:02 TiTok]: Data (t): 0.0032, 58.33/s/gpu Batch (t): 0.6172 LR: 0.000053 Step: 259600 Total Loss: 0.0352 Recon Loss: 0.0255
[03/31 12:13:00 TiTok]: Data (t): 0.0032, 62.05/s/gpu Batch (t): 0.5801 LR: 0.000053 Step: 259700 Total Loss: 0.0383 Recon Loss: 0.0271
[03/31 12:13:58 TiTok]: Data (t): 0.0031, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000053 Step: 259800 Total Loss: 0.0375 Recon Loss: 0.0267
[03/31 12:14:56 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000053 Step: 259900 Total Loss: 0.0355 Recon Loss: 0.0256
[03/31 12:15:53 TiTok]: Data (t): 0.0033, 56.59/s/gpu Batch (t): 0.6361 LR: 0.000053 Step: 260000 Total Loss: 0.0393 Recon Loss: 0.0282
[03/31 12:15:56 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-260000
[03/31 12:16:09 TiTok]: Reconstructing images...
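Total Loss oscillates between roughly 0.034 and 0.042 from record to record, so the slow trend is easier to see through a trailing average. A small sketch; running_mean is a hypothetical helper, and the seed values are the first five Total Loss readings after the checkpoint above:

    def running_mean(xs, k=3):
        # Trailing-window average; k=50 would span 5,000 training steps
        # at this log's 100-step logging interval.
        out = []
        for i in range(len(xs)):
            window = xs[max(0, i - k + 1):i + 1]
            out.append(sum(window) / len(window))
        return out

    # First Total Loss readings after step 260000, copied from the records below.
    totals = [0.0399, 0.0388, 0.0402, 0.0369, 0.0383]
    print(running_mean(totals))  # last trailing mean is about 0.0385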
[03/31 12:17:08 TiTok]: Data (t): 0.0034, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000053 Step: 260100 Total Loss: 0.0399 Recon Loss: 0.0279
[03/31 12:18:06 TiTok]: Data (t): 0.0034, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000053 Step: 260200 Total Loss: 0.0388 Recon Loss: 0.0286
[03/31 12:19:04 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000053 Step: 260300 Total Loss: 0.0402 Recon Loss: 0.0291
[03/31 12:20:02 TiTok]: Data (t): 0.0034, 62.19/s/gpu Batch (t): 0.5788 LR: 0.000053 Step: 260400 Total Loss: 0.0369 Recon Loss: 0.0271
[03/31 12:20:59 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000053 Step: 260500 Total Loss: 0.0383 Recon Loss: 0.0269
[03/31 12:21:57 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000053 Step: 260600 Total Loss: 0.0404 Recon Loss: 0.0288
[03/31 12:22:55 TiTok]: Data (t): 0.0033, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000053 Step: 260700 Total Loss: 0.0385 Recon Loss: 0.0280
[03/31 12:23:52 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000053 Step: 260800 Total Loss: 0.0391 Recon Loss: 0.0285
[03/31 12:24:50 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000053 Step: 260900 Total Loss: 0.0381 Recon Loss: 0.0271
[03/31 12:25:49 TiTok]: Data (t): 0.0033, 51.96/s/gpu Batch (t): 0.6929 LR: 0.000053 Step: 261000 Total Loss: 0.0392 Recon Loss: 0.0278
[03/31 12:26:46 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000053 Step: 261100 Total Loss: 0.0380 Recon Loss: 0.0283
[03/31 12:27:44 TiTok]: Data (t): 0.0052, 61.30/s/gpu Batch (t): 0.5873 LR: 0.000053 Step: 261200 Total Loss: 0.0386 Recon Loss: 0.0277
[03/31 12:28:42 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000053 Step: 261300 Total Loss: 0.0384 Recon Loss: 0.0280
[03/31 12:29:40 TiTok]: Data (t): 0.0034, 61.96/s/gpu Batch (t): 0.5811 LR: 0.000053 Step: 261400 Total Loss: 0.0359 Recon Loss: 0.0260
[03/31 12:30:38 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000052 Step: 261500 Total Loss: 0.0370 Recon Loss: 0.0265
[03/31 12:31:35 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000052 Step: 261600 Total Loss: 0.0378 Recon Loss: 0.0277
[03/31 12:32:33 TiTok]: Data (t): 0.0034, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000052 Step: 261700 Total Loss: 0.0371 Recon Loss: 0.0277
[03/31 12:33:31 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000052 Step: 261800 Total Loss: 0.0355 Recon Loss: 0.0269
[03/31 12:34:29 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000052 Step: 261900 Total Loss: 0.0404 Recon Loss: 0.0278
[03/31 12:35:26 TiTok]: Data (t): 0.0032, 56.70/s/gpu Batch (t): 0.6349 LR: 0.000052 Step: 262000 Total Loss: 0.0389 Recon Loss: 0.0273
[03/31 12:36:24 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000052 Step: 262100 Total Loss: 0.0415 Recon Loss: 0.0301
[03/31 12:37:22 TiTok]: Data (t): 0.0033, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000052 Step: 262200 Total Loss: 0.0370 Recon Loss: 0.0278
[03/31 12:38:22 TiTok]: Data (t): 0.0033, 59.09/s/gpu Batch (t): 0.6092 LR: 0.000052 Step: 262300 Total Loss: 0.0380 Recon Loss: 0.0289
[03/31 12:39:20 TiTok]: Data (t): 0.0034, 62.11/s/gpu Batch (t): 0.5796 LR: 0.000052 Step: 262400 Total Loss: 0.0385 Recon Loss: 0.0282
[03/31 12:40:18 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000052 Step: 262500 Total Loss: 0.0380 Recon Loss: 0.0270
[03/31 12:41:16 TiTok]: Data (t): 0.0033, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000052 Step: 262600 Total Loss: 0.0383 Recon Loss: 0.0278
[03/31 12:42:13 TiTok]: Data (t): 0.0034, 62.03/s/gpu Batch (t): 0.5804 LR: 0.000052 Step: 262700 Total Loss: 0.0380 Recon Loss: 0.0274
[03/31 12:43:11 TiTok]: Data (t): 0.0034, 61.96/s/gpu Batch (t): 0.5810 LR: 0.000052 Step: 262800 Total Loss: 0.0378 Recon Loss: 0.0265
[03/31 12:44:09 TiTok]: Data (t): 0.0033, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000052 Step: 262900 Total Loss: 0.0373 Recon Loss: 0.0270
[03/31 12:45:07 TiTok]: Data (t): 0.0034, 56.89/s/gpu Batch (t): 0.6328 LR: 0.000052 Step: 263000 Total Loss: 0.0375 Recon Loss: 0.0273
[03/31 12:46:04 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000052 Step: 263100 Total Loss: 0.0385 Recon Loss: 0.0277
[03/31 12:47:02 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000052 Step: 263200 Total Loss: 0.0391 Recon Loss: 0.0275
[03/31 12:48:00 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000052 Step: 263300 Total Loss: 0.0370 Recon Loss: 0.0274
[03/31 12:48:58 TiTok]: Data (t): 0.0034, 62.34/s/gpu Batch (t): 0.5774 LR: 0.000052 Step: 263400 Total Loss: 0.0397 Recon Loss: 0.0286
[03/31 12:49:56 TiTok]: Data (t): 0.0034, 62.61/s/gpu Batch (t): 0.5749 LR: 0.000052 Step: 263500 Total Loss: 0.0365 Recon Loss: 0.0259
[03/31 12:50:53 TiTok]: Data (t): 0.0033, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000052 Step: 263600 Total Loss: 0.0382 Recon Loss: 0.0262
[03/31 12:51:51 TiTok]: Data (t): 0.0032, 61.68/s/gpu Batch (t): 0.5837 LR: 0.000052 Step: 263700 Total Loss: 0.0391 Recon Loss: 0.0270
[03/31 12:52:49 TiTok]: Data (t): 0.0035, 54.76/s/gpu Batch (t): 0.6574 LR: 0.000052 Step: 263800 Total Loss: 0.0409 Recon Loss: 0.0292
[03/31 12:53:46 TiTok]: Data (t): 0.0034, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000052 Step: 263900 Total Loss: 0.0401 Recon Loss: 0.0284
[03/31 12:54:44 TiTok]: Data (t): 0.0032, 56.69/s/gpu Batch (t): 0.6350 LR: 0.000052 Step: 264000 Total Loss: 0.0375 Recon Loss: 0.0267
[03/31 12:55:42 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000052 Step: 264100 Total Loss: 0.0364 Recon Loss: 0.0266
[03/31 12:56:39 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000052 Step: 264200 Total Loss: 0.0358 Recon Loss: 0.0261
[03/31 12:57:37 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000052 Step: 264300 Total Loss: 0.0383 Recon Loss: 0.0288
[03/31 12:58:35 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000052 Step: 264400 Total Loss: 0.0391 Recon Loss: 0.0283
[03/31 12:59:32 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000052 Step: 264500 Total Loss: 0.0374 Recon Loss: 0.0276
[03/31 13:00:30 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000052 Step: 264600 Total Loss: 0.0369 Recon Loss: 0.0261
[03/31 13:01:28 TiTok]: Data (t): 0.0034, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000052 Step: 264700 Total Loss: 0.0374 Recon Loss: 0.0279
[03/31 13:02:25 TiTok]: Data (t): 0.0034, 61.90/s/gpu Batch (t): 0.5816 LR: 0.000052 Step: 264800 Total Loss: 0.0357 Recon Loss: 0.0252
[03/31 13:03:23 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000052 Step: 264900 Total Loss: 0.0343 Recon Loss: 0.0269
[03/31 13:04:21 TiTok]: Data (t): 0.0032, 56.96/s/gpu Batch (t): 0.6320 LR: 0.000051 Step: 265000 Total Loss: 0.0364 Recon Loss: 0.0268
[03/31 13:05:19 TiTok]: Data (t): 0.0034, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000051 Step: 265100 Total Loss: 0.0393 Recon Loss: 0.0294
[03/31 13:06:16 TiTok]: Data (t): 0.0032, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000051 Step: 265200 Total Loss: 0.0390 Recon Loss: 0.0285
[03/31 13:07:14 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000051 Step: 265300 Total Loss: 0.0377 Recon Loss: 0.0263
[03/31 13:08:11 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000051 Step: 265400 Total Loss: 0.0389 Recon Loss: 0.0275
[03/31 13:09:09 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000051 Step: 265500 Total Loss: 0.0362 Recon Loss: 0.0260
[03/31 13:10:07 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000051 Step: 265600 Total Loss: 0.0412 Recon Loss: 0.0297
[03/31 13:11:05 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000051 Step: 265700 Total Loss: 0.0370 Recon Loss: 0.0269
[03/31 13:12:02 TiTok]: Data (t): 0.0034, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000051 Step: 265800 Total Loss: 0.0378 Recon Loss: 0.0268
[03/31 13:13:00 TiTok]: Data (t): 0.0034, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000051 Step: 265900 Total Loss: 0.0373 Recon Loss: 0.0276
[03/31 13:13:58 TiTok]: Data (t): 0.0032, 56.80/s/gpu Batch (t): 0.6338 LR: 0.000051 Step: 266000 Total Loss: 0.0371 Recon Loss: 0.0272
[03/31 13:14:56 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000051 Step: 266100 Total Loss: 0.0360 Recon Loss: 0.0264
[03/31 13:15:54 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000051 Step: 266200 Total Loss: 0.0406 Recon Loss: 0.0280
[03/31 13:16:52 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000051 Step: 266300 Total Loss: 0.0376 Recon Loss: 0.0284
[03/31 13:17:49 TiTok]: Data (t): 0.0034, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000051 Step: 266400 Total Loss: 0.0366 Recon Loss: 0.0263
[03/31 13:18:47 TiTok]: Data (t): 0.0034, 59.18/s/gpu Batch (t): 0.6084 LR: 0.000051 Step: 266500 Total Loss: 0.0397 Recon Loss: 0.0278
[03/31 13:19:45 TiTok]: Data (t): 0.0033, 61.59/s/gpu Batch (t): 0.5845 LR: 0.000051 Step: 266600 Total Loss: 0.0385 Recon Loss: 0.0285
[03/31 13:20:43 TiTok]: Data (t): 0.0032, 62.87/s/gpu Batch (t): 0.5726 LR: 0.000051 Step: 266700 Total Loss: 0.0410 Recon Loss: 0.0282
[03/31 13:21:42 TiTok]: Data (t): 0.0032, 61.89/s/gpu Batch (t): 0.5816 LR: 0.000051 Step: 266800 Total Loss: 0.0383 Recon Loss: 0.0290
[03/31 13:22:41 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000051 Step: 266900 Total Loss: 0.0355 Recon Loss: 0.0259
[03/31 13:23:39 TiTok]: Data (t): 0.0033, 56.44/s/gpu Batch (t): 0.6379 LR: 0.000051 Step: 267000 Total Loss: 0.0386 Recon Loss: 0.0279
[03/31 13:24:37 TiTok]: Data (t): 0.0032, 62.19/s/gpu Batch (t): 0.5788 LR: 0.000051 Step: 267100 Total Loss: 0.0419 Recon Loss: 0.0282
[03/31 13:25:35 TiTok]: Data (t): 0.0033, 61.34/s/gpu Batch (t): 0.5869 LR: 0.000051 Step: 267200 Total Loss: 0.0383 Recon Loss: 0.0278
[03/31 13:26:33 TiTok]: Data (t): 0.0033, 61.84/s/gpu Batch (t): 0.5822 LR: 0.000051 Step: 267300 Total Loss: 0.0369 Recon Loss: 0.0276
[03/31 13:27:31 TiTok]: Data (t): 0.0032, 58.17/s/gpu Batch (t): 0.6188 LR: 0.000051 Step: 267400 Total Loss: 0.0391 Recon Loss: 0.0279
[03/31 13:28:29 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000051 Step: 267500 Total Loss: 0.0408 Recon Loss: 0.0297
[03/31 13:29:27 TiTok]: Data (t): 0.0032, 61.47/s/gpu Batch (t): 0.5857 LR: 0.000051 Step: 267600 Total Loss: 0.0379 Recon Loss: 0.0262
[03/31 13:30:25 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000051 Step: 267700 Total Loss: 0.0340 Recon Loss: 0.0268
[03/31 13:31:23 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000051 Step: 267800 Total Loss: 0.0373 Recon Loss: 0.0276
[03/31 13:32:21 TiTok]: Data (t): 0.0033, 61.53/s/gpu Batch (t): 0.5851 LR: 0.000051 Step: 267900 Total Loss: 0.0380 Recon Loss: 0.0279
[03/31 13:33:19 TiTok]: Data (t): 0.0033, 56.94/s/gpu Batch (t): 0.6323 LR: 0.000051 Step: 268000 Total Loss: 0.0385 Recon Loss: 0.0282
[03/31 13:34:17 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000051 Step: 268100 Total Loss: 0.0369 Recon Loss: 0.0267
[03/31 13:35:14 TiTok]: Data (t): 0.0032, 62.67/s/gpu Batch (t): 0.5744 LR: 0.000051 Step: 268200 Total Loss: 0.0385 Recon Loss: 0.0273
[03/31 13:36:12 TiTok]: Data (t): 0.0031, 62.66/s/gpu Batch (t): 0.5745 LR: 0.000051 Step: 268300 Total Loss: 0.0384 Recon Loss: 0.0270
[03/31 13:37:10 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000051 Step: 268400 Total Loss: 0.0397 Recon Loss: 0.0274
[03/31 13:38:07 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000050 Step: 268500 Total Loss: 0.0394 Recon Loss: 0.0281
[03/31 13:39:05 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000050 Step: 268600 Total Loss: 0.0379 Recon Loss: 0.0282
[03/31 13:40:02 TiTok]: Data (t): 0.0031, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000050 Step: 268700 Total Loss: 0.0380 Recon Loss: 0.0285
[03/31 13:41:00 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000050 Step: 268800 Total Loss: 0.0398 Recon Loss: 0.0283
[03/31 13:41:58 TiTok]: Data (t): 0.0031, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000050 Step: 268900 Total Loss: 0.0365 Recon Loss: 0.0266
[03/31 13:42:55 TiTok]: Data (t): 0.0032, 56.60/s/gpu Batch (t): 0.6360 LR: 0.000050 Step: 269000 Total Loss: 0.0382 Recon Loss: 0.0270
[03/31 13:43:53 TiTok]: Data (t): 0.0031, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000050 Step: 269100 Total Loss: 0.0381 Recon Loss: 0.0275
[03/31 13:44:51 TiTok]: Data (t): 0.0032, 62.66/s/gpu Batch (t): 0.5745 LR: 0.000050 Step: 269200 Total Loss: 0.0368 Recon Loss: 0.0263
[03/31 13:45:49 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5782 LR: 0.000050 Step: 269300 Total Loss: 0.0394 Recon Loss: 0.0280
[03/31 13:46:47 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000050 Step: 269400 Total Loss: 0.0361 Recon Loss: 0.0268
[03/31 13:47:45 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000050 Step: 269500 Total Loss: 0.0374 Recon Loss: 0.0290
[03/31 13:48:43 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000050 Step: 269600 Total Loss: 0.0398 Recon Loss: 0.0298
[03/31 13:49:40 TiTok]: Data (t): 0.0033, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000050 Step: 269700 Total Loss: 0.0339 Recon Loss: 0.0256
[03/31 13:50:38 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000050 Step: 269800 Total Loss: 0.0401 Recon Loss: 0.0284
[03/31 13:51:36 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000050 Step: 269900 Total Loss: 0.0375 Recon Loss: 0.0270
[03/31 13:52:33 TiTok]: Data (t): 0.0032, 56.66/s/gpu Batch (t): 0.6354 LR: 0.000050 Step: 270000 Total Loss: 0.0399 Recon Loss: 0.0288
[03/31 13:52:35 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-270000
[03/31 13:52:49 TiTok]: Reconstructing images...
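The throughput and step-time fields in these records are mutually consistent: a typical record logs 62.50 images/s/gpu at 0.5760 s per batch, implying a per-GPU batch of 62.50 × 0.576 ≈ 36 images, and the 10,000 steps between the checkpoints above take 10,000 × ~0.577 s ≈ 96 minutes, matching the saves at 12:15 and 13:52. A quick check, with the values copied from the records:

    rate = 62.50              # images/s/gpu, from a typical record above
    batch_t = 0.5760          # seconds per step, same record
    print(rate * batch_t)     # ~36.0 images -> per-GPU batch size

    steps_per_ckpt = 10_000   # checkpoints above are 10,000 steps apart
    print(steps_per_ckpt * 0.577 / 60)  # ~96 minutes between saves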
[03/31 13:53:48 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000050 Step: 270100 Total Loss: 0.0370 Recon Loss: 0.0274
[03/31 13:54:46 TiTok]: Data (t): 0.0031, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000050 Step: 270200 Total Loss: 0.0374 Recon Loss: 0.0260
[03/31 13:55:44 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000050 Step: 270300 Total Loss: 0.0360 Recon Loss: 0.0254
[03/31 13:56:42 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000050 Step: 270400 Total Loss: 0.0346 Recon Loss: 0.0281
[03/31 13:57:39 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000050 Step: 270500 Total Loss: 0.0365 Recon Loss: 0.0272
[03/31 13:58:37 TiTok]: Data (t): 0.0031, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000050 Step: 270600 Total Loss: 0.0399 Recon Loss: 0.0280
[03/31 13:59:35 TiTok]: Data (t): 0.0031, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000050 Step: 270700 Total Loss: 0.0371 Recon Loss: 0.0294
[03/31 14:00:33 TiTok]: Data (t): 0.0032, 62.18/s/gpu Batch (t): 0.5789 LR: 0.000050 Step: 270800 Total Loss: 0.0382 Recon Loss: 0.0288
[03/31 14:01:30 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000050 Step: 270900 Total Loss: 0.0387 Recon Loss: 0.0278
[03/31 14:02:28 TiTok]: Data (t): 0.0032, 50.87/s/gpu Batch (t): 0.7078 LR: 0.000050 Step: 271000 Total Loss: 0.0379 Recon Loss: 0.0273
[03/31 14:03:26 TiTok]: Data (t): 0.0031, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000050 Step: 271100 Total Loss: 0.0403 Recon Loss: 0.0278
[03/31 14:04:26 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000050 Step: 271200 Total Loss: 0.0376 Recon Loss: 0.0274
[03/31 14:05:24 TiTok]: Data (t): 0.0031, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000050 Step: 271300 Total Loss: 0.0399 Recon Loss: 0.0283
[03/31 14:06:22 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000050 Step: 271400 Total Loss: 0.0389 Recon Loss: 0.0276
[03/31 14:07:20 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000050 Step: 271500 Total Loss: 0.0379 Recon Loss: 0.0273
[03/31 14:08:18 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000050 Step: 271600 Total Loss: 0.0388 Recon Loss: 0.0284
[03/31 14:09:15 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000050 Step: 271700 Total Loss: 0.0377 Recon Loss: 0.0283
[03/31 14:10:13 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000050 Step: 271800 Total Loss: 0.0373 Recon Loss: 0.0266
[03/31 14:11:11 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000050 Step: 271900 Total Loss: 0.0353 Recon Loss: 0.0262
[03/31 14:12:09 TiTok]: Data (t): 0.0033, 56.51/s/gpu Batch (t): 0.6371 LR: 0.000050 Step: 272000 Total Loss: 0.0360 Recon Loss: 0.0283
[03/31 14:13:07 TiTok]: Data (t): 0.0033, 61.99/s/gpu Batch (t): 0.5807 LR: 0.000049 Step: 272100 Total Loss: 0.0387 Recon Loss: 0.0287
[03/31 14:14:05 TiTok]: Data (t): 0.0033, 61.68/s/gpu Batch (t): 0.5837 LR: 0.000049 Step: 272200 Total Loss: 0.0360 Recon Loss: 0.0260
[03/31 14:15:02 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000049 Step: 272300 Total Loss: 0.0389 Recon Loss: 0.0279
[03/31 14:16:01 TiTok]: Data (t): 0.0033, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000049 Step: 272400 Total Loss: 0.0373 Recon Loss: 0.0279
[03/31 14:16:59 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000049 Step: 272500 Total Loss: 0.0366 Recon Loss: 0.0274
[03/31 14:17:57 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000049 Step: 272600 Total Loss: 0.0366 Recon Loss: 0.0260
[03/31 14:18:55 TiTok]: Data (t): 0.0032, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000049 Step: 272700 Total Loss: 0.0357 Recon Loss: 0.0267
[03/31 14:19:53 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000049 Step: 272800 Total Loss: 0.0383 Recon Loss: 0.0267
[03/31 14:20:51 TiTok]: Data (t): 0.0033, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000049 Step: 272900 Total Loss: 0.0366 Recon Loss: 0.0259
[03/31 14:21:49 TiTok]: Data (t): 0.0033, 56.57/s/gpu Batch (t): 0.6364 LR: 0.000049 Step: 273000 Total Loss: 0.0372 Recon Loss: 0.0268
[03/31 14:22:46 TiTok]: Data (t): 0.0031, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000049 Step: 273100 Total Loss: 0.0395 Recon Loss: 0.0278
[03/31 14:23:44 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000049 Step: 273200 Total Loss: 0.0383 Recon Loss: 0.0277
[03/31 14:24:42 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000049 Step: 273300 Total Loss: 0.0366 Recon Loss: 0.0273
[03/31 14:25:40 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000049 Step: 273400 Total Loss: 0.0390 Recon Loss: 0.0280
[03/31 14:26:37 TiTok]: Data (t): 0.0033, 62.04/s/gpu Batch (t): 0.5803 LR: 0.000049 Step: 273500 Total Loss: 0.0401 Recon Loss: 0.0294
[03/31 14:27:36 TiTok]: Data (t): 0.0033, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000049 Step: 273600 Total Loss: 0.0361 Recon Loss: 0.0271
[03/31 14:28:33 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000049 Step: 273700 Total Loss: 0.0358 Recon Loss: 0.0275
[03/31 14:29:31 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000049 Step: 273800 Total Loss: 0.0405 Recon Loss: 0.0307
[03/31 14:30:29 TiTok]: Data (t): 0.0033, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000049 Step: 273900 Total Loss: 0.0369 Recon Loss: 0.0271
[03/31 14:31:27 TiTok]: Data (t): 0.0032, 56.70/s/gpu Batch (t): 0.6349 LR: 0.000049 Step: 274000 Total Loss: 0.0375 Recon Loss: 0.0259
[03/31 14:32:25 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000049 Step: 274100 Total Loss: 0.0410 Recon Loss: 0.0281
[03/31 14:33:23 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000049 Step: 274200 Total Loss: 0.0385 Recon Loss: 0.0267
[03/31 14:34:20 TiTok]: Data (t): 0.0033, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000049 Step: 274300 Total Loss: 0.0360 Recon Loss: 0.0262
[03/31 14:35:18 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000049 Step: 274400 Total Loss: 0.0395 Recon Loss: 0.0288
[03/31 14:36:16 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000049 Step: 274500 Total Loss: 0.0350 Recon Loss: 0.0271
[03/31 14:37:14 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5780 LR: 0.000049 Step: 274600 Total Loss: 0.0399 Recon Loss: 0.0285
[03/31 14:38:13 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000049 Step: 274700 Total Loss: 0.0375 Recon Loss: 0.0271
[03/31 14:39:11 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000049 Step: 274800 Total Loss: 0.0386 Recon Loss: 0.0286
[03/31 14:40:09 TiTok]: Data (t): 0.0034, 61.40/s/gpu Batch (t): 0.5863 LR: 0.000049 Step: 274900 Total Loss: 0.0398 Recon Loss: 0.0290
[03/31 14:41:07 TiTok]: Data (t): 0.0032, 56.34/s/gpu Batch (t): 0.6390 LR: 0.000049 Step: 275000 Total Loss: 0.0389 Recon Loss: 0.0275
[03/31 14:42:05 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000049 Step: 275100 Total Loss: 0.0381 Recon Loss: 0.0270
[03/31 14:43:02 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5778 LR: 0.000049 Step: 275200 Total Loss: 0.0373 Recon Loss: 0.0277
[03/31 14:44:00 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000049 Step: 275300 Total Loss: 0.0357 Recon Loss: 0.0275
[03/31 14:44:58 TiTok]: Data (t): 0.0033, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000049 Step: 275400 Total Loss: 0.0375 Recon Loss: 0.0271
[03/31 14:45:56 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000049 Step: 275500 Total Loss: 0.0394 Recon Loss: 0.0289
[03/31 14:46:54 TiTok]: Data (t): 0.0032, 60.40/s/gpu Batch (t): 0.5960 LR: 0.000048 Step: 275600 Total Loss: 0.0385 Recon Loss: 0.0272
[03/31 14:47:53 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5769 LR: 0.000048 Step: 275700 Total Loss: 0.0369 Recon Loss: 0.0275
[03/31 14:48:51 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000048 Step: 275800 Total Loss: 0.0378 Recon Loss: 0.0273
[03/31 14:49:49 TiTok]: Data (t): 0.0031, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000048 Step: 275900 Total Loss: 0.0397 Recon Loss: 0.0284
[03/31 14:50:47 TiTok]: Data (t): 0.0033, 56.73/s/gpu Batch (t): 0.6346 LR: 0.000048 Step: 276000 Total Loss: 0.0386 Recon Loss: 0.0281
[03/31 14:51:44 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000048 Step: 276100 Total Loss: 0.0401 Recon Loss: 0.0292
[03/31 14:52:42 TiTok]: Data (t): 0.0034, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000048 Step: 276200 Total Loss: 0.0369 Recon Loss: 0.0288
[03/31 14:53:40 TiTok]: Data (t): 0.0034, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000048 Step: 276300 Total Loss: 0.0382 Recon Loss: 0.0262
[03/31 14:54:38 TiTok]: Data (t): 0.0032, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000048 Step: 276400 Total Loss: 0.0373 Recon Loss: 0.0262
[03/31 14:55:36 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000048 Step: 276500 Total Loss: 0.0390 Recon Loss: 0.0282
[03/31 14:56:34 TiTok]: Data (t): 0.0034, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000048 Step: 276600 Total Loss: 0.0354 Recon Loss: 0.0273
[03/31 14:57:31 TiTok]: Data (t): 0.0034, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000048 Step: 276700 Total Loss: 0.0382 Recon Loss: 0.0282
[03/31 14:58:29 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000048 Step: 276800 Total Loss: 0.0398 Recon Loss: 0.0288
[03/31 14:59:28 TiTok]: Data (t): 0.0033, 40.81/s/gpu Batch (t): 0.8821 LR: 0.000048 Step: 276900 Total Loss: 0.0381 Recon Loss: 0.0280
[03/31 15:00:26 TiTok]: Data (t): 0.0035, 56.39/s/gpu Batch (t): 0.6384 LR: 0.000048 Step: 277000 Total Loss: 0.0412 Recon Loss: 0.0282
[03/31 15:01:23 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000048 Step: 277100 Total Loss: 0.0375 Recon Loss: 0.0265
[03/31 15:02:21 TiTok]: Data (t): 0.0033, 62.13/s/gpu Batch (t): 0.5795 LR: 0.000048 Step: 277200 Total Loss: 0.0357 Recon Loss: 0.0282
[03/31 15:03:19 TiTok]: Data (t): 0.0033, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000048 Step: 277300 Total Loss: 0.0379 Recon Loss: 0.0268
[03/31 15:04:17 TiTok]: Data (t): 0.0035, 61.95/s/gpu Batch (t): 0.5811 LR: 0.000048 Step: 277400 Total Loss: 0.0366 Recon Loss: 0.0275
[03/31 15:05:14 TiTok]: Data (t): 0.0034, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000048 Step: 277500 Total Loss: 0.0395 Recon Loss: 0.0260
[03/31 15:06:12 TiTok]: Data (t): 0.0031, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000048 Step: 277600 Total Loss: 0.0407 Recon Loss: 0.0291
[03/31 15:07:10 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000048 Step: 277700 Total Loss: 0.0359 Recon Loss: 0.0276
[03/31 15:08:07 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000048 Step: 277800 Total Loss: 0.0370 Recon Loss: 0.0281
[03/31 15:09:05 TiTok]: Data (t): 0.0032, 61.41/s/gpu Batch (t): 0.5863 LR: 0.000048 Step: 277900 Total Loss: 0.0381 Recon Loss: 0.0268
[03/31 15:10:03 TiTok]: Data (t): 0.0032, 56.69/s/gpu Batch (t): 0.6350 LR: 0.000048 Step: 278000 Total Loss: 0.0379 Recon Loss: 0.0279
[03/31 15:11:01 TiTok]: Data (t): 0.0033, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000048 Step: 278100 Total Loss: 0.0352 Recon Loss: 0.0267
[03/31 15:11:59 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000048 Step: 278200 Total Loss: 0.0394 Recon Loss: 0.0298
[03/31 15:12:56 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000048 Step: 278300 Total Loss: 0.0359 Recon Loss: 0.0263
[03/31 15:13:54 TiTok]: Data (t): 0.0035, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000048 Step: 278400 Total Loss: 0.0369 Recon Loss: 0.0261
[03/31 15:14:52 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000048 Step: 278500 Total Loss: 0.0377 Recon Loss: 0.0265
[03/31 15:15:49 TiTok]: Data (t): 0.0054, 62.20/s/gpu Batch (t): 0.5788 LR: 0.000048 Step: 278600 Total Loss: 0.0380 Recon Loss: 0.0278
[03/31 15:16:47 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000048 Step: 278700 Total Loss: 0.0402 Recon Loss: 0.0276
[03/31 15:17:45 TiTok]: Data (t): 0.0032, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000048 Step: 278800 Total Loss: 0.0413 Recon Loss: 0.0301
[03/31 15:18:43 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000048 Step: 278900 Total Loss: 0.0348 Recon Loss: 0.0267
[03/31 15:19:41 TiTok]: Data (t): 0.0033, 56.89/s/gpu Batch (t): 0.6328 LR: 0.000048 Step: 279000 Total Loss: 0.0383 Recon Loss: 0.0280
[03/31 15:20:39 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000047 Step: 279100 Total Loss: 0.0376 Recon Loss: 0.0277
[03/31 15:21:38 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000047 Step: 279200 Total Loss: 0.0383 Recon Loss: 0.0269
[03/31 15:22:35 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000047 Step: 279300 Total Loss: 0.0376 Recon Loss: 0.0271
[03/31 15:23:33 TiTok]: Data (t): 0.0034, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000047 Step: 279400 Total Loss: 0.0417 Recon Loss: 0.0288
[03/31 15:24:31 TiTok]: Data (t): 0.0034, 61.95/s/gpu Batch (t): 0.5811 LR: 0.000047 Step: 279500 Total Loss: 0.0387 Recon Loss: 0.0283
[03/31 15:25:29 TiTok]: Data (t): 0.0034, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000047 Step: 279600 Total Loss: 0.0390 Recon Loss: 0.0280
[03/31 15:26:26 TiTok]: Data (t): 0.0033, 62.05/s/gpu Batch (t): 0.5802 LR: 0.000047 Step: 279700 Total Loss: 0.0379 Recon Loss: 0.0260
[03/31 15:27:24 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000047 Step: 279800 Total Loss: 0.0373 Recon Loss: 0.0270
[03/31 15:28:22 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000047 Step: 279900 Total Loss: 0.0374 Recon Loss: 0.0265
[03/31 15:29:20 TiTok]: Data (t): 0.0032, 56.62/s/gpu Batch (t): 0.6358 LR: 0.000047 Step: 280000 Total Loss: 0.0404 Recon Loss: 0.0289
[03/31 15:29:22 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-280000
[03/31 15:29:35 TiTok]: Reconstructing images...
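The logged learning rate decays smoothly across this section (0.000058 near step 242,000 down to 0.000047 here), consistent with the cosine schedule in the run config at the top of this log. A sketch of the closed form under that reading; the peak (1e-4), end (1e-5), warmup (5,000) and total steps (500,000) are taken from that config, and the exact warmup shape is an assumption:

    import math

    def cosine_lr(step, peak=1e-4, end=1e-5, warmup=5_000, total=500_000):
        # Linear warmup, then cosine decay from `peak` to `end`;
        # parameters copied from the run config logged at the start of this file.
        if step < warmup:
            return peak * step / warmup
        t = (step - warmup) / (total - warmup)
        return end + 0.5 * (peak - end) * (1 + math.cos(math.pi * t))

    print(f"{cosine_lr(280000):.6f}")  # 0.000047, matching the record at step 280000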
[03/31 15:30:35 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000047 Step: 280100 Total Loss: 0.0370 Recon Loss: 0.0268
[03/31 15:31:33 TiTok]: Data (t): 0.0034, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000047 Step: 280200 Total Loss: 0.0364 Recon Loss: 0.0265
[03/31 15:32:31 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000047 Step: 280300 Total Loss: 0.0378 Recon Loss: 0.0274
[03/31 15:33:29 TiTok]: Data (t): 0.0032, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000047 Step: 280400 Total Loss: 0.0395 Recon Loss: 0.0287
[03/31 15:34:27 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000047 Step: 280500 Total Loss: 0.0383 Recon Loss: 0.0281
[03/31 15:35:25 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000047 Step: 280600 Total Loss: 0.0378 Recon Loss: 0.0273
[03/31 15:36:23 TiTok]: Data (t): 0.0033, 62.33/s/gpu Batch (t): 0.5775 LR: 0.000047 Step: 280700 Total Loss: 0.0393 Recon Loss: 0.0283
[03/31 15:37:21 TiTok]: Data (t): 0.0034, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000047 Step: 280800 Total Loss: 0.0379 Recon Loss: 0.0285
[03/31 15:38:19 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000047 Step: 280900 Total Loss: 0.0377 Recon Loss: 0.0271
[03/31 15:39:17 TiTok]: Data (t): 0.0032, 51.80/s/gpu Batch (t): 0.6950 LR: 0.000047 Step: 281000 Total Loss: 0.0379 Recon Loss: 0.0281
[03/31 15:40:15 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000047 Step: 281100 Total Loss: 0.0367 Recon Loss: 0.0257
[03/31 15:41:12 TiTok]: Data (t): 0.0034, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000047 Step: 281200 Total Loss: 0.0365 Recon Loss: 0.0270
[03/31 15:42:10 TiTok]: Data (t): 0.0035, 61.70/s/gpu Batch (t): 0.5835 LR: 0.000047 Step: 281300 Total Loss: 0.0366 Recon Loss: 0.0261
[03/31 15:43:08 TiTok]: Data (t): 0.0035, 62.04/s/gpu Batch (t): 0.5802 LR: 0.000047 Step: 281400 Total Loss: 0.0360 Recon Loss: 0.0255
[03/31 15:44:08 TiTok]: Data (t): 0.0034, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000047 Step: 281500 Total Loss: 0.0375 Recon Loss: 0.0278
[03/31 15:45:06 TiTok]: Data (t): 0.0033, 61.79/s/gpu Batch (t): 0.5826 LR: 0.000047 Step: 281600 Total Loss: 0.0409 Recon Loss: 0.0299
[03/31 15:46:04 TiTok]: Data (t): 0.0033, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000047 Step: 281700 Total Loss: 0.0424 Recon Loss: 0.0303
[03/31 15:47:01 TiTok]: Data (t): 0.0034, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000047 Step: 281800 Total Loss: 0.0396 Recon Loss: 0.0286
[03/31 15:47:59 TiTok]: Data (t): 0.0033, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000047 Step: 281900 Total Loss: 0.0397 Recon Loss: 0.0294
[03/31 15:48:57 TiTok]: Data (t): 0.0032, 56.66/s/gpu Batch (t): 0.6353 LR: 0.000047 Step: 282000 Total Loss: 0.0382 Recon Loss: 0.0283
[03/31 15:49:55 TiTok]: Data (t): 0.0033, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000047 Step: 282100 Total Loss: 0.0371 Recon Loss: 0.0275
[03/31 15:50:52 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000047 Step: 282200 Total Loss: 0.0394 Recon Loss: 0.0270
[03/31 15:51:50 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5762 LR: 0.000047 Step: 282300 Total Loss: 0.0385 Recon Loss: 0.0291
[03/31 15:52:48 TiTok]: Data (t): 0.0032, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000047 Step: 282400 Total Loss: 0.0385 Recon Loss: 0.0293
[03/31 15:53:45 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000047 Step: 282500 Total Loss: 0.0349 Recon Loss: 0.0258
[03/31 15:54:43 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000047 Step: 282600 Total Loss: 0.0394 Recon Loss: 0.0278
[03/31 15:55:41 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000046 Step: 282700 Total Loss: 0.0389 Recon Loss: 0.0278
[03/31 15:56:38 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000046 Step: 282800 Total Loss: 0.0376 Recon Loss: 0.0272
[03/31 15:57:36 TiTok]: Data (t): 0.0033, 62.19/s/gpu Batch (t): 0.5788 LR: 0.000046 Step: 282900 Total Loss: 0.0362 Recon Loss: 0.0265
[03/31 15:58:34 TiTok]: Data (t): 0.0033, 56.80/s/gpu Batch (t): 0.6338 LR: 0.000046 Step: 283000 Total Loss: 0.0374 Recon Loss: 0.0272
[03/31 15:59:32 TiTok]: Data (t): 0.0034, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000046 Step: 283100 Total Loss: 0.0405 Recon Loss: 0.0285
[03/31 16:00:30 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000046 Step: 283200 Total Loss: 0.0404 Recon Loss: 0.0283
[03/31 16:01:28 TiTok]: Data (t): 0.0033, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000046 Step: 283300 Total Loss: 0.0396 Recon Loss: 0.0280
[03/31 16:02:26 TiTok]: Data (t): 0.0034, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000046 Step: 283400 Total Loss: 0.0372 Recon Loss: 0.0282
[03/31 16:03:24 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000046 Step: 283500 Total Loss: 0.0386 Recon Loss: 0.0272
[03/31 16:04:21 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000046 Step: 283600 Total Loss: 0.0391 Recon Loss: 0.0280
[03/31 16:05:19 TiTok]: Data (t): 0.0033, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000046 Step: 283700 Total Loss: 0.0392 Recon Loss: 0.0282
[03/31 16:06:18 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000046 Step: 283800 Total Loss: 0.0365 Recon Loss: 0.0267
[03/31 16:07:16 TiTok]: Data (t): 0.0032, 62.66/s/gpu Batch (t): 0.5745 LR: 0.000046 Step: 283900 Total Loss: 0.0386 Recon Loss: 0.0271
[03/31 16:08:14 TiTok]: Data (t): 0.0033, 56.96/s/gpu Batch (t): 0.6320 LR: 0.000046 Step: 284000 Total Loss: 0.0377 Recon Loss: 0.0285
[03/31 16:09:11 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000046 Step: 284100 Total Loss: 0.0373 Recon Loss: 0.0277
[03/31 16:10:09 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000046 Step: 284200 Total Loss: 0.0353 Recon Loss: 0.0254
[03/31 16:11:07 TiTok]: Data (t): 0.0035, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000046 Step: 284300 Total Loss: 0.0400 Recon Loss: 0.0276
[03/31 16:12:04 TiTok]: Data (t): 0.0033, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000046 Step: 284400 Total Loss: 0.0383 Recon Loss: 0.0271
[03/31 16:13:02 TiTok]: Data (t): 0.0034, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000046 Step: 284500 Total Loss: 0.0370 Recon Loss: 0.0261
[03/31 16:14:02 TiTok]: Data (t): 0.0034, 61.94/s/gpu Batch (t): 0.5812 LR: 0.000046 Step: 284600 Total Loss: 0.0364 Recon Loss: 0.0260
[03/31 16:15:00 TiTok]: Data (t): 0.0033, 58.36/s/gpu Batch (t): 0.6168 LR: 0.000046 Step: 284700 Total Loss: 0.0391 Recon Loss: 0.0280
[03/31 16:15:58 TiTok]: Data (t): 0.0034, 61.54/s/gpu Batch (t): 0.5850 LR: 0.000046 Step: 284800 Total Loss: 0.0372 Recon Loss: 0.0261
[03/31 16:16:56 TiTok]: Data (t): 0.0034, 61.40/s/gpu Batch (t): 0.5863 LR: 0.000046 Step: 284900 Total Loss: 0.0381 Recon Loss: 0.0267
[03/31 16:17:54 TiTok]: Data (t): 0.0033, 56.84/s/gpu Batch (t): 0.6333 LR: 0.000046 Step: 285000 Total Loss: 0.0371 Recon Loss: 0.0271
[03/31 16:18:52 TiTok]: Data (t): 0.0034, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000046 Step: 285100 Total Loss: 0.0389 Recon Loss: 0.0291
[03/31 16:19:49 TiTok]: Data (t): 0.0034, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000046 Step: 285200 Total Loss: 0.0368 Recon Loss: 0.0276
[03/31 16:20:47 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000046 Step: 285300 Total Loss: 0.0368 Recon Loss: 0.0260
[03/31 16:21:45 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000046 Step: 285400 Total Loss: 0.0406 Recon Loss: 0.0283
[03/31 16:22:42 TiTok]: Data (t): 0.0034, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000046 Step: 285500 Total Loss: 0.0402 Recon Loss: 0.0282
[03/31 16:23:40 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000046 Step: 285600 Total Loss: 0.0365 Recon Loss: 0.0270
[03/31 16:24:38 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000046 Step: 285700 Total Loss: 0.0374 Recon Loss: 0.0266
[03/31 16:25:36 TiTok]: Data (t): 0.0034, 62.40/s/gpu Batch (t): 0.5770 LR: 0.000046 Step: 285800 Total Loss: 0.0359 Recon Loss: 0.0256
[03/31 16:26:33 TiTok]: Data (t): 0.0033, 59.59/s/gpu Batch (t): 0.6041 LR: 0.000046 Step: 285900 Total Loss: 0.0364 Recon Loss: 0.0261
[03/31 16:27:32 TiTok]: Data (t): 0.0034, 56.34/s/gpu Batch (t): 0.6389 LR: 0.000046 Step: 286000 Total Loss: 0.0378 Recon Loss: 0.0281
[03/31 16:28:30 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000046 Step: 286100 Total Loss: 0.0359 Recon Loss: 0.0275
[03/31 16:29:28 TiTok]: Data (t): 0.0033, 62.08/s/gpu Batch (t): 0.5799 LR: 0.000046 Step: 286200 Total Loss: 0.0403 Recon Loss: 0.0292
[03/31 16:30:26 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000045 Step: 286300 Total Loss: 0.0381 Recon Loss: 0.0273
[03/31 16:31:24 TiTok]: Data (t): 0.0034, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000045 Step: 286400 Total Loss: 0.0382 Recon Loss: 0.0281
[03/31 16:32:21 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000045 Step: 286500 Total Loss: 0.0365 Recon Loss: 0.0271
[03/31 16:33:19 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000045 Step: 286600 Total Loss: 0.0370 Recon Loss: 0.0265
[03/31 16:34:16 TiTok]: Data (t): 0.0034, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000045 Step: 286700 Total Loss: 0.0380 Recon Loss: 0.0290
[03/31 16:35:14 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000045 Step: 286800 Total Loss: 0.0362 Recon Loss: 0.0259
[03/31 16:36:12 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000045 Step: 286900 Total Loss: 0.0372 Recon Loss: 0.0287
[03/31 16:37:10 TiTok]: Data (t): 0.0033, 56.65/s/gpu Batch (t): 0.6355 LR: 0.000045 Step: 287000 Total Loss: 0.0370 Recon Loss: 0.0277
[03/31 16:38:08 TiTok]: Data (t): 0.0033, 61.76/s/gpu Batch (t): 0.5829 LR: 0.000045 Step: 287100 Total Loss: 0.0383 Recon Loss: 0.0269
[03/31 16:39:05 TiTok]: Data (t): 0.0034, 61.88/s/gpu Batch (t): 0.5817 LR: 0.000045 Step: 287200 Total Loss: 0.0384 Recon Loss: 0.0262
[03/31 16:40:03 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000045 Step: 287300 Total Loss: 0.0396 Recon Loss: 0.0283
[03/31 16:41:01 TiTok]: Data (t): 0.0033, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000045 Step: 287400 Total Loss: 0.0383 Recon Loss: 0.0276
[03/31 16:41:59 TiTok]: Data (t): 0.0032, 62.09/s/gpu Batch (t): 0.5798 LR: 0.000045 Step: 287500 Total Loss: 0.0372 Recon Loss: 0.0254
[03/31 16:42:57 TiTok]: Data (t): 0.0034, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000045 Step: 287600 Total Loss: 0.0358 Recon Loss: 0.0262
[03/31 16:43:55 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000045 Step: 287700 Total Loss: 0.0364 Recon Loss: 0.0272
[03/31 16:44:53 TiTok]: Data (t): 0.0035, 61.62/s/gpu Batch (t): 0.5842 LR: 0.000045 Step: 287800 Total Loss: 0.0380 Recon Loss: 0.0270
62.45/s/gpu Batch (t): 0.5764 LR: 0.000045 Step: 287900 Total Loss: 0.0381 Recon Loss: 0.0288 [03/31 16:46:49 TiTok]: Data (t): 0.0032, 56.66/s/gpu Batch (t): 0.6354 LR: 0.000045 Step: 288000 Total Loss: 0.0400 Recon Loss: 0.0274 [03/31 16:47:46 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000045 Step: 288100 Total Loss: 0.0361 Recon Loss: 0.0274 [03/31 16:48:44 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000045 Step: 288200 Total Loss: 0.0380 Recon Loss: 0.0278 [03/31 16:49:43 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000045 Step: 288300 Total Loss: 0.0366 Recon Loss: 0.0276 [03/31 16:50:40 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000045 Step: 288400 Total Loss: 0.0365 Recon Loss: 0.0259 [03/31 16:51:38 TiTok]: Data (t): 0.0035, 62.18/s/gpu Batch (t): 0.5790 LR: 0.000045 Step: 288500 Total Loss: 0.0356 Recon Loss: 0.0264 [03/31 16:52:36 TiTok]: Data (t): 0.0034, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000045 Step: 288600 Total Loss: 0.0368 Recon Loss: 0.0265 [03/31 16:53:33 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000045 Step: 288700 Total Loss: 0.0401 Recon Loss: 0.0283 [03/31 16:54:31 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000045 Step: 288800 Total Loss: 0.0391 Recon Loss: 0.0292 [03/31 16:55:29 TiTok]: Data (t): 0.0034, 62.26/s/gpu Batch (t): 0.5783 LR: 0.000045 Step: 288900 Total Loss: 0.0389 Recon Loss: 0.0269 [03/31 16:56:28 TiTok]: Data (t): 0.0034, 56.68/s/gpu Batch (t): 0.6352 LR: 0.000045 Step: 289000 Total Loss: 0.0382 Recon Loss: 0.0287 [03/31 16:57:26 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000045 Step: 289100 Total Loss: 0.0375 Recon Loss: 0.0272 [03/31 16:58:24 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000045 Step: 289200 Total Loss: 0.0367 Recon Loss: 0.0255 [03/31 16:59:21 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000045 Step: 289300 Total Loss: 0.0382 Recon Loss: 0.0261 [03/31 17:00:19 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000045 Step: 289400 Total Loss: 0.0418 Recon Loss: 0.0298 [03/31 17:01:16 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5751 LR: 0.000045 Step: 289500 Total Loss: 0.0375 Recon Loss: 0.0279 [03/31 17:02:14 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000045 Step: 289600 Total Loss: 0.0383 Recon Loss: 0.0286 [03/31 17:03:12 TiTok]: Data (t): 0.0034, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000045 Step: 289700 Total Loss: 0.0372 Recon Loss: 0.0272 [03/31 17:04:10 TiTok]: Data (t): 0.0034, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000045 Step: 289800 Total Loss: 0.0380 Recon Loss: 0.0279 [03/31 17:05:07 TiTok]: Data (t): 0.0033, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000044 Step: 289900 Total Loss: 0.0367 Recon Loss: 0.0269 [03/31 17:06:05 TiTok]: Data (t): 0.0033, 56.79/s/gpu Batch (t): 0.6339 LR: 0.000044 Step: 290000 Total Loss: 0.0373 Recon Loss: 0.0273 [03/31 17:06:07 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-290000 [03/31 17:06:21 TiTok]: Reconstructing images... 
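Every progress record in this log follows one fixed template (Data (t), images/s/gpu, Batch (t), LR, Step, Total Loss, Recon Loss), so the loss and LR curves can be recovered from the raw text by pattern matching. A minimal parsing sketch; the regex and the train.log file name are assumptions for illustration, not part of the training code:

    import re

    # One progress record as it appears in this log, e.g.
    # "Batch (t): 0.5777 LR: 0.000044 Step: 290100 Total Loss: 0.0370 Recon Loss: 0.0252"
    RECORD = re.compile(
        r"Batch \(t\): (?P<batch_t>[\d.]+) "
        r"LR: (?P<lr>[\d.]+) "
        r"Step: (?P<step>\d+) "
        r"Total Loss: (?P<total>[\d.]+) "
        r"Recon Loss: (?P<recon>[\d.]+)"
    )

    def parse_log(text):
        """Yield (step, lr, batch_time, total_loss, recon_loss) per record."""
        for m in RECORD.finditer(text):
            yield (int(m["step"]), float(m["lr"]), float(m["batch_t"]),
                   float(m["total"]), float(m["recon"]))

    with open("train.log") as f:  # assumed file name
        records = list(parse_log(f.read()))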
[03/31 17:07:19 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000044 Step: 290100 Total Loss: 0.0370 Recon Loss: 0.0252 [03/31 17:08:17 TiTok]: Data (t): 0.0036, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000044 Step: 290200 Total Loss: 0.0384 Recon Loss: 0.0268 [03/31 17:09:14 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000044 Step: 290300 Total Loss: 0.0352 Recon Loss: 0.0269 [03/31 17:10:12 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000044 Step: 290400 Total Loss: 0.0375 Recon Loss: 0.0278 [03/31 17:11:10 TiTok]: Data (t): 0.0034, 62.09/s/gpu Batch (t): 0.5798 LR: 0.000044 Step: 290500 Total Loss: 0.0393 Recon Loss: 0.0275 [03/31 17:12:08 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000044 Step: 290600 Total Loss: 0.0377 Recon Loss: 0.0273 [03/31 17:13:06 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000044 Step: 290700 Total Loss: 0.0373 Recon Loss: 0.0279 [03/31 17:14:04 TiTok]: Data (t): 0.0034, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000044 Step: 290800 Total Loss: 0.0374 Recon Loss: 0.0286 [03/31 17:15:01 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000044 Step: 290900 Total Loss: 0.0385 Recon Loss: 0.0285 [03/31 17:15:59 TiTok]: Data (t): 0.0032, 48.96/s/gpu Batch (t): 0.7354 LR: 0.000044 Step: 291000 Total Loss: 0.0425 Recon Loss: 0.0293 [03/31 17:16:58 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000044 Step: 291100 Total Loss: 0.0371 Recon Loss: 0.0277 [03/31 17:17:55 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000044 Step: 291200 Total Loss: 0.0384 Recon Loss: 0.0279 [03/31 17:18:53 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000044 Step: 291300 Total Loss: 0.0377 Recon Loss: 0.0279 [03/31 17:19:51 TiTok]: Data (t): 0.0033, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000044 Step: 291400 Total Loss: 0.0379 Recon Loss: 0.0288 [03/31 17:20:48 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000044 Step: 291500 Total Loss: 0.0365 Recon Loss: 0.0265 [03/31 17:21:46 TiTok]: Data (t): 0.0034, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000044 Step: 291600 Total Loss: 0.0345 Recon Loss: 0.0271 [03/31 17:22:44 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000044 Step: 291700 Total Loss: 0.0392 Recon Loss: 0.0286 [03/31 17:23:42 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000044 Step: 291800 Total Loss: 0.0366 Recon Loss: 0.0268 [03/31 17:24:39 TiTok]: Data (t): 0.0032, 62.29/s/gpu Batch (t): 0.5779 LR: 0.000044 Step: 291900 Total Loss: 0.0376 Recon Loss: 0.0269 [03/31 17:25:37 TiTok]: Data (t): 0.0033, 56.66/s/gpu Batch (t): 0.6354 LR: 0.000044 Step: 292000 Total Loss: 0.0368 Recon Loss: 0.0267 [03/31 17:26:35 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000044 Step: 292100 Total Loss: 0.0382 Recon Loss: 0.0259 [03/31 17:27:33 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000044 Step: 292200 Total Loss: 0.0388 Recon Loss: 0.0262 [03/31 17:28:30 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000044 Step: 292300 Total Loss: 0.0386 Recon Loss: 0.0270 [03/31 17:29:28 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000044 Step: 292400 Total Loss: 0.0375 Recon Loss: 0.0267 [03/31 17:30:25 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000044 Step: 292500 Total Loss: 0.0391 Recon Loss: 0.0290 [03/31 17:31:23 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000044 Step: 292600 Total Loss: 0.0405 Recon Loss: 0.0288 [03/31 17:32:21 
TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000044 Step: 292700 Total Loss: 0.0393 Recon Loss: 0.0270 [03/31 17:33:19 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000044 Step: 292800 Total Loss: 0.0354 Recon Loss: 0.0268 [03/31 17:34:18 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000044 Step: 292900 Total Loss: 0.0380 Recon Loss: 0.0278 [03/31 17:35:16 TiTok]: Data (t): 0.0032, 56.78/s/gpu Batch (t): 0.6341 LR: 0.000044 Step: 293000 Total Loss: 0.0382 Recon Loss: 0.0275 [03/31 17:36:14 TiTok]: Data (t): 0.0032, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000044 Step: 293100 Total Loss: 0.0389 Recon Loss: 0.0278 [03/31 17:37:11 TiTok]: Data (t): 0.0034, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000044 Step: 293200 Total Loss: 0.0375 Recon Loss: 0.0268 [03/31 17:38:09 TiTok]: Data (t): 0.0034, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000044 Step: 293300 Total Loss: 0.0366 Recon Loss: 0.0273 [03/31 17:39:07 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000044 Step: 293400 Total Loss: 0.0369 Recon Loss: 0.0271 [03/31 17:40:06 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000043 Step: 293500 Total Loss: 0.0385 Recon Loss: 0.0280 [03/31 17:41:04 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000043 Step: 293600 Total Loss: 0.0369 Recon Loss: 0.0260 [03/31 17:42:02 TiTok]: Data (t): 0.0040, 62.00/s/gpu Batch (t): 0.5807 LR: 0.000043 Step: 293700 Total Loss: 0.0392 Recon Loss: 0.0285 [03/31 17:43:00 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000043 Step: 293800 Total Loss: 0.0368 Recon Loss: 0.0276 [03/31 17:43:57 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000043 Step: 293900 Total Loss: 0.0378 Recon Loss: 0.0267 [03/31 17:44:55 TiTok]: Data (t): 0.0033, 56.73/s/gpu Batch (t): 0.6346 LR: 0.000043 Step: 294000 Total Loss: 0.0405 Recon Loss: 0.0284 [03/31 17:45:53 TiTok]: Data (t): 0.0033, 62.67/s/gpu Batch (t): 0.5744 LR: 0.000043 Step: 294100 Total Loss: 0.0371 Recon Loss: 0.0278 [03/31 17:46:50 TiTok]: Data (t): 0.0034, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000043 Step: 294200 Total Loss: 0.0373 Recon Loss: 0.0291 [03/31 17:47:48 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000043 Step: 294300 Total Loss: 0.0362 Recon Loss: 0.0254 [03/31 17:48:46 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000043 Step: 294400 Total Loss: 0.0407 Recon Loss: 0.0289 [03/31 17:49:43 TiTok]: Data (t): 0.0034, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000043 Step: 294500 Total Loss: 0.0391 Recon Loss: 0.0276 [03/31 17:50:41 TiTok]: Data (t): 0.0033, 62.75/s/gpu Batch (t): 0.5737 LR: 0.000043 Step: 294600 Total Loss: 0.0374 Recon Loss: 0.0274 [03/31 17:51:38 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000043 Step: 294700 Total Loss: 0.0377 Recon Loss: 0.0271 [03/31 17:52:36 TiTok]: Data (t): 0.0036, 60.82/s/gpu Batch (t): 0.5919 LR: 0.000043 Step: 294800 Total Loss: 0.0374 Recon Loss: 0.0275 [03/31 17:53:34 TiTok]: Data (t): 0.0032, 62.71/s/gpu Batch (t): 0.5741 LR: 0.000043 Step: 294900 Total Loss: 0.0391 Recon Loss: 0.0284 [03/31 17:54:32 TiTok]: Data (t): 0.0033, 57.01/s/gpu Batch (t): 0.6315 LR: 0.000043 Step: 295000 Total Loss: 0.0368 Recon Loss: 0.0274 [03/31 17:55:29 TiTok]: Data (t): 0.0033, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000043 Step: 295100 Total Loss: 0.0372 Recon Loss: 0.0273 [03/31 17:56:28 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000043 Step: 295200 Total Loss: 0.0403 Recon Loss: 0.0282 [03/31 17:57:26 TiTok]: Data (t): 
0.0034, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000043 Step: 295300 Total Loss: 0.0373 Recon Loss: 0.0277 [03/31 17:58:23 TiTok]: Data (t): 0.0032, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000043 Step: 295400 Total Loss: 0.0357 Recon Loss: 0.0277 [03/31 17:59:21 TiTok]: Data (t): 0.0032, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000043 Step: 295500 Total Loss: 0.0369 Recon Loss: 0.0269 [03/31 18:00:19 TiTok]: Data (t): 0.0033, 60.98/s/gpu Batch (t): 0.5904 LR: 0.000043 Step: 295600 Total Loss: 0.0393 Recon Loss: 0.0288 [03/31 18:01:17 TiTok]: Data (t): 0.0032, 62.28/s/gpu Batch (t): 0.5780 LR: 0.000043 Step: 295700 Total Loss: 0.0374 Recon Loss: 0.0278 [03/31 18:02:15 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000043 Step: 295800 Total Loss: 0.0399 Recon Loss: 0.0282 [03/31 18:03:12 TiTok]: Data (t): 0.0033, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000043 Step: 295900 Total Loss: 0.0395 Recon Loss: 0.0293 [03/31 18:04:10 TiTok]: Data (t): 0.0032, 56.65/s/gpu Batch (t): 0.6355 LR: 0.000043 Step: 296000 Total Loss: 0.0359 Recon Loss: 0.0270 [03/31 18:05:08 TiTok]: Data (t): 0.0032, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000043 Step: 296100 Total Loss: 0.0378 Recon Loss: 0.0279 [03/31 18:06:06 TiTok]: Data (t): 0.0034, 62.02/s/gpu Batch (t): 0.5805 LR: 0.000043 Step: 296200 Total Loss: 0.0368 Recon Loss: 0.0268 [03/31 18:07:04 TiTok]: Data (t): 0.0034, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000043 Step: 296300 Total Loss: 0.0404 Recon Loss: 0.0286 [03/31 18:08:01 TiTok]: Data (t): 0.0034, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000043 Step: 296400 Total Loss: 0.0390 Recon Loss: 0.0288 [03/31 18:08:59 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000043 Step: 296500 Total Loss: 0.0370 Recon Loss: 0.0291 [03/31 18:09:57 TiTok]: Data (t): 0.0034, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000043 Step: 296600 Total Loss: 0.0366 Recon Loss: 0.0274 [03/31 18:10:55 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000043 Step: 296700 Total Loss: 0.0363 Recon Loss: 0.0257 [03/31 18:11:52 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000043 Step: 296800 Total Loss: 0.0365 Recon Loss: 0.0266 [03/31 18:12:50 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000043 Step: 296900 Total Loss: 0.0385 Recon Loss: 0.0295 [03/31 18:13:48 TiTok]: Data (t): 0.0032, 56.58/s/gpu Batch (t): 0.6363 LR: 0.000043 Step: 297000 Total Loss: 0.0376 Recon Loss: 0.0277 [03/31 18:14:45 TiTok]: Data (t): 0.0032, 62.16/s/gpu Batch (t): 0.5792 LR: 0.000042 Step: 297100 Total Loss: 0.0362 Recon Loss: 0.0274 [03/31 18:15:43 TiTok]: Data (t): 0.0033, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000042 Step: 297200 Total Loss: 0.0373 Recon Loss: 0.0268 [03/31 18:16:41 TiTok]: Data (t): 0.0034, 62.31/s/gpu Batch (t): 0.5777 LR: 0.000042 Step: 297300 Total Loss: 0.0375 Recon Loss: 0.0263 [03/31 18:17:39 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000042 Step: 297400 Total Loss: 0.0392 Recon Loss: 0.0295 [03/31 18:18:38 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000042 Step: 297500 Total Loss: 0.0367 Recon Loss: 0.0285 [03/31 18:19:35 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000042 Step: 297600 Total Loss: 0.0382 Recon Loss: 0.0273 [03/31 18:20:33 TiTok]: Data (t): 0.0032, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000042 Step: 297700 Total Loss: 0.0393 Recon Loss: 0.0268 [03/31 18:21:31 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000042 Step: 297800 Total Loss: 0.0365 Recon Loss: 0.0254 [03/31 18:22:30 TiTok]: Data (t): 0.0033, 
62.47/s/gpu Batch (t): 0.5763 LR: 0.000042 Step: 297900 Total Loss: 0.0404 Recon Loss: 0.0293 [03/31 18:23:28 TiTok]: Data (t): 0.0033, 56.58/s/gpu Batch (t): 0.6363 LR: 0.000042 Step: 298000 Total Loss: 0.0388 Recon Loss: 0.0285 [03/31 18:24:26 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000042 Step: 298100 Total Loss: 0.0367 Recon Loss: 0.0273 [03/31 18:25:24 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000042 Step: 298200 Total Loss: 0.0400 Recon Loss: 0.0281 [03/31 18:26:21 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000042 Step: 298300 Total Loss: 0.0382 Recon Loss: 0.0264 [03/31 18:27:19 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000042 Step: 298400 Total Loss: 0.0358 Recon Loss: 0.0256 [03/31 18:28:17 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000042 Step: 298500 Total Loss: 0.0381 Recon Loss: 0.0271 [03/31 18:29:15 TiTok]: Data (t): 0.0036, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000042 Step: 298600 Total Loss: 0.0364 Recon Loss: 0.0267 [03/31 18:30:13 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000042 Step: 298700 Total Loss: 0.0379 Recon Loss: 0.0273 [03/31 18:31:11 TiTok]: Data (t): 0.0034, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000042 Step: 298800 Total Loss: 0.0393 Recon Loss: 0.0275 [03/31 18:32:08 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000042 Step: 298900 Total Loss: 0.0356 Recon Loss: 0.0265 [03/31 18:33:06 TiTok]: Data (t): 0.0033, 56.74/s/gpu Batch (t): 0.6345 LR: 0.000042 Step: 299000 Total Loss: 0.0347 Recon Loss: 0.0281 [03/31 18:34:04 TiTok]: Data (t): 0.0032, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000042 Step: 299100 Total Loss: 0.0374 Recon Loss: 0.0276 [03/31 18:35:02 TiTok]: Data (t): 0.0034, 61.58/s/gpu Batch (t): 0.5846 LR: 0.000042 Step: 299200 Total Loss: 0.0385 Recon Loss: 0.0278 [03/31 18:36:00 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000042 Step: 299300 Total Loss: 0.0386 Recon Loss: 0.0266 [03/31 18:36:58 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000042 Step: 299400 Total Loss: 0.0372 Recon Loss: 0.0280 [03/31 18:37:55 TiTok]: Data (t): 0.0034, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000042 Step: 299500 Total Loss: 0.0384 Recon Loss: 0.0296 [03/31 18:38:53 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000042 Step: 299600 Total Loss: 0.0376 Recon Loss: 0.0275 [03/31 18:39:51 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000042 Step: 299700 Total Loss: 0.0381 Recon Loss: 0.0265 [03/31 18:40:49 TiTok]: Data (t): 0.0032, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000042 Step: 299800 Total Loss: 0.0360 Recon Loss: 0.0269 [03/31 18:41:47 TiTok]: Data (t): 0.0032, 62.15/s/gpu Batch (t): 0.5792 LR: 0.000042 Step: 299900 Total Loss: 0.0355 Recon Loss: 0.0272 [03/31 18:42:45 TiTok]: Data (t): 0.0033, 54.20/s/gpu Batch (t): 0.6642 LR: 0.000042 Step: 300000 Total Loss: 0.0358 Recon Loss: 0.0267 [03/31 18:42:47 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-300000 [03/31 18:43:01 TiTok]: Reconstructing images... 
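The throughput column is internally consistent with the batch-time column: 62.55 img/s/gpu times 0.5755 s/step is about 36 images per step per GPU, and the slower ~0.63 s batches that recur at every step divisible by 1000 dip to ~57 img/s/gpu accordingly (plausibly extra bookkeeping done on those steps). A quick consistency check, with the per-GPU batch of 36 inferred from the log itself:

    def images_per_sec(batch_time_s, per_gpu_batch=36):
        """Throughput as logged: per-GPU batch size over batch time.

        per_gpu_batch=36 is inferred from this log: 62.55 img/s * 0.5755 s ~= 36.
        """
        return per_gpu_batch / batch_time_s

    assert round(images_per_sec(0.5755), 2) == 62.55  # typical step
    assert round(images_per_sec(0.6354), 2) == 56.66  # slower x000 step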
[03/31 18:43:59 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000042 Step: 300100 Total Loss: 0.0360 Recon Loss: 0.0255 [03/31 18:44:57 TiTok]: Data (t): 0.0034, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000042 Step: 300200 Total Loss: 0.0384 Recon Loss: 0.0266 [03/31 18:45:55 TiTok]: Data (t): 0.0041, 61.82/s/gpu Batch (t): 0.5823 LR: 0.000042 Step: 300300 Total Loss: 0.0382 Recon Loss: 0.0278 [03/31 18:46:53 TiTok]: Data (t): 0.0034, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000042 Step: 300400 Total Loss: 0.0377 Recon Loss: 0.0269 [03/31 18:47:51 TiTok]: Data (t): 0.0034, 61.87/s/gpu Batch (t): 0.5819 LR: 0.000042 Step: 300500 Total Loss: 0.0370 Recon Loss: 0.0262 [03/31 18:48:48 TiTok]: Data (t): 0.0031, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000042 Step: 300600 Total Loss: 0.0387 Recon Loss: 0.0284 [03/31 18:49:46 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5760 LR: 0.000042 Step: 300700 Total Loss: 0.0368 Recon Loss: 0.0270 [03/31 18:50:44 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000041 Step: 300800 Total Loss: 0.0398 Recon Loss: 0.0268 [03/31 18:51:42 TiTok]: Data (t): 0.0034, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000041 Step: 300900 Total Loss: 0.0388 Recon Loss: 0.0284 [03/31 18:52:40 TiTok]: Data (t): 0.0034, 55.53/s/gpu Batch (t): 0.6483 LR: 0.000041 Step: 301000 Total Loss: 0.0388 Recon Loss: 0.0274 [03/31 18:53:37 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000041 Step: 301100 Total Loss: 0.0375 Recon Loss: 0.0261 [03/31 18:54:35 TiTok]: Data (t): 0.0032, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000041 Step: 301200 Total Loss: 0.0388 Recon Loss: 0.0278 [03/31 18:55:33 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000041 Step: 301300 Total Loss: 0.0371 Recon Loss: 0.0272 [03/31 18:56:31 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5774 LR: 0.000041 Step: 301400 Total Loss: 0.0347 Recon Loss: 0.0265 [03/31 18:57:29 TiTok]: Data (t): 0.0035, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000041 Step: 301500 Total Loss: 0.0387 Recon Loss: 0.0282 [03/31 18:58:26 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000041 Step: 301600 Total Loss: 0.0377 Recon Loss: 0.0269 [03/31 18:59:24 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000041 Step: 301700 Total Loss: 0.0368 Recon Loss: 0.0267 [03/31 19:00:22 TiTok]: Data (t): 0.0034, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000041 Step: 301800 Total Loss: 0.0367 Recon Loss: 0.0252 [03/31 19:01:19 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000041 Step: 301900 Total Loss: 0.0376 Recon Loss: 0.0269 [03/31 19:02:18 TiTok]: Data (t): 0.0033, 51.88/s/gpu Batch (t): 0.6939 LR: 0.000041 Step: 302000 Total Loss: 0.0369 Recon Loss: 0.0273 [03/31 19:03:16 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000041 Step: 302100 Total Loss: 0.0374 Recon Loss: 0.0286 [03/31 19:04:14 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000041 Step: 302200 Total Loss: 0.0391 Recon Loss: 0.0287 [03/31 19:05:12 TiTok]: Data (t): 0.0034, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000041 Step: 302300 Total Loss: 0.0385 Recon Loss: 0.0286 [03/31 19:06:11 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000041 Step: 302400 Total Loss: 0.0384 Recon Loss: 0.0276 [03/31 19:07:09 TiTok]: Data (t): 0.0032, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000041 Step: 302500 Total Loss: 0.0360 Recon Loss: 0.0264 [03/31 19:08:07 TiTok]: Data (t): 0.0033, 61.86/s/gpu Batch (t): 0.5820 LR: 0.000041 Step: 302600 Total Loss: 0.0374 Recon Loss: 0.0281 [03/31 19:09:05 
TiTok]: Data (t): 0.0032, 61.90/s/gpu Batch (t): 0.5816 LR: 0.000041 Step: 302700 Total Loss: 0.0385 Recon Loss: 0.0283 [03/31 19:10:03 TiTok]: Data (t): 0.0034, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000041 Step: 302800 Total Loss: 0.0382 Recon Loss: 0.0279 [03/31 19:11:01 TiTok]: Data (t): 0.0033, 62.24/s/gpu Batch (t): 0.5784 LR: 0.000041 Step: 302900 Total Loss: 0.0373 Recon Loss: 0.0285 [03/31 19:11:59 TiTok]: Data (t): 0.0032, 56.66/s/gpu Batch (t): 0.6354 LR: 0.000041 Step: 303000 Total Loss: 0.0377 Recon Loss: 0.0266 [03/31 19:12:56 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000041 Step: 303100 Total Loss: 0.0381 Recon Loss: 0.0279 [03/31 19:13:54 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000041 Step: 303200 Total Loss: 0.0340 Recon Loss: 0.0253 [03/31 19:14:52 TiTok]: Data (t): 0.0034, 61.81/s/gpu Batch (t): 0.5825 LR: 0.000041 Step: 303300 Total Loss: 0.0348 Recon Loss: 0.0264 [03/31 19:15:50 TiTok]: Data (t): 0.0035, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000041 Step: 303400 Total Loss: 0.0355 Recon Loss: 0.0261 [03/31 19:16:48 TiTok]: Data (t): 0.0034, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000041 Step: 303500 Total Loss: 0.0366 Recon Loss: 0.0274 [03/31 19:17:45 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000041 Step: 303600 Total Loss: 0.0367 Recon Loss: 0.0282 [03/31 19:18:43 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000041 Step: 303700 Total Loss: 0.0353 Recon Loss: 0.0267 [03/31 19:19:41 TiTok]: Data (t): 0.0033, 61.59/s/gpu Batch (t): 0.5845 LR: 0.000041 Step: 303800 Total Loss: 0.0348 Recon Loss: 0.0267 [03/31 19:20:39 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000041 Step: 303900 Total Loss: 0.0379 Recon Loss: 0.0273 [03/31 19:21:37 TiTok]: Data (t): 0.0032, 56.77/s/gpu Batch (t): 0.6342 LR: 0.000041 Step: 304000 Total Loss: 0.0385 Recon Loss: 0.0274 [03/31 19:22:35 TiTok]: Data (t): 0.0033, 61.47/s/gpu Batch (t): 0.5857 LR: 0.000041 Step: 304100 Total Loss: 0.0364 Recon Loss: 0.0264 [03/31 19:23:32 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000041 Step: 304200 Total Loss: 0.0375 Recon Loss: 0.0275 [03/31 19:24:31 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000041 Step: 304300 Total Loss: 0.0374 Recon Loss: 0.0285 [03/31 19:25:29 TiTok]: Data (t): 0.0034, 62.17/s/gpu Batch (t): 0.5791 LR: 0.000041 Step: 304400 Total Loss: 0.0361 Recon Loss: 0.0276 [03/31 19:26:27 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000040 Step: 304500 Total Loss: 0.0386 Recon Loss: 0.0277 [03/31 19:27:25 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000040 Step: 304600 Total Loss: 0.0389 Recon Loss: 0.0281 [03/31 19:28:23 TiTok]: Data (t): 0.0034, 61.89/s/gpu Batch (t): 0.5817 LR: 0.000040 Step: 304700 Total Loss: 0.0373 Recon Loss: 0.0284 [03/31 19:29:20 TiTok]: Data (t): 0.0033, 62.26/s/gpu Batch (t): 0.5783 LR: 0.000040 Step: 304800 Total Loss: 0.0369 Recon Loss: 0.0292 [03/31 19:30:18 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000040 Step: 304900 Total Loss: 0.0340 Recon Loss: 0.0264 [03/31 19:31:16 TiTok]: Data (t): 0.0032, 56.73/s/gpu Batch (t): 0.6345 LR: 0.000040 Step: 305000 Total Loss: 0.0387 Recon Loss: 0.0272 [03/31 19:32:14 TiTok]: Data (t): 0.0033, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000040 Step: 305100 Total Loss: 0.0382 Recon Loss: 0.0267 [03/31 19:33:11 TiTok]: Data (t): 0.0034, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000040 Step: 305200 Total Loss: 0.0356 Recon Loss: 0.0266 [03/31 19:34:09 TiTok]: Data (t): 
0.0033, 62.04/s/gpu Batch (t): 0.5802 LR: 0.000040 Step: 305300 Total Loss: 0.0342 Recon Loss: 0.0268 [03/31 19:35:07 TiTok]: Data (t): 0.0032, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000040 Step: 305400 Total Loss: 0.0383 Recon Loss: 0.0282 [03/31 19:36:05 TiTok]: Data (t): 0.0034, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000040 Step: 305500 Total Loss: 0.0368 Recon Loss: 0.0261 [03/31 19:37:03 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000040 Step: 305600 Total Loss: 0.0378 Recon Loss: 0.0272 [03/31 19:38:01 TiTok]: Data (t): 0.0033, 62.15/s/gpu Batch (t): 0.5793 LR: 0.000040 Step: 305700 Total Loss: 0.0354 Recon Loss: 0.0263 [03/31 19:38:59 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000040 Step: 305800 Total Loss: 0.0363 Recon Loss: 0.0259 [03/31 19:39:57 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000040 Step: 305900 Total Loss: 0.0401 Recon Loss: 0.0281 [03/31 19:40:55 TiTok]: Data (t): 0.0033, 56.77/s/gpu Batch (t): 0.6341 LR: 0.000040 Step: 306000 Total Loss: 0.0372 Recon Loss: 0.0286 [03/31 19:41:52 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000040 Step: 306100 Total Loss: 0.0383 Recon Loss: 0.0284 [03/31 19:42:50 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000040 Step: 306200 Total Loss: 0.0364 Recon Loss: 0.0270 [03/31 19:43:48 TiTok]: Data (t): 0.0032, 62.00/s/gpu Batch (t): 0.5806 LR: 0.000040 Step: 306300 Total Loss: 0.0365 Recon Loss: 0.0280 [03/31 19:44:46 TiTok]: Data (t): 0.0033, 62.28/s/gpu Batch (t): 0.5781 LR: 0.000040 Step: 306400 Total Loss: 0.0379 Recon Loss: 0.0280 [03/31 19:45:44 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000040 Step: 306500 Total Loss: 0.0363 Recon Loss: 0.0274 [03/31 19:46:43 TiTok]: Data (t): 0.0032, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000040 Step: 306600 Total Loss: 0.0354 Recon Loss: 0.0259 [03/31 19:47:41 TiTok]: Data (t): 0.0033, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000040 Step: 306700 Total Loss: 0.0365 Recon Loss: 0.0268 [03/31 19:48:41 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000040 Step: 306800 Total Loss: 0.0378 Recon Loss: 0.0272 [03/31 19:49:39 TiTok]: Data (t): 0.0033, 62.17/s/gpu Batch (t): 0.5790 LR: 0.000040 Step: 306900 Total Loss: 0.0367 Recon Loss: 0.0288 [03/31 19:50:38 TiTok]: Data (t): 0.0033, 56.35/s/gpu Batch (t): 0.6389 LR: 0.000040 Step: 307000 Total Loss: 0.0388 Recon Loss: 0.0271 [03/31 19:51:37 TiTok]: Data (t): 0.0033, 62.23/s/gpu Batch (t): 0.5785 LR: 0.000040 Step: 307100 Total Loss: 0.0385 Recon Loss: 0.0277 [03/31 19:52:35 TiTok]: Data (t): 0.0032, 62.17/s/gpu Batch (t): 0.5791 LR: 0.000040 Step: 307200 Total Loss: 0.0375 Recon Loss: 0.0268 [03/31 19:53:33 TiTok]: Data (t): 0.0033, 62.12/s/gpu Batch (t): 0.5796 LR: 0.000040 Step: 307300 Total Loss: 0.0349 Recon Loss: 0.0253 [03/31 19:54:31 TiTok]: Data (t): 0.0034, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000040 Step: 307400 Total Loss: 0.0399 Recon Loss: 0.0284 [03/31 19:55:28 TiTok]: Data (t): 0.0032, 62.33/s/gpu Batch (t): 0.5776 LR: 0.000040 Step: 307500 Total Loss: 0.0395 Recon Loss: 0.0267 [03/31 19:56:26 TiTok]: Data (t): 0.0035, 62.26/s/gpu Batch (t): 0.5782 LR: 0.000040 Step: 307600 Total Loss: 0.0380 Recon Loss: 0.0268 [03/31 19:57:24 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000040 Step: 307700 Total Loss: 0.0396 Recon Loss: 0.0295 [03/31 19:58:22 TiTok]: Data (t): 0.0034, 62.25/s/gpu Batch (t): 0.5784 LR: 0.000040 Step: 307800 Total Loss: 0.0334 Recon Loss: 0.0259 [03/31 19:59:20 TiTok]: Data (t): 0.0033, 
62.19/s/gpu Batch (t): 0.5788 LR: 0.000040 Step: 307900 Total Loss: 0.0359 Recon Loss: 0.0266 [03/31 20:00:17 TiTok]: Data (t): 0.0033, 53.63/s/gpu Batch (t): 0.6713 LR: 0.000040 Step: 308000 Total Loss: 0.0405 Recon Loss: 0.0289 [03/31 20:01:15 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000040 Step: 308100 Total Loss: 0.0383 Recon Loss: 0.0274 [03/31 20:02:13 TiTok]: Data (t): 0.0033, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000039 Step: 308200 Total Loss: 0.0376 Recon Loss: 0.0280 [03/31 20:03:11 TiTok]: Data (t): 0.0034, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000039 Step: 308300 Total Loss: 0.0376 Recon Loss: 0.0272 [03/31 20:04:09 TiTok]: Data (t): 0.0034, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000039 Step: 308400 Total Loss: 0.0377 Recon Loss: 0.0288 [03/31 20:05:07 TiTok]: Data (t): 0.0035, 61.49/s/gpu Batch (t): 0.5854 LR: 0.000039 Step: 308500 Total Loss: 0.0357 Recon Loss: 0.0264 [03/31 20:06:05 TiTok]: Data (t): 0.0034, 62.31/s/gpu Batch (t): 0.5778 LR: 0.000039 Step: 308600 Total Loss: 0.0377 Recon Loss: 0.0285 [03/31 20:07:03 TiTok]: Data (t): 0.0033, 62.40/s/gpu Batch (t): 0.5769 LR: 0.000039 Step: 308700 Total Loss: 0.0350 Recon Loss: 0.0262 [03/31 20:08:01 TiTok]: Data (t): 0.0034, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000039 Step: 308800 Total Loss: 0.0382 Recon Loss: 0.0276 [03/31 20:09:00 TiTok]: Data (t): 0.0034, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000039 Step: 308900 Total Loss: 0.0365 Recon Loss: 0.0264 [03/31 20:09:57 TiTok]: Data (t): 0.0033, 56.22/s/gpu Batch (t): 0.6403 LR: 0.000039 Step: 309000 Total Loss: 0.0380 Recon Loss: 0.0277 [03/31 20:10:55 TiTok]: Data (t): 0.0032, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000039 Step: 309100 Total Loss: 0.0363 Recon Loss: 0.0270 [03/31 20:11:53 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000039 Step: 309200 Total Loss: 0.0377 Recon Loss: 0.0269 [03/31 20:12:51 TiTok]: Data (t): 0.0034, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000039 Step: 309300 Total Loss: 0.0391 Recon Loss: 0.0285 [03/31 20:13:49 TiTok]: Data (t): 0.0033, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000039 Step: 309400 Total Loss: 0.0380 Recon Loss: 0.0284 [03/31 20:14:46 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000039 Step: 309500 Total Loss: 0.0378 Recon Loss: 0.0256 [03/31 20:15:44 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000039 Step: 309600 Total Loss: 0.0381 Recon Loss: 0.0259 [03/31 20:16:42 TiTok]: Data (t): 0.0034, 62.21/s/gpu Batch (t): 0.5787 LR: 0.000039 Step: 309700 Total Loss: 0.0355 Recon Loss: 0.0265 [03/31 20:17:40 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000039 Step: 309800 Total Loss: 0.0372 Recon Loss: 0.0287 [03/31 20:18:37 TiTok]: Data (t): 0.0032, 61.96/s/gpu Batch (t): 0.5810 LR: 0.000039 Step: 309900 Total Loss: 0.0403 Recon Loss: 0.0278 [03/31 20:19:35 TiTok]: Data (t): 0.0033, 56.55/s/gpu Batch (t): 0.6366 LR: 0.000039 Step: 310000 Total Loss: 0.0372 Recon Loss: 0.0264 [03/31 20:19:37 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-310000 [03/31 20:19:51 TiTok]: Reconstructing images... 
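The LR column decays smoothly from 0.000046 to 0.000034 across this window, consistent with the configured cosine schedule (peak 1e-4, floor 1e-5, 5,000 warmup steps, 500,000 total). A sketch assuming the standard linear-warmup plus cosine-decay form reproduces the logged values to the printed precision; the exact step at which a printed digit ticks down may shift by a few hundred steps depending on the implementation:

    import math

    def cosine_lr(step, peak=1e-4, floor=1e-5, warmup=5_000, total=500_000):
        """Assumed linear-warmup + cosine-decay schedule."""
        if step < warmup:
            return peak * step / warmup
        p = (step - warmup) / (total - warmup)
        return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * p))

    for s in (290_000, 300_000, 310_000, 320_000):
        print(s, f"{cosine_lr(s):.6f}")
    # 290000 0.000044 / 300000 0.000042 / 310000 0.000039 / 320000 0.000036,
    # matching the LR logged at these steps.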
[03/31 20:20:49 TiTok]: Data (t): 0.0034, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000039 Step: 310100 Total Loss: 0.0387 Recon Loss: 0.0283 [03/31 20:21:47 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000039 Step: 310200 Total Loss: 0.0362 Recon Loss: 0.0274 [03/31 20:22:44 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000039 Step: 310300 Total Loss: 0.0362 Recon Loss: 0.0271 [03/31 20:23:42 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000039 Step: 310400 Total Loss: 0.0359 Recon Loss: 0.0248 [03/31 20:24:40 TiTok]: Data (t): 0.0035, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000039 Step: 310500 Total Loss: 0.0374 Recon Loss: 0.0273 [03/31 20:25:38 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000039 Step: 310600 Total Loss: 0.0376 Recon Loss: 0.0252 [03/31 20:26:35 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000039 Step: 310700 Total Loss: 0.0355 Recon Loss: 0.0270 [03/31 20:27:33 TiTok]: Data (t): 0.0034, 62.06/s/gpu Batch (t): 0.5800 LR: 0.000039 Step: 310800 Total Loss: 0.0371 Recon Loss: 0.0281 [03/31 20:28:31 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000039 Step: 310900 Total Loss: 0.0378 Recon Loss: 0.0278 [03/31 20:29:29 TiTok]: Data (t): 0.0035, 51.78/s/gpu Batch (t): 0.6952 LR: 0.000039 Step: 311000 Total Loss: 0.0374 Recon Loss: 0.0272 [03/31 20:30:28 TiTok]: Data (t): 0.0033, 62.44/s/gpu Batch (t): 0.5765 LR: 0.000039 Step: 311100 Total Loss: 0.0394 Recon Loss: 0.0282 [03/31 20:31:26 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000039 Step: 311200 Total Loss: 0.0380 Recon Loss: 0.0285 [03/31 20:32:26 TiTok]: Data (t): 0.0033, 58.75/s/gpu Batch (t): 0.6127 LR: 0.000039 Step: 311300 Total Loss: 0.0359 Recon Loss: 0.0258 [03/31 20:33:24 TiTok]: Data (t): 0.0034, 59.36/s/gpu Batch (t): 0.6064 LR: 0.000039 Step: 311400 Total Loss: 0.0370 Recon Loss: 0.0275 [03/31 20:34:22 TiTok]: Data (t): 0.0034, 62.09/s/gpu Batch (t): 0.5798 LR: 0.000039 Step: 311500 Total Loss: 0.0395 Recon Loss: 0.0293 [03/31 20:35:20 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000039 Step: 311600 Total Loss: 0.0368 Recon Loss: 0.0276 [03/31 20:36:18 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000039 Step: 311700 Total Loss: 0.0389 Recon Loss: 0.0290 [03/31 20:37:16 TiTok]: Data (t): 0.0032, 62.13/s/gpu Batch (t): 0.5794 LR: 0.000039 Step: 311800 Total Loss: 0.0356 Recon Loss: 0.0262 [03/31 20:38:14 TiTok]: Data (t): 0.0033, 62.07/s/gpu Batch (t): 0.5800 LR: 0.000038 Step: 311900 Total Loss: 0.0357 Recon Loss: 0.0265 [03/31 20:39:12 TiTok]: Data (t): 0.0032, 56.93/s/gpu Batch (t): 0.6324 LR: 0.000038 Step: 312000 Total Loss: 0.0381 Recon Loss: 0.0280 [03/31 20:40:10 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000038 Step: 312100 Total Loss: 0.0382 Recon Loss: 0.0260 [03/31 20:41:07 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5754 LR: 0.000038 Step: 312200 Total Loss: 0.0375 Recon Loss: 0.0272 [03/31 20:42:05 TiTok]: Data (t): 0.0033, 62.69/s/gpu Batch (t): 0.5743 LR: 0.000038 Step: 312300 Total Loss: 0.0329 Recon Loss: 0.0256 [03/31 20:43:03 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000038 Step: 312400 Total Loss: 0.0374 Recon Loss: 0.0273 [03/31 20:44:00 TiTok]: Data (t): 0.0032, 62.30/s/gpu Batch (t): 0.5779 LR: 0.000038 Step: 312500 Total Loss: 0.0384 Recon Loss: 0.0261 [03/31 20:44:58 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000038 Step: 312600 Total Loss: 0.0366 Recon Loss: 0.0264 [03/31 20:45:55 
TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000038 Step: 312700 Total Loss: 0.0364 Recon Loss: 0.0263 [03/31 20:46:53 TiTok]: Data (t): 0.0032, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000038 Step: 312800 Total Loss: 0.0380 Recon Loss: 0.0289 [03/31 20:47:51 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000038 Step: 312900 Total Loss: 0.0373 Recon Loss: 0.0277 [03/31 20:48:48 TiTok]: Data (t): 0.0033, 56.86/s/gpu Batch (t): 0.6331 LR: 0.000038 Step: 313000 Total Loss: 0.0368 Recon Loss: 0.0258 [03/31 20:49:46 TiTok]: Data (t): 0.0032, 62.69/s/gpu Batch (t): 0.5743 LR: 0.000038 Step: 313100 Total Loss: 0.0377 Recon Loss: 0.0257 [03/31 20:50:44 TiTok]: Data (t): 0.0032, 62.66/s/gpu Batch (t): 0.5746 LR: 0.000038 Step: 313200 Total Loss: 0.0350 Recon Loss: 0.0271 [03/31 20:51:42 TiTok]: Data (t): 0.0032, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000038 Step: 313300 Total Loss: 0.0393 Recon Loss: 0.0282 [03/31 20:52:41 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000038 Step: 313400 Total Loss: 0.0368 Recon Loss: 0.0252 [03/31 20:53:39 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000038 Step: 313500 Total Loss: 0.0368 Recon Loss: 0.0275 [03/31 20:54:36 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000038 Step: 313600 Total Loss: 0.0362 Recon Loss: 0.0284 [03/31 20:55:34 TiTok]: Data (t): 0.0032, 62.53/s/gpu Batch (t): 0.5758 LR: 0.000038 Step: 313700 Total Loss: 0.0383 Recon Loss: 0.0283 [03/31 20:56:32 TiTok]: Data (t): 0.0033, 62.32/s/gpu Batch (t): 0.5776 LR: 0.000038 Step: 313800 Total Loss: 0.0367 Recon Loss: 0.0282 [03/31 20:57:29 TiTok]: Data (t): 0.0032, 62.44/s/gpu Batch (t): 0.5766 LR: 0.000038 Step: 313900 Total Loss: 0.0398 Recon Loss: 0.0284 [03/31 20:58:27 TiTok]: Data (t): 0.0032, 56.68/s/gpu Batch (t): 0.6351 LR: 0.000038 Step: 314000 Total Loss: 0.0386 Recon Loss: 0.0285 [03/31 20:59:25 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000038 Step: 314100 Total Loss: 0.0373 Recon Loss: 0.0266 [03/31 21:00:22 TiTok]: Data (t): 0.0034, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000038 Step: 314200 Total Loss: 0.0376 Recon Loss: 0.0272 [03/31 21:01:20 TiTok]: Data (t): 0.0033, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000038 Step: 314300 Total Loss: 0.0374 Recon Loss: 0.0272 [03/31 21:02:18 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000038 Step: 314400 Total Loss: 0.0364 Recon Loss: 0.0255 [03/31 21:03:15 TiTok]: Data (t): 0.0035, 62.39/s/gpu Batch (t): 0.5770 LR: 0.000038 Step: 314500 Total Loss: 0.0356 Recon Loss: 0.0260 [03/31 21:04:13 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000038 Step: 314600 Total Loss: 0.0392 Recon Loss: 0.0287 [03/31 21:05:11 TiTok]: Data (t): 0.0033, 62.08/s/gpu Batch (t): 0.5799 LR: 0.000038 Step: 314700 Total Loss: 0.0372 Recon Loss: 0.0279 [03/31 21:06:09 TiTok]: Data (t): 0.0034, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000038 Step: 314800 Total Loss: 0.0354 Recon Loss: 0.0239 [03/31 21:07:07 TiTok]: Data (t): 0.0033, 62.22/s/gpu Batch (t): 0.5786 LR: 0.000038 Step: 314900 Total Loss: 0.0391 Recon Loss: 0.0280 [03/31 21:08:04 TiTok]: Data (t): 0.0034, 56.74/s/gpu Batch (t): 0.6345 LR: 0.000038 Step: 315000 Total Loss: 0.0372 Recon Loss: 0.0282 [03/31 21:09:02 TiTok]: Data (t): 0.0034, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000038 Step: 315100 Total Loss: 0.0401 Recon Loss: 0.0288 [03/31 21:10:00 TiTok]: Data (t): 0.0034, 62.01/s/gpu Batch (t): 0.5806 LR: 0.000038 Step: 315200 Total Loss: 0.0373 Recon Loss: 0.0272 [03/31 21:10:58 TiTok]: Data (t): 
0.0033, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000038 Step: 315300 Total Loss: 0.0398 Recon Loss: 0.0286 [03/31 21:11:56 TiTok]: Data (t): 0.0033, 59.55/s/gpu Batch (t): 0.6045 LR: 0.000038 Step: 315400 Total Loss: 0.0355 Recon Loss: 0.0274 [03/31 21:12:54 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000038 Step: 315500 Total Loss: 0.0363 Recon Loss: 0.0260 [03/31 21:13:52 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000038 Step: 315600 Total Loss: 0.0386 Recon Loss: 0.0277 [03/31 21:14:52 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000037 Step: 315700 Total Loss: 0.0375 Recon Loss: 0.0278 [03/31 21:15:50 TiTok]: Data (t): 0.0036, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000037 Step: 315800 Total Loss: 0.0357 Recon Loss: 0.0261 [03/31 21:16:48 TiTok]: Data (t): 0.0034, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000037 Step: 315900 Total Loss: 0.0363 Recon Loss: 0.0270 [03/31 21:17:46 TiTok]: Data (t): 0.0034, 56.16/s/gpu Batch (t): 0.6411 LR: 0.000037 Step: 316000 Total Loss: 0.0402 Recon Loss: 0.0277 [03/31 21:18:43 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000037 Step: 316100 Total Loss: 0.0360 Recon Loss: 0.0272 [03/31 21:19:41 TiTok]: Data (t): 0.0034, 62.42/s/gpu Batch (t): 0.5767 LR: 0.000037 Step: 316200 Total Loss: 0.0387 Recon Loss: 0.0269 [03/31 21:20:39 TiTok]: Data (t): 0.0035, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000037 Step: 316300 Total Loss: 0.0386 Recon Loss: 0.0281 [03/31 21:21:36 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000037 Step: 316400 Total Loss: 0.0381 Recon Loss: 0.0286 [03/31 21:22:34 TiTok]: Data (t): 0.0033, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000037 Step: 316500 Total Loss: 0.0372 Recon Loss: 0.0271 [03/31 21:23:32 TiTok]: Data (t): 0.0033, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000037 Step: 316600 Total Loss: 0.0370 Recon Loss: 0.0271 [03/31 21:24:30 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000037 Step: 316700 Total Loss: 0.0383 Recon Loss: 0.0286 [03/31 21:25:28 TiTok]: Data (t): 0.0034, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000037 Step: 316800 Total Loss: 0.0366 Recon Loss: 0.0279 [03/31 21:26:25 TiTok]: Data (t): 0.0033, 62.72/s/gpu Batch (t): 0.5739 LR: 0.000037 Step: 316900 Total Loss: 0.0359 Recon Loss: 0.0261 [03/31 21:27:23 TiTok]: Data (t): 0.0033, 57.09/s/gpu Batch (t): 0.6306 LR: 0.000037 Step: 317000 Total Loss: 0.0395 Recon Loss: 0.0279 [03/31 21:28:20 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000037 Step: 317100 Total Loss: 0.0349 Recon Loss: 0.0256 [03/31 21:29:18 TiTok]: Data (t): 0.0034, 61.87/s/gpu Batch (t): 0.5819 LR: 0.000037 Step: 317200 Total Loss: 0.0362 Recon Loss: 0.0280 [03/31 21:30:16 TiTok]: Data (t): 0.0032, 62.27/s/gpu Batch (t): 0.5781 LR: 0.000037 Step: 317300 Total Loss: 0.0369 Recon Loss: 0.0266 [03/31 21:31:13 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5751 LR: 0.000037 Step: 317400 Total Loss: 0.0382 Recon Loss: 0.0278 [03/31 21:32:11 TiTok]: Data (t): 0.0034, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000037 Step: 317500 Total Loss: 0.0372 Recon Loss: 0.0262 [03/31 21:33:09 TiTok]: Data (t): 0.0034, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000037 Step: 317600 Total Loss: 0.0352 Recon Loss: 0.0269 [03/31 21:34:06 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000037 Step: 317700 Total Loss: 0.0388 Recon Loss: 0.0293 [03/31 21:35:04 TiTok]: Data (t): 0.0033, 62.73/s/gpu Batch (t): 0.5739 LR: 0.000037 Step: 317800 Total Loss: 0.0377 Recon Loss: 0.0265 [03/31 21:36:03 TiTok]: Data (t): 0.0034, 
61.32/s/gpu Batch (t): 0.5870 LR: 0.000037 Step: 317900 Total Loss: 0.0369 Recon Loss: 0.0275 [03/31 21:37:02 TiTok]: Data (t): 0.0034, 56.62/s/gpu Batch (t): 0.6358 LR: 0.000037 Step: 318000 Total Loss: 0.0362 Recon Loss: 0.0269 [03/31 21:38:00 TiTok]: Data (t): 0.0034, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000037 Step: 318100 Total Loss: 0.0361 Recon Loss: 0.0279 [03/31 21:38:58 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000037 Step: 318200 Total Loss: 0.0371 Recon Loss: 0.0265 [03/31 21:39:55 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000037 Step: 318300 Total Loss: 0.0348 Recon Loss: 0.0262 [03/31 21:40:53 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000037 Step: 318400 Total Loss: 0.0379 Recon Loss: 0.0278 [03/31 21:41:51 TiTok]: Data (t): 0.0034, 62.01/s/gpu Batch (t): 0.5805 LR: 0.000037 Step: 318500 Total Loss: 0.0370 Recon Loss: 0.0262 [03/31 21:42:48 TiTok]: Data (t): 0.0033, 62.06/s/gpu Batch (t): 0.5801 LR: 0.000037 Step: 318600 Total Loss: 0.0387 Recon Loss: 0.0260 [03/31 21:43:46 TiTok]: Data (t): 0.0032, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000037 Step: 318700 Total Loss: 0.0387 Recon Loss: 0.0284 [03/31 21:44:44 TiTok]: Data (t): 0.0033, 58.33/s/gpu Batch (t): 0.6171 LR: 0.000037 Step: 318800 Total Loss: 0.0365 Recon Loss: 0.0280 [03/31 21:45:42 TiTok]: Data (t): 0.0033, 62.60/s/gpu Batch (t): 0.5750 LR: 0.000037 Step: 318900 Total Loss: 0.0366 Recon Loss: 0.0262 [03/31 21:46:40 TiTok]: Data (t): 0.0034, 56.68/s/gpu Batch (t): 0.6351 LR: 0.000037 Step: 319000 Total Loss: 0.0374 Recon Loss: 0.0260 [03/31 21:47:37 TiTok]: Data (t): 0.0033, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000037 Step: 319100 Total Loss: 0.0368 Recon Loss: 0.0265 [03/31 21:48:35 TiTok]: Data (t): 0.0033, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000037 Step: 319200 Total Loss: 0.0372 Recon Loss: 0.0283 [03/31 21:49:33 TiTok]: Data (t): 0.0034, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000037 Step: 319300 Total Loss: 0.0375 Recon Loss: 0.0263 [03/31 21:50:30 TiTok]: Data (t): 0.0034, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000037 Step: 319400 Total Loss: 0.0393 Recon Loss: 0.0286 [03/31 21:51:28 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5750 LR: 0.000036 Step: 319500 Total Loss: 0.0374 Recon Loss: 0.0291 [03/31 21:52:25 TiTok]: Data (t): 0.0033, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000036 Step: 319600 Total Loss: 0.0383 Recon Loss: 0.0271 [03/31 21:53:23 TiTok]: Data (t): 0.0034, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000036 Step: 319700 Total Loss: 0.0389 Recon Loss: 0.0281 [03/31 21:54:21 TiTok]: Data (t): 0.0034, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000036 Step: 319800 Total Loss: 0.0367 Recon Loss: 0.0287 [03/31 21:55:18 TiTok]: Data (t): 0.0033, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000036 Step: 319900 Total Loss: 0.0342 Recon Loss: 0.0258 [03/31 21:56:16 TiTok]: Data (t): 0.0033, 56.68/s/gpu Batch (t): 0.6352 LR: 0.000036 Step: 320000 Total Loss: 0.0392 Recon Loss: 0.0283 [03/31 21:56:19 TiTok]: Saved state to /mnt/books/train_stage2/order_32_stage2/checkpoint-320000 [03/31 21:56:36 TiTok]: Reconstructing images... 
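At the cadence logged here (roughly 58 s per 100-step interval, i.e. about 0.58 s/step once the slower x000 steps are averaged in), the 180,000 steps remaining after checkpoint-320000 work out to roughly 29 hours of wall clock to reach the 500,000-step target. Back-of-envelope:

    # ETA from the logged cadence; sec_per_step is read off this window.
    steps_done, steps_total = 320_000, 500_000
    sec_per_step = 0.58  # ~58 s per 100 steps in this section

    eta_hours = (steps_total - steps_done) * sec_per_step / 3600
    print(f"~{eta_hours:.0f} h remaining")  # ~29 h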
[03/31 21:57:35 TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000036 Step: 320100 Total Loss: 0.0394 Recon Loss: 0.0285 [03/31 21:58:35 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000036 Step: 320200 Total Loss: 0.0372 Recon Loss: 0.0288 [03/31 21:59:33 TiTok]: Data (t): 0.0033, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000036 Step: 320300 Total Loss: 0.0374 Recon Loss: 0.0268 [03/31 22:00:31 TiTok]: Data (t): 0.0033, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000036 Step: 320400 Total Loss: 0.0357 Recon Loss: 0.0279 [03/31 22:01:28 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5753 LR: 0.000036 Step: 320500 Total Loss: 0.0387 Recon Loss: 0.0277 [03/31 22:02:26 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000036 Step: 320600 Total Loss: 0.0351 Recon Loss: 0.0269 [03/31 22:03:24 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000036 Step: 320700 Total Loss: 0.0373 Recon Loss: 0.0280 [03/31 22:04:22 TiTok]: Data (t): 0.0034, 62.59/s/gpu Batch (t): 0.5751 LR: 0.000036 Step: 320800 Total Loss: 0.0361 Recon Loss: 0.0266 [03/31 22:05:19 TiTok]: Data (t): 0.0034, 60.97/s/gpu Batch (t): 0.5905 LR: 0.000036 Step: 320900 Total Loss: 0.0370 Recon Loss: 0.0270 [03/31 22:06:17 TiTok]: Data (t): 0.0032, 51.94/s/gpu Batch (t): 0.6931 LR: 0.000036 Step: 321000 Total Loss: 0.0359 Recon Loss: 0.0262 [03/31 22:07:15 TiTok]: Data (t): 0.0033, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000036 Step: 321100 Total Loss: 0.0386 Recon Loss: 0.0264 [03/31 22:08:12 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5763 LR: 0.000036 Step: 321200 Total Loss: 0.0365 Recon Loss: 0.0276 [03/31 22:09:10 TiTok]: Data (t): 0.0034, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000036 Step: 321300 Total Loss: 0.0376 Recon Loss: 0.0269 [03/31 22:10:08 TiTok]: Data (t): 0.0035, 62.65/s/gpu Batch (t): 0.5747 LR: 0.000036 Step: 321400 Total Loss: 0.0360 Recon Loss: 0.0260 [03/31 22:11:05 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000036 Step: 321500 Total Loss: 0.0379 Recon Loss: 0.0275 [03/31 22:12:03 TiTok]: Data (t): 0.0033, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000036 Step: 321600 Total Loss: 0.0361 Recon Loss: 0.0268 [03/31 22:13:01 TiTok]: Data (t): 0.0033, 62.54/s/gpu Batch (t): 0.5756 LR: 0.000036 Step: 321700 Total Loss: 0.0371 Recon Loss: 0.0271 [03/31 22:13:59 TiTok]: Data (t): 0.0034, 62.36/s/gpu Batch (t): 0.5773 LR: 0.000036 Step: 321800 Total Loss: 0.0380 Recon Loss: 0.0283 [03/31 22:14:56 TiTok]: Data (t): 0.0032, 62.41/s/gpu Batch (t): 0.5768 LR: 0.000036 Step: 321900 Total Loss: 0.0366 Recon Loss: 0.0276 [03/31 22:15:54 TiTok]: Data (t): 0.0032, 56.62/s/gpu Batch (t): 0.6358 LR: 0.000036 Step: 322000 Total Loss: 0.0370 Recon Loss: 0.0276 [03/31 22:16:52 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000036 Step: 322100 Total Loss: 0.0382 Recon Loss: 0.0274 [03/31 22:17:50 TiTok]: Data (t): 0.0034, 62.59/s/gpu Batch (t): 0.5751 LR: 0.000036 Step: 322200 Total Loss: 0.0345 Recon Loss: 0.0265 [03/31 22:18:48 TiTok]: Data (t): 0.0031, 62.62/s/gpu Batch (t): 0.5749 LR: 0.000036 Step: 322300 Total Loss: 0.0379 Recon Loss: 0.0281 [03/31 22:19:45 TiTok]: Data (t): 0.0034, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000036 Step: 322400 Total Loss: 0.0350 Recon Loss: 0.0263 [03/31 22:20:44 TiTok]: Data (t): 0.0032, 62.55/s/gpu Batch (t): 0.5756 LR: 0.000036 Step: 322500 Total Loss: 0.0368 Recon Loss: 0.0267 [03/31 22:21:42 TiTok]: Data (t): 0.0032, 62.63/s/gpu Batch (t): 0.5748 LR: 0.000036 Step: 322600 Total Loss: 0.0359 Recon Loss: 0.0274 [03/31 22:22:39 
TiTok]: Data (t): 0.0034, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000036 Step: 322700 Total Loss: 0.0361 Recon Loss: 0.0265 [03/31 22:23:37 TiTok]: Data (t): 0.0032, 62.72/s/gpu Batch (t): 0.5740 LR: 0.000036 Step: 322800 Total Loss: 0.0354 Recon Loss: 0.0267 [03/31 22:24:35 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5784 LR: 0.000036 Step: 322900 Total Loss: 0.0387 Recon Loss: 0.0265 [03/31 22:25:33 TiTok]: Data (t): 0.0034, 56.27/s/gpu Batch (t): 0.6398 LR: 0.000036 Step: 323000 Total Loss: 0.0364 Recon Loss: 0.0268 [03/31 22:26:30 TiTok]: Data (t): 0.0034, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000036 Step: 323100 Total Loss: 0.0350 Recon Loss: 0.0270 [03/31 22:27:28 TiTok]: Data (t): 0.0033, 62.64/s/gpu Batch (t): 0.5747 LR: 0.000036 Step: 323200 Total Loss: 0.0368 Recon Loss: 0.0274 [03/31 22:28:25 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000036 Step: 323300 Total Loss: 0.0394 Recon Loss: 0.0273 [03/31 22:29:23 TiTok]: Data (t): 0.0034, 62.54/s/gpu Batch (t): 0.5757 LR: 0.000035 Step: 323400 Total Loss: 0.0347 Recon Loss: 0.0267 [03/31 22:30:21 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000035 Step: 323500 Total Loss: 0.0361 Recon Loss: 0.0266 [03/31 22:31:18 TiTok]: Data (t): 0.0032, 56.36/s/gpu Batch (t): 0.6388 LR: 0.000035 Step: 323600 Total Loss: 0.0366 Recon Loss: 0.0275 [03/31 22:32:16 TiTok]: Data (t): 0.0032, 62.60/s/gpu Batch (t): 0.5751 LR: 0.000035 Step: 323700 Total Loss: 0.0356 Recon Loss: 0.0254 [03/31 22:33:14 TiTok]: Data (t): 0.0032, 59.24/s/gpu Batch (t): 0.6077 LR: 0.000035 Step: 323800 Total Loss: 0.0390 Recon Loss: 0.0286 [03/31 22:34:12 TiTok]: Data (t): 0.0033, 62.43/s/gpu Batch (t): 0.5767 LR: 0.000035 Step: 323900 Total Loss: 0.0355 Recon Loss: 0.0270 [03/31 22:35:09 TiTok]: Data (t): 0.0033, 56.53/s/gpu Batch (t): 0.6368 LR: 0.000035 Step: 324000 Total Loss: 0.0390 Recon Loss: 0.0277 [03/31 22:36:07 TiTok]: Data (t): 0.0032, 62.58/s/gpu Batch (t): 0.5753 LR: 0.000035 Step: 324100 Total Loss: 0.0368 Recon Loss: 0.0275 [03/31 22:37:05 TiTok]: Data (t): 0.0032, 62.14/s/gpu Batch (t): 0.5793 LR: 0.000035 Step: 324200 Total Loss: 0.0386 Recon Loss: 0.0294 [03/31 22:38:02 TiTok]: Data (t): 0.0033, 62.53/s/gpu Batch (t): 0.5757 LR: 0.000035 Step: 324300 Total Loss: 0.0387 Recon Loss: 0.0284 [03/31 22:39:00 TiTok]: Data (t): 0.0032, 62.00/s/gpu Batch (t): 0.5807 LR: 0.000035 Step: 324400 Total Loss: 0.0377 Recon Loss: 0.0288 [03/31 22:39:58 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000035 Step: 324500 Total Loss: 0.0384 Recon Loss: 0.0265 [03/31 22:40:57 TiTok]: Data (t): 0.0032, 61.90/s/gpu Batch (t): 0.5816 LR: 0.000035 Step: 324600 Total Loss: 0.0371 Recon Loss: 0.0265 [03/31 22:41:55 TiTok]: Data (t): 0.0032, 62.18/s/gpu Batch (t): 0.5790 LR: 0.000035 Step: 324700 Total Loss: 0.0356 Recon Loss: 0.0274 [03/31 22:42:55 TiTok]: Data (t): 0.0033, 62.16/s/gpu Batch (t): 0.5792 LR: 0.000035 Step: 324800 Total Loss: 0.0374 Recon Loss: 0.0272 [03/31 22:43:53 TiTok]: Data (t): 0.0034, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000035 Step: 324900 Total Loss: 0.0380 Recon Loss: 0.0261 [03/31 22:44:51 TiTok]: Data (t): 0.0034, 56.65/s/gpu Batch (t): 0.6355 LR: 0.000035 Step: 325000 Total Loss: 0.0380 Recon Loss: 0.0274 [03/31 22:45:48 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000035 Step: 325100 Total Loss: 0.0379 Recon Loss: 0.0289 [03/31 22:46:46 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000035 Step: 325200 Total Loss: 0.0377 Recon Loss: 0.0274 [03/31 22:47:44 TiTok]: Data (t): 
0.0032, 62.55/s/gpu Batch (t): 0.5755 LR: 0.000035 Step: 325300 Total Loss: 0.0373 Recon Loss: 0.0270 [03/31 22:48:42 TiTok]: Data (t): 0.0032, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000035 Step: 325400 Total Loss: 0.0365 Recon Loss: 0.0292 [03/31 22:49:40 TiTok]: Data (t): 0.0035, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000035 Step: 325500 Total Loss: 0.0388 Recon Loss: 0.0268 [03/31 22:50:38 TiTok]: Data (t): 0.0032, 62.25/s/gpu Batch (t): 0.5783 LR: 0.000035 Step: 325600 Total Loss: 0.0363 Recon Loss: 0.0276 [03/31 22:51:35 TiTok]: Data (t): 0.0034, 62.38/s/gpu Batch (t): 0.5771 LR: 0.000035 Step: 325700 Total Loss: 0.0361 Recon Loss: 0.0269 [03/31 22:52:33 TiTok]: Data (t): 0.0033, 62.48/s/gpu Batch (t): 0.5762 LR: 0.000035 Step: 325800 Total Loss: 0.0348 Recon Loss: 0.0264 [03/31 22:53:31 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000035 Step: 325900 Total Loss: 0.0391 Recon Loss: 0.0286 [03/31 22:54:29 TiTok]: Data (t): 0.0033, 56.59/s/gpu Batch (t): 0.6361 LR: 0.000035 Step: 326000 Total Loss: 0.0362 Recon Loss: 0.0270 [03/31 22:55:27 TiTok]: Data (t): 0.0033, 60.51/s/gpu Batch (t): 0.5950 LR: 0.000035 Step: 326100 Total Loss: 0.0366 Recon Loss: 0.0270 [03/31 22:56:24 TiTok]: Data (t): 0.0032, 62.57/s/gpu Batch (t): 0.5754 LR: 0.000035 Step: 326200 Total Loss: 0.0395 Recon Loss: 0.0286 [03/31 22:57:22 TiTok]: Data (t): 0.0032, 62.43/s/gpu Batch (t): 0.5766 LR: 0.000035 Step: 326300 Total Loss: 0.0361 Recon Loss: 0.0264 [03/31 22:58:20 TiTok]: Data (t): 0.0033, 62.19/s/gpu Batch (t): 0.5789 LR: 0.000035 Step: 326400 Total Loss: 0.0389 Recon Loss: 0.0277 [03/31 22:59:18 TiTok]: Data (t): 0.0032, 62.45/s/gpu Batch (t): 0.5765 LR: 0.000035 Step: 326500 Total Loss: 0.0367 Recon Loss: 0.0278 [03/31 23:00:16 TiTok]: Data (t): 0.0034, 62.03/s/gpu Batch (t): 0.5804 LR: 0.000035 Step: 326600 Total Loss: 0.0388 Recon Loss: 0.0294 [03/31 23:01:14 TiTok]: Data (t): 0.0034, 62.35/s/gpu Batch (t): 0.5774 LR: 0.000035 Step: 326700 Total Loss: 0.0368 Recon Loss: 0.0264 [03/31 23:02:12 TiTok]: Data (t): 0.0032, 62.51/s/gpu Batch (t): 0.5759 LR: 0.000035 Step: 326800 Total Loss: 0.0364 Recon Loss: 0.0265 [03/31 23:03:10 TiTok]: Data (t): 0.0034, 62.39/s/gpu Batch (t): 0.5771 LR: 0.000035 Step: 326900 Total Loss: 0.0362 Recon Loss: 0.0270 [03/31 23:04:08 TiTok]: Data (t): 0.0033, 56.30/s/gpu Batch (t): 0.6394 LR: 0.000035 Step: 327000 Total Loss: 0.0378 Recon Loss: 0.0282 [03/31 23:05:07 TiTok]: Data (t): 0.0032, 62.32/s/gpu Batch (t): 0.5777 LR: 0.000035 Step: 327100 Total Loss: 0.0388 Recon Loss: 0.0284 [03/31 23:06:05 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000035 Step: 327200 Total Loss: 0.0386 Recon Loss: 0.0286 [03/31 23:07:03 TiTok]: Data (t): 0.0032, 62.34/s/gpu Batch (t): 0.5775 LR: 0.000034 Step: 327300 Total Loss: 0.0350 Recon Loss: 0.0267 [03/31 23:08:01 TiTok]: Data (t): 0.0035, 62.50/s/gpu Batch (t): 0.5760 LR: 0.000034 Step: 327400 Total Loss: 0.0371 Recon Loss: 0.0257 [03/31 23:08:58 TiTok]: Data (t): 0.0032, 62.56/s/gpu Batch (t): 0.5755 LR: 0.000034 Step: 327500 Total Loss: 0.0350 Recon Loss: 0.0269 [03/31 23:09:56 TiTok]: Data (t): 0.0032, 62.12/s/gpu Batch (t): 0.5795 LR: 0.000034 Step: 327600 Total Loss: 0.0380 Recon Loss: 0.0270 [03/31 23:10:54 TiTok]: Data (t): 0.0033, 62.45/s/gpu Batch (t): 0.5764 LR: 0.000034 Step: 327700 Total Loss: 0.0365 Recon Loss: 0.0268 [03/31 23:11:51 TiTok]: Data (t): 0.0032, 62.48/s/gpu Batch (t): 0.5761 LR: 0.000034 Step: 327800 Total Loss: 0.0346 Recon Loss: 0.0270 [03/31 23:12:49 TiTok]: Data (t): 0.0033, 
62.40/s/gpu Batch (t): 0.5769 LR: 0.000034 Step: 327900 Total Loss: 0.0353 Recon Loss: 0.0265 [03/31 23:13:47 TiTok]: Data (t): 0.0032, 56.84/s/gpu Batch (t): 0.6334 LR: 0.000034 Step: 328000 Total Loss: 0.0380 Recon Loss: 0.0273 [03/31 23:14:45 TiTok]: Data (t): 0.0033, 62.65/s/gpu Batch (t): 0.5746 LR: 0.000034 Step: 328100 Total Loss: 0.0353 Recon Loss: 0.0259 [03/31 23:15:42 TiTok]: Data (t): 0.0032, 62.52/s/gpu Batch (t): 0.5758 LR: 0.000034 Step: 328200 Total Loss: 0.0359 Recon Loss: 0.0274 [03/31 23:16:40 TiTok]: Data (t): 0.0032, 62.47/s/gpu Batch (t): 0.5763 LR: 0.000034 Step: 328300 Total Loss: 0.0376 Recon Loss: 0.0273 [03/31 23:17:38 TiTok]: Data (t): 0.0033, 62.58/s/gpu Batch (t): 0.5752 LR: 0.000034 Step: 328400 Total Loss: 0.0385 Recon Loss: 0.0284 [03/31 23:18:35 TiTok]: Data (t): 0.0032, 62.61/s/gpu Batch (t): 0.5750 LR: 0.000034 Step: 328500 Total Loss: 0.0367 Recon Loss: 0.0273 [03/31 23:19:33 TiTok]: Data (t): 0.0033, 62.37/s/gpu Batch (t): 0.5772 LR: 0.000034 Step: 328600 Total Loss: 0.0363 Recon Loss: 0.0287 [03/31 23:20:31 TiTok]: Data (t): 0.0033, 62.49/s/gpu Batch (t): 0.5761 LR: 0.000034 Step: 328700 Total Loss: 0.0367 Recon Loss: 0.0260 [03/31 23:21:28 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000034 Step: 328800 Total Loss: 0.0354 Recon Loss: 0.0265 [03/31 23:22:26 TiTok]: Data (t): 0.0033, 62.59/s/gpu Batch (t): 0.5752 LR: 0.000034 Step: 328900 Total Loss: 0.0360 Recon Loss: 0.0273 [03/31 23:23:24 TiTok]: Data (t): 0.0033, 56.43/s/gpu Batch (t): 0.6379 LR: 0.000034 Step: 329000 Total Loss: 0.0380 Recon Loss: 0.0272 [03/31 23:24:23 TiTok]: Data (t): 0.0033, 62.42/s/gpu Batch (t): 0.5768 LR: 0.000034 Step: 329100 Total Loss: 0.0370 Recon Loss: 0.0277 [03/31 23:25:21 TiTok]: Data (t): 0.0033, 62.46/s/gpu Batch (t): 0.5764 LR: 0.000034 Step: 329200 Total Loss: 0.0387 Recon Loss: 0.0285
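With the records parsed as sketched earlier, a quick plot confirms what the raw numbers suggest: over this ~46k-step window the total loss oscillates in a narrow band (roughly 0.033 to 0.043) with no visible trend at this zoom level, which is unsurprising this deep into a long cosine-decay run. A matplotlib usage sketch, assuming the records list from the parser above:

    import matplotlib.pyplot as plt

    steps = [r[0] for r in records]   # from parse_log() above
    total = [r[3] for r in records]

    plt.plot(steps, total, lw=0.8)
    plt.xlabel("step"); plt.ylabel("total loss")
    plt.title("stage2 total loss, steps ~283k-329k")
    plt.savefig("loss_curve.png", dpi=150)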