| === STDOUT === |
| [2025-12-03 19:59:19,939] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect) |
| [2025-12-03 19:59:20,529] [INFO] [root.spawn:60] [PID:121] gcc -pthread -B /root/miniconda3/envs/py3.11/compiler_compat -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /root/miniconda3/envs/py3.11/include -fPIC -O2 -isystem /root/miniconda3/envs/py3.11/include -fPIC -c /tmp/tmpp0n1fqw2/test.c -o /tmp/tmpp0n1fqw2/test.o |
| [2025-12-03 19:59:20,571] [INFO] [root.spawn:60] [PID:121] gcc -pthread -B /root/miniconda3/envs/py3.11/compiler_compat /tmp/tmpp0n1fqw2/test.o -laio -o /tmp/tmpp0n1fqw2/a.out |
| [WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH |
| [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3 |
| [WARNING] using untested triton version (2.3.1), only 1.0.0 is known to be compatible |
| [2025-12-03 19:59:22,343] [WARNING] [axolotl.utils.config.models.input.hint_lora_8bit:1221] [PID:121] [RANK:0] We recommend setting `load_in_8bit: true` for LORA finetuning |
| [2025-12-03 19:59:22,344] [DEBUG] [axolotl.normalize_config:83] [PID:121] [RANK:0] bf16 support detected, enabling for this configuration. |
| [2025-12-03 19:59:22,897] [INFO] [axolotl.normalize_config:207] [PID:121] [RANK:0] GPU memory usage baseline: 0.000GB (+0.471GB misc) |
|
|
| #@@ #@@ @@# @@# |
| @@ @@ @@ @@ =@@# @@ #@ =@@#. |
| @@ #@@@@@@@@@ @@ #@#@= @@ #@ .=@@ |
| #@@@@@@@@@@@@@@@@@ =@# @# ##= ## =####=+ @@ =#####+ =#@@###. @@ |
| @@@@@@@@@@/ +@@/ +@@ #@ =@= #@= @@ =@#+ +#@# @@ =@#+ +#@# #@. @@ |
| @@@@@@@@@@ ##@@ ##@@ =@# @# =@# @# @@ @@ @@ @@ #@ #@ @@ |
| @@@@@@@@@@@@@@@@@@@@ #@=+++#@= =@@# @@ @@ @@ @@ #@ #@ @@ |
| =@#=====@@ =@# @# @@ @@ @@ @@ #@ #@ @@ |
| @@@@@@@@@@@@@@@@ @@@@ #@ #@= #@= +@@ #@# =@# @@. =@# =@# #@. @@ |
| =@# @# #@= #@ =#@@@@#= +#@@= +#@@@@#= .##@@+ @@ |
| @@@@ @@@@@@@@@@@@@@@@ |
|
|
| [2025-12-03 19:59:26,240] [DEBUG] [axolotl.load_tokenizer:293] [PID:121] [RANK:0] EOS: 0 / <|endoftext|> |
| [2025-12-03 19:59:26,240] [DEBUG] [axolotl.load_tokenizer:294] [PID:121] [RANK:0] BOS: 0 / <|endoftext|> |
| [2025-12-03 19:59:26,240] [DEBUG] [axolotl.load_tokenizer:295] [PID:121] [RANK:0] PAD: 0 / <|endoftext|> |
| [2025-12-03 19:59:26,240] [DEBUG] [axolotl.load_tokenizer:296] [PID:121] [RANK:0] UNK: 0 / <|endoftext|> |
| [2025-12-03 19:59:26,240] [INFO] [axolotl.load_tokenizer:310] [PID:121] [RANK:0] No Chat template selected. Consider adding a chat template for easier inference. |
| [2025-12-03 19:59:26,241] [INFO] [axolotl.load_tokenized_prepared_datasets:234] [PID:121] [RANK:0] Unable to find prepared dataset in last_run_prepared/c8b534ced2ddf0659aff669f20b527cd |
| [2025-12-03 19:59:26,241] [INFO] [axolotl.load_tokenized_prepared_datasets:235] [PID:121] [RANK:0] Loading raw datasets... |
| [2025-12-03 19:59:26,241] [WARNING] [axolotl.load_tokenized_prepared_datasets:237] [PID:121] [RANK:0] Processing datasets during training can lead to VRAM instability. Please pre-process your dataset. |
| [2025-12-03 19:59:26,241] [INFO] [axolotl.load_tokenized_prepared_datasets:244] [PID:121] [RANK:0] No seed provided, using default seed of 42 |
| [2025-12-03 19:59:31,440] [INFO] [axolotl.get_dataset_wrapper:612] [PID:121] [RANK:0] Loading dataset with base_type: alpaca and prompt_style: None |
| [2025-12-03 19:59:35,953] [INFO] [axolotl.load_tokenized_prepared_datasets:491] [PID:121] [RANK:0] Saving merged prepared dataset to disk... last_run_prepared/c8b534ced2ddf0659aff669f20b527cd |
| [2025-12-03 19:59:35,992] [DEBUG] [axolotl.calculate_total_num_steps:320] [PID:121] [RANK:0] total_num_tokens: 325_153 |
| [2025-12-03 19:59:36,008] [DEBUG] [axolotl.calculate_total_num_steps:338] [PID:121] [RANK:0] total_supervised_tokens: 222_219 |
| [2025-12-03 19:59:36,008] [DEBUG] [axolotl.calculate_total_num_steps:416] [PID:121] [RANK:0] total_num_steps: 475 |
| [2025-12-03 19:59:36,008] [INFO] [axolotl.prepare_dataset:152] [PID:121] [RANK:0] Maximum number of steps set at 100 |
| [2025-12-03 19:59:36,019] [DEBUG] [axolotl.train.train:66] [PID:121] [RANK:0] loading tokenizer... HuggingFaceTB/SmolLM2-135M |
| [2025-12-03 19:59:36,499] [DEBUG] [axolotl.load_tokenizer:293] [PID:121] [RANK:0] EOS: 0 / <|endoftext|> |
| [2025-12-03 19:59:36,499] [DEBUG] [axolotl.load_tokenizer:294] [PID:121] [RANK:0] BOS: 0 / <|endoftext|> |
| [2025-12-03 19:59:36,499] [DEBUG] [axolotl.load_tokenizer:295] [PID:121] [RANK:0] PAD: 0 / <|endoftext|> |
| [2025-12-03 19:59:36,499] [DEBUG] [axolotl.load_tokenizer:296] [PID:121] [RANK:0] UNK: 0 / <|endoftext|> |
| [2025-12-03 19:59:36,499] [INFO] [axolotl.load_tokenizer:310] [PID:121] [RANK:0] No Chat template selected. Consider adding a chat template for easier inference. |
| [2025-12-03 19:59:36,499] [DEBUG] [axolotl.train.train:98] [PID:121] [RANK:0] loading model and peft_config... |
| [2025-12-03 19:59:40,766] [INFO] [axolotl.load_model:1074] [PID:121] [RANK:0] converting modules to torch.bfloat16 for flash attention |
| trainable params: 460,800 || all params: 134,975,808 || trainable%: 0.3414 |
| [2025-12-03 19:59:40,880] [INFO] [axolotl.load_model:1137] [PID:121] [RANK:0] GPU memory usage after adapters: 0.000GB () |
| [2025-12-03 19:59:41,944] [INFO] [axolotl.train.train:141] [PID:121] [RANK:0] Pre-saving adapter config to ./outputs/admin_20251203_195913 |
| [2025-12-03 19:59:41,993] [INFO] [axolotl.train.train:178] [PID:121] [RANK:0] Starting trainer... |
| [2025-12-03 19:59:43,462] [INFO] [axolotl.callbacks.on_step_end:128] [PID:121] [RANK:0] GPU memory usage while training: 0.272GB (+0.754GB cache, +0.978GB misc) |
| {'loss': 1.7199, 'grad_norm': 0.6013352870941162, 'learning_rate': 0.0002961615786970389, 'epoch': 0.02} |
| {'loss': 1.8633, 'grad_norm': 0.2632163166999817, 'learning_rate': 0.0002778325235483954, 'epoch': 0.04} |
| {'loss': 1.6853, 'grad_norm': 0.43362969160079956, 'learning_rate': 0.00024621123294467096, 'epoch': 0.06} |
| {'loss': 1.8084, 'grad_norm': 0.3705150783061981, 'learning_rate': 0.00020458574054452313, 'epoch': 0.08} |
| {'loss': 1.5581, 'grad_norm': 0.4706592857837677, 'learning_rate': 0.00015728433331716724, 'epoch': 0.11} |
| {'loss': 1.7214, 'grad_norm': 0.4183749854564667, 'learning_rate': 0.00010922548916454855, 'epoch': 0.13} |
| {'loss': 1.7939, 'grad_norm': 0.34439632296562195, 'learning_rate': 6.540644552236401e-05, 'epoch': 0.15} |
| {'loss': 1.5683, 'grad_norm': 0.3492596745491028, 'learning_rate': 3.038357841559191e-05, 'epoch': 0.17} |
| {'loss': 1.505, 'grad_norm': 0.3816680610179901, 'learning_rate': 7.798623006559435e-06, 'epoch': 0.19} |
| {'loss': 1.675, 'grad_norm': 0.3192552626132965, 'learning_rate': 0.0, 'epoch': 0.21} |
| {'eval_loss': 1.6974681615829468, 'eval_runtime': 2.4034, 'eval_samples_per_second': 41.608, 'eval_steps_per_second': 20.804, 'epoch': 0.21} |
| {'train_runtime': 28.1039, 'train_samples_per_second': 14.233, 'train_steps_per_second': 3.558, 'train_loss': 1.6898639583587647, 'epoch': 0.21} |
| [2025-12-03 20:00:10,358] [INFO] [axolotl.train.train:195] [PID:121] [RANK:0] Training Completed!!! Saving pre-trained model to ./outputs/admin_20251203_195913 |
|
|
|
|
| === STDERR === |
| The following values were not passed to `accelerate launch` and had defaults used instead: |
| `--num_processes` was set to a value of `1` |
| `--num_machines` was set to a value of `1` |
| `--mixed_precision` was set to a value of `'no'` |
| `--dynamo_backend` was set to a value of `'no'` |
| To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`. |
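The launcher warning above can be silenced by passing the four defaulted values explicitly. A minimal sketch, assuming the usual `accelerate launch -m axolotl.cli.train` entry point; the config path `config.yml` is a placeholder, not taken from this run:

```shell
# Build the launch command with the values this run fell back to.
# config.yml is a placeholder path, not taken from this log.
cmd="accelerate launch --num_processes 1 --num_machines 1 \
--mixed_precision no --dynamo_backend no -m axolotl.cli.train config.yml"
echo "$cmd"
```

Running `accelerate config` once instead writes these answers to a default config file, so no flags are needed afterwards.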
| WARNING: BNB_CUDA_VERSION=121 environment variable detected; loading libbitsandbytes_cuda121.so. |
| This can be used to load a bitsandbytes version that is different from the PyTorch CUDA version. |
| If this was unintended set the BNB_CUDA_VERSION variable to an empty string: export BNB_CUDA_VERSION= |
| If you use the manual override make sure the right libcudart.so is in your LD_LIBRARY_PATH |
| For example by adding the following to your .bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_cuda_dir>/lib64 |
|
|
| Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none). |
| df: /root/.triton/autotune: No such file or directory |
| /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/pydantic/main.py:464: UserWarning: Pydantic serializer warnings: |
| PydanticSerializationUnexpectedValue(Expected `enum` - serialized value may not be as expected [field_name='lr_scheduler', input_value='cosine', input_type=str]) |
| PydanticSerializationUnexpectedValue(Expected `literal['one_cycle']` - serialized value may not be as expected [field_name='lr_scheduler', input_value='cosine', input_type=str]) |
| return self.__pydantic_serializer__.to_python( |
|
|
| Generating train split: 100%|██████████| 2000/2000 [00:00<00:00, 54614.40 examples/s] |
|
|
| Tokenizing Prompts (num_proc=64): 100%|██████████| 2000/2000 [00:02<00:00, 765.06 examples/s] |
|
|
| Saving the dataset (1/1 shards): 100%|██████████| 2000/2000 [00:00<00:00, 111810.84 examples/s] |
| You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. |
| /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/training_args.py:1559: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of π€ Transformers. Use `eval_strategy` instead |
| warnings.warn( |
| /workspace/axolotl/src/axolotl/core/trainer_builder.py:417: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `AxolotlTrainer.__init__`. Use `processing_class` instead. |
| super().__init__(*_args, **kwargs) |
| max_steps is given, it will override any value given in num_train_epochs |
|
|
| You're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. |
|
|
| 100%|██████████| 100/100 [00:24<00:00, 4.25it/s] |
|
| 100%|██████████| 50/50 [00:02<00:00, 21.47it/s] |
| 100%|██████████| 100/100 [00:28<00:00, 3.56it/s] |
|
|