=== STDOUT ===
[2025-12-03 19:59:19,939] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-12-03 19:59:20,529] [INFO] [root.spawn:60] [PID:121] gcc -pthread -B /root/miniconda3/envs/py3.11/compiler_compat -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /root/miniconda3/envs/py3.11/include -fPIC -O2 -isystem /root/miniconda3/envs/py3.11/include -fPIC -c /tmp/tmpp0n1fqw2/test.c -o /tmp/tmpp0n1fqw2/test.o
[2025-12-03 19:59:20,571] [INFO] [root.spawn:60] [PID:121] gcc -pthread -B /root/miniconda3/envs/py3.11/compiler_compat /tmp/tmpp0n1fqw2/test.o -laio -o /tmp/tmpp0n1fqw2/a.out
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.1), only 1.0.0 is known to be compatible
[2025-12-03 19:59:22,343] [WARNING] [axolotl.utils.config.models.input.hint_lora_8bit:1221] [PID:121] [RANK:0] We recommend setting `load_in_8bit: true` for LORA finetuning
[2025-12-03 19:59:22,344] [DEBUG] [axolotl.normalize_config:83] [PID:121] [RANK:0] bf16 support detected, enabling for this configuration.
[2025-12-03 19:59:22,897] [INFO] [axolotl.normalize_config:207] [PID:121] [RANK:0] GPU memory usage baseline: 0.000GB (+0.471GB misc)
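The `load_in_8bit: true` recommendation above refers to quantizing the frozen base weights to 8-bit while training only the LoRA adapter weights. A minimal sketch of that pattern with transformers + peft, using the model name from this log; the rank, alpha, and target modules are illustrative assumptions, not values read from this run's config:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Frozen base model quantized to 8-bit (what `load_in_8bit: true` turns on).
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

# LoRA adapters on top; r / alpha / target_modules here are assumed for illustration.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # prints a "trainable params: ..." line like the one later in this log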
[axolotl ASCII-art banner]
[2025-12-03 19:59:26,240] [DEBUG] [axolotl.load_tokenizer:293] [PID:121] [RANK:0] EOS: 0 / <|endoftext|>
[2025-12-03 19:59:26,240] [DEBUG] [axolotl.load_tokenizer:294] [PID:121] [RANK:0] BOS: 0 / <|endoftext|>
[2025-12-03 19:59:26,240] [DEBUG] [axolotl.load_tokenizer:295] [PID:121] [RANK:0] PAD: 0 / <|endoftext|>
[2025-12-03 19:59:26,240] [DEBUG] [axolotl.load_tokenizer:296] [PID:121] [RANK:0] UNK: 0 / <|endoftext|>
[2025-12-03 19:59:26,240] [INFO] [axolotl.load_tokenizer:310] [PID:121] [RANK:0] No Chat template selected. Consider adding a chat template for easier inference.
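The four special-token slots logged above can be inspected directly, and a chat template can be attached to the tokenizer to address the "No Chat template selected" notice. A small sketch; the Jinja template string is a minimal illustrative placeholder, not the template this project uses:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")
print(tok.eos_token_id, tok.eos_token)              # 0 / <|endoftext|>, as logged
print(tok.bos_token, tok.pad_token, tok.unk_token)  # all <|endoftext|> for this run

# Optional: set a chat template so apply_chat_template() works at inference time.
tok.chat_template = (
    "{% for m in messages %}{{ m['role'] }}: {{ m['content'] }}\n{% endfor %}assistant:"
)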
[2025-12-03 19:59:26,241] [INFO] [axolotl.load_tokenized_prepared_datasets:234] [PID:121] [RANK:0] Unable to find prepared dataset in last_run_prepared/c8b534ced2ddf0659aff669f20b527cd
[2025-12-03 19:59:26,241] [INFO] [axolotl.load_tokenized_prepared_datasets:235] [PID:121] [RANK:0] Loading raw datasets...
[2025-12-03 19:59:26,241] [WARNING] [axolotl.load_tokenized_prepared_datasets:237] [PID:121] [RANK:0] Processing datasets during training can lead to VRAM instability. Please pre-process your dataset.
[2025-12-03 19:59:26,241] [INFO] [axolotl.load_tokenized_prepared_datasets:244] [PID:121] [RANK:0] No seed provided, using default seed of 42
[2025-12-03 19:59:31,440] [INFO] [axolotl.get_dataset_wrapper:612] [PID:121] [RANK:0] Loading dataset with base_type: alpaca and prompt_style: None
[2025-12-03 19:59:35,953] [INFO] [axolotl.load_tokenized_prepared_datasets:491] [PID:121] [RANK:0] Saving merged prepared dataset to disk... last_run_prepared/c8b534ced2ddf0659aff669f20b527cd
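`base_type: alpaca` above means each raw example is rendered into the standard Alpaca instruction prompt before tokenization. A sketch of that rendering, using the common Alpaca field names (instruction / input / output); this shows the general format, not a dump of this run's dataset:

def render_alpaca(example: dict) -> str:
    # Standard Alpaca prompt layout: instruction, optional input, then the response.
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )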
[2025-12-03 19:59:35,992] [DEBUG] [axolotl.calculate_total_num_steps:320] [PID:121] [RANK:0] total_num_tokens: 325_153
[2025-12-03 19:59:36,008] [DEBUG] [axolotl.calculate_total_num_steps:338] [PID:121] [RANK:0] `total_supervised_tokens: 222_219`
[2025-12-03 19:59:36,008] [DEBUG] [axolotl.calculate_total_num_steps:416] [PID:121] [RANK:0] total_num_steps: 475
[2025-12-03 19:59:36,008] [INFO] [axolotl.prepare_dataset:152] [PID:121] [RANK:0] Maximum number of steps set at 100
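The step count above follows from simple arithmetic: with roughly 1,900 training examples after the eval split and an effective batch size of 4 per optimizer step (both inferred from the throughput numbers later in the log, not read from the config), one epoch is about 475 steps, and capping the run at 100 steps reaches roughly a fifth of an epoch, consistent with the final `epoch: 0.21` below. As a quick check:

train_examples = 1900        # assumption: 2000 raw rows minus a ~100-example eval split
effective_batch = 4          # assumption: micro_batch_size * gradient_accumulation_steps
steps_per_epoch = train_examples // effective_batch
print(steps_per_epoch)                  # 475, matching total_num_steps
print(round(100 / steps_per_epoch, 2))  # ~0.21 epochs at max_steps=100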
[2025-12-03 19:59:36,019] [DEBUG] [axolotl.train.train:66] [PID:121] [RANK:0] loading tokenizer... HuggingFaceTB/SmolLM2-135M
[2025-12-03 19:59:36,499] [DEBUG] [axolotl.load_tokenizer:293] [PID:121] [RANK:0] EOS: 0 / <|endoftext|>
[2025-12-03 19:59:36,499] [DEBUG] [axolotl.load_tokenizer:294] [PID:121] [RANK:0] BOS: 0 / <|endoftext|>
[2025-12-03 19:59:36,499] [DEBUG] [axolotl.load_tokenizer:295] [PID:121] [RANK:0] PAD: 0 / <|endoftext|>
[2025-12-03 19:59:36,499] [DEBUG] [axolotl.load_tokenizer:296] [PID:121] [RANK:0] UNK: 0 / <|endoftext|>
[2025-12-03 19:59:36,499] [INFO] [axolotl.load_tokenizer:310] [PID:121] [RANK:0] No Chat template selected. Consider adding a chat template for easier inference.
[2025-12-03 19:59:36,499] [DEBUG] [axolotl.train.train:98] [PID:121] [RANK:0] loading model and peft_config...
[2025-12-03 19:59:40,766] [INFO] [axolotl.load_model:1074] [PID:121] [RANK:0] converting modules to torch.bfloat16 for flash attention
trainable params: 460,800 || all params: 134,975,808 || trainable%: 0.3414
[2025-12-03 19:59:40,880] [INFO] [axolotl.load_model:1137] [PID:121] [RANK:0] GPU memory usage after adapters: 0.000GB ()
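The trainable-parameter percentage above is just the ratio of adapter weights to total weights; each LoRA-targeted weight matrix of shape (d_out, d_in) contributes r * (d_in + d_out) extra parameters. A quick check of the logged numbers (only the percentage is verified here; the adapter rank and target modules are not stated in this log):

trainable, total = 460_800, 134_975_808
print(f"{100 * trainable / total:.4f}%")  # 0.3414%, matching the log line

# General LoRA size per targeted weight matrix, for reference:
def lora_params(d_in: int, d_out: int, r: int) -> int:
    return r * (d_in + d_out)  # A: (r, d_in) plus B: (d_out, r)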
[2025-12-03 19:59:41,944] [INFO] [axolotl.train.train:141] [PID:121] [RANK:0] Pre-saving adapter config to ./outputs/admin_20251203_195913
[2025-12-03 19:59:41,993] [INFO] [axolotl.train.train:178] [PID:121] [RANK:0] Starting trainer...
[2025-12-03 19:59:43,462] [INFO] [axolotl.callbacks.on_step_end:128] [PID:121] [RANK:0] GPU memory usage while training: 0.272GB (+0.754GB cache, +0.978GB misc)
{'loss': 1.7199, 'grad_norm': 0.6013352870941162, 'learning_rate': 0.0002961615786970389, 'epoch': 0.02}
{'loss': 1.8633, 'grad_norm': 0.2632163166999817, 'learning_rate': 0.0002778325235483954, 'epoch': 0.04}
{'loss': 1.6853, 'grad_norm': 0.43362969160079956, 'learning_rate': 0.00024621123294467096, 'epoch': 0.06}
{'loss': 1.8084, 'grad_norm': 0.3705150783061981, 'learning_rate': 0.00020458574054452313, 'epoch': 0.08}
{'loss': 1.5581, 'grad_norm': 0.4706592857837677, 'learning_rate': 0.00015728433331716724, 'epoch': 0.11}
{'loss': 1.7214, 'grad_norm': 0.4183749854564667, 'learning_rate': 0.00010922548916454855, 'epoch': 0.13}
{'loss': 1.7939, 'grad_norm': 0.34439632296562195, 'learning_rate': 6.540644552236401e-05, 'epoch': 0.15}
{'loss': 1.5683, 'grad_norm': 0.3492596745491028, 'learning_rate': 3.038357841559191e-05, 'epoch': 0.17}
{'loss': 1.505, 'grad_norm': 0.3816680610179901, 'learning_rate': 7.798623006559435e-06, 'epoch': 0.19}
{'loss': 1.675, 'grad_norm': 0.3192552626132965, 'learning_rate': 0.0, 'epoch': 0.21}
{'eval_loss': 1.6974681615829468, 'eval_runtime': 2.4034, 'eval_samples_per_second': 41.608, 'eval_steps_per_second': 20.804, 'epoch': 0.21}
{'train_runtime': 28.1039, 'train_samples_per_second': 14.233, 'train_steps_per_second': 3.558, 'train_loss': 1.6898639583587647, 'epoch': 0.21}
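The learning-rate column in the step logs above decays to zero over the 100 capped steps, and the pydantic warning in STDERR confirms `lr_scheduler: cosine`. A rough sketch of that shape, assuming a peak LR of 3e-4 and a short linear warmup (both assumptions, since neither value is printed in this log), so the outputs only approximately match the logged values:

import math

def cosine_lr(step: int, max_steps: int = 100, peak: float = 3e-4, warmup: int = 5) -> float:
    # Linear warmup, then cosine decay to zero: the usual "cosine" scheduler shape.
    if step < warmup:
        return peak * step / max(1, warmup)
    progress = (step - warmup) / max(1, max_steps - warmup)
    return 0.5 * peak * (1.0 + math.cos(math.pi * progress))

for s in (10, 50, 100):
    print(s, cosine_lr(s))  # decays from near-peak toward 0.0 at step 100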
[2025-12-03 20:00:10,358] [INFO] [axolotl.train.train:195] [PID:121] [RANK:0] Training Completed!!! Saving pre-trained model to ./outputs/admin_20251203_195913
=== STDERR ===
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_processes` was set to a value of `1`
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
WARNING: BNB_CUDA_VERSION=121 environment variable detected; loading libbitsandbytes_cuda121.so.
This can be used to load a bitsandbytes version that is different from the PyTorch CUDA version.
If this was unintended set the BNB_CUDA_VERSION variable to an empty string: export BNB_CUDA_VERSION=
If you use the manual override make sure the right libcudart.so is in your LD_LIBRARY_PATH
For example by adding the following to your .bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_cuda_dir/lib64
Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).
df: /root/.triton/autotune: No such file or directory
/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/pydantic/main.py:464: UserWarning: Pydantic serializer warnings:
PydanticSerializationUnexpectedValue(Expected `enum` - serialized value may not be as expected [field_name='lr_scheduler', input_value='cosine', input_type=str])
PydanticSerializationUnexpectedValue(Expected `literal['one_cycle']` - serialized value may not be as expected [field_name='lr_scheduler', input_value='cosine', input_type=str])
return self.__pydantic_serializer__.to_python(
Generating train split: 0%| | 0/2000 [00:00<?, ? examples/s]
Generating train split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2000/2000 [00:00<00:00, 54614.40 examples/s]
Tokenizing Prompts (num_proc=64): 0%| | 0/2000 [00:00<?, ? examples/s]
Tokenizing Prompts (num_proc=64): 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2000/2000 [00:02<00:00, 765.06 examples/s]
Saving the dataset (0/1 shards): 0%| | 0/2000 [00:00<?, ? examples/s]
Saving the dataset (1/1 shards): 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2000/2000 [00:00<00:00, 111810.84 examples/s]
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/training_args.py:1559: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of πŸ€— Transformers. Use `eval_strategy` instead
warnings.warn(
/workspace/axolotl/src/axolotl/core/trainer_builder.py:417: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `AxolotlTrainer.__init__`. Use `processing_class` instead.
super().__init__(*_args, **kwargs)
max_steps is given, it will override any value given in num_train_epochs
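The two FutureWarnings and the max_steps note above map onto current transformers Trainer arguments: `evaluation_strategy` is now spelled `eval_strategy`, the `tokenizer=` argument is being replaced by `processing_class=`, and `max_steps` always takes precedence over `num_train_epochs`. A hedged sketch of the non-deprecated spelling; the values are illustrative, not this run's exact arguments:

from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    output_dir="./outputs/example",  # illustrative path
    max_steps=100,                   # overrides num_train_epochs, as the log notes
    eval_strategy="steps",           # new name for evaluation_strategy
    bf16=True,
    learning_rate=3e-4,              # assumption; the peak LR is not printed in this log
    lr_scheduler_type="cosine",
)
# trainer = Trainer(model=model, args=args, processing_class=tokenizer, ...)  # new kwarg name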
0%| | 0/100 [00:00<?, ?it/s]You're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 100/100 [00:24<00:00, 4.25it/s]
0%| | 0/50 [00:00<?, ?it/s]
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 50/50 [00:02<00:00, 21.47it/s]
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 100/100 [00:28<00:00, 3.56it/s]