Upload PolyGuard artifact: log command_complete
outputs/logs/hf_training.log
CHANGED
@@ -20090,3 +20090,11 @@ Loading checkpoint shards:  50%|█████     | 1/2 [00:00<00:00, 1.87it/s]
 Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 2.59it/s]
 Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 2.45it/s]
 merge_done
+$ python scripts/test_inference_postsave.py --samples 5 --base-model Qwen/Qwen2.5-3B-Instruct --merged-model checkpoints/sweeps/qwen-qwen2-5-3b-instruct/merged --adapter-dir checkpoints/sweeps/qwen-qwen2-5-3b-instruct/sft_adapter --output outputs/reports/sweeps/qwen-qwen2-5-3b-instruct/postsave_inference_sft.json
+The tokenizer you are loading from '/app/checkpoints/sweeps/qwen-qwen2-5-3b-instruct/merged' with an incorrect regex pattern: https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503/discussions/84#69121093e8b480e709447d5e. This will lead to incorrect tokenization. You should set the `fix_mistral_regex=True` flag when loading this tokenizer to fix this issue.
+`torch_dtype` is deprecated! Use `dtype` instead!
+
+Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
+Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 38.48it/s]
+The following generation flags are not valid and may be ignored: ['temperature', 'top_p', 'top_k']. Set `TRANSFORMERS_VERBOSITY=info` for more details.
+postsave_inference_ok
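
The logged command runs a post-save sanity check: it loads the merged checkpoint, generates on a handful of samples, and writes a JSON report before printing a success sentinel. The sketch below shows one plausible shape for the report-writing core of such a script. It is an assumption, not the actual `test_inference_postsave.py`: the `generate_fn` parameter is a hypothetical stand-in for a call to `model.generate()` on the merged model, and the report fields are illustrative.

```python
import json
from pathlib import Path


def run_postsave_check(generate_fn, prompts, output_path):
    """Run each prompt through the merged model and record the outputs.

    generate_fn: callable mapping a prompt string to a completion string.
    In a real script this would wrap tokenization + model.generate() on the
    merged checkpoint; here it is injected so the reporting logic is testable.
    """
    results = []
    for prompt in prompts:
        completion = generate_fn(prompt)
        results.append({
            "prompt": prompt,
            "completion": completion,
            # A minimal health signal: the model produced non-whitespace text.
            "nonempty": bool(completion.strip()),
        })

    report = {
        "samples": len(results),
        "all_nonempty": all(r["nonempty"] for r in results),
        "results": results,
    }

    # Mirror the --output behavior: create parent dirs, then write the JSON report.
    out = Path(output_path)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(report, indent=2))
    return report
```

A driver would then print a sentinel such as `postsave_inference_ok` (as seen in the log) only when `report["all_nonempty"]` is true, so the training pipeline can grep the log for success.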