Error loading the model with LM Studio.
#1
by FlameF0X - opened
Failed to load last used model.
Failed to load model
error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'qwen3'
Hello, as I understand it, this error is caused by the model itself, not by the quantization. Still, I will keep this discussion open until it works. (I will post an update if there is a fix.)
The main problem is that the BPE pre-tokenizer is not recognized by llama.cpp's conversion script, so I had to force 'qwen3':
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:** WARNING: The BPE pre-tokenizer was not recognized!
WARNING:hf-to-gguf:** There are 2 possible reasons for this:
WARNING:hf-to-gguf:** - the model has not been added to convert_hf_to_gguf_update.py yet
WARNING:hf-to-gguf:** - the pre-tokenization config has changed upstream
WARNING:hf-to-gguf:** Check your model files and convert_hf_to_gguf_update.py and update them accordingly.
WARNING:hf-to-gguf:** ref: https://github.com/ggml-org/llama.cpp/pull/6920
WARNING:hf-to-gguf:**
WARNING:hf-to-gguf:** chkhsh: 7a5e1981344bd694e1f742be597f5bcc3f02d5e5bbb45f5a8a7a4b89b12ac37d
WARNING:hf-to-gguf:**************************************************************************************
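For context, here is a minimal sketch of how llama.cpp's convert_hf_to_gguf.py recognizes a BPE pre-tokenizer (see the PR linked in the warning): it tokenizes a fixed probe text, hashes the resulting token IDs, and looks that hash (the `chkhsh` printed above) up in a table of known pre-tokenizers. The probe token IDs and table entries below are illustrative placeholders, not the real values from the script.

```python
import hashlib

# Hypothetical token IDs produced by tokenizing the script's probe text.
PROBE_TOKENS = [1, 15043, 3186]

def chkhsh_of(token_ids):
    # The script hashes the string form of the token ID list.
    return hashlib.sha256(str(token_ids).encode()).hexdigest()

# chkhsh -> pre-tokenizer name (illustrative entry; the real script has
# one entry per supported model family).
KNOWN_PRE_TOKENIZERS = {chkhsh_of(PROBE_TOKENS): "llama-bpe"}

def identify_pre_tokenizer(token_ids):
    # Returns None when the tokenization is unknown, which is what
    # triggers the "BPE pre-tokenizer was not recognized!" warning.
    return KNOWN_PRE_TOKENIZERS.get(chkhsh_of(token_ids))
```

When a model tokenizes the probe text differently from every known entry, the lookup fails; the intended fix is to register the model's `chkhsh` via convert_hf_to_gguf_update.py rather than forcing a name like 'qwen3', since the forced name must also exist in the llama.cpp build that later loads the GGUF.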
oh, okay.
I found the fix. Requantizing and uploading now.
fixed
sapbot changed discussion status to closed