Transformers update needed when trying to make a Q4_K_M GGUF for Jackrong/Qwopus3.5-4B-v3

#201
by TimexPeachtree - opened

Please see the following error reported in this space while trying to make a GGUF for Jackrong/Qwopus3.5-4B-v3, as I need it specifically as a reasoning coding LLM to run locally. The Q8_0 quant the model owner has provided takes more resources to run, so while trying to make a more quantized GGUF I got this error.

Error converting to fp16: INFO:hf-to-gguf:Loading model: Qwopus3.5-4B-v3
WARNING:hf-to-gguf:Failed to load model config from downloads/tmp0sazuz76/Qwopus3.5-4B-v3: The checkpoint you are trying to load has model type qwen3_5 but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

You can update Transformers with the command pip install --upgrade transformers. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command pip install git+https://github.com/huggingface/transformers.git
WARNING:hf-to-gguf:Trying to load config.json instead
INFO:hf-to-gguf:Model architecture: Qwen3_5ForConditionalGeneration
ERROR:hf-to-gguf:Model Qwen3_5ForConditionalGeneration is not supported
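For context, the final error comes from the conversion script itself, not from Transformers: the script reads the checkpoint's config.json and checks the declared architecture against its own registry of supported model classes. The sketch below illustrates that check with an inlined stand-in for the real config.json and a purely illustrative subset of supported architectures; it is not the actual converter code.

```python
# Minimal sketch of the check that fails: the converter looks at the
# "architectures" field of config.json and rejects anything it does not
# recognize. The JSON string and the "supported" set are stand-ins.
import json

config_text = '''{
  "architectures": ["Qwen3_5ForConditionalGeneration"],
  "model_type": "qwen3_5"
}'''

config = json.loads(config_text)
arch = config["architectures"][0]

# Illustrative subset of architectures a converter might register;
# the new Qwen3_5 class is absent, hence the "is not supported" error.
supported = {"Qwen2ForCausalLM", "Qwen3ForCausalLM"}

print(arch, "supported" if arch in supported else "not supported")
# → Qwen3_5ForConditionalGeneration not supported
```

So upgrading Transformers (pip install --upgrade transformers, or from source as the warning suggests) only fixes the config-loading warning; the conversion itself will still fail until the conversion tool adds support for the Qwen3_5ForConditionalGeneration architecture.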
