Which transformer version did you use?

#3
by stelterlab - opened

Hi!
I'm currently trying to quantize the newest Qwen/Qwen3.5-35B-A3B with llm-compressor 0.9.1.dev129+g9b7fb9f7, but I fail with both transformers v4.57.6:

ValueError: The checkpoint you are trying to load has model type `qwen3_5_moe` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

and transformers v5.2.0:

ImportError: cannot import name 'TORCH_INIT_FUNCTIONS' from 'transformers.modeling_utils' (/data/quant/src/llm-compressor/.venv/lib/python3.12/site-packages/transformers/modeling_utils.py). Did you mean: 'ROPE_INIT_FUNCTIONS'?

I've seen https://github.com/vllm-project/llm-compressor/issues/2289 (transformers v5 is not yet supported).
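For anyone hitting the same wall: the second traceback shows llm-compressor importing TORCH_INIT_FUNCTIONS from transformers.modeling_utils, a symbol present in transformers v4.x but no longer exposed in v5. A minimal probe (my own sketch, not part of llm-compressor) to check whether the installed transformers is still v4-compatible in that respect:

```python
# Sketch: check whether the installed transformers still exposes
# TORCH_INIT_FUNCTIONS, the symbol llm-compressor fails to import
# in the traceback above (present in v4.x, removed in v5).
import importlib


def has_torch_init_functions() -> bool:
    """True if transformers.modeling_utils defines TORCH_INIT_FUNCTIONS."""
    try:
        mod = importlib.import_module("transformers.modeling_utils")
    except ImportError:
        # transformers (or a dependency such as torch) is not installed
        return False
    return hasattr(mod, "TORCH_INIT_FUNCTIONS")


if __name__ == "__main__":
    print(f"v4-style TORCH_INIT_FUNCTIONS present: {has_torch_init_functions()}")
```

If this prints False on a v5 install, llm-compressor's import will fail exactly as shown above, so the environment needs a v4.x transformers that already knows the `qwen3_5_moe` architecture.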

So what is the trick? ;-)