Unlike the source model, I can't convert this to GGUF: tokenizer not supported

#1
by MikaSouthworth - opened

Did you change the tokenizer more than required? llama.cpp no longer recognizes it, unlike the source model from Qwen.

Turns out I just needed to read the full output ;D

`pip install git+https://github.com/huggingface/transformers.git` <---- do this in your mamba/conda env and it works... I am sure (I HOPE!!!) I am not the only one making this mistake 🫥

So apologies, but I'll let this thread stand here so others can see the solution.
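For anyone hitting the same error, the full sequence might look like the sketch below: install transformers from the main branch (the thread's fix, since tokenizer support for newer models sometimes lands there before a PyPI release), then retry the conversion. The `./my-finetuned-model` path is a placeholder for your local model directory, and `convert_hf_to_gguf.py` is the conversion script shipped in a llama.cpp checkout.

```shell
# Install the development version of transformers; newer tokenizers may
# not be supported by the latest PyPI release yet.
pip install git+https://github.com/huggingface/transformers.git

# Retry the GGUF conversion from inside a llama.cpp checkout.
# ./my-finetuned-model is a placeholder for your local model directory.
python convert_hf_to_gguf.py ./my-finetuned-model --outfile model.gguf
```

These are environment-setup commands, so results depend on your conda/mamba env and llama.cpp version; if the conversion still fails, check that the tokenizer files in the model directory match what the source Qwen model ships.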
