config.json error

#1
by martoon - opened

Hello,
I'm sorry, maybe I'm missing something, but I can't load this model with the following code:

```python
# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("jpacifico/Chocolatine-2-14B-Instruct-v2.0.3-Q8_0-GGUF")
```

Error:

```
ValueError: Unrecognized model in jpacifico/Chocolatine-2-14B-Instruct-v2.0.3-Q8_0-GGUF. Should have a model_type key in its config.json
```

The non-quantized model loads fine.

Hi @martoon, I'd say that you cannot directly load a GGUF-quantized model (`.gguf`) with `AutoModel.from_pretrained()`, because GGUF is a format specific to llama.cpp and is not natively supported by transformers. That's why the loader finds no `model_type` key in a `config.json`: a GGUF repo ships weights in a single `.gguf` file rather than the usual transformers config and safetensors files.
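As a possible workaround, here is a minimal sketch of loading this repo with llama-cpp-python (the Python bindings for llama.cpp) instead of transformers. This assumes `pip install llama-cpp-python` and that the repo's weights file matches the `*Q8_0.gguf` glob; the function name is just for illustration:

```python
# Hedged sketch: load the GGUF weights with llama-cpp-python instead of transformers.
# Assumes llama-cpp-python is installed: pip install llama-cpp-python
try:
    from llama_cpp import Llama
except ImportError:
    Llama = None  # library not installed; load_chocolatine() will explain

def load_chocolatine():
    """Download the Q8_0 .gguf from the Hub and load it with llama.cpp."""
    if Llama is None:
        raise RuntimeError("install llama-cpp-python first: pip install llama-cpp-python")
    return Llama.from_pretrained(
        repo_id="jpacifico/Chocolatine-2-14B-Instruct-v2.0.3-Q8_0-GGUF",
        filename="*Q8_0.gguf",  # glob matching the quantized weights file in the repo
    )
```

The returned `Llama` object can then generate text directly, e.g. `load_chocolatine()("Bonjour", max_tokens=32)`. Alternatively, the non-quantized repo should keep working with `AutoModel.from_pretrained()` as you observed.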
