Conversion of 3.5-A4B-Think from the same series
Sorry for writing here; I really like this model and wanted to try aquif-3.5-A4B-Think from the same series, but conversion to GGUF fails both on a git-cloned copy of the model and when using --remote.
Since it still has no quantized versions on HF, I wonder whether the model metadata for aquif-3.5-A4B-Think is actually broken; that's the error I get most often.
Actually, I think the model weights are corrupted. If I interpret that error correctly, the file is larger than its header indicates, which is not allowed in the safetensors format. Or it may just be plain corrupted.
INFO:hf-to-gguf:gguf: loading model part 'model-00005-of-00005.safetensors'
safetensors_rust.SafetensorError: Error while deserializing header: incomplete metadata, file not fully covered
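For anyone hitting the same error, here is a quick way to sanity-check a shard yourself. This is just an illustrative sketch (the `check_safetensors_size` helper is my own, not part of any official tooling); it relies on the standard safetensors layout: an 8-byte little-endian header length, a JSON header with per-tensor `data_offsets`, then tightly packed tensor data. If the actual file size differs from what the header implies, the shard is truncated or has trailing garbage.

```python
import json
import struct
from pathlib import Path

def check_safetensors_size(path):
    """Compare a .safetensors file's actual size against the size
    implied by its header. A mismatch suggests a corrupted shard."""
    data = Path(path).read_bytes()
    # First 8 bytes: little-endian u64 length of the JSON header.
    header_len = struct.unpack("<Q", data[:8])[0]
    header = json.loads(data[8 : 8 + header_len])
    # The header maps tensor names to dtype/shape/data_offsets;
    # the data region must end exactly at the largest end offset.
    end = max(
        (v["data_offsets"][1] for k, v in header.items() if k != "__metadata__"),
        default=0,
    )
    expected = 8 + header_len + end
    actual = len(data)
    return expected, actual

# expected, actual = check_safetensors_size("model-00005-of-00005.safetensors")
# if actual != expected:
#     print("size mismatch: corrupted shard?")
```

If `actual` is larger than `expected`, you have the "file not fully covered" situation the error message describes.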
Yep, that's what I got too. A shame, since the 8B model is very interesting!
Just a heads up, the developer has fixed the last shard! https://huggingface.co/aquif-ai/aquif-3.5-A4B-Think/discussions/1
Tried online conversion, and it seems to work well.
Wow, somebody (not me) sneakily already queued it in our queue, so it's all done by now :-) Well done, everybody!