mmproj file for LM studio support?

#1
by userx1 - opened

I can't find an mmproj file in the file list, which makes this model unusable in LM Studio (it requires an mmproj file to expose vision capability).

The first quant run did not produce the mmproj because it was not supported by llama.cpp at the time. I will try again.

It still says not supported. Please file a feature request to get support added here: https://github.com/ggml-org/llama.cpp/issues. Here is the log output from the conversion attempts:

Renaming temporary file to original file name: /home/mahadeva/code/models/llama-joycaption-beta-one-hf-llava/llama-joycaption-beta-one-hf-llava-bf16.gguf
Renamed /tmp/tmpc0tks06i to /home/mahadeva/code/models/llama-joycaption-beta-one-hf-llava/llama-joycaption-beta-one-hf-llava-bf16.gguf

Attempting mmproj conversion: python3 /home/mahadeva/code/models/llama.cpp/convert_hf_to_gguf.py /home/mahadeva/.cache/huggingface/hub/models--fancyfeast--llama-joycaption-beta-one-hf-llava/snapshots/ebf414ea497a020da0f82df3913e5b6cb8e9663a --outfile /home/mahadeva/code/models/llama-joycaption-beta-one-hf-llava/llama-joycaption-beta-one-hf-llava-f32.mmproj --model-name llama-joycaption-beta-one-hf-llava --mmproj --outtype f32
mmproj conversion failed for f32: INFO:hf-to-gguf:Loading model: ebf414ea497a020da0f82df3913e5b6cb8e9663a
INFO:hf-to-gguf:Model architecture: SiglipVisionModel
ERROR:hf-to-gguf:Model SiglipVisionModel is not supported

Attempting mmproj conversion: python3 /home/mahadeva/code/models/llama.cpp/convert_hf_to_gguf.py /home/mahadeva/.cache/huggingface/hub/models--fancyfeast--llama-joycaption-beta-one-hf-llava/snapshots/ebf414ea497a020da0f82df3913e5b6cb8e9663a --outfile /home/mahadeva/code/models/llama-joycaption-beta-one-hf-llava/llama-joycaption-beta-one-hf-llava-f16.mmproj --model-name llama-joycaption-beta-one-hf-llava --mmproj --outtype f16
mmproj conversion failed for f16: INFO:hf-to-gguf:Loading model: ebf414ea497a020da0f82df3913e5b6cb8e9663a
INFO:hf-to-gguf:Model architecture: SiglipVisionModel
ERROR:hf-to-gguf:Model SiglipVisionModel is not supported

Attempting mmproj conversion: python3 /home/mahadeva/code/models/llama.cpp/convert_hf_to_gguf.py /home/mahadeva/.cache/huggingface/hub/models--fancyfeast--llama-joycaption-beta-one-hf-llava/snapshots/ebf414ea497a020da0f82df3913e5b6cb8e9663a --outfile /home/mahadeva/code/models/llama-joycaption-beta-one-hf-llava/llama-joycaption-beta-one-hf-llava-bf16.mmproj --model-name llama-joycaption-beta-one-hf-llava --mmproj --outtype bf16
mmproj conversion failed for bf16: INFO:hf-to-gguf:Loading model: ebf414ea497a020da0f82df3913e5b6cb8e9663a
INFO:hf-to-gguf:Model architecture: SiglipVisionModel
ERROR:hf-to-gguf:Model SiglipVisionModel is not supported

Attempting mmproj conversion: python3 /home/mahadeva/code/models/llama.cpp/convert_hf_to_gguf.py /home/mahadeva/.cache/huggingface/hub/models--fancyfeast--llama-joycaption-beta-one-hf-llava/snapshots/ebf414ea497a020da0f82df3913e5b6cb8e9663a --outfile /home/mahadeva/code/models/llama-joycaption-beta-one-hf-llava/llama-joycaption-beta-one-hf-llava-q8_0.mmproj --model-name llama-joycaption-beta-one-hf-llava --mmproj --outtype q8_0
mmproj conversion failed for q8_0: INFO:hf-to-gguf:Loading model: ebf414ea497a020da0f82df3913e5b6cb8e9663a
INFO:hf-to-gguf:Model architecture: SiglipVisionModel
ERROR:hf-to-gguf:Model SiglipVisionModel is not supported
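For context on why every outtype fails with the same error: the converter dispatches on the "architectures" field of the model's config.json, and if that name is not in its registry of known model classes it aborts before quantization type even matters. A minimal sketch of that dispatch logic (the REGISTERED set below is an illustrative assumption, not llama.cpp's actual registry):

```python
import json

# Illustrative subset of architecture names a converter might know about.
# The real registry lives inside convert_hf_to_gguf.py; this set is an
# assumption for demonstration only.
REGISTERED = {"LlamaForCausalLM", "LlavaForConditionalGeneration"}

def check_convertible(config_text: str) -> str:
    """Mimic the dispatch: read the first entry of "architectures"
    from config.json and report whether it is recognized."""
    arch = json.loads(config_text)["architectures"][0]
    if arch in REGISTERED:
        return f"Model {arch} is supported"
    return f"Model {arch} is not supported"

# The JoyCaption vision tower reports SiglipVisionModel, which the
# converter did not recognize at the time of this thread.
print(check_convertible('{"architectures": ["SiglipVisionModel"]}'))
```

This is why retrying with f32, f16, bf16, or q8_0 produces an identical error: the rejection happens at architecture lookup, before any weights are converted.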
