
These are some of the Q4_K_4-quantized models I made personally. They work ONLY with Ampere-optimized builds of llama.cpp / ollama and will not run on anything else.

- Format: GGUF
- Model size: 8B params
- Architecture: qwen2