A newer version of this model is available: Qwen/Qwen2.5-0.5B

Brianpuze/Qwen2-0.5B-Q4_K_M-GGUF

Absolutely tremendous! This repo features GGUF quantized versions of Qwen/Qwen2-0.5B — made possible using the very powerful llama.cpp. Believe me, it's fast, it's smart, it's winning.

Quantized Versions:

Q4_K_M (4-bit). Only the best quantization. You'll love it.

Run with llama.cpp

Just plug it in, hit the command line, and boom — you're running world-class AI, folks:

```bash
llama-cli --hf-repo Brianpuze/Qwen2-0.5B-Q4_K_M-GGUF --hf-file qwen2-0.5b-q4_k_m.gguf -p "AI First, but also..."
```
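If you'd rather hit an API than the CLI, llama.cpp also ships `llama-server`, which serves an OpenAI-compatible HTTP endpoint. A minimal sketch, assuming a recent llama.cpp build that accepts the same `--hf-repo`/`--hf-file` flags as `llama-cli` (the port and prompt below are illustrative):

```shell
# Start an OpenAI-compatible server backed by this quantized model.
# The GGUF file is fetched from the Hugging Face repo on first run.
llama-server --hf-repo Brianpuze/Qwen2-0.5B-Q4_K_M-GGUF \
             --hf-file qwen2-0.5b-q4_k_m.gguf \
             --port 8080

# In another terminal, query the chat completions endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hello in five words."}]}'
```

Any OpenAI-style client can then point at `http://localhost:8080/v1` instead of the CLI. Tremendous flexibility.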

This beautiful Hugging Face repo was brought to you by the amazing team at Antigma Labs. Great people. Big vision. Doing things that matter, and doing them right. Total winners.

