Quantization
Base Model: Qwen/Qwen3-4B-Instruct-2507
Quantization Level: Q4_K_M
Original repo: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507
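Q4_K_M stores weights in 4-bit blocks with per-block scale factors so they can be reconstructed to approximate float values at inference time. The sketch below illustrates the general idea with simple symmetric per-block 4-bit quantization; it is a hypothetical, simplified illustration, not the actual GGUF Q4_K_M layout (which uses super-blocks with separate scale and min values).

```python
import numpy as np

def quantize_q4_blocks(w, block=32):
    """Illustrative symmetric 4-bit block quantization (not the real Q4_K_M format)."""
    w = w.reshape(-1, block)
    # one scale per block, mapping the largest magnitude to the 4-bit range
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from 4-bit codes and scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s = quantize_q4_blocks(w)
w_hat = dequantize(q, s)
err = np.abs(w - w_hat).max()  # bounded by half a scale step per block
```

In practice the GGUF file produced here is consumed by llama.cpp-compatible runtimes, which perform this reconstruction on the fly; the storage cost drops to roughly 4 bits per weight plus the per-block metadata.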
Model tree for SonyaCat/Qwen3-4B-Instruct-2507-q4-k-m-gguf
Base model: Qwen/Qwen3-4B-Instruct-2507