# Qwen3.5-0.8B GGUF

Q4_K_M quantized GGUF of Qwen3.5-0.8B.
## Model Details
- Base Model: Qwen/Qwen3.5-0.8B
- Quantization: Q4_K_M
- Format: GGUF
- Size: 504 MB
## Usage

```shell
./llama-cli -m qwen3.5-0.8b.gguf -p "Hello"
```
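Before loading the file, it can be useful to sanity-check that the download is actually a GGUF file. A minimal sketch of parsing the fixed GGUF header (assuming the documented layout: 4-byte magic `GGUF`, uint32 version, uint64 tensor count, uint64 metadata key-value count, all little-endian) might look like:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the first 24 bytes of a file.

    Layout (little-endian): 4-byte magic "GGUF", uint32 version,
    uint64 tensor count, uint64 metadata KV count.
    """
    magic, version = struct.unpack_from("<4sI", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    tensor_count, kv_count = struct.unpack_from("<QQ", data, 8)
    return {"version": version,
            "tensor_count": tensor_count,
            "kv_count": kv_count}

# Example: check the local model file (path assumed from the usage command above)
# with open("qwen3.5-0.8b.gguf", "rb") as f:
#     print(read_gguf_header(f.read(24)))
```

This only validates the container format; the actual quantization type (Q4_K_M) is stored per-tensor in the metadata and tensor-info sections that follow the header.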
## Note
This is a standard Q4_K_M quantization, not PreSINQ-optimized.
## Source
Original quantization by diodel.