This is a quantized INT4 model based on Qwen-3.5-4B, intended for deployment on CPU devices.
Note: This is an unofficial version, intended only for testing and development.
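To illustrate what INT4 quantization means in practice, here is a minimal NumPy sketch of symmetric per-tensor INT4 quantization (floats mapped to integer codes in [-8, 7] with a single scale). This is only a conceptual example; it is not the actual quantization scheme or kernel used by this model.

```python
import numpy as np

def quantize_int4(weights):
    # Symmetric per-tensor quantization: one scale, codes in [-8, 7].
    scale = np.abs(weights).max() / 7.0
    codes = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return codes, scale

def dequantize_int4(codes, scale):
    # Recover approximate float weights from the 4-bit codes.
    return codes.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.7], dtype=np.float32)
codes, scale = quantize_int4(w)
w_hat = dequantize_int4(codes, scale)
print(codes)                      # integer codes in [-8, 7]
print(np.abs(w - w_hat).max())    # worst-case reconstruction error
```

Real INT4 deployments additionally pack two 4-bit codes per byte and typically use per-group (not per-tensor) scales, which keeps the reconstruction error much smaller at 4-bit precision.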