pseudoz2-tuned-qwen3.5-4b: GGUF

This model was finetuned and converted to GGUF format using Unsloth.

Example usage:

  • For text-only LLMs: llama-cli -hf indus7ry/pseudoz2-tuned-qwen3.5-4b --jinja
  • For multimodal models: llama-mtmd-cli -hf indus7ry/pseudoz2-tuned-qwen3.5-4b --jinja

Available model files:

  • Qwen3.5-4B.Q4_0.gguf
  • Qwen3.5-4B.BF16-mmproj.gguf
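
The files above can also be fetched and run locally instead of using the `-hf` remote-download shorthand. A minimal sketch, assuming `huggingface-cli` and llama.cpp's `llama-cli` are installed and on your PATH (the prompt text is a placeholder):

```shell
# Download the 4-bit quant from this repo into the current directory
huggingface-cli download indus7ry/pseudoz2-tuned-qwen3.5-4b \
  Qwen3.5-4B.Q4_0.gguf --local-dir .

# Run it with llama.cpp; --jinja applies the model's bundled chat template
llama-cli -m Qwen3.5-4B.Q4_0.gguf --jinja -p "Hello"
```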
Model details:

  • Format: GGUF
  • Model size: 4B params
  • Architecture: qwen35
  • Quantization: 4-bit (Q4_0)

Model tree for indus7ry/pseudoz2-tuned-qwen3.5-4b:

  • Base model: Qwen/Qwen3.5-4B (finetuned, then quantized to GGUF as this model)