Mano200600/Fattah-2.5B-preview-Q8_0-GGUF

This model was converted to GGUF format from belal212/Fattah-2.5B-preview using llama.cpp.

Quantization: Q8_0 (maximum practical quality, retaining roughly 98%+ of the original FP16 model)

Key Features:

  • 8-bit quantization for near-lossless quality
  • Optimized for use with llama.cpp
  • Compatible with llama-server for efficient serving
  • Well suited to small models such as this one, where the larger file size is not a concern

Refer to the original model card for more details on the base model.

Usage with llama.cpp

1. Install llama.cpp:

brew install llama.cpp  # For macOS/Linux
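
If you prefer not to use Homebrew, llama.cpp can also be built from source. The steps below follow the standard upstream CMake build (a sketch; exact options and output paths can vary between releases):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# Resulting binaries such as llama-cli and llama-server land under build/bin/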

2. Run Inference:

CLI:

llama-cli --hf-repo Mano200600/Fattah-2.5B-preview-Q8_0-GGUF --hf-file fattah-2.5b-preview-q8_0.gguf -p "Your prompt here"
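
The same command accepts standard llama.cpp sampling and length flags. The variant below is a sketch with typical values: -n caps the number of generated tokens, -c sets the context size, and --temp sets the sampling temperature (flag behavior may change between releases):

llama-cli --hf-repo Mano200600/Fattah-2.5B-preview-Q8_0-GGUF \
  --hf-file fattah-2.5b-preview-q8_0.gguf \
  -p "Your prompt here" \
  -n 256 -c 2048 --temp 0.7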

Server:

llama-server --hf-repo Mano200600/Fattah-2.5B-preview-Q8_0-GGUF --hf-file fattah-2.5b-preview-q8_0.gguf -c 2048
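
Once running, llama-server exposes an OpenAI-compatible HTTP API, by default on port 8080. A minimal request against the chat completions endpoint (assuming the default host and port) looks like this:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Your prompt here"}], "temperature": 0.7}'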

For more advanced usage, refer to the llama.cpp repository.

Model details

  • Format: GGUF
  • Model size: 3B parameters
  • Architecture: qwen3
  • Quantization: 8-bit (Q8_0)