Jeethu/gemma-4-E4B-it-PARO

Pairwise Rotation Quantization for Efficient Reasoning LLM Inference


ParoQuant is a state-of-the-art INT4 quantization method for LLMs. It closes the accuracy gap with FP16 while running at near-AWQ speed, and supports NVIDIA GPUs (vLLM, Transformers) and Apple Silicon (MLX). For more information, see https://github.com/z-lab/paroquant.

Jeethu/gemma-4-E4B-it-PARO is a 4-bit quantization of google/gemma-4-E4B-it produced with ParoQuant.
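A minimal usage sketch with the Hugging Face Transformers API. This assumes the checkpoint loads through the standard `AutoModelForCausalLM` path once the ParoQuant runtime is installed; the exact integration steps are documented in the ParoQuant repository, and the `generate_text` helper below is illustrative, not part of the library.

```python
# Illustrative sketch: loading this 4-bit checkpoint via Transformers.
# Assumes the ParoQuant package is installed so the quantized weights are
# recognized; see https://github.com/z-lab/paroquant for the exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Jeethu/gemma-4-E4B-it-PARO"

def generate_text(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the quantized model and generate a completion (needs a GPU)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

For vLLM or MLX backends, refer to the ParoQuant repository for the supported loading paths.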

Model size: 5B params
Tensor types: I32 · F16 · I16
Format: Safetensors
Downloads last month: 15