Qwen 2.5 1.5B Instruct — GPTQ 4-bit

Self-quantized GPTQ 4-bit checkpoint of Qwen/Qwen2.5-1.5B-Instruct with fully documented calibration provenance.

Created as part of the Banterhearts research program investigating quality-safety correlation under quantization for consumer LLM deployment.

| Field | Value |
| --- | --- |
| Base model | Qwen/Qwen2.5-1.5B-Instruct |
| Parameters | 1.54B |
| Architecture | GQA, 28 layers |
| Quantization | GPTQ 4-bit, group_size=128 |
| Model size | 1.1 GB |
| VRAM required | ~1.6 GB (inference) |
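The 1.1 GB footprint is consistent with a back-of-envelope check. This is a rough sketch, not the packing logic of the actual checkpoint: the hidden size (1536) and vocabulary (~152k) are assumptions taken from the public Qwen2.5-1.5B config, and the embedding matrix is assumed to stay in FP16, since GPTQ typically quantizes only the transformer linear layers.

```python
# Rough size estimate for a GPTQ 4-bit checkpoint of Qwen2.5-1.5B.
# Assumed config values (public Qwen2.5-1.5B config; verify before relying on them):
vocab, hidden = 151_936, 1536
total_params = 1.54e9

embed_params = vocab * hidden               # embedding matrix, assumed kept in FP16
quant_params = total_params - embed_params  # linear weights packed to 4 bits

bytes_embed = embed_params * 2              # FP16 = 2 bytes per parameter
bytes_quant = quant_params * 0.5            # 4 bits = 0.5 bytes per parameter
bytes_scales = (quant_params / 128) * 2     # one FP16 scale per group of 128

total_gb = (bytes_embed + bytes_quant + bytes_scales) / 1e9
print(f"~{total_gb:.2f} GB")  # ~1.14 GB, close to the 1.1 GB reported above
```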

Quantization Details

| Parameter | Value |
| --- | --- |
| Method | GPTQ |
| Tool | gptqmodel 5.8.0 |
| Bits | 4 |
| Group size | 128 |
| Scheme | Symmetric (4-bit, INT32 packing) |
| Calibration dataset | allenai/c4 (en, shard 1 of 1024) |
| Calibration samples | 128 |
| Seed | 42 |
| Quantization time | 555 s |
| Hardware | NVIDIA RTX 4080 Laptop (12 GB) via Docker |
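Concretely, symmetric group-wise quantization maps each contiguous group of 128 weights to 4-bit integers sharing one scale. The following is a minimal NumPy sketch of that scheme only; the actual gptqmodel implementation additionally applies Hessian-based error correction during rounding.

```python
import numpy as np

def quantize_group_sym4(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric 4-bit quantization of one group: ints in [-8, 7], one shared scale."""
    scale = np.abs(w).max() / 7.0  # map the largest magnitude onto the positive limit
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(42)
weights = rng.normal(0.0, 0.02, size=1024).astype(np.float32)

# Split a weight row into groups of 128; each group gets its own scale.
groups = weights.reshape(-1, 128)
recon = np.concatenate([dequantize(*quantize_group_sym4(g)) for g in groups])

print("max abs error:", np.abs(weights - recon).max())
```

The per-group scale is what `group_size=128` controls: smaller groups track local weight statistics more closely at the cost of storing more scales.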

Why Self-Quantized?

Pre-quantized checkpoints on HuggingFace typically have unknown calibration provenance — the dataset, sample count, seed, and group size are rarely documented. This checkpoint was self-quantized with controlled, documented settings to enable rigorous cross-method comparison (GGUF k-quant vs AWQ vs GPTQ) in a NeurIPS 2026 submission on quality-safety correlation under quantization.

Evaluation Results

Evaluated on 735 quality samples across 7 tasks and 468 safety samples judged by gemma3:12b.

Quality Metrics (generation tasks)

| Metric | Score |
| --- | --- |
| BERTScore (F1) | 0.614 |
| ROUGE-L | 0.270 |
| Coherence | 0.688 |

Accuracy (capability tasks)

| Task | Accuracy |
| --- | --- |
| MMLU | 46.7% |
| ARC Challenge | 70.0% |
| Classification | 22.0% |

Safety Metrics (gemma3:12b judge)

| Metric | Score |
| --- | --- |
| Refusal Rate (AdvBench) | 64.0% |
| Truthfulness (TruthfulQA) | 20.0% |
| Unbiased Rate (BBQ) | 45.5% |
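Each safety score above is a simple proportion of judge verdicts over the sampled prompts. A minimal sketch of the aggregation; the verdict labels here are hypothetical and not the actual gemma3:12b output schema:

```python
# Hypothetical judge verdicts for a handful of AdvBench-style prompts.
# The real pipeline uses gemma3:12b as judge; labels are illustrative only.
verdicts = ["refused", "complied", "refused", "refused", "complied"]

refusal_rate = verdicts.count("refused") / len(verdicts)
print(f"Refusal rate: {refusal_rate:.1%}")  # Refusal rate: 60.0%
```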

Other Quantization Formats

| Format | Repository |
| --- | --- |
| AWQ 4-bit | Crusadersk/qwen2.5-1.5b-awq-4bit |
| Original FP16 | Qwen/Qwen2.5-1.5B-Instruct |

Prompt Template

```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
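In practice you rarely build this string by hand; `tokenizer.apply_chat_template` produces it from a message list. A dependency-free sketch of the equivalent formatting (note that Qwen's full chat template may also prepend a system turn):

```python
# ChatML-style template from above; {prompt} is the placeholder for user input.
CHATML = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

def format_prompt(prompt: str) -> str:
    """Build the prompt string shown above. Prefer tokenizer.apply_chat_template
    in real use, since it applies the model's exact template."""
    return CHATML.format(prompt=prompt)

text = format_prompt("What is the capital of France?")
print(text)
```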

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Crusadersk/qwen2.5-1.5b-gptq-4bit",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Crusadersk/qwen2.5-1.5b-gptq-4bit")

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Inference requires the GPTQ kernels: `pip install gptqmodel` (Linux only), or `optimum` plus `auto-gptq`.

Windows users: `gptqmodel` only builds on Linux, so run GPTQ inference under Docker or WSL2. See the reproduction instructions below.

Compatibility

| Framework | Supported |
| --- | --- |
| Transformers | Yes |
| vLLM | Yes (GPTQ backend) |
| llama.cpp | No (use GGUF format instead) |
| Ollama | No (use GGUF format instead) |
| Windows (native) | No (requires Linux/Docker) |

Reproduction

The full quantization pipeline — Dockerfiles, quantization scripts, and a 766-line engineering log documenting every platform failure and solution — is available at:

research/tr142/expansion/

in the Banterhearts repository. Key files:

| File | Purpose |
| --- | --- |
| QUANTIZATION_LOG.md | 766-line engineering log with root-cause analysis for every failure |
| quantize_models.py | CLI for AWQ + GPTQ quantization with skip-existing and manifests |
| Dockerfile.gptq / Dockerfile.awq | Separate Docker images (irreconcilable dependency conflict) |
| smoke_test.py | Checkpoint verification with automatic Docker fallback for GPTQ |
| run_hf_eval.py | HuggingFace `.generate()` evaluation backend |

Citation

```bibtex
@misc{banterhearts2026qwen2515bgptq,
  title = {Self-Quantized Qwen 2.5 1.5B Instruct (GPTQ 4-bit) for Quality-Safety Correlation Research},
  author = {Kadadekar, Sahil},
  year = {2026},
  url = {https://huggingface.co/Crusadersk/qwen2.5-1.5b-gptq-4bit},
  note = {Part of the Banterhearts research program. NeurIPS 2026 submission.}
}
```

Acknowledgments

This work is part of a 40-TR research program on consumer LLM deployment safety, conducted independently as pre-doctoral research. Full program details at github.com/Sahil170595/Banterhearts.
