# Llama 3.2 1B Instruct — GPTQ 4-bit

Self-quantized GPTQ 4-bit checkpoint of meta-llama/Llama-3.2-1B-Instruct with fully documented calibration provenance.

Created as part of the Banterhearts research program investigating quality-safety correlation under quantization for consumer LLM deployment.

- **Base model:** meta-llama/Llama-3.2-1B-Instruct
- **Parameters:** 1.24B
- **Architecture:** GQA, 22 layers
- **Quantization:** GPTQ 4-bit, group_size=128
- **Model size:** 1.0 GB
- **VRAM required:** ~1.5 GB (inference)
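The 1.0 GB figure is roughly what back-of-envelope arithmetic predicts. A minimal sketch, where the parameter split and the BF16 embedding are assumptions rather than values read from the checkpoint (zero-points and other small tensors are ignored):

```python
# Back-of-envelope size check for the 4-bit checkpoint above.
# Assumed (not read from the checkpoint): the tied embedding stays
# in BF16; everything else is packed to 4 bits with one FP16 scale
# per 128-weight group.
TOTAL_PARAMS = 1.24e9
EMBED_PARAMS = 128_256 * 2_048   # vocab_size * hidden_size (tied embedding)
GROUP_SIZE = 128

quantized = TOTAL_PARAMS - EMBED_PARAMS
weight_bytes = quantized * 4 / 8           # 4-bit packed weights
scale_bytes = quantized / GROUP_SIZE * 2   # one FP16 scale per group
embed_bytes = EMBED_PARAMS * 2             # BF16 embedding

total_gb = (weight_bytes + scale_bytes + embed_bytes) / 1e9
print(f"~{total_gb:.2f} GB")
```

This lands near the reported 1.0 GB, which is why a 4-bit 1.24B-parameter model is not simply 0.62 GB: the high-precision embedding dominates the overhead.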

## Quantization Details

| Parameter | Value |
|---|---|
| Method | GPTQ |
| Tool | gptqmodel 5.8.0 |
| Bits | 4 |
| Group size | 128 |
| Scheme | Symmetric (4-bit, INT32 packing) |
| Calibration dataset | allenai/c4 (en, shard 1 of 1024) |
| Calibration samples | 128 |
| Seed | 42 |
| Quantization time | 361 s |
| Hardware | NVIDIA RTX 4080 Laptop (12 GB) via Docker |

## Why Self-Quantized?

Pre-quantized checkpoints on HuggingFace typically have unknown calibration provenance — the dataset, sample count, seed, and group size are rarely documented. This checkpoint was self-quantized with controlled, documented settings to enable rigorous cross-method comparison (GGUF k-quant vs AWQ vs GPTQ) in a NeurIPS 2026 submission on quality-safety correlation under quantization.
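With the dataset, sample count, and seed all pinned, the calibration draw is bit-for-bit reproducible. A minimal sketch of that seeded sampling — the corpus here is a stand-in list, whereas the real pipeline reads allenai/c4 (en, shard 1 of 1024):

```python
# Illustration of a deterministic calibration draw: same corpus,
# same seed -> identical calibration set on every run.
import random

def sample_calibration(corpus, n_samples=128, seed=42):
    rng = random.Random(seed)  # local RNG; no global state touched
    return rng.sample(corpus, n_samples)

corpus = [f"doc-{i}" for i in range(10_000)]  # stand-in for C4 documents
print(sample_calibration(corpus) == sample_calibration(corpus))  # True
```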

## Evaluation Results

Evaluated on 735 quality samples across 7 tasks and 468 safety samples judged by gemma3:12b.

### Quality Metrics (generation tasks)

| Metric | Score |
|---|---|
| BERTScore (F1) | 0.731 |
| ROUGE-L | 0.550 |
| Coherence | 0.763 |

### Accuracy (capability tasks)

| Task | Accuracy |
|---|---|
| MMLU | 33.3% |
| ARC Challenge | 37.0% |
| Classification | 72.0% |

### Safety Metrics (gemma3:12b judge)

| Metric | Score |
|---|---|
| Refusal Rate (AdvBench) | 59.0% |
| Truthfulness (TruthfulQA) | 26.0% |
| Unbiased Rate (BBQ) | 38.4% |

## Other Quantization Formats

| Format | Repository |
|---|---|
| AWQ 4-bit | Crusadersk/llama3.2-1b-awq-4bit |
| Original FP16 | meta-llama/Llama-3.2-1B-Instruct |

## Prompt Template

```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
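Filled in by hand, the template amounts to simple string substitution. The helper below is only an illustration — in practice, `tokenizer.apply_chat_template` produces this formatting for you:

```python
# Hypothetical helper mirroring the template above; prefer
# tokenizer.apply_chat_template in real code.
TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

def build_prompt(prompt: str) -> str:
    return TEMPLATE.format(prompt=prompt)

print(build_prompt("What is the capital of France?"))
```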

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loading a GPTQ checkpoint through Transformers requires a GPTQ
# backend (gptqmodel or auto-gptq) to be installed — see below.
model = AutoModelForCausalLM.from_pretrained(
    "Crusadersk/llama3.2-1b-gptq-4bit",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Crusadersk/llama3.2-1b-gptq-4bit")

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

**Inference requirements:** `pip install gptqmodel` (Linux only) or `optimum` + `auto-gptq`.

**Windows users:** GPTQ inference requires `gptqmodel`, which only builds on Linux. Use Docker or WSL2. See the reproduction instructions below.

## Compatibility

| Framework | Supported |
|---|---|
| Transformers | Yes |
| vLLM | Yes (GPTQ backend) |
| llama.cpp | No (use GGUF format instead) |
| Ollama | No (use GGUF format instead) |
| Windows (native) | No — requires Linux/Docker |

## Reproduction

The full quantization pipeline — Dockerfiles, quantization scripts, and a 766-line engineering log documenting every platform failure and solution — is available at:

`research/tr142/expansion/`

in the Banterhearts repository. Key files:

| File | Purpose |
|---|---|
| `QUANTIZATION_LOG.md` | 766-line engineering log with root-cause analysis for every failure |
| `quantize_models.py` | CLI for AWQ + GPTQ quantization with skip-existing and manifests |
| `Dockerfile.gptq` / `Dockerfile.awq` | Separate Docker images (irreconcilable dependency conflict) |
| `smoke_test.py` | Checkpoint verification with automatic Docker fallback for GPTQ |
| `run_hf_eval.py` | HuggingFace `.generate()` evaluation backend |

## Citation

```bibtex
@misc{banterhearts2026llama321bgptq,
  title = {Self-Quantized Llama 3.2 1B Instruct (GPTQ 4-bit) for Quality-Safety Correlation Research},
  author = {Kadadekar, Sahil},
  year = {2026},
  url = {https://huggingface.co/Crusadersk/llama3.2-1b-gptq-4bit},
  note = {Part of the Banterhearts research program. NeurIPS 2026 submission.}
}
```

## Acknowledgments

This work is part of a 40-TR research program on consumer LLM deployment safety, conducted independently as pre-doctoral research. Full program details at github.com/Sahil170595/Banterhearts.
