llama-3.2-3b-gptq-4bit

This repository contains a quantized model artifact produced as part of a graduation project.

Model Details

  • Technique: GPTQ
  • Quantization: INT4
  • Base model: meta-llama/Llama-3.2-3B-Instruct
  • Export date: 2026-03-23

Benchmark Summary

Metric                      Original    Quantized
Disk size (GB)              5.98        2.85
Avg inference time (s)      30.91       41.11
Throughput (tokens/sec)     3.23        2.43
GPU memory (MB)             4400.24     2173.29

Comparison Highlights

  • Throughput ratio: 0.75x (the quantized model is slower on this hardware)
  • GPU memory reduction: 50.60%
  • Disk/model size reduction: 52.30%
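The highlights above follow directly from the benchmark table. A minimal check, using the values from the table in this card (small differences in the last decimal place come from rounding):

```python
# Reproduce the comparison highlights from the benchmark table.
original = {"disk_gb": 5.98, "tokens_per_sec": 3.23, "gpu_mem_mb": 4400.24}
quantized = {"disk_gb": 2.85, "tokens_per_sec": 2.43, "gpu_mem_mb": 2173.29}

# Throughput ratio: quantized tokens/sec over original tokens/sec.
speed_ratio = quantized["tokens_per_sec"] / original["tokens_per_sec"]

# Reductions: how much smaller the quantized footprint is, in percent.
mem_reduction = 100 * (1 - quantized["gpu_mem_mb"] / original["gpu_mem_mb"])
disk_reduction = 100 * (1 - quantized["disk_gb"] / original["disk_gb"])

print(f"Throughput ratio: {speed_ratio:.2f}x")    # 0.75x
print(f"Memory reduction: {mem_reduction:.2f}%")  # 50.61%
print(f"Disk reduction:   {disk_reduction:.2f}%") # 52.34%
```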

Benchmark Notes

  • Numbers above are copied from the local benchmark_results JSON in this project.

Local Source

  • Quantized folder: Advanced-Techniques/GPTQ/quantized/llama3.2-3b-gptq-4bit
  • Benchmark JSON: Advanced-Techniques/GPTQ/benchmark_results/gptq_benchmark_results.json

Usage

Load the model with a runtime that supports GPTQ checkpoints, e.g. Hugging Face transformers with a GPTQ kernel backend (such as gptqmodel or auto-gptq) installed.
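A minimal loading sketch, assuming the checkpoint is published on the Hugging Face Hub under the repo id shown in this card and that a GPTQ backend (e.g. gptqmodel or auto-gptq) is installed alongside transformers; adjust the repo id if you host the artifact elsewhere:

```python
# Sketch: load the GPTQ checkpoint via transformers.
# Assumes a GPTQ kernel backend (gptqmodel or auto-gptq) is installed;
# the repo id is taken from this card and may need adjusting.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "emreyigitozturk/llama-3.2-3b-gptq-4bit"


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion from the quantized model."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Explain GPTQ quantization in one sentence."))
```

Note that GPTQ INT4 kernels typically require a CUDA-capable GPU; CPU-only inference with this artifact may be unsupported or very slow depending on the backend.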

Limitations

  • This model card is auto-generated from project files.
  • You should validate quality, safety, and license compatibility before public release.