yolov12n-qat

This repository contains a quantized model artifact produced as part of a graduation project.

Model Details

  • Technique: Quantization-Aware Training (QAT)
  • Quantization: QAT fine-tuned
  • Base model: ultralytics/yolo12n
  • Export date: 2026-03-24

Benchmark Summary

  Metric                   Original   Quantized
  Model size (MB)          N/A        5.26
  Latency mean (ms)        N/A        15.60
  FPS                      N/A        64.12
  ONNX latency mean (ms)   N/A        14.75
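As a sanity check, the FPS row follows directly from the mean latency (FPS ≈ 1000 / latency_ms); the small gap versus the table's 64.12 comes from the latency being rounded to two decimals:

```python
# Sanity check: FPS implied by the table's mean PyTorch latency.
latency_ms = 15.60          # "Latency mean (ms)" from the table
fps = 1000.0 / latency_ms   # frames per second at that latency
print(round(fps, 2))        # prints 64.1; the table's 64.12 used the unrounded latency
```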

Comparison Highlights

  • Speedup: N/A (no original-model baseline was benchmarked)
  • Memory reduction: N/A
  • Disk/model size reduction: N/A

Benchmark Notes

  • The numbers above are copied from the local benchmark_results JSON in this project.

Local Source

  • Quantized folder: Basic-Techniques/QAT-Quantization-Aware-Training/quantized/yolov12n_qat
  • Benchmark JSON: Basic-Techniques/QAT-Quantization-Aware-Training/benchmark_results/qat_benchmark_results.json

Usage

Load the artifact with a library and runtime that support its quantized format; since an ONNX export was benchmarked, ONNX Runtime is one option.
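A minimal, hypothetical sketch for running the ONNX export with ONNX Runtime. The model filename, 640x640 input size, and preprocessing are assumptions (YOLO defaults), not details taken from this repo:

```python
# Hypothetical usage sketch: the model path and input size are assumptions.
import os
import numpy as np

def preprocess(image: np.ndarray, size: int = 640) -> np.ndarray:
    """Resize an HWC uint8 image (nearest neighbor) and normalize to NCHW float32 in [0, 1]."""
    h, w = image.shape[:2]
    # Nearest-neighbor resize via index sampling (keeps this sketch dependency-free).
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = image[ys[:, None], xs[None, :]]                 # (size, size, 3)
    chw = resized.astype(np.float32).transpose(2, 0, 1) / 255.0
    return chw[None, ...]                                     # (1, 3, size, size)

MODEL_PATH = "yolov12n_qat.onnx"  # assumed filename; adjust to the actual exported artifact

if os.path.exists(MODEL_PATH):
    import onnxruntime as ort
    session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
    dummy = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    inputs = {session.get_inputs()[0].name: preprocess(dummy)}
    outputs = session.run(None, inputs)  # raw detection tensors; decode per the export's head
```

The decoding of the output tensors (boxes, scores, classes) depends on how the model was exported, so it is left out here.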

Limitations

  • This model card is auto-generated from project files.
  • You should validate quality, safety, and license compatibility before public release.