# ViT Base Patch16-224 – BonfyreFPQ v12
Native .fpq v12 quantized weights for google/vit-base-patch16-224.
## Model Info
| Property | Value |
|---|---|
| Architecture | Vision Transformer (Image Classification) |
| Parameters | 86M |
| Original Size | 330 MB (safetensors) |
| FPQ v12 Size | 78 MB |
| Compression | 4.2× |
| Avg Cosine Similarity | 0.999733 |
| Worst Cosine Similarity | 0.998490 |
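The cosine figures above measure how closely each decoded tensor matches its original. A minimal sketch of that per-tensor check, with random arrays standing in for real weights (the exact tensors and tolerances used for the reported numbers are not documented here):

```python
import numpy as np

def tensor_cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two weight tensors, flattened to vectors."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in weights: a 768x768 layer plus mild quantization-like noise.
rng = np.random.default_rng(0)
w = rng.standard_normal((768, 768)).astype(np.float32)
w_q = w + 0.001 * rng.standard_normal(w.shape).astype(np.float32)

print(tensor_cosine(w, w))    # identical tensors score 1.0 (up to rounding)
print(tensor_cosine(w, w_q))  # small noise keeps the score near 1.0
```

An identical pair scores 1.0; the 0.998–0.999 range reported above indicates quantization error well below the signal magnitude.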
## Format
BonfyreFPQ v12 native format: rANS entropy-coded E8 lattice coordinates with 6-bit packed tiles and FP16 scales.
## Decode to safetensors
```shell
bonfyre-fpqx decode vit-base-patch16-224-v12-fpq.fpq output.safetensors
```
## Source
Converted from google/vit-base-patch16-224 using BonfyreFPQ v12.