# Face Detection

Exported PyTorch model (`.pt2`) for use with facetorch.

## Model Details
| Property | Value |
|---|---|
| Task | Face Detection |
| Architecture | RetinaFace with ResNet-50 backbone |
| Format | torch.export (`.pt2`); no model source code needed |
| Dynamic shapes | Batch 1-32; height and width multiples of 32 in [64, 2048] |
| Input | RGB image; spatial dims must be multiples of 32 |
| Output | `(bbox_regressions, classifications, landmark_regressions)` |
## Original Work

This model is based on biubug6/Pytorch_Retinaface. Weights converted and exported by facetorch.
## Dynamic Shape Export

The model is exported with derived dimensions using `torch.export.Dim`:

- Batch: 1-32
- Height: `32 * h_base`, where `h_base` is in [2, 64] (i.e., 64-2048 in steps of 32)
- Width: `32 * w_base`, where `w_base` is in [2, 64] (i.e., 64-2048 in steps of 32)

The multiples-of-32 constraint matches the model's stride chain (8, 16, 32), ensuring all feature map dimensions are integral.
## Usage

```python
import torch

# Load the exported program; no model class definition is needed
ep = torch.export.load("model.pt2")
model = ep.module()

# Inference (spatial dims must be multiples of 32)
x = torch.randn(1, 3, 640, 480)
output = model(x)
```
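Real images rarely arrive with dimensions that are already multiples of 32, so a small padding helper can round H and W up before inference. This helper is a sketch, not part of the facetorch API:

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(x: torch.Tensor, multiple: int = 32) -> torch.Tensor:
    """Zero-pad the bottom/right of an NCHW tensor up to the next multiple."""
    h, w = x.shape[-2:]
    pad_h = (-h) % multiple
    pad_w = (-w) % multiple
    # F.pad takes (left, right, top, bottom) for the last two dims
    return F.pad(x, (0, pad_w, 0, pad_h))

x = torch.randn(1, 3, 600, 450)
padded = pad_to_multiple(x)  # shape becomes (1, 3, 608, 480)
```

Because the padding is applied only on the bottom and right, predicted box coordinates on the padded image map directly back to the original image without an offset.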
Or via facetorch:

```python
from facetorch import FaceAnalyzer
from omegaconf import OmegaConf

cfg = OmegaConf.load("conf/config.yaml")
analyzer = FaceAnalyzer(cfg.analyzer)
response = analyzer.run(path_image="face.jpg")
```