
RT-Focuser: Real-Time Lightweight Model for Edge-side Image Deblurring

Official PyTorch and ONNX release of RT-Focuser, a lightweight image deblurring model designed for real-time edge deployment.

Model Details

Model Description

RT-Focuser is a lightweight single-input single-output image deblurring model for real-time deployment on edge devices. It is designed to balance restoration quality and inference speed, with support for both PyTorch and ONNX inference.

Highlights:

  • Lightweight: 5.85M parameters
  • Efficient: 15.76 GMACs
  • Real-time performance on desktop and mobile hardware
  • Suitable for edge-side image restoration and streaming scenarios

Available Checkpoints

This repository provides:

  • Pretrained_Weights/GoPro_RT_Focuser_Standard_256.pth
  • Pretrained_Weights/rt_focuser_wint8_afp16.onnx
  • Pretrained_Weights/rt_focuser_wint8_afp32.onnx
  • Pretrained_Weights/rt_focuser_wint8_aint8.onnx

Judging by the filenames, the ONNX exports appear to use int8 weights (wint8) combined with fp16, fp32, or int8 activations (afp16/afp32/aint8); check the project README for the authoritative naming.

Intended Use

Direct use:

  • Single-image motion deblurring
  • Real-time restoration pipelines
  • Edge/mobile/embedded deployment experiments
  • ONNX-based inference benchmarking

Out-of-scope use:

  • Medical or safety-critical decision making
  • Forensic restoration requiring guaranteed fidelity
  • General-purpose enhancement outside motion-deblur settings without validation

Training Data

The released model is trained and evaluated on the GoPro image deblurring dataset.

Performance

Reported results from the project README:

Model        PSNR   SSIM    Params  GMACs  Time (s)
RT-Focuser   30.67  0.9005  5.85M   15.76  0.006

Additional reported deployment speed:

  • iPhone 15 (CoreML): 146.72 FPS
  • RTX 3090 (PyTorch CUDA): 154.42 FPS
  • Intel Xeon CPU (OpenVINO): 22.74 FPS
  • Intel Xeon CPU (ONNX Runtime): 14.95 FPS
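Throughput numbers like these depend heavily on hardware, runtime, warm-up, and input size. To reproduce a comparable single-image FPS figure with PyTorch on CPU, a timing loop along these lines can be used (an illustrative harness, not the project's own benchmarking script):

```python
import time
import torch

def measure_fps(model: torch.nn.Module,
                input_shape=(1, 3, 256, 256),
                warmup: int = 10,
                iters: int = 50) -> float:
    """Average single-image throughput (frames per second) over `iters`
    forward passes, after `warmup` untimed passes to stabilize caches."""
    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        elapsed = time.perf_counter() - start
    return iters / elapsed
```

For GPU timing, the same loop needs `torch.cuda.synchronize()` before reading the clock, since CUDA kernels launch asynchronously.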

Risks and Limitations

  • Performance depends on blur type, resolution, and domain shift.
  • The model is primarily presented for research and engineering use.
  • Results outside the GoPro-style motion blur setting may degrade noticeably.
  • Quantized ONNX checkpoints may trade image quality for speed depending on backend and hardware.

How to Use

PyTorch

import torch
from PIL import Image
import torchvision.transforms as transforms
from model.rt_focuser_model import RT_Focuser_Standard

# Instantiate the model and load the GoPro-trained weights
model = RT_Focuser_Standard()
checkpoint = torch.load(
    "Pretrained_Weights/GoPro_RT_Focuser_Standard_256.pth",
    map_location="cpu"
)
model.load_state_dict(checkpoint, strict=True)
model.eval()

# Resize to the 256x256 training resolution and scale pixels to [0, 1]
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor()
])

image = Image.open("Sample/Blurry.png").convert("RGB")
input_tensor = transform(image).unsqueeze(0)  # add batch dim: 1x3x256x256

with torch.no_grad():
    output = model(input_tensor)

# Convert back to a PIL image (assuming the model outputs values in [0, 1])
restored = transforms.ToPILImage()(output.squeeze(0).clamp(0.0, 1.0))
restored.save("Sample/Deblurred.png")
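ONNX

For the quantized ONNX checkpoints, inference can be sketched with ONNX Runtime as follows. The pre/post-processing mirrors the PyTorch pipeline above; the NCHW float32 layout, the [0, 1] value range, and the single-output assumption are inferred from that pipeline rather than confirmed by the repository:

```python
import numpy as np
from PIL import Image

def preprocess(image: Image.Image, size: int = 256) -> np.ndarray:
    """Resize to the model's input resolution and convert to a
    normalized NCHW float32 array (assumed input layout)."""
    image = image.convert("RGB").resize((size, size))
    x = np.asarray(image, dtype=np.float32) / 255.0  # HWC, [0, 1]
    return x.transpose(2, 0, 1)[None, ...]           # 1x3xHxW

def postprocess(y: np.ndarray) -> Image.Image:
    """Clamp to [0, 1] and convert the NCHW output back to a PIL image."""
    y = np.clip(y[0].transpose(1, 2, 0), 0.0, 1.0)
    return Image.fromarray((y * 255.0 + 0.5).astype(np.uint8))

def run_onnx(model_path: str, image: Image.Image) -> Image.Image:
    """Run one deblurring pass with ONNX Runtime (import kept local so the
    helpers above work without onnxruntime installed)."""
    import onnxruntime as ort
    session = ort.InferenceSession(model_path,
                                   providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    (output,) = session.run(None, {input_name: preprocess(image)})
    return postprocess(output)

# Example (paths from this repository):
# restored = run_onnx("Pretrained_Weights/rt_focuser_wint8_afp32.onnx",
#                     Image.open("Sample/Blurry.png"))
```

Swapping in `rt_focuser_wint8_afp16.onnx` or `rt_focuser_wint8_aint8.onnx` changes only the model path; the quantized variants may trade some restoration quality for speed, as noted above.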
