MAE – Masked Autoencoder Image Reconstruction

AI Assignment 02 | Generative AI AI4009 | FAST-NUCES Spring 2026

This Space demonstrates a self-supervised Masked Autoencoder (MAE) trained on Tiny ImageNet.

How it works

  1. Upload any image.
  2. Adjust the masking ratio slider (default 75 %).
  3. Click Reconstruct to see:
    • The original (resized to 224 × 224)
    • The masked input (grey patches = hidden)
    • The MAE reconstruction
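
The masking in step 2 follows the standard MAE recipe of hiding a uniform random subset of patches. A minimal NumPy sketch (function and argument names are illustrative, not the Space's actual code):

```python
import numpy as np

def random_mask(num_patches=196, mask_ratio=0.75, seed=0):
    """Return a boolean mask over patches: True = hidden (grey), False = visible."""
    rng = np.random.default_rng(seed)
    num_masked = int(num_patches * mask_ratio)
    perm = rng.permutation(num_patches)     # random patch order
    mask = np.zeros(num_patches, dtype=bool)
    mask[perm[:num_masked]] = True          # hide the first num_masked patches
    return mask

mask = random_mask()
print(mask.sum(), (~mask).sum())  # 147 patches hidden, 49 visible
```

At the default 75 % ratio on a 14 × 14 patch grid (196 patches), 147 patches are greyed out and 49 remain visible, matching the Architecture table below.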

Architecture

| Component | Spec |
|---|---|
| Encoder | ViT-Base B/16, embed=768, depth=12, heads=12, ~86M params |
| Decoder | ViT-Small S/16, embed=384, depth=12, heads=6, ~22M params |
| Patch size | 16 × 16 |
| Visible patches | 25 % (49 of 196) |
| Loss | MSE on masked patches only |
| Training | AdamW + CosineAnnealing, mixed precision, 50 epochs |
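
The loss row ("MSE on masked patches only") means reconstruction error is averaged over hidden patches and visible patches contribute nothing. A hedged NumPy illustration, not the Space's actual training code:

```python
import numpy as np

def masked_mse(pred, target, mask):
    """MAE-style loss: mean squared error averaged over hidden patches only.

    pred, target: (num_patches, patch_dim) arrays of per-patch pixels
    mask: boolean (num_patches,), True where the patch was hidden
    """
    per_patch = ((pred - target) ** 2).mean(axis=-1)  # MSE within each patch
    return float(per_patch[mask].mean())              # average over masked patches

# Toy check: error on a visible patch is ignored; only hidden patches count.
target = np.zeros((4, 8))
pred = np.zeros((4, 8))
pred[0] += 1.0                                  # unit error on patch 0 only
mask = np.array([True, False, False, False])    # patch 0 is hidden
print(masked_mse(pred, target, mask))  # 1.0
```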

Setup

Upload mae_tiny_imagenet.pth (the trained weights) to the root of this Space.
The file is generated by running the Kaggle notebook AI_ASS02_XXF_YYYY.ipynb.
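
A minimal loading sketch for the Space's app code (the `load_state` helper and its deferred `torch` import are assumptions for illustration, not this Space's actual implementation):

```python
import os

def load_state(weights_path="mae_tiny_imagenet.pth"):
    """Return the checkpoint if the weights file was uploaded, else None."""
    if not os.path.exists(weights_path):
        return None  # weights not uploaded to the Space root yet
    import torch  # deferred so the existence check runs without torch installed
    return torch.load(weights_path, map_location="cpu")
```

The app would then pass the returned object to `model.load_state_dict(...)`, assuming the checkpoint stores a plain state dict.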
