
Conditional GAN β€” MNIST Digit Generator

Model Description

A Conditional Generative Adversarial Network (cGAN) trained to generate handwritten digit images conditioned on a target label (0–9).

Both the Generator and Discriminator receive the digit label as input via a learned embedding, allowing the Generator to produce class-specific images.

Architecture

Generator

  • Input: noise vector z (latent dim = 100) + label embedding (dim = 10)
  • Fully connected layers: 110 → 256 → 512 → 1024 → 784
  • BatchNorm + LeakyReLU activations, Tanh output
  • Output: (1, 28, 28) grayscale image
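The generator described above can be sketched in PyTorch as follows. The layer widths match the card; the LeakyReLU slope (0.2), the BatchNorm placement, and the embedding layer type are assumptions where the card does not specify them:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of the cGAN generator: noise + label embedding -> 28x28 image."""
    def __init__(self, latent_dim=100, num_classes=10, embed_dim=10, img_dim=784):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + embed_dim, 256),   # 110 -> 256
            nn.BatchNorm1d(256), nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.BatchNorm1d(512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1024),
            nn.BatchNorm1d(1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, img_dim),
            nn.Tanh(),                                # pixel values in [-1, 1]
        )

    def forward(self, z, labels):
        # Condition on the class by concatenating the label embedding to z.
        x = torch.cat([z, self.label_embed(labels)], dim=1)
        return self.net(x).view(-1, 1, 28, 28)

g = Generator()
g.eval()  # use running BatchNorm stats for a small demo batch
out = g(torch.randn(2, 100), torch.tensor([3, 7]))  # shape: (2, 1, 28, 28)
```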

Discriminator

  • Input: flattened image (784) + label embedding (dim = 10)
  • Fully connected layers: 794 → 1024 → 512 → 256 → 1
  • LeakyReLU activations with Dropout, Sigmoid output
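A matching sketch of the discriminator, which scores an (image, label) pair as real or fake. The dropout probability (0.3) and LeakyReLU slope (0.2) are assumptions not stated on the card:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Sketch of the cGAN discriminator: flattened image + label embedding -> realness score."""
    def __init__(self, num_classes=10, embed_dim=10, img_dim=784):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(img_dim + embed_dim, 1024),  # 794 -> 1024
            nn.LeakyReLU(0.2), nn.Dropout(0.3),
            nn.Linear(1024, 512),
            nn.LeakyReLU(0.2), nn.Dropout(0.3),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2), nn.Dropout(0.3),
            nn.Linear(256, 1),
            nn.Sigmoid(),                          # probability the pair is real
        )

    def forward(self, imgs, labels):
        # Flatten the image and concatenate the label embedding.
        x = torch.cat([imgs.view(imgs.size(0), -1), self.label_embed(labels)], dim=1)
        return self.net(x)

d = Discriminator()
d.eval()
score = d(torch.randn(4, 1, 28, 28), torch.tensor([0, 1, 2, 3]))  # shape: (4, 1)
```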

Training

  • Dataset: MNIST (60,000 training images)
  • Epochs: 200
  • Batch size: 64
  • Optimizer: Adam (lr = 0.0002, β₁ = 0.5, β₂ = 0.999)
  • Loss: Binary Cross-Entropy
  • Hardware: Apple MPS (Metal Performance Shaders)
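One training iteration with the hyperparameters above can be sketched as follows. The tiny stand-in networks and the random batch are placeholders so the example is self-contained; the actual run uses the full models and an MNIST DataLoader:

```python
import torch
import torch.nn as nn

# Minimal stand-ins so the step runs standalone; the real models are deeper.
class G(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(10, 10)
        self.net = nn.Sequential(nn.Linear(110, 784), nn.Tanh())
    def forward(self, z, y):
        return self.net(torch.cat([z, self.emb(y)], 1)).view(-1, 1, 28, 28)

class D(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(10, 10)
        self.net = nn.Sequential(nn.Linear(794, 1), nn.Sigmoid())
    def forward(self, x, y):
        return self.net(torch.cat([x.view(x.size(0), -1), self.emb(y)], 1))

gen_net, disc_net = G(), D()
opt_g = torch.optim.Adam(gen_net.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(disc_net.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCELoss()

real, y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))  # stand-in batch
valid, fake = torch.ones(64, 1), torch.zeros(64, 1)

# Discriminator step: push real pairs toward 1, generated pairs toward 0.
z = torch.randn(64, 100)
generated = gen_net(z, y)
d_loss = bce(disc_net(real, y), valid) + bce(disc_net(generated.detach(), y), fake)
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator into scoring generated pairs as real.
g_loss = bce(disc_net(generated, y), valid)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Note the `detach()` in the discriminator step: it stops the discriminator loss from propagating gradients into the generator.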

Usage

from huggingface_hub import hf_hub_download
import torch
from cgan_model import Generator

model = Generator(latent_dim=100, num_classes=10)
weights_path = hf_hub_download(repo_id="beatrizfarias/mnist-conditional-gan", filename="mnist_cgan_generator.pth")
model.load_state_dict(torch.load(weights_path, map_location="cpu"))
model.eval()

z = torch.randn(1, 100)
y = torch.tensor([7])  # generate a "7"
with torch.no_grad():
    img = model(z, y)  # shape: (1, 1, 28, 28)
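Because of the Tanh output layer, the generated tensor lies in [-1, 1] (assuming MNIST was normalized to that range during training, the usual cGAN convention). Rescale it to [0, 1] before displaying or saving; a bounded stand-in tensor replaces the real model output here:

```python
import torch

# Stand-in for `img = model(z, y)` above; the real output is also Tanh-bounded.
img = torch.tanh(torch.randn(1, 1, 28, 28))

# Map [-1, 1] -> [0, 1] for viewing or saving (e.g. torchvision.utils.save_image).
img01 = (img + 1) / 2
```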

Results

All 10 digit classes are clearly recognizable and well-formed after 200 epochs of training.
