CLIP Sparse Autoencoder Checkpoint

This model is a sparse autoencoder (SAE) trained on CLIP's internal representations, specifically the layer-10 residual stream of a ViT-B/32 image encoder.

Model Details

Architecture

  • Layer: 10
  • Layer Type: hook_resid_post
  • Model: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
  • Dictionary Size: 49,152
  • Input Dimension: 768
  • Expansion Factor: 64
  • CLS Token Only: False
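
The shapes above (768-dimensional input, 64× expansion, 49,152-feature dictionary) imply a standard ReLU sparse autoencoder over residual-stream activations. A minimal NumPy sketch of that forward pass, assuming the common encoder–ReLU–decoder formulation with a decoder-bias pre-subtraction (the exact checkpoint layout may differ; `sae_forward` and the weight names are illustrative, not the checkpoint's own):

```python
import numpy as np

d_in, d_sae = 768, 49152  # input dimension x expansion factor 64

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((d_in, d_sae)) * 0.01  # encoder weights
b_enc = np.zeros(d_sae)                            # encoder bias
W_dec = rng.standard_normal((d_sae, d_in)) * 0.01  # decoder (dictionary) weights
b_dec = np.zeros(d_in)                             # decoder bias

def sae_forward(x):
    """Encode a batch of residual-stream activations into sparse features,
    then reconstruct the input from the feature activations."""
    acts = np.maximum(0.0, (x - b_dec) @ W_enc + b_enc)  # ReLU feature activations
    recon = acts @ W_dec + b_dec                         # linear reconstruction
    return acts, recon

x = rng.standard_normal((4, d_in))  # 4 example activation vectors
acts, recon = sae_forward(x)
print(acts.shape, recon.shape)      # (4, 49152) (4, 768)
```

With trained weights and an L1 penalty during training, most of the 49,152 feature activations per input would be zero.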

Training

  • Training Images: 1,271,264
  • Learning Rate: 0.014
  • L1 Coefficient: 0.0000
  • Batch Size: 4096
  • Context Size: 49

Performance Metrics

Sparsity

  • L0 (Active Features): 90.04
  • Dead Features: 0
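
L0 here is the mean number of nonzero feature activations per input; a sketch of the usual computation, using a tiny hypothetical activation matrix rather than real checkpoint outputs:

```python
import numpy as np

# Hypothetical feature activations: 2 inputs x 4 dictionary features
acts = np.array([[0.0, 1.2, 0.0, 3.4],
                 [0.5, 0.0, 0.0, 0.0]])

# Count nonzero features per input, then average over the batch
l0 = (acts != 0).sum(axis=1).mean()
print(l0)  # 1.5
```

For this checkpoint the same statistic averages to roughly 90 active features out of 49,152.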

Reconstruction

  • Explained Variance: 0.72
  • Explained Variance Std: 0.0000
  • MSE Loss: 0.0000
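
Explained variance of 0.72 means the reconstruction accounts for about 72% of the variance in the original activations. A sketch of one common definition (1 minus the ratio of residual variance to input variance, averaged over examples); the function name and the synthetic data are illustrative:

```python
import numpy as np

def explained_variance(x, recon):
    """1 - Var(residual)/Var(input), computed per example then averaged."""
    resid_var = np.var(x - recon, axis=1)
    total_var = np.var(x, axis=1)
    return float(np.mean(1.0 - resid_var / total_var))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 768))
recon = x + 0.1 * rng.standard_normal((8, 768))  # a close reconstruction
print(explained_variance(x, recon))              # close to 1.0
```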

Training Details

  • Training Duration: 6188 seconds (~1.7 hours)
  • Final Learning Rate: 0.0
  • Warm Up Steps: 200
  • Gradient Clipping: 1.0
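
The peak learning rate (0.014), 200 warm-up steps, and a final learning rate of 0.0 are consistent with a warm-up-then-decay schedule. A sketch assuming linear warm-up followed by linear decay to zero (the actual schedule used for this checkpoint is not stated in the card; `lr_at` is a hypothetical helper):

```python
def lr_at(step, total_steps, peak_lr=0.014, warmup=200):
    """Linear warm-up to peak_lr over `warmup` steps, then linear decay to 0."""
    if step < warmup:
        return peak_lr * step / warmup
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup))

# ~1,271,264 training images / batch size 4096 => ~310 steps per epoch
total = 1271264 // 4096
print(lr_at(0, total), lr_at(200, total), lr_at(total, total))  # 0.0 0.014 0.0
```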

Additional Information
