| """ |
| LiqMamba: Liquid-Mamba Image Generator |
| |
| A novel lightweight architecture combining: |
- Closed-form Continuous-time (CfC) networks, the closed-form variant of
  liquid time-constant networks, for adaptive continuous-time gating
- Mamba-2 State Space Duality (SSD) for linear-time sequence processing
- Flow Matching for stable image generation (objective sketched below)
- Multi-directional 2D scans for image understanding (scan sketch below)
- ConFIG gradient stabilization (from PINN research)

Key innovations:
1. CfC-Gated Mamba blocks: Replace static nonlinearities with learnable
   continuous-time dynamics whose time constants adapt per token (a
   minimal CfC gate is sketched below)
2. Liquid State Modulation: The SSM state transition is modulated by CfC
   dynamics, giving the model ODE-inspired expressivity
3. Physics-informed training: ConFIG composes per-loss gradients into a
   single conflict-free update, preventing competing loss terms from
   destabilizing training (sketched below)
4. Extremely lightweight: ~25M parameters, trainable on the Colab free tier

Paper References:
- CfC: "Closed-form Continuous-time Neural Networks" (Hasani et al., 2021)
- Mamba-2: "Transformers are SSMs: Generalized Models and Efficient
  Algorithms Through Structured State Space Duality" (Dao & Gu, 2024)
- DiM: "DiM: Diffusion Mamba for Efficient High-Resolution Image
  Synthesis" (Teng et al., 2024)
- Flow Matching: "Flow Matching for Generative Modeling" (Lipman et al., 2023)
- ConFIG: "ConFIG: Towards Conflict-free Training of Physics Informed
  Neural Networks" (Liu et al., 2024)
"""
|
|
__version__ = "0.1.0"
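

# ---------------------------------------------------------------------------
# Illustrative sketches. These are NOT the package's actual implementation;
# they are minimal, hedged examples of the techniques named in the docstring.
#
# Sketch 1 (innovations 1-2): a minimal CfC gate following the closed-form
# continuous-time update of Hasani et al. (2021),
#   x(t) = sigma(-f(x) * t) * g(x) + (1 - sigma(-f(x) * t)) * h(x).
# The name `CfCGate` and the tanh branch activations are assumptions for
# illustration; feeding the SSM's state transition through such a gate is
# one way to realize the "Liquid State Modulation" idea.

import torch
import torch.nn as nn


class CfCGate(nn.Module):
    """Hypothetical closed-form continuous-time gate (sketch)."""

    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Linear(dim, dim)  # sets the per-unit decay rate
        self.g = nn.Linear(dim, dim)  # branch that dominates at small t
        self.h = nn.Linear(dim, dim)  # branch that dominates at large t

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # t is a per-token "elapsed time", broadcastable to x; predicting t
        # per token is what makes the dynamics adaptive rather than static.
        gate = torch.sigmoid(-self.f(x) * t)
        return gate * torch.tanh(self.g(x)) + (1.0 - gate) * torch.tanh(self.h(x))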
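

# Sketch 2: the flow-matching objective with a linear (rectified) path.
# `model(x_t, t)` is assumed to predict a velocity field; along the straight
# path from noise x0 to data x1 the target velocity is x1 - x0 (Lipman et
# al., 2023). A sketch under those assumptions, not the exact loss used here.

def flow_matching_loss(model, x1: torch.Tensor) -> torch.Tensor:
    x0 = torch.randn_like(x1)                        # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1.0 - t) * x0 + t * x1                     # point on the straight path
    v_target = x1 - x0                               # constant target velocity
    return torch.mean((model(xt, t) - v_target) ** 2)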
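

# Sketch 3: multi-directional 2D scans. SSMs process tokens causally in 1D,
# so image features are typically flattened in several orders and the
# per-direction outputs merged (as in DiM-style models). The specific
# four-direction choice below is an assumption for illustration.

def four_way_scans(x: torch.Tensor) -> torch.Tensor:
    """(B, C, H, W) -> (B, 4, C, H*W): row-major, reversed, column-major, reversed."""
    row = x.flatten(2)                     # left-to-right, top-to-bottom
    col = x.transpose(2, 3).flatten(2)     # top-to-bottom, left-to-right
    return torch.stack([row, row.flip(-1), col, col.flip(-1)], dim=1)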
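

# Sketch 4: the ConFIG conflict-free update (Liu et al., 2024) used to
# combine competing loss terms. The formulation below (pseudoinverse of the
# stacked unit gradients, rescaled by the summed projections) is one reading
# of the paper's method and omits its momentum variant; treat it as a sketch.

def config_update(grads: list[torch.Tensor]) -> torch.Tensor:
    """Combine per-loss gradients (same shape) into one conflict-free step."""
    G = torch.stack([g.flatten() for g in grads])              # (m, n)
    G_hat = G / G.norm(dim=1, keepdim=True).clamp_min(1e-12)   # unit gradients
    # Direction with (least-squares) equal unit projection onto every gradient.
    u = torch.linalg.pinv(G_hat) @ torch.ones(len(grads), device=G.device)
    u_hat = u / u.norm().clamp_min(1e-12)
    # Rescale by the summed projections, then restore the parameter shape.
    return ((G @ u_hat).sum() * u_hat).reshape(grads[0].shape)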