# LuminaRS — Lightweight Recursive Art Image Generator

A novel ~90M-parameter image generation model for art/illustration that runs on mobile devices (2-4 GB VRAM).

## Why LuminaRS?

| Problem | Current Solutions | LuminaRS |
|---------|------------------|----------|
| Heavy models (6-12 GB) | SDXL, Flux | ~90M params, <500 MB |
| Can't run on mobile | Quantized SD (quality loss) | Designed small from scratch |
| Poor prompt adherence | SD 1.5 | TRM-style recursive reasoning |
| No art specialization | General photo models | Art-focused training stages |
| Unstable training | Diffusion (score matching) | Flow matching (stable ODE) |

## Architecture (Novel Contributions)

### 1. Recursive Shared-Weight Refinement (from TRM)
Inspired by [Tiny Recursive Models](https://arxiv.org/abs/2510.04871), which beat 200x larger LLMs with only 7M parameters.
```python
for _ in range(T):
    z = z + unet(z, text, t)  # shared-weight refinement: the same denoiser is applied every pass
```
Effective depth = T x L without T times the parameters.
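As a minimal sketch, the same idea wrapped in a module; `RecursiveRefiner`, `denoiser`, and the call signature are illustrative assumptions, not the repository's API:

```python
import torch.nn as nn

class RecursiveRefiner(nn.Module):
    """Applies one shared-weight denoiser T times (hypothetical names, for illustration)."""
    def __init__(self, denoiser: nn.Module, steps: int = 6):
        super().__init__()
        self.denoiser = denoiser  # a single set of weights, reused on every pass
        self.steps = steps        # T passes over an L-layer denoiser -> effective depth T x L

    def forward(self, z, text_emb, t):
        for _ in range(self.steps):
            z = z + self.denoiser(z, text_emb, t)  # residual refinement of the latent
        return z
```

Because the weights are shared across passes, the parameter count stays that of a single pass while compute (and effective depth) scales with T.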

### 2. Flow Matching (instead of Diffusion)
- v(x_t, t) = x_clean - x_noise (straight-line velocity)
- 10-12 inference steps vs. 50+ for diffusion
- No score-matching instability
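A minimal sketch of the corresponding training target and few-step Euler sampler, assuming the model predicts velocity directly and t runs from pure noise at t=0 to the clean latent at t=1; `flow_matching_loss`, `euler_sample`, and the `model(x_t, text_emb, t)` signature are illustrative, not the repository's API:

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(model, x_clean, text_emb):
    """Straight-path flow matching: regress the constant velocity v = x_clean - x_noise."""
    x_noise = torch.randn_like(x_clean)
    t = torch.rand(x_clean.shape[0], device=x_clean.device)   # one timestep per sample
    t_b = t.view(-1, 1, 1, 1)
    x_t = (1 - t_b) * x_noise + t_b * x_clean                 # point on the straight path
    v_target = x_clean - x_noise                              # velocity is constant along the path
    return F.mse_loss(model(x_t, text_emb, t), v_target)

@torch.no_grad()
def euler_sample(model, text_emb, shape, steps=12, device="cpu"):
    """Integrate dx/dt = v from noise (t=0) toward the clean latent (t=1) in a few Euler steps."""
    x = torch.randn(shape, device=device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + dt * model(x, text_emb, t)
    return x
```

Because the target velocity is constant along each straight path, only a handful of Euler steps is needed, which is what makes the 10-12 step budget workable.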

### 3. ConvNeXt + MQA Cross-Attention
Each block combines a depthwise 7x7 convolution, adaptive LayerNorm conditioned on the timestep, MQA cross-attention over the text embedding, and a GELU MLP.
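A rough sketch of what one such block could look like, assuming channel-first latents, a pooled time embedding per sample, and a pre-encoded text token sequence; the class name, argument names, and layer sizes are all illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvNeXtMQABlock(nn.Module):
    """Depthwise 7x7 conv -> AdaLN(time) -> MQA cross-attention(text) -> GELU MLP (sketch)."""
    def __init__(self, dim, text_dim, heads=8):
        super().__init__()
        head_dim = dim // heads
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)  # depthwise 7x7
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.ada = nn.Linear(dim, 2 * dim)               # time embedding -> (scale, shift)
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(text_dim, 2 * head_dim)      # MQA: one K/V head shared by all query heads
        self.proj = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.heads = heads

    def forward(self, x, text, t_emb):
        b, c, hgt, wid = x.shape
        x = x + self.dwconv(x)                            # local spatial mixing (ConvNeXt-style)
        tok = x.flatten(2).transpose(1, 2)                # (B, H*W, C)
        scale, shift = self.ada(t_emb).chunk(2, dim=-1)   # adaptive LayerNorm from the timestep
        normed = self.norm(tok) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        q = self.q(normed).view(b, -1, self.heads, c // self.heads).transpose(1, 2)
        k, v = self.kv(text).chunk(2, dim=-1)             # one shared K/V head (MQA)
        k = k.unsqueeze(1).expand(-1, self.heads, -1, -1)
        v = v.unsqueeze(1).expand(-1, self.heads, -1, -1)
        attn = F.scaled_dot_product_attention(q, k, v)    # cross-attention over text tokens
        tok = tok + self.proj(attn.transpose(1, 2).reshape(b, -1, c))
        tok = tok + self.mlp(tok)                         # GELU MLP residual
        return tok.transpose(1, 2).reshape(b, c, hgt, wid)
```

With a single shared K/V head, the cross-attention key/value projections shrink by roughly the number of heads, which matters at mobile memory budgets.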

### 4. Staged Freeze/Thaw Training
| Stage | What's Trained | LR |
|-------|----------------|------|
| 1 | All denoiser params | 1e-4 |
| 2 | Cross-attention only | 1e-5 |
| 3 | All params, joint | 1e-6 |

The VAE and CLIP are always frozen.
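A sketch of how this schedule could be wired up; `model.denoiser`, the `"cross_attn"` parameter-name filter, and the AdamW choice are assumptions for illustration, not the actual training script:

```python
import torch

STAGE_LR = {1: 1e-4, 2: 1e-5, 3: 1e-6}

def configure_stage(model, stage: int):
    """Freeze everything, then thaw the parameter groups trained in this stage."""
    for p in model.parameters():
        p.requires_grad = False                        # VAE and CLIP stay frozen in every stage
    for name, p in model.denoiser.named_parameters():
        if stage in (1, 3):                            # stages 1 and 3 train the whole denoiser
            p.requires_grad = True
        elif stage == 2 and "cross_attn" in name:      # stage 2 trains cross-attention only
            p.requires_grad = True
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=STAGE_LR[stage])
```

Re-creating the optimizer at each stage boundary keeps only the thawed parameters in the optimizer state.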

## Parameter Budget
| Component | Params |
|-----------|--------|
| Encoder | ~35M |
| Bottleneck | ~15M |
| Decoder | ~35M |
| Embeds | ~5M |
| **Total trainable** | **~90M** |
| VAE (frozen) | ~83M |
| CLIP (frozen) | ~303M |
| **Inference VRAM (b=1)** | **~1.5-2 GB** |

## Quick Start
```python
from luminars.model import LuminaRS
from luminars.config import LuminaRSConfig
from luminars.sampler import sample_flow

cfg = LuminaRSConfig()
model = LuminaRS(cfg)
# text_emb: a precomputed text embedding for the prompt
latents = sample_flow(model, text_emb, (1, 16, 32, 32), 12)  # (batch, channels, H, W) latent, 12 steps
```

## Files
- `luminars/` -- model, config, loss, sampler, train helpers
- `train.py` -- main training script
- `LuminaRS_Colab.ipynb` -- Colab notebook

## Research Foundations
- TRM (Jolicoeur-Martineau, 2025): recursive reasoning
- SnapGen (2024): mobile UNet design
- ZigMa (2024): Mamba diffusion
- Flow Matching (Lipman et al., 2023): stable ODE generation
- MQA (Shazeer, 2019): multi-query attention
- ConvNeXt (Liu et al., 2022): modernized CNN

MIT License