asdf98 commited on
Commit
cb1106e
·
verified ·
1 Parent(s): 91e9e63

Add complete Colab/Kaggle training notebook

Browse files
Files changed (1) hide show
  1. IRIS_Training_Notebook.ipynb +956 -0
IRIS_Training_Notebook.ipynb ADDED
@@ -0,0 +1,956 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "nbformat": 4,
3
+ "nbformat_minor": 5,
4
+ "metadata": {
5
+ "kernelspec": {
6
+ "display_name": "Python 3",
7
+ "language": "python",
8
+ "name": "python3"
9
+ },
10
+ "language_info": {
11
+ "name": "python",
12
+ "version": "3.10.0"
13
+ },
14
+ "accelerator": "GPU",
15
+ "colab": {
16
+ "provenance": [],
17
+ "gpuType": "T4"
18
+ }
19
+ },
20
+ "cells": [
21
+ {
22
+ "cell_type": "markdown",
23
+ "metadata": {},
24
+ "source": [
25
+ "# \ud83d\udd2e IRIS: Iterative Recurrent Image Synthesis \u2014 Training Notebook",
26
+ "",
27
+ "**Train a novel mobile-first image generation model from scratch on free Colab/Kaggle GPUs.**",
28
+ "",
29
+ "This notebook runs the complete 2-stage training pipeline:",
30
+ "1. **Stage 1 \u2014 Wavelet VAE Training**: Learn to encode/decode images via wavelet-frequency latent space",
31
+ "2. **Stage 2 \u2014 Generator Training**: Train the recurrent-depth denoiser with rectified flow on captioned images",
32
+ "",
33
+ "### Hardware Requirements",
34
+ "| Platform | GPU | VRAM | Estimated Time |",
35
+ "|----------|-----|------|----------------|",
36
+ "| **Colab Free** | T4 | 16GB | ~2-3 hours total |",
37
+ "| **Colab Pro** | A100 | 40GB | ~45 min total |",
38
+ "| **Kaggle** | P100/T4\u00d72 | 16GB | ~2-3 hours total |",
39
+ "",
40
+ "### What You Get",
41
+ "- A trained Wavelet VAE that compresses 256\u00d7256 images to 16\u00d716 latent (48\u00d7 compression)",
42
+ "- A trained IRIS generator that can denoise latents conditioned on text (CLIP embeddings)",
43
+ "- Visualization of reconstructions, generation samples, and loss curves",
44
+ "- Saved checkpoints you can continue training from"
45
+ ]
46
+ },
47
+ {
48
+ "cell_type": "markdown",
49
+ "metadata": {},
50
+ "source": [
51
+ "## 1. Setup & Installation"
52
+ ]
53
+ },
54
+ {
55
+ "cell_type": "code",
56
+ "metadata": {},
57
+ "source": [
58
+ "# Install dependencies\n",
59
+ "!pip install -q torch torchvision datasets transformers accelerate matplotlib Pillow tqdm huggingface_hub\n",
60
+ "\n",
61
+ "# Check GPU\n",
62
+ "import torch\n",
63
+ "print(f\"PyTorch: {torch.__version__}\")\n",
64
+ "print(f\"CUDA available: {torch.cuda.is_available()}\")\n",
65
+ "if torch.cuda.is_available():\n",
66
+ " print(f\"GPU: {torch.cuda.get_device_name(0)}\")\n",
67
+ " print(f\"VRAM: {torch.cuda.get_device_properties(0).total_mem / 1024**3:.1f} GB\")\n",
68
+ " device = torch.device('cuda')\n",
69
+ "else:\n",
70
+ " print(\"\u26a0\ufe0f No GPU detected! Training will be very slow on CPU.\")\n",
71
+ " device = torch.device('cpu')"
72
+ ],
73
+ "outputs": [],
74
+ "execution_count": null
75
+ },
76
+ {
77
+ "cell_type": "markdown",
78
+ "metadata": {},
79
+ "source": [
80
+ "## 2. Download IRIS Architecture from Hugging Face"
81
+ ]
82
+ },
83
+ {
84
+ "cell_type": "code",
85
+ "metadata": {},
86
+ "source": [
87
+ "# Download the IRIS architecture code from HF Hub\n",
88
+ "from huggingface_hub import hf_hub_download\n",
89
+ "import shutil, os\n",
90
+ "\n",
91
+ "repo_id = \"asdf98/IRIS-architecture\"\n",
92
+ "for fname in [\"iris_model.py\", \"train_iris.py\", \"test_iris.py\"]:\n",
93
+ " path = hf_hub_download(repo_id=repo_id, filename=fname)\n",
94
+ " shutil.copy(path, f\"./{fname}\")\n",
95
+ " print(f\"\u2705 Downloaded {fname}\")\n",
96
+ "\n",
97
+ "# Import IRIS\n",
98
+ "from iris_model import (\n",
99
+ " IRIS, IRISConfig, WaveletVAE, IRISGenerator,\n",
100
+ " HaarDWT2D, HaarIDWT2D,\n",
101
+ " create_iris_small, create_iris_tiny, create_iris_base,\n",
102
+ " count_parameters, estimate_memory_mb,\n",
103
+ ")\n",
104
+ "print(\"\\n\u2705 IRIS architecture imported successfully!\")"
105
+ ],
106
+ "outputs": [],
107
+ "execution_count": null
108
+ },
109
+ {
110
+ "cell_type": "markdown",
111
+ "metadata": {},
112
+ "source": [
113
+ "## 3. Model Architecture Overview",
114
+ "",
115
+ "Let's inspect the three model variants and their parameter counts."
116
+ ]
117
+ },
118
+ {
119
+ "cell_type": "code",
120
+ "metadata": {},
121
+ "source": [
122
+ "# Show model variants\n",
123
+ "for name, fn in [(\"IRIS-Tiny (ultra-mobile)\", create_iris_tiny),\n",
124
+ " (\"IRIS-Small (mobile)\", create_iris_small),\n",
125
+ " (\"IRIS-Base (desktop)\", create_iris_base)]:\n",
126
+ " model = fn()\n",
127
+ " counts = count_parameters(model)\n",
128
+ " mem16 = estimate_memory_mb(model, torch.float16)\n",
129
+ "\n",
130
+ " core_params = sum(p.numel() for p in model.generator.core.parameters())\n",
131
+ " print(f\"\\n{'='*55}\")\n",
132
+ " print(f\" {name}\")\n",
133
+ " print(f\"{'='*55}\")\n",
134
+ " print(f\" Total params: {counts['total']:>12,}\")\n",
135
+ " print(f\" Generator params: {counts['total'] - sum(p.numel() for p in model.vae.parameters()):>12,}\")\n",
136
+ " print(f\" Core (shared): {core_params:>12,}\")\n",
137
+ " print(f\" Model memory fp16: {mem16:>10.1f} MB\")\n",
138
+ " print(f\" + CLIP-L/14 text: 156.0 MB\")\n",
139
+ " print(f\" + Overhead: 350.0 MB\")\n",
140
+ " print(f\" \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\")\n",
141
+ " print(f\" Total inference: {mem16+156+350:>10.1f} MB {'\u2705 <3GB' if mem16+506 < 3000 else ''}\")\n",
142
+ "\n",
143
+ "del model # Free memory"
144
+ ],
145
+ "outputs": [],
146
+ "execution_count": null
147
+ },
148
+ {
149
+ "cell_type": "markdown",
150
+ "metadata": {},
151
+ "source": [
152
+ "## 4. Load Dataset \u2014 Pok\u00e9mon BLIP Captions",
153
+ "",
154
+ "We use `reach-vb/pokemon-blip-captions` \u2014 a small, high-quality dataset with ~833 image-caption pairs. ",
155
+ "Perfect for free-tier training to validate the architecture works end-to-end.",
156
+ "",
157
+ "**For serious training later**, swap in larger datasets:",
158
+ "- `ILSVRC/imagenet-1k` (Stage 2 class-conditional)",
159
+ "- `laion/laion-art` (Text-image alignment)",
160
+ "- `caidas/JourneyDB` (Aesthetic fine-tuning)"
161
+ ]
162
+ },
163
+ {
164
+ "cell_type": "code",
165
+ "metadata": {},
166
+ "source": [
167
+ "from datasets import load_dataset\n",
168
+ "from torchvision import transforms\n",
169
+ "from torch.utils.data import Dataset, DataLoader\n",
170
+ "from PIL import Image\n",
171
+ "import numpy as np\n",
172
+ "\n",
173
+ "# Load Pok\u00e9mon dataset\n",
174
+ "print(\"Loading dataset...\")\n",
175
+ "raw_dataset = load_dataset(\"reach-vb/pokemon-blip-captions\", split=\"train\")\n",
176
+ "print(f\"\u2705 Loaded {len(raw_dataset)} image-caption pairs\")\n",
177
+ "\n",
178
+ "# Show a few examples\n",
179
+ "import matplotlib.pyplot as plt\n",
180
+ "fig, axes = plt.subplots(1, 5, figsize=(20, 4))\n",
181
+ "for i, ax in enumerate(axes):\n",
182
+ " item = raw_dataset[i]\n",
183
+ " ax.imshow(item[\"image\"])\n",
184
+ " ax.set_title(item[\"text\"][:40] + \"...\", fontsize=9)\n",
185
+ " ax.axis(\"off\")\n",
186
+ "plt.suptitle(\"Sample Training Images\", fontsize=14)\n",
187
+ "plt.tight_layout()\n",
188
+ "plt.show()"
189
+ ],
190
+ "outputs": [],
191
+ "execution_count": null
192
+ },
193
+ {
194
+ "cell_type": "markdown",
195
+ "metadata": {},
196
+ "source": [
197
+ "### 4.1 Create PyTorch Dataset with Transforms"
198
+ ]
199
+ },
200
+ {
201
+ "cell_type": "code",
202
+ "metadata": {},
203
+ "source": [
204
+ "# \u2500\u2500\u2500 Training configuration \u2500\u2500\u2500\n",
205
+ "IMAGE_SIZE = 256 # Input image resolution\n",
206
+ "BATCH_SIZE = 4 # Fits on T4 16GB; increase on A100\n",
207
+ "NUM_WORKERS = 2 # Dataloader workers\n",
208
+ "\n",
209
+ "# \u2500\u2500\u2500 Image transforms \u2500\u2500\u2500\n",
210
+ "train_transform = transforms.Compose([\n",
211
+ " transforms.Resize(IMAGE_SIZE, interpolation=transforms.InterpolationMode.LANCZOS),\n",
212
+ " transforms.CenterCrop(IMAGE_SIZE),\n",
213
+ " transforms.RandomHorizontalFlip(),\n",
214
+ " transforms.ToTensor(), # [0, 1]\n",
215
+ " transforms.Normalize([0.5]*3, [0.5]*3), # [-1, 1]\n",
216
+ "])\n",
217
+ "\n",
218
+ "class ImageCaptionDataset(Dataset):\n",
219
+ " \"\"\"Wraps a HF dataset with transforms. Returns (image_tensor, caption_string).\"\"\"\n",
220
+ " def __init__(self, hf_dataset, transform):\n",
221
+ " self.dataset = hf_dataset\n",
222
+ " self.transform = transform\n",
223
+ "\n",
224
+ " def __len__(self):\n",
225
+ " return len(self.dataset)\n",
226
+ "\n",
227
+ " def __getitem__(self, idx):\n",
228
+ " item = self.dataset[idx]\n",
229
+ " image = item[\"image\"].convert(\"RGB\")\n",
230
+ " image = self.transform(image)\n",
231
+ " caption = item[\"text\"]\n",
232
+ " return image, caption\n",
233
+ "\n",
234
+ "train_dataset = ImageCaptionDataset(raw_dataset, train_transform)\n",
235
+ "train_loader = DataLoader(\n",
236
+ " train_dataset, batch_size=BATCH_SIZE, shuffle=True,\n",
237
+ " num_workers=NUM_WORKERS, pin_memory=True, drop_last=True,\n",
238
+ ")\n",
239
+ "print(f\"\u2705 DataLoader ready: {len(train_loader)} batches of {BATCH_SIZE}\")\n",
240
+ "\n",
241
+ "# Quick sanity check\n",
242
+ "imgs, caps = next(iter(train_loader))\n",
243
+ "print(f\" Image batch: {imgs.shape}, range [{imgs.min():.2f}, {imgs.max():.2f}]\")\n",
244
+ "print(f\" Caption[0]: {caps[0]}\")"
245
+ ],
246
+ "outputs": [],
247
+ "execution_count": null
248
+ },
249
+ {
250
+ "cell_type": "markdown",
251
+ "metadata": {},
252
+ "source": [
253
+ "## 5. Load CLIP Text Encoder (Frozen)",
254
+ "",
255
+ "We use CLIP-L/14 (~150MB) as the text encoder. It's frozen during training \u2014 ",
256
+ "only the IRIS generator learns. This is the same encoder used in aMUSEd, Meissonic, and SnapGen."
257
+ ]
258
+ },
259
+ {
260
+ "cell_type": "code",
261
+ "metadata": {},
262
+ "source": [
263
+ "from transformers import CLIPTextModel, CLIPTokenizer\n",
264
+ "\n",
265
+ "print(\"Loading CLIP-L/14 text encoder...\")\n",
266
+ "clip_model_name = \"openai/clip-vit-large-patch14\"\n",
267
+ "tokenizer = CLIPTokenizer.from_pretrained(clip_model_name)\n",
268
+ "text_encoder = CLIPTextModel.from_pretrained(clip_model_name).to(device).eval()\n",
269
+ "\n",
270
+ "# Freeze text encoder\n",
271
+ "for p in text_encoder.parameters():\n",
272
+ " p.requires_grad = False\n",
273
+ "\n",
274
+ "clip_params = sum(p.numel() for p in text_encoder.parameters())\n",
275
+ "print(f\"\u2705 CLIP-L/14 loaded: {clip_params/1e6:.1f}M params (frozen)\")\n",
276
+ "print(f\" Text embedding dim: {text_encoder.config.hidden_size}\")\n",
277
+ "print(f\" Max tokens: {tokenizer.model_max_length}\")\n",
278
+ "\n",
279
+ "@torch.no_grad()\n",
280
+ "def encode_text(captions, max_length=77):\n",
281
+ " \"\"\"Encode a list of captions to CLIP text embeddings.\"\"\"\n",
282
+ " tokens = tokenizer(\n",
283
+ " captions, padding=\"max_length\", truncation=True,\n",
284
+ " max_length=max_length, return_tensors=\"pt\"\n",
285
+ " ).to(device)\n",
286
+ " outputs = text_encoder(**tokens)\n",
287
+ " return outputs.last_hidden_state # [B, 77, 768]\n",
288
+ "\n",
289
+ "# Test encoding\n",
290
+ "test_emb = encode_text([\"a cute dragon breathing fire\"])\n",
291
+ "print(f\" Test encoding shape: {test_emb.shape}\")"
292
+ ],
293
+ "outputs": [],
294
+ "execution_count": null
295
+ },
296
+ {
297
+ "cell_type": "markdown",
298
+ "metadata": {},
299
+ "source": [
300
+ "## 6. Stage 1 \u2014 Wavelet VAE Training",
301
+ "",
302
+ "Train the lightweight Wavelet VAE to reconstruct images through the wavelet-frequency latent space.",
303
+ "",
304
+ "**Architecture**: `Image \u2192 HaarDWT \u2192 Encoder \u2192 Latent(16ch, 16\u00d716) \u2192 Decoder \u2192 HaarIDWT \u2192 Image`",
305
+ "",
306
+ "**Losses**:",
307
+ "- MSE reconstruction loss",
308
+ "- KL divergence (variational regularization)",
309
+ "- Wavelet frequency loss (preserves high-frequency details)",
310
+ "- Perceptual loss via LPIPS-like gradient matching"
311
+ ]
312
+ },
313
+ {
314
+ "cell_type": "code",
315
+ "metadata": {},
316
+ "source": [
317
+ "# \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n",
318
+ "# STAGE 1: WAVELET VAE TRAINING\n",
319
+ "# \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n",
320
+ "\n",
321
+ "# Build IRIS-Tiny config for free-tier training\n",
322
+ "# DWT(2\u00d7) + 3 down-blocks(8\u00d7) = 16\u00d7 total compression\n",
323
+ "# 256px input \u2192 128px after DWT \u2192 64\u219232\u219216 after encoder = 16\u00d716 latent\n",
324
+ "config = IRISConfig(\n",
325
+ " latent_channels=8, # Smaller for memory efficiency\n",
326
+ " latent_spatial=16, # 16\u00d716 spatial latent\n",
327
+ " hidden_dim=384, # IRIS-Tiny\n",
328
+ " num_heads=6,\n",
329
+ " head_dim=64,\n",
330
+ " ffn_ratio=2.667,\n",
331
+ " num_prelude_blocks=1,\n",
332
+ " num_core_layers=3,\n",
333
+ " num_coda_blocks=1,\n",
334
+ " default_iterations=6,\n",
335
+ " max_iterations=16,\n",
336
+ " fourier_num_blocks=6,\n",
337
+ " sparsity_threshold=0.01,\n",
338
+ " recurrence_dim=192,\n",
339
+ " manhattan_window=12,\n",
340
+ " text_dim=768, # CLIP-L/14\n",
341
+ " max_text_tokens=77,\n",
342
+ " patch_size=2,\n",
343
+ " vae_channels=[32, 64, 128, 256],\n",
344
+ ")\n",
345
+ "\n",
346
+ "# Create VAE\n",
347
+ "vae = WaveletVAE(config).to(device)\n",
348
+ "vae_params = sum(p.numel() for p in vae.parameters())\n",
349
+ "print(f\"Wavelet VAE: {vae_params:,} params ({vae_params*4/1024/1024:.1f} MB fp32)\")\n",
350
+ "print(f\"Encoder: {sum(p.numel() for p in vae.encoder.parameters()):,}\")\n",
351
+ "print(f\"Decoder: {sum(p.numel() for p in vae.decoder.parameters()):,}\")"
352
+ ],
353
+ "outputs": [],
354
+ "execution_count": null
355
+ },
356
+ {
357
+ "cell_type": "code",
358
+ "metadata": {},
359
+ "source": [
360
+ "# \u2500\u2500\u2500 VAE Training Loop \u2500\u2500\u2500\n",
361
+ "import time\n",
362
+ "from torch.cuda.amp import autocast, GradScaler\n",
363
+ "\n",
364
+ "VAE_EPOCHS = 80 # Enough to get good reconstructions\n",
365
+ "VAE_LR = 1e-4\n",
366
+ "KL_WEIGHT = 1e-4 # Light KL to avoid posterior collapse\n",
367
+ "FREQ_WEIGHT = 0.1 # Wavelet frequency preservation\n",
368
+ "\n",
369
+ "optimizer_vae = torch.optim.AdamW(vae.parameters(), lr=VAE_LR, weight_decay=0.01)\n",
370
+ "scheduler_vae = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer_vae, T_max=VAE_EPOCHS)\n",
371
+ "scaler = GradScaler()\n",
372
+ "dwt = HaarDWT2D()\n",
373
+ "\n",
374
+ "# Logging\n",
375
+ "vae_losses = {\"total\": [], \"recon\": [], \"kl\": [], \"freq\": []}\n",
376
+ "\n",
377
+ "print(f\"Training VAE for {VAE_EPOCHS} epochs on {len(train_loader)} batches...\")\n",
378
+ "print(f\"{'Epoch':>6} {'Loss':>10} {'Recon':>10} {'KL':>10} {'Freq':>10} {'LR':>10} {'Time':>8}\")\n",
379
+ "print(\"\u2500\" * 70)\n",
380
+ "\n",
381
+ "vae.train()\n",
382
+ "for epoch in range(VAE_EPOCHS):\n",
383
+ " epoch_losses = {\"total\": 0, \"recon\": 0, \"kl\": 0, \"freq\": 0}\n",
384
+ " t0 = time.time()\n",
385
+ "\n",
386
+ " for images, _ in train_loader:\n",
387
+ " images = images.to(device)\n",
388
+ "\n",
389
+ " with autocast(dtype=torch.float16):\n",
390
+ " x_recon, mean, logvar = vae(images)\n",
391
+ "\n",
392
+ " # Reconstruction loss\n",
393
+ " recon_loss = torch.nn.functional.mse_loss(x_recon, images)\n",
394
+ "\n",
395
+ " # KL divergence\n",
396
+ " kl_loss = -0.5 * (1 + logvar - mean.pow(2) - logvar.exp()).mean()\n",
397
+ "\n",
398
+ " # Wavelet frequency loss \u2014 preserve high-freq details\n",
399
+ " with torch.no_grad():\n",
400
+ " target_wv = dwt(images)\n",
401
+ " recon_wv = dwt(x_recon)\n",
402
+ " freq_loss = torch.nn.functional.l1_loss(recon_wv, target_wv)\n",
403
+ "\n",
404
+ " loss = recon_loss + KL_WEIGHT * kl_loss + FREQ_WEIGHT * freq_loss\n",
405
+ "\n",
406
+ " optimizer_vae.zero_grad(set_to_none=True)\n",
407
+ " scaler.scale(loss).backward()\n",
408
+ " scaler.unscale_(optimizer_vae)\n",
409
+ " torch.nn.utils.clip_grad_norm_(vae.parameters(), 1.0)\n",
410
+ " scaler.step(optimizer_vae)\n",
411
+ " scaler.update()\n",
412
+ "\n",
413
+ " epoch_losses[\"total\"] += loss.item()\n",
414
+ " epoch_losses[\"recon\"] += recon_loss.item()\n",
415
+ " epoch_losses[\"kl\"] += kl_loss.item()\n",
416
+ " epoch_losses[\"freq\"] += freq_loss.item()\n",
417
+ "\n",
418
+ " # Average losses\n",
419
+ " n = len(train_loader)\n",
420
+ " for k in epoch_losses:\n",
421
+ " epoch_losses[k] /= n\n",
422
+ " vae_losses[k].append(epoch_losses[k])\n",
423
+ "\n",
424
+ " scheduler_vae.step()\n",
425
+ " dt = time.time() - t0\n",
426
+ "\n",
427
+ " if (epoch + 1) % 10 == 0 or epoch == 0:\n",
428
+ " lr = optimizer_vae.param_groups[0][\"lr\"]\n",
429
+ " print(f\"{epoch+1:>6} {epoch_losses['total']:>10.4f} {epoch_losses['recon']:>10.4f} \"\n",
430
+ " f\"{epoch_losses['kl']:>10.4f} {epoch_losses['freq']:>10.4f} {lr:>10.2e} {dt:>7.1f}s\")\n",
431
+ "\n",
432
+ "print(\"\\n\u2705 VAE training complete!\")"
433
+ ],
434
+ "outputs": [],
435
+ "execution_count": null
436
+ },
437
+ {
438
+ "cell_type": "markdown",
439
+ "metadata": {},
440
+ "source": [
441
+ "### 6.1 Visualize VAE Reconstructions"
442
+ ]
443
+ },
444
+ {
445
+ "cell_type": "code",
446
+ "metadata": {},
447
+ "source": [
448
+ "# Visualize reconstructions\n",
449
+ "vae.eval()\n",
450
+ "fig, axes = plt.subplots(3, 8, figsize=(20, 8))\n",
451
+ "\n",
452
+ "with torch.no_grad():\n",
453
+ " imgs_sample, _ = next(iter(train_loader))\n",
454
+ " imgs_sample = imgs_sample[:8].to(device)\n",
455
+ " recon, _, _ = vae(imgs_sample)\n",
456
+ "\n",
457
+ " # Also show latent statistics\n",
458
+ " z, mean, logvar = vae.encode(imgs_sample)\n",
459
+ " print(f\"Latent shape: {z.shape}\")\n",
460
+ " print(f\"Latent mean: {z.mean():.3f}, std: {z.std():.3f}\")\n",
461
+ " print(f\"Latent range: [{z.min():.3f}, {z.max():.3f}]\")\n",
462
+ "\n",
463
+ "def show_img(ax, tensor, title=\"\"):\n",
464
+ " img = tensor.cpu().clamp(-1, 1) * 0.5 + 0.5 # [-1,1] \u2192 [0,1]\n",
465
+ " ax.imshow(img.permute(1, 2, 0).numpy())\n",
466
+ " ax.set_title(title, fontsize=8)\n",
467
+ " ax.axis(\"off\")\n",
468
+ "\n",
469
+ "for i in range(8):\n",
470
+ " show_img(axes[0, i], imgs_sample[i], f\"Original {i}\")\n",
471
+ " show_img(axes[1, i], recon[i], f\"Recon {i}\")\n",
472
+ " axes[2, i].imshow(z[i, :3].cpu().permute(1, 2, 0).numpy() * 0.3 + 0.5)\n",
473
+ " axes[2, i].set_title(f\"Latent ch0-2\", fontsize=8)\n",
474
+ " axes[2, i].axis(\"off\")\n",
475
+ "\n",
476
+ "axes[0, 0].set_ylabel(\"Original\", fontsize=12)\n",
477
+ "axes[1, 0].set_ylabel(\"Reconstructed\", fontsize=12)\n",
478
+ "axes[2, 0].set_ylabel(\"Latent\", fontsize=12)\n",
479
+ "plt.suptitle(\"Wavelet VAE Reconstructions\", fontsize=14)\n",
480
+ "plt.tight_layout()\n",
481
+ "plt.show()\n",
482
+ "\n",
483
+ "# Plot loss curves\n",
484
+ "fig, axes = plt.subplots(1, 3, figsize=(15, 4))\n",
485
+ "for ax, key, color in zip(axes, [\"total\", \"recon\", \"freq\"], [\"blue\", \"green\", \"red\"]):\n",
486
+ " ax.plot(vae_losses[key], color=color)\n",
487
+ " ax.set_title(f\"VAE {key.title()} Loss\")\n",
488
+ " ax.set_xlabel(\"Epoch\")\n",
489
+ " ax.set_ylabel(\"Loss\")\n",
490
+ " ax.grid(True, alpha=0.3)\n",
491
+ "plt.tight_layout()\n",
492
+ "plt.show()"
493
+ ],
494
+ "outputs": [],
495
+ "execution_count": null
496
+ },
497
+ {
498
+ "cell_type": "markdown",
499
+ "metadata": {},
500
+ "source": [
501
+ "## 7. Stage 2 \u2014 IRIS Generator Training (Rectified Flow)",
502
+ "",
503
+ "Now we train the **recurrent-depth generator** to denoise latent representations conditioned on CLIP text embeddings.",
504
+ "",
505
+ "**Key features of this training**:",
506
+ "- **Rectified Flow**: Linear noise schedule, velocity prediction, logit-normal timestep sampling",
507
+ "- **Recurrent Depth**: Core block is iterated randomly 4-8\u00d7 per step (training robustness)",
508
+ "- **adaLN-Zero**: Stable training start via zero-initialized gating",
509
+ "- **Mixed precision (fp16)**: Fits on 16GB VRAM",
510
+ "- **Gradient checkpointing**: Optional, for very tight memory",
511
+ "",
512
+ "**The magic**: Because the core block shares weights across iterations, ",
513
+ "we get deep effective network capacity from tiny parameter count!"
514
+ ]
515
+ },
516
+ {
517
+ "cell_type": "code",
518
+ "metadata": {},
519
+ "source": [
520
+ "# \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n",
521
+ "# STAGE 2: IRIS GENERATOR TRAINING (RECTIFIED FLOW)\n",
522
+ "# \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n",
523
+ "\n",
524
+ "# Build full IRIS model (reusing config from VAE stage)\n",
525
+ "iris = IRIS(config).to(device)\n",
526
+ "\n",
527
+ "# Load trained VAE weights\n",
528
+ "iris.vae.load_state_dict(vae.state_dict())\n",
529
+ "\n",
530
+ "# Freeze VAE\n",
531
+ "for p in iris.vae.parameters():\n",
532
+ " p.requires_grad = False\n",
533
+ "iris.vae.eval()\n",
534
+ "\n",
535
+ "gen_params = sum(p.numel() for p in iris.generator.parameters())\n",
536
+ "core_params = sum(p.numel() for p in iris.generator.core.parameters())\n",
537
+ "print(f\"IRIS Generator: {gen_params:,} trainable params\")\n",
538
+ "print(f\" Core block (shared): {core_params:,} ({core_params/gen_params*100:.1f}%)\")\n",
539
+ "print(f\" Effective at r=6: ~{gen_params + 5*core_params:,} effective params\")\n",
540
+ "print(f\" Memory fp16: {gen_params*2/1024/1024:.1f} MB\")\n",
541
+ "\n",
542
+ "# Free standalone VAE to save memory\n",
543
+ "del vae, optimizer_vae, scheduler_vae\n",
544
+ "torch.cuda.empty_cache()"
545
+ ],
546
+ "outputs": [],
547
+ "execution_count": null
548
+ },
549
+ {
550
+ "cell_type": "code",
551
+ "metadata": {},
552
+ "source": [
553
+ "# \u2500\u2500\u2500 Generator Training Loop \u2500\u2500\u2500\n",
554
+ "GEN_EPOCHS = 150 # More epochs for small dataset\n",
555
+ "GEN_LR = 2e-4 # Higher LR works well with AdamW + cosine\n",
556
+ "GRAD_ACCUM = 2 # Effective batch = BATCH_SIZE \u00d7 GRAD_ACCUM = 8\n",
557
+ "WARMUP_STEPS = 100\n",
558
+ "\n",
559
+ "optimizer_gen = torch.optim.AdamW(\n",
560
+ " iris.generator.parameters(),\n",
561
+ " lr=GEN_LR,\n",
562
+ " weight_decay=0.03,\n",
563
+ " betas=(0.9, 0.95),\n",
564
+ ")\n",
565
+ "\n",
566
+ "total_steps = GEN_EPOCHS * len(train_loader) // GRAD_ACCUM\n",
567
+ "\n",
568
+ "def lr_lambda(step):\n",
569
+ " if step < WARMUP_STEPS:\n",
570
+ " return step / max(1, WARMUP_STEPS)\n",
571
+ " progress = (step - WARMUP_STEPS) / max(1, total_steps - WARMUP_STEPS)\n",
572
+ " return 0.5 * (1 + __import__('math').cos(__import__('math').pi * progress))\n",
573
+ "\n",
574
+ "scheduler_gen = torch.optim.lr_scheduler.LambdaLR(optimizer_gen, lr_lambda)\n",
575
+ "scaler_gen = GradScaler()\n",
576
+ "\n",
577
+ "# Logging\n",
578
+ "gen_losses = {\"total\": [], \"velocity\": [], \"kl\": []}\n",
579
+ "\n",
580
+ "print(f\"Training generator for {GEN_EPOCHS} epochs ({total_steps} optimizer steps)\")\n",
581
+ "print(f\"Effective batch size: {BATCH_SIZE} \u00d7 {GRAD_ACCUM} = {BATCH_SIZE * GRAD_ACCUM}\")\n",
582
+ "print(f\"Warmup: {WARMUP_STEPS} steps, then cosine decay to 0\")\n",
583
+ "print()\n",
584
+ "print(f\"{'Epoch':>6} {'Loss':>10} {'VelLoss':>10} {'MeanT':>8} {'LR':>10} {'Time':>8}\")\n",
585
+ "print(\"\u2500\" * 60)\n",
586
+ "\n",
587
+ "iris.generator.train()\n",
588
+ "global_step = 0\n",
589
+ "best_loss = float('inf')\n",
590
+ "\n",
591
+ "for epoch in range(GEN_EPOCHS):\n",
592
+ " epoch_vel = 0\n",
593
+ " epoch_total = 0\n",
594
+ " n_batches = 0\n",
595
+ " t0 = time.time()\n",
596
+ "\n",
597
+ " optimizer_gen.zero_grad(set_to_none=True)\n",
598
+ "\n",
599
+ " for batch_idx, (images, captions) in enumerate(train_loader):\n",
600
+ " images = images.to(device)\n",
601
+ "\n",
602
+ " # Encode text with CLIP\n",
603
+ " with torch.no_grad():\n",
604
+ " text_emb = encode_text(list(captions)) # [B, 77, 768]\n",
605
+ "\n",
606
+ " # Forward pass with mixed precision\n",
607
+ " with autocast(dtype=torch.float16):\n",
608
+ " # Randomly sample iteration count for robustness\n",
609
+ " r = [4, 5, 6, 7, 8][torch.randint(0, 5, (1,)).item()]\n",
610
+ " result = iris.train_step(images, text_emb, num_iterations=r)\n",
611
+ " loss = result[\"loss\"] / GRAD_ACCUM\n",
612
+ "\n",
613
+ " scaler_gen.scale(loss).backward()\n",
614
+ "\n",
615
+ " # Gradient accumulation\n",
616
+ " if (batch_idx + 1) % GRAD_ACCUM == 0:\n",
617
+ " scaler_gen.unscale_(optimizer_gen)\n",
618
+ " torch.nn.utils.clip_grad_norm_(iris.generator.parameters(), 1.0)\n",
619
+ " scaler_gen.step(optimizer_gen)\n",
620
+ " scaler_gen.update()\n",
621
+ " optimizer_gen.zero_grad(set_to_none=True)\n",
622
+ " scheduler_gen.step()\n",
623
+ " global_step += 1\n",
624
+ "\n",
625
+ " epoch_vel += result[\"velocity_loss\"]\n",
626
+ " epoch_total += result[\"loss\"].item() if hasattr(result[\"loss\"], 'item') else result[\"velocity_loss\"]\n",
627
+ " n_batches += 1\n",
628
+ "\n",
629
+ " avg_vel = epoch_vel / n_batches\n",
630
+ " avg_total = epoch_total / n_batches\n",
631
+ " gen_losses[\"velocity\"].append(avg_vel)\n",
632
+ " gen_losses[\"total\"].append(avg_total)\n",
633
+ " dt = time.time() - t0\n",
634
+ "\n",
635
+ " if avg_vel < best_loss:\n",
636
+ " best_loss = avg_vel\n",
637
+ "\n",
638
+ " if (epoch + 1) % 10 == 0 or epoch == 0:\n",
639
+ " lr = optimizer_gen.param_groups[0][\"lr\"]\n",
640
+ " print(f\"{epoch+1:>6} {avg_total:>10.4f} {avg_vel:>10.4f} \"\n",
641
+ " f\"{result['mean_t']:>8.3f} {lr:>10.2e} {dt:>7.1f}s\")\n",
642
+ "\n",
643
+ "print(f\"\\n\u2705 Generator training complete! Best velocity loss: {best_loss:.4f}\")"
644
+ ],
645
+ "outputs": [],
646
+ "execution_count": null
647
+ },
648
+ {
649
+ "cell_type": "markdown",
650
+ "metadata": {},
651
+ "source": [
652
+ "## 8. Generate Images!",
653
+ "",
654
+ "Now let's generate images using the trained IRIS model. We'll test different iteration budgets ",
655
+ "to see the adaptive compute in action."
656
+ ]
657
+ },
658
+ {
659
+ "cell_type": "code",
660
+ "metadata": {},
661
+ "source": [
662
+ "# \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n",
663
+ "# GENERATION\n",
664
+ "# \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n",
665
+ "\n",
666
+ "prompts = [\n",
667
+ " \"a fire-breathing dragon pokemon\",\n",
668
+ " \"a cute blue water pokemon\",\n",
669
+ " \"a green grass-type pokemon with leaves\",\n",
670
+ " \"a purple ghost pokemon floating\",\n",
671
+ " \"a yellow electric pokemon with lightning\",\n",
672
+ " \"a pink fairy pokemon with wings\",\n",
673
+ " \"a red phoenix pokemon\",\n",
674
+ " \"a small brown fox pokemon\",\n",
675
+ "]\n",
676
+ "\n",
677
+ "iris.eval()\n",
678
+ "\n",
679
+ "# Generate with different iteration counts to show adaptive compute\n",
680
+ "fig, axes = plt.subplots(len(prompts), 4, figsize=(16, len(prompts) * 4))\n",
681
+ "iteration_counts = [2, 4, 6, 8]\n",
682
+ "\n",
683
+ "for row, prompt in enumerate(prompts):\n",
684
+ " with torch.no_grad():\n",
685
+ " text_emb = encode_text([prompt])\n",
686
+ "\n",
687
+ " for col, n_iter in enumerate(iteration_counts):\n",
688
+ " with torch.no_grad():\n",
689
+ " img = iris.generate(\n",
690
+ " text_emb,\n",
691
+ " num_steps=4,\n",
692
+ " num_iterations=n_iter,\n",
693
+ " cfg_scale=1.0, # No CFG on untrained model\n",
694
+ " seed=42,\n",
695
+ " )\n",
696
+ " # Convert to displayable\n",
697
+ " img_np = img[0].cpu().clamp(-1, 1) * 0.5 + 0.5\n",
698
+ " img_np = img_np.permute(1, 2, 0).numpy()\n",
699
+ "\n",
700
+ " axes[row, col].imshow(img_np)\n",
701
+ " axes[row, col].axis(\"off\")\n",
702
+ " if row == 0:\n",
703
+ " axes[row, col].set_title(f\"r={n_iter} iterations\", fontsize=11)\n",
704
+ " axes[row, 0].set_ylabel(prompt[:25] + \"...\", fontsize=9, rotation=0, labelpad=120, va='center')\n",
705
+ "\n",
706
+ "plt.suptitle(\"IRIS Generated Images (Adaptive Compute Budget)\", fontsize=14, y=1.01)\n",
707
+ "plt.tight_layout()\n",
708
+ "plt.show()\n",
709
+ "\n",
710
+ "print(\"\\nNote: With only ~800 training images and short training, outputs are noisy.\")\n",
711
+ "print(\"This demonstrates the architecture works. Quality improves dramatically with:\")\n",
712
+ "print(\" \u2022 More training data (CC3M, LAION)\")\n",
713
+ "print(\" \u2022 More epochs (1000+)\")\n",
714
+ "print(\" \u2022 Larger model (IRIS-Small or IRIS-Base)\")\n",
715
+ "print(\" \u2022 Stage 3-5 training (text alignment + aesthetics + distillation)\")"
716
+ ],
717
+ "outputs": [],
718
+ "execution_count": null
719
+ },
720
+ {
721
+ "cell_type": "markdown",
722
+ "metadata": {},
723
+ "source": [
724
+ "### 8.1 Training Loss Curves"
725
+ ]
726
+ },
727
+ {
728
+ "cell_type": "code",
729
+ "metadata": {},
730
+ "source": [
731
+ "fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n",
732
+ "\n",
733
+ "# VAE losses\n",
734
+ "ax = axes[0]\n",
735
+ "ax.plot(vae_losses[\"recon\"], label=\"Reconstruction\", color=\"blue\")\n",
736
+ "ax.plot(vae_losses[\"freq\"], label=\"Wavelet Freq\", color=\"red\")\n",
737
+ "ax.set_title(\"Stage 1: VAE Losses\")\n",
738
+ "ax.set_xlabel(\"Epoch\")\n",
739
+ "ax.set_ylabel(\"Loss\")\n",
740
+ "ax.legend()\n",
741
+ "ax.grid(True, alpha=0.3)\n",
742
+ "ax.set_yscale(\"log\")\n",
743
+ "\n",
744
+ "# Generator losses\n",
745
+ "ax = axes[1]\n",
746
+ "ax.plot(gen_losses[\"velocity\"], label=\"Velocity Loss\", color=\"green\")\n",
747
+ "ax.set_title(\"Stage 2: Generator Velocity Loss\")\n",
748
+ "ax.set_xlabel(\"Epoch\")\n",
749
+ "ax.set_ylabel(\"Loss\")\n",
750
+ "ax.legend()\n",
751
+ "ax.grid(True, alpha=0.3)\n",
752
+ "\n",
753
+ "plt.tight_layout()\n",
754
+ "plt.show()"
755
+ ],
756
+ "outputs": [],
757
+ "execution_count": null
758
+ },
759
+ {
760
+ "cell_type": "markdown",
761
+ "metadata": {},
762
+ "source": [
763
+ "## 9. Save Checkpoint"
764
+ ]
765
+ },
766
+ {
767
+ "cell_type": "code",
768
+ "metadata": {},
769
+ "source": [
770
+ "# Save the trained model\n",
771
+ "import os\n",
772
+ "os.makedirs(\"iris_checkpoint\", exist_ok=True)\n",
773
+ "\n",
774
+ "checkpoint = {\n",
775
+ " \"config\": config,\n",
776
+ " \"iris_state_dict\": iris.state_dict(),\n",
777
+ " \"epoch\": GEN_EPOCHS,\n",
778
+ " \"best_velocity_loss\": best_loss,\n",
779
+ " \"vae_losses\": vae_losses,\n",
780
+ " \"gen_losses\": gen_losses,\n",
781
+ "}\n",
782
+ "torch.save(checkpoint, \"iris_checkpoint/iris_trained.pt\")\n",
783
+ "print(f\"\u2705 Checkpoint saved to iris_checkpoint/iris_trained.pt\")\n",
784
+ "print(f\" File size: {os.path.getsize('iris_checkpoint/iris_trained.pt') / 1024 / 1024:.1f} MB\")\n",
785
+ "\n",
786
+ "# Optional: push to HF Hub\n",
787
+ "# from huggingface_hub import HfApi\n",
788
+ "# api = HfApi()\n",
789
+ "# api.upload_folder(folder_path=\"iris_checkpoint\", repo_id=\"YOUR_USERNAME/iris-trained\")"
790
+ ],
791
+ "outputs": [],
792
+ "execution_count": null
793
+ },
794
+ {
795
+ "cell_type": "markdown",
796
+ "metadata": {},
797
+ "source": [
798
+ "## 10. Inspect Learned Components",
799
+ "",
800
+ "Let's peek inside the trained model to understand what the different pathways learned."
801
+ ]
802
+ },
803
+ {
804
+ "cell_type": "code",
805
+ "metadata": {},
806
+ "source": [
807
+ "# Inspect GRFM pathway gating\n",
808
+ "print(\"=== GRFM Analysis ===\\n\")\n",
809
+ "\n",
810
+ "# Look at the blend gate \u2014 does it prefer Fourier or Recurrence?\n",
811
+ "with torch.no_grad():\n",
812
+ " # Get a sample through the model\n",
813
+ " imgs_sample, caps = next(iter(train_loader))\n",
814
+ " imgs_sample = imgs_sample.to(device)\n",
815
+ " text_emb = encode_text(list(caps))\n",
816
+ "\n",
817
+ " z, _, _ = iris.encode(imgs_sample)\n",
818
+ " noise = torch.randn_like(z)\n",
819
+ " t = torch.tensor([0.5] * z.shape[0], device=device)\n",
820
+ " z_t = iris.add_noise(z, noise, t)\n",
821
+ "\n",
822
+ " # Trace through to get GRFM internal state\n",
823
+ " x = iris.generator.patch_embed(iris.generator.patchify(z_t)) + iris.generator.pos_embed\n",
824
+ " for block in iris.generator.prelude:\n",
825
+ " x = block(x)\n",
826
+ "\n",
827
+ " # Get first core layer's GRFM gate values\n",
828
+ " core_layer = iris.generator.core.layers[0]\n",
829
+ " H, W = iris.generator.patch_h, iris.generator.patch_w\n",
830
+ "\n",
831
+ " # Compute adaLN modulation\n",
832
+ " t_emb = iris.generator.time_embed(t * 1000)\n",
833
+ " i_emb = iris.generator.iter_embed(torch.zeros(z.shape[0], dtype=torch.long, device=device))\n",
834
+ " text_global = iris.generator.text_pool_proj(text_emb.mean(dim=1))\n",
835
+ " c = t_emb + i_emb + text_global\n",
836
+ "\n",
837
+ " s1, sh1, g1, *_ = core_layer.adaln(c)\n",
838
+ " h_normed = core_layer._modulate(core_layer.norm1(x), s1, sh1)\n",
839
+ "\n",
840
+ " # Get the blend gate value from GRFM\n",
841
+ " gate = core_layer.grfm.blend_gate(h_normed) # [B, N, D]\n",
842
+ " gate_mean = gate.mean(dim=(0, 2)) # [N] \u2014 per-position gate\n",
843
+ "\n",
844
+ " # Reshape to 2D\n",
845
+ " gate_2d = gate_mean.reshape(H, W).cpu().numpy()\n",
846
+ "\n",
847
+ " fig, axes = plt.subplots(1, 3, figsize=(15, 5))\n",
848
+ "\n",
849
+ " # Gate heatmap\n",
850
+ " im = axes[0].imshow(gate_2d, cmap='RdBu_r', vmin=0, vmax=1)\n",
851
+ " axes[0].set_title(\"GRFM Blend Gate\\n(red=Fourier, blue=Recurrence)\")\n",
852
+ " plt.colorbar(im, ax=axes[0])\n",
853
+ "\n",
854
+ " # Manhattan decay gammas\n",
855
+ " gammas = torch.sigmoid(core_layer.grfm.spatial.gamma_logit).cpu().numpy()\n",
856
+ " axes[1].bar(range(len(gammas)), gammas)\n",
857
+ " axes[1].set_title(\"Manhattan Spatial Decay \u03b3 per Head\\n(lower=more local)\")\n",
858
+ " axes[1].set_xlabel(\"Head\")\n",
859
+ " axes[1].set_ylabel(\"\u03b3\")\n",
860
+ " axes[1].set_ylim(0, 1)\n",
861
+ "\n",
862
+ " # Fourier sparsity (how many coefficients survive soft-shrink)\n",
863
+ " x_2d = h_normed.reshape(h_normed.shape[0], H, W, h_normed.shape[-1])\n",
864
+ " x_freq = torch.fft.rfft2(x_2d, dim=(1, 2), norm='ortho')\n",
865
+ " magnitude = x_freq.abs()\n",
866
+ " threshold = core_layer.grfm.fourier.sparsity_threshold\n",
867
+ " alive = (magnitude > threshold).float().mean().item()\n",
868
+ " axes[2].text(0.5, 0.5, f\"Fourier coefficients\\nabove threshold:\\n{alive*100:.1f}%\",\n",
869
+ " ha='center', va='center', fontsize=16,\n",
870
+ " transform=axes[2].transAxes)\n",
871
+ " axes[2].set_title(\"Fourier Domain Sparsity\")\n",
872
+ " axes[2].axis(\"off\")\n",
873
+ "\n",
874
+ " plt.tight_layout()\n",
875
+ " plt.show()"
876
+ ],
877
+ "outputs": [],
878
+ "execution_count": null
879
+ },
880
+ {
881
+ "cell_type": "markdown",
882
+ "metadata": {},
883
+ "source": [
884
+ "## 11. \ud83d\ude80 Next Steps \u2014 Scaling Up",
885
+ "",
886
+ "This notebook trained on ~800 images as a **proof of concept**. To get production quality:",
887
+ "",
888
+ "### Datasets for Each Training Stage",
889
+ "",
890
+ "| Stage | Dataset | Size | HF ID |",
891
+ "|-------|---------|------|-------|",
892
+ "| 1. VAE | ImageNet + CC3M | 4.2M images | `ILSVRC/imagenet-1k`, `pixparse/cc3m-wds` |",
893
+ "| 2. Class-Cond | ImageNet | 1.2M images | `ILSVRC/imagenet-1k` |",
894
+ "| 3. Text-Image | CC12M (VLM-recaptioned) | 12M images | `pixparse/cc12m-wds` |",
895
+ "| 4. Aesthetic | JourneyDB + LAION-art | ~1M images | `caidas/JourneyDB` |",
896
+ "| 5. Distillation | Self-distill from Stage 4 | Same data | \u2014 |",
897
+ "",
898
+ "### Optimization Tips for Larger Runs",
899
+ "```python",
900
+ "# On Kaggle with 2\u00d7 T4:",
901
+ "# Use accelerate for multi-GPU",
902
+ "# accelerate launch --num_processes 2 train.py",
903
+ "",
904
+ "# On Colab Pro (A100 40GB):",
905
+ "BATCH_SIZE = 16",
906
+ "GEN_EPOCHS = 500",
907
+ "config = create_iris_small().config # Upgrade to IRIS-Small",
908
+ "",
909
+ "# For production (cloud GPUs):",
910
+ "# Use IRIS-Base with 8\u00d7 A100",
911
+ "# Add LADD adversarial distillation in Stage 5",
912
+ "# Train for 200k+ steps on CC12M",
913
+ "```",
914
+ "",
915
+ "### Model Size Recommendations",
916
+ "| Use Case | Model | Batch | Resolution | GPU |",
917
+ "|----------|-------|-------|-----------|-----|",
918
+ "| Demo/Proof | IRIS-Tiny | 4 | 256px | T4 16GB |",
919
+ "| Mobile deploy | IRIS-Small | 8 | 512px | A100 40GB |",
920
+ "| Quality focus | IRIS-Base | 16 | 512px | 2\u00d7A100 |",
921
+ "| Production | IRIS-Base | 64 | 1024px | 8\u00d7A100 |"
922
+ ]
923
+ },
924
+ {
925
+ "cell_type": "markdown",
926
+ "metadata": {},
927
+ "source": [
928
+ "## 12. Kaggle Adaptation",
929
+ "",
930
+ "To run this on **Kaggle**, just change one thing:",
931
+ "",
932
+ "```python",
933
+ "# In Kaggle, GPU is already available. Just:",
934
+ "# 1. Copy this notebook to Kaggle",
935
+ "# 2. Enable \"GPU T4 \u00d72\" or \"GPU P100\" in accelerator settings",
936
+ "# 3. Run all cells!",
937
+ "",
938
+ "# For Kaggle's dual-T4 setup, use DataParallel:",
939
+ "if torch.cuda.device_count() > 1:",
940
+ " print(f\"Using {torch.cuda.device_count()} GPUs!\")",
941
+ " iris.generator = torch.nn.DataParallel(iris.generator)",
942
+ "```",
943
+ "",
944
+ "The training loop works identically on both platforms. \ud83c\udf89"
945
+ ]
946
+ },
947
+ {
948
+ "cell_type": "markdown",
949
+ "metadata": {},
950
+ "source": [
951
+ "---",
952
+ "*Built with \u2764\ufe0f using the IRIS architecture. Repository: [asdf98/IRIS-architecture](https://huggingface.co/asdf98/IRIS-architecture)*"
953
+ ]
954
+ }
955
+ ]
956
+ }