AbstractPhil committed on
Commit 0f1f862 · verified · 1 Parent(s): db9f957

Create trainer_v6_fp64_geometric_coalescence.py

trainer_v6_fp64_geometric_coalescence.py ADDED
@@ -0,0 +1,1520 @@
# train_cantor_fusion_hf.py - WITH GEOMETRIC COALESCENCE LOSS

"""
Cantor Fusion Classifier with AdamW + Warm Restarts + LR Boost + Coalescence Loss
-----------------------------------------------------------------------------------
Features:
- AdamW optimizer (best for ViTs)
- CosineAnnealingWarmRestarts with configurable LR boost at restarts
- GeometricCoalescenceLoss: Unsupervised geometric supervision for shatter-reconstruct
- HuggingFace Hub uploads (ONE shared repo, organized by run)
- TensorBoard logging (loss, accuracy, fusion metrics, LR tracking, coalescence)
- SafeTensors format (ClamAV safe)

New Feature: Geometric Coalescence Loss
Provides geometric scaffolding during aggressive LR boosting:
- Consciousness Anchoring: High-awareness tokens cluster around learned attractors
- Distance Preservation: Cantor measure topology guides embedding distances
- Volume Preservation: Maintains simplex structural integrity
- Adaptive weighting: Increases stabilization during LR spikes (0.1 → 0.8)

Author: AbstractPhil
License: MIT
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torchvision import datasets, transforms
from torch.cuda.amp import autocast, GradScaler
from safetensors.torch import save_file, load_file

import math
import os
import json
from typing import Optional, Dict, List, Tuple, Union
from dataclasses import dataclass, asdict
import time
from pathlib import Path
from tqdm import tqdm

# HuggingFace
from huggingface_hub import HfApi, create_repo, upload_folder, upload_file
import yaml

# Import from your repo
from geovocab2.train.model.layers.attention.cantor_multiheaded_fusion_fp64 import (
    CantorMultiheadFusion,
    CantorFusionConfig
)
from geovocab2.shapes.factory.cantor_route_factory import (
    CantorRouteFactory,
    RouteMode,
    SimplexConfig
)
from geovocab2.train.losses.geometric_coalescence_loss import (
    GeometricCoalescenceLoss,
    add_coalescence_loss_to_training
)

# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Mixing Augmentations (AlphaMix / Fractal AlphaMix)
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

def alphamix_data(x, y, alpha_range=(0.3, 0.7), spatial_ratio=0.25):
    """Standard AlphaMix: Single spatially localized transparent overlay."""
    batch_size = x.size(0)
    index = torch.randperm(batch_size, device=x.device)

    y_a, y_b = y, y[index]

    # Sample alpha from Beta distribution
    alpha_min, alpha_max = alpha_range
    beta_sample = torch.distributions.Beta(2.0, 2.0).sample().item()
    alpha = alpha_min + (alpha_max - alpha_min) * beta_sample

    # Compute overlay region
    _, _, H, W = x.shape
    overlay_ratio = torch.sqrt(torch.tensor(spatial_ratio)).item()
    overlay_h = int(H * overlay_ratio)
    overlay_w = int(W * overlay_ratio)

    top = torch.randint(0, H - overlay_h + 1, (1,), device=x.device).item()
    left = torch.randint(0, W - overlay_w + 1, (1,), device=x.device).item()

    # Blend
    composited_x = x.clone()
    overlay_region = alpha * x[:, :, top:top+overlay_h, left:left+overlay_w]
    background_region = (1 - alpha) * x[index, :, top:top+overlay_h, left:left+overlay_w]
    composited_x[:, :, top:top+overlay_h, left:left+overlay_w] = overlay_region + background_region

    return composited_x, y_a, y_b, alpha


def alphamix_fractal(
    x: torch.Tensor,
    y: torch.Tensor,
    alpha_range=(0.3, 0.7),
    steps_range=(1, 3),
    triad_scales=(1/3, 1/9, 1/27),
    beta_shape=(2.0, 2.0),
    seed: Optional[int] = None,
):
    """Fractal AlphaMix: Triadic multi-patch overlays aligned to Cantor geometry."""
    if seed is not None:
        torch.manual_seed(seed)

    B, C, H, W = x.shape
    device = x.device

    # Permutation for mixing
    idx = torch.randperm(B, device=device)
    y_a, y_b = y, y[idx]

    x_mix = x.clone()
    total_area = H * W

    # Beta distribution for transparency sampling
    k1, k2 = beta_shape
    beta_dist = torch.distributions.Beta(k1, k2)
    alpha_min, alpha_max = alpha_range

    # Storage for effective alpha calculation
    alpha_elems = []
    area_weights = []

    # Sample number of patches
    steps = torch.randint(steps_range[0], steps_range[1] + 1, (1,), device=device).item()

    for _ in range(steps):
        # Choose triadic scale
        scale_idx = torch.randint(0, len(triad_scales), (1,), device=device).item()
        scale = triad_scales[scale_idx]

        # Compute patch dimensions
        patch_area = max(1, int(total_area * scale))
        side = int(torch.sqrt(torch.tensor(patch_area, dtype=torch.float32)).item())
        h = max(1, min(H, side))
        w = max(1, min(W, side))

        # Random position
        top = torch.randint(0, H - h + 1, (1,), device=device).item()
        left = torch.randint(0, W - w + 1, (1,), device=device).item()

        # Sample transparency
        alpha_raw = beta_dist.sample().item()
        alpha = alpha_min + (alpha_max - alpha_min) * alpha_raw

        # Track for effective alpha
        alpha_elems.append(alpha)
        area_weights.append(h * w)

        # Blend patches
        fg = alpha * x[:, :, top:top + h, left:left + w]
        bg = (1 - alpha) * x[idx, :, top:top + h, left:left + w]
        x_mix[:, :, top:top + h, left:left + w] = fg + bg

    # Compute area-weighted effective alpha
    alpha_t = torch.tensor(alpha_elems, dtype=torch.float32, device=device)
    area_t = torch.tensor(area_weights, dtype=torch.float32, device=device)
    alpha_eff = (alpha_t * area_t).sum() / (area_t.sum() + 1e-12)
    alpha_eff = alpha_eff.item()

    return x_mix, y_a, y_b, alpha_eff

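# --- Usage sketch (illustrative only; not invoked here) ------------------------
# Both mixers return (mixed_images, y_a, y_b, alpha); the trainer's
# compute_mixed_loss() then forms the alpha-weighted pair of cross-entropies:
#
#     x = torch.randn(8, 3, 32, 32)
#     y = torch.randint(0, 10, (8,))
#     x_mix, y_a, y_b, a = alphamix_fractal(x, y, seed=0)
#     # loss = a * ce(logits, y_a) + (1 - a) * ce(logits, y_b)
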
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Custom Scheduler with LR Boost at Restarts
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

class CosineAnnealingWarmRestartsWithBoost(torch.optim.lr_scheduler._LRScheduler):
    """Cosine Annealing with Warm Restarts and optional LR boost at restart points."""

    def __init__(
        self,
        optimizer: torch.optim.Optimizer,
        T_0: int,
        T_mult: float = 1,
        eta_min: float = 0,
        restart_lr_mult: float = 1.0,
        last_epoch: int = -1
    ):
        if T_0 <= 0 or not isinstance(T_0, int):
            raise ValueError(f"Expected positive integer T_0, but got {T_0}")
        if T_mult < 1:
            raise ValueError(f"Expected T_mult >= 1, but got {T_mult}")
        if restart_lr_mult <= 0:
            raise ValueError(f"Expected positive restart_lr_mult, but got {restart_lr_mult}")

        self.T_0 = T_0
        self.T_i = T_0
        self.T_mult = T_mult
        self.eta_min = eta_min
        self.restart_lr_mult = restart_lr_mult
        self.T_cur = last_epoch

        # Track boosted base LRs and restart count
        self.current_base_lrs = None
        self.restart_count = 0

        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        if self.T_cur == -1:
            return self.base_lrs

        # Use boosted base LRs if we've had restarts
        if self.current_base_lrs is None:
            base_lrs_to_use = self.base_lrs
        else:
            base_lrs_to_use = self.current_base_lrs

        # Cosine annealing from current base LR to eta_min
        return [
            self.eta_min + (base_lr - self.eta_min) *
            (1 + math.cos(math.pi * self.T_cur / self.T_i)) / 2
            for base_lr in base_lrs_to_use
        ]

    def step(self, epoch=None):
        if epoch is None and self.last_epoch < 0:
            epoch = 0

        if epoch is None:
            epoch = self.last_epoch + 1
            self.T_cur = self.T_cur + 1

            # Check if we hit a restart point
            if self.T_cur >= self.T_i:
                # APPLY BOOST HERE before reset
                self.restart_count += 1
                if self.current_base_lrs is None:
                    self.current_base_lrs = list(self.base_lrs)

                # Boost the base LRs
                self.current_base_lrs = [
                    base_lr * self.restart_lr_mult
                    for base_lr in self.current_base_lrs
                ]

                # Now reset cycle
                self.T_cur = self.T_cur - self.T_i
                self.T_i = int(self.T_i * self.T_mult)
        else:
            if epoch < 0:
                raise ValueError(f"Expected non-negative epoch, but got {epoch}")
            if epoch >= self.T_0:
                if self.T_mult == 1:
                    self.T_cur = epoch % self.T_0
                    self.restart_count = epoch // self.T_0
                else:
                    n = int(math.log((epoch / self.T_0 * (self.T_mult - 1) + 1), self.T_mult))
                    self.restart_count = n
                    self.T_cur = epoch - self.T_0 * (self.T_mult ** n - 1) / (self.T_mult - 1)
                    self.T_i = self.T_0 * self.T_mult ** n

                # Apply cumulative boost
                if self.current_base_lrs is None:
                    self.current_base_lrs = [
                        base_lr * (self.restart_lr_mult ** self.restart_count)
                        for base_lr in self.base_lrs
                    ]
            else:
                self.T_i = self.T_0
                self.T_cur = epoch

        self.last_epoch = math.floor(epoch)

        for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
            param_group['lr'] = lr

        self._last_lr = [group['lr'] for group in self.optimizer.param_groups]

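# --- Boost schedule sketch (illustrative) ---------------------------------------
# With T_0=50, T_mult=2 and restart_lr_mult=1.15, cycle lengths grow
# 50 → 100 → 200 epochs and the peak LR at restart k is base_lr * 1.15**k:
#
#     opt = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=3e-4)
#     sched = CosineAnnealingWarmRestartsWithBoost(opt, T_0=50, T_mult=2,
#                                                  restart_lr_mult=1.15)
#     for _ in range(150):
#         sched.step()  # LR re-peaks above 3e-4 after each restart
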
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Configuration
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

@dataclass
class CantorTrainingConfig:
    """Complete configuration for Cantor fusion training with coalescence loss."""

    # Dataset
    dataset: str = "cifar10"
    num_classes: int = 10

    # Architecture
    image_size: int = 32
    patch_size: int = 4
    embed_dim: int = 384
    num_fusion_blocks: int = 6
    num_heads: int = 8
    fusion_window: int = 32
    fusion_mode: str = "weighted"
    k_simplex: int = 4
    use_beatrix: bool = False
    beatrix_tau: float = 0.25

    # Optimization
    precompute_geometric: bool = True
    use_torch_compile: bool = True
    use_mixed_precision: bool = False

    # Regularization
    dropout: float = 0.1
    drop_path_rate: float = 0.1
    label_smoothing: float = 0.1

    # Training - Optimizer (AdamW)
    optimizer_type: str = "adamw"
    batch_size: int = 128
    num_epochs: int = 300
    learning_rate: float = 3e-4
    weight_decay: float = 0.05
    grad_clip: float = 1.0

    # SGD-specific
    sgd_momentum: float = 0.9
    sgd_nesterov: bool = True

    # AdamW-specific
    adamw_betas: Tuple[float, float] = (0.9, 0.999)
    adamw_eps: float = 1e-8

    # Learning rate schedule - WARM RESTARTS WITH BOOST
    scheduler_type: str = "cosine_restarts"
    restart_period: int = 50
    restart_mult: float = 2.0
    restart_lr_mult: float = 1.0  # LR multiplier at restarts
    min_lr: float = 1e-7

    # MultiStepLR (fallback)
    lr_milestones: List[int] = None
    lr_gamma: float = 0.2

    # Cosine annealing (regular)
    warmup_epochs: int = 0

    # Data augmentation
    use_augmentation: bool = True
    use_autoaugment: bool = True
    use_cutout: bool = False
    cutout_length: int = 16

    # Mixing augmentation
    use_mixing: bool = False
    mixing_type: str = "alphamix"
    mixing_alpha_range: Tuple[float, float] = (0.3, 0.7)
    mixing_spatial_ratio: float = 0.25
    mixing_prob: float = 1.0
    fractal_steps_range: Tuple[int, int] = (1, 3)
    fractal_triad_scales: Tuple[float, ...] = (1/3, 1/9, 1/27)

    # Geometric Coalescence Loss
    use_coalescence_loss: bool = True
    lambda_coalescence: float = 0.5
    coalescence_num_anchors: int = 64
    coalescence_target_variance: float = 0.5
    coalescence_base_weight: float = 0.1
    coalescence_max_weight: float = 0.8
    coalescence_weight_power: float = 2.0
    coalescence_consciousness_weight: float = 0.3
    coalescence_distance_weight: float = 0.4
    coalescence_volume_weight: float = 0.3
    coalescence_num_distance_pairs: int = 256
    coalescence_num_simplex_samples: int = 32

    # System
    device: str = "cuda" if torch.cuda.is_available() else "cpu"
    num_workers: int = 8
    seed: int = 42

    # Paths
    weights_dir: str = "weights"
    model_name: str = "vit-beans-v3"
    run_name: Optional[str] = None

    # HuggingFace
    hf_username: str = "AbstractPhil"
    hf_repo_name: Optional[str] = None
    upload_to_hf: bool = True
    hf_token: Optional[str] = None

    # Logging
    log_interval: int = 50
    save_interval: int = 10
    checkpoint_upload_interval: int = 20

    def __post_init__(self):
        # Auto-set num_classes
        if self.dataset == "cifar10":
            self.num_classes = 10
        elif self.dataset == "cifar100":
            self.num_classes = 100
        else:
            raise ValueError(f"Unknown dataset: {self.dataset}")

        # Set default milestones
        if self.lr_milestones is None:
            if self.num_epochs >= 200:
                self.lr_milestones = [60, 120, 160]
            elif self.num_epochs >= 100:
                self.lr_milestones = [30, 60, 80]
            else:
                self.lr_milestones = [
                    int(self.num_epochs * 0.5),
                    int(self.num_epochs * 0.75)
                ]

        # Auto-generate run name
        if self.run_name is None:
            timestamp = time.strftime("%Y%m%d_%H%M%S")
            opt_name = self.optimizer_type.upper()
            sched_name = "WarmRestart" if self.scheduler_type == "cosine_restarts" else self.scheduler_type
            boost_str = f"_boost{self.restart_lr_mult}x" if self.restart_lr_mult > 1.0 else ""
            coal_str = f"_coal{self.lambda_coalescence}" if self.use_coalescence_loss else ""
            self.run_name = f"{self.dataset}_{self.fusion_mode}_{opt_name}_{sched_name}{boost_str}{coal_str}_{timestamp}"

        # ONE SHARED REPO
        if self.hf_repo_name is None:
            self.hf_repo_name = self.model_name

        # Set HF token
        if self.hf_token is None:
            self.hf_token = os.environ.get("HF_TOKEN")

        # Calculate derived values
        assert self.image_size % self.patch_size == 0
        self.num_patches = (self.image_size // self.patch_size) ** 2
        self.patch_dim = self.patch_size * self.patch_size * 3

        # Create paths
        self.output_dir = Path(self.weights_dir) / self.model_name / self.run_name
        self.checkpoint_dir = self.output_dir / "checkpoints"
        self.tensorboard_dir = self.output_dir / "tensorboard"

        # Create directories
        self.output_dir.mkdir(parents=True, exist_ok=True)
        self.checkpoint_dir.mkdir(parents=True, exist_ok=True)
        self.tensorboard_dir.mkdir(parents=True, exist_ok=True)

    def save(self, path: Union[str, Path]):
        """Save config to YAML file."""
        path = Path(path)
        config_dict = asdict(self)
        # Convert tuples to lists for YAML
        if 'adamw_betas' in config_dict:
            config_dict['adamw_betas'] = list(config_dict['adamw_betas'])
        if 'mixing_alpha_range' in config_dict:
            config_dict['mixing_alpha_range'] = list(config_dict['mixing_alpha_range'])
        if 'fractal_steps_range' in config_dict:
            config_dict['fractal_steps_range'] = list(config_dict['fractal_steps_range'])
        if 'fractal_triad_scales' in config_dict:
            config_dict['fractal_triad_scales'] = list(config_dict['fractal_triad_scales'])
        with open(path, 'w') as f:
            yaml.dump(config_dict, f, default_flow_style=False)

    @classmethod
    def load(cls, path: Union[str, Path]):
        """Load config from YAML file."""
        path = Path(path)
        with open(path, 'r') as f:
            config_dict = yaml.safe_load(f)
        # Convert lists back to tuples
        if 'adamw_betas' in config_dict:
            config_dict['adamw_betas'] = tuple(config_dict['adamw_betas'])
        if 'mixing_alpha_range' in config_dict:
            config_dict['mixing_alpha_range'] = tuple(config_dict['mixing_alpha_range'])
        if 'fractal_steps_range' in config_dict:
            config_dict['fractal_steps_range'] = tuple(config_dict['fractal_steps_range'])
        if 'fractal_triad_scales' in config_dict:
            config_dict['fractal_triad_scales'] = tuple(config_dict['fractal_triad_scales'])
        return cls(**config_dict)

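# --- Config round-trip sketch (illustrative) ------------------------------------
# asdict() only serializes declared fields, so derived attributes (num_patches,
# output_dir, ...) are recomputed by __post_init__ on load:
#
#     cfg = CantorTrainingConfig(dataset="cifar10")
#     cfg.save(cfg.output_dir / "config.yaml")
#     restored = CantorTrainingConfig.load(cfg.output_dir / "config.yaml")
#     assert restored.adamw_betas == cfg.adamw_betas  # tuples survive the trip
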
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Model Components
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

class PatchEmbedding(nn.Module):
    """Patch embedding layer."""
    def __init__(self, config: CantorTrainingConfig):
        super().__init__()
        self.config = config
        self.proj = nn.Conv2d(3, config.embed_dim, kernel_size=config.patch_size, stride=config.patch_size)
        self.pos_embed = nn.Parameter(torch.randn(1, config.num_patches, config.embed_dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.proj(x)
        x = x.flatten(2).transpose(1, 2)
        x = x + self.pos_embed
        return x


class DropPath(nn.Module):
    """Stochastic depth."""
    def __init__(self, drop_prob: float = 0.0):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        if self.drop_prob == 0. or not self.training:
            return x
        keep_prob = 1 - self.drop_prob
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)
        random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
        random_tensor.floor_()
        return x.div(keep_prob) * random_tensor


class CantorFusionBlock(nn.Module):
    """Cantor fusion block."""
    def __init__(self, config: CantorTrainingConfig, drop_path: float = 0.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(config.embed_dim)

        fusion_config = CantorFusionConfig(
            dim=config.embed_dim,
            num_heads=config.num_heads,
            fusion_window=config.fusion_window,
            fusion_mode=config.fusion_mode,
            k_simplex=config.k_simplex,
            use_beatrix_routing=config.use_beatrix,
            use_consciousness_weighting=(config.fusion_mode == "consciousness"),
            beatrix_tau=config.beatrix_tau,
            use_gating=True,
            dropout=config.dropout,
            residual=False,
            precompute_staircase=config.precompute_geometric,
            precompute_routes=config.precompute_geometric,
            precompute_distances=config.precompute_geometric,
            use_optimized_gather=True,
            staircase_cache_sizes=[config.num_patches],
            use_torch_compile=config.use_torch_compile
        )
        self.fusion = CantorMultiheadFusion(fusion_config)

        self.norm2 = nn.LayerNorm(config.embed_dim)
        mlp_hidden = config.embed_dim * 4
        self.mlp = nn.Sequential(
            nn.Linear(config.embed_dim, mlp_hidden),
            nn.GELU(),
            nn.Dropout(config.dropout),
            nn.Linear(mlp_hidden, config.embed_dim),
            nn.Dropout(config.dropout)
        )
        self.drop_path = DropPath(drop_path) if drop_path > 0 else nn.Identity()

    def forward(self, x: torch.Tensor, return_fusion_info: bool = False) -> Union[torch.Tensor, Tuple[torch.Tensor, Dict]]:
        fusion_result = self.fusion(self.norm1(x))
        x = x + self.drop_path(fusion_result['output'])
        x_after_fusion = x  # Save for coalescence loss
        x = x + self.drop_path(self.mlp(self.norm2(x)))

        if return_fusion_info:
            fusion_info = {
                'output': x_after_fusion,  # Embeddings after fusion (before MLP)
                'consciousness': fusion_result.get('consciousness'),
                'cantor_measure': fusion_result.get('cantor_measure')
            }
            return x, fusion_info
        return x


class CantorClassifier(nn.Module):
    """Cantor fusion classifier."""
    def __init__(self, config: CantorTrainingConfig):
        super().__init__()
        self.config = config

        self.patch_embed = PatchEmbedding(config)

        dpr = [x.item() for x in torch.linspace(0, config.drop_path_rate, config.num_fusion_blocks)]
        self.blocks = nn.ModuleList([
            CantorFusionBlock(config, drop_path=dpr[i])
            for i in range(config.num_fusion_blocks)
        ])

        self.norm = nn.LayerNorm(config.embed_dim)
        self.head = nn.Linear(config.embed_dim, config.num_classes)

        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            nn.init.trunc_normal_(m.weight, std=0.02)
            if m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)
        elif isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    def forward(self, x: torch.Tensor, return_fusion_info: bool = False) -> Union[torch.Tensor, Tuple[torch.Tensor, List[Dict]]]:
        x = self.patch_embed(x)

        fusion_infos = []
        for i, block in enumerate(self.blocks):
            if return_fusion_info and i == len(self.blocks) - 1:
                x, fusion_info = block(x, return_fusion_info=True)
                fusion_infos.append(fusion_info)
            else:
                x = block(x)

        x = self.norm(x)
        x = x.mean(dim=1)
        logits = self.head(x)

        if return_fusion_info:
            return logits, fusion_infos
        return logits

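# --- Shape sanity sketch (illustrative; avoids the geovocab2 dependency) --------
# PatchEmbedding maps (B, 3, 32, 32) → (B, num_patches, embed_dim) with a strided
# conv. The same arithmetic, standalone:
#
#     proj = nn.Conv2d(3, 384, kernel_size=4, stride=4)
#     tokens = proj(torch.randn(2, 3, 32, 32)).flatten(2).transpose(1, 2)
#     assert tokens.shape == (2, (32 // 4) ** 2, 384)  # (B, 64, embed_dim)
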
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# HuggingFace Integration (unchanged)
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

class HuggingFaceUploader:
    """Manages HuggingFace Hub uploads to ONE shared repo."""

    def __init__(self, config: CantorTrainingConfig):
        self.config = config
        self.api = HfApi(token=config.hf_token) if config.upload_to_hf else None
        self.repo_id = f"{config.hf_username}/{config.hf_repo_name}"
        self.run_prefix = f"runs/{config.run_name}"

        if config.upload_to_hf:
            self._create_repo()
            self._update_main_readme()

    def _create_repo(self):
        """Create HuggingFace repo if it doesn't exist."""
        try:
            create_repo(
                repo_id=self.repo_id,
                token=self.config.hf_token,
                exist_ok=True,
                private=False
            )
            print(f"[HF] Repository: https://huggingface.co/{self.repo_id}")
            print(f"[HF] Run folder: {self.run_prefix}")
        except Exception as e:
            print(f"[HF] Warning: Could not create repo: {e}")

    def _update_main_readme(self):
        """Create or update the main shared README at repo root."""
        if not self.config.upload_to_hf or self.api is None:
            return

        boost_info = ""
        if self.config.restart_lr_mult > 1.0:
            boost_info = f"""
### 🚀 LR Boost + Geometric Coalescence
This run uses **restart_lr_mult = {self.config.restart_lr_mult}x** with **GeometricCoalescenceLoss**:
- LR boosts create aggressive exploration cycles
- Coalescence loss provides geometric scaffolding during weight thrashing
- Adaptive weighting: {self.config.coalescence_base_weight} → {self.config.coalescence_max_weight} during LR spikes
- Model reconstructs from geometric first principles when patterns shatter
"""

        main_readme = f"""---
tags:
- image-classification
- cantor-fusion
- geometric-deep-learning
- safetensors
- vision-transformer
- warm-restarts
- geometric-coalescence
library_name: pytorch
datasets:
- cifar10
- cifar100
metrics:
- accuracy
---

# {self.config.hf_repo_name}

**Geometric Deep Learning with Cantor Multihead Fusion + Shatter-Reconstruct Training**

This repository contains training runs using Cantor fusion architecture with:
- Pentachoron (5-simplex) structures for geometric routing
- CosineAnnealingWarmRestarts for exploration cycles
- GeometricCoalescenceLoss for shatter-reconstruct training
{boost_info}

## Current Run

**Latest**: `{self.config.run_name}`
- **Dataset**: {self.config.dataset.upper()}
- **Fusion Mode**: {self.config.fusion_mode}
- **Coalescence**: λ={self.config.lambda_coalescence} {'✓' if self.config.use_coalescence_loss else '✗'}
- **LR Boost**: {self.config.restart_lr_mult}x {'🚀' if self.config.restart_lr_mult > 1.0 else ''}

---

**Repository maintained by**: [@{self.config.hf_username}](https://huggingface.co/{self.config.hf_username})
"""

        main_readme_path = Path(self.config.weights_dir) / self.config.model_name / "MAIN_README.md"
        main_readme_path.parent.mkdir(parents=True, exist_ok=True)
        with open(main_readme_path, 'w') as f:
            f.write(main_readme)

        try:
            upload_file(
                path_or_fileobj=str(main_readme_path),
                path_in_repo="README.md",
                repo_id=self.repo_id,
                token=self.config.hf_token
            )
            print(f"[HF] Updated main README")
        except Exception as e:
            print(f"[HF] Main README upload failed: {e}")

    def upload_file(self, file_path: Path, repo_path: str):
        """Upload single file to HuggingFace."""
        if not self.config.upload_to_hf or self.api is None:
            return

        # Resolve the destination path before the try block so the except
        # handler can always reference it (a failure before assignment would
        # otherwise raise a NameError inside the handler).
        if not repo_path.startswith(self.run_prefix) and not repo_path.startswith("runs/"):
            full_path = f"{self.run_prefix}/{repo_path}"
        else:
            full_path = repo_path

        try:
            upload_file(
                path_or_fileobj=str(file_path),
                path_in_repo=full_path,
                repo_id=self.repo_id,
                token=self.config.hf_token
            )
            print(f"[HF] ✓ Uploaded: {full_path}")
        except Exception as e:
            print(f"[HF] ✗ Upload failed ({full_path}): {e}")

    def upload_folder_contents(self, folder_path: Path, repo_folder: str):
        """Upload entire folder to HuggingFace."""
        if not self.config.upload_to_hf or self.api is None:
            return

        try:
            full_path = f"{self.run_prefix}/{repo_folder}"
            upload_folder(
                folder_path=str(folder_path),
                repo_id=self.repo_id,
                path_in_repo=full_path,
                token=self.config.hf_token,
                ignore_patterns=["*.pyc", "__pycache__"]
            )
            print(f"[HF] Uploaded folder: {full_path}")
        except Exception as e:
            print(f"[HF] Folder upload failed: {e}")

    def create_model_card(self, trainer_stats: Dict):
        """Create and upload run-specific model card."""
        if not self.config.upload_to_hf:
            return

        # Create run card with coalescence info
        run_card = f"""# Run: {self.config.run_name}

## Configuration
- **Dataset**: {self.config.dataset.upper()}
- **Parameters**: {trainer_stats['total_params']:,}
- **Coalescence Loss**: {'Enabled' if self.config.use_coalescence_loss else 'Disabled'}
- **LR Boost**: {self.config.restart_lr_mult}x

## Performance
- **Best Validation Accuracy**: {trainer_stats['best_acc']:.2f}%
- **Training Time**: {trainer_stats['training_time']:.1f} hours

---

Built with geometric shatter-reconstruct training.

**Training completed**: {time.strftime("%Y-%m-%d %H:%M:%S")}
"""

        readme_path = self.config.output_dir / "RUN_README.md"
        with open(readme_path, 'w') as f:
            f.write(run_card)

        try:
            upload_file(
                path_or_fileobj=str(readme_path),
                path_in_repo=f"{self.run_prefix}/README.md",
                repo_id=self.repo_id,
                token=self.config.hf_token
            )
            print(f"[HF] Uploaded run README")
        except Exception as e:
            print(f"[HF] Run README upload failed: {e}")

# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Trainer with Geometric Coalescence Loss
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

class Trainer:
    """Training manager with AdamW + Warm Restarts + Coalescence Loss."""

    def __init__(self, config: CantorTrainingConfig):
        self.config = config
        self.device = torch.device(config.device)

        # Set seed
        torch.manual_seed(config.seed)
        if torch.cuda.is_available():
            torch.cuda.manual_seed(config.seed)

        # Model
        print("\n" + "=" * 70)
        print(f"Initializing Cantor Classifier - {config.dataset.upper()}")
        print("=" * 70)

        init_start = time.time()
        self.model = CantorClassifier(config).to(self.device)
        init_time = time.time() - init_start

        print(f"\n[Model] Initialization time: {init_time:.2f}s")
        self.print_model_info()

        # Track restart epochs
        self.restart_epochs = self._calculate_restart_epochs()

        # Optimizer
        self.optimizer = self.create_optimizer()

        # Scheduler
        self.scheduler = self.create_scheduler()

        # Loss
        self.criterion = nn.CrossEntropyLoss(label_smoothing=config.label_smoothing)

        # Geometric Coalescence Loss
        if config.use_coalescence_loss:
            print(f"\n[Coalescence Loss] Initializing...")
            self.coalescence_loss_fn = GeometricCoalescenceLoss(
                embed_dim=config.embed_dim,
                num_anchors=config.coalescence_num_anchors,
                k_simplex=config.k_simplex,
                target_variance=config.coalescence_target_variance,
                num_simplex_samples=config.coalescence_num_simplex_samples,
                num_distance_pairs=config.coalescence_num_distance_pairs,
                base_weight=config.coalescence_base_weight,
                max_weight=config.coalescence_max_weight,
                weight_power=config.coalescence_weight_power,
                consciousness_weight=config.coalescence_consciousness_weight,
                distance_weight=config.coalescence_distance_weight,
                volume_weight=config.coalescence_volume_weight
            ).to(self.device)

            print(f"[Coalescence] λ={config.lambda_coalescence}")
            print(f"[Coalescence] Adaptive weight: {config.coalescence_base_weight} → {config.coalescence_max_weight}")
            print(f"[Coalescence] Components: anchor={config.coalescence_consciousness_weight:.1f}, "
                  f"dist={config.coalescence_distance_weight:.1f}, vol={config.coalescence_volume_weight:.1f}")
        else:
            self.coalescence_loss_fn = None
            print(f"\n[Coalescence Loss] Disabled")

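        # --- Adaptive-weight sketch (hypothetical; the real schedule lives inside
        # GeometricCoalescenceLoss, which is defined outside this file) ---------
        # One plausible way the base→max ramp described above could follow the
        # LR ratio, using the base_weight/max_weight/weight_power config knobs:
        #
        #     def adaptive_weight(current_lr, baseline_lr,
        #                         base=0.1, mx=0.8, power=2.0):
        #         spike = max(0.0, current_lr / baseline_lr - 1.0)  # >0 on boosts
        #         return base + (mx - base) * min(1.0, spike) ** power
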
        # Mixing info
        self.use_mixing = config.use_mixing
        self.mixing_type = config.mixing_type
        self.mixing_prob = config.mixing_prob

        # Mixed precision
        self.use_amp = config.use_mixed_precision and config.device == "cuda"
        self.scaler = GradScaler() if self.use_amp else None

        if self.use_amp:
            print(f"[Training] Mixed precision enabled")

        # TensorBoard
        self.writer = SummaryWriter(log_dir=str(config.tensorboard_dir))
        print(f"[TensorBoard] Logging to: {config.tensorboard_dir}")

        # HuggingFace
        self.hf_uploader = HuggingFaceUploader(config) if config.upload_to_hf else None

        # Save config
        config.save(config.output_dir / "config.yaml")

        # Metrics
        self.best_acc = 0.0
        self.global_step = 0
        self.start_time = time.time()
        self.upload_count = 0

    def apply_mixing(self, images: torch.Tensor, labels: torch.Tensor):
        """Apply mixing augmentation if enabled."""
        if not self.use_mixing or torch.rand(1).item() > self.mixing_prob:
            return images, labels, None

        if self.mixing_type == "alphamix":
            mixed_images, y_a, y_b, alpha = alphamix_data(
                images, labels,
                alpha_range=self.config.mixing_alpha_range,
                spatial_ratio=self.config.mixing_spatial_ratio
            )
        elif self.mixing_type == "fractal":
            mixed_images, y_a, y_b, alpha = alphamix_fractal(
                images, labels,
                alpha_range=self.config.mixing_alpha_range,
                steps_range=self.config.fractal_steps_range,
                triad_scales=self.config.fractal_triad_scales
            )
        else:
            raise ValueError(f"Unknown mixing type: {self.mixing_type}")

        return mixed_images, (y_a, y_b, alpha), alpha

    def compute_mixed_loss(self, logits: torch.Tensor, mixed_labels):
        """Compute loss for mixed labels."""
        if mixed_labels is None:
            return None

        y_a, y_b, alpha = mixed_labels
        loss_a = self.criterion(logits, y_a)
        loss_b = self.criterion(logits, y_b)

        loss = alpha * loss_a + (1 - alpha) * loss_b
        return loss

    def _calculate_restart_epochs(self) -> List[int]:
        """Calculate when restarts will occur (mirrors the scheduler's cycle growth)."""
        if self.config.scheduler_type != "cosine_restarts":
            return []

        restarts = []
        current = self.config.restart_period
        period = self.config.restart_period

        while current < self.config.num_epochs:
            restarts.append(current)
            # Grow the period with int truncation, exactly as the scheduler does
            # (T_i = int(T_i * T_mult)); plain float accumulation would drift off
            # the true restart epochs for non-integer restart_mult values.
            period = int(period * self.config.restart_mult)
            current += period

        return restarts

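    # --- Restart-epoch arithmetic (illustrative) ---------------------------------
    # With restart_period=50, restart_mult=2.0: restarts at 50, 50+100=150,
    # 150+200=350, ... The main() config below (period 12, mult 1.5, with the
    # int truncation above) restarts at 12, 30, 57, 97, 157, 247.
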
    def create_optimizer(self):
        """Create optimizer based on config."""
        if self.config.optimizer_type == "adamw":
            print(f"\n[Optimizer] AdamW")
            print(f"  LR: {self.config.learning_rate}")
            print(f"  Betas: {self.config.adamw_betas}")
            print(f"  Weight decay: {self.config.weight_decay}")

            return torch.optim.AdamW(
                self.model.parameters(),
                lr=self.config.learning_rate,
                betas=self.config.adamw_betas,
                eps=self.config.adamw_eps,
                weight_decay=self.config.weight_decay
            )
        else:
            raise ValueError(f"Unknown optimizer: {self.config.optimizer_type}")

    def create_scheduler(self):
        """Create LR scheduler based on config."""
        if self.config.scheduler_type == "cosine_restarts":
            print(f"\n[Scheduler] CosineAnnealingWarmRestarts with LR Boost")
            print(f"  T_0: {self.config.restart_period} epochs")
            print(f"  T_mult: {self.config.restart_mult}x")
            print(f"  Restart LR mult: {self.config.restart_lr_mult}x {'🚀' if self.config.restart_lr_mult > 1.0 else ''}")
            print(f"  Min LR: {self.config.min_lr}")

            if self.config.restart_lr_mult > 1.0:
                print(f"\n  🚀 BOOST MODE ENABLED!")
                print(f"  Creates wider exploration curves to escape local minima")
                print(f"  Coalescence loss provides geometric scaffolding during thrashing")

            return CosineAnnealingWarmRestartsWithBoost(
                self.optimizer,
                T_0=self.config.restart_period,
                T_mult=self.config.restart_mult,
                eta_min=self.config.min_lr,
                restart_lr_mult=self.config.restart_lr_mult
            )
        else:
            raise ValueError(f"Unknown scheduler: {self.config.scheduler_type}")

    def print_model_info(self):
        """Print model info."""
        total_params = sum(p.numel() for p in self.model.parameters())
        print(f"\nParameters: {total_params:,}")
        print(f"Dataset: {self.config.dataset.upper()}")
        print(f"Fusion mode: {self.config.fusion_mode}")
        print(f"Optimizer: {self.config.optimizer_type.upper()}")
        print(f"Scheduler: {self.config.scheduler_type}")
        if self.config.restart_lr_mult > 1.0:
            print(f"LR Boost: {self.config.restart_lr_mult}x at restarts 🚀")
        if self.config.use_coalescence_loss:
            print(f"Coalescence Loss: λ={self.config.lambda_coalescence} ✓")
        print(f"Output: {self.config.output_dir}")

    def train_epoch(self, train_loader: DataLoader, epoch: int) -> Tuple[float, float]:
        """Train one epoch with coalescence loss."""
        self.model.train()
        total_loss, total_task_loss, total_coal_loss = 0.0, 0.0, 0.0
        correct, total = 0, 0
        mixing_applied_count = 0
        total_batches = 0

        # Check if this is a restart epoch
        is_restart = (epoch in self.restart_epochs)
        epoch_desc = f"Epoch {epoch+1}/{self.config.num_epochs}"
        if is_restart:
            restart_num = self.restart_epochs.index(epoch) + 1
            boost_mult = self.config.restart_lr_mult ** restart_num if self.config.restart_lr_mult > 1.0 else 1.0
            epoch_desc += f" 🔄 RESTART #{restart_num}"
            if self.config.restart_lr_mult > 1.0:
                epoch_desc += f" ({boost_mult:.2f}x)"

        pbar = tqdm(train_loader, desc=f"{epoch_desc} [Train]")

        for batch_idx, (images, labels) in enumerate(pbar):
            images, labels = images.to(self.device, non_blocking=True), labels.to(self.device, non_blocking=True)

            # Apply mixing augmentation
            original_labels = labels
            mixed_images, mixed_labels_info, mixing_alpha = self.apply_mixing(images, labels)
            if mixing_alpha is not None:
                mixing_applied_count += 1
                images = mixed_images

            total_batches += 1

            # Forward WITH fusion info for coalescence loss
            return_fusion = (self.coalescence_loss_fn is not None)

            if self.use_amp:
                with autocast():
                    if return_fusion:
                        logits, fusion_infos = self.model(images, return_fusion_info=True)
                    else:
                        logits = self.model(images)

                    # Task loss
                    if mixing_alpha is not None:
                        task_loss = self.compute_mixed_loss(logits, mixed_labels_info)
                    else:
                        task_loss = self.criterion(logits, labels)

                    # Add coalescence loss
                    coal_loss = torch.tensor(0.0, device=self.device)
                    coal_metrics = {}  # Initialize empty dict
                    if self.coalescence_loss_fn and fusion_infos:
                        coal_loss, coal_metrics = add_coalescence_loss_to_training(
                            fusion_infos[-1],  # Last layer
                            self.coalescence_loss_fn,
                            current_lr=self.scheduler.get_last_lr()[0],
                            baseline_lr=self.config.learning_rate,
                            lambda_coal=self.config.lambda_coalescence
                        )

                    # Log coalescence metrics (guard against an empty metrics dict)
                    if batch_idx % self.config.log_interval == 0 and coal_metrics:
                        self.writer.add_scalar('train/coalescence_loss', coal_loss.item(), self.global_step)
                        self.writer.add_scalar('train/coalescence_weight', coal_metrics['adaptive_weight'], self.global_step)
                        self.writer.add_scalar('train/anchor_loss', coal_metrics['anchor_loss'], self.global_step)
                        self.writer.add_scalar('train/distance_loss', coal_metrics['distance_loss'], self.global_step)
                        self.writer.add_scalar('train/volume_loss', coal_metrics['volume_loss'], self.global_step)

                    # Total loss
                    loss = task_loss + coal_loss

                # Backward/step outside autocast, with gradient unscaling for clipping
                self.optimizer.zero_grad(set_to_none=True)
                self.scaler.scale(loss).backward()
                self.scaler.unscale_(self.optimizer)
                torch.nn.utils.clip_grad_norm_(self.model.parameters(), self.config.grad_clip)
                self.scaler.step(self.optimizer)
                self.scaler.update()
            else:
                if return_fusion:
                    logits, fusion_infos = self.model(images, return_fusion_info=True)
                else:
                    logits = self.model(images)

                # Task loss
                if mixing_alpha is not None:
                    task_loss = self.compute_mixed_loss(logits, mixed_labels_info)
                else:
                    task_loss = self.criterion(logits, labels)

                # Add coalescence loss
                coal_loss = torch.tensor(0.0, device=self.device)
                coal_metrics = {}  # Initialize empty dict
                if self.coalescence_loss_fn and fusion_infos:
                    coal_loss, coal_metrics = add_coalescence_loss_to_training(
                        fusion_infos[-1],
                        self.coalescence_loss_fn,
                        current_lr=self.scheduler.get_last_lr()[0],
                        baseline_lr=self.config.learning_rate,
                        lambda_coal=self.config.lambda_coalescence
                    )

                # Log coalescence metrics (guard against an empty metrics dict)
                if batch_idx % self.config.log_interval == 0 and coal_metrics:
                    self.writer.add_scalar('train/coalescence_loss', coal_loss.item(), self.global_step)
                    self.writer.add_scalar('train/coalescence_weight', coal_metrics['adaptive_weight'], self.global_step)
                    self.writer.add_scalar('train/anchor_loss', coal_metrics['anchor_loss'], self.global_step)
                    self.writer.add_scalar('train/distance_loss', coal_metrics['distance_loss'], self.global_step)
                    self.writer.add_scalar('train/volume_loss', coal_metrics['volume_loss'], self.global_step)

                # Total loss
                loss = task_loss + coal_loss

                self.optimizer.zero_grad(set_to_none=True)
                loss.backward()
                torch.nn.utils.clip_grad_norm_(self.model.parameters(), self.config.grad_clip)
                self.optimizer.step()

            # Metrics
            total_loss += loss.item()
            total_task_loss += task_loss.item()
            total_coal_loss += coal_loss.item()
            _, predicted = logits.max(1)
            correct += predicted.eq(original_labels).sum().item()
            total += original_labels.size(0)

            # TensorBoard logging
            if batch_idx % self.config.log_interval == 0:
                current_lr = self.scheduler.get_last_lr()[0]
                self.writer.add_scalar('train/total_loss', loss.item(), self.global_step)
                self.writer.add_scalar('train/task_loss', task_loss.item(), self.global_step)
                self.writer.add_scalar('train/accuracy', 100. * correct / total, self.global_step)
                self.writer.add_scalar('train/learning_rate', current_lr, self.global_step)
                if mixing_alpha is not None:
                    self.writer.add_scalar('train/mixing_alpha', mixing_alpha, self.global_step)

            self.global_step += 1

            # Progress bar postfix
            postfix_dict = {
                'loss': f'{loss.item():.4f}',
                'task': f'{task_loss.item():.4f}',
                'acc': f'{100. * correct / total:.2f}%',
                'lr': f'{self.scheduler.get_last_lr()[0]:.6f}'
            }
            if self.coalescence_loss_fn and coal_loss.item() > 0:
                postfix_dict['coal'] = f'{coal_loss.item():.4f}'
            if self.use_mixing:
                mix_pct = 100.0 * mixing_applied_count / total_batches
                postfix_dict['mix'] = f'{mix_pct:.0f}%'

            pbar.set_postfix(postfix_dict)

        return total_loss / len(train_loader), 100. * correct / total

    @torch.no_grad()
    def evaluate(self, val_loader: DataLoader, epoch: int) -> Tuple[float, Dict]:
        """Evaluate."""
        self.model.eval()
        total_loss, correct, total = 0.0, 0, 0
        consciousness_values = []

        pbar = tqdm(val_loader, desc=f"Epoch {epoch+1}/{self.config.num_epochs} [Val]  ")

        for batch_idx, (images, labels) in enumerate(pbar):
            images, labels = images.to(self.device, non_blocking=True), labels.to(self.device, non_blocking=True)

            # Forward with fusion info on last batch
            return_info = (batch_idx == len(val_loader) - 1)

            if self.use_amp:
                with autocast():
                    if return_info:
                        logits, fusion_infos = self.model(images, return_fusion_info=True)
                        if fusion_infos and fusion_infos[0].get('consciousness') is not None:
                            consciousness_values.append(fusion_infos[0]['consciousness'].mean().item())
                    else:
                        logits = self.model(images)
                    loss = self.criterion(logits, labels)
            else:
                if return_info:
                    logits, fusion_infos = self.model(images, return_fusion_info=True)
                    if fusion_infos and fusion_infos[0].get('consciousness') is not None:
                        consciousness_values.append(fusion_infos[0]['consciousness'].mean().item())
                else:
                    logits = self.model(images)
                loss = self.criterion(logits, labels)

            total_loss += loss.item()
            _, predicted = logits.max(1)
            correct += predicted.eq(labels).sum().item()
            total += labels.size(0)

            pbar.set_postfix({
                'loss': f'{total_loss / (batch_idx + 1):.4f}',
                'acc': f'{100. * correct / total:.2f}%'
            })

        avg_loss = total_loss / len(val_loader)
        accuracy = 100. * correct / total

        # TensorBoard logging
        self.writer.add_scalar('val/loss', avg_loss, epoch)
        self.writer.add_scalar('val/accuracy', accuracy, epoch)
        if consciousness_values:
            self.writer.add_scalar('val/consciousness', sum(consciousness_values) / len(consciousness_values), epoch)

        metrics = {
            'loss': avg_loss,
            'accuracy': accuracy,
            'consciousness': sum(consciousness_values) / len(consciousness_values) if consciousness_values else None
        }

        return accuracy, metrics

    def train(self, train_loader: DataLoader, val_loader: DataLoader):
        """Full training loop."""
        print("\n" + "=" * 70)
        print("Starting training with Geometric Coalescence Loss")
        if self.config.restart_lr_mult > 1.0:
            print("🚀 LR Boost Mode + Geometric Scaffolding")
        print("=" * 70 + "\n")

        for epoch in range(self.config.num_epochs):
            # Train
            train_loss, train_acc = self.train_epoch(train_loader, epoch)

            # Evaluate
            val_acc, val_metrics = self.evaluate(val_loader, epoch)

            # Update scheduler
            self.scheduler.step()

            # Check restart status
            is_restart = (epoch in self.restart_epochs)
            next_is_restart = ((epoch + 1) in self.restart_epochs)
            next_lr = self.scheduler.get_last_lr()[0]

            # Print summary
            print(f"\n{'='*70}")
            print(f"Epoch [{epoch + 1}/{self.config.num_epochs}] Summary:")
            print(f"  Train: Loss={train_loss:.4f}, Acc={train_acc:.2f}%")
            print(f"  Val:   Loss={val_metrics['loss']:.4f}, Acc={val_acc:.2f}%")

            if next_is_restart and self.config.restart_lr_mult > 1.0:
                print(f"  ⚠️ RESTART COMING! Coalescence weight will increase for stabilization")
            elif is_restart and self.config.restart_lr_mult > 1.0:
                print(f"  🔄 WARM RESTART! Geometric scaffolding active")

            print(f"  Current LR: {next_lr:.6f}")

            # Checkpoint
            is_best = val_acc > self.best_acc
            should_upload = ((epoch + 1) % self.config.checkpoint_upload_interval == 0)

            if is_best:
                self.best_acc = val_acc
                print(f"  ✓ New best model! Accuracy: {val_acc:.2f}%")
                self.save_checkpoint(epoch, val_acc, prefix="best", upload=should_upload)

            print(f"{'='*70}\n")

        # Training complete
        training_time = (time.time() - self.start_time) / 3600

        print("\n" + "=" * 70)
        print("Training Complete!")
        print(f"Best Validation Accuracy: {self.best_acc:.2f}%")
        print(f"Training Time: {training_time:.2f} hours")
        if self.config.restart_lr_mult > 1.0 and self.config.use_coalescence_loss:
            print("🚀 Shatter-reconstruct training successful!")
        print("=" * 70)

        # Upload to HuggingFace
        if self.hf_uploader:
            trainer_stats = {
                'total_params': sum(p.numel() for p in self.model.parameters()),
                'best_acc': self.best_acc,
                'training_time': training_time,
                'final_epoch': self.config.num_epochs,
                'batch_size': self.config.batch_size,
                'mixed_precision': self.use_amp
            }
            self.hf_uploader.create_model_card(trainer_stats)

        self.writer.close()

    def save_checkpoint(self, epoch: int, accuracy: float, prefix: str = "checkpoint", upload: bool = False):
        """Save checkpoint."""
        checkpoint_dir = self.config.checkpoint_dir
        checkpoint_dir.mkdir(parents=True, exist_ok=True)

        # Save model weights
        model_path = checkpoint_dir / f"{prefix}_model.safetensors"
        save_file(self.model.state_dict(), str(model_path))

        # Save training state
        training_state = {
            'optimizer_state_dict': self.optimizer.state_dict(),
            'scheduler_state_dict': self.scheduler.state_dict(),
        }
        if self.scaler is not None:
            training_state['scaler_state_dict'] = self.scaler.state_dict()
        if self.coalescence_loss_fn is not None:
            training_state['coalescence_anchors'] = self.coalescence_loss_fn.anchors.data

        training_state_path = checkpoint_dir / f"{prefix}_training_state.pt"
        torch.save(training_state, training_state_path)

        # Save metadata
        metadata = {
            'epoch': epoch,
            'accuracy': accuracy,
            'best_accuracy': self.best_acc,
            'global_step': self.global_step,
            'timestamp': time.strftime("%Y-%m-%d %H:%M:%S"),
            'coalescence_enabled': self.config.use_coalescence_loss,
            'restart_lr_mult': self.config.restart_lr_mult
        }
        metadata_path = checkpoint_dir / f"{prefix}_metadata.json"
        with open(metadata_path, 'w') as f:
            json.dump(metadata, f, indent=2)

        print(f"  💾 Saved: {prefix}_model.safetensors")

        # Upload
        if self.hf_uploader and upload:
            self.hf_uploader.upload_file(model_path, f"checkpoints/{prefix}_model.safetensors")
            self.upload_count += 1

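# --- Restore sketch (illustrative; paths depend on the run name) ----------------
# load_file is already imported from safetensors.torch at the top of this file:
#
#     state = load_file("weights/<model>/<run>/checkpoints/best_model.safetensors")
#     model.load_state_dict(state)
#     train_state = torch.load(
#         "weights/<model>/<run>/checkpoints/best_training_state.pt",
#         map_location="cpu")
#     optimizer.load_state_dict(train_state['optimizer_state_dict'])
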
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Data Loading
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

class Cutout:
    """Cutout data augmentation."""
    def __init__(self, length: int):
        self.length = length

    def __call__(self, img):
        h, w = img.size(1), img.size(2)
        mask = torch.ones((h, w), dtype=torch.float32)
        y = torch.randint(h, (1,)).item()
        x = torch.randint(w, (1,)).item()

        y1 = max(0, y - self.length // 2)
        y2 = min(h, y + self.length // 2)
        x1 = max(0, x - self.length // 2)
        x2 = min(w, x + self.length // 2)

        mask[y1:y2, x1:x2] = 0.
        mask = mask.expand_as(img)
        return img * mask


def get_data_loaders(config: CantorTrainingConfig) -> Tuple[DataLoader, DataLoader]:
    """Create data loaders."""
    mean = (0.4914, 0.4822, 0.4465)
    std = (0.2470, 0.2435, 0.2616)

    if config.use_augmentation:
        transforms_list = []

        if config.use_autoaugment:
            transforms_list.append(transforms.AutoAugment(transforms.AutoAugmentPolicy.CIFAR10))
        else:
            transforms_list.extend([
                transforms.RandomCrop(32, padding=4),
                transforms.RandomHorizontalFlip(),
            ])

        transforms_list.append(transforms.ToTensor())
        transforms_list.append(transforms.Normalize(mean, std))

        if config.use_cutout:
            transforms_list.append(Cutout(config.cutout_length))

        train_transform = transforms.Compose(transforms_list)
    else:
        train_transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(mean, std)
        ])

    val_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean, std)
    ])

    if config.dataset == "cifar10":
        train_dataset = datasets.CIFAR10(root='./data', train=True, download=True, transform=train_transform)
        val_dataset = datasets.CIFAR10(root='./data', train=False, download=True, transform=val_transform)
    elif config.dataset == "cifar100":
        train_dataset = datasets.CIFAR100(root='./data', train=True, download=True, transform=train_transform)
        val_dataset = datasets.CIFAR100(root='./data', train=False, download=True, transform=val_transform)
    else:
        raise ValueError(f"Unknown dataset: {config.dataset}")

    train_loader = DataLoader(
        train_dataset,
        batch_size=config.batch_size,
        shuffle=True,
        num_workers=config.num_workers,
        pin_memory=(config.device == "cuda")
    )

    val_loader = DataLoader(
        val_dataset,
        batch_size=config.batch_size,
        shuffle=False,
        num_workers=config.num_workers,
        pin_memory=(config.device == "cuda")
    )

    return train_loader, val_loader

# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Main
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

def main():
    """Main training function with Geometric Coalescence Loss."""

    config = CantorTrainingConfig(
        # Dataset
        dataset="cifar100",

        # Architecture
        embed_dim=486,
        num_fusion_blocks=9,
        num_heads=9,
        fusion_mode="learned",
        k_simplex=8,
        use_beatrix=False,
        fusion_window=27,

        # Optimizer: AdamW
        optimizer_type="adamw",
        learning_rate=3e-4,
        weight_decay=0.05,
        adamw_betas=(0.9, 0.999),

        # Scheduler: Warm Restarts + LR BOOST
        scheduler_type="cosine_restarts",
        restart_period=12,
        restart_mult=1.5,
        restart_lr_mult=1.15,  # 🚀 Aggressive exploration
        min_lr=1e-7,

        # Training
        num_epochs=300,
        batch_size=512,
        grad_clip=1.0,
        label_smoothing=0.15,

        # Augmentation
        use_augmentation=True,
        use_autoaugment=True,
        use_cutout=True,
        cutout_length=16,

        # Mixing
        use_mixing=True,
        mixing_type="alphamix",
        mixing_alpha_range=(0.3, 0.7),
        mixing_spatial_ratio=0.25,
        mixing_prob=0.5,

        # Geometric Coalescence Loss
        use_coalescence_loss=True,
        lambda_coalescence=0.5,
        coalescence_num_anchors=64,
        coalescence_target_variance=0.5,
        coalescence_base_weight=0.1,
        coalescence_max_weight=0.8,
        coalescence_weight_power=2.0,

        # Regularization
        dropout=0.1,
        drop_path_rate=0.15,

        # System
        device="cuda",
        use_mixed_precision=False,

        # HuggingFace
        hf_username="AbstractPhil",
        upload_to_hf=True,
        checkpoint_upload_interval=25,
    )

    print("=" * 70)
    print(f"Cantor Fusion Classifier - {config.dataset.upper()}")
    print("Shatter-Reconstruct Training")
    print("=" * 70)
    print(f"\n🚀 LR Boost: {config.restart_lr_mult}x at restarts")
    print(f"🧬 Coalescence Loss: λ={config.lambda_coalescence}")
    print(f"   Adaptive weight: {config.coalescence_base_weight} → {config.coalescence_max_weight}")
    print(f"   Philosophy: Geometric truth survives when patterns shatter")
    print("=" * 70)

    # Load data
    print("\nLoading data...")
    train_loader, val_loader = get_data_loaders(config)
    print(f"  Train: {len(train_loader.dataset)} samples")
    print(f"  Val:   {len(val_loader.dataset)} samples")

    # Train
    trainer = Trainer(config)
    trainer.train(train_loader, val_loader)

    print("\n" + "=" * 70)
    print("🎯 Shatter-reconstruct training complete!")
    print(f"   tensorboard --logdir {config.tensorboard_dir}")
    print("=" * 70)


if __name__ == "__main__":
    main()