feat: activate dormant paradigms → progressive looping, evolution with loss feedback, no progressive_unfreeze

With STE+AdamW (not MeZO), we can afford multi-loop training.
Progressive loop schedule: 1 → 2 → 3 loops as training advances.
The evolution engine now receives the previous step's loss for
surprise detection and memory writes.
Progressive unfreeze is disabled by default (counterproductive with backprop).
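
The two mechanisms above could be sketched as follows. This is a minimal illustration, not the actual implementation: the function names (`loops_for_step`, `is_surprising`), the equal-thirds schedule boundaries, and the relative-jump threshold are all assumptions for demonstration.

```python
def loops_for_step(step: int, total_steps: int) -> int:
    """Hypothetical progressive loop schedule: 1 -> 2 -> 3 loops
    as training advances. The equal-thirds split is illustrative."""
    frac = step / max(total_steps, 1)
    if frac < 1 / 3:
        return 1
    elif frac < 2 / 3:
        return 2
    return 3


def is_surprising(loss: float, prev_loss, threshold: float = 0.1) -> bool:
    """Hypothetical surprise detector: flag a step whose loss rises
    by more than `threshold` relative to the previous step's loss.
    Returns False on the first step, when no previous loss exists."""
    if prev_loss is None:
        return False
    return (loss - prev_loss) / prev_loss > threshold
```

For example, with 300 total steps, step 0 runs 1 loop, step 150 runs 2, and step 299 runs 3; a loss jump from 1.0 to 1.2 (a 20% rise) would count as a surprise at the default threshold.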