simone00 committed on
Commit
17d4058
·
verified ·
1 Parent(s): 964f3af

Add files using upload-large-folder tool

Files changed (50)
  1. 1e-12 +0 -0
  2. README.md +362 -0
  3. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 01 Fmin (120 BPM).mid +0 -0
  4. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 02 Amin (120 BPM).mid +0 -0
  5. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 03 Dmin (120 BPM).mid +0 -0
  6. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 04 Dmin (120 BPM).mid +0 -0
  7. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 05 Fmin (120 BPM).mid +0 -0
  8. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 06 Amin (120 BPM).mid +0 -0
  9. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 07 F#min (120 BPM).mid +0 -0
  10. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 08 Ebmin (120 BPM).mid +0 -0
  11. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 09 C#min (120 BPM).mid +0 -0
  12. The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 10 Amin (120 BPM).mid +0 -0
  13. The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 01 C (120 BPM).mid +0 -0
  14. The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 02 C (120 BPM).mid +0 -0
  15. The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 03 C (120 BPM).mid +0 -0
  16. The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 04 C (120 BPM).mid +0 -0
  17. The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 05 C (120 BPM).mid +0 -0
  18. The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 06 C (120 BPM).mid +0 -0
  19. The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 07 Amin (120 BPM).mid +0 -0
  20. The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 08 C (120 BPM).mid +0 -0
  21. The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 09 C (120 BPM).mid +0 -0
  22. The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/DisplayState.plist +0 -0
  23. The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/DisplayStateArchive +0 -0
  24. The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/MetaData.plist +0 -0
  25. The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/Project File Backups/00/MetaData.plist +0 -0
  26. The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/Project File Backups/01/MetaData.plist +0 -0
  27. The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/Project File Backups/02/MetaData.plist +0 -0
  28. The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/Project File Backups/03/MetaData.plist +0 -0
  29. extra_drum_dirs.csv +39 -0
  30. login.py +10 -0
  31. nodrums.py +82 -0
  32. param_sweep_results.csv +145 -0
  33. percorsi.csv +39 -0
  34. run_param_sweep.py +457 -0
  35. run_smart_sweep.py +1136 -0
  36. run_smart_sweep_old.py +1367 -0
  37. run_smart_sweep_old2.py +1400 -0
  38. run_smart_sweep_old3.py +2086 -0
  39. run_test.py +115 -0
  40. smart_sweep_results.csv +11 -0
  41. spade_declip_v11.py +2234 -0
  42. spade_declip_v12.py +0 -0
  43. spade_declip_v12old.py +0 -0
  44. spade_declip_v12old2.py +0 -0
  45. spade_declip_v13.py +0 -0
  46. spade_unrolled.py +1484 -0
  47. thr_lin +0 -0
  48. train_spade_unrolled.py +1485 -0
  49. train_transient_net.py +1509 -0
  50. transient_net.py +815 -0
1e-12 ADDED
File without changes
README.md ADDED
# SPADE Limiter Recovery - Documentation

## Overview

Two scripts form a complete pipeline for recovering dynamics compressed by a
brickwall limiter on percussive material.

| Script | Role |
|---|---|
| `spade_declip_v12.py` | Recovery algorithm (SPADE solver + GPU) |
| `run_smart_sweep.py` | Bayesian parameter optimization (Optuna) |

---

## `spade_declip_v12.py`

### What it does

Implements S-SPADE and A-SPADE, audio declipping algorithms based on sparse
optimization in a transform domain (DCT/RDFT). In `soft` mode the problem is
extended from classic declipping to **recovery of dynamics compressed by a
brickwall limiter**: samples above the threshold are treated as lower bounds
(not equalities), allowing the ADMM to recover values above the limited signal.

### Algorithm

The solver iteratively solves:

```
minimize   ||A(x)||₀   (L0 in the transform domain)
subject to x ∈ Γ       (consistency set: constraints from the limited signal)
```

via ADMM (Alternating Direction Method of Multipliers):
- **Step 2** - hard thresholding H_k: keeps the k largest coefficients
- **Step 3** - projection onto Γ: enforces the constraints from the limited signal
- **Step 7** - dual update: accumulates the residual corrections

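The two per-iteration operations can be sketched in NumPy. This is an
illustrative sketch, not the module's actual code: helper names are assumed,
and in soft mode the Icp/Icm samples act as lower/upper bounds as described
above.

```python
import numpy as np

def hard_thresh(coeffs: np.ndarray, k: int) -> np.ndarray:
    """H_k: keep the k largest-magnitude coefficients, zero the rest."""
    out = np.zeros_like(coeffs)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(coeffs), -k)[-k:]
    out[idx] = coeffs[idx]
    return out

def project_soft(x: np.ndarray, y: np.ndarray,
                 Icp: np.ndarray, Icm: np.ndarray) -> np.ndarray:
    """Projection onto the soft-mode consistency set Γ:
    reliable samples are pinned to the limited signal y,
    positive-limited samples are a lower bound (x >= y),
    negative-limited samples an upper bound (x <= y)."""
    x = x.copy()
    Ir = ~(Icp | Icm)
    x[Ir] = y[Ir]
    x[Icp] = np.maximum(x[Icp], y[Icp])
    x[Icm] = np.minimum(x[Icm], y[Icm])
    return x
```

The lower-bound (rather than equality) constraint on Icp/Icm is what lets the
solver push recovered peaks above the limited waveform.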
### Processing structure

```
limited input

[DC removal] → [mask detection Ir/Icp/Icm]

[macro_expand pre-pass] ← optional, v11

[_lr_split per band] ← optional multiband, v11

[WOLA frame extraction]

[SPADE on each frame] ← GPU batch (F, M) tensor / CPU ThreadPool

[WOLA accumulation + normalization]

[safe-Ir RMS match] ← v12: excludes samples contaminated by WOLA

recovered output
```

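The mask-detection stage in the diagram classifies every sample against the
limiter threshold. A minimal sketch (the real function names in
`spade_declip_v12.py` may differ):

```python
import numpy as np

def detect_masks(y: np.ndarray, threshold: float):
    """Classify samples of the limited signal y:
    Icp/Icm = positive/negative half-wave samples at or above the threshold,
    Ir      = reliable samples strictly below it."""
    Icp = y >= threshold
    Icm = y <= -threshold
    Ir = ~(Icp | Icm)
    return Ir, Icp, Icm
```

With `release_ms > 0` (v11) these masks are then dilated so the limiter's
release tail is also reclassified as Icp/Icm.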
### Versions and main features

**v10 - GPU acceleration**
- All active frames are packed into a `(F, M)` batch tensor and processed in a
  single GPU kernel
- CUDA (NVIDIA) and ROCm (AMD, tested on an RX 6700 XT) support
- Per-frame convergence tracking with a boolean mask: converged frames are
  "frozen" while the others keep iterating
- Typical speedup: 15–100× over single-threaded CPU

**v11 - Delimiting features** (all disabled by default)
- `release_ms > 0` - Mask dilation: samples within the limiter's release range
  are reclassified from Ir to Icp/Icm, letting the ADMM recover the transient
  tail
- `max_gain_db > 0` - Upper bound on the projection: prevents artificial
  transients caused by ADMM running without a gain limit
- `multiband=True` - Per-band Linkwitz-Riley split with perfect reconstruction
  (`hp = x - lp`): each band is processed independently with its own
  `delta_db`, then summed. The filters are zero-phase IIR (`sosfiltfilt`),
  with no phase distortion
- `macro_expand=True` - Macro-dynamics expansion pre-pass: recovers the
  long-term level suppression (body compression) that SPADE cannot correct
  because of the short WOLA window (~21 ms)

**v12 - LF recovery** (new)
- `hard_thresh_lf` / `_hard_thresh_lf_gpu` - Frequency-stratified hard
  thresholding: guarantees a minimum budget of `lf_k_min` coefficients in the
  bands below `lf_cutoff_hz`, independent of the competition with HF bins.
  Fixes the systematic LF under-recovery (-3 to -8 dB on sub-bass) caused by
  the limiter attenuating LF coefficients and making them invisible to the
  global top-k in the early ADMM iterations
- Safe-Ir RMS match: excludes from the RMS computation any Ir sample within
  `window_length` samples of an Icp/Icm boundary, preventing WOLA bleed of the
  recovered LF energy from triggering a global downward rescaling that
  partially cancelled the recovery itself

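The stratified thresholding idea can be sketched as follows. This is an
assumed illustration of the technique, not the actual `hard_thresh_lf`
implementation: a guaranteed top-`lf_k_min` pick inside the LF bins, with the
remaining budget filled by global competition.

```python
import numpy as np

def hard_thresh_lf(coeffs: np.ndarray, k: int,
                   lf_bins: int, lf_k_min: int) -> np.ndarray:
    """Frequency-stratified H_k: reserve at least lf_k_min slots for the
    first lf_bins coefficients, then fill the rest with the global top-k."""
    out = np.zeros_like(coeffs)
    mag = np.abs(coeffs)
    # guaranteed LF budget
    k_lf = min(lf_k_min, lf_bins, k)
    if k_lf > 0:
        lf_idx = np.argpartition(mag[:lf_bins], -k_lf)[-k_lf:]
        out[lf_idx] = coeffs[lf_idx]
    # remaining budget from global competition, skipping already-kept bins
    remaining = k - k_lf
    if remaining > 0:
        mag_masked = mag.copy()
        mag_masked[out != 0] = -1.0
        idx = np.argpartition(mag_masked, -remaining)[-remaining:]
        out[idx] = coeffs[idx]
    return out
```

Even when every HF bin outweighs the LF bins, the LF budget survives the
top-k selection, which is exactly the failure mode described above.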
**Inherited bug fixes**
- BUG-1: flip the output (not the input) in the DST synthesis of the RDFT
- BUG-2: A-SPADE dual variable kept in the coefficient domain, not the signal domain
- BUG-3: per-channel WOLA gain drift in stereo processing
- BUG-4: DC offset that broke half-wave mask detection

### Note on IIR filters

All filters in the codebase are **zero-phase** (`sosfiltfilt`) except the
pink-noise generator in `run_smart_sweep.py` (causal IIR, used only for corpus
generation, never in the critical path). No filter introduces phase distortion
in the processed signal or in the residual used for the metric.

### `DeclipParams` - main parameters

```python
DeclipParams(
    algo = "sspade",         # "sspade" (default) | "aspade"
    frame = "rdft",          # "rdft" (default, P=2M) | "dct" (P=M)
    mode = "soft",           # "soft" = limiter recovery | "hard" = clipping
    delta_db = 2.5,          # dB from the limiter threshold to 0 dBFS
    window_length = 1024,    # samples per WOLA frame
    hop_length = 256,        # WOLA hop (overlap = 1 - hop/win)
    s = 1,                   # sparsity step (k increment per iteration)
    r = 1,                   # sparsity rate (k grows every r iterations)
    eps = 0.1,               # ADMM convergence criterion
    max_iter = 1000,         # maximum iterations per frame
    sample_rate = 44100,
    # v11
    release_ms = 0.0,        # mask dilation (0 = disabled)
    max_gain_db = 0.0,       # recovery cap in dB (0 = disabled)
    multiband = False,
    band_crossovers = (250, 4000),  # Hz, used only if multiband=True
    band_delta_db = (),      # per-band delta_db; empty = use delta_db
    macro_expand = False,
    macro_ratio = 1.2,       # used only if macro_expand=True
    # v12
    lf_cutoff_hz = 0.0,      # LF-bin cutoff in Hz (0 = disabled)
    lf_k_min = 0,            # guaranteed LF slots per ADMM iteration
    # GPU
    use_gpu = True,
    gpu_device = "auto",
)
```

### Dependencies

```
pip install numpy scipy soundfile
pip install torch  # optional, for GPU
```

### CLI

```bash
python spade_declip_v12.py input.wav output.wav --mode soft --delta-db 2.5

# With v11/v12 features
python spade_declip_v12.py input.wav output.wav \
    --mode soft --delta-db 2.5 \
    --release-ms 80 --max-gain-db 6 \
    --lf-cutoff-hz 1000 --lf-k-min 8
```

---

## `run_smart_sweep.py`

### What it does

Bayesian optimization of the `spade_declip_v12.py` hyperparameters on a corpus
of drum samples. Uses Optuna TPE (Tree-structured Parzen Estimator) with a
MedianPruner to stop clearly under-performing trials at mid-corpus.

### Evaluation pipeline

For each Optuna trial:

```
1. The corpus is built once at startup (build_corpus):
   - Load drum samples (Kicks / Snares / Perc / Tops)
   - Normalize to 0 dBFS peak
   - Add pink noise at -20 dB (simulates a musical background)
   - Re-normalize to 0 dBFS
   - Apply the synthetic limiter (brickwall, attack = 1 sample, release = 80 ms)
   - Compute GT residual = original − limited
   - Keep GT_res (peak-normalized, for cosine sim) and
     GT_res_raw (original scale, for absolute energy comparison)

2. For each trial:
   - Shuffle the corpus with seed = trial.number (reproducible)
   - First half of the corpus → GPU mega-batch → pruning check
   - Second half of the corpus → GPU mega-batch → final score
   - Compute score_breakdown for each file
   - Aggregate and save user_attrs to Optuna
```
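The half-corpus pruning check follows Optuna's MedianPruner rule. Stripped of
the Optuna API, the decision can be paraphrased in plain Python as:

```python
def should_prune(intermediate, history):
    """MedianPruner rule sketch: stop a trial whose half-corpus score falls
    below the median of previous trials' scores at the same step."""
    if not history:
        return False  # nothing to compare against yet
    s = sorted(history)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return intermediate < median
```

In the real sweep this comparison is done by `trial.report(...)` followed by
`trial.should_prune()`, so the second corpus half is skipped for losing trials.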

### Synthetic limiter

```
Threshold: -3.0 dBFS (LIMITER_THRESHOLD_DB)
Release:   80 ms    (LIMITER_RELEASE_MS)
Attack:    1 sample (true brickwall)
```

The threshold is deliberately set 3 dB down to create a regime of significant
limiting that stresses LF recovery. The pink noise simulates the musical
context the limiter acts on, making the training more representative of
real-world use on full mixes.

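As an illustration, a one-sample-attack limiter with exponential gain release
can be sketched as below. This is an assumed stand-in for the corpus limiter,
not the exact code from `run_smart_sweep.py`.

```python
import numpy as np

def brickwall_limit(x: np.ndarray, sr: int,
                    threshold_db: float = -3.0,
                    release_ms: float = 80.0) -> np.ndarray:
    """Brickwall limiter: gain drops instantly (attack = 1 sample) when a
    sample would exceed the threshold, then recovers exponentially."""
    thr = 10 ** (threshold_db / 20)
    alpha = np.exp(-1.0 / (release_ms * 1e-3 * sr))  # release smoothing pole
    gain = 1.0
    out = np.empty_like(x)
    for n, s in enumerate(x):
        target = min(1.0, thr / max(abs(s), 1e-12))
        if target < gain:
            gain = target                             # instant attack
        else:
            gain = alpha * gain + (1 - alpha) * target  # exponential release
        out[n] = s * gain
    return out
```

Because the attack is a hard clamp, no output sample ever exceeds the
threshold, which is the "true brickwall" property noted above.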
### Composite score - `score_breakdown`

The best score reported by Optuna is the **composite score**, not plain cosine
similarity. This is because cosine similarity is scale-invariant and does not
detect the LF energy deficit.

Seven metrics are computed per file:

| Field | What it measures |
|---|---|
| `cosine` | Global TF cosine sim (spectral shape, 12 log bands) |
| `cosine_lf` | Cosine sim in 20–500 Hz (sub-bass/bass body shape) |
| `cosine_hf` | Cosine sim in 2k–20k Hz (attack/brightness shape) |
| `energy_lf_db` | `RMS_lf(SPADE) − RMS_lf(GT)` on the original scale, dB |
| `energy_hf_db` | same, for HF |
| `overrecovery` | 1 if energy_lf_db > +3 dB (LF artifacts) |
| `composite` | Composite score used as the Optuna objective |

**Composite formula:**

```
pen_lf = exp(min(0, energy_lf_db) / 6)   # LF under-recovery penalty
pen_hf = exp(min(0, energy_hf_db) / 10)  # HF under-recovery penalty (softer)
composite = cosine × pen_lf^0.5 × pen_hf^0.2
```

The LF penalty is more aggressive (exponent 0.5 vs 0.2, constant 6 vs 10)
because LF under-recovery was the main deficit identified, at -3 to -8 dB on
sub-bass. At -6 dB below GT, `pen_lf^0.5 ≈ 0.61`, so the score drops by ~39%
relative to cosine sim alone.

Energy is compared on the **original scale** (GT_res_raw, not normalized) so
that peak normalization to RESIDUAL_DBFS cannot mask the absolute deficit.
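The composite formula translates directly to code. A minimal sketch, with the
function name assumed:

```python
import math

def composite_score(cosine: float, energy_lf_db: float,
                    energy_hf_db: float) -> float:
    """Composite objective: cosine similarity scaled by energy penalties that
    punish under-recovery only (an energy surplus is not rewarded)."""
    pen_lf = math.exp(min(0.0, energy_lf_db) / 6.0)   # aggressive LF penalty
    pen_hf = math.exp(min(0.0, energy_hf_db) / 10.0)  # softer HF penalty
    return cosine * pen_lf ** 0.5 * pen_hf ** 0.2
```

The `min(0, ...)` clamps mean a positive energy delta contributes no bonus;
over-recovery is instead flagged separately by the `overrecovery` metric.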

### GPU mega-batch

To minimize per-trial cost on the AMD RX 6700 XT (RDNA2) GPU, all active
frames of the entire corpus are packed into a single `(F_total, M)` tensor and
processed with one `_sspade_batch_gpu` call. With a corpus of ~50 files × ~350
frames ≈ 17,500 frames, the GPU stays at maximum MCLK for the whole kernel
duration instead of cycling between idle and burst for each file.

Pruning stays functional: the corpus is split into two halves, the first half
is processed, the intermediate score is reported to Optuna, and a clearly
under-performing trial is stopped before the second half is processed.

### Search space

| Parameter | Range | Notes |
|---|---|---|
| `delta_db` | 1.5 – 3.5 dB | Calibrated to the 3 dB limiter threshold |
| `window_length` | 512 / 1024 / 2048 | via `win_exp` ∈ {9,10,11} |
| `hop_length` | win/4 or win/8 | 75% or 87.5% overlap |
| `release_ms` | 10 – 200 ms | 0 = disabled |
| `max_gain_db` | 2 – 12 dB | recovery cap |
| `eps` | 0.03 / 0.05 / 0.1 | ADMM convergence criterion |
| `max_iter` | 250 / 500 / 1000 | maximum iterations |
| `multiband` | True/False | LF/HF split |
| `lf_delta_db` | 0.5 – 2.0 dB | delta for the LF band (if multiband) |
| `macro_expand` | True/False | expansion pre-pass |
| `macro_ratio` | 1.1 – 2.0 | expansion ratio |
| `lf_cutoff_hz` | 0 / 500 / 1000 / 2000 Hz | guaranteed-LF-bin cutoff |
| `lf_k_min` | 0 – 16 | LF slots per ADMM iteration |

The corpus is **shuffled** with a deterministic seed equal to the trial number
(`trial.number`) to prevent file-order bias: without shuffling, files at the
start of the list would carry systematically more weight in pruning.

### `trial.set_user_attr` - per-trial diagnostics

Each completed trial saves to the Optuna database:

```
cosine_overall, cosine_lf, cosine_hf → per-band spectral shape
energy_lf_db, energy_hf_db           → absolute energy deficit/surplus
n_overrecovery                       → files with LF > +3 dB above GT
score_std                            → cross-file consistency
n_files_scored                       → files processed
```

### Dependencies

```
pip install numpy scipy soundfile optuna rich
pip install torch  # for GPU
```

### CLI

```bash
# Full sweep (200 trials by default)
python run_smart_sweep.py

# Quick test
python run_smart_sweep.py --trials 20

# Resume from an existing database
python run_smart_sweep.py --resume

# Final report only (no sweep)
python run_smart_sweep.py --report

# Custom folder
python run_smart_sweep.py --base-dir /path/to/samples
```

---

## Status and known limitations

### What works

- Robust HF recovery (2k–20k Hz), +12 to +18 dB over the limited signal on kicks
- GPU mega-batch working on the AMD RX 6700 XT, ~15–20× speedup
- Resumable Bayesian optimization, stable after ~80 trials
- The composite score with the LF energy penalty correctly measures the
  deficit that cosine sim alone misses

### Current limitations

- The corpus consists of **isolated percussive samples** with synthetic pink
  noise. Parameters optimal on isolated samples may not generalize to full
  musical mixes, where the limiter acts on overlapping layers
- The synthetic limiter (threshold-based, fixed exponential release) is an
  approximation of commercial limiters such as FabFilter Pro-L 2, which use
  adaptive release curves and lookahead
- `A-SPADE` has no GPU path yet (only S-SPADE is accelerated)
- The SPADE parameters are **constant per file**: the solver does not adapt
  frame by frame to the local signal structure (onset vs. release vs.
  silence). This is the main structural limitation compared to a SPADE
  Unrolled approach

### Development direction: SPADE Unrolled

The natural evolution of the system is to replace the fixed parameters with a
**Context Encoder** (a small neural network, ~100K parameters) that predicts
`λ_LF`, `λ_HF`, `delta_factor`, and `gmax_factor` for each frame from the
temporal context of the K preceding frames.

The SPADE solver is turned into K fixed layers (unrolling), with a stratified
`soft_thresh` replacing `hard_thresh` for differentiability. The loss gradient
flows through the K ADMM layers down to the encoder.

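The differentiable surrogate is the standard soft-thresholding (shrinkage)
operator. A NumPy sketch for a single λ (the unrolled version would apply
per-band λ values predicted by the encoder):

```python
import numpy as np

def soft_thresh(coeffs: np.ndarray, lam: float) -> np.ndarray:
    """Soft thresholding: shrink magnitudes by lam and zero anything smaller.
    Unlike the top-k hard threshold, this is (sub)differentiable in coeffs,
    so gradients can flow through the unrolled ADMM layers."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)
```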
Training is proposed in two phases:
1. **Phase 1** on isolated samples with pink noise (the current corpus): fast
   convergence, exact ground truth, the encoder learns the signature of the
   limiting
2. **Phase 2** on full mixes with the same synthetic limiter applied to summed
   stems (Strategy A): adaptation to the real distribution, with mixed
   batching to prevent catastrophic forgetting of Phase 1

The algorithmic SPADE component guarantees that the model cannot invent
content absent from the limited signal: only the solver parameters are
learned, not an arbitrary input→output mapping.
The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 01 Fmin (120 BPM).mid ADDED
Binary file (219 Bytes).

The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 02 Amin (120 BPM).mid ADDED
Binary file (217 Bytes).

The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 03 Dmin (120 BPM).mid ADDED
Binary file (265 Bytes).

The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 04 Dmin (120 BPM).mid ADDED
Binary file (247 Bytes).

The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 05 Fmin (120 BPM).mid ADDED
Binary file (199 Bytes).

The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 06 Amin (120 BPM).mid ADDED
Binary file (167 Bytes).

The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 07 F#min (120 BPM).mid ADDED
Binary file (217 Bytes).

The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 08 Ebmin (120 BPM).mid ADDED
Binary file (284 Bytes).

The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 09 C#min (120 BPM).mid ADDED
Binary file (248 Bytes).

The Producer School - Oasis/Loops/Piano Loops/TPS - Oasis - Piano Loop 10 Amin (120 BPM).mid ADDED
Binary file (336 Bytes).

The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 01 C (120 BPM).mid ADDED
Binary file (576 Bytes).

The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 02 C (120 BPM).mid ADDED
Binary file (224 Bytes).

The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 03 C (120 BPM).mid ADDED
Binary file (288 Bytes).

The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 04 C (120 BPM).mid ADDED
Binary file (116 Bytes).

The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 05 C (120 BPM).mid ADDED
Binary file (384 Bytes).

The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 06 C (120 BPM).mid ADDED
Binary file (256 Bytes).

The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 07 Amin (120 BPM).mid ADDED
Binary file (1.17 kB).

The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 08 C (120 BPM).mid ADDED
Binary file (576 Bytes).

The Producer School - Oasis/Loops/Synth Loops/TPS - Oasis - Synth Loop 09 C (120 BPM).mid ADDED
Binary file (359 Bytes).

The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/DisplayState.plist ADDED
Binary file (4.23 kB).

The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/DisplayStateArchive ADDED
Binary file (7.32 kB).

The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/MetaData.plist ADDED
Binary file (7 kB).

The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/Project File Backups/00/MetaData.plist ADDED
Binary file (6.56 kB).

The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/Project File Backups/01/MetaData.plist ADDED
Binary file (6.56 kB).

The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/Project File Backups/02/MetaData.plist ADDED
Binary file (6.56 kB).

The Producer School - Oasis/Project Files/Logic Pro/TPS - Oasis - Project File 01/TPS - Oasis - Project File 01 - Logic Pro.logicx/Alternatives/000/Project File Backups/03/MetaData.plist ADDED
Binary file (6.56 kB).

extra_drum_dirs.csv ADDED
Percorso Directory,Tipo
"./Cymatics - Cratediggers Vol.1/Drum Fills","Drum Loop"
"./Cymatics - Cratediggers Vol.1/Drum Loops","Drum Loop"
"./Cymatics - Cratediggers Vol.1/Hihat Loops & MIDI","Drum Loop"
"./Cymatics - Cratediggers Vol.1/Percussion Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Bassquake/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Commercial Deep House/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Commercial Deep House/One-Shots/Drum Shots","Drum One Shot"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Deathstep Essentials 2/Loops/Drum Fills","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Deathstep Essentials 2/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Deathstep Essentials 2/One-Shots/Drum Shots","Drum One Shot"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Gaming Dubstep & Midtempo 2/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Gaming Dubstep & Midtempo 2/One-Shots/Drum Shots","Drum One Shot"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Trap & Wave Essentials/Loops/Drum Fills","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Trap & Wave Essentials/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Festival EDM Essentials/Loops/Drum Fills","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Festival EDM Essentials/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Festival EDM Essentials/One-Shots/Drum Shots","Drum One Shot"
"./Ghosthack - Ultimate Producer Bundle 2025/Hardstyle Legends/Loops/Drum Fills","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Hardstyle Legends/One-Shots/Drum Shots","Drum One Shot"
"./Ghosthack - Ultimate Producer Bundle 2025/Infinite - Liquid DnB Samples/Loops/Drums/Full Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Infinite - Liquid DnB Samples/Loops/Drums/Separated Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Melodic Techno Essentials/Loops/Drum Fills","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Melodic Techno Essentials/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Melodic Techno Essentials/One-Shots/Drum Shots","Drum One Shot"
"./Ghosthack - Ultimate Producer Bundle 2025/Midtempo Meteor/Loops/Drum Fills","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Midtempo Meteor/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Neo Rave Essentials/Loops/Drum Fills","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Neo Rave Essentials/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Neo Rave Essentials/One-Shots/Drum Shots","Drum One Shot"
"./Ghosthack - Ultimate Producer Bundle 2025/Phonk Essentials/Loops/Drums/Full Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Phonk Essentials/Loops/Drums/Separated Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Vision - Outrun Synthwave/Loops/Drum Fills","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Vision - Outrun Synthwave/Loops/Drum Loops","Drum Loop"
"./Ghosthack - Ultimate Producer Bundle 2025/Vision - Outrun Synthwave/One-Shots/Drum Shots","Drum One Shot"
"./Infinity Audio - Hugelism/Drum Loops","Drum Loop"
"./Infinity Audio - Hugelism/Percussion Loops","Drum Loop"
"./Sample.Tools.by.Cr2.Festival.Trap.3.MULTiFORMAT-DECiBEL/Festival_Trap_3/Audio_MIDI/Drum Loops","Drum Loop"
"./The Producer School - Oasis/Sample Pack/Drumloops/Percussion Loops","Drum Loop"
login.py ADDED
from huggingface_hub import HfApi

api = HfApi()

api.upload_large_folder(
    folder_path=".",
    repo_id="simone00/mymodel",
    repo_type="dataset",  # "model", "dataset", or "space"
    num_workers=4,  # parallel uploads - increase if you have good bandwidth
)
nodrums.py ADDED
import os
import shutil
import csv

# --- CONFIGURATION ---
csv_file_path = 'percorsi.csv'
main_no_drums_folder = "No-Drums"
# Audio file extensions to move
audio_extensions = ('.wav', '.mp3', '.aif', '.aiff', '.flac')
# Keywords that mark drum files/folders
drum_keywords = ['drum', 'kick', 'snare', 'hihat', 'perc', 'clap', 'cymbal']

def organize_by_pack_root():
    if not os.path.exists(csv_file_path):
        print(f"Error: file {csv_file_path} not found.")
        return

    # Create the No-Drums folder in the root
    if not os.path.exists(main_no_drums_folder):
        os.makedirs(main_no_drums_folder)
        print(f"Created main folder: {main_no_drums_folder}")

    with open(csv_file_path, mode='r', encoding='utf-8') as file:
        reader = csv.DictReader(file)

        # Use a set so the same pack folder is not scanned twice
        processed_packs = set()

        for row in reader:
            specific_folder = row['Percorso Directory']

            if not os.path.exists(specific_folder):
                continue

            # --- LOGIC: find the pack root ---
            # Heuristic: assume the pack root sits some level above the
            # listed drum folder.
            # Example: .../PackName/Loops/Drum Loops -> root is .../PackName/

            # To be safe, scan the folder just above the specific one
            pack_root = os.path.dirname(specific_folder)

            # Skip packs that were already processed
            if pack_root in processed_packs:
                continue

            processed_packs.add(pack_root)
            print(f"\nScanning pack root: {pack_root}")

            # --- Recursive scan of the pack root ---
            for dirpath, dirnames, filenames in os.walk(pack_root):

                # Skip the No-Drums folder itself
                if main_no_drums_folder in dirpath:
                    continue

                for filename in filenames:
                    if filename.lower().endswith(audio_extensions):

                        file_path_full = os.path.join(dirpath, filename)

                        # --- STRICT CHECK ---
                        # Move only if neither the file nor its path contains a drum keyword
                        if not any(keyword in file_path_full.lower() for keyword in drum_keywords):

                            destination_path = os.path.join(main_no_drums_folder, filename)

                            # Handle duplicate names
                            if os.path.exists(destination_path):
                                base, extension = os.path.splitext(filename)
                                counter = 1
                                while os.path.exists(os.path.join(main_no_drums_folder, f"{base}_{counter}{extension}")):
                                    counter += 1
                                destination_path = os.path.join(main_no_drums_folder, f"{base}_{counter}{extension}")

                            print(f" -> Moving to root/No-Drums: {filename} (from {dirpath})")
                            shutil.move(file_path_full, destination_path)

    print("\nOrganization complete.")

if __name__ == "__main__":
    organize_by_pack_root()
param_sweep_results.csv ADDED
@@ -0,0 +1,145 @@
+ rank,sim_mean,sim_median,sim_p10,sim_p90,delta_db,window_length,hop_length,eps,max_iter,release_ms,max_gain_db
+ 1,0.92369,0.97345,0.78397,1.0,1.5,1024,256,0.05,500,100.0,6.0
+ 2,0.92342,0.97343,0.78386,1.0,1.5,1024,256,0.1,500,100.0,6.0
+ 3,0.92219,0.97457,0.7806,1.0,1.5,1024,256,0.1,500,100.0,4.0
+ 4,0.92205,0.97453,0.77916,1.0,1.5,1024,256,0.05,500,100.0,4.0
+ 5,0.92198,0.97042,0.78126,1.0,1.5,1024,256,0.05,500,250.0,6.0
+ 6,0.92169,0.97045,0.78135,1.0,1.5,1024,256,0.1,500,250.0,6.0
+ 7,0.92023,0.97141,0.77784,1.0,1.5,1024,256,0.1,500,250.0,4.0
+ 8,0.92005,0.97143,0.77696,1.0,1.5,1024,256,0.05,500,250.0,4.0
+ 9,0.91922,0.97758,0.76821,1.0,1.5,1024,256,0.05,500,0.0,0.0
+ 10,0.91922,0.97758,0.76821,1.0,1.5,1024,256,0.1,500,0.0,0.0
+ 11,0.91878,0.97799,0.76663,1.0,1.5,2048,256,0.05,500,0.0,0.0
+ 12,0.91878,0.97799,0.76663,1.0,1.5,2048,256,0.1,500,0.0,0.0
+ 13,0.91873,0.97799,0.76582,1.0,1.5,2048,256,0.05,500,0.0,6.0
+ 14,0.91873,0.97799,0.76582,1.0,1.5,2048,256,0.1,500,0.0,6.0
+ 15,0.91864,0.96349,0.77759,1.0,1.5,2048,256,0.05,500,100.0,6.0
+ 16,0.91863,0.96353,0.77746,1.0,1.5,2048,256,0.1,500,100.0,6.0
+ 17,0.91847,0.97741,0.76555,1.0,1.5,1024,256,0.05,500,0.0,6.0
+ 18,0.91847,0.97741,0.76555,1.0,1.5,1024,256,0.1,500,0.0,6.0
+ 19,0.91838,0.96648,0.77167,1.0,1.5,2048,256,0.05,500,100.0,0.0
+ 20,0.91837,0.96653,0.77156,1.0,1.5,2048,256,0.1,500,100.0,0.0
+ 21,0.91825,0.97757,0.76397,1.0,1.5,2048,256,0.05,500,0.0,4.0
+ 22,0.91825,0.97757,0.76397,1.0,1.5,2048,256,0.1,500,0.0,4.0
+ 23,0.91801,0.97732,0.76484,1.0,1.5,1024,256,0.05,500,0.0,4.0
+ 24,0.91801,0.97732,0.76484,1.0,1.5,1024,256,0.1,500,0.0,4.0
+ 25,0.91758,0.97644,0.76619,1.0,2.0,1024,256,0.05,500,0.0,0.0
+ 26,0.91758,0.97644,0.76619,1.0,2.0,1024,256,0.1,500,0.0,0.0
+ 27,0.91711,0.97653,0.76581,1.0,2.0,2048,256,0.05,500,0.0,0.0
+ 28,0.91711,0.97653,0.76581,1.0,2.0,2048,256,0.1,500,0.0,0.0
+ 29,0.91703,0.97619,0.76507,1.0,2.0,1024,256,0.05,500,0.0,6.0
+ 30,0.91703,0.97619,0.76507,1.0,2.0,1024,256,0.1,500,0.0,6.0
+ 31,0.9167,0.97627,0.76449,1.0,2.0,2048,256,0.05,500,0.0,6.0
+ 32,0.9167,0.97627,0.76449,1.0,2.0,2048,256,0.1,500,0.0,6.0
+ 33,0.91639,0.96321,0.77227,1.0,2.0,1024,256,0.05,500,100.0,6.0
+ 34,0.91613,0.96331,0.7717,1.0,2.0,1024,256,0.1,500,100.0,6.0
+ 35,0.91593,0.96273,0.77179,1.0,1.5,2048,256,0.05,500,100.0,4.0
+ 36,0.91593,0.96275,0.77184,1.0,1.5,2048,256,0.1,500,100.0,4.0
+ 37,0.91574,0.97556,0.76276,1.0,2.0,1024,256,0.05,500,0.0,4.0
+ 38,0.91574,0.97556,0.76276,1.0,2.0,1024,256,0.1,500,0.0,4.0
+ 39,0.91572,0.95651,0.77462,1.0,1.5,2048,256,0.05,500,250.0,6.0
+ 40,0.91568,0.95666,0.77449,1.0,1.5,2048,256,0.1,500,250.0,6.0
+ 41,0.9154,0.97562,0.76083,1.0,2.0,2048,256,0.05,500,0.0,4.0
+ 42,0.9154,0.97562,0.76083,1.0,2.0,2048,256,0.1,500,0.0,4.0
+ 43,0.91469,0.95932,0.76495,1.0,1.5,2048,256,0.1,500,250.0,0.0
+ 44,0.91469,0.9593,0.76508,1.0,1.5,2048,256,0.05,500,250.0,0.0
+ 45,0.91465,0.95919,0.76924,1.0,2.0,1024,256,0.05,500,250.0,6.0
+ 46,0.91461,0.97306,0.76395,1.0,2.5,1024,256,0.05,500,0.0,0.0
+ 47,0.9146,0.97306,0.76399,1.0,2.5,1024,256,0.1,500,0.0,0.0
+ 48,0.91457,0.97286,0.76053,1.0,2.5,2048,256,0.05,500,0.0,0.0
+ 49,0.91457,0.97286,0.76053,1.0,2.5,2048,256,0.1,500,0.0,0.0
+ 50,0.91439,0.95875,0.76926,1.0,2.0,1024,256,0.1,500,250.0,6.0
+ 51,0.91432,0.97273,0.76254,1.0,1.5,1024,256,0.05,500,100.0,0.0
+ 52,0.91432,0.97277,0.76266,1.0,1.5,1024,256,0.1,500,100.0,0.0
+ 53,0.91427,0.9643,0.76733,1.0,2.0,1024,256,0.1,500,100.0,4.0
+ 54,0.91403,0.96416,0.76684,1.0,2.0,1024,256,0.05,500,100.0,4.0
+ 55,0.91395,0.97289,0.76269,1.0,2.5,1024,256,0.1,500,0.0,6.0
+ 56,0.91395,0.97289,0.76269,1.0,2.5,1024,256,0.05,500,0.0,6.0
+ 57,0.91382,0.97246,0.76092,1.0,2.5,2048,256,0.05,500,0.0,6.0
+ 58,0.91382,0.97246,0.76092,1.0,2.5,2048,256,0.1,500,0.0,6.0
+ 59,0.9136,0.95773,0.76252,1.0,2.0,2048,256,0.05,500,100.0,0.0
+ 60,0.91359,0.95769,0.76231,1.0,2.0,2048,256,0.1,500,100.0,0.0
+ 61,0.91297,0.95294,0.76902,1.0,2.0,2048,256,0.05,500,100.0,6.0
+ 62,0.91292,0.95279,0.76913,1.0,2.0,2048,256,0.1,500,100.0,6.0
+ 63,0.91258,0.95586,0.76828,1.0,1.5,2048,256,0.05,500,250.0,4.0
+ 64,0.91253,0.95593,0.76831,1.0,1.5,2048,256,0.1,500,250.0,4.0
+ 65,0.91249,0.97205,0.75794,1.0,2.5,2048,256,0.05,500,0.0,4.0
+ 66,0.91249,0.97205,0.75794,1.0,2.5,2048,256,0.1,500,0.0,4.0
+ 67,0.91233,0.95976,0.7649,1.0,2.0,1024,256,0.1,500,250.0,4.0
+ 68,0.91217,0.9592,0.7643,1.0,2.0,1024,256,0.05,500,250.0,4.0
+ 69,0.9121,0.96997,0.75762,1.0,1.5,1024,256,0.1,500,250.0,0.0
+ 70,0.9121,0.96986,0.75765,1.0,1.5,1024,256,0.05,500,250.0,0.0
+ 71,0.91206,0.97182,0.75925,1.0,2.5,1024,256,0.05,500,0.0,4.0
+ 72,0.91206,0.97182,0.75925,1.0,2.5,1024,256,0.1,500,0.0,4.0
+ 73,0.91124,0.96795,0.75884,1.0,3.0,2048,256,0.05,500,0.0,0.0
+ 74,0.91124,0.96795,0.75884,1.0,3.0,2048,256,0.1,500,0.0,0.0
+ 75,0.91107,0.96722,0.76218,1.0,3.0,1024,256,0.1,500,0.0,0.0
+ 76,0.91107,0.96722,0.76218,1.0,3.0,1024,256,0.05,500,0.0,0.0
+ 77,0.90989,0.94994,0.7581,1.0,2.0,2048,256,0.05,500,250.0,0.0
+ 78,0.90988,0.94991,0.75799,1.0,2.0,2048,256,0.1,500,250.0,0.0
+ 79,0.9096,0.9453,0.76521,1.0,2.0,2048,256,0.05,500,250.0,6.0
+ 80,0.90959,0.96681,0.75597,1.0,3.0,2048,256,0.05,500,0.0,6.0
+ 81,0.90959,0.96681,0.75597,1.0,3.0,2048,256,0.1,500,0.0,6.0
+ 82,0.90953,0.95187,0.76256,1.0,2.0,2048,256,0.05,500,100.0,4.0
+ 83,0.90953,0.94477,0.76566,1.0,2.0,2048,256,0.1,500,250.0,6.0
+ 84,0.90942,0.95144,0.7625,1.0,2.0,2048,256,0.1,500,100.0,4.0
+ 85,0.9094,0.96561,0.75877,1.0,3.0,1024,256,0.1,500,0.0,6.0
+ 86,0.9094,0.96562,0.75877,1.0,3.0,1024,256,0.05,500,0.0,6.0
+ 87,0.909,0.9648,0.7527,1.0,2.0,1024,256,0.05,500,100.0,0.0
+ 88,0.909,0.96473,0.75285,1.0,2.0,1024,256,0.1,500,100.0,0.0
+ 89,0.90769,0.96556,0.75334,1.0,3.0,2048,256,0.05,500,0.0,4.0
+ 90,0.90769,0.96556,0.75334,1.0,3.0,2048,256,0.1,500,0.0,4.0
+ 91,0.90725,0.96353,0.75433,1.0,3.0,1024,256,0.1,500,0.0,4.0
+ 92,0.90724,0.96353,0.75433,1.0,3.0,1024,256,0.05,500,0.0,4.0
+ 93,0.90675,0.96053,0.7489,1.0,2.0,1024,256,0.05,500,250.0,0.0
+ 94,0.90674,0.96041,0.74865,1.0,2.0,1024,256,0.1,500,250.0,0.0
+ 95,0.90582,0.942,0.75865,1.0,2.0,2048,256,0.05,500,250.0,4.0
+ 96,0.9057,0.94166,0.75837,1.0,2.0,2048,256,0.1,500,250.0,4.0
+ 97,0.90492,0.93797,0.75779,1.0,2.5,1024,256,0.05,500,100.0,6.0
+ 98,0.90456,0.93718,0.75641,1.0,2.5,1024,256,0.1,500,100.0,6.0
+ 99,0.90441,0.9397,0.75007,1.0,2.5,2048,256,0.05,500,100.0,0.0
+ 100,0.9044,0.9397,0.75026,1.0,2.5,2048,256,0.1,500,100.0,0.0
+ 101,0.9031,0.93466,0.75453,1.0,2.5,1024,256,0.05,500,250.0,6.0
+ 102,0.90272,0.93341,0.75337,1.0,2.5,1024,256,0.1,500,250.0,6.0
+ 103,0.90139,0.93493,0.74938,1.0,2.5,1024,256,0.05,500,100.0,4.0
+ 104,0.90136,0.9344,0.74948,1.0,2.5,1024,256,0.1,500,100.0,4.0
+ 105,0.90081,0.93414,0.74536,1.0,2.5,2048,256,0.1,500,250.0,0.0
+ 106,0.90081,0.93417,0.74539,1.0,2.5,2048,256,0.05,500,250.0,0.0
+ 107,0.90046,0.92685,0.75308,1.0,2.5,2048,256,0.05,500,100.0,6.0
+ 108,0.90039,0.94467,0.74093,1.0,2.5,1024,256,0.05,500,100.0,0.0
+ 109,0.90038,0.92657,0.75307,1.0,2.5,2048,256,0.1,500,100.0,6.0
+ 110,0.90036,0.94461,0.74137,1.0,2.5,1024,256,0.1,500,100.0,0.0
+ 111,0.89947,0.93077,0.74637,1.0,2.5,1024,256,0.05,500,250.0,4.0
+ 112,0.89937,0.92996,0.74668,1.0,2.5,1024,256,0.1,500,250.0,4.0
+ 113,0.89814,0.92855,0.74274,1.0,3.0,2048,256,0.05,500,100.0,0.0
+ 114,0.89814,0.92844,0.74273,1.0,3.0,2048,256,0.1,500,100.0,0.0
+ 115,0.89796,0.94002,0.73704,1.0,2.5,1024,256,0.05,500,250.0,0.0
+ 116,0.89793,0.94004,0.73671,1.0,2.5,1024,256,0.1,500,250.0,0.0
+ 117,0.89738,0.92223,0.74875,1.0,2.5,2048,256,0.05,500,250.0,6.0
+ 118,0.89728,0.92212,0.74839,1.0,2.5,2048,256,0.1,500,250.0,6.0
+ 119,0.89589,0.91947,0.7463,1.0,2.5,2048,256,0.05,500,100.0,4.0
+ 120,0.89579,0.91925,0.74651,1.0,2.5,2048,256,0.1,500,100.0,4.0
+ 121,0.89559,0.92118,0.74407,1.0,3.0,1024,256,0.05,500,100.0,6.0
+ 122,0.89539,0.92554,0.73887,1.0,3.0,2048,256,0.1,500,250.0,0.0
+ 123,0.89539,0.92544,0.73897,1.0,3.0,2048,256,0.05,500,250.0,0.0
+ 124,0.89512,0.92,0.74294,1.0,3.0,1024,256,0.1,500,100.0,6.0
+ 125,0.8943,0.91902,0.7421,1.0,3.0,1024,256,0.05,500,250.0,6.0
+ 126,0.89379,0.91802,0.74069,1.0,3.0,1024,256,0.1,500,250.0,6.0
+ 127,0.89291,0.92852,0.72894,1.0,3.0,1024,256,0.05,500,100.0,0.0
+ 128,0.89288,0.92858,0.72881,1.0,3.0,1024,256,0.1,500,100.0,0.0
+ 129,0.89245,0.91347,0.74168,1.0,2.5,2048,256,0.05,500,250.0,4.0
+ 130,0.89235,0.91297,0.74183,1.0,2.5,2048,256,0.1,500,250.0,4.0
+ 131,0.89227,0.91352,0.74314,1.0,3.0,2048,256,0.05,500,100.0,6.0
+ 132,0.89219,0.91303,0.74303,1.0,3.0,2048,256,0.1,500,100.0,6.0
+ 133,0.89106,0.91559,0.73677,1.0,3.0,1024,256,0.05,500,100.0,4.0
+ 134,0.89105,0.91386,0.73688,1.0,3.0,1024,256,0.1,500,100.0,4.0
+ 135,0.89086,0.92475,0.72639,1.0,3.0,1024,256,0.05,500,250.0,0.0
+ 136,0.89083,0.92465,0.72637,1.0,3.0,1024,256,0.1,500,250.0,0.0
+ 137,0.88974,0.91353,0.73471,1.0,3.0,1024,256,0.05,500,250.0,4.0
+ 138,0.88974,0.90995,0.73926,1.0,3.0,2048,256,0.05,500,250.0,6.0
+ 139,0.88969,0.91178,0.73463,1.0,3.0,1024,256,0.1,500,250.0,4.0
+ 140,0.88965,0.9097,0.73974,1.0,3.0,2048,256,0.1,500,250.0,6.0
+ 141,0.88663,0.90265,0.73501,1.0,3.0,2048,256,0.05,500,100.0,4.0
+ 142,0.88655,0.9025,0.73544,1.0,3.0,2048,256,0.1,500,100.0,4.0
+ 143,0.88404,0.89986,0.73167,1.0,3.0,2048,256,0.05,500,250.0,4.0
+ 144,0.88395,0.89985,0.7318,1.0,3.0,2048,256,0.1,500,250.0,4.0
percorsi.csv ADDED
@@ -0,0 +1,39 @@
+ Percorso Directory,Tipo
+ "./Cymatics - Cratediggers Vol.1/Drum Fills","Drum Loop"
+ "./Cymatics - Cratediggers Vol.1/Drum Loops","Drum Loop"
+ "./Cymatics - Cratediggers Vol.1/Hihat Loops & MIDI","Drum Loop"
+ "./Cymatics - Cratediggers Vol.1/Percussion Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Bassquake/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Commercial Deep House/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Commercial Deep House/One-Shots/Drum Shots","Drum One Shot"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Deathstep Essentials 2/Loops/Drum Fills","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Deathstep Essentials 2/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Deathstep Essentials 2/One-Shots/Drum Shots","Drum One Shot"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Gaming Dubstep & Midtempo 2/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Gaming Dubstep & Midtempo 2/One-Shots/Drum Shots","Drum One Shot"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Trap & Wave Essentials/Loops/Drum Fills","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2024/Ghosthack - Trap & Wave Essentials/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Festival EDM Essentials/Loops/Drum Fills","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Festival EDM Essentials/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Festival EDM Essentials/One-Shots/Drum Shots","Drum One Shot"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Hardstyle Legends/Loops/Drum Fills","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Hardstyle Legends/One-Shots/Drum Shots","Drum One Shot"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Infinite - Liquid DnB Samples/Loops/Drums/Full Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Infinite - Liquid DnB Samples/Loops/Drums/Separated Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Melodic Techno Essentials/Loops/Drum Fills","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Melodic Techno Essentials/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Melodic Techno Essentials/One-Shots/Drum Shots","Drum One Shot"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Midtempo Meteor/Loops/Drum Fills","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Midtempo Meteor/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Neo Rave Essentials/Loops/Drum Fills","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Neo Rave Essentials/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Neo Rave Essentials/One-Shots/Drum Shots","Drum One Shot"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Phonk Essentials/Loops/Drums/Full Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Phonk Essentials/Loops/Drums/Separated Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Vision - Outrun Synthwave/Loops/Drum Fills","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Vision - Outrun Synthwave/Loops/Drum Loops","Drum Loop"
+ "./Ghosthack - Ultimate Producer Bundle 2025/Vision - Outrun Synthwave/One-Shots/Drum Shots","Drum One Shot"
+ "./Infinity Audio - Hugelism/Drum Loops","Drum Loop"
+ "./Infinity Audio - Hugelism/Percussion Loops","Drum Loop"
+ "./Sample.Tools.by.Cr2.Festival.Trap.3.MULTiFORMAT-DECiBEL/Festival_Trap_3/Audio_MIDI/Drum Loops","Drum Loop"
+ "./The Producer School - Oasis/Sample Pack/Drumloops/Percussion Loops","Drum Loop"
run_param_sweep.py ADDED
@@ -0,0 +1,457 @@
+ """
+ run_param_sweep.py — S-SPADE parameter sweep with residual similarity ranking
+ ================================================================================
+
+ PIPELINE
+ --------
+ 1. Load test.flac (limited) and test_3db.flac (+3 dB pre-limiter version)
+ 2. Normalize both to -20 integrated LUFS with pyloudnorm
+ 3. Ground-truth residual = phase-inverted sum of the two (= what the limiter removed)
+ 4. Normalize the GT residual to -3 dBFS
+ 5. For each parameter combination:
+    a. Run declip on test.flac
+    b. Normalize the output to -20 LUFS
+    c. Compute residual = test_lufs + declipped_lufs * (-1) (phase inversion)
+    d. Normalize to -3 dBFS
+    e. Compare GT vs residual_iter with cosine similarity over temporal +
+       frequency micro-windows
+ 6. Print the final ranking by mean and median similarity
+
+ DEPENDENCIES
+ ------------
+ pip install numpy scipy soundfile pyloudnorm rich
+ (rich is optional but gives a nicer table)
+ """
+
+ import sys
+ import itertools
+ import warnings
+ from dataclasses import dataclass, field, asdict
+ from pathlib import Path
+ from typing import List, Tuple, Dict, Optional
+ import traceback
+
+ import numpy as np
+ import scipy.signal as sig
+ import soundfile as sf
+
+ # ── pyloudnorm ───────────────────────────────────────────────────────────────
+ try:
+     import pyloudnorm as pyln
+     _HAS_PYLN = True
+ except ImportError:
+     _HAS_PYLN = False
+     warnings.warn("pyloudnorm not found — install with: pip install pyloudnorm", stacklevel=1)
+
+ # ── spade_declip ─────────────────────────────────────────────────────────────
+ try:
+     from spade_declip_v11 import declip, DeclipParams
+     _HAS_SPADE = True
+ except ImportError:
+     _HAS_SPADE = False
+     warnings.warn("spade_declip_v11 not found — put the file in the same folder", stacklevel=1)
+
+ # ── rich (optional) ──────────────────────────────────────────────────────────
+ try:
+     from rich.console import Console
+     from rich.table import Table
+     from rich import print as rprint
+     _console = Console()
+     _HAS_RICH = True
+ except ImportError:
+     _HAS_RICH = False
+     _console = None
+
+ # =============================================================================
+ # INPUT FILES — edit here if the names differ
+ # =============================================================================
+ FILE_LIMITED = "test.flac"      # limited track (the one we process)
+ FILE_PLUS3DB = "test_3db.flac"  # same track +3 dB (pre-limiter reference)
+ LUFS_TARGET = -20.0             # integrated-LUFS normalization for both
+ RESIDUAL_DBFS = -3.0            # peak normalization of the residual
+
+ # =============================================================================
+ # PARAMETER GRID TO EXPLORE
+ # =============================================================================
+ # Each list may hold one or more values.
+ # The Cartesian product generates every combination to test.
+
+ PARAM_GRID = {
+     # ── declipping parameters ────────────────────────────────────────────
+     "delta_db"      : [1.5, 2.0, 2.5, 3.0],
+     "window_length" : [1024, 2048],
+     "hop_length"    : [256],        # usually window//4
+     "eps"           : [0.05, 0.1],
+     "max_iter"      : [500],        # raise to 1000 if you have time
+     # ── v11 delimiting ───────────────────────────────────────────────────
+     "release_ms"    : [0.0, 100.0, 250.0],
+     "max_gain_db"   : [0.0, 4.0, 6.0],
+ }
+
+ # Fixed parameters (do not change across iterations)
+ FIXED_PARAMS = dict(
+     algo         = "sspade",
+     frame        = "rdft",
+     s            = 1,
+     r            = 1,
+     mode         = "soft",
+     multiband    = False,
+     macro_expand = False,
+     n_jobs       = -1,
+     verbose      = False,
+     show_progress= False,
+ )
+
+ # Cap the number of combinations to test (None = all)
+ MAX_COMBINATIONS: Optional[int] = None  # e.g. 40 for a quick test
+
+ # =============================================================================
+ # HELPER FUNCTIONS
+ # =============================================================================
+
+ def normalize_lufs(audio: np.ndarray, sr: int, target_lufs: float) -> np.ndarray:
+     """Normalize (N,C) audio to target_lufs integrated LUFS."""
+     if not _HAS_PYLN:
+         raise RuntimeError("pyloudnorm is required for LUFS normalization")
+     meter = pyln.Meter(sr)
+     # pyloudnorm expects (N,) mono or (N,2) stereo
+     loud = meter.integrated_loudness(audio if audio.shape[1] > 1 else audio[:, 0])
+     if np.isinf(loud):
+         return audio  # silence — leave unchanged
+     gain_lin = 10 ** ((target_lufs - loud) / 20.0)
+     return audio * gain_lin
+
+
+ def normalize_peak_dbfs(audio: np.ndarray, target_dbfs: float) -> np.ndarray:
+     """Normalize audio to the target peak (dBFS)."""
+     peak = np.max(np.abs(audio))
+     if peak < 1e-12:
+         return audio
+     target_lin = 10 ** (target_dbfs / 20.0)
+     return audio * (target_lin / peak)
+
+
+ def compute_residual(a: np.ndarray, b: np.ndarray) -> np.ndarray:
+     """
+     Residual = a + (-b) (phase inversion of b).
+     Ensures equal length by truncating to the minimum.
+     """
+     L = min(a.shape[0], b.shape[0])
+     return a[:L] - b[:L]  # equivalent to a + inv_phase(b)
+
+
+ def stft_cosine_similarity(
+     gt: np.ndarray,
+     est: np.ndarray,
+     sr: int,
+     win_samples: int = 2048,
+     hop_samples: int = 512,
+     n_freq_bins: int = 16,  # number of frequency bands (mel-like split)
+ ) -> Dict[str, float]:
+     """
+     Compute the mean cosine similarity between GT and estimate over
+     time-frequency micro-windows.
+
+     Strategy:
+       - STFT of both residuals
+       - Split the frequency axis into `n_freq_bins` log-spaced bands
+       - For each band and each frame: vector cosine similarity
+       - Return mean, median, p10, p90
+
+     Input: 1-D mono audio.
+     """
+     def _stft(x):
+         _, _, Z = sig.stft(
+             x, fs=sr, window="hann",
+             nperseg=win_samples, noverlap=win_samples - hop_samples,
+             boundary=None, padded=False
+         )
+         return Z  # shape: (freqs, time)
+
+     # Use only the left channel if stereo
+     gt_m = gt[:, 0] if gt.ndim == 2 else gt
+     est_m = est[:, 0] if est.ndim == 2 else est
+
+     L = min(len(gt_m), len(est_m))
+     Z_gt = _stft(gt_m[:L])
+     Z_est = _stft(est_m[:L])
+
+     n_freqs, n_frames = Z_gt.shape
+
+     # Split into log-spaced bands
+     edges = np.unique(
+         np.round(np.logspace(0, np.log10(n_freqs), n_freq_bins + 1)).astype(int)
+     )
+     edges = np.clip(edges, 0, n_freqs)
+
+     similarities = []
+     for i in range(len(edges) - 1):
+         f0, f1 = edges[i], edges[i + 1]
+         if f1 <= f0:
+             continue
+         # For each time frame: cosine similarity over the band's frequency vector
+         g = np.abs(Z_gt[f0:f1, :])   # (band_size, frames)
+         e = np.abs(Z_est[f0:f1, :])
+
+         # Cosine similarity per frame
+         dot = np.sum(g * e, axis=0)
+         norm_g = np.sqrt(np.sum(g ** 2, axis=0)) + 1e-12
+         norm_e = np.sqrt(np.sum(e ** 2, axis=0)) + 1e-12
+         cos_frame = dot / (norm_g * norm_e)
+         similarities.extend(cos_frame.tolist())
+
+     arr = np.array(similarities)
+     return {
+         "mean"  : float(np.mean(arr)),
+         "median": float(np.median(arr)),
+         "p10"   : float(np.percentile(arr, 10)),
+         "p90"   : float(np.percentile(arr, 90)),
+     }
+
+
+ # =============================================================================
+ # GROUND-TRUTH PREPARATION
+ # =============================================================================
+
+ def prepare_ground_truth(sr_ref: int) -> Tuple[np.ndarray, int]:
+     """
+     Load test.flac and test_3db.flac, normalize to LUFS_TARGET,
+     compute the GT residual and normalize it to RESIDUAL_DBFS.
+     Returns (residual_gt, sr).
+     """
+     print("\n" + "=" * 65)
+     print("COMPUTING GROUND-TRUTH RESIDUAL")
+     print("=" * 65)
+
+     # Load
+     limited, sr_l = sf.read(FILE_LIMITED, always_2d=True)
+     plus3db, sr_p = sf.read(FILE_PLUS3DB, always_2d=True)
+     assert sr_l == sr_p, f"Sample rates differ: {sr_l} vs {sr_p}"
+     sr = sr_l
+
+     limited = limited.astype(float)
+     plus3db = plus3db.astype(float)
+
+     print(f"  {FILE_LIMITED} : {limited.shape[0]} samples @ {sr} Hz "
+           f"| peak={np.max(np.abs(limited)):.4f}")
+     print(f"  {FILE_PLUS3DB}: {plus3db.shape[0]} samples @ {sr} Hz "
+           f"| peak={np.max(np.abs(plus3db)):.4f}")
+
+     # LUFS normalization
+     limited_lufs = normalize_lufs(limited, sr, LUFS_TARGET)
+     plus3db_lufs = normalize_lufs(plus3db, sr, LUFS_TARGET)
+     print(f"  LUFS normalization: target={LUFS_TARGET} LUFS")
+
+     # Residual
+     residual_gt = compute_residual(plus3db_lufs, limited_lufs)
+     residual_gt = normalize_peak_dbfs(residual_gt, RESIDUAL_DBFS)
+
+     peak_res = np.max(np.abs(residual_gt))
+     print(f"  Normalized GT residual peak: {20*np.log10(peak_res+1e-12):.2f} dBFS "
+           f"({residual_gt.shape[0]} samples)")
+
+     return residual_gt, sr
+
+
+ # =============================================================================
+ # SINGLE DECLIP + RESIDUAL ITERATION
+ # =============================================================================
+
+ def run_iteration(
+     limited_raw: np.ndarray,
+     sr: int,
+     params_dict: dict,
+     residual_gt: np.ndarray,
+ ) -> Optional[Dict]:
+     """
+     Run a single declip iteration with the given params,
+     compute the residual and its similarity to the GT.
+     Returns a result dictionary, or None on failure.
+     """
+     try:
+         # DeclipParams
+         p = DeclipParams(
+             sample_rate = sr,
+             **{k: v for k, v in FIXED_PARAMS.items()},
+             **params_dict,
+         )
+
+         fixed, _ = declip(limited_raw.copy(), p)
+         fixed_2d = fixed[:, None] if fixed.ndim == 1 else fixed
+
+         # Normalize the output to LUFS_TARGET
+         fixed_lufs = normalize_lufs(fixed_2d, sr, LUFS_TARGET)
+
+         # Also LUFS-normalize the limited input (recomputed each time for safety)
+         limited_lufs = normalize_lufs(limited_raw, sr, LUFS_TARGET)
+
+         # Iteration residual
+         residual_iter = compute_residual(fixed_lufs, limited_lufs)
+         residual_iter = normalize_peak_dbfs(residual_iter, RESIDUAL_DBFS)
+
+         # Cosine similarity over TF micro-windows
+         sim = stft_cosine_similarity(residual_gt, residual_iter, sr)
+
+         return {
+             "params"    : params_dict,
+             "sim_mean"  : sim["mean"],
+             "sim_median": sim["median"],
+             "sim_p10"   : sim["p10"],
+             "sim_p90"   : sim["p90"],
+         }
+
+     except Exception as exc:
+         print(f"  [ERROR] {exc}")
+         traceback.print_exc()
+         return None
+
+
309
+ # =============================================================================
310
+ # STAMPA RISULTATI
311
+ # =============================================================================
312
+
313
+ def print_results(results: List[Dict], top_n: int = 20):
314
+ """Stampa il ranking per sim_mean decrescente."""
315
+
316
+ results_sorted = sorted(results, key=lambda x: x["sim_mean"], reverse=True)
317
+
318
+ print("\n" + "=" * 65)
319
+ print(f"RANKING (top {min(top_n, len(results_sorted))} / {len(results_sorted)} iterazioni)")
320
+ print(f"Metrica principale: similarità coseno media sulle micro-finestre TF")
321
+ print("=" * 65)
322
+
323
+ if _HAS_RICH:
324
+ table = Table(show_header=True, header_style="bold cyan")
325
+ table.add_column("#", style="dim", width=4)
326
+ table.add_column("mean", justify="right", width=7)
327
+ table.add_column("median", justify="right", width=7)
328
+ table.add_column("p10", justify="right", width=7)
329
+ table.add_column("p90", justify="right", width=7)
330
+ table.add_column("delta_db", justify="right", width=8)
331
+ table.add_column("win", justify="right", width=6)
332
+ table.add_column("hop", justify="right", width=6)
333
+ table.add_column("eps", justify="right", width=6)
334
+ table.add_column("max_iter", justify="right", width=8)
335
+ table.add_column("release_ms", justify="right", width=10)
336
+ table.add_column("max_gain_db", justify="right", width=11)
337
+
338
+ for rank, r in enumerate(results_sorted[:top_n], 1):
339
+ p = r["params"]
340
+ row_style = "green" if rank == 1 else ("yellow" if rank <= 3 else "")
341
+ table.add_row(
342
+ str(rank),
343
+ f"{r['sim_mean']:.4f}",
344
+ f"{r['sim_median']:.4f}",
345
+ f"{r['sim_p10']:.4f}",
346
+ f"{r['sim_p90']:.4f}",
347
+ str(p.get("delta_db", "—")),
348
+ str(p.get("window_length", "—")),
349
+ str(p.get("hop_length", "—")),
350
+ str(p.get("eps", "—")),
351
+ str(p.get("max_iter", "—")),
352
+ str(p.get("release_ms", "—")),
353
+ str(p.get("max_gain_db", "—")),
354
+ style=row_style,
355
+ )
356
+ _console.print(table)
357
+ else:
358
+ header = (
359
+ f"{'#':>3} {'mean':>7} {'med':>7} {'p10':>7} {'p90':>7}"
360
+ f" {'Δdb':>5} {'win':>5} {'hop':>4} {'eps':>5} "
361
+ f"{'iter':>5} {'rel_ms':>7} {'gain_db':>8}"
362
+ )
363
+ print(header)
364
+ print("-" * len(header))
365
+ for rank, r in enumerate(results_sorted[:top_n], 1):
366
+ p = r["params"]
367
+ print(
368
+ f"{rank:>3} {r['sim_mean']:>7.4f} {r['sim_median']:>7.4f}"
369
+ f" {r['sim_p10']:>7.4f} {r['sim_p90']:>7.4f}"
370
+ f" {p.get('delta_db',0):>5} {p.get('window_length',0):>5}"
371
+ f" {p.get('hop_length',0):>4} {p.get('eps',0):>5}"
372
+ f" {p.get('max_iter',0):>5} {p.get('release_ms',0):>7}"
373
+ f" {p.get('max_gain_db',0):>8}"
374
+ )
375
+
376
+ # Parametri migliori
377
+ best = results_sorted[0]
378
+ print("\n✓ PARAMETRI MIGLIORI:")
379
+ for k, v in best["params"].items():
380
+ print(f" {k} = {v}")
381
+ print(f" → sim_mean={best['sim_mean']:.4f} "
382
+ f"sim_median={best['sim_median']:.4f}")
383
+
384
+
385
+ # =============================================================================
386
+ # MAIN
387
+ # =============================================================================
388
+
389
+ def main():
390
+ if not _HAS_PYLN:
391
+ sys.exit("Installa pyloudnorm prima di eseguire: pip install pyloudnorm")
392
+ if not _HAS_SPADE:
393
+ sys.exit("spade_declip_v11.py non trovato nella cartella corrente")
394
+
395
+ # ── Carica il file limitato una sola volta ────────────────────────────
396
+ limited_raw, sr = sf.read(FILE_LIMITED, always_2d=True)
397
+ limited_raw = limited_raw.astype(float)
398
+
399
+ # ── Ground-truth residual ─────────────────────────────────────────────
400
+ residual_gt, sr = prepare_ground_truth(sr)
401
+
402
+     # ── Generate parameter grid ───────────────────────────────────────────
+     keys = list(PARAM_GRID.keys())
+     values = list(PARAM_GRID.values())
+     combos = list(itertools.product(*values))
+ 
+     if MAX_COMBINATIONS and len(combos) > MAX_COMBINATIONS:
+         # Simple stratified sampling (every N-th combination)
+         step = len(combos) // MAX_COMBINATIONS
+         combos = combos[::step][:MAX_COMBINATIONS]
+         print(f"\n[INFO] Grid reduced to {len(combos)} combinations (MAX_COMBINATIONS={MAX_COMBINATIONS})")
+ 
+     print(f"\n{'='*65}")
+     print(f"PARAM SWEEP — {len(combos)} combinations to test")
+     print(f"{'='*65}")
+ 
+     results = []
+     for i, combo in enumerate(combos):
+         params_dict = dict(zip(keys, combo))
+         label = " ".join(f"{k}={v}" for k, v in params_dict.items())
+         print(f"\n[{i+1:>3}/{len(combos)}] {label}")
+ 
+         res = run_iteration(limited_raw, sr, params_dict, residual_gt)
+         if res is not None:
+             print(f"  → sim_mean={res['sim_mean']:.4f} "
+                   f"sim_median={res['sim_median']:.4f}")
+             results.append(res)
+         else:
+             print("  → SKIPPED (error)")
+ 
+     if not results:
+         print("\n[ERROR] No iteration completed.")
+         return
+ 
+     print_results(results, top_n=20)
+ 
+     # ── Save all results to CSV ───────────────────────────────────────────
+     import csv
+     csv_path = "param_sweep_results.csv"
+     fieldnames = ["rank", "sim_mean", "sim_median", "sim_p10", "sim_p90"] + keys
+     results_sorted = sorted(results, key=lambda x: x["sim_mean"], reverse=True)
+     with open(csv_path, "w", newline="") as f:
+         w = csv.DictWriter(f, fieldnames=fieldnames)
+         w.writeheader()
+         for rank, r in enumerate(results_sorted, 1):
+             row = {"rank": rank,
+                    "sim_mean": round(r["sim_mean"], 5),
+                    "sim_median": round(r["sim_median"], 5),
+                    "sim_p10": round(r["sim_p10"], 5),
+                    "sim_p90": round(r["sim_p90"], 5)}
+             row.update(r["params"])
+             w.writerow(row)
+     print(f"\n  📄 Results saved to: {csv_path}")
+ 
+ 
+ if __name__ == "__main__":
+     main()
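The grid-thinning step in the sweep above (take every N-th combination of the Cartesian product, then cap the count) can be sketched in isolation. `param_grid` and `max_combinations` below are hypothetical stand-ins for the script's `PARAM_GRID` / `MAX_COMBINATIONS` globals:

```python
import itertools

# Hypothetical stand-ins for the script's PARAM_GRID / MAX_COMBINATIONS globals
param_grid = {"delta_db": [0.5, 1.0, 1.5, 2.0], "max_iter": [250, 500, 1000]}
max_combinations = 5

# Full Cartesian product: 4 * 3 = 12 parameter combinations
combos = list(itertools.product(*param_grid.values()))

if max_combinations and len(combos) > max_combinations:
    # Take every N-th combination (spreads picks across the grid),
    # then cap at max_combinations since 12 // 5 = 2 leaves 6 picks
    step = len(combos) // max_combinations
    combos = combos[::step][:max_combinations]

print(len(combos))  # 5
```

Because the product enumerates the last parameter fastest, the every-N-th slice keeps combinations spread across the whole grid rather than only the first rows.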
run_smart_sweep.py ADDED
@@ -0,0 +1,1136 @@
+ """
+ run_smart_sweep.py — Hybrid SPADE (v11 HF + Unrolled LF) · Bayesian sweep
+ ===============================================================================
+ 
+ Hybrid architecture
+ -------------------
+     limited signal
+ 
+     LR crossover split at BAND_CROSSOVER_HZ (default 8000 Hz)
+     ├── HF (> 8 kHz) → SPADE v11 S-SPADE H_k (hard thresh, identical, unchanged)
+     └── LF (< 8 kHz) → SPADEUnrolled model (learned soft thresh, context GRU)
+ 
+     sum LF_rec + HF_rec → recovered signal
+ 
+ Rationale
+ ---------
+ SPADE v11 recovers HF transients well (cymbal snap, hi-hat attack, snap):
+ the DCT coefficients above 8 kHz are sparse and H_k finds them in a few
+ iterations. Below 8 kHz (kick body, bass fundamental) v11 under-recovers because:
+   • the correct sparsity level k is content-dependent, and the fixed
+     s/r/max_iter schedule does not guess it
+   • tonal/sustained content is not globally sparse → H_k wastes
+     budget on irrelevant HF coefficients
+ The learned model solves both problems via adaptive lambda_lf and g_max.
+ 
+ Evaluation pipeline (6 tracks, run_smart_sweep standard)
+ --------------------------------------------------------
+   01_orig_with_noise   drum + pink noise @0 dBFS (pipeline input)
+   02_limited           limiter output (SPADE input) ≈ −LIMITER_THRESHOLD_DB dBFS
+   03_gt_residual       GT residual @RESIDUAL_DBFS (includes noise attenuation)
+   04_spade_output      hybrid output (float32, may exceed 0 dBFS)
+   05_res_iter          hybrid residual @RESIDUAL_DBFS (sparse component only)
+   06_diff_residuals    GT_res − res_iter @RESIDUAL_DBFS (ideally silence)
+                        → annotated with cos_sim, diff/GT dB, noise_floor
+ 
+ Bayesian sweep
+ --------------
+ The ML model is fixed (weights loaded from checkpoint).
+ The TPE optimizes the classical parameters independently for each band:
+   LF: lf_delta_db, lf_max_gain_db, lf_release_ms
+   HF: hf_delta_db, hf_win, hf_hop, hf_release_ms, hf_max_gain_db, hf_eps, hf_max_iter
+ 
+ USAGE
+ -----
+     python run_smart_sweep.py --model checkpoints/phase1_best.pt
+     python run_smart_sweep.py --model checkpoints/phase1_best.pt --trials 100
+     python run_smart_sweep.py --model checkpoints/phase1_best.pt --debug-export 5
+     python run_smart_sweep.py --model checkpoints/phase1_best.pt --resume
+     python run_smart_sweep.py --model checkpoints/phase1_best.pt --report
+ 
+     # baseline: v11 broadband only (no ML model)
+     python run_smart_sweep.py --baseline-v11
+ 
+ DEPENDENCIES
+ ------------
+     pip install numpy scipy soundfile optuna rich torch
+     spade_declip_v11.py — must be on the Python path
+     spade_unrolled.py   — must be on the Python path (for HybridSPADEInference)
+ """
+ 
+ from __future__ import annotations
+ 
+ import argparse
+ import logging
+ import sys
+ import time
+ import warnings
+ from dataclasses import asdict
+ from pathlib import Path
+ from typing import Dict, List, Optional, Tuple
+ 
+ import numpy as np
+ import scipy.signal as sig
+ import soundfile as sf
+ 
+ logging.getLogger("optuna").setLevel(logging.WARNING)
+ 
+ # ── optuna ───────────────────────────────────────────────────────────────────
+ try:
+     import optuna
+     from optuna.samplers import TPESampler
+     from optuna.pruners import MedianPruner
+     _HAS_OPTUNA = True
+ except ImportError:
+     _HAS_OPTUNA = False
+     warnings.warn("optuna not found — pip install optuna")
+ 
+ # ── rich ─────────────────────────────────────────────────────────────────────
+ try:
+     from rich.console import Console
+     from rich.table import Table
+     _console = Console()
+     _HAS_RICH = True
+ except ImportError:
+     _HAS_RICH = False
+     _console = None
+ 
+ # ── spade v11 ────────────────────────────────────────────────────────────────
+ try:
+     from spade_declip_v11 import declip as _v11_declip, DeclipParams as _V11Params
+     _HAS_V11 = True
+ except ImportError:
+     _HAS_V11 = False
+     warnings.warn("spade_declip_v11.py not found — HF processing unavailable")
+ 
+ # ── spade_unrolled ───────────────────────────────────────────────────────────
+ try:
+     import torch
+     from spade_unrolled import (
+         SPADEUnrolled, UnrolledConfig, SPADEUnrolledInference,
+         HybridSPADEInference,
+     )
+     _HAS_UNROLLED = True
+ except ImportError:
+     _HAS_UNROLLED = False
+     warnings.warn("spade_unrolled.py / torch not found — ML model unavailable")
+ 
+ 
+ # =============================================================================
+ # CONFIG
+ # =============================================================================
+ 
+ DRUM_DIRS = ["Kicks", "Snares", "Perc", "Tops"]
+ 
+ # Synthetic limiter (identical to run_smart_sweep_old)
+ LIMITER_THRESHOLD_DB = 1.5      # dB below the ceiling (positive)
+ LIMITER_RELEASE_MS   = 80.0     # synthetic limiter release (ms)
+ 
+ RESIDUAL_DBFS       = -3.0      # residual normalization for cross-file comparability
+ PINK_NOISE_LEVEL_DB = -20.0     # background noise (dB rel. to the drum peak)
+ 
+ # LF/HF crossover: HF uses v11 unchanged, LF uses the learned model
+ BAND_CROSSOVER_HZ = 8000.0
+ 
+ # FIXED v11 solver parameters (HF)
+ HF_FIXED = dict(
+     algo          = "sspade",
+     frame         = "rdft",
+     mode          = "soft",
+     n_jobs        = 1,
+     verbose       = False,
+     show_progress = False,
+     use_gpu       = True,
+ )
+ 
+ STUDY_NAME = "hybrid_spade_v1"
+ OUT_CSV    = "hybrid_sweep_results.csv"
+ 
+ # Default debug parameters (used by --debug-export without a sweep)
+ DEBUG_HF = dict(
+     hf_delta_db      = 1.5,
+     hf_window_length = 2048,
+     hf_hop_length    = 512,
+     hf_release_ms    = 80.0,
+     hf_max_gain_db   = 9.0,
+     hf_eps           = 0.05,
+     hf_max_iter      = 500,
+ )
+ DEBUG_LF = dict(
+     lf_delta_db    = 1.5,
+     lf_max_gain_db = 9.0,
+     lf_release_ms  = 80.0,
+ )
+ 
+ 
+ # =============================================================================
+ # HELPERS (identical to run_smart_sweep_old)
+ # =============================================================================
+ 
+ def ensure_2d(a: np.ndarray) -> np.ndarray:
+     return a[:, None] if a.ndim == 1 else a
+ 
+ 
+ def normalize_to_0dBFS(a: np.ndarray) -> np.ndarray:
+     pk = np.max(np.abs(a))
+     return a / pk if pk > 1e-12 else a
+ 
+ 
+ def normalize_peak(a: np.ndarray, target_dbfs: float) -> np.ndarray:
+     pk = np.max(np.abs(a))
+     return a * (10 ** (target_dbfs / 20.0) / pk) if pk > 1e-12 else a
+ 
+ 
+ def generate_pink_noise(
+     n_samples: int, n_channels: int, rng: np.random.Generator
+ ) -> np.ndarray:
+     b = np.array([0.049922035, -0.095993537, 0.050612699, -0.004408786])
+     a = np.array([1.0, -2.494956002, 2.017265875, -0.522189400])
+     out = np.empty((n_samples, n_channels))
+     for c in range(n_channels):
+         white = rng.standard_normal(n_samples)
+         pink = sig.lfilter(b, a, white)
+         rms = np.sqrt(np.mean(pink ** 2))
+         out[:, c] = pink / (rms + 1e-12)
+     return out
+ 
+ 
+ def mix_pink_noise(
+     audio_0dBFS: np.ndarray,
+     sr: int,
+     level_db: float,
+     rng: np.random.Generator,
+ ) -> np.ndarray:
+     audio = ensure_2d(audio_0dBFS)
+     N, C = audio.shape
+     noise = generate_pink_noise(N, C, rng)
+     peak = np.max(np.abs(audio))
+     gain = peak * (10 ** (level_db / 20.0))
+     mixed = audio + noise * gain
+     return mixed[:, 0] if audio_0dBFS.ndim == 1 else mixed
+ 
+ 
+ def apply_brickwall_limiter(
+     audio_0dBFS: np.ndarray,
+     sr: int,
+     threshold_db: float = LIMITER_THRESHOLD_DB,
+     release_ms: float = LIMITER_RELEASE_MS,
+ ) -> np.ndarray:
+     thr_lin = 10 ** (-abs(threshold_db) / 20.0)
+     rc = np.exp(-1.0 / max(release_ms * sr / 1000.0, 1e-9))
+     audio = ensure_2d(audio_0dBFS).copy()
+     N, C = audio.shape
+     out = np.empty_like(audio)
+     for c in range(C):
+         ch = audio[:, c]
+         env = 1.0
+         g = np.empty(N)
+         for n in range(N):
+             pk = abs(ch[n])
+             target = thr_lin / pk if pk > thr_lin else 1.0
+             env = target if target < env else rc * env + (1.0 - rc) * target
+             g[n] = env
+         out[:, c] = ch * g
+     return out[:, 0] if audio_0dBFS.ndim == 1 else out
+ 
+ 
+ def cosine_sim_tf(
+     gt: np.ndarray,
+     est: np.ndarray,
+     sr: int,
+     win_samples: int = 1024,
+     hop_samples: int = 256,
+     n_bands: int = 12,
+ ) -> float:
+     L = min(gt.shape[0], est.shape[0])
+     g = (gt[:L, 0] if gt.ndim == 2 else gt[:L]).copy()
+     e = (est[:L, 0] if est.ndim == 2 else est[:L]).copy()
+     win = min(win_samples, max(32, L // 4))
+     hop = min(hop_samples, win // 2)
+     if L < win or win < 32:
+         denom = np.linalg.norm(g) * np.linalg.norm(e) + 1e-12
+         return float(np.dot(g, e) / denom)
+     _, _, Zg = sig.stft(g, fs=sr, window="hann", nperseg=win,
+                         noverlap=win - hop, boundary=None, padded=False)
+     _, _, Ze = sig.stft(e, fs=sr, window="hann", nperseg=win,
+                         noverlap=win - hop, boundary=None, padded=False)
+     n_freqs, n_frames = Zg.shape
+     if n_frames == 0:
+         return float(np.dot(g, e) / (np.linalg.norm(g) * np.linalg.norm(e) + 1e-12))
+     edges = np.unique(np.round(
+         np.logspace(0, np.log10(max(n_freqs, 2)), min(n_bands, n_freqs) + 1)
+     ).astype(int))
+     edges = np.clip(edges, 0, n_freqs)
+     sims = []
+     for i in range(len(edges) - 1):
+         f0, f1 = int(edges[i]), int(edges[i + 1])
+         if f1 <= f0: continue
+         Mg = np.abs(Zg[f0:f1, :])
+         Me = np.abs(Ze[f0:f1, :])
+         dot = np.sum(Mg * Me, axis=0)
+         norm_g = np.sqrt(np.sum(Mg ** 2, axis=0)) + 1e-12
+         norm_e = np.sqrt(np.sum(Me ** 2, axis=0)) + 1e-12
+         sims.extend((dot / (norm_g * norm_e)).tolist())
+     return float(np.mean(sims)) if sims else 0.0
+ 
+ 
+ def _pk_dbfs(a: np.ndarray) -> float:
+     pk = float(np.max(np.abs(a)))
+     return 20.0 * np.log10(pk) if pk > 1e-12 else -999.0
+ 
+ 
+ def _rms_dbfs(a: np.ndarray) -> float:
+     rms = float(np.sqrt(np.mean(np.asarray(a).astype(float) ** 2)))
+     return 20.0 * np.log10(rms) if rms > 1e-12 else -999.0
+ 
+ 
+ def _write_wav(path: Path, audio: np.ndarray, sr: int) -> None:
+     a2d = ensure_2d(audio).astype(np.float32)
+     pk = float(np.max(np.abs(a2d)))
+     if pk > 1.0:
+         print(f"  [WARN] {path.name}: peak={pk:.4f} (+{20*np.log10(pk):.2f} dBFS) — float32")
+     sf.write(str(path), a2d, sr, subtype="FLOAT")
+ 
+ 
+ # =============================================================================
+ # CORPUS
+ # =============================================================================
+ 
+ def build_corpus(base_dir: Path, max_files: Optional[int] = None) -> List[Dict]:
+     """
+     For each drum sample:
+       1. Load and peak-normalize to 0 dBFS
+       2. Mix in pink noise at PINK_NOISE_LEVEL_DB
+       3. Peak-normalize the mix to 0 dBFS
+       4. Apply the synthetic limiter → limited
+       5. GT_res_raw = orig_with_noise − limited
+       6. Discard files where the limiter never engages
+       7. Normalize GT_res to RESIDUAL_DBFS
+ 
+     New vs. the previous version:
+       • orig_with_noise is stored in the corpus (avoids recomputation in debug_export)
+     """
+     corpus = []
+     extensions = {".wav", ".flac", ".aif", ".aiff"}
+     file_index = 0
+ 
+     for folder in DRUM_DIRS:
+         d = base_dir / folder
+         if not d.exists():
+             print(f"  [WARN] Folder not found: {d}")
+             continue
+         for f in sorted(d.glob("*")):
+             if f.suffix.lower() not in extensions:
+                 continue
+             try:
+                 audio, sr = sf.read(str(f), always_2d=True)
+                 audio = audio.astype(float)
+             except Exception as exc:
+                 print(f"  [WARN] {f.name}: {exc}")
+                 continue
+             if audio.shape[0] < 64:
+                 continue
+ 
+             orig = normalize_to_0dBFS(audio)
+             rng = np.random.default_rng(seed=file_index)
+             mixed = ensure_2d(mix_pink_noise(orig, sr, PINK_NOISE_LEVEL_DB, rng))
+             file_index += 1
+             orig_with_noise = ensure_2d(normalize_to_0dBFS(mixed))
+             limited = ensure_2d(apply_brickwall_limiter(orig_with_noise, sr))
+             gt_res_raw = orig_with_noise - limited
+ 
+             if np.max(np.abs(gt_res_raw)) < 1e-6:
+                 print(f"  [SKIP] {f.name} — limiter inactive")
+                 continue
+ 
+             gt_res = normalize_peak(gt_res_raw, RESIDUAL_DBFS)
+ 
+             corpus.append({
+                 "file": f.name,
+                 "sr": sr,
+                 "orig_with_noise": orig_with_noise,  # ← new: ready to use
+                 "limited": limited,
+                 "gt_res": gt_res,
+                 "gt_res_raw": gt_res_raw,            # ← new: absolute scale
+             })
+             if max_files and len(corpus) >= max_files:
+                 return corpus
+ 
+     return corpus
+ 
+ 
+ # =============================================================================
+ # HYBRID PROCESSOR
+ # =============================================================================
+ 
+ def _lr_split_np(x: np.ndarray, crossover_hz: float, sr: int
+                  ) -> Tuple[np.ndarray, np.ndarray]:
+     """Phase-perfect LR crossover. lp + hp == x exactly."""
+     from scipy.signal import butter, sosfiltfilt
+     fc = float(np.clip(crossover_hz, 1.0, sr / 2.0 - 1.0))
+     sos = butter(2, fc, btype="low", fs=sr, output="sos")
+     lp = sosfiltfilt(sos, x)
+     hp = x - lp
+     return lp, hp
+ 
+ 
+ def process_hybrid(
+     limited: np.ndarray,          # (N,) or (N, C) — limited signal
+     sr: int,
+     hf_params: dict,              # parameters for v11 HF
+     lf_model: Optional["HybridSPADEInference"],  # None = v11 only
+     lf_params: dict,              # parameters for LF (delta_db, max_gain_db, …)
+     crossover_hz: float = BAND_CROSSOVER_HZ,
+ ) -> np.ndarray:
+     """
+     Processes a signal with the hybrid pipeline:
+       HF (> crossover_hz): v11 S-SPADE unchanged
+       LF (< crossover_hz): SPADEUnrolled (or v11 if lf_model is None)
+ 
+     If lf_model is None → v11 is used for LF as well (baseline mode).
+ 
+     Returns the same shape as limited.
+     """
+     if not _HAS_V11:
+         raise RuntimeError("spade_declip_v11.py not found — cannot process HF")
+ 
+     mono = limited.ndim == 1
+     if mono:
+         limited = limited[:, None]
+     _, C = limited.shape
+     output = np.zeros_like(limited, dtype=np.float64)
+ 
+     for ch in range(C):
+         yc = limited[:, ch].astype(np.float64)
+ 
+         # ── LR split ─────────────────────────────────────────────────────
+         lf_band, hf_band = _lr_split_np(yc, crossover_hz, sr)
+ 
+         # ── HF: v11 S-SPADE unchanged ────────────────────────────────────
+         hf_win = hf_params.get("hf_window_length", 2048)
+         hf_hop = hf_params.get("hf_hop_length", hf_win // 4)
+         hf_p = _V11Params(
+             sample_rate   = sr,
+             delta_db      = hf_params.get("hf_delta_db", 1.5),
+             window_length = hf_win,
+             hop_length    = hf_hop,
+             s             = hf_params.get("hf_s", 1),
+             r             = hf_params.get("hf_r", 1),
+             eps           = hf_params.get("hf_eps", 0.05),
+             max_iter      = hf_params.get("hf_max_iter", 500),
+             max_gain_db   = hf_params.get("hf_max_gain_db", 9.0),
+             release_ms    = hf_params.get("hf_release_ms", 0.0),
+             **HF_FIXED,
+         )
+         hf_rec, _ = _v11_declip(hf_band.astype(np.float32), hf_p)
+ 
+         # ── LF: learned model or v11 fallback ────────────────────────────
+         if lf_model is not None:
+             # Update the LF params in the wrapper (recreates SPADEUnrolledInference)
+             lf_infer = SPADEUnrolledInference(
+                 lf_model.model,
+                 delta_db    = lf_params.get("lf_delta_db", 1.5),
+                 max_gain_db = lf_params.get("lf_max_gain_db", 9.0),
+                 device      = lf_model.device,
+             )
+             lf_rec = lf_infer.process(lf_band.astype(np.float32), sr)
+         else:
+             # Baseline: v11 for LF as well
+             lf_win = hf_params.get("hf_window_length", 2048)
+             lf_hop = lf_win // 4
+             lf_p = _V11Params(
+                 sample_rate   = sr,
+                 delta_db      = lf_params.get("lf_delta_db", 1.5),
+                 window_length = lf_win,
+                 hop_length    = lf_hop,
+                 eps           = hf_params.get("hf_eps", 0.05),
+                 max_iter      = hf_params.get("hf_max_iter", 500),
+                 max_gain_db   = lf_params.get("lf_max_gain_db", 9.0),
+                 release_ms    = lf_params.get("lf_release_ms", 0.0),
+                 **HF_FIXED,
+             )
+             lf_rec, _ = _v11_declip(lf_band.astype(np.float32), lf_p)
+ 
+         # ── Sum ──────────────────────────────────────────────────────────
+         L = min(len(lf_rec), len(hf_rec))
+         output[:L, ch] = lf_rec[:L].astype(np.float64) + hf_rec[:L]
+ 
+     return output[:, 0] if mono else output
+ 
+ 
+ # =============================================================================
+ # SINGLE-FILE EVALUATION
+ # =============================================================================
+ 
+ def evaluate_one(
+     item: Dict,
+     hf_params: dict,
+     lf_params: dict,
+     lf_model: Optional["HybridSPADEInference"],
+ ) -> Optional[float]:
+     """
+     Runs the hybrid pipeline on one corpus item and returns the score
+     cosine_sim_tf(gt_res, res_iter) in [0, 1]. 1.0 = perfect recovery.
+     """
+     try:
+         sr = item["sr"]
+         limited = item["limited"].copy()
+         gt_res = item["gt_res"]
+ 
+         fixed_2d = ensure_2d(process_hybrid(limited, sr, hf_params, lf_model, lf_params))
+         res_raw = fixed_2d - limited
+         res_iter = normalize_peak(res_raw, RESIDUAL_DBFS)
+ 
+         return cosine_sim_tf(gt_res, res_iter, sr)
+     except Exception as exc:
+         warnings.warn(f"evaluate_one ({item['file']}): {exc}")
+         return None
+ 
+ 
+ # =============================================================================
+ # OPTUNA OBJECTIVE
+ # =============================================================================
+ 
+ def make_objective(
+     corpus: List[Dict],
+     lf_model: Optional["HybridSPADEInference"],
+ ):
+     def objective(trial: "optuna.Trial") -> float:
+ 
+         # ── HF parameters (v11 S-SPADE) ──────────────────────────────────
+         hf_delta = trial.suggest_float("hf_delta_db", 0.5, 3.0, step=0.1)
+         hf_win_e = trial.suggest_int("hf_win_exp", 9, 11)
+         hf_hop_d = trial.suggest_categorical("hf_hop_div", [4, 8])
+         hf_rel   = trial.suggest_float("hf_release_ms", 0.0, 150.0, step=5.0)
+         hf_gain  = trial.suggest_float("hf_max_gain_db", 2.0, 12.0, step=0.5)
+         hf_eps   = trial.suggest_categorical("hf_eps", [0.03, 0.05, 0.1])
+         hf_iter  = trial.suggest_categorical("hf_max_iter", [250, 500, 1000])
+         hf_win = 2 ** hf_win_e
+         hf_hop = hf_win // hf_hop_d
+ 
+         # ── LF parameters (SPADEUnrolled) ────────────────────────────────
+         # The model is fixed; we optimize threshold/gain for the LF band.
+         lf_delta = trial.suggest_float("lf_delta_db", 0.5, 3.0, step=0.1)
+         lf_gain  = trial.suggest_float("lf_max_gain_db", 3.0, 12.0, step=0.5)
+         lf_rel   = trial.suggest_float("lf_release_ms", 0.0, 150.0, step=5.0)
+ 
+         hf_params = dict(
+             hf_delta_db      = hf_delta,
+             hf_window_length = hf_win,
+             hf_hop_length    = hf_hop,
+             hf_release_ms    = hf_rel,
+             hf_max_gain_db   = hf_gain,
+             hf_eps           = hf_eps,
+             hf_max_iter      = hf_iter,
+         )
+         lf_params = dict(
+             lf_delta_db    = lf_delta,
+             lf_max_gain_db = lf_gain,
+             lf_release_ms  = lf_rel,
+         )
+ 
+         scores = []
+         midpoint = len(corpus) // 2
+ 
+         for step, item in enumerate(corpus):
+             sc = evaluate_one(item, hf_params, lf_params, lf_model)
+             if sc is not None:
+                 scores.append(sc)
+             if step == midpoint and scores:
+                 trial.report(float(np.mean(scores)), step=step)
+                 if trial.should_prune():
+                     raise optuna.TrialPruned()
+ 
+         if not scores:
+             return 0.0
+         mean_score = float(np.mean(scores))
+         trial.report(mean_score, step=len(corpus))
+         return mean_score
+ 
+     return objective
+ 
+ 
553
+ # =============================================================================
554
+ # DEBUG EXPORT (6 tracce + analisi spettrale)
555
+ # =============================================================================
556
+
557
+ def debug_export(
558
+ corpus: list,
559
+ base_dir: Path,
560
+ out_dir: Path,
561
+ n_files: int,
562
+ hf_params: dict,
563
+ lf_params: dict,
564
+ lf_model: Optional["HybridSPADEInference"],
565
+ ) -> None:
566
+ """
567
+ Esporta 6 WAV float32 per i primi n_files item del corpus.
568
+
569
+ Tracce esportate
570
+ ----------------
571
+ 01_orig_with_noise drum + pink noise @0 dBFS (prima del limiter)
572
+ 02_limited uscita limiter (ingresso SPADE)
573
+ 03_gt_residual GT residual @RESIDUAL_DBFS
574
+ 04_spade_output uscita ibrida (può >0 dBFS)
575
+ 05_res_iter residual ibrido @RESIDUAL_DBFS
576
+ 06_diff_residuals GT_res − res_iter @RESIDUAL_DBFS
577
+ → annotato: cos_sim, diff/GT dB, noise_floor dB
578
+
579
+ Metrica ideale: 06 = silenzio (diff → −∞ dB)
580
+ Floor fisico : ~ PINK_NOISE_LEVEL_DB + RESIDUAL_DBFS (rumore irrecuperabile)
581
+ """
582
+ out_dir.mkdir(parents=True, exist_ok=True)
583
+ items = corpus[:n_files]
584
+ col_w = max(len(it["file"]) for it in items) + 2
585
+
586
+ HDR = (f" {'file':<{col_w}} {'traccia':<22}"
587
+ f" {'peak dBFS':>10} {'RMS dBFS':>9} note")
588
+ SEP = " " + "─" * (len(HDR) - 2)
589
+
590
+ mode_str = "IBRIDO (v11 HF + ML LF)" if lf_model is not None else "BASELINE v11 broadband"
591
+
592
+ print()
593
+ if _HAS_RICH:
594
+ _console.rule(f"[bold cyan]DEBUG EXPORT — {mode_str}[/]")
595
+ else:
596
+ print("=" * 72)
597
+ print(f"DEBUG EXPORT — {mode_str}")
598
+ print("=" * 72)
599
+
600
+ print(f" Output dir : {out_dir}")
601
+ print(f" Modalità : {mode_str}")
602
+ print(f" Crossover : {BAND_CROSSOVER_HZ:.0f} Hz")
603
+ print(f" HF params : delta={hf_params.get('hf_delta_db',1.5):.2f}"
604
+ f" win={hf_params.get('hf_window_length',2048)}"
605
+ f" rel={hf_params.get('hf_release_ms',0):.0f}ms"
606
+ f" gain={hf_params.get('hf_max_gain_db',9):.1f}dB"
607
+ f" eps={hf_params.get('hf_eps',0.05)}"
608
+ f" iter={hf_params.get('hf_max_iter',500)}")
609
+ print(f" LF params : delta={lf_params.get('lf_delta_db',1.5):.2f}"
610
+ f" gain={lf_params.get('lf_max_gain_db',9):.1f}dB"
611
+ f" rel={lf_params.get('lf_release_ms',0):.0f}ms")
612
+ print(f" File esportati: {len(items)}")
613
+ print()
614
+ print(f" Livelli attesi:")
615
+ print(f" 01 ≈ 0.00 dBFS (normalizzato prima del limiter)")
616
+ print(f" 02 ≈ {-LIMITER_THRESHOLD_DB:+.2f} dBFS (uscita limiter)")
617
+ print(f" 03 = {RESIDUAL_DBFS:+.2f} dBFS (GT residual normalizzato)")
618
+ print(f" 04 può >0 dBFS (transiente recuperato)")
619
+ print(f" 05 = {RESIDUAL_DBFS:+.2f} dBFS (residual ibrido normalizzato)")
620
+ print(f" 06 << 0 dBFS (più basso = migliore)")
621
+ print()
622
+ print(HDR)
623
+
624
+ diff_stats = []
625
+
626
+ for file_idx, item in enumerate(items):
627
+ sr = item["sr"]
628
+ limited = item["limited"].copy()
629
+ gt_res = item["gt_res"]
630
+ gt_res_raw = item["gt_res_raw"]
631
+ orig_with_noise = item["orig_with_noise"]
632
+ stem = Path(item["file"]).stem
633
+
634
+ # ── Esegui pipeline ibrida ───────────────────────────────��────────
635
+ try:
636
+ fixed_2d = ensure_2d(
637
+ process_hybrid(limited.copy(), sr, hf_params, lf_model, lf_params)
638
+ )
639
+ except Exception as exc:
640
+ print(f" [ERRORE] {item['file']}: {exc}")
641
+ continue
642
+
643
+ # ── Residual iterazione (scala assoluta) ─────────────────────────
644
+ res_raw = fixed_2d - limited
645
+
646
+ # ── Metriche sulla scala raw (non normalizzata) ───────────────────
647
+ gt_arr = gt_res_raw
648
+ est_arr = res_raw
649
+ L = min(gt_arr.shape[0], est_arr.shape[0])
650
+
651
+ # Cosine similarity temporale (canale L)
652
+ g_flat = gt_arr[:L, 0] if gt_arr.ndim == 2 else gt_arr[:L]
653
+ e_flat = est_arr[:L, 0] if est_arr.ndim == 2 else est_arr[:L]
654
+ cos_sim_td = float(
655
+ np.dot(g_flat, e_flat) /
656
+ (np.linalg.norm(g_flat) * np.linalg.norm(e_flat) + 1e-12)
657
+ )
658
+
659
+ # diff/GT dB: quanto il residuo dell'errore è grande rispetto al GT
660
+ diff_raw = gt_arr[:L] - est_arr[:L]
661
+ diff_rms_db = _rms_dbfs(diff_raw)
662
+ gt_rms_db = _rms_dbfs(gt_arr[:L])
663
+ diff_vs_gt_db = diff_rms_db - gt_rms_db # 0 dB = diff uguale a GT; << 0 = buono
664
+
665
+ # Floor teorico: il rumore rosa fa parte del GT_res ma è irrecuperabile
666
+ noise_floor_db = PINK_NOISE_LEVEL_DB + RESIDUAL_DBFS # ≈ −23 dBFS
667
+
668
+ # ── Normalizza per l'export WAV ────────────────────────────────────
669
+ res_iter = normalize_peak(res_raw, RESIDUAL_DBFS)
670
+ diff_norm = (normalize_peak(diff_raw, RESIDUAL_DBFS)
671
+ if np.max(np.abs(diff_raw)) > 1e-12
672
+ else diff_raw)
673
+
674
+ diff_stats.append((diff_vs_gt_db, cos_sim_td))
675
+
676
+ # ── Definizione tracce (pipeline standard run_smart_sweep) ──────
677
+ tracks = [
678
+ ("01_orig_with_noise",
679
+ orig_with_noise,
680
+ f"drum+noise @0dBFS (input pipeline)"),
681
+ ("02_limited",
682
+ limited,
683
+ f"uscita limiter (input SPADE) atteso: ~{-LIMITER_THRESHOLD_DB:+.2f}dBFS"),
684
+ ("03_gt_residual",
685
+ gt_res,
686
+ f"GT residual @{RESIDUAL_DBFS:.0f}dBFS (include noise attenuation)"),
687
+ ("04_spade_output",
688
+ fixed_2d,
689
+ f"SPADE output (float32, puo' >0dBFS)"),
690
+ ("05_res_iter",
691
+ res_iter,
692
+ f"residual SPADE @{RESIDUAL_DBFS:.0f}dBFS (solo componente sparsa)"),
693
+ ("06_diff_residuals",
694
+ diff_norm,
695
+ f"GT - iter @{RESIDUAL_DBFS:.0f}dBFS "
696
+ f"cos_sim={cos_sim_td:.3f} diff/GT={diff_vs_gt_db:+.1f}dB "
697
+ f"noise_floor≈{noise_floor_db:+.1f}dB"),
698
+ ]
699
+
700
+ # ── Stampa tabella + scrivi WAV ────────────────────────────────────
701
+ print(SEP)
702
+ for track_name, audio, note in tracks:
703
+ pk = _pk_dbfs(audio)
704
+ rms = _rms_dbfs(audio)
705
+ flag = ""
706
+ if track_name == "06_diff_residuals":
707
+ if diff_vs_gt_db < -12: flag = "[OK] buona convergenza"
708
+ elif diff_vs_gt_db < -6: flag = "[~] convergenza parziale"
709
+ else: flag = "[WARN] diff elevato rispetto al GT"
710
+ row = (f" {item['file']:<{col_w}} {track_name:<22}"
711
+ f" {pk:>+10.2f} {rms:>+9.2f} {note} {flag}")
712
+ if _HAS_RICH:
713
+ color = ("green" if "[OK]" in flag else
714
+ "yellow" if "[~]" in flag else
715
+ "red" if "[WARN]" in flag else "")
716
+ _console.print(row.replace(flag, f"[{color or 'dim'}]{flag}[/]") if flag else row)
717
+ else:
718
+ print(row)
719
+ _write_wav(out_dir / f"{stem}__{track_name}.wav", audio, sr)
720
+
721
+ # ── Analisi spettrale per banda ────────────────────────────────────
722
+ BANDS_SPEC = [
723
+ ("Sub-bass ", 20, 80),
724
+ ("Bass ", 80, 250),
725
+ ("Low-mid ", 250, 800),
726
+ ("High-mid ", 800, 4000),
727
+ ("High <8k ", 4000, 8000),
728
+ ("High >8k ", 8000, 20000),
729
+ ]
730
+
731
+ def band_energy(audio_2d, sr, f_lo, f_hi):
732
+ mono = audio_2d[:, 0] if audio_2d.ndim == 2 else audio_2d
733
+ N = len(mono)
734
+ if N < 8: return -999.0
735
+ nyq = sr / 2.0
736
+ lo = max(f_lo / nyq, 1e-4)
737
+ hi = min(f_hi / nyq, 0.9999)
738
+ if lo >= hi: return -999.0
739
+ if lo < 1e-3:
740
+ b2, a2 = sig.butter(4, hi, btype="low")
741
+ else:
742
+ b2, a2 = sig.butter(4, [lo, hi], btype="band")
743
+ filtered = sig.filtfilt(b2, a2, mono)
744
+ return _rms_dbfs(filtered)
745
+
746
+ print()
747
+ band_hdr = (f" {'banda':<12} {'GT_res RMS':>10} {'iter rec RMS':>13}"
748
+ f" {'diff':>6} {'stato'}")
749
+ print(f" Analisi spettrale — {item['file']} (LF/HF split @ {BAND_CROSSOVER_HZ:.0f} Hz)")
750
+ print(f" {'─'*76}")
751
+ print(band_hdr)
752
+ print(f" {'─'*76}")
753
+ for bname, f_lo, f_hi in BANDS_SPEC:
754
+ gt_db = band_energy(gt_res_raw, sr, f_lo, f_hi)
755
+ iter_db = band_energy(res_raw, sr, f_lo, f_hi)
756
+ is_hf = f_lo >= BAND_CROSSOVER_HZ
757
+ label = "v11 HF →" if is_hf else "ML LF →"
758
+ if gt_db < -60:
759
+ rec_str = " — (silenzio)"
760
+ status = ""
761
+ else:
762
+ d = iter_db - gt_db
763
+ status = ("OK" if d > -3 else
764
+ "~ parziale" if d > -9 else
765
+ "!! sotto")
766
+ rec_str = f"{d:>+6.1f} dB {status}"
767
+ line = f" {bname:<12} {gt_db:>+10.1f} {iter_db:>+13.1f} {rec_str} [{label}]"
768
+ if _HAS_RICH:
769
+ color = ("green" if "OK" in rec_str else
770
+ "yellow" if "~" in rec_str else
771
+ "red" if "!!" in rec_str else "dim")
772
+ _console.print(f"[{color}]{line}[/]")
773
+ else:
774
+ print(line)
775
+ print()
776
+
777
+ # ── Riepilogo complessivo ─────────────────────────────────────────────
778
+ print(SEP)
779
+ if diff_stats:
780
+ vs_gt = [d[0] for d in diff_stats]
781
+ cosims = [d[1] for d in diff_stats]
782
+ nf_db = PINK_NOISE_LEVEL_DB + RESIDUAL_DBFS
783
+
784
+ print(f"\n RIEPILOGO ({len(diff_stats)} file):")
785
+ print(f" diff/GT_rms media : {np.mean(vs_gt):>+7.2f} dB")
786
+ print(f" diff/GT_rms migliore: {np.min(vs_gt):>+7.2f} dB")
787
+ print(f" diff/GT_rms peggiore: {np.max(vs_gt):>+7.2f} dB")
788
+ print(f" cos_sim TD media : {np.mean(cosims):>8.4f} (1.0 = identici)")
789
+ print()
790
+ print(f" Floor fisico (rumore irrecuperabile): ≈ {nf_db:+.1f} dBFS")
791
+ print(f" Soglia 'buona convergenza': diff/GT < −12 dB")
792
+ verdict = ("OK eccellente" if np.mean(vs_gt) < -12 else
793
+ "~ buona" if np.mean(vs_gt) < -6 else
794
+ "INFO compatibile con noise floor")
795
+ print(f" Verdetto: {verdict}")
796
+
797
+ print(f"\n WAV → {out_dir}/")
798
+ print(f" Formato: float32 (usa editor che supporta >0 dBFS)")
799
+ print(f" Nome: <stem>__<N>_<traccia>.wav")
800
+
801
+
802
+ # =============================================================================
803
+ # REPORT + CSV
804
+ # =============================================================================
805
+
806
+ def print_report(study: "optuna.Study", top_n: int = 20):
807
+ trials = sorted(
808
+ [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE],
809
+ key=lambda t: t.value or 0, reverse=True,
810
+ )
811
+ if not trials:
812
+ print("Nessun trial completato.")
813
+ return
814
+
815
+ if _HAS_RICH:
816
+ _console.rule("[bold cyan]RISULTATI SWEEP BAYESIANO — HYBRID SPADE[/]")
817
+ tbl = Table(show_header=True, header_style="bold cyan", show_lines=False)
818
+ for col, w in [
819
+ ("#",4),("score",9),
820
+ ("HF_ddb",6),("HF_win",6),("HF_rel",6),("HF_gain",6),("HF_eps",5),("HF_iter",5),
821
+ ("LF_ddb",6),("LF_gain",6),("LF_rel",6),
822
+ ]:
823
+ tbl.add_column(col, justify="right", width=w)
824
+ for rank, t in enumerate(trials[:top_n], 1):
825
+ p = t.params
826
+ win = 2 ** p.get("hf_win_exp", 11)
828
+ sty = "bold green" if rank == 1 else ("yellow" if rank <= 3 else "")
829
+ tbl.add_row(
830
+ str(rank), f"{t.value:.5f}",
831
+ f"{p['hf_delta_db']:.2f}", str(win),
832
+ f"{p['hf_release_ms']:.0f}", f"{p['hf_max_gain_db']:.1f}",
833
+ str(p['hf_eps']), str(p['hf_max_iter']),
834
+ f"{p['lf_delta_db']:.2f}", f"{p['lf_max_gain_db']:.1f}",
835
+ f"{p['lf_release_ms']:.0f}",
836
+ style=sty,
837
+ )
838
+ _console.print(tbl)
839
+ else:
840
+ hdr = (f"{'#':>3} {'score':>8} {'HFddb':>5} {'HFwin':>5}"
841
+ f" {'HFrel':>5} {'HFgain':>6} {'HFeps':>5} {'HFiter':>5}"
842
+ f" {'LFddb':>5} {'LFgain':>6} {'LFrel':>5}")
843
+ print(hdr); print("─" * len(hdr))
844
+ for rank, t in enumerate(trials[:top_n], 1):
845
+ p = t.params
846
+ win = 2 ** p.get("hf_win_exp", 11)
847
+ print(f"{rank:>3} {t.value:>8.5f} {p['hf_delta_db']:>5.2f}"
848
+ f" {win:>5} {p['hf_release_ms']:>5.0f}"
849
+ f" {p['hf_max_gain_db']:>6.1f} {str(p['hf_eps']):>5}"
850
+ f" {p['hf_max_iter']:>5}"
851
+ f" {p['lf_delta_db']:>5.2f} {p['lf_max_gain_db']:>6.1f}"
852
+ f" {p['lf_release_ms']:>5.0f}")
853
+
854
+ best = trials[0]
855
+ p = best.params
856
+ win = 2 ** p.get("hf_win_exp", 11)
857
+ hop = win // p.get("hf_hop_div", 4)
858
+ print("\n" + "═" * 60)
859
+ print("CONFIG OTTIMALE — HYBRID SPADE")
860
+ print("═" * 60)
861
+ print(f"""
862
+ hf_params = dict(
863
+ hf_delta_db = {p['hf_delta_db']:.2f},
864
+ hf_window_length = {win},
865
+ hf_hop_length = {hop},
866
+ hf_release_ms = {p['hf_release_ms']:.1f},
867
+ hf_max_gain_db = {p['hf_max_gain_db']:.1f},
868
+ hf_eps = {p['hf_eps']},
869
+ hf_max_iter = {p['hf_max_iter']},
870
+ )
871
+ lf_params = dict(
872
+ lf_delta_db = {p['lf_delta_db']:.2f},
873
+ lf_max_gain_db = {p['lf_max_gain_db']:.1f},
874
+ lf_release_ms = {p['lf_release_ms']:.1f},
875
+ )
876
+ """)
877
+ print(f"→ Best score : {best.value:.5f}")
878
+ n_pruned = sum(1 for t in study.trials if t.state == optuna.trial.TrialState.PRUNED)
879
+ print(f" Completed : {len(trials)} Pruned : {n_pruned}")
880
+
881
+
882
+ def save_csv(study: "optuna.Study"):
883
+ import csv
884
+ trials = sorted(
885
+ [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE],
886
+ key=lambda t: t.value or 0, reverse=True,
887
+ )
888
+ if not trials:
889
+ return
890
+ fieldnames = ["rank", "score"] + list(trials[0].params.keys())
891
+ with open(OUT_CSV, "w", newline="") as f:
892
+ w = csv.DictWriter(f, fieldnames=fieldnames)
893
+ w.writeheader()
894
+ for rank, t in enumerate(trials, 1):
895
+ row = {"rank": rank, "score": f"{t.value:.6f}"}
896
+ row.update({k: f"{v:.4f}" if isinstance(v, float) else v
897
+ for k, v in t.params.items()})
898
+ w.writerow(row)
899
+ print(f"\n CSV salvato: {OUT_CSV}")
900
+
901
+
902
+ # =============================================================================
903
+ # MAIN
904
+ # =============================================================================
905
+
906
+ def main():
907
+ p = argparse.ArgumentParser(
908
+ description="Hybrid SPADE (v11 HF + Unrolled LF) — Bayesian sweep",
909
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter,
910
+ )
911
+ p.add_argument("--base-dir", type=Path, default=Path("./Samples"),
912
+ help="Cartella radice contenente Kicks/, Snares/, ecc.")
913
+ p.add_argument("--model", type=Path, default=None,
914
+ dest="model_ckpt",
915
+ help="Checkpoint SPADEUnrolled (.pt). Ometti per baseline v11.")
916
+ p.add_argument("--trials", type=int, default=200)
917
+ p.add_argument("--resume", action="store_true",
918
+ help="Riprende uno studio Optuna esistente")
919
+ p.add_argument("--report", action="store_true",
920
+ help="Stampa solo il report dal DB esistente")
921
+ p.add_argument("--debug-export", type=int, default=0,
922
+ metavar="N",
923
+ help="Esporta le 6 tracce WAV per i primi N file del corpus")
924
+ p.add_argument("--debug-out", type=Path, default=Path("debug_export"),
925
+ help="Directory di output per --debug-export")
926
+ p.add_argument("--baseline-v11", action="store_true",
927
+ help="Usa solo v11 broadband (nessun modello ML) come baseline")
928
+ p.add_argument("--crossover-hz", type=float, default=BAND_CROSSOVER_HZ,
929
+ help="Frequenza di crossover LF/HF in Hz")
930
+ p.add_argument("--max-files", type=int, default=None,
931
+ help="Limita il corpus ai primi N file (test rapido)")
932
+ p.add_argument("--top", type=int, default=20,
933
+ help="Numero di trial da mostrare nel report")
934
+ p.add_argument("--db-path", type=str, default=f"sqlite:///{STUDY_NAME}.db",
935
+ help="SQLite URI per Optuna (default: sqlite:///hybrid_spade_v1.db)")
936
+ args = p.parse_args()
937
+
938
+ if not _HAS_V11:
939
+ print("[ERRORE] spade_declip_v11.py non trovato — uscita.")
940
+ sys.exit(1)
941
+
942
+ # ── Load corpus ───────────────────────────────────────────────────────
943
+ print(f"\n Caricamento corpus da: {args.base_dir}")
944
+ corpus = build_corpus(args.base_dir, max_files=args.max_files)
945
+ if not corpus:
946
+ print("[ERRORE] Corpus vuoto — controlla --base-dir e le cartelle drum.")
947
+ sys.exit(1)
948
+ print(f" Corpus: {len(corpus)} file\n")
949
+
950
+ # ── Load ML model (optional) ──────────────────────────────────────────
951
+ lf_model = None
952
+ if args.model_ckpt is not None and not args.baseline_v11:
953
+ if not _HAS_UNROLLED:
954
+ print("[ERRORE] spade_unrolled.py / PyTorch non trovati.")
955
+ sys.exit(1)
956
+ ckpt = torch.load(args.model_ckpt, map_location="cpu")
957
+ cfg = UnrolledConfig(**ckpt["cfg"])
958
+ model = SPADEUnrolled(cfg)
959
+ model.load_state_dict(ckpt["model"])
960
+ model.eval()
961
+ lf_model = HybridSPADEInference(
962
+ model,
963
+ crossover_hz = args.crossover_hz,
964
+ lf_delta_db = DEBUG_LF["lf_delta_db"],
965
+ lf_max_gain_db = DEBUG_LF["lf_max_gain_db"],
966
+ device = "auto",
967
+ )
968
+ print(f" Modello caricato: {args.model_ckpt}")
969
+ print(f" Crossover: {args.crossover_hz:.0f} Hz")
970
+ print(f" Parametri: {model.parameter_count():,} trainable\n")
971
+ else:
972
+ print(f" Modalità: {'baseline v11 broadband' if args.baseline_v11 else 'baseline v11 (nessun modello specificato)'}\n")
973
+
974
+ # ── Optuna: report-only ────────────────────────────────────────────────
975
+ if args.report:
976
+ if not _HAS_OPTUNA:
977
+ print("[ERRORE] optuna non trovato.")
978
+ sys.exit(1)
979
+ study = optuna.load_study(study_name=STUDY_NAME, storage=args.db_path)
980
+ print_report(study, top_n=args.top)
981
+ save_csv(study)
982
+ return
983
+
984
+ # ── Debug export ───────────────────────────────────────────────────────
985
+ if args.debug_export > 0:
986
+ # If an Optuna DB with completed trials exists → use the best one
987
+ best_hf = dict(DEBUG_HF)
988
+ best_lf = dict(DEBUG_LF)
989
+ if _HAS_OPTUNA:
990
+ try:
991
+ study = optuna.load_study(study_name=STUDY_NAME, storage=args.db_path)
992
+ complete = [t for t in study.trials
993
+ if t.state == optuna.trial.TrialState.COMPLETE]
994
+ if complete:
995
+ bp = max(complete, key=lambda t: t.value or 0).params
996
+ win = 2 ** bp.get("hf_win_exp", 11)
997
+ hop = win // bp.get("hf_hop_div", 4)
998
+ best_hf = dict(
999
+ hf_delta_db = bp.get("hf_delta_db", 1.5),
1000
+ hf_window_length = win,
1001
+ hf_hop_length = hop,
1002
+ hf_release_ms = bp.get("hf_release_ms", 0.0),
1003
+ hf_max_gain_db = bp.get("hf_max_gain_db", 9.0),
1004
+ hf_eps = bp.get("hf_eps", 0.05),
1005
+ hf_max_iter = bp.get("hf_max_iter", 500),
1006
+ )
1007
+ best_lf = dict(
1008
+ lf_delta_db = bp.get("lf_delta_db", 1.5),
1009
+ lf_max_gain_db = bp.get("lf_max_gain_db", 9.0),
1010
+ lf_release_ms = bp.get("lf_release_ms", 0.0),
1011
+ )
1012
+ print(f" Best trial caricato dal DB ({len(complete)} completati)")
1013
+ except Exception:
1014
+ pass
1015
+
1016
+ debug_export(corpus, args.base_dir, args.debug_out,
1017
+ args.debug_export, best_hf, best_lf, lf_model)
1018
+ return
1019
+
1020
+ # ── Bayesian sweep ─────────────────────────────────────────────────────
1021
+ if not _HAS_OPTUNA:
1022
+ print("[ERRORE] optuna non trovato — pip install optuna")
1023
+ sys.exit(1)
1024
+
1025
+ sampler = TPESampler(multivariate=True, seed=42)
1026
+ pruner = MedianPruner(n_startup_trials=10, n_warmup_steps=len(corpus)//2)
1027
+
1028
+ if args.resume:
1029
+ study = optuna.load_study(
1030
+ study_name=STUDY_NAME, storage=args.db_path,
1031
+ sampler=sampler, pruner=pruner,
1032
+ )
1033
+ print(f" Studio ripreso: {len(study.trials)} trial esistenti")
1034
+ else:
1035
+ study = optuna.create_study(
1036
+ study_name=STUDY_NAME, storage=args.db_path,
1037
+ direction="maximize",
1038
+ sampler=sampler, pruner=pruner,
1039
+ load_if_exists=True,
1040
+ )
1041
+
1042
+ objective = make_objective(corpus, lf_model)
1043
+
1044
+ # Progress bar (rich → tqdm → plain)
1045
+ _state = {
1046
+ "done": 0, "pruned": 0,
1047
+ "best": float("-inf"), "best_p": {}, "last": float("-inf"),
1048
+ "t0": time.time(),
1049
+ }
1050
+
1051
+ try:
1052
+ from rich.progress import (
1053
+ Progress, BarColumn, TextColumn,
1054
+ TimeElapsedColumn, TimeRemainingColumn, MofNCompleteColumn,
1055
+ )
1056
+ _has_rich_p = True
1057
+ except ImportError:
1058
+ _has_rich_p = False
1059
+
1060
+ try:
1061
+ import tqdm
1062
+ _has_tqdm = True
1063
+ except ImportError:
1064
+ _has_tqdm = False
1065
+
1066
+ def _on_trial_end(study, trial):
1067
+ fin = trial.state == optuna.trial.TrialState.COMPLETE
1068
+ prn = trial.state == optuna.trial.TrialState.PRUNED
1069
+ if fin:
1070
+ _state["done"] += 1
1071
+ _state["last"] = trial.value or 0.0
1072
+ if _state["last"] > _state["best"]:
1073
+ _state["best"] = _state["last"]
1074
+ _state["best_p"] = dict(study.best_params)
1075
+ elif prn:
1076
+ _state["pruned"] += 1
1077
+
1078
+ t0 = time.time()
1079
+ try:
1080
+ if _has_rich_p:
1081
+ progress = Progress(
1082
+ TextColumn("[bold cyan]Trial[/] [cyan]{task.completed}/{task.total}[/]"),
1083
+ BarColumn(bar_width=32),
1084
+ MofNCompleteColumn(),
1085
+ TextColumn(" score [green]{task.fields[last]:.5f}[/]"),
1086
+ TextColumn(" best [bold green]{task.fields[best]:.5f}[/]"),
1087
+ TextColumn(" [dim]pruned {task.fields[pruned]}[/]"),
1088
+ TimeElapsedColumn(), TextColumn("ETA"), TimeRemainingColumn(),
1089
+ refresh_per_second=4,
1090
+ )
1091
+ def _on_trial_rich(study, trial):
1092
+ _on_trial_end(study, trial)
1093
+ progress.update(task_id, advance=1,
1094
+ last=_state["last"],
1095
+ best=max(_state["best"], 0.0),
1096
+ pruned=_state["pruned"])
1097
+ with progress:
1098
+ task_id = progress.add_task(
1099
+ "sweep", total=args.trials,
1100
+ last=0.0, best=0.0, pruned=0,
1101
+ )
1102
+ study.optimize(objective, n_trials=args.trials,
1103
+ callbacks=[_on_trial_rich],
1104
+ show_progress_bar=False)
1105
+ elif _has_tqdm:
1106
+ import tqdm
1107
+ pbar = tqdm.tqdm(total=args.trials, unit="trial")
1108
+ def _on_trial_tqdm(study, trial):
1109
+ _on_trial_end(study, trial)
1110
+ pbar.update(1)
1111
+ pbar.set_postfix(score=f"{_state['last']:.5f}",
1112
+ best=f"{_state['best']:.5f}",
1113
+ pruned=_state["pruned"])
1114
+ study.optimize(objective, n_trials=args.trials,
1115
+ callbacks=[_on_trial_tqdm], show_progress_bar=False)
1116
+ pbar.close()
1117
+ else:
1118
+ study.optimize(objective, n_trials=args.trials,
1119
+ callbacks=[_on_trial_end], show_progress_bar=False)
1120
+ except KeyboardInterrupt:
1121
+ print("\n[!] Interrotto — risultati parziali salvati.")
1122
+
1123
+ elapsed = time.time() - t0
1124
+ n_done = sum(1 for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE)
1125
+ n_prune = sum(1 for t in study.trials if t.state == optuna.trial.TrialState.PRUNED)
1126
+ print(f"\n Completati: {n_done} | Pruned: {n_prune}"
1127
+ f" | Tempo: {elapsed/60:.1f} min"
1128
+ f" | Media: {elapsed/max(n_done+n_prune,1):.1f} s/trial")
1129
+
1130
+ print_report(study, top_n=args.top)
1131
+ save_csv(study)
1132
+ print("\nDone.")
1133
+
1134
+
1135
+ if __name__ == "__main__":
1136
+ main()
run_smart_sweep_old.py ADDED
@@ -0,0 +1,1367 @@
1
+ """
2
+ run_smart_sweep.py — S-SPADE · Bayesian parameter search (v2)
3
+ ===================================================================
4
+
5
+ GROUND-TRUTH PIPELINE (Case 1: threshold-based limiter)
+ ---------------------------------------------------------
+ The synthetic limiter is threshold-based:
+ - Original normalized to 0 dBFS peak
+ - Limiter: acts only on peaks above the threshold → output max peak ≈ −threshold_db
+ - The BODY of the signal (perceived loudness) stays unchanged by definition
+ - NO gain is applied to the limited signal after processing
+
+ Alignment for the residual computation:
+ Original and limited are already on the same scale (equal loudness, different peaks).
+ No LUFS / RMS normalization is needed or correct.
+
+ GT_res = original_0dBFS − limited (identical scales)
+ res_iter = spade_output − limited (same)
+
+ Both are then normalized to RESIDUAL_DBFS peak ONLY to make files with
+ different absolute levels comparable; this does not alter the logic.
+
+ Ideal metric:
+ GT_res ≡ res_iter → cosine_sim = 1.0 → difference = −∞ dB
+
+ Optimizer: Optuna TPE (Bayesian) + MedianPruner
+ Storage: SQLite (resumable with --resume)
+ Corpus: all drum samples in Kicks / Snares / Perc / Tops
+
+ DEPENDENCIES
+ ------------
+ pip install numpy scipy soundfile optuna rich
+ (pyloudnorm NOT required)
+
+ USAGE
+ -----
+ python run_smart_sweep.py # 200 trials
+ python run_smart_sweep.py --trials 50 # quick test
+ python run_smart_sweep.py --resume # resume from DB
+ python run_smart_sweep.py --report # results only
+ python run_smart_sweep.py --base-dir /path/SPADE # custom folder
+ """
43
+
44
+ import argparse
45
+ import logging
46
+ import sys
47
+ import time
48
+ import warnings
49
+ from pathlib import Path
50
+ from typing import Dict, List, Optional
51
+
52
+ import numpy as np
53
+ import scipy.signal as sig
54
+ import soundfile as sf
55
+
56
+ logging.getLogger("optuna").setLevel(logging.WARNING)
57
+
58
+ # ── optuna ───────────────────────────────────────────────────────────────────
59
+ try:
60
+ import optuna
61
+ from optuna.samplers import TPESampler
62
+ from optuna.pruners import MedianPruner
63
+ _HAS_OPTUNA = True
64
+ except ImportError:
65
+ _HAS_OPTUNA = False
66
+ warnings.warn("optuna non trovato — pip install optuna")
67
+
68
+ # ── rich ─────────────────────────────────────────────────────────────────────
69
+ try:
70
+ from rich.console import Console
71
+ from rich.table import Table
72
+ _console = Console()
73
+ _HAS_RICH = True
74
+ except ImportError:
75
+ _HAS_RICH = False
76
+ _console = None
77
+
78
+ # ── spade_declip ─────────────────────────────────────────────────────────────
79
+ try:
80
+ from spade_declip_v11 import declip, DeclipParams
81
+ _HAS_SPADE = True
82
+ except ImportError:
83
+ _HAS_SPADE = False
84
+ warnings.warn("spade_declip_v11.py non trovato")
85
+
86
+ # =============================================================================
87
+ # CONFIG
88
+ # =============================================================================
89
+
90
+ DRUM_DIRS = ["Kicks", "Snares", "Perc", "Tops"]
91
+
92
+ # ── Synthetic limiter ─────────────────────────────────────────────────────────
+ # Case 1: threshold-based.
+ # Original @ 0 dBFS peak → the limiter acts on peaks > threshold →
+ # output max peak ≈ −LIMITER_THRESHOLD_DB dBFS, loudness unchanged.
+ # No gain is applied to the limited signal afterwards.
+ LIMITER_THRESHOLD_DB = 1.5 # dB below the ceiling (positive)
+ LIMITER_RELEASE_MS = 80.0 # release of the synthetic limiter (ms)
+ # attack = 1 sample → true brickwall
+
+ # Residual normalization: ONLY for cross-file comparability.
+ # Scales GT and iter identically, so it does not alter the comparison.
+ RESIDUAL_DBFS = -3.0
+
+ # ── Background pink noise ─────────────────────────────────────────────────────
+ # Simulates a musical bed under the drum transient.
+ # It is mixed into the sample (already at 0 dBFS peak) BEFORE the limiter.
+ # This ensures that:
+ # - the limiter acts on a realistic drum + music-background signal
+ # - SPADE receives the same mix and must work under realistic conditions
+ # - GT_res = (drum+noise) − limiter(drum+noise) reflects the real situation
+ # Level relative to the drum sample's peak. −20 dB = a bed well below the
+ # transient, audible but not dominant (like a kick over a drum loop).
+ PINK_NOISE_LEVEL_DB = -20.0 # dB rel. to the drum peak (negative = below)
115
+
116
+ # Optuna
117
+ STUDY_NAME = "spade_smart_v2"
118
+ OUT_CSV = "smart_sweep_results.csv"
119
+
120
+ # FIXED SPADE solver parameters (invariant across all trials)
121
+ FIXED_SOLVER = dict(
122
+ algo = "sspade",
123
+ frame = "rdft",
124
+ mode = "soft",
125
+ s = 1,
126
+ r = 1,
127
+ n_jobs = 1,
128
+ verbose = False,
129
+ show_progress = False,
130
+ use_gpu = True,
131
+ # multiband and macro_expand are part of the search space
132
+ )
133
+
134
+ # Multiband crossover (fixed, for comparability across trials)
+ # 250 Hz separates: LF = body/punch of the kick | HF = transient/attack
136
+ BAND_CROSSOVER_HZ = 250.0
137
+
138
+ # =============================================================================
139
+ # HELPERS
140
+ # =============================================================================
141
+
142
+ def ensure_2d(a: np.ndarray) -> np.ndarray:
143
+ return a[:, None] if a.ndim == 1 else a
144
+
145
+
146
+ def normalize_to_0dBFS(a: np.ndarray) -> np.ndarray:
147
+ """Scala a 0 dBFS peak — usato solo sull'originale come riferimento comune."""
148
+ pk = np.max(np.abs(a))
149
+ return a / pk if pk > 1e-12 else a
150
+
151
+
152
+ def normalize_peak(a: np.ndarray, target_dbfs: float) -> np.ndarray:
153
+ """
154
+ Scala a target_dbfs dBFS peak.
155
+ Usato SOLO sui residual per comparabilità cross-file;
156
+ non altera la logica perché GT e iter vengono scalati identicamente.
157
+ """
158
+ pk = np.max(np.abs(a))
159
+ return a * (10 ** (target_dbfs / 20.0) / pk) if pk > 1e-12 else a
160
+
161
+
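The dB-to-linear scaling used by `normalize_peak` can be exercised in isolation. This is a minimal sketch with a local copy of the same arithmetic (`peak_normalize` is an illustrative name, not part of this script):

```python
import numpy as np

def peak_normalize(a: np.ndarray, target_dbfs: float) -> np.ndarray:
    # Same math as normalize_peak: scale so max|a| lands on the dBFS target
    pk = np.max(np.abs(a))
    return a * (10 ** (target_dbfs / 20.0) / pk) if pk > 1e-12 else a

x = np.array([0.1, -0.5, 0.25])
y = peak_normalize(x, -3.0)
# −3 dBFS peak corresponds to 10^(−3/20) ≈ 0.708 linear
print(round(float(np.max(np.abs(y))), 3))   # → 0.708
```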
162
+ def generate_pink_noise(n_samples: int, n_channels: int, rng: np.random.Generator) -> np.ndarray:
163
+ """
164
+ Genera rumore rosa (1/f) tramite filtro IIR di Voss-McCartney (approssimazione
165
+ a 5 poli, accurata entro ±1 dB nel range 20 Hz – 20 kHz).
166
+
167
+ Output: shape (n_samples, n_channels), RMS normalizzato a 1.0 (prima
168
+ del mix-in con PINK_NOISE_LEVEL_DB, che controlla il livello finale).
169
+
170
+ Algoritmo: rumore bianco filtrato con H(z) = 1 / A(z) dove i coefficienti
171
+ sono ottimizzati per approssimare una densità spettrale 1/f.
172
+ """
173
+ # Coefficienti del filtro IIR a 5 poli (Voss approssimazione)
174
+ # Poli reali, tutti stabili (|p| < 1)
175
+ b = np.array([0.049922035, -0.095993537, 0.050612699, -0.004408786])
176
+ a = np.array([1.0, -2.494956002, 2.017265875, -0.522189400])
177
+
178
+ out = np.empty((n_samples, n_channels))
179
+ for c in range(n_channels):
180
+ white = rng.standard_normal(n_samples)
181
+ pink = sig.lfilter(b, a, white)
182
+ rms = np.sqrt(np.mean(pink ** 2))
183
+ out[:, c] = pink / (rms + 1e-12) # RMS = 1.0
184
+
185
+ return out
186
+
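As a sanity check on those coefficients: pink noise has amplitude ∝ 1/√f, so the filter's magnitude should drop by a factor of ≈ 1/√2 ≈ 0.71 per octave inside the audio band. A hedged sketch using `scipy.signal.freqz` (44.1 kHz sample rate assumed for illustration):

```python
import numpy as np
from scipy.signal import freqz

# Same coefficients as generate_pink_noise above
b = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
a = [1.0, -2.494956002, 2.017265875, -0.522189400]

fs = 44100.0
# Evaluate the response at 1 kHz and one octave up
w, h = freqz(b, a, worN=[1000.0, 2000.0], fs=fs)
ratio = float(np.abs(h[1]) / np.abs(h[0]))
print(round(ratio, 2))   # expected near 1/sqrt(2) ≈ 0.71
```

If the ratio strayed far from 0.71, the spectrum would not be pink; this kind of spot check is cheap insurance when copying filter coefficients.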
187
+
188
+ def mix_pink_noise(
189
+ audio_0dBFS: np.ndarray,
190
+ sr: int,
191
+ level_db: float,
192
+ rng: np.random.Generator,
193
+ ) -> np.ndarray:
194
+ """
195
+ Mixa rumore rosa nel segnale a un livello relativo al suo peak.
196
+
197
+ level_db < 0 → il rumore è sotto il peak del drum (es. −20 dB)
198
+ Il rumore dura quanto il sample; se il sample è stereo, il rumore è stereo
199
+ (canali indipendenti → decorrelato come un vero fondo musicale).
200
+
201
+ Il segnale in uscita può superare 0 dBFS di qualche frazione di dB: è
202
+ corretto, il limiter che segue si occupa di riportarlo sotto la soglia.
203
+ """
204
+ audio = ensure_2d(audio_0dBFS)
205
+ N, C = audio.shape
206
+
207
+ noise = generate_pink_noise(N, C, rng) # RMS = 1.0 per channel
+ # Scale the noise to the desired level relative to the drum peak
+ peak = np.max(np.abs(audio))
+ gain = peak * (10 ** (level_db / 20.0)) # absolute linear gain
+ mixed = audio + noise * gain
+ # Do NOT normalize here: normalization to 0 dBFS happens in build_corpus
+ # right after, on the whole mix (drum + noise), before any other op.
+ return mixed[:, 0] if audio_0dBFS.ndim == 1 else mixed
215
+
216
+
217
+ # =============================================================================
218
+ # LIMITER SINTETICO (Case 1 — threshold-based, brickwall, 1-campione attack)
219
+ # =============================================================================
220
+
221
+ def apply_brickwall_limiter(
222
+ audio_0dBFS: np.ndarray,
223
+ sr: int,
224
+ threshold_db: float = LIMITER_THRESHOLD_DB,
225
+ release_ms: float = LIMITER_RELEASE_MS,
226
+ ) -> np.ndarray:
227
+ """
228
+ Brickwall limiter threshold-based.
229
+
230
+ Input: audio_0dBFS — già a 0 dBFS peak, shape (N,) o (N, C)
231
+ Output: segnale limitato, stessa shape — NON boosted, NON clippato
232
+
233
+ Gain envelope:
234
+ se |x[n]| > threshold_lin → target_gain = threshold_lin / |x[n]|
235
+ altrimenti → target_gain = 1.0
236
+ Attack : istantaneo (1 campione, true brickwall)
237
+ Release: esponenziale con costante release_ms
238
+
239
+ Post-processing: NESSUNO.
240
+ Il segnale in uscita ha max peak ≈ −threshold_db dBFS.
241
+ La loudness percepita è invariata rispetto all'input.
242
+ """
243
+ thr_lin = 10 ** (-abs(threshold_db) / 20.0)
244
+ rc = np.exp(-1.0 / max(release_ms * sr / 1000.0, 1e-9))
245
+
246
+ audio = ensure_2d(audio_0dBFS).copy()
247
+ N, C = audio.shape
248
+ out = np.empty_like(audio)
249
+
250
+ for c in range(C):
251
+ ch = audio[:, c]
252
+ env = 1.0
253
+ g = np.empty(N)
254
+ for n in range(N):
255
+ pk = abs(ch[n])
256
+ target = thr_lin / pk if pk > thr_lin else 1.0
257
+ # instant attack when the gain drops, exponential release when it rises
258
+ env = target if target < env else rc * env + (1.0 - rc) * target
259
+ g[n] = env
260
+ out[:, c] = ch * g
261
+
262
+ # Restituisce stessa shape dell'input
263
+ return out[:, 0] if audio_0dBFS.ndim == 1 else out
264
+
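The one-pole gain recursion above can be seen in isolation on a single over-threshold peak. This is a hedged sketch (a stripped-down single-channel copy of the envelope logic; `limiter_gain` and its defaults are illustrative, not part of the script):

```python
import numpy as np

def limiter_gain(x: np.ndarray, thr: float = 0.8414,
                 release_smp: float = 10.0) -> np.ndarray:
    """Gain envelope only: instant attack, one-pole exponential release."""
    rc = np.exp(-1.0 / release_smp)          # release coefficient per sample
    env, g = 1.0, np.empty_like(x)
    for n, pk in enumerate(np.abs(x)):
        target = thr / pk if pk > thr else 1.0
        # attack: jump straight down; release: smooth back toward 1.0
        env = target if target < env else rc * env + (1.0 - rc) * target
        g[n] = env
    return g

# One over-threshold sample followed by quiet material
x = np.array([0.1, 1.0, 0.1, 0.1, 0.1])
g = limiter_gain(x)
assert g[1] == 0.8414            # attack is instantaneous at the peak
assert g[1] < g[2] < g[3] < 1.0  # gain recovers exponentially afterwards
```

The asymmetry (instant down, smoothed up) is what makes this a true brickwall: no sample ever exceeds the threshold, while the release avoids audible gain pumping.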
265
+
266
+ # =============================================================================
267
+ # COSINE SIMILARITY TF
268
+ # =============================================================================
269
+
270
+ def cosine_sim_tf(
271
+ gt: np.ndarray,
272
+ est: np.ndarray,
273
+ sr: int,
274
+ win_samples: int = 1024,
275
+ hop_samples: int = 256,
276
+ n_bands: int = 12,
277
+ ) -> float:
278
+ """
279
+ Similarità coseno media su micro-finestre tempo-frequenziali.
280
+ Input: entrambi già a RESIDUAL_DBFS peak.
281
+ Output: scalare in [0, 1]. Target ideale = 1.0.
282
+ """
283
+ L = min(gt.shape[0], est.shape[0])
284
+ g = (gt[:L, 0] if gt.ndim == 2 else gt[:L]).copy()
285
+ e = (est[:L, 0] if est.ndim == 2 else est[:L]).copy()
286
+
287
+ win = min(win_samples, max(32, L // 4))
288
+ hop = min(hop_samples, win // 2)
289
+
290
+ if L < win or win < 32:
291
+ denom = np.linalg.norm(g) * np.linalg.norm(e) + 1e-12
292
+ return float(np.dot(g, e) / denom)
293
+
294
+ _, _, Zg = sig.stft(g, fs=sr, window="hann",
295
+ nperseg=win, noverlap=win - hop,
296
+ boundary=None, padded=False)
297
+ _, _, Ze = sig.stft(e, fs=sr, window="hann",
298
+ nperseg=win, noverlap=win - hop,
299
+ boundary=None, padded=False)
300
+
301
+ n_freqs, n_frames = Zg.shape
302
+ if n_frames == 0:
303
+ return float(np.dot(g, e) / (np.linalg.norm(g) * np.linalg.norm(e) + 1e-12))
304
+
305
+ edges = np.unique(np.round(
306
+ np.logspace(0, np.log10(max(n_freqs, 2)), min(n_bands, n_freqs) + 1)
307
+ ).astype(int))
308
+ edges = np.clip(edges, 0, n_freqs)
309
+
310
+ sims = []
311
+ for i in range(len(edges) - 1):
312
+ f0, f1 = int(edges[i]), int(edges[i + 1])
313
+ if f1 <= f0:
314
+ continue
315
+ Mg = np.abs(Zg[f0:f1, :])
316
+ Me = np.abs(Ze[f0:f1, :])
317
+ dot = np.sum(Mg * Me, axis=0)
318
+ norm_g = np.sqrt(np.sum(Mg ** 2, axis=0)) + 1e-12
319
+ norm_e = np.sqrt(np.sum(Me ** 2, axis=0)) + 1e-12
320
+ sims.extend((dot / (norm_g * norm_e)).tolist())
321
+
322
+ return float(np.mean(sims)) if sims else 0.0
323
+
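The metric's behavior at its extremes is easy to verify with the plain time-domain cosine (the quantity the TF version averages per band and frame). A minimal self-contained check:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Plain cosine similarity; the TF metric averages this per band/frame
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
res = rng.standard_normal(4096)

print(round(cosine_sim(res, res), 4))    # → 1.0  (identical residuals)
print(round(cosine_sim(res, -res), 4))   # → -1.0 (inverted residual)
```

So a sweep score of 1.0 means the recovered residual matches the ground truth in shape, regardless of absolute level (both are peak-normalized first).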
324
+
325
+ # =============================================================================
326
+ # CORPUS
327
+ # =============================================================================
328
+
329
+ def build_corpus(base_dir: Path, max_files: Optional[int] = None) -> List[Dict]:
330
+ """
331
+ Per ogni drum sample:
332
+ 1. Carica e normalizza a 0 dBFS peak (riferimento comune cross-file)
333
+ 2. Mixa rumore rosa a PINK_NOISE_LEVEL_DB rel. al peak ← NUOVO
334
+ Il mix avviene in float (può temporaneamente superare 0 dBFS)
335
+ 3. Normalizza il mix (drum + noise) a 0 dBFS peak
336
+ Riferimento comune prima di tutta la pipeline successiva
337
+ 4. Applica limiter sintetico su (drum + noise) normalizzato → limited
338
+ 4. GT_res_raw = (drum + noise) − limited (stessa scala, nessun gain)
339
+ 5. Scarta file dove il limiter non interviene
340
+ 6. Normalizza GT_res a RESIDUAL_DBFS (solo comparabilità cross-file)
341
+
342
+ Il rumore è riproducibile: ogni file usa un seed deterministico derivato
343
+ dal suo indice nel corpus, così i trial sono comparabili tra loro.
344
+ """
345
+ corpus = []
346
+ extensions = {".wav", ".flac", ".aif", ".aiff"}
347
+ file_index = 0 # used for the deterministic noise seed
348
+
349
+ for folder in DRUM_DIRS:
350
+ d = base_dir / folder
351
+ if not d.exists():
352
+ print(f" [WARN] Cartella non trovata: {d}")
353
+ continue
354
+ for f in sorted(d.glob("*")):
355
+ if f.suffix.lower() not in extensions:
356
+ continue
357
+ try:
358
+ audio, sr = sf.read(str(f), always_2d=True)
359
+ audio = audio.astype(float)
360
+ except Exception as exc:
361
+ print(f" [WARN] {f.name}: {exc}")
362
+ continue
363
+
364
+ if audio.shape[0] < 64:
365
+ continue
366
+
367
+ # 1. 0 dBFS peak
368
+ orig = normalize_to_0dBFS(audio)
369
+
370
+ # 2. Mix in pink noise, deterministic seed for reproducibility
371
+ rng = np.random.default_rng(seed=file_index)
372
+ orig_with_noise = ensure_2d(mix_pink_noise(orig, sr,
373
+ PINK_NOISE_LEVEL_DB, rng))
374
+ file_index += 1
375
+
376
+ # 3. Normalize the mix to 0 dBFS peak: common reference before the
+ # rest of the pipeline. The float mix may have exceeded 0 dBFS;
+ # this normalization removes the issue before the limiter.
379
+ orig_with_noise = ensure_2d(normalize_to_0dBFS(orig_with_noise))
380
+
381
+ # 4. Synthetic limiter on (drum + noise) @0dBFS, no gain afterwards
382
+ limited = ensure_2d(apply_brickwall_limiter(orig_with_noise, sr))
383
+
384
+ # 5. Raw residual: same scale, zero adjustments
385
+ gt_res_raw = orig_with_noise - limited
386
+
387
+ # 6. Check that the limiter actually acted
388
+ if np.max(np.abs(gt_res_raw)) < 1e-6:
389
+ print(f" [SKIP] {f.name} — picco sotto la soglia, limiter inattivo")
390
+ continue
391
+
392
+ # 7. Normalize to RESIDUAL_DBFS only for cross-file comparability
393
+ gt_res = normalize_peak(gt_res_raw, RESIDUAL_DBFS)
394
+
395
+ corpus.append({
396
+ "file" : f.name,
397
+ "sr" : sr,
398
+ "limited" : limited, # input a SPADE = drum + noise + limiter
399
+ "gt_res" : gt_res, # target residual
400
+ })
401
+
402
+ if max_files and len(corpus) >= max_files:
403
+ return corpus
404
+
405
+ return corpus
406
+
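The reproducibility claim in the docstring rests on NumPy's seeded generators: the same seed always yields the same noise, so every trial sees an identical corpus. A one-line illustration:

```python
import numpy as np

# Per-file deterministic seed → bit-identical noise across trials
n1 = np.random.default_rng(seed=7).standard_normal(256)
n2 = np.random.default_rng(seed=7).standard_normal(256)
assert np.array_equal(n1, n2)
```

This is why `file_index` (and not, say, time-based entropy) drives the seed: two sweep runs over the same folder produce the same `limited` / `gt_res` pairs.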
407
+
408
+ # =============================================================================
409
+ # VALUTAZIONE SINGOLO FILE
410
+ # =============================================================================
411
+
412
+ def evaluate_one(item: Dict, params: dict) -> Optional[float]:
413
+ """
414
+ Esegue SPADE su limited, calcola il residual e lo confronta con GT.
415
+
416
+ params contiene parametri SPADE puri + flag di alto livello:
417
+ multiband (bool) -- split LF/HF, elabora separatamente
418
+ macro_expand (bool) -- envelope pre-pass per recupero corpo LF
419
+ macro_ratio (float) -- rapporto espansione (1.0 = bypass)
420
+ lf_delta_db (float) -- delta_db per banda LF (<= BAND_CROSSOVER_HZ)
421
+ il delta_db standard e' usato per la banda HF
422
+ """
423
+ try:
424
+ sr = item["sr"]
425
+ limited = item["limited"].copy()
426
+ gt_res = item["gt_res"]
427
+
428
+ # Estrai flag di alto livello (non sono parametri DeclipParams diretti)
429
+ p2 = dict(params) # copia per non mutare l'originale
430
+ multiband = p2.pop("multiband", False)
431
+ macro_expand = p2.pop("macro_expand", False)
432
+ macro_ratio = p2.pop("macro_ratio", 1.0)
433
+ lf_delta_db = p2.pop("lf_delta_db", p2.get("delta_db", 1.5))
434
+
435
+ spade_kw = dict(
436
+ multiband = multiband,
437
+ macro_expand = macro_expand,
438
+ macro_ratio = macro_ratio if macro_expand else 1.0,
439
+ macro_release_ms = 200.0,
440
+ macro_attack_ms = 10.0,
441
+ )
442
+ if multiband:
443
+ spade_kw["band_crossovers"] = (BAND_CROSSOVER_HZ,)
444
+ spade_kw["band_delta_db"] = (lf_delta_db, p2["delta_db"])
445
+
446
+ p = DeclipParams(sample_rate=sr, **FIXED_SOLVER, **p2, **spade_kw)
447
+ fixed, _ = declip(limited, p)
448
+ fixed_2d = ensure_2d(fixed)
449
+
450
+ # Residual generato — stessa scala dell'input, nessun gain
451
+ res_raw = fixed_2d - limited
452
+ res_iter = normalize_peak(res_raw, RESIDUAL_DBFS)
453
+
454
+ return cosine_sim_tf(gt_res, res_iter, sr)
455
+
456
+ except Exception as exc:
457
+ warnings.warn(f"evaluate_one ({item['file']}): {exc}")
458
+ return None
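The score returned above comes from `cosine_sim_tf`, defined earlier in the script; the debug export further down recomputes a plain time-domain variant inline. As a self-contained illustration of the metric (the time-frequency version is assumed to apply the same formula per STFT frame), a sketch:

```python
import numpy as np

def cosine_sim_td(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D signals; 1.0 = identical shape."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    # The 1e-12 term guards against division by zero on silent inputs,
    # mirroring the inline computation in debug_export below.
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(np.dot(a, b) / denom)

t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2 * np.pi * 5 * t)
print(round(cosine_sim_td(s, 0.5 * s), 4))  # scale-invariant → 1.0
print(round(cosine_sim_td(s, -s), 4))       # inverted signal → -1.0
```

Scale invariance is the reason both residuals can be peak-normalized to RESIDUAL_DBFS before scoring without biasing the metric.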
459
+
+
+# =============================================================================
+# OPTUNA OBJECTIVE
+# =============================================================================
+
+def make_objective(corpus: List[Dict]):
+    def objective(trial: "optuna.Trial") -> float:
+        # ── Core parameters ───────────────────────────────────────────────
+        delta_db = trial.suggest_float("delta_db", 1.0, 2.0, step=0.05)
+        win_exp  = trial.suggest_int  ("win_exp", 9, 11)
+        win      = 2 ** win_exp
+        hop_div  = trial.suggest_categorical("hop_div", [4, 8])
+        hop      = win // hop_div
+        rel_ms   = trial.suggest_float("release_ms", 10.0, 200.0, step=5.0)
+        gain_db  = trial.suggest_float("max_gain_db", 2.0, 12.0, step=0.5)
+        eps      = trial.suggest_categorical("eps", [0.03, 0.05, 0.1])
+        max_iter = trial.suggest_categorical("max_iter", [250, 500, 1000])
+
+        # ── Multiband + Macro expand ──────────────────────────────────────
+        # STATIC SPACE: lf_delta_db and macro_ratio are ALWAYS sampled by the
+        # TPE (fixed space) and then used conditionally at runtime. This
+        # avoids the fallback to RandomSampler that degraded multivariate TPE
+        # performance with dynamic search spaces.
+        multiband    = trial.suggest_categorical("multiband", [False, True])
+        macro_expand = trial.suggest_categorical("macro_expand", [False, True])
+
+        # Always sampled (fixed range), used only when the flag is True:
+        lf_delta_db = trial.suggest_float("lf_delta_db", 0.5, 2.0, step=0.05)
+        macro_ratio = trial.suggest_float("macro_ratio", 1.1, 2.0, step=0.05)
+
+        # If multiband=False, lf_delta_db is ignored in evaluate_one.
+        # If macro_expand=False, macro_ratio is ignored in evaluate_one.
+
+        params = dict(
+            delta_db      = delta_db,
+            window_length = win,
+            hop_length    = hop,
+            release_ms    = rel_ms,
+            max_gain_db   = gain_db,
+            eps           = eps,
+            max_iter      = max_iter,
+            # high-level flags (extracted in evaluate_one, not passed raw)
+            multiband     = multiband,
+            lf_delta_db   = lf_delta_db,
+            macro_expand  = macro_expand,
+            macro_ratio   = macro_ratio,
+        )
+
+        scores   = []
+        midpoint = len(corpus) // 2
+
+        for step, item in enumerate(corpus):
+            sc = evaluate_one(item, dict(params))  # dict() so params is not mutated
+            if sc is not None:
+                scores.append(sc)
+            if step == midpoint and scores:
+                trial.report(float(np.mean(scores)), step=step)
+                if trial.should_prune():
+                    raise optuna.TrialPruned()
+
+        if not scores:
+            return 0.0
+        mean_score = float(np.mean(scores))
+        trial.report(mean_score, step=len(corpus))
+        return mean_score
+
+    return objective
527
+
+
+# =============================================================================
+# REPORT + CSV
+# =============================================================================
+
+def print_report(study: "optuna.Study", top_n: int = 20):
+    trials = sorted(
+        [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE],
+        key=lambda t: t.value or 0, reverse=True,
+    )
+    if not trials:
+        print("No completed trials.")
+        return
+
+    if _HAS_RICH:
+        _console.rule("[bold cyan]BAYESIAN SWEEP RESULTS[/]")
+        tbl = Table(show_header=True, header_style="bold cyan", show_lines=False)
+        for col, w in [("#",4),("score",9),("ddb",6),("LFd",5),("win",6),
+                       ("hop",4),("rel",6),("gain",6),("eps",5),("iter",5),
+                       ("MB",3),("ME",3),("MR",5)]:
+            tbl.add_column(col, justify="right", width=w)
+        for rank, t in enumerate(trials[:top_n], 1):
+            p   = t.params
+            win = 2 ** p["win_exp"]
+            hop = win // p["hop_div"]
+            mb  = "Y" if p.get("multiband") else "n"
+            me  = "Y" if p.get("macro_expand") else "n"
+            sty = "bold green" if rank == 1 else ("yellow" if rank <= 3 else "")
+            tbl.add_row(
+                str(rank), f"{t.value:.5f}",
+                f"{p['delta_db']:.2f}",
+                f"{p.get('lf_delta_db', p['delta_db']):.2f}",
+                str(win), str(hop),
+                f"{p['release_ms']:.0f}", f"{p['max_gain_db']:.1f}",
+                str(p['eps']), str(p['max_iter']),
+                mb, me, f"{p.get('macro_ratio', 1.0):.2f}",
+                style=sty,
+            )
+        _console.print(tbl)
+    else:
+        hdr = (f"{'#':>3} {'score':>8} {'ddb':>5} {'LFd':>5} {'win':>5}"
+               f" {'hop':>4} {'rel':>6} {'gain':>5} {'eps':>5} {'iter':>5}"
+               f" {'MB':>3} {'ME':>3} {'MR':>5}")
+        print(hdr); print("-" * len(hdr))
+        for rank, t in enumerate(trials[:top_n], 1):
+            p   = t.params
+            win = 2 ** p["win_exp"]
+            hop = win // p["hop_div"]
+            mb  = "Y" if p.get("multiband") else "n"
+            me  = "Y" if p.get("macro_expand") else "n"
+            print(f"{rank:>3} {t.value:>8.5f} {p['delta_db']:>5.2f}"
+                  f" {p.get('lf_delta_db', p['delta_db']):>5.2f} {win:>5}"
+                  f" {hop:>4} {p['release_ms']:>6.0f} {p['max_gain_db']:>5.1f}"
+                  f" {str(p['eps']):>5} {p['max_iter']:>5}"
+                  f" {mb:>3} {me:>3} {p.get('macro_ratio', 1.0):>5.2f}")
+
+    best = trials[0]
+    p    = best.params
+    win  = 2 ** p["win_exp"]
+    hop  = win // p["hop_div"]
+    n_pruned = sum(1 for t in study.trials
+                   if t.state == optuna.trial.TrialState.PRUNED)
+
+    print("\n" + "═" * 60)
+    print("OPTIMAL CONFIG")
+    print("═" * 60)
+    print(f"""
+params = DeclipParams(
+    algo            = "sspade",
+    frame           = "rdft",
+    mode            = "soft",
+    delta_db        = {p['delta_db']:.2f},
+    window_length   = {win},
+    hop_length      = {hop},
+    release_ms      = {p['release_ms']:.1f},
+    max_gain_db     = {p['max_gain_db']:.1f},
+    eps             = {p['eps']},
+    max_iter        = {p['max_iter']},
+    sample_rate     = sr,
+    multiband       = {p.get('multiband', False)},
+    band_crossovers = ({BAND_CROSSOVER_HZ},),
+    band_delta_db   = ({p.get('lf_delta_db', p['delta_db']):.2f}, {p['delta_db']:.2f}),
+    macro_expand    = {p.get('macro_expand', False)},
+    macro_ratio     = {p.get('macro_ratio', 1.0):.2f},
+    n_jobs          = -1,
+    show_progress   = True,
+)""")
+    print(f"\n→ Best score  : {best.value:.5f}")
+    print(f"  Trials done : {len(trials)}")
+    print(f"  Pruned      : {n_pruned}")
619
+
+
+# =============================================================================
+# DEBUG EXPORT
+# =============================================================================
+
+# SPADE parameters used for debugging (best known from the previous grid sweep).
+# If an Optuna DB exists and has completed trials, they are replaced by the best.
+DEBUG_PARAMS = dict(
+    delta_db      = 1.5,
+    window_length = 1024,
+    hop_length    = 256,
+    release_ms    = 100.0,
+    max_gain_db   = 6.0,
+    eps           = 0.05,
+    max_iter      = 500,
+)
+
+
+def _pk_dbfs(a: np.ndarray) -> float:
+    pk = float(np.max(np.abs(a)))
+    return 20.0 * np.log10(pk) if pk > 1e-12 else -999.0
+
+
+def _rms_dbfs(a: np.ndarray) -> float:
+    rms = float(np.sqrt(np.mean(a.astype(float) ** 2)))
+    return 20.0 * np.log10(rms) if rms > 1e-12 else -999.0
+
+
+def _write_wav(path: Path, audio: np.ndarray, sr: int) -> None:
+    """Writes a float32 WAV without clipping. Warns if peak > 1.0."""
+    a2d = ensure_2d(audio).astype(np.float32)
+    pk  = float(np.max(np.abs(a2d)))
+    if pk > 1.0:
+        print(f"  [WARN] {path.name}: peak={pk:.4f} > 1.0 "
+              f"(+{20*np.log10(pk):.2f} dBFS) - float32, not clipped")
+    sf.write(str(path), a2d, sr, subtype="FLOAT")
656
+
+
+def debug_export(
+    corpus: list,
+    base_dir: Path,
+    out_dir: Path,
+    n_files: int,
+    spade_params: dict,
+) -> None:
+    """
+    Exports debug WAVs for the first n_files items of the corpus.
+
+    For each file, 6 float32 WAVs are written:
+        01_orig_with_noise   drum + pink noise, normalized to 0 dBFS peak
+                             (the signal before the limiter)
+        02_limited           output of the synthetic limiter (input to SPADE)
+        03_gt_residual       orig_with_noise - limited, @RESIDUAL_DBFS peak
+        04_spade_output      SPADE output (float32, may exceed 0 dBFS)
+        05_res_iter          spade_output - limited, @RESIDUAL_DBFS peak
+        06_diff_residuals    gt_residual - res_iter
+                             ideal = silence = -inf dB
+
+    Prints a table with peak dBFS and RMS dBFS for each track.
+
+    EXPECTED levels:
+        01  peak =  0.00 dBFS                 (normalized)
+        02  peak ~ -LIMITER_THRESHOLD_DB dBFS (e.g. -1.5 dBFS)
+        03  peak =  RESIDUAL_DBFS             (e.g. -3.0 dBFS)
+        04  peak may be > 0 dBFS              (recovered transient)
+        05  peak =  RESIDUAL_DBFS             (e.g. -3.0 dBFS)
+        06  peak << 0 dBFS                    (lower = SPADE closer to the GT)
+    """
+    out_dir.mkdir(parents=True, exist_ok=True)
+    items = corpus[:n_files]
+    col_w = max(len(it["file"]) for it in items) + 2
+
+    HDR = (f"  {'file':<{col_w}} {'track':<22}"
+           f" {'peak dBFS':>10} {'RMS dBFS':>9}  note")
+    SEP = "  " + "-" * (len(HDR) - 2)
+
+    print()
+    if _HAS_RICH:
+        _console.rule("[bold cyan]DEBUG EXPORT[/]")
+    else:
+        print("=" * 65)
+        print("DEBUG EXPORT")
+        print("=" * 65)
+
+    print(f"  Output dir    : {out_dir}")
+    print(f"  SPADE params  : delta_db={spade_params['delta_db']}"
+          f" win={spade_params['window_length']}"
+          f" hop={spade_params['hop_length']}"
+          f" rel={spade_params['release_ms']}ms"
+          f" gain={spade_params['max_gain_db']}dB")
+    print(f"  Files exported: {len(items)}")
+    print()
+    print("  Expected levels:")
+    print("    01_orig_with_noise : ~  0.00 dBFS (normalized before the limiter)")
+    print(f"    02_limited         : ~ {-LIMITER_THRESHOLD_DB:+.2f} dBFS (limiter output)")
+    print(f"    03_gt_residual     : = {RESIDUAL_DBFS:+.2f} dBFS (normalized)")
+    print("    04_spade_output    : > 0 dBFS possible (recovered transient)")
+    print(f"    05_res_iter        : = {RESIDUAL_DBFS:+.2f} dBFS (normalized)")
+    print("    06_diff_residuals  : << 0 dBFS (lower = more correct pipeline)")
+    print()
+    print(HDR)
721
+
+    diff_peaks = []
+
+    for file_index, item in enumerate(items):
+        sr      = item["sr"]
+        limited = item["limited"].copy()
+        gt_res  = item["gt_res"]
+        stem    = Path(item["file"]).stem
+
+        # ── Rebuild orig_with_noise ───────────────────────────────────────
+        # Re-runs the same pipeline as build_corpus with the identical seed
+        orig_with_noise = None
+        for folder in DRUM_DIRS:
+            candidate = base_dir / folder / item["file"]
+            if candidate.exists():
+                try:
+                    raw, _ = sf.read(str(candidate), always_2d=True)
+                    raw = raw.astype(float)
+                    rng = np.random.default_rng(seed=file_index)
+                    orig_0 = normalize_to_0dBFS(raw)
+                    mixed  = ensure_2d(mix_pink_noise(orig_0, sr,
+                                                      PINK_NOISE_LEVEL_DB, rng))
+                    orig_with_noise = ensure_2d(normalize_to_0dBFS(mixed))
+                except Exception:
+                    pass
+                break
+
+        if orig_with_noise is None:
+            # Fallback: rebuild from limited + gt_res (an approximation)
+            gt_scale = 10 ** (RESIDUAL_DBFS / 20.0)          # peak of gt_res
+            lim_peak = 10 ** (-LIMITER_THRESHOLD_DB / 20.0)  # expected peak of limited
+            gt_raw   = gt_res * (lim_peak / (gt_scale + 1e-12))
+            orig_with_noise = ensure_2d(normalize_to_0dBFS(limited + gt_raw))
+
+        # ── Run SPADE ─────────────────────────────────────────────────────
+        try:
+            p = DeclipParams(sample_rate=sr, **FIXED_SOLVER, **spade_params)
+            fixed, _ = declip(limited.copy(), p)
+            fixed_2d = ensure_2d(fixed)
+        except Exception as exc:
+            print(f"  [SPADE ERROR] {item['file']}: {exc}")
+            continue
+
+        # ── Iteration residual (RAW scale, no normalization) ──────────────
+        # IMPORTANT: the diff must be taken on the common scale BEFORE
+        # normalizing the two residuals; otherwise the independent
+        # normalizations destroy the relative-amplitude information.
+        #
+        # gt_res and res_raw are both derived from the same limited signal,
+        # so they share the same reference scale.
+        # gt_res was already normalized to RESIDUAL_DBFS in build_corpus;
+        # we have to bring it back to the raw scale for the comparison.
+        #
+        # Common scale: we use the peak of limited as the reference.
+        # limited peak ≈ 10^(-LIMITER_THRESHOLD_DB/20) → known absolute scale.
+        res_raw = fixed_2d - limited  # SPADE residual on the absolute scale
+
+        # gt_res_raw: rebuild from the normalized scale
+        #   gt_res = gt_res_raw / peak(gt_res_raw) * 10^(RESIDUAL_DBFS/20)
+        #   → gt_res_raw = gt_res * peak(gt_res_raw) / 10^(RESIDUAL_DBFS/20)
+        # Since peak(gt_res_raw) is not stored, we estimate it:
+        #   gt_res_raw ≈ orig_with_noise - limited (reconstructed)
+        gt_res_raw_approx = ensure_2d(orig_with_noise) - limited
+        L = min(gt_res_raw_approx.shape[0], res_raw.shape[0])
+
+        # ── Diff on the common (raw, unnormalized) scale ──────────────────
+        diff_raw = gt_res_raw_approx[:L] - res_raw[:L]
+
+        # ── Time-domain cosine similarity (scalar, on the L channel) ──────
+        g_flat = gt_res_raw_approx[:L, 0] if gt_res_raw_approx.ndim == 2 else gt_res_raw_approx[:L]
+        e_flat = res_raw[:L, 0] if res_raw.ndim == 2 else res_raw[:L]
+        cos_sim_td = float(
+            np.dot(g_flat, e_flat) /
+            (np.linalg.norm(g_flat) * np.linalg.norm(e_flat) + 1e-12)
+        )
+
+        # ── Theoretical floor of the diff due to the pink noise ───────────
+        # The limiter also attenuates the pink-noise peaks → that part is in
+        # GT_res but NOT in res_iter (SPADE does not recover it).
+        # We estimate how much noise sits in GT_res as a proxy for the floor.
+        noise_gain_lin = 10 ** (PINK_NOISE_LEVEL_DB / 20.0)
+        # Noise amplitude relative to limited: noise_gain ≈ the fraction of
+        # GT_res that SPADE cannot recover.
+        noise_floor_db = 20 * np.log10(noise_gain_lin + 1e-12) + RESIDUAL_DBFS
+        # In practice: the diff cannot go below noise_floor by construction.
+
+        # ── diff dBFS relative to GT_res (SNR-like) ───────────────────────
+        diff_rms_db = _rms_dbfs(diff_raw[:L])
+        gt_rms_db   = _rms_dbfs(gt_res_raw_approx[:L])
+        # diff_vs_gt: how large the diff is relative to the GT (0 dB = diff = GT)
+        diff_vs_gt_db = diff_rms_db - gt_rms_db  # more negative = better
+
+        # Normalize for the WAV export
+        res_iter  = normalize_peak(res_raw, RESIDUAL_DBFS)
+        diff_norm = normalize_peak(diff_raw, RESIDUAL_DBFS) if np.max(np.abs(diff_raw)) > 1e-12 else diff_raw
+
+        diff_peaks.append((diff_vs_gt_db, cos_sim_td, diff_rms_db, gt_rms_db))
+
+        # ── Track definitions ─────────────────────────────────────────────
+        tracks = [
+            ("01_orig_with_noise",
+             orig_with_noise,
+             "drum+noise @0dBFS (pipeline input)"),
+            ("02_limited",
+             limited,
+             f"limiter output (SPADE input) expected: ~{-LIMITER_THRESHOLD_DB:+.2f}dBFS"),
+            ("03_gt_residual",
+             gt_res,
+             f"GT residual @{RESIDUAL_DBFS:.0f}dBFS (includes noise attenuation)"),
+            ("04_spade_output",
+             fixed_2d,
+             "SPADE output (float32, may exceed 0dBFS)"),
+            ("05_res_iter",
+             res_iter,
+             f"SPADE residual @{RESIDUAL_DBFS:.0f}dBFS (sparse component only)"),
+            ("06_diff_residuals",
+             diff_norm,
+             f"GT - iter @{RESIDUAL_DBFS:.0f}dBFS "
+             f"cos_sim={cos_sim_td:.3f} diff/GT={diff_vs_gt_db:+.1f}dB "
+             f"noise_floor≈{noise_floor_db:+.1f}dB"),
+        ]
+
+        # ── Realistic threshold for the diff ──────────────────────────────
+        # The diff cannot be < noise_floor by construction of the corpus.
+        # Calibrate the [OK] threshold to noise_floor + 6 dB (margin).
+        ok_threshold   = noise_floor_db + 6.0   # typically around -17 dBFS
+        warn_threshold = ok_threshold + 10.0    # anything above is truly anomalous
+
+        # ── Print table + write WAVs ──────────────────────────────────────
+        print(SEP)
+        for track_name, audio, note in tracks:
+            pk  = _pk_dbfs(audio)
+            rms = _rms_dbfs(audio)
+
+            flag = ""
+            if track_name == "06_diff_residuals":
+                if diff_vs_gt_db < -12:  flag = "[OK] good convergence"
+                elif diff_vs_gt_db < -6: flag = "[~] partial convergence"
+                else:                    flag = "[WARN] diff large relative to the GT"
+
+            row = (f"  {item['file']:<{col_w}} {track_name:<22}"
+                   f" {pk:>+10.2f} {rms:>+9.2f}  {note} {flag}")
+
+            if _HAS_RICH:
+                color = ("green" if "[OK]" in flag else
+                         "yellow" if "[~]" in flag else
+                         "red" if "[WARN]" in flag else "")
+                colored_row = row.replace(flag, f"[{color or 'dim'}]{flag}[/]") if flag else row
+                _console.print(colored_row)
+            else:
+                print(row)
+
+            wav_path = out_dir / f"{stem}__{track_name}.wav"
+            _write_wav(wav_path, audio, sr)
875
+
+        # ── Per-band spectral analysis: LF vs HF ──────────────────────────
+        # Answers the question: how much residual is there in the low
+        # frequencies, and how much of it does SPADE recover?
+        #
+        # Bands:
+        #   Sub-bass : 20 – 80 Hz     (kick fundamental, body)
+        #   Bass     : 80 – 250 Hz    (kick body, tail)
+        #   Low-mid  : 250 – 800 Hz   (presence)
+        #   High-mid : 800 – 4000 Hz  (attack, click)
+        #   High     : 4k – 20k Hz    (air, snap)
+        #
+        # For each band, measure:
+        #   GT_energy   = energy of the GT residual (what the limiter removed)
+        #   iter_energy = energy recovered by SPADE
+        #   recovery %  = iter_energy / GT_energy × 100
+
+        def band_energy(audio_2d, sr, f_lo, f_hi):
+            """RMS energy in dB of a bandpass [f_lo, f_hi] Hz."""
+            mono = audio_2d[:, 0] if audio_2d.ndim == 2 else audio_2d
+            N = len(mono)
+            if N < 8:
+                return -999.0
+            # Butterworth bandpass (or lowpass/highpass at the edges)
+            nyq = sr / 2.0
+            lo  = max(f_lo / nyq, 1e-4)
+            hi  = min(f_hi / nyq, 0.9999)
+            if lo >= hi:
+                return -999.0
+            if lo < 1e-3:
+                b, a = sig.butter(4, hi, btype="low")
+            else:
+                b, a = sig.butter(4, [lo, hi], btype="band")
+            filtered = sig.filtfilt(b, a, mono)
+            return _rms_dbfs(filtered)
+
+        BANDS = [
+            ("Sub-bass ",   20,    80),
+            ("Bass     ",   80,   250),
+            ("Low-mid  ",  250,   800),
+            ("High-mid ",  800,  4000),
+            ("High     ", 4000, 20000),
+        ]
+
+        gt_mono = gt_res[:, 0] if gt_res.ndim == 2 else gt_res
+        ri_mono = res_iter[:, 0] if res_iter.ndim == 2 else res_iter
+
+        # Compare GT and iter on the same scale (undo the RESIDUAL_DBFS
+        # normalization so absolute energies are comparable)
+        gt_raw_for_bands   = gt_res_raw_approx
+        iter_raw_for_bands = res_raw
+
+        print()
+        band_hdr = f"    {'band':<12} {'GT_res RMS':>10} {'SPADE rec RMS':>13} {'recovery':>9}  {'limited?'}"
+        print(f"    Per-band spectral analysis - {item['file']}")
+        print(f"    {'─'*75}")
+        print(band_hdr)
+        print(f"    {'─'*75}")
+        for bname, f_lo, f_hi in BANDS:
+            gt_db   = band_energy(gt_raw_for_bands, sr, f_lo, f_hi)
+            iter_db = band_energy(iter_raw_for_bands, sr, f_lo, f_hi)
+            if gt_db < -60:
+                recovery_str = "   — (silence)"
+                flag_b = ""
+            else:
+                diff_b = iter_db - gt_db  # positive = SPADE exceeds the GT (over-recovery)
+                # recovery: 0 dB diff = perfect recovery, very negative = under-recovery
+                if diff_b > -3:
+                    flag_b = "OK"
+                elif diff_b > -9:
+                    flag_b = "~ partial"
+                else:
+                    flag_b = "!! under-recovery"
+                recovery_str = f"{diff_b:>+7.1f} dB  {flag_b}"
+            line = f"    {bname:<12} {gt_db:>+10.1f} {iter_db:>+13.1f}  {recovery_str}"
+            if _HAS_RICH:
+                color = "green" if "OK" in recovery_str else (
+                        "yellow" if "~" in recovery_str else (
+                        "red" if "!!" in recovery_str else "dim"))
+                _console.print(f"[{color}]{line}[/]")
+            else:
+                print(line)
+        print()
958
+
+    print(SEP)
+    print()
+    if diff_peaks:
+        vs_gt_vals = [d[0] for d in diff_peaks]
+        cos_vals   = [d[1] for d in diff_peaks]
+        avg_vs_gt   = float(np.mean(vs_gt_vals))
+        best_vs_gt  = float(np.min(vs_gt_vals))
+        worst_vs_gt = float(np.max(vs_gt_vals))
+        avg_cos     = float(np.mean(cos_vals))
+
+        noise_floor_db = 20 * np.log10(10 ** (PINK_NOISE_LEVEL_DB / 20.0) + 1e-12) + RESIDUAL_DBFS
+
+        print("  SUMMARY 06_diff_residuals:")
+        print(f"    diff/GT_rms mean  : {avg_vs_gt:>+7.2f} dB (0 dB = diff as large as the GT)")
+        print(f"    diff/GT_rms best  : {best_vs_gt:>+7.2f} dB")
+        print(f"    diff/GT_rms worst : {worst_vs_gt:>+7.2f} dB")
+        print(f"    cos_sim TD mean   : {avg_cos:>8.4f} (1.0 = identical)")
+        print()
+        print("  IMPORTANT NOTE:")
+        print(f"    The pink noise ({PINK_NOISE_LEVEL_DB} dB) is part of GT_res but")
+        print("    CANNOT be recovered by SPADE (it is not sparse).")
+        print(f"    Theoretical diff floor: ≈ {noise_floor_db:+.1f} dBFS - this is the")
+        print("    physical limit achievable with this corpus.")
+        print("    A diff/GT < -6 dB indicates good SPADE convergence.")
+        print()
+        if worst_vs_gt < -12:
+            verdict = "OK   Excellent convergence - SPADE recovers the transients well"
+        elif worst_vs_gt < -6:
+            verdict = "~    Good convergence - residue compatible with the noise floor"
+        else:
+            verdict = "INFO diff dominated by the pink noise - expected and correct behaviour"
+        print(f"  Verdict: {verdict}")
+    print(f"\n  WAVs written to : {out_dir}/")
+    print("  Format          : float32, no clipping (use an editor that supports >0dBFS)")
+    print("  Naming          : <stem>__<N>_<track>.wav")
994
+
+
+def save_csv(study: "optuna.Study"):
+    import csv
+    trials = sorted(
+        [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE],
+        key=lambda t: t.value or 0, reverse=True,
+    )
+    with open(OUT_CSV, "w", newline="") as f:
+        w = csv.writer(f)
+        w.writerow(["rank", "score", "delta_db", "lf_delta_db",
+                    "window_length", "hop_length", "release_ms", "max_gain_db",
+                    "eps", "max_iter", "multiband", "macro_expand", "macro_ratio"])
+        for rank, t in enumerate(trials, 1):
+            p   = t.params
+            win = 2 ** p["win_exp"]
+            hop = win // p["hop_div"]
+            w.writerow([
+                rank, round(t.value, 6),
+                p["delta_db"],
+                round(p.get("lf_delta_db", p["delta_db"]), 2),
+                win, hop,
+                p["release_ms"], p["max_gain_db"], p["eps"], p["max_iter"],
+                int(p.get("multiband", False)),
+                int(p.get("macro_expand", False)),
+                round(p.get("macro_ratio", 1.0), 2),
+            ])
+    print(f"\n  📄 CSV: {OUT_CSV}")
1022
+
+
+# =============================================================================
+# MAIN
+# =============================================================================
+
+def parse_args():
+    ap = argparse.ArgumentParser(description="Smart Bayesian sweep for S-SPADE v2")
+    ap.add_argument("--trials", type=int, default=200,
+                    help="Number of Optuna trials (default: 200)")
+    ap.add_argument("--resume", action="store_true",
+                    help="Load the existing study and add trials")
+    ap.add_argument("--report", action="store_true",
+                    help="Report only (no new trials)")
+    ap.add_argument("--base-dir", type=str, default=".",
+                    help="Root folder with Kicks/Snares/Perc/Tops")
+    ap.add_argument("--corpus-size", type=int, default=None,
+                    help="Limit the corpus to N files (None = all)")
+    ap.add_argument("--top", type=int, default=20,
+                    help="How many trials to show in the ranking (default: 20)")
+    ap.add_argument("--no-prune", action="store_true",
+                    help="Disable MedianPruner (slower but exhaustive)")
+    ap.add_argument("--debug-export", action="store_true",
+                    help="Export debug WAVs for the first N corpus files (no sweep)")
+    ap.add_argument("--debug-dir", type=str, default="debug_export",
+                    help="Output folder for debug WAVs (default: debug_export)")
+    ap.add_argument("--debug-n", type=int, default=10,
+                    help="How many files to export in debug mode (default: 10)")
+    return ap.parse_args()
1051
+
+
+def main():
+    args = parse_args()
+
+    missing = []
+    if not _HAS_OPTUNA: missing.append("optuna")
+    if not _HAS_SPADE:  missing.append("spade_declip_v11.py (in the same dir)")
+    if missing:
+        pip = [m for m in missing if not m.endswith(")")]
+        sys.exit("Missing:\n  pip install " + " ".join(pip)
+                 + ("\n  " + "\n  ".join(m for m in missing if m.endswith(")")) if any(m.endswith(")") for m in missing) else ""))
+
+    base_dir = Path(args.base_dir).resolve()
+    storage  = f"sqlite:///{STUDY_NAME}.db"
+    sampler  = TPESampler(seed=42, multivariate=True, warn_independent_sampling=False)
+    pruner   = (MedianPruner(n_startup_trials=10, n_warmup_steps=3)
+                if not args.no_prune else optuna.pruners.NopPruner())
+
+    if args.report:
+        try:
+            study = optuna.load_study(study_name=STUDY_NAME, storage=storage,
+                                      sampler=sampler, pruner=pruner)
+        except Exception:
+            sys.exit(f"No study found in {STUDY_NAME}.db")
+        print_report(study, top_n=args.top)
+        save_csv(study)
+        return
+
+    # ── Debug export ──────────────────────────────────────────────────────
+    if args.debug_export:
+        # Use the best trial's parameters if a DB exists, otherwise DEBUG_PARAMS
+        spade_params = dict(DEBUG_PARAMS)
+        try:
+            study = optuna.load_study(study_name=STUDY_NAME, storage=storage,
+                                      sampler=sampler, pruner=pruner)
+            completed = [t for t in study.trials
+                         if t.state == optuna.trial.TrialState.COMPLETE]
+            if completed:
+                best_t = max(completed, key=lambda t: t.value or 0)
+                p   = best_t.params
+                win = 2 ** p["win_exp"]
+                hop = win // p["hop_div"]
+                spade_params = dict(
+                    delta_db      = p["delta_db"],
+                    window_length = win,
+                    hop_length    = hop,
+                    release_ms    = p["release_ms"],
+                    max_gain_db   = p["max_gain_db"],
+                    eps           = p["eps"],
+                    max_iter      = p["max_iter"],
+                )
+                print(f"  [DEBUG] Using best trial #{best_t.number}"
+                      f" (score={best_t.value:.5f}) from the DB.")
+        except Exception:
+            print("  [DEBUG] DB not found - using default DEBUG_PARAMS.")
+
+        # Build the corpus (limited to debug_n files for speed)
+        corpus = build_corpus(base_dir, max_files=args.debug_n)
+        if not corpus:
+            sys.exit("Empty corpus. Check --base-dir.")
+        debug_export(
+            corpus       = corpus,
+            base_dir     = base_dir,
+            out_dir      = Path(args.debug_dir),
+            n_files      = args.debug_n,
+            spade_params = spade_params,
+        )
+        return
1120
+
+    # ── Corpus ────────────────────────────────────────────────────────────
+    print("\n" + "=" * 65)
+    print("CORPUS + SYNTHETIC LIMITER (Case 1 - threshold-based)")
+    print("=" * 65)
+    print(f"  Base dir   : {base_dir}")
+    print(f"  Threshold  : −{LIMITER_THRESHOLD_DB} dBFS")
+    print(f"  Release    : {LIMITER_RELEASE_MS} ms")
+    print("  Level align: NONE - loudness unchanged by construction")
+    print(f"  Pink noise : {PINK_NOISE_LEVEL_DB} dB rel. peak "
+          f"(simulates a musical bed under the transient)")
+
+    corpus = build_corpus(base_dir, max_files=args.corpus_size)
+    if not corpus:
+        sys.exit("Empty corpus. Check --base-dir and the folders.")
+
+    print(f"\n  ✓ {len(corpus)} files in the corpus\n")
+    col_w = max(len(item["file"]) for item in corpus) + 2
+    for item in corpus:
+        rms  = float(np.sqrt(np.mean(item["gt_res"] ** 2)))
+        peak = float(np.max(np.abs(item["gt_res"])))
+        print(f"  {item['file']:<{col_w}} sr={item['sr']} "
+              f"GT rms={rms:.4f} peak={peak:.4f}")
+
+    # ── Study ─────────────────────────────────────────────────────────────
+    print(f"\n{'='*65}")
+    print(f"BAYESIAN OPTIMIZATION - {args.trials} trials")
+    print(f"TPE (multivariate) + MedianPruner | storage: {STUDY_NAME}.db")
+    print(f"{'='*65}\n")
+
+    study = optuna.create_study(
+        study_name     = STUDY_NAME,
+        storage        = storage,
+        sampler        = sampler,
+        pruner         = pruner,
+        direction      = "maximize",
+        load_if_exists = True,
+    )
1158
+
+    # ── Progress bar (rich → tqdm → plain fallback) ───────────────────────
+    try:
+        from rich.progress import (
+            Progress, BarColumn, TextColumn,
+            TimeElapsedColumn, TimeRemainingColumn, MofNCompleteColumn,
+        )
+        _has_rich_progress = True
+    except ImportError:
+        _has_rich_progress = False
+
+    try:
+        import tqdm as _tqdm_mod
+        _has_tqdm = True
+    except ImportError:
+        _has_tqdm = False
+
+    # Shared state updated by the callback.
+    # Pre-populated with the trials already in the DB in case of --resume,
+    # so the progress bar shows the correct count from the start.
+    _existing_complete = [t for t in study.trials
+                          if t.state == optuna.trial.TrialState.COMPLETE]
+    _existing_pruned   = [t for t in study.trials
+                          if t.state == optuna.trial.TrialState.PRUNED]
+
+    if _existing_complete:
+        _best_existing = max(_existing_complete, key=lambda t: t.value or 0)
+        _init_best   = _best_existing.value or 0.0
+        _init_best_p = dict(_best_existing.params)
+        _init_last   = _init_best
+    else:
+        _init_best, _init_best_p, _init_last = float("-inf"), {}, float("-inf")
+
+    _state = {
+        "done":    len(_existing_complete),
+        "pruned":  len(_existing_pruned),
+        "best":    _init_best,
+        "best_p":  _init_best_p,
+        "last":    _init_last,
+        "t0":      time.time(),
+        "n_total": len(_existing_complete) + len(_existing_pruned) + args.trials,
+    }
1200
+
+    def _fmt_best(state: dict) -> str:
+        """Compact string with the parameters of the current best trial."""
+        bp = state["best_p"]
+        if not bp:
+            return "—"
+        win = 2 ** bp.get("win_exp", 10)
+        hop = win // bp.get("hop_div", 4)
+        return (f"δ={bp.get('delta_db',0):.2f} "
+                f"win={win} hop={hop} "
+                f"rel={bp.get('release_ms',0):.0f}ms "
+                f"gain={bp.get('max_gain_db',0):.1f}dB")
1212
+
+    # ── Rich progress bar ─────────────────────────────────────────────────
+    if _has_rich_progress:
+        progress = Progress(
+            TextColumn("[bold cyan]Trial[/] [cyan]{task.completed}/{task.total}[/]"),
+            BarColumn(bar_width=32),
+            MofNCompleteColumn(),
+            TextColumn(" score [green]{task.fields[last]:.5f}[/]"),
+            TextColumn(" best [bold green]{task.fields[best]:.5f}[/]"),
+            TextColumn(" [dim]pruned {task.fields[pruned]}[/]"),
+            TimeElapsedColumn(),
+            TextColumn("ETA"),
+            TimeRemainingColumn(),
+            refresh_per_second=4,
+            transient=False,
+        )
+        task_id = None  # created inside the context
+
+        def on_trial_end(study, trial):
+            fin = (trial.state == optuna.trial.TrialState.COMPLETE)
+            prn = (trial.state == optuna.trial.TrialState.PRUNED)
+            if fin:
+                _state["done"] += 1
+                _state["last"]  = trial.value or 0.0
+                if _state["last"] > _state["best"]:
+                    _state["best"]   = _state["last"]
+                    _state["best_p"] = dict(study.best_params)
+            elif prn:
+                _state["pruned"] += 1
+            progress.update(
+                task_id,
+                advance = 1,
+                last    = _state["last"],
+                best    = max(_state["best"], 0.0),
+                pruned  = _state["pruned"],
+            )
+
+        t0 = time.time()
+        try:
+            with progress:
+                task_id = progress.add_task(
+                    "sweep",
+                    total     = _state["n_total"],
+                    completed = _state["done"] + _state["pruned"],
+                    last      = max(_state["last"], 0.0),
+                    best      = max(_state["best"], 0.0),
+                    pruned    = _state["pruned"],
+                )
+                study.optimize(
+                    make_objective(corpus),
+                    n_trials          = args.trials,
+                    callbacks         = [on_trial_end],
+                    show_progress_bar = False,
+                )
+        except KeyboardInterrupt:
+            print("\n[!] Interrupted - partial results saved.")
+
+    # ── tqdm fallback ─────────────────────────────────────────────────────
+    elif _has_tqdm:
+        import tqdm
+        _already = _state["done"] + _state["pruned"]
+        pbar = tqdm.tqdm(
+            total = _state["n_total"],
1275
+ initial = _already,
1276
+ unit = "trial",
1277
+ bar_format = "{l_bar}{bar}| {n}/{total} [{elapsed}<{remaining}]",
1278
+ )
1279
+ if _already > 0:
1280
+ pbar.set_postfix(
1281
+ score = f"{max(_state['last'], 0.0):.5f}",
1282
+ best = f"{max(_state['best'], 0.0):.5f}",
1283
+ pruned = _state["pruned"],
1284
+ )
1285
+
1286
+ def on_trial_end(study, trial):
1287
+ fin = trial.state == optuna.trial.TrialState.COMPLETE
1288
+ prn = trial.state == optuna.trial.TrialState.PRUNED
1289
+ if fin:
1290
+ _state["done"] += 1
1291
+ _state["last"] = trial.value or 0.0
1292
+ if _state["last"] > _state["best"]:
1293
+ _state["best"] = _state["last"]
1294
+ _state["best_p"] = dict(study.best_params)
1295
+ elif prn:
1296
+ _state["pruned"] += 1
1297
+ pbar.update(1)
1298
+ pbar.set_postfix(
1299
+ score = f"{_state['last']:.5f}",
1300
+ best = f"{_state['best']:.5f}",
1301
+ pruned = _state["pruned"],
1302
+ )
1303
+
1304
+ t0 = time.time()
1305
+ try:
1306
+ study.optimize(
1307
+ make_objective(corpus),
1308
+ n_trials = args.trials,
1309
+ callbacks = [on_trial_end],
1310
+ show_progress_bar = False,
1311
+ )
1312
+ except KeyboardInterrupt:
1313
+ print("\n[!] Interrotto — risultati parziali salvati.")
1314
+ finally:
1315
+ pbar.close()
1316
+
1317
+ # ── Plain fallback ────────────────────────────────────────────────────────
1318
+ else:
1319
+ def on_trial_end(study, trial):
1320
+ fin = trial.state == optuna.trial.TrialState.COMPLETE
1321
+ prn = trial.state == optuna.trial.TrialState.PRUNED
1322
+ if fin:
1323
+ _state["done"] += 1
1324
+ _state["last"] = trial.value or 0.0
1325
+ if _state["last"] > _state["best"]:
1326
+ _state["best"] = _state["last"]
1327
+ _state["best_p"] = dict(study.best_params)
1328
+ elapsed = time.time() - _state["t0"]
1329
+ done_tot = _state["done"] + _state["pruned"]
1330
+ eta_s = (elapsed / done_tot) * (_state["n_total"] - done_tot) if done_tot else 0
1331
+ is_best = abs(_state["last"] - _state["best"]) < 1e-9
1332
+ bar_n = int(32 * done_tot / max(_state["n_total"], 1))
1333
+ bar = "█" * bar_n + "░" * (32 - bar_n)
1334
+ print(f"\r[{bar}] {done_tot}/{_state['n_total']}"
1335
+ f" {'★' if is_best else ' '}score={_state['last']:.5f}"
1336
+ f" best={_state['best']:.5f}"
1337
+ f" pruned={_state['pruned']}"
1338
+ f" ETA {eta_s/60:.1f}min ", end="", flush=True)
1339
+ elif prn:
1340
+ _state["pruned"] += 1
1341
+
1342
+ t0 = time.time()
1343
+ try:
1344
+ study.optimize(
1345
+ make_objective(corpus),
1346
+ n_trials = args.trials,
1347
+ callbacks = [on_trial_end],
1348
+ show_progress_bar = False,
1349
+ )
1350
+ except KeyboardInterrupt:
1351
+ print("\n[!] Interrotto — risultati parziali salvati.")
1352
+ print() # newline dopo la riga \r
1353
+
1354
+ elapsed = time.time() - t0
1355
+ n_done = sum(1 for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE)
1356
+ n_prune = sum(1 for t in study.trials if t.state == optuna.trial.TrialState.PRUNED)
1357
+ print(f"\n Completati: {n_done} | Pruned: {n_prune}"
1358
+ f" | Tempo totale: {elapsed/60:.1f} min"
1359
+ f" | Media: {elapsed/max(n_done+n_prune,1):.1f} s/trial")
1360
+
1361
+ print_report(study, top_n=args.top)
1362
+ save_csv(study)
1363
+ print("\nDone.")
1364
+
1365
+
1366
+ if __name__ == "__main__":
1367
+ main()
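The plain fallback's bar-fill and ETA arithmetic can be exercised in isolation. This is a minimal standalone sketch (the `bar_and_eta` helper is hypothetical, not part of the script); it mirrors the `bar_n` / `eta_s` formulas in `on_trial_end` above.

```python
# Minimal sketch of the plain-fallback progress arithmetic: bar fill and
# linear-extrapolation ETA from trial counts and elapsed time.
def bar_and_eta(done_tot: int, n_total: int, elapsed_s: float, width: int = 32):
    bar_n = int(width * done_tot / max(n_total, 1))
    bar = "█" * bar_n + "░" * (width - bar_n)
    # ETA: average seconds per finished trial times the trials left.
    eta_s = (elapsed_s / done_tot) * (n_total - done_tot) if done_tot else 0.0
    return bar, eta_s

bar, eta = bar_and_eta(done_tot=50, n_total=200, elapsed_s=100.0)
print(len(bar), bar.count("█"), eta)  # → 32 8 300.0
```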
run_smart_sweep_old2.py ADDED
@@ -0,0 +1,1400 @@
+ """
+ run_smart_sweep.py — S-SPADE · Bayesian parameter search (v2)
+ ===================================================================
+ 
+ GROUND-TRUTH PIPELINE (Case 1 — threshold-based limiter)
+ ---------------------------------------------------------
+ The synthetic limiter is threshold-based:
+     - Original normalized to 0 dBFS peak
+     - Limiter: acts only on peaks above the threshold → output max peak ≈ −threshold_db
+     - The BODY of the signal (perceived loudness) stays unchanged by definition
+     - NO gain is applied to the limited signal after processing
+ 
+ Alignment for the residual computation:
+     Original and limited are already on the same scale (equal loudness, different peaks).
+     No LUFS / RMS normalization is necessary or correct.
+ 
+     GT_res   = original_0dBFS − limited   (identical scales)
+     res_iter = spade_output − limited     (likewise)
+ 
+     Both are then peak-normalized to RESIDUAL_DBFS ONLY to make files with
+     different absolute levels comparable — it does not alter the logic.
+ 
+ Ideal metric:
+     GT_res ≡ res_iter → cosine_sim = 1.0 → difference = −∞ dB
+ 
+ Optimizer: Optuna TPE (Bayesian) + MedianPruner
+ Storage:   SQLite (resumable with --resume)
+ Corpus:    every drum sample in Kicks / Snares / Perc / Tops
+ 
+ DEPENDENCIES
+ ------------
+     pip install numpy scipy soundfile optuna rich
+     (pyloudnorm NOT required)
+ 
+ USAGE
+ -----
+     python run_smart_sweep.py                        # 200 trials
+     python run_smart_sweep.py --trials 50            # quick test
+     python run_smart_sweep.py --resume               # resume from DB
+     python run_smart_sweep.py --report               # results only
+     python run_smart_sweep.py --base-dir /path/SPADE # custom folder
+ """
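The docstring's ideal-metric claim (identical residuals give cosine similarity 1.0) can be checked with a toy sketch. `cos_sim` below is a plain full-signal cosine standing in for the script's windowed `cosine_sim_tf`, and the hard-clip stand-in for the limiter is an assumption for illustration only.

```python
import math

def cos_sim(a, b):
    # Plain cosine similarity between two equal-length sequences.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-12)

original = [0.0, 0.9, 1.0, 0.4, -0.8, -1.0, -0.3]
limited  = [max(-0.7, min(0.7, x)) for x in original]  # toy hard ceiling
gt_res   = [o - l for o, l in zip(original, limited)]  # what the limiter removed

# A perfect reconstruction regenerates exactly the GT residual:
print(round(cos_sim(gt_res, gt_res), 6))  # → 1.0
```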
+ 
+ import argparse
+ import logging
+ import sys
+ import time
+ import warnings
+ from pathlib import Path
+ from typing import Dict, List, Optional
+ 
+ import numpy as np
+ import scipy.signal as sig
+ import soundfile as sf
+ 
+ logging.getLogger("optuna").setLevel(logging.WARNING)
+ 
+ # ── optuna ───────────────────────────────────────────────────────────────────
+ try:
+     import optuna
+     from optuna.samplers import TPESampler
+     from optuna.pruners import MedianPruner
+     _HAS_OPTUNA = True
+ except ImportError:
+     _HAS_OPTUNA = False
+     warnings.warn("optuna not found — pip install optuna")
+ 
+ # ── rich ─────────────────────────────────────────────────────────────────────
+ try:
+     from rich.console import Console
+     from rich.table import Table
+     _console = Console()
+     _HAS_RICH = True
+ except ImportError:
+     _HAS_RICH = False
+     _console = None
+ 
+ # ── spade_declip ─────────────────────────────────────────────────────────────
+ try:
+     from spade_declip_v12 import declip, DeclipParams
+     _HAS_SPADE = True
+ except ImportError:
+     _HAS_SPADE = False
+     warnings.warn("spade_declip_v12.py not found")
+ 
+ # =============================================================================
+ # CONFIG
+ # =============================================================================
+ 
+ DRUM_DIRS = ["Kicks", "Snares", "Perc", "Tops"]
+ 
+ # ── Synthetic limiter ─────────────────────────────────────────────────────────
+ # Case 1: threshold-based.
+ # Original @ 0 dBFS peak → the limiter acts on peaks > threshold →
+ # output max peak ≈ −LIMITER_THRESHOLD_DB dBFS, loudness unchanged.
+ # The limited signal is NOT touched by any gain afterwards.
+ LIMITER_THRESHOLD_DB = 3.0      # dB below the ceiling (positive)
+ LIMITER_RELEASE_MS   = 80.0     # release of the synthetic limiter (ms)
+ # attack = 1 sample → true brickwall
+ 
+ # Residual normalization — ONLY for cross-file comparability.
+ # Scales both GT and iter identically, so it does not alter the comparison.
+ RESIDUAL_DBFS = -3.0
+ 
+ # ── Background pink noise ─────────────────────────────────────────────────────
+ # Simulates a musical bed under the drum transient.
+ # It is mixed into the sample (already at 0 dBFS peak) BEFORE the limiter.
+ # This ensures that:
+ #   - the limiter acts on a realistic drum + music-background signal
+ #   - SPADE receives the same mix and has to work in realistic conditions
+ #   - GT_res = (drum+noise) − limiter(drum+noise) reflects the real situation
+ # Level relative to the drum sample's peak. −20 dB = background well below
+ # the transient, audible but not dominant (like a kick over a drum loop).
+ PINK_NOISE_LEVEL_DB = -20.0     # dB rel. to the drum's peak (negative = below)
+ 
+ # Optuna
+ STUDY_NAME = "spade_smart_v2_thr3db"
+ OUT_CSV    = "smart_sweep_results.csv"
+ 
+ # FIXED SPADE solver parameters (invariant across all trials)
+ FIXED_SOLVER = dict(
+     algo          = "sspade",
+     frame         = "rdft",
+     mode          = "soft",
+     s             = 1,
+     r             = 1,
+     n_jobs        = 1,
+     verbose       = False,
+     show_progress = False,
+     use_gpu       = True,
+     # multiband and macro_expand are part of the search space
+ )
+ 
+ # Multiband crossover (fixed for comparability across trials)
+ # 250 Hz separates: LF = body/punch of the kick | HF = transient/attack
+ BAND_CROSSOVER_HZ = 250.0
+ 
+ # =============================================================================
+ # HELPERS
+ # =============================================================================
+ 
+ def ensure_2d(a: np.ndarray) -> np.ndarray:
+     return a[:, None] if a.ndim == 1 else a
+ 
+ 
+ def normalize_to_0dBFS(a: np.ndarray) -> np.ndarray:
+     """Scale to 0 dBFS peak — used only on the original as a common reference."""
+     pk = np.max(np.abs(a))
+     return a / pk if pk > 1e-12 else a
+ 
+ 
+ def normalize_peak(a: np.ndarray, target_dbfs: float) -> np.ndarray:
+     """
+     Scale to target_dbfs dBFS peak.
+     Used ONLY on residuals for cross-file comparability;
+     it does not alter the logic because GT and iter are scaled identically.
+     """
+     pk = np.max(np.abs(a))
+     return a * (10 ** (target_dbfs / 20.0) / pk) if pk > 1e-12 else a
+ 
+ 
+ def generate_pink_noise(n_samples: int, n_channels: int, rng: np.random.Generator) -> np.ndarray:
+     """
+     Generates pink (1/f) noise by filtering white noise with a 3-pole IIR
+     "pinking" filter (accurate to within roughly ±1 dB over 20 Hz – 20 kHz).
+ 
+     Output: shape (n_samples, n_channels), RMS normalized to 1.0 (before the
+     mix-in with PINK_NOISE_LEVEL_DB, which controls the final level).
+ 
+     Algorithm: white noise filtered with H(z) = B(z) / A(z), whose
+     coefficients approximate a 1/f spectral density.
+     """
+     # 3-pole IIR pinking-filter coefficients
+     # Real poles, all stable (|p| < 1)
+     b = np.array([0.049922035, -0.095993537, 0.050612699, -0.004408786])
+     a = np.array([1.0, -2.494956002, 2.017265875, -0.522189400])
+ 
+     out = np.empty((n_samples, n_channels))
+     for c in range(n_channels):
+         white = rng.standard_normal(n_samples)
+         pink = sig.lfilter(b, a, white)
+         rms = np.sqrt(np.mean(pink ** 2))
+         out[:, c] = pink / (rms + 1e-12)   # RMS = 1.0
+ 
+     return out
+ 
+ 
+ def mix_pink_noise(
+     audio_0dBFS: np.ndarray,
+     sr: int,
+     level_db: float,
+     rng: np.random.Generator,
+ ) -> np.ndarray:
+     """
+     Mixes pink noise into the signal at a level relative to its peak.
+ 
+     level_db < 0 → the noise sits below the drum's peak (e.g. −20 dB)
+     The noise lasts as long as the sample; if the sample is stereo, the noise
+     is stereo (independent channels → decorrelated like a real musical bed).
+ 
+     The output signal may exceed 0 dBFS by a fraction of a dB: that is
+     fine, the limiter that follows brings it back under the threshold.
+     """
+     audio = ensure_2d(audio_0dBFS)
+     N, C = audio.shape
+ 
+     noise = generate_pink_noise(N, C, rng)      # RMS = 1.0 per channel
+     # Scale the noise to the desired level relative to the drum's peak
+     peak = np.max(np.abs(audio))
+     gain = peak * (10 ** (level_db / 20.0))     # absolute linear gain
+     mixed = audio + noise * gain
+     # Do NOT normalize here: normalization to 0 dBFS happens in build_corpus
+     # right after, on the whole mix (drum + noise), before any other op.
+     return mixed[:, 0] if audio_0dBFS.ndim == 1 else mixed
+ 
+ 
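The pinking filter above is a plain direct-form IIR recursion. As a dependency-free sketch (the `pink_noise` helper name is hypothetical; same `b`/`a` coefficients, same unit-RMS normalization), the difference equation y[i] = Σ b[k]·x[i−k] − Σ a[k]·y[i−k] looks like:

```python
import math, random

# 3-pole pinking-filter coefficients, as in generate_pink_noise.
B = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
A = [1.0, -2.494956002, 2.017265875, -0.522189400]

def pink_noise(n: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]   # white noise input
    y = [0.0] * n
    for i in range(n):
        acc = sum(B[k] * x[i - k] for k in range(4) if i - k >= 0)
        acc -= sum(A[k] * y[i - k] for k in range(1, 4) if i - k >= 0)
        y[i] = acc
    rms = math.sqrt(sum(v * v for v in y) / n)
    return [v / (rms + 1e-12) for v in y]         # unit RMS, like the function above

p = pink_noise(4096)
print(round(math.sqrt(sum(v * v for v in p) / len(p)), 6))  # → 1.0
```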
+ # =============================================================================
+ # SYNTHETIC LIMITER (Case 1 — threshold-based, brickwall, 1-sample attack)
+ # =============================================================================
+ 
+ def apply_brickwall_limiter(
+     audio_0dBFS: np.ndarray,
+     sr: int,
+     threshold_db: float = LIMITER_THRESHOLD_DB,
+     release_ms: float = LIMITER_RELEASE_MS,
+ ) -> np.ndarray:
+     """
+     Threshold-based brickwall limiter.
+ 
+     Input:  audio_0dBFS — already at 0 dBFS peak, shape (N,) or (N, C)
+     Output: limited signal, same shape — NOT boosted, NOT clipped
+ 
+     Gain envelope:
+         if |x[n]| > threshold_lin → target_gain = threshold_lin / |x[n]|
+         otherwise                 → target_gain = 1.0
+     Attack : instantaneous (1 sample, true brickwall)
+     Release: exponential with time constant release_ms
+ 
+     Post-processing: NONE.
+     The output signal has max peak ≈ −threshold_db dBFS.
+     Perceived loudness is unchanged relative to the input.
+     """
+     thr_lin = 10 ** (-abs(threshold_db) / 20.0)
+     rc = np.exp(-1.0 / max(release_ms * sr / 1000.0, 1e-9))
+ 
+     audio = ensure_2d(audio_0dBFS).copy()
+     N, C = audio.shape
+     out = np.empty_like(audio)
+ 
+     for c in range(C):
+         ch = audio[:, c]
+         env = 1.0
+         g = np.empty(N)
+         for n in range(N):
+             pk = abs(ch[n])
+             target = thr_lin / pk if pk > thr_lin else 1.0
+             # instantaneous attack when the gain drops, exponential release when it rises
+             env = target if target < env else rc * env + (1.0 - rc) * target
+             g[n] = env
+         out[:, c] = ch * g
+ 
+     # Return the same shape as the input
+     return out[:, 0] if audio_0dBFS.ndim == 1 else out
+ 
+ 
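A dependency-free sketch of the same gain law (instantaneous attack, one-pole exponential release; the `limit` helper and its toy sample rate are assumptions for illustration) shows the peak being pinned at the threshold:

```python
import math

def limit(x, threshold_db=3.0, release_ms=80.0, sr=1000):
    thr = 10 ** (-abs(threshold_db) / 20.0)           # ≈ 0.7079 for 3 dB
    rc = math.exp(-1.0 / (release_ms * sr / 1000.0))  # release coefficient
    env, out = 1.0, []
    for s in x:
        target = thr / abs(s) if abs(s) > thr else 1.0
        # gain drops instantly, recovers exponentially toward target
        env = target if target < env else rc * env + (1.0 - rc) * target
        out.append(s * env)
    return out

y = limit([0.1, 1.0, 0.5, -1.0, 0.2])
print(round(max(abs(v) for v in y), 4))  # → 0.7079  (peak at −3 dBFS)
```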
+ # =============================================================================
+ # TF COSINE SIMILARITY
+ # =============================================================================
+ 
+ def cosine_sim_tf(
+     gt: np.ndarray,
+     est: np.ndarray,
+     sr: int,
+     win_samples: int = 1024,
+     hop_samples: int = 256,
+     n_bands: int = 12,
+ ) -> float:
+     """
+     Mean cosine similarity over time-frequency micro-windows.
+     Input:  both already at RESIDUAL_DBFS peak.
+     Output: scalar in [0, 1]. Ideal target = 1.0.
+     """
+     L = min(gt.shape[0], est.shape[0])
+     g = (gt[:L, 0] if gt.ndim == 2 else gt[:L]).copy()
+     e = (est[:L, 0] if est.ndim == 2 else est[:L]).copy()
+ 
+     win = min(win_samples, max(32, L // 4))
+     hop = min(hop_samples, win // 2)
+ 
+     if L < win or win < 32:
+         denom = np.linalg.norm(g) * np.linalg.norm(e) + 1e-12
+         return float(np.dot(g, e) / denom)
+ 
+     _, _, Zg = sig.stft(g, fs=sr, window="hann",
+                         nperseg=win, noverlap=win - hop,
+                         boundary=None, padded=False)
+     _, _, Ze = sig.stft(e, fs=sr, window="hann",
+                         nperseg=win, noverlap=win - hop,
+                         boundary=None, padded=False)
+ 
+     n_freqs, n_frames = Zg.shape
+     if n_frames == 0:
+         return float(np.dot(g, e) / (np.linalg.norm(g) * np.linalg.norm(e) + 1e-12))
+ 
+     edges = np.unique(np.round(
+         np.logspace(0, np.log10(max(n_freqs, 2)), min(n_bands, n_freqs) + 1)
+     ).astype(int))
+     edges = np.clip(edges, 0, n_freqs)
+ 
+     sims = []
+     for i in range(len(edges) - 1):
+         f0, f1 = int(edges[i]), int(edges[i + 1])
+         if f1 <= f0:
+             continue
+         Mg = np.abs(Zg[f0:f1, :])
+         Me = np.abs(Ze[f0:f1, :])
+         dot = np.sum(Mg * Me, axis=0)
+         norm_g = np.sqrt(np.sum(Mg ** 2, axis=0)) + 1e-12
+         norm_e = np.sqrt(np.sum(Me ** 2, axis=0)) + 1e-12
+         sims.extend((dot / (norm_g * norm_e)).tolist())
+ 
+     return float(np.mean(sims)) if sims else 0.0
+ 
+ 
+ # =============================================================================
+ # CORPUS
+ # =============================================================================
+ 
+ def build_corpus(base_dir: Path, max_files: Optional[int] = None) -> List[Dict]:
+     """
+     For each drum sample:
+       1. Load and normalize to 0 dBFS peak (common cross-file reference)
+       2. Mix in pink noise at PINK_NOISE_LEVEL_DB rel. to the peak  ← NEW
+          The mix happens in float (it may temporarily exceed 0 dBFS)
+       3. Normalize the mix (drum + noise) to 0 dBFS peak
+          Common reference before the whole downstream pipeline
+       4. Apply the synthetic limiter to the normalized (drum + noise) → limited
+       5. GT_res_raw = (drum + noise) − limited   (same scale, no gain)
+       6. Discard files where the limiter never engages
+       7. Normalize GT_res to RESIDUAL_DBFS (cross-file comparability only)
+ 
+     The noise is reproducible: each file uses a deterministic seed derived
+     from its index in the corpus, so trials are comparable with each other.
+     """
+     corpus = []
+     extensions = {".wav", ".flac", ".aif", ".aiff"}
+     file_index = 0   # used for the deterministic noise seed
+ 
+     for folder in DRUM_DIRS:
+         d = base_dir / folder
+         if not d.exists():
+             print(f"  [WARN] Folder not found: {d}")
+             continue
+         for f in sorted(d.glob("*")):
+             if f.suffix.lower() not in extensions:
+                 continue
+             try:
+                 audio, sr = sf.read(str(f), always_2d=True)
+                 audio = audio.astype(float)
+             except Exception as exc:
+                 print(f"  [WARN] {f.name}: {exc}")
+                 continue
+ 
+             if audio.shape[0] < 64:
+                 continue
+ 
+             # 1. 0 dBFS peak
+             orig = normalize_to_0dBFS(audio)
+ 
+             # 2. Mix in pink noise — deterministic seed for reproducibility
+             rng = np.random.default_rng(seed=file_index)
+             orig_with_noise = ensure_2d(mix_pink_noise(orig, sr,
+                                                        PINK_NOISE_LEVEL_DB, rng))
+             file_index += 1
+ 
+             # 3. Normalize the mix to 0 dBFS peak — common reference before
+             #    the whole pipeline. The float mix may have exceeded 0 dBFS;
+             #    this normalization removes the issue before the limiter.
+             orig_with_noise = ensure_2d(normalize_to_0dBFS(orig_with_noise))
+ 
+             # 4. Synthetic limiter on (drum + noise) @ 0 dBFS — no gain afterwards
+             limited = ensure_2d(apply_brickwall_limiter(orig_with_noise, sr))
+ 
+             # 5. Raw residual — same scale, zero adjustments
+             gt_res_raw = orig_with_noise - limited
+ 
+             # 6. Check that the limiter actually engaged
+             if np.max(np.abs(gt_res_raw)) < 1e-6:
+                 print(f"  [SKIP] {f.name} — peak below threshold, limiter inactive")
+                 continue
+ 
+             # 7. Normalize to RESIDUAL_DBFS only for cross-file comparability
+             gt_res = normalize_peak(gt_res_raw, RESIDUAL_DBFS)
+ 
+             corpus.append({
+                 "file"    : f.name,
+                 "sr"      : sr,
+                 "limited" : limited,   # SPADE input = drum + noise + limiter
+                 "gt_res"  : gt_res,    # target residual
+             })
+ 
+             if max_files and len(corpus) >= max_files:
+                 return corpus
+ 
+     return corpus
+ 
+ 
+ # =============================================================================
+ # SINGLE-FILE EVALUATION
+ # =============================================================================
+ 
+ def evaluate_one(item: Dict, params: dict) -> Optional[float]:
+     """
+     Runs SPADE on limited, computes the residual and compares it with GT.
+ 
+     params contains pure SPADE parameters plus high-level flags:
+       multiband    (bool)  -- split LF/HF, process separately
+       macro_expand (bool)  -- envelope pre-pass to recover the LF body
+       macro_ratio  (float) -- expansion ratio (1.0 = bypass)
+       lf_delta_db  (float) -- delta_db for the LF band (<= BAND_CROSSOVER_HZ);
+                               the standard delta_db is used for the HF band
+       lf_cutoff_hz (float) -- v12: Hz below which LF bins are reserved (0 = off)
+       lf_k_min     (int)   -- v12: guaranteed LF slots per ADMM iteration
+     """
+     try:
+         sr = item["sr"]
+         limited = item["limited"].copy()
+         gt_res = item["gt_res"]
+ 
+         # Extract the high-level flags (they are not direct DeclipParams)
+         p2 = dict(params)   # copy so the original is not mutated
+         multiband    = p2.pop("multiband", False)
+         macro_expand = p2.pop("macro_expand", False)
+         macro_ratio  = p2.pop("macro_ratio", 1.0)
+         lf_delta_db  = p2.pop("lf_delta_db", p2.get("delta_db", 1.5))
+         # v12: stratified-thresholding params — passed straight to DeclipParams
+         # (already in the p2 dict, no separate pop needed)
+ 
+         spade_kw = dict(
+             multiband        = multiband,
+             macro_expand     = macro_expand,
+             macro_ratio      = macro_ratio if macro_expand else 1.0,
+             macro_release_ms = 200.0,
+             macro_attack_ms  = 10.0,
+         )
+         if multiband:
+             spade_kw["band_crossovers"] = (BAND_CROSSOVER_HZ,)
+             spade_kw["band_delta_db"]   = (lf_delta_db, p2["delta_db"])
+ 
+         p = DeclipParams(sample_rate=sr, **FIXED_SOLVER, **p2, **spade_kw)
+         fixed, _ = declip(limited, p)
+         fixed_2d = ensure_2d(fixed)
+ 
+         # Generated residual — same scale as the input, no gain
+         res_raw = fixed_2d - limited
+         res_iter = normalize_peak(res_raw, RESIDUAL_DBFS)
+ 
+         return cosine_sim_tf(gt_res, res_iter, sr)
+ 
+     except Exception as exc:
+         warnings.warn(f"evaluate_one ({item['file']}): {exc}")
+         return None
+ 
+ 
+ # =============================================================================
+ # OPTUNA OBJECTIVE
+ # =============================================================================
+ 
+ def make_objective(corpus: List[Dict]):
+     def objective(trial: "optuna.Trial") -> float:
+         # ── Core parameters ────────────────────────────────────────────────
+         delta_db = trial.suggest_float("delta_db", 1.5, 3.5, step=0.05)
+         win_exp  = trial.suggest_int ("win_exp", 9, 11)
+         win      = 2 ** win_exp
+         hop_div  = trial.suggest_categorical("hop_div", [4, 8])
+         hop      = win // hop_div
+         rel_ms   = trial.suggest_float("release_ms", 10.0, 200.0, step=5.0)
+         gain_db  = trial.suggest_float("max_gain_db", 2.0, 12.0, step=0.5)
+         eps      = trial.suggest_categorical("eps", [0.03, 0.05, 0.1])
+         max_iter = trial.suggest_categorical("max_iter", [250, 500, 1000])
+ 
+         # ── Multiband + macro expand ────────────────────────────────────────
+         # STATIC SPACE: lf_delta_db and macro_ratio are ALWAYS sampled by the
+         # TPE (fixed space) and then used conditionally at runtime.
+         # This avoids the fallback to RandomSampler that degraded the
+         # performance of multivariate TPE with dynamic spaces.
+         multiband    = trial.suggest_categorical("multiband", [False, True])
+         macro_expand = trial.suggest_categorical("macro_expand", [False, True])
+ 
+         # Always sampled (fixed range), used only when the flag is True:
+         lf_delta_db = trial.suggest_float("lf_delta_db", 0.5, 2.0, step=0.05)
+         macro_ratio = trial.suggest_float("macro_ratio", 1.1, 2.0, step=0.05)
+ 
+         # ── v12: frequency-stratified thresholding ─────────────────────────
+         # lf_cutoff_hz: threshold in Hz separating the "guaranteed LF" bins from HF.
+         # With M=512, sr=44100: bin_k = k * sr / (2M) → lf_cutoff=1000 Hz → 23 LF bins.
+         # lf_k_min: how many of those bins are guaranteed on every ADMM iteration.
+         # 0 = disabled (behavior identical to v11).
+         lf_cutoff_hz = trial.suggest_categorical("lf_cutoff_hz", [0.0, 500.0, 1000.0, 2000.0])
+         lf_k_min     = trial.suggest_int("lf_k_min", 0, 16)
+         # Note: when lf_cutoff_hz=0 or lf_k_min=0 the feature is disabled.
+         # The TPE learns on its own when enabling it pays off.
+ 
+         # If multiband=False,    lf_delta_db is ignored in evaluate_one.
+         # If macro_expand=False, macro_ratio is ignored in evaluate_one.
+ 
+         params = dict(
+             delta_db      = delta_db,
+             window_length = win,
+             hop_length    = hop,
+             release_ms    = rel_ms,
+             max_gain_db   = gain_db,
+             eps           = eps,
+             max_iter      = max_iter,
+             # high-level flags (extracted in evaluate_one, not passed raw)
+             multiband     = multiband,
+             lf_delta_db   = lf_delta_db,
+             macro_expand  = macro_expand,
+             macro_ratio   = macro_ratio,
+             # v12: passed straight to DeclipParams (not extracted in evaluate_one)
+             lf_cutoff_hz  = lf_cutoff_hz,
+             lf_k_min      = lf_k_min,
+         )
+ 
+         scores = []
+         # ── Per-trial shuffle with a reproducible seed ─────────────────────
+         # Every trial sees the corpus in a different order, so files at the
+         # tail are not systematically ignored by the pruner (which evaluates
+         # at the midpoint) and the optimizer develops no bias from a fixed
+         # order. The seed is deterministic (trial.number) → reproducible
+         # with --resume.
+         rng_shuffle = np.random.default_rng(trial.number)
+         shuffled_corpus = rng_shuffle.permutation(len(corpus)).tolist()
+         midpoint = len(corpus) // 2
+ 
+         for step, idx in enumerate(shuffled_corpus):
+             item = corpus[idx]
+             sc = evaluate_one(item, dict(params))   # dict() so params is not mutated
+             if sc is not None:
+                 scores.append(sc)
+             if step == midpoint and scores:
+                 trial.report(float(np.mean(scores)), step=step)
+                 if trial.should_prune():
+                     raise optuna.TrialPruned()
+ 
+         if not scores:
+             return 0.0
+         mean_score = float(np.mean(scores))
+         trial.report(mean_score, step=len(corpus))
+         return mean_score
+ 
+     return objective
+ 
+ 
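The per-trial shuffle's reproducibility argument (same `trial.number` → same corpus order, so `--resume` replays identical trials) can be sketched without Optuna; `random.Random` stands in for `np.random.default_rng` and `trial_order` is a hypothetical helper:

```python
import random

def trial_order(trial_number: int, n_items: int) -> list:
    # Deterministic permutation keyed on the trial number, mirroring
    # np.random.default_rng(trial.number).permutation(...) in the objective.
    idx = list(range(n_items))
    random.Random(trial_number).shuffle(idx)
    return idx

a = trial_order(7, 10)
b = trial_order(7, 10)   # resuming the study replays the same order
print(a == b, sorted(a) == list(range(10)))  # → True True
```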
+ # =============================================================================
555
+ # REPORT + CSV
556
+ # =============================================================================
557
+
558
+def print_report(study: "optuna.Study", top_n: int = 20):
+    trials = sorted(
+        [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE],
+        key=lambda t: t.value or 0, reverse=True,
+    )
+    if not trials:
+        print("No completed trials.")
+        return
+
+    if _HAS_RICH:
+        _console.rule("[bold cyan]BAYESIAN SWEEP RESULTS[/]")
+        tbl = Table(show_header=True, header_style="bold cyan", show_lines=False)
+        for col, w in [("#", 4), ("score", 9), ("ddb", 6), ("LFd", 5), ("win", 6),
+                       ("hop", 4), ("rel", 6), ("gain", 6), ("eps", 5), ("iter", 5),
+                       ("MB", 3), ("ME", 3), ("MR", 5), ("LFcut", 6), ("LFk", 4)]:
+            tbl.add_column(col, justify="right", width=w)
+        for rank, t in enumerate(trials[:top_n], 1):
+            p = t.params
+            win = 2 ** p["win_exp"]
+            hop = win // p["hop_div"]
+            mb = "Y" if p.get("multiband") else "n"
+            me = "Y" if p.get("macro_expand") else "n"
+            lfc = p.get("lf_cutoff_hz", 0.0)
+            lfk = p.get("lf_k_min", 0)
+            sty = "bold green" if rank == 1 else ("yellow" if rank <= 3 else "")
+            tbl.add_row(
+                str(rank), f"{t.value:.5f}",
+                f"{p['delta_db']:.2f}",
+                f"{p.get('lf_delta_db', p['delta_db']):.2f}",
+                str(win), str(hop),
+                f"{p['release_ms']:.0f}", f"{p['max_gain_db']:.1f}",
+                str(p['eps']), str(p['max_iter']),
+                mb, me, f"{p.get('macro_ratio', 1.0):.2f}",
+                f"{lfc:.0f}", str(lfk),
+                style=sty,
+            )
+        _console.print(tbl)
+    else:
+        hdr = (f"{'#':>3} {'score':>8} {'ddb':>5} {'LFd':>5} {'win':>5}"
+               f" {'hop':>4} {'rel':>6} {'gain':>5} {'eps':>5} {'iter':>5}"
+               f" {'MB':>3} {'ME':>3} {'MR':>5} {'LFcut':>6} {'LFk':>4}")
+        print(hdr); print("-" * len(hdr))
+        for rank, t in enumerate(trials[:top_n], 1):
+            p = t.params
+            win = 2 ** p["win_exp"]
+            hop = win // p["hop_div"]
+            mb = "Y" if p.get("multiband") else "n"
+            me = "Y" if p.get("macro_expand") else "n"
+            lfc = p.get("lf_cutoff_hz", 0.0)
+            lfk = p.get("lf_k_min", 0)
+            print(f"{rank:>3} {t.value:>8.5f} {p['delta_db']:>5.2f}"
+                  f" {p.get('lf_delta_db', p['delta_db']):>5.2f} {win:>5}"
+                  f" {hop:>4} {p['release_ms']:>6.0f} {p['max_gain_db']:>5.1f}"
+                  f" {str(p['eps']):>5} {p['max_iter']:>5}"
+                  f" {mb:>3} {me:>3} {p.get('macro_ratio', 1.0):>5.2f}"
+                  f" {lfc:>6.0f} {lfk:>4}")
+
+    best = trials[0]
+    p = best.params
+    win = 2 ** p["win_exp"]
+    hop = win // p["hop_div"]
+    n_pruned = sum(1 for t in study.trials
+                   if t.state == optuna.trial.TrialState.PRUNED)
+
+    print("\n" + "═" * 60)
+    print("OPTIMAL CONFIG")
+    print("═" * 60)
+    print(f"""
+params = DeclipParams(
+    algo            = "sspade",
+    frame           = "rdft",
+    mode            = "soft",
+    delta_db        = {p['delta_db']:.2f},
+    window_length   = {win},
+    hop_length      = {hop},
+    release_ms      = {p['release_ms']:.1f},
+    max_gain_db     = {p['max_gain_db']:.1f},
+    eps             = {p['eps']},
+    max_iter        = {p['max_iter']},
+    sample_rate     = sr,
+    multiband       = {p.get('multiband', False)},
+    band_crossovers = ({BAND_CROSSOVER_HZ},),
+    band_delta_db   = ({p.get('lf_delta_db', p['delta_db']):.2f}, {p['delta_db']:.2f}),
+    macro_expand    = {p.get('macro_expand', False)},
+    macro_ratio     = {p.get('macro_ratio', 1.0):.2f},
+    lf_cutoff_hz    = {p.get('lf_cutoff_hz', 0.0):.1f},  # v12
+    lf_k_min        = {p.get('lf_k_min', 0)},            # v12
+    n_jobs          = -1,
+    show_progress   = True,
+)""")
+    print(f"\n→ Best score  : {best.value:.5f}")
+    print(f"  Trials done : {len(trials)}")
+    print(f"  Pruned      : {n_pruned}")
+
+
+# =============================================================================
+# DEBUG EXPORT
+# =============================================================================
+
+# SPADE parameters used for the debug export (best known values from the
+# previous grid sweep). If an Optuna DB exists and has completed trials,
+# they are replaced by the best trial's parameters.
+DEBUG_PARAMS = dict(
+    delta_db      = 1.5,
+    window_length = 1024,
+    hop_length    = 256,
+    release_ms    = 100.0,
+    max_gain_db   = 6.0,
+    eps           = 0.05,
+    max_iter      = 500,
+)
+
+
+def _pk_dbfs(a: np.ndarray) -> float:
+    pk = float(np.max(np.abs(a)))
+    return 20.0 * np.log10(pk) if pk > 1e-12 else -999.0
+
+
+def _rms_dbfs(a: np.ndarray) -> float:
+    rms = float(np.sqrt(np.mean(a.astype(float) ** 2)))
+    return 20.0 * np.log10(rms) if rms > 1e-12 else -999.0
+
+
+def _write_wav(path: Path, audio: np.ndarray, sr: int) -> None:
+    """Write a float32 WAV without clipping. Warns if peak > 1.0."""
+    a2d = ensure_2d(audio).astype(np.float32)
+    pk = float(np.max(np.abs(a2d)))
+    if pk > 1.0:
+        print(f"  [WARN] {path.name}: peak={pk:.4f} > 1.0 "
+              f"(+{20*np.log10(pk):.2f} dBFS) — float32, not clipped")
+    sf.write(str(path), a2d, sr, subtype="FLOAT")
+
+
+def debug_export(
+    corpus: list,
+    base_dir: Path,
+    out_dir: Path,
+    n_files: int,
+    spade_params: dict,
+) -> None:
+    """
+    Export debug WAVs for the first n_files items of the corpus.
+
+    For each file, 6 float32 WAVs are written:
+      01_orig_with_noise  drum + pink noise, normalized to 0 dBFS peak
+                          (signal before the limiter)
+      02_limited          output of the synthetic limiter (input to SPADE)
+      03_gt_residual      orig_with_noise - limited, @RESIDUAL_DBFS peak
+      04_spade_output     SPADE output (float32, may exceed 0 dBFS)
+      05_res_iter         spade_output - limited, @RESIDUAL_DBFS peak
+      06_diff_residuals   gt_residual - res_iter
+                          ideal = silence = -inf dB
+
+    Prints a table with peak dBFS and RMS dBFS for each track.
+
+    EXPECTED levels:
+      01  peak = 0.00 dBFS (normalized)
+      02  peak ~ -LIMITER_THRESHOLD_DB dBFS (e.g. -1.5 dBFS)
+      03  peak = RESIDUAL_DBFS (e.g. -3.0 dBFS)
+      04  peak may be > 0 dBFS (recovered transient)
+      05  peak = RESIDUAL_DBFS (e.g. -3.0 dBFS)
+      06  peak << 0 dBFS (lower = SPADE closer to the GT)
+    """
+    out_dir.mkdir(parents=True, exist_ok=True)
+    items = corpus[:n_files]
+    col_w = max(len(it["file"]) for it in items) + 2
+
+    HDR = (f"  {'file':<{col_w}} {'track':<22}"
+           f" {'peak dBFS':>10} {'RMS dBFS':>9} notes")
+    SEP = "  " + "-" * (len(HDR) - 2)
+
+    print()
+    if _HAS_RICH:
+        _console.rule("[bold cyan]DEBUG EXPORT[/]")
+    else:
+        print("=" * 65)
+        print("DEBUG EXPORT")
+        print("=" * 65)
+
+    print(f"  Output dir    : {out_dir}")
+    print(f"  SPADE params  : delta_db={spade_params['delta_db']}"
+          f" win={spade_params['window_length']}"
+          f" hop={spade_params['hop_length']}"
+          f" rel={spade_params['release_ms']}ms"
+          f" gain={spade_params['max_gain_db']}dB")
+    print(f"  Files exported: {len(items)}")
+    print()
+    print("  Expected levels:")
+    print("    01_orig_with_noise :  ~  0.00 dBFS (normalized before the limiter)")
+    print(f"    02_limited         :  ~ {-LIMITER_THRESHOLD_DB:+.2f} dBFS (limiter output)")
+    print(f"    03_gt_residual     :  = {RESIDUAL_DBFS:+.2f} dBFS (normalized)")
+    print("    04_spade_output    :  >  0 dBFS possible (recovered transient)")
+    print(f"    05_res_iter        :  = {RESIDUAL_DBFS:+.2f} dBFS (normalized)")
+    print("    06_diff_residuals  :  << 0 dBFS (lower = more correct pipeline)")
+    print()
+    print(HDR)
+
+    diff_peaks = []
+
+    for file_index, item in enumerate(items):
+        sr = item["sr"]
+        limited = item["limited"].copy()
+        gt_res = item["gt_res"]
+        stem = Path(item["file"]).stem
+
+        # ── Rebuild orig_with_noise ───────────────────────────────────────
+        # Re-runs the same pipeline as build_corpus with the identical seed
+        orig_with_noise = None
+        for folder in DRUM_DIRS:
+            candidate = base_dir / folder / item["file"]
+            if candidate.exists():
+                try:
+                    raw, _ = sf.read(str(candidate), always_2d=True)
+                    raw = raw.astype(float)
+                    rng = np.random.default_rng(seed=file_index)
+                    orig_0 = normalize_to_0dBFS(raw)
+                    mixed = ensure_2d(mix_pink_noise(orig_0, sr,
+                                                     PINK_NOISE_LEVEL_DB, rng))
+                    orig_with_noise = ensure_2d(normalize_to_0dBFS(mixed))
+                except Exception:
+                    pass
+                break
+
+        if orig_with_noise is None:
+            # Fallback: rebuild from limited + gt_res (approximation)
+            gt_scale = 10 ** (RESIDUAL_DBFS / 20.0)          # peak of gt_res
+            lim_peak = 10 ** (-LIMITER_THRESHOLD_DB / 20.0)  # expected peak of limited
+            gt_raw = gt_res * (lim_peak / (gt_scale + 1e-12))
+            orig_with_noise = ensure_2d(normalize_to_0dBFS(limited + gt_raw))
+
+        # ── Run SPADE ─────────────────────────────────────────────────────
+        try:
+            p = DeclipParams(sample_rate=sr, **FIXED_SOLVER, **spade_params)
+            fixed, _ = declip(limited.copy(), p)
+            fixed_2d = ensure_2d(fixed)
+        except Exception as exc:
+            print(f"  [SPADE ERROR] {item['file']}: {exc}")
+            continue
+
+        # ── Iteration residual (RAW scale, no normalization) ──────────────
+        # IMPORTANT: the diff must be taken on the common scale BEFORE
+        # normalizing the two residuals, otherwise the independent
+        # normalization removes the relative-amplitude information.
+        #
+        # gt_res and res_raw are both derived from the same limited signal →
+        # they share the same reference scale.
+        # gt_res was already normalized to RESIDUAL_DBFS in build_corpus;
+        # we must bring it back to the raw scale for the comparison.
+        #
+        # Common scale: we use the peak of limited as the reference.
+        # limited peak ≈ 10^(-LIMITER_THRESHOLD_DB/20) → known absolute scale.
+        res_raw = fixed_2d - limited  # SPADE residual on the absolute scale
+
+        # gt_res_raw: rebuild from the normalized scale
+        #   gt_res = gt_res_raw / peak(gt_res_raw) * 10^(RESIDUAL_DBFS/20)
+        #   → gt_res_raw = gt_res * peak(gt_res_raw) / 10^(RESIDUAL_DBFS/20)
+        # Since peak(gt_res_raw) is not saved, we estimate it:
+        #   gt_res_raw ≈ orig_with_noise - limited (rebuilt)
+        gt_res_raw_approx = ensure_2d(orig_with_noise) - limited
+        L = min(gt_res_raw_approx.shape[0], res_raw.shape[0])
+
+        # ── Diff on the common (raw, non-normalized) scale ────────────────
+        diff_raw = gt_res_raw_approx[:L] - res_raw[:L]
+
+        # ── Time-domain cosine similarity (scalar, left channel) ──────────
+        g_flat = gt_res_raw_approx[:L, 0] if gt_res_raw_approx.ndim == 2 else gt_res_raw_approx[:L]
+        e_flat = res_raw[:L, 0] if res_raw.ndim == 2 else res_raw[:L]
+        cos_sim_td = float(
+            np.dot(g_flat, e_flat) /
+            (np.linalg.norm(g_flat) * np.linalg.norm(e_flat) + 1e-12)
+        )
+
+        # ── Estimate of the theoretical diff floor due to pink noise ──────
+        # The limiter also attenuates the pink-noise peaks → that part ends
+        # up in GT_res but NOT in res_iter (SPADE does not recover it).
+        # We estimate how much noise is in GT_res as a proxy for the floor.
+        noise_gain_lin = 10 ** (PINK_NOISE_LEVEL_DB / 20.0)
+        # Noise amplitude relative to limited: noise_gain ≈ fraction of
+        # GT_res that SPADE cannot recover.
+        noise_floor_db = 20 * np.log10(noise_gain_lin + 1e-12) + RESIDUAL_DBFS
+        # In practice: the diff cannot be < noise_floor by construction.
+
+        # ── diff dBFS relative to GT_res (SNR-like) ───────────────────────
+        diff_rms_db = _rms_dbfs(diff_raw[:L])
+        gt_rms_db = _rms_dbfs(gt_res_raw_approx[:L])
+        # diff_vs_gt: how large the diff is relative to the GT (0 dB = diff = GT)
+        diff_vs_gt_db = diff_rms_db - gt_rms_db  # more negative = better
+
+        # Normalize for the WAV export
+        res_iter = normalize_peak(res_raw, RESIDUAL_DBFS)
+        diff_norm = normalize_peak(diff_raw, RESIDUAL_DBFS) if np.max(np.abs(diff_raw)) > 1e-12 else diff_raw
+
+        diff_peaks.append((diff_vs_gt_db, cos_sim_td, diff_rms_db, gt_rms_db))
+
+        # ── Track definitions ─────────────────────────────────────────────
+        tracks = [
+            ("01_orig_with_noise",
+             orig_with_noise,
+             "drum+noise @0dBFS (pipeline input)"),
+            ("02_limited",
+             limited,
+             f"limiter output (SPADE input) expected: ~{-LIMITER_THRESHOLD_DB:+.2f}dBFS"),
+            ("03_gt_residual",
+             gt_res,
+             f"GT residual @{RESIDUAL_DBFS:.0f}dBFS (includes noise attenuation)"),
+            ("04_spade_output",
+             fixed_2d,
+             "SPADE output (float32, may be >0dBFS)"),
+            ("05_res_iter",
+             res_iter,
+             f"SPADE residual @{RESIDUAL_DBFS:.0f}dBFS (sparse component only)"),
+            ("06_diff_residuals",
+             diff_norm,
+             f"GT - iter @{RESIDUAL_DBFS:.0f}dBFS "
+             f"cos_sim={cos_sim_td:.3f} diff/GT={diff_vs_gt_db:+.1f}dB "
+             f"noise_floor≈{noise_floor_db:+.1f}dB"),
+        ]
+
+        # ── Realistic threshold for the diff ──────────────────────────────
+        # The diff cannot be < noise_floor by construction of the corpus.
+        # We calibrate the [OK] threshold at noise_floor + 6 dB (margin).
+        ok_threshold = noise_floor_db + 6.0    # typically around -17 dBFS
+        warn_threshold = ok_threshold + 10.0   # anything above is truly anomalous
+
+        # ── Print table + write WAVs ──────────────────────────────────────
+        print(SEP)
+        for track_name, audio, note in tracks:
+            pk = _pk_dbfs(audio)
+            rms = _rms_dbfs(audio)
+
+            flag = ""
+            if track_name == "06_diff_residuals":
+                if diff_vs_gt_db < -12:  flag = "[OK] good convergence"
+                elif diff_vs_gt_db < -6: flag = "[~] partial convergence"
+                else:                    flag = "[WARN] diff high relative to GT"
+
+            row = (f"  {item['file']:<{col_w}} {track_name:<22}"
+                   f" {pk:>+10.2f} {rms:>+9.2f} {note} {flag}")
+
+            if _HAS_RICH:
+                color = ("green" if "[OK]" in flag else
+                         "yellow" if "[~]" in flag else
+                         "red" if "[WARN]" in flag else "")
+                colored_row = row.replace(flag, f"[{color or 'dim'}]{flag}[/]") if flag else row
+                _console.print(colored_row)
+            else:
+                print(row)
+
+            wav_path = out_dir / f"{stem}__{track_name}.wav"
+            _write_wav(wav_path, audio, sr)
+
+        # ── Per-band spectral analysis: LF vs HF ──────────────────────────
+        # Answers the question: how much residual is there in the low
+        # frequencies, and how much of it does SPADE recover?
+        #
+        # Bands:
+        #   Sub-bass : 20 – 80 Hz    (kick fundamental, body)
+        #   Bass     : 80 – 250 Hz   (kick body, tail)
+        #   Low-mid  : 250 – 800 Hz  (presence)
+        #   High-mid : 800 – 4000 Hz (attack, click)
+        #   High     : 4k – 20k Hz   (air, snap)
+        #
+        # For each band we measure:
+        #   GT_energy   = energy of the GT residual (what the limiter removed)
+        #   iter_energy = energy recovered by SPADE
+        #   recovery %  = iter_energy / GT_energy × 100
+
+        def band_energy(audio_2d, sr, f_lo, f_hi):
+            """RMS energy in dB of the passband [f_lo, f_hi] Hz."""
+            mono = audio_2d[:, 0] if audio_2d.ndim == 2 else audio_2d
+            N = len(mono)
+            if N < 8:
+                return -999.0
+            # Butterworth bandpass (or lowpass/highpass at the edges)
+            nyq = sr / 2.0
+            lo = max(f_lo / nyq, 1e-4)
+            hi = min(f_hi / nyq, 0.9999)
+            if lo >= hi:
+                return -999.0
+            if lo < 1e-3:
+                b, a = sig.butter(4, hi, btype="low")
+            else:
+                b, a = sig.butter(4, [lo, hi], btype="band")
+            filtered = sig.filtfilt(b, a, mono)
+            return _rms_dbfs(filtered)
+
+        BANDS = [
+            ("Sub-bass ",   20,    80),
+            ("Bass     ",   80,   250),
+            ("Low-mid  ",  250,   800),
+            ("High-mid ",  800,  4000),
+            ("High     ", 4000, 20000),
+        ]
+
+        gt_mono = gt_res[:, 0] if gt_res.ndim == 2 else gt_res
+        ri_mono = res_iter[:, 0] if res_iter.ndim == 2 else res_iter
+
+        # Compare GT and iter on the same scale (undo the normalization to
+        # RESIDUAL_DBFS so that absolute energies are comparable)
+        gt_raw_for_bands = gt_res_raw_approx
+        iter_raw_for_bands = res_raw
+
+        print()
+        band_hdr = f"  {'band':<12} {'GT_res RMS':>10} {'SPADE rec RMS':>13} {'recovery':>9} {'limited?'}"
+        print(f"  Per-band spectral analysis — {item['file']}")
+        print(f"  {'─'*75}")
+        print(band_hdr)
+        print(f"  {'─'*75}")
+        for bname, f_lo, f_hi in BANDS:
+            gt_db = band_energy(gt_raw_for_bands, sr, f_lo, f_hi)
+            iter_db = band_energy(iter_raw_for_bands, sr, f_lo, f_hi)
+            if gt_db < -60:
+                recovery_str = "  — (silence)"
+                flag_b = ""
+            else:
+                diff_b = iter_db - gt_db  # positive = SPADE exceeds GT (over-recovery)
+                # recovery: 0 dB diff = perfect recovery, very negative = under-recovery
+                if diff_b > -3:
+                    flag_b = "OK"
+                elif diff_b > -9:
+                    flag_b = "~ partial"
+                else:
+                    flag_b = "!! under-recovery"
+                recovery_str = f"{diff_b:>+7.1f} dB  {flag_b}"
+            line = f"  {bname:<12} {gt_db:>+10.1f} {iter_db:>+13.1f} {recovery_str}"
+            if _HAS_RICH:
+                color = "green" if "OK" in recovery_str else (
+                    "yellow" if "~" in recovery_str else (
+                        "red" if "!!" in recovery_str else "dim"))
+                _console.print(f"[{color}]{line}[/]")
+            else:
+                print(line)
+        print()
+
+
992
+ print(SEP)
993
+ print()
994
+ if diff_peaks:
995
+ vs_gt_vals = [d[0] for d in diff_peaks]
996
+ cos_vals = [d[1] for d in diff_peaks]
997
+ avg_vs_gt = float(np.mean(vs_gt_vals))
998
+ best_vs_gt = float(np.min(vs_gt_vals))
999
+ worst_vs_gt = float(np.max(vs_gt_vals))
1000
+ avg_cos = float(np.mean(cos_vals))
1001
+
1002
+ noise_floor_db = 20 * np.log10(10 ** (PINK_NOISE_LEVEL_DB / 20.0) + 1e-12) + RESIDUAL_DBFS
1003
+
1004
+ print(f" RIEPILOGO 06_diff_residuals:")
1005
+ print(f" diff/GT_rms media : {avg_vs_gt:>+7.2f} dB (0 dB = diff grande quanto GT)")
1006
+ print(f" diff/GT_rms migliore: {best_vs_gt:>+7.2f} dB")
1007
+ print(f" diff/GT_rms peggiore: {worst_vs_gt:>+7.2f} dB")
1008
+ print(f" cos_sim TD media : {avg_cos:>8.4f} (1.0 = identici)")
1009
+ print()
1010
+ print(f" NOTA IMPORTANTE:")
1011
+ print(f" Il rumore rosa ({PINK_NOISE_LEVEL_DB} dB) fa parte del GT_res ma")
1012
+ print(f" NON puo' essere recuperato da SPADE (non e' sparso).")
1013
+ print(f" Floor teorico del diff: ≈ {noise_floor_db:+.1f} dBFS — questo e' il")
1014
+ print(f" limite fisico massimo raggiungibile con questo corpus.")
1015
+ print(f" Un diff/GT < -6 dB indica buona convergenza di SPADE.")
1016
+ print()
1017
+ if worst_vs_gt < -12:
1018
+ verdict = "OK Convergenza eccellente — SPADE recupera bene i transienti"
1019
+ elif worst_vs_gt < -6:
1020
+ verdict = "~ Convergenza buona — residuo compatibile con il noise floor"
1021
+ else:
1022
+ verdict = "INFO diff dominato dal rumore rosa — comportamento atteso e corretto"
1023
+ print(f" Verdetto: {verdict}")
1024
+ print(f"\n WAV scritti in : {out_dir}/")
1025
+ print(f" Formato : float32, nessun clipping (usa un editor che supporta >0dBFS)")
1026
+ print(f" Nomenclatura : <stem>__<N>_<traccia>.wav")
1027
+
1028
+
1029
+def save_csv(study: "optuna.Study"):
+    import csv
+    trials = sorted(
+        [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE],
+        key=lambda t: t.value or 0, reverse=True,
+    )
+    with open(OUT_CSV, "w", newline="") as f:
+        w = csv.writer(f)
+        w.writerow(["rank", "score", "delta_db", "lf_delta_db",
+                    "window_length", "hop_length", "release_ms", "max_gain_db",
+                    "eps", "max_iter", "multiband", "macro_expand", "macro_ratio"])
+        for rank, t in enumerate(trials, 1):
+            p = t.params
+            win = 2 ** p["win_exp"]
+            hop = win // p["hop_div"]
+            w.writerow([
+                rank, round(t.value, 6),
+                p["delta_db"],
+                round(p.get("lf_delta_db", p["delta_db"]), 2),
+                win, hop,
+                p["release_ms"], p["max_gain_db"], p["eps"], p["max_iter"],
+                int(p.get("multiband", False)),
+                int(p.get("macro_expand", False)),
+                round(p.get("macro_ratio", 1.0), 2),
+            ])
+    print(f"\n  📄 CSV: {OUT_CSV}")
+
+
+# =============================================================================
+# MAIN
+# =============================================================================
+
+def parse_args():
+    ap = argparse.ArgumentParser(description="Smart Bayesian sweep for S-SPADE v2")
+    ap.add_argument("--trials", type=int, default=200,
+                    help="Number of Optuna trials (default: 200)")
+    ap.add_argument("--resume", action="store_true",
+                    help="Load the existing study and add trials")
+    ap.add_argument("--report", action="store_true",
+                    help="Report only (no new trials)")
+    ap.add_argument("--base-dir", type=str, default=".",
+                    help="Root folder with Kicks/Snares/Perc/Tops")
+    ap.add_argument("--corpus-size", type=int, default=None,
+                    help="Limit the corpus to N files (None = all)")
+    ap.add_argument("--top", type=int, default=20,
+                    help="How many trials to show in the ranking (default: 20)")
+    ap.add_argument("--no-prune", action="store_true",
+                    help="Disable the MedianPruner (slower but exhaustive)")
+    ap.add_argument("--debug-export", action="store_true",
+                    help="Export debug WAVs for the first N corpus files (no sweep)")
+    ap.add_argument("--debug-dir", type=str, default="debug_export",
+                    help="Output folder for debug WAVs (default: debug_export)")
+    ap.add_argument("--debug-n", type=int, default=10,
+                    help="How many files to export in debug mode (default: 10)")
+    return ap.parse_args()
+
+
+def main():
+    args = parse_args()
+
+    missing = []
+    if not _HAS_OPTUNA: missing.append("optuna")
+    if not _HAS_SPADE:  missing.append("spade_declip_v11.py (in the same dir)")
+    if missing:
+        pip = [m for m in missing if not m.endswith(")")]
+        sys.exit("Missing:\n  pip install " + " ".join(pip)
+                 + ("\n  " + "\n  ".join(m for m in missing if m.endswith(")")) if any(m.endswith(")") for m in missing) else ""))
+
+    base_dir = Path(args.base_dir).resolve()
+    storage = f"sqlite:///{STUDY_NAME}.db"
+    sampler = TPESampler(seed=42, multivariate=True, warn_independent_sampling=False)
+    pruner = (MedianPruner(n_startup_trials=10, n_warmup_steps=3)
+              if not args.no_prune else optuna.pruners.NopPruner())
+
+    if args.report:
+        try:
+            study = optuna.load_study(study_name=STUDY_NAME, storage=storage,
+                                      sampler=sampler, pruner=pruner)
+        except Exception:
+            sys.exit(f"No study found in {STUDY_NAME}.db")
+        print_report(study, top_n=args.top)
+        save_csv(study)
+        return
+
+    # ── Debug export ──────────────────────────────────────────────────────────
+    if args.debug_export:
+        # Use the best trial's parameters if a DB exists, otherwise DEBUG_PARAMS
+        spade_params = dict(DEBUG_PARAMS)
+        try:
+            study = optuna.load_study(study_name=STUDY_NAME, storage=storage,
+                                      sampler=sampler, pruner=pruner)
+            completed = [t for t in study.trials
+                         if t.state == optuna.trial.TrialState.COMPLETE]
+            if completed:
+                best_t = max(completed, key=lambda t: t.value or 0)
+                p = best_t.params
+                win = 2 ** p["win_exp"]
+                hop = win // p["hop_div"]
+                spade_params = dict(
+                    delta_db      = p["delta_db"],
+                    window_length = win,
+                    hop_length    = hop,
+                    release_ms    = p["release_ms"],
+                    max_gain_db   = p["max_gain_db"],
+                    eps           = p["eps"],
+                    max_iter      = p["max_iter"],
+                )
+                print(f"  [DEBUG] Using best trial #{best_t.number}"
+                      f" (score={best_t.value:.5f}) from the DB.")
+        except Exception:
+            print("  [DEBUG] DB not found — using default DEBUG_PARAMS.")
+
+        # Build the corpus (limited to debug_n files for speed)
+        corpus = build_corpus(base_dir, max_files=args.debug_n)
+        if not corpus:
+            sys.exit("Empty corpus. Check --base-dir.")
+        debug_export(
+            corpus       = corpus,
+            base_dir     = base_dir,
+            out_dir      = Path(args.debug_dir),
+            n_files      = args.debug_n,
+            spade_params = spade_params,
+        )
+        return
+
+    # ── Corpus ────────────────────────────────────────────────────────────────
+    print("\n" + "=" * 65)
+    print("CORPUS + SYNTHETIC LIMITER (Case 1 — threshold-based)")
+    print("=" * 65)
+    print(f"  Base dir   : {base_dir}")
+    print(f"  Threshold  : −{LIMITER_THRESHOLD_DB} dBFS")
+    print(f"  Release    : {LIMITER_RELEASE_MS} ms")
+    print("  Level align: NONE — loudness unchanged by construction")
+    print(f"  Pink noise : {PINK_NOISE_LEVEL_DB} dB rel. peak "
+          f"(simulates a musical bed under the transient)")
+
+    corpus = build_corpus(base_dir, max_files=args.corpus_size)
+    if not corpus:
+        sys.exit("Empty corpus. Check --base-dir and the folders.")
+
+    print(f"\n  ✓ {len(corpus)} files in the corpus\n")
+    col_w = max(len(item["file"]) for item in corpus) + 2
+    for item in corpus:
+        rms = float(np.sqrt(np.mean(item["gt_res"] ** 2)))
+        peak = float(np.max(np.abs(item["gt_res"])))
+        print(f"  {item['file']:<{col_w}} sr={item['sr']} "
+              f"GT rms={rms:.4f} peak={peak:.4f}")
+
+    # ── Study ─────────────────────────────────────────────────────────────────
+    print(f"\n{'='*65}")
+    print(f"BAYESIAN OPTIMIZATION — {args.trials} trials")
+    print(f"TPE (multivariate) + MedianPruner | storage: {STUDY_NAME}.db")
+    print(f"{'='*65}\n")
+
1183
+ study = optuna.create_study(
1184
+ study_name = STUDY_NAME,
1185
+ storage = storage,
1186
+ sampler = sampler,
1187
+ pruner = pruner,
1188
+ direction = "maximize",
1189
+ load_if_exists = True,
1190
+ )
1191
+
1192
+ # ── Progress bar (rich → tqdm → plain fallback) ───────────────────────────
1193
+ try:
1194
+ from rich.progress import (
1195
+ Progress, BarColumn, TextColumn,
1196
+ TimeElapsedColumn, TimeRemainingColumn, MofNCompleteColumn,
1197
+ )
1198
+ _has_rich_progress = True
1199
+ except ImportError:
1200
+ _has_rich_progress = False
1201
+
1202
+ try:
1203
+ import tqdm as _tqdm_mod
1204
+ _has_tqdm = True
1205
+ except ImportError:
1206
+ _has_tqdm = False
1207
+
1208
+ # Stato condiviso aggiornato dal callback.
1209
+ # Pre-popolato con i trial gia' nel DB in caso di --resume,
1210
+ # cosi' la progress bar mostra il conteggio corretto dall'inizio.
1211
+ _existing_complete = [t for t in study.trials
1212
+ if t.state == optuna.trial.TrialState.COMPLETE]
1213
+ _existing_pruned = [t for t in study.trials
1214
+ if t.state == optuna.trial.TrialState.PRUNED]
1215
+
1216
+ if _existing_complete:
1217
+ _best_existing = max(_existing_complete, key=lambda t: t.value or 0)
1218
+ _init_best = _best_existing.value or 0.0
1219
+ _init_best_p = dict(_best_existing.params)
1220
+ _init_last = _init_best
1221
+ else:
1222
+ _init_best, _init_best_p, _init_last = float("-inf"), {}, float("-inf")
1223
+
1224
+ _state = {
1225
+ "done": len(_existing_complete),
1226
+ "pruned": len(_existing_pruned),
1227
+ "best": _init_best,
1228
+ "best_p": _init_best_p,
1229
+ "last": _init_last,
1230
+ "t0": time.time(),
1231
+ "n_total": len(_existing_complete) + len(_existing_pruned) + args.trials,
1232
+ }
1233
+
1234
+    def _fmt_best(state: dict) -> str:
+        """Compact string with the current best trial's parameters."""
+        bp = state["best_p"]
+        if not bp:
+            return "—"
+        win = 2 ** bp.get("win_exp", 10)
+        hop = win // bp.get("hop_div", 4)
+        return (f"δ={bp.get('delta_db',0):.2f} "
+                f"win={win} hop={hop} "
+                f"rel={bp.get('release_ms',0):.0f}ms "
+                f"gain={bp.get('max_gain_db',0):.1f}dB")
+
+    # ── Rich progress bar ─────────────────────────────────────────────────────
+    if _has_rich_progress:
+        progress = Progress(
+            TextColumn("[bold cyan]Trial[/] [cyan]{task.completed}/{task.total}[/]"),
+            BarColumn(bar_width=32),
+            MofNCompleteColumn(),
+            TextColumn(" score [green]{task.fields[last]:.5f}[/]"),
+            TextColumn(" best [bold green]{task.fields[best]:.5f}[/]"),
+            TextColumn(" [dim]pruned {task.fields[pruned]}[/]"),
+            TimeElapsedColumn(),
+            TextColumn("ETA"),
+            TimeRemainingColumn(),
+            refresh_per_second=4,
+            transient=False,
+        )
+        task_id = None  # created inside the context
+
+        def on_trial_end(study, trial):
+            fin = (trial.state == optuna.trial.TrialState.COMPLETE)
+            prn = (trial.state == optuna.trial.TrialState.PRUNED)
+            if fin:
+                _state["done"] += 1
+                _state["last"] = trial.value or 0.0
+                if _state["last"] > _state["best"]:
+                    _state["best"] = _state["last"]
+                    _state["best_p"] = dict(study.best_params)
+            elif prn:
+                _state["pruned"] += 1
+            progress.update(
+                task_id,
+                advance = 1,
+                last    = _state["last"],
+                best    = max(_state["best"], 0.0),
+                pruned  = _state["pruned"],
+            )
+
+        t0 = time.time()
+        try:
+            with progress:
+                task_id = progress.add_task(
+                    "sweep",
+                    total     = _state["n_total"],
+                    completed = _state["done"] + _state["pruned"],
+                    last      = max(_state["last"], 0.0),
+                    best      = max(_state["best"], 0.0),
+                    pruned    = _state["pruned"],
+                )
+                study.optimize(
+                    make_objective(corpus),
+                    n_trials          = args.trials,
+                    callbacks         = [on_trial_end],
+                    show_progress_bar = False,
+                )
+        except KeyboardInterrupt:
+            print("\n[!] Interrupted — partial results saved.")
+
+    # ── tqdm fallback ─────────────────────────────────────────────────────────
+    elif _has_tqdm:
+        import tqdm
+        _already = _state["done"] + _state["pruned"]
+        pbar = tqdm.tqdm(
+            total      = _state["n_total"],
+            initial    = _already,
+            unit       = "trial",
+            bar_format = "{l_bar}{bar}| {n}/{total} [{elapsed}<{remaining}]",
+        )
+        if _already > 0:
+            pbar.set_postfix(
+                score  = f"{max(_state['last'], 0.0):.5f}",
+                best   = f"{max(_state['best'], 0.0):.5f}",
+                pruned = _state["pruned"],
+            )
+
+        def on_trial_end(study, trial):
+            fin = trial.state == optuna.trial.TrialState.COMPLETE
+            prn = trial.state == optuna.trial.TrialState.PRUNED
+            if fin:
+                _state["done"] += 1
+                _state["last"] = trial.value or 0.0
+                if _state["last"] > _state["best"]:
+                    _state["best"] = _state["last"]
+                    _state["best_p"] = dict(study.best_params)
+            elif prn:
+                _state["pruned"] += 1
+            pbar.update(1)
+            pbar.set_postfix(
+                score  = f"{_state['last']:.5f}",
+                best   = f"{_state['best']:.5f}",
+                pruned = _state["pruned"],
+            )
+
+        t0 = time.time()
+        try:
+            study.optimize(
+                make_objective(corpus),
+                n_trials          = args.trials,
+                callbacks         = [on_trial_end],
+                show_progress_bar = False,
+            )
+        except KeyboardInterrupt:
+            print("\n[!] Interrupted — partial results saved.")
+        finally:
+            pbar.close()
+
+    # ── Plain fallback ────────────────────────────────────────────────────────
+    else:
+        def on_trial_end(study, trial):
+            fin = trial.state == optuna.trial.TrialState.COMPLETE
+            prn = trial.state == optuna.trial.TrialState.PRUNED
+            if fin:
+                _state["done"] += 1
+                _state["last"] = trial.value or 0.0
+                if _state["last"] > _state["best"]:
+                    _state["best"] = _state["last"]
+                    _state["best_p"] = dict(study.best_params)
+                elapsed = time.time() - _state["t0"]
+                done_tot = _state["done"] + _state["pruned"]
+                eta_s = (elapsed / done_tot) * (_state["n_total"] - done_tot) if done_tot else 0
+                is_best = abs(_state["last"] - _state["best"]) < 1e-9
+                bar_n = int(32 * done_tot / max(_state["n_total"], 1))
+                bar = "█" * bar_n + "░" * (32 - bar_n)
+                print(f"\r[{bar}] {done_tot}/{_state['n_total']}"
+                      f" {'★' if is_best else ' '}score={_state['last']:.5f}"
+                      f" best={_state['best']:.5f}"
+                      f" pruned={_state['pruned']}"
+                      f" ETA {eta_s/60:.1f}min ", end="", flush=True)
+            elif prn:
+                _state["pruned"] += 1
+
+        t0 = time.time()
+        try:
+            study.optimize(
+                make_objective(corpus),
+                n_trials          = args.trials,
+                callbacks         = [on_trial_end],
+                show_progress_bar = False,
+            )
+        except KeyboardInterrupt:
+            print("\n[!] Interrupted — partial results saved.")
+        print()  # newline after the \r line
+
+    elapsed = time.time() - t0
+    n_done = sum(1 for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE)
+    n_prune = sum(1 for t in study.trials if t.state == optuna.trial.TrialState.PRUNED)
+    print(f"\n  Completed: {n_done} | Pruned: {n_prune}"
+          f" | Total time: {elapsed/60:.1f} min"
+          f" | Average: {elapsed/max(n_done+n_prune,1):.1f} s/trial")
+
+    print_report(study, top_n=args.top)
+    save_csv(study)
+    print("\nDone.")
+
+
+if __name__ == "__main__":
+    main()
run_smart_sweep_old3.py ADDED
@@ -0,0 +1,2086 @@
+ """
+ run_smart_sweep.py — S-SPADE · Bayesian parameter search (v2)
+ ===================================================================
+
+ GROUND-TRUTH PIPELINE (Case 1 — threshold-based limiter)
+ ---------------------------------------------------------
+ The synthetic limiter is threshold-based:
+   - Original normalised to 0 dBFS peak
+   - Limiter: acts only on peaks above the threshold → output max peak ≈ −threshold_db
+   - The BODY of the signal (perceived loudness) is unchanged by definition
+   - NO gain is applied to the limited signal after processing
+
+ Alignment for the residual computation:
+   Original and limited are already on the same scale (equal loudness, different peaks).
+   No LUFS / RMS normalisation is needed or correct.
+
+   GT_res   = original_0dBFS − limited   (identical scales)
+   res_iter = spade_output   − limited   (likewise)
+
+ Both are then peak-normalised to RESIDUAL_DBFS ONLY to make files with
+ different absolute levels comparable — it does not alter the logic.
+
+ Ideal metric:
+   GT_res ≡ res_iter → cosine_sim = 1.0 → difference = −∞ dB
+
+ Optimiser: Optuna TPE (Bayesian) + MedianPruner
+ Storage:   SQLite (resumable with --resume)
+ Corpus:    all drum samples in Kicks / Snares / Perc / Tops
+
+ DEPENDENCIES
+ ------------
+ pip install numpy scipy soundfile optuna rich
+ (pyloudnorm NOT required)
+
+ USAGE
+ -----
+ python run_smart_sweep.py                        # 200 trials
+ python run_smart_sweep.py --trials 50            # quick test
+ python run_smart_sweep.py --resume               # resume from DB
+ python run_smart_sweep.py --report               # results only
+ python run_smart_sweep.py --base-dir /path/SPADE # custom folder
+ """
+
+ import argparse
+ import logging
+ import os
+ import sys
+ import time
+ import warnings
+ from pathlib import Path
+ from typing import Dict, List, Optional
+
+ # ── AMD ROCm performance tuning ───────────────────────────────────────────────
+ # Must be set BEFORE any torch/ROCm import.
+ #
+ # HSA_ENABLE_SDMA=0: disables the DMA engine for host↔device transfers.
+ #   On RDNA (RX 6700 XT and similar) the SDMA engine has high latency for small
+ #   batches (<1 MB). Using compute-shader blits instead makes the first transfer
+ #   3-5× faster. No effect on large batches.
+ #
+ # GPU_MAX_HW_QUEUES=4: caps the hardware queues at 4 (default=8 on RDNA).
+ #   With 8 queues and a single dispatch stream, the driver spreads the waves
+ #   across different queues, causing serialisation. With 4 they concentrate on
+ #   the same ring buffer and scheduling latency drops.
+ #
+ # HSA_OVERRIDE_GFX_VERSION: only if needed (RX 6700 XT = gfx1031 → OK as-is).
+ os.environ.setdefault("HSA_ENABLE_SDMA", "0")
+ os.environ.setdefault("GPU_MAX_HW_QUEUES", "4")
+
+ import numpy as np
+ import scipy.signal as sig
+ import soundfile as sf
+
+ logging.getLogger("optuna").setLevel(logging.WARNING)
+
+ # ── optuna ───────────────────────────────────────────────────────────────────
+ try:
+     import optuna
+     from optuna.samplers import TPESampler
+     from optuna.pruners import MedianPruner
+     _HAS_OPTUNA = True
+ except ImportError:
+     _HAS_OPTUNA = False
+     warnings.warn("optuna not found — pip install optuna")
+
+ # ── rich ─────────────────────────────────────────────────────────────────────
+ try:
+     from rich.console import Console
+     from rich.table import Table
+     _console = Console()
+     _HAS_RICH = True
+ except ImportError:
+     _HAS_RICH = False
+     _console = None
+
+ # ── spade_declip ─────────────────────────────────────────────────────────────
+ try:
+     from spade_declip_v12 import (
+         declip, DeclipParams,
+         # Internals needed for the GPU mega-batch path in evaluate_corpus_gpu_mega:
+         _compute_masks, _dilate_masks_soft, _macro_expand_pass,
+         _build_lf_mask, _sspade_batch_gpu,
+         ClippingMasks,
+     )
+     _HAS_SPADE = True
+ except ImportError:
+     _HAS_SPADE = False
+     warnings.warn("spade_declip_v12.py not found")
+
+ # =============================================================================
+ # CONFIG
+ # =============================================================================
+
+ DRUM_DIRS = ["Kicks", "Snares", "Perc", "Tops"]
+
+ # ── Synthetic limiter ─────────────────────────────────────────────────────────
+ # Case 1: threshold-based.
+ # Original @ 0 dBFS peak → limiter acts on peaks > threshold →
+ # output max peak ≈ −LIMITER_THRESHOLD_DB dBFS, loudness unchanged.
+ # The limited signal is NOT touched by any gain afterwards.
+ LIMITER_THRESHOLD_DB = 3.0    # dB below the ceiling (positive)
+ LIMITER_RELEASE_MS   = 80.0   # release of the synthetic limiter (ms)
+                               # attack = 1 sample → true brickwall
+
+ # Residual normalisation — ONLY for cross-file comparability.
+ # Scales both GT and iter identically, so it does not alter the comparison.
+ RESIDUAL_DBFS = -3.0
+
+ # ── Background pink noise ─────────────────────────────────────────────────────
+ # Simulates a musical background underneath the drum transient.
+ # It is mixed into the sample (already at 0 dBFS peak) BEFORE the limiter.
+ # This ensures that:
+ #   - the limiter acts on a realistic drum + music background signal
+ #   - SPADE receives the same mix and must work under realistic conditions
+ #   - GT_res = (drum+noise) − limiter(drum+noise) reflects the real situation
+ # Level relative to the drum sample's peak. −20 dB = background well below
+ # the transient, audible but not dominant (like a kick over a drum loop).
+ PINK_NOISE_LEVEL_DB = -20.0   # dB rel. to the drum peak (negative = below)
+
+ # Optuna
+ STUDY_NAME = "spade_smart_v2_thr3db"
+ OUT_CSV    = "smart_sweep_results.csv"
+
+ # FIXED SPADE solver parameters (invariant across all trials)
+ FIXED_SOLVER = dict(
+     algo          = "sspade",
+     frame         = "rdft",
+     mode          = "soft",
+     s             = 1,
+     r             = 1,
+     n_jobs        = 1,
+     verbose       = False,
+     show_progress = False,
+     use_gpu       = True,
+     # multiband and macro_expand are part of the search space
+ )
+
+ # Multiband crossover (fixed for comparability across trials)
+ # 250 Hz separates: LF = body/punch of the kick | HF = transient/attack
+ BAND_CROSSOVER_HZ = 250.0
+
+ # =============================================================================
+ # HELPERS
+ # =============================================================================
+
+ def ensure_2d(a: np.ndarray) -> np.ndarray:
+     return a[:, None] if a.ndim == 1 else a
+
+
+ def normalize_to_0dBFS(a: np.ndarray) -> np.ndarray:
+     """Scale to 0 dBFS peak — used only on the original as a common reference."""
+     pk = np.max(np.abs(a))
+     return a / pk if pk > 1e-12 else a
+
+
+ def normalize_peak(a: np.ndarray, target_dbfs: float) -> np.ndarray:
+     """
+     Scale to target_dbfs dBFS peak.
+     Used ONLY on the residuals for cross-file comparability;
+     it does not alter the logic because GT and iter are scaled identically.
+     """
+     pk = np.max(np.abs(a))
+     return a * (10 ** (target_dbfs / 20.0) / pk) if pk > 1e-12 else a
+
+
+ def generate_pink_noise(n_samples: int, n_channels: int, rng: np.random.Generator) -> np.ndarray:
+     """
+     Generate pink (1/f) noise by filtering white noise with a low-order IIR
+     pinking filter (accurate to within ±1 dB over 20 Hz – 20 kHz).
+
+     Output: shape (n_samples, n_channels), RMS normalised to 1.0 (before the
+     mix-in with PINK_NOISE_LEVEL_DB, which controls the final level).
+
+     Algorithm: white noise filtered with H(z) = B(z) / A(z), with coefficients
+     tuned to approximate a 1/f power spectral density.
+     """
+     # Third-order IIR pinking filter coefficients
+     # Real poles, all stable (|p| < 1)
+     b = np.array([0.049922035, -0.095993537, 0.050612699, -0.004408786])
+     a = np.array([1.0, -2.494956002, 2.017265875, -0.522189400])
+
+     out = np.empty((n_samples, n_channels))
+     for c in range(n_channels):
+         white = rng.standard_normal(n_samples)
+         pink  = sig.lfilter(b, a, white)
+         rms   = np.sqrt(np.mean(pink ** 2))
+         out[:, c] = pink / (rms + 1e-12)   # RMS = 1.0
+
+     return out
+
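A minimal standalone sketch of the pinking-filter step above, using the same `b`/`a` coefficients and normalisation (the signal length and seed are hypothetical): the filtered noise should come out with unit RMS and a spectrum that falls off toward high frequencies.

```python
import numpy as np
import scipy.signal as sig

rng = np.random.default_rng(0)
# Same third-order pinking-filter coefficients as generate_pink_noise
b = np.array([0.049922035, -0.095993537, 0.050612699, -0.004408786])
a = np.array([1.0, -2.494956002, 2.017265875, -0.522189400])

white = rng.standard_normal(48_000)
pink = sig.lfilter(b, a, white)
pink /= np.sqrt(np.mean(pink ** 2)) + 1e-12   # RMS = 1.0, as in the function

rms = float(np.sqrt(np.mean(pink ** 2)))
spec = np.abs(np.fft.rfft(pink))
print(round(rms, 6))                              # → 1.0
print(spec[10:100].mean() > spec[1000:10000].mean())  # low bins louder → True
```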
+
+ def mix_pink_noise(
+     audio_0dBFS: np.ndarray,
+     sr: int,
+     level_db: float,
+     rng: np.random.Generator,
+ ) -> np.ndarray:
+     """
+     Mix pink noise into the signal at a level relative to its peak.
+
+     level_db < 0 → the noise sits below the drum peak (e.g. −20 dB)
+     The noise lasts as long as the sample; if the sample is stereo, the noise
+     is stereo too (independent channels → decorrelated, like a real musical bed).
+
+     The output signal may exceed 0 dBFS by a fraction of a dB: that is fine,
+     the limiter that follows takes care of bringing it back under the threshold.
+     """
+     audio = ensure_2d(audio_0dBFS)
+     N, C = audio.shape
+
+     noise = generate_pink_noise(N, C, rng)     # RMS = 1.0 per channel
+     # Scale the noise to the desired level relative to the drum peak
+     peak  = np.max(np.abs(audio))
+     gain  = peak * (10 ** (level_db / 20.0))   # absolute linear gain
+     mixed = audio + noise * gain
+     # Do NOT normalise here: normalisation to 0 dBFS happens in build_corpus
+     # right after, on the whole mix (drum + noise), before any other op.
+     return mixed[:, 0] if audio_0dBFS.ndim == 1 else mixed
+
+
+ # =============================================================================
+ # SYNTHETIC LIMITER (Case 1 — threshold-based, brickwall, 1-sample attack)
+ # =============================================================================
+
+ def apply_brickwall_limiter(
+     audio_0dBFS: np.ndarray,
+     sr: int,
+     threshold_db: float = LIMITER_THRESHOLD_DB,
+     release_ms: float = LIMITER_RELEASE_MS,
+ ) -> np.ndarray:
+     """
+     Threshold-based brickwall limiter.
+
+     Tries the GPU (Hillis-Steele parallel prefix scan, O(log N) depth) if
+     PyTorch + CUDA/ROCm are available, then Numba JIT, then an optimised
+     numpy loop.
+
+     Input:  audio_0dBFS — already at 0 dBFS peak, shape (N,) or (N, C)
+     Output: limited signal, same shape — NOT boosted, NOT clipped
+     """
+     thr_lin = 10 ** (-abs(threshold_db) / 20.0)
+     rc = np.exp(-1.0 / max(release_ms * sr / 1000.0, 1e-9))
+
+     audio = ensure_2d(audio_0dBFS).copy().astype(np.float32)
+     N, C = audio.shape
+
+     # ── GPU path (preferred) ──────────────────────────────────────────────
+     try:
+         import torch
+         if torch.cuda.is_available():
+             dev = "cuda"
+             out = np.empty_like(audio)
+             for c in range(C):
+                 x_t = torch.from_numpy(audio[:, c]).to(device=dev)
+                 y_t = _brickwall_limiter_gpu(x_t, thr_lin, rc)
+                 out[:, c] = y_t.cpu().numpy()
+             return out[:, 0] if audio_0dBFS.ndim == 1 else out
+     except Exception:
+         pass   # fall through to CPU paths
+
+     # ── Numba JIT path ────────────────────────────────────────────────────
+     try:
+         from numba import njit
+
+         @njit(cache=True)
+         def _limiter_loop_nb(ch: np.ndarray, thr: float, rc: float,
+                              g_out: np.ndarray) -> None:
+             env = 1.0
+             for n in range(len(ch)):
+                 pk = abs(ch[n])
+                 target = thr / pk if pk > thr else 1.0
+                 env = target if target < env else rc * env + (1.0 - rc) * target
+                 g_out[n] = env
+
+         out = np.empty(audio.shape, dtype=np.float32)
+         for c in range(C):
+             g = np.empty(N, dtype=np.float32)
+             _limiter_loop_nb(audio[:, c].astype(np.float64), thr_lin, rc, g)
+             out[:, c] = audio[:, c] * g
+         return out[:, 0] if audio_0dBFS.ndim == 1 else out
+
+     except ImportError:
+         pass
+
+     # ── Pure-numpy fallback ───────────────────────────────────────────────
+     out = np.empty_like(audio)
+     for c in range(C):
+         ch = audio[:, c].astype(np.float64)
+         pk = np.abs(ch)
+         g_instant = np.where(pk > thr_lin, thr_lin / np.maximum(pk, 1e-12), 1.0)
+         g = np.empty(N)
+         env = 1.0
+         gi = g_instant
+         for n in range(N):
+             t = gi[n]
+             env = t if t < env else rc * env + (1.0 - rc) * t
+             g[n] = env
+         out[:, c] = ch * g
+     return out[:, 0] if audio_0dBFS.ndim == 1 else out
+
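A compact standalone sketch of the gain recurrence used by all three paths (the sine-burst input and the −3 dB / 80 ms settings are just an example mirroring the module constants): because the attack is one sample, the output peak never exceeds the linear threshold.

```python
import numpy as np

def brickwall_gain(x, thr_lin, rc):
    """Sequential reference: env[n] = min(target[n], rc*env[n-1] + (1-rc)*target[n])."""
    env, g = 1.0, np.empty_like(x)
    for n, s in enumerate(x):
        pk = abs(s)
        target = thr_lin / pk if pk > thr_lin else 1.0
        env = target if target < env else rc * env + (1.0 - rc) * target
        g[n] = env
    return g

sr = 48_000
thr_lin = 10 ** (-3.0 / 20.0)                 # −3 dBFS threshold
rc = np.exp(-1.0 / (80.0 * sr / 1000.0))      # 80 ms release coefficient
x = np.sin(2 * np.pi * 60 * np.arange(sr // 10) / sr)   # 0 dBFS sine burst
y = x * brickwall_gain(x, thr_lin, rc)
print(np.max(np.abs(y)) <= thr_lin + 1e-9)    # → True (true brickwall)
```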
+
+ # =============================================================================
+ # TF COSINE SIMILARITY
+ # =============================================================================
+
+ def cosine_sim_tf(
+     gt: np.ndarray,
+     est: np.ndarray,
+     sr: int,
+     win_samples: int = 1024,
+     hop_samples: int = 256,
+     n_bands: int = 12,
+ ) -> float:
+     """
+     Mean cosine similarity over time-frequency micro-windows.
+     Input:  both already at RESIDUAL_DBFS peak.
+     Output: scalar in [0, 1]. Ideal target = 1.0.
+     """
+     L = min(gt.shape[0], est.shape[0])
+     g = (gt[:L, 0] if gt.ndim == 2 else gt[:L]).copy()
+     e = (est[:L, 0] if est.ndim == 2 else est[:L]).copy()
+
+     win = min(win_samples, max(32, L // 4))
+     hop = min(hop_samples, win // 2)
+
+     if L < win or win < 32:
+         denom = np.linalg.norm(g) * np.linalg.norm(e) + 1e-12
+         return float(np.dot(g, e) / denom)
+
+     _, _, Zg = sig.stft(g, fs=sr, window="hann",
+                         nperseg=win, noverlap=win - hop,
+                         boundary=None, padded=False)
+     _, _, Ze = sig.stft(e, fs=sr, window="hann",
+                         nperseg=win, noverlap=win - hop,
+                         boundary=None, padded=False)
+
+     n_freqs, n_frames = Zg.shape
+     if n_frames == 0:
+         return float(np.dot(g, e) / (np.linalg.norm(g) * np.linalg.norm(e) + 1e-12))
+
+     edges = np.unique(np.round(
+         np.logspace(0, np.log10(max(n_freqs, 2)), min(n_bands, n_freqs) + 1)
+     ).astype(int))
+     edges = np.clip(edges, 0, n_freqs)
+
+     sims = []
+     for i in range(len(edges) - 1):
+         f0, f1 = int(edges[i]), int(edges[i + 1])
+         if f1 <= f0:
+             continue
+         Mg = np.abs(Zg[f0:f1, :])
+         Me = np.abs(Ze[f0:f1, :])
+         dot    = np.sum(Mg * Me, axis=0)
+         norm_g = np.sqrt(np.sum(Mg ** 2, axis=0)) + 1e-12
+         norm_e = np.sqrt(np.sum(Me ** 2, axis=0)) + 1e-12
+         sims.extend((dot / (norm_g * norm_e)).tolist())
+
+     return float(np.mean(sims)) if sims else 0.0
+
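A simplified self-contained version of the banded TF cosine similarity above (the 48 kHz noise inputs are hypothetical): an identical pair scores ≈ 1.0, while uncorrelated residuals score noticeably lower because their magnitude spectrograms decorrelate within each band.

```python
import numpy as np
import scipy.signal as sig

def tf_cosine(g, e, sr, win=1024, hop=256, n_bands=12):
    # Magnitude-STFT cosine similarity per log-spaced band, averaged over frames
    _, _, Zg = sig.stft(g, fs=sr, window="hann", nperseg=win,
                        noverlap=win - hop, boundary=None, padded=False)
    _, _, Ze = sig.stft(e, fs=sr, window="hann", nperseg=win,
                        noverlap=win - hop, boundary=None, padded=False)
    n_freqs = Zg.shape[0]
    edges = np.unique(np.round(np.logspace(0, np.log10(n_freqs),
                                           n_bands + 1)).astype(int))
    sims = []
    for f0, f1 in zip(edges[:-1], edges[1:]):
        Mg, Me = np.abs(Zg[f0:f1]), np.abs(Ze[f0:f1])
        num = np.sum(Mg * Me, axis=0)
        den = (np.sqrt(np.sum(Mg ** 2, 0)) + 1e-12) * (np.sqrt(np.sum(Me ** 2, 0)) + 1e-12)
        sims.extend((num / den).tolist())
    return float(np.mean(sims))

rng = np.random.default_rng(1)
x = rng.standard_normal(48_000)
s_same = tf_cosine(x, x, 48_000)
s_diff = tf_cosine(x, rng.standard_normal(48_000), 48_000)
print(round(s_same, 3))   # → 1.0
print(s_diff)             # noticeably below 1
```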
+
+ # =============================================================================
+ # CORPUS
+ # =============================================================================
+
+ def build_corpus(base_dir: Path, max_files: Optional[int] = None) -> List[Dict]:
+     """
+     For each drum sample:
+       1. Load and normalise to 0 dBFS peak (common cross-file reference)
+       2. Mix in pink noise at PINK_NOISE_LEVEL_DB rel. to the peak  ← NEW
+          The mix happens in float (may temporarily exceed 0 dBFS)
+       3. Normalise the mix (drum + noise) to 0 dBFS peak
+          Common reference ahead of the whole subsequent pipeline
+       4. Apply the synthetic limiter to the normalised (drum + noise) → limited
+       5. GT_res_raw = (drum + noise) − limited   (same scale, no gain)
+       6. Discard files where the limiter never engages
+       7. Normalise GT_res to RESIDUAL_DBFS (cross-file comparability only)
+
+     The noise is reproducible: each file uses a deterministic seed derived
+     from its index in the corpus, so trials are comparable with each other.
+     """
+     corpus = []
+     extensions = {".wav", ".flac", ".aif", ".aiff"}
+     file_index = 0   # used for the deterministic noise seed
+
+     for folder in DRUM_DIRS:
+         d = base_dir / folder
+         if not d.exists():
+             print(f"  [WARN] Folder not found: {d}")
+             continue
+         for f in sorted(d.glob("*")):
+             if f.suffix.lower() not in extensions:
+                 continue
+             try:
+                 audio, sr = sf.read(str(f), always_2d=True)
+                 audio = audio.astype(float)
+             except Exception as exc:
+                 print(f"  [WARN] {f.name}: {exc}")
+                 continue
+
+             if audio.shape[0] < 64:
+                 continue
+
+             # 1. 0 dBFS peak
+             orig = normalize_to_0dBFS(audio)
+
+             # 2. Mix in pink noise — deterministic seed for reproducibility
+             rng = np.random.default_rng(seed=file_index)
+             orig_with_noise = ensure_2d(mix_pink_noise(orig, sr,
+                                                        PINK_NOISE_LEVEL_DB, rng))
+             file_index += 1
+
+             # 3. Normalise the mix to 0 dBFS peak — common reference ahead of
+             #    the whole pipeline. The float mix may have exceeded 0 dBFS;
+             #    this normalisation removes the issue before the limiter.
+             orig_with_noise = ensure_2d(normalize_to_0dBFS(orig_with_noise))
+
+             # 4. Synthetic limiter on (drum + noise) @0dBFS — no gain afterwards
+             limited = ensure_2d(apply_brickwall_limiter(orig_with_noise, sr))
+
+             # 5. Raw residual — same scale, zero adjustments
+             gt_res_raw = orig_with_noise - limited
+
+             # 6. Check that the limiter actually engaged
+             if np.max(np.abs(gt_res_raw)) < 1e-6:
+                 print(f"  [SKIP] {f.name} — peak below threshold, limiter inactive")
+                 continue
+
+             # 7. Normalise to RESIDUAL_DBFS for cross-file comparability only
+             gt_res = normalize_peak(gt_res_raw, RESIDUAL_DBFS)
+
+             corpus.append({
+                 "file"    : f.name,
+                 "sr"      : sr,
+                 "limited" : limited,   # SPADE input = drum + noise + limiter
+                 "gt_res"  : gt_res,    # target residual
+             })
+
+             if max_files and len(corpus) >= max_files:
+                 return corpus
+
+     return corpus
+
+ # =============================================================================
+ # SINGLE-FILE EVALUATION
+ # =============================================================================
+
+ def evaluate_one(item: Dict, params: dict) -> Optional[float]:
+     """
+     Runs SPADE on limited, computes the residual and compares it with GT.
+
+     params contains pure SPADE parameters + high-level flags:
+         multiband    (bool)  -- split LF/HF, process separately
+         macro_expand (bool)  -- envelope pre-pass to recover LF body
+         macro_ratio  (float) -- expansion ratio (1.0 = bypass)
+         lf_delta_db  (float) -- delta_db for the LF band (<= BAND_CROSSOVER_HZ)
+                                 the standard delta_db is used for the HF band
+         lf_cutoff_hz (float) -- v12: Hz below which LF bins are reserved (0 = off)
+         lf_k_min     (int)   -- v12: guaranteed LF slots per ADMM iteration
+     """
+     try:
+         sr      = item["sr"]
+         limited = item["limited"].copy()
+         gt_res  = item["gt_res"]
+
+         # Extract high-level flags (not direct DeclipParams parameters)
+         p2 = dict(params)   # copy so the original is not mutated
+         multiband    = p2.pop("multiband", False)
+         macro_expand = p2.pop("macro_expand", False)
+         macro_ratio  = p2.pop("macro_ratio", 1.0)
+         lf_delta_db  = p2.pop("lf_delta_db", p2.get("delta_db", 1.5))
+         # v12: stratified thresholding params — passed straight to DeclipParams
+         # (already in the p2 dict, no separate pop required)
+
+         spade_kw = dict(
+             multiband        = multiband,
+             macro_expand     = macro_expand,
+             macro_ratio      = macro_ratio if macro_expand else 1.0,
+             macro_release_ms = 200.0,
+             macro_attack_ms  = 10.0,
+         )
+         if multiband:
+             spade_kw["band_crossovers"] = (BAND_CROSSOVER_HZ,)
+             spade_kw["band_delta_db"]   = (lf_delta_db, p2["delta_db"])
+
+         p = DeclipParams(sample_rate=sr, **FIXED_SOLVER, **p2, **spade_kw)
+         fixed, _ = declip(limited, p)
+         fixed_2d = ensure_2d(fixed)
+
+         # Generated residual — same scale as the input, no gain
+         res_raw  = fixed_2d - limited
+         res_iter = normalize_peak(res_raw, RESIDUAL_DBFS)
+
+         # GPU cosine sim when available, CPU fallback otherwise
+         try:
+             import torch
+             g = gt_res[:, 0] if gt_res.ndim == 2 else gt_res
+             e = (res_iter[:, 0] if res_iter.ndim == 2 else res_iter).astype(np.float32)
+             dev = "cuda" if torch.cuda.is_available() else "cpu"
+             g_t = torch.from_numpy(g.astype(np.float32)).to(dev)
+             e_t = torch.from_numpy(e).to(dev)
+             Lmin = min(g_t.shape[0], e_t.shape[0])
+             return _cosine_sim_gpu(g_t[:Lmin], e_t[:Lmin])
+         except Exception:
+             return cosine_sim_tf(gt_res, res_iter, sr)
+
+     except Exception as exc:
+         warnings.warn(f"evaluate_one ({item['file']}): {exc}")
+         return None
+
+
+ # =============================================================================
+ # GPU MEGA-BATCH (v12 — AMD RX 6700 XT optimisation)
+ # =============================================================================
+
+
+ # =============================================================================
+ # GPU PIPELINE — every pass on the GPU (v13)
+ # =============================================================================
+ #
+ # Architecture
+ # ------------
+ # _brickwall_limiter_gpu   : limiter via Hillis-Steele parallel prefix scan
+ # _compute_masks_gpu       : boolean tensor ops
+ # _dilate_masks_gpu        : F.max_pool1d instead of np.convolve
+ # _extract_frames_gpu      : tensor.unfold → frame batch without Python loops
+ # _wola_gpu                : scatter_add_ → overlap-add without Python loops
+ # _rms_match_gpu           : F.max_pool1d for near-clip + tensor ops
+ # _cosine_sim_gpu          : torch.stft → cosine sim without scipy
+ # evaluate_corpus_gpu_mega : full pipeline — zero numpy in the hot path
+ # =============================================================================
+
+ def _brickwall_limiter_gpu(
+     audio_t: "torch.Tensor",   # (L,) or (C, L) float32
+     thr_lin: float,
+     rc: float,
+ ) -> "torch.Tensor":
+     """
+     Brickwall limiter on the GPU via Hillis-Steele parallel prefix scan.
+
+     Recurrence (causal):
+         env[n] = min(target[n], rc * env[n-1] + (1-rc) * target[n])
+
+     Clamp-linear function representation f(y) = min(t, r*y + c):
+         - r_n = rc, c_n = (1-rc)*target[n], t_n = target[n]
+
+     Composition operator h_a ⋆ h_b (apply h_a first, then h_b):
+         r_ab = r_a * r_b
+         c_ab = r_b * c_a + c_b
+         t_ab = min(t_b, r_b * t_a + c_b)
+
+     Hillis-Steele inclusive prefix scan with ⋆ → O(log N) depth.
+     Result: scan[n] = f_0 ⋆ f_1 ⋆ ... ⋆ f_n
+     env[n] = scan[n](env_init=1.0) = min(t_prefix[n], r_prefix[n] + c_prefix[n])
+     """
+     import torch
+     squeeze = audio_t.dim() == 1
+     if squeeze:
+         audio_t = audio_t.unsqueeze(0)   # (1, L)
+     C, L = audio_t.shape
+     dev = audio_t.device
+     dt  = audio_t.dtype
+
+     # ── Step 1: instantaneous gain target (fully parallel) ────────────────
+     pk = audio_t.abs().clamp(min=1e-12)
+     target = (thr_lin / pk).clamp(max=1.0)   # (C, L)
+
+     # ── Step 2: Hillis-Steele prefix scan ─────────────────────────────────
+     inv_rc = 1.0 - rc
+     # Each position n represents f_n: (r=rc, c=(1-rc)*target[n], t=target[n])
+     r = torch.full((C, L), rc, device=dev, dtype=dt)
+     c = target * inv_rc    # (C, L)
+     t = target.clone()     # (C, L)
+
+     d = 1
+     while d < L:
+         # Clone previous step (Hillis-Steele requires read-before-write)
+         r_p = r.clone()
+         c_p = c.clone()
+         t_p = t.clone()
+         # For positions i >= d: scan[i] = scan_prev[i-d] ⋆ scan_prev[i]
+         r_l = r_p[:, :-d]; c_l = c_p[:, :-d]; t_l = t_p[:, :-d]
+         r_r = r_p[:, d:];  c_r = c_p[:, d:];  t_r = t_p[:, d:]
+         r[:, d:] = r_l * r_r
+         c[:, d:] = r_r * c_l + c_r
+         t[:, d:] = torch.minimum(t_r, r_r * t_l + c_r)
+         d *= 2
+
+     # ── Step 3: evaluate at env_init = 1.0 ────────────────────────────────
+     env = torch.minimum(t, r + c)   # (C, L)
+
+     out = audio_t * env
+     return out.squeeze(0) if squeeze else out
+
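The composition rule above can be checked without a GPU or torch at all: a numpy sketch (with hypothetical random targets) runs the same Hillis-Steele scan over clamp-linear maps f(y) = min(t, r*y + c) and compares it against the sequential recurrence.

```python
import numpy as np

rng = np.random.default_rng(2)
L, rc = 256, 0.995
target = np.minimum(1.0, 0.7 / np.abs(rng.standard_normal(L)).clip(1e-12))

# Sequential reference: env[n] = min(t[n], rc*env[n-1] + (1-rc)*t[n]), env_init = 1
env_seq = np.empty(L)
e = 1.0
for n in range(L):
    e = min(target[n], rc * e + (1 - rc) * target[n])
    env_seq[n] = e

# Hillis-Steele inclusive scan with the ⋆ operator from the docstring
r = np.full(L, rc)
c = (1 - rc) * target
t = target.copy()
d = 1
while d < L:
    # read-before-write: snapshot both operands from the previous step
    r_l, c_l, t_l = r[:-d].copy(), c[:-d].copy(), t[:-d].copy()
    r_r, c_r, t_r = r[d:].copy(), c[d:].copy(), t[d:].copy()
    r[d:] = r_l * r_r
    c[d:] = r_r * c_l + c_r
    t[d:] = np.minimum(t_r, r_r * t_l + c_r)
    d *= 2
env_scan = np.minimum(t, r + c)   # evaluate the composite map at env_init = 1.0

print(np.allclose(env_seq, env_scan))  # → True
```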
+
+ def _compute_masks_gpu(
+     yc_t: "torch.Tensor",   # (L,) float
+     thresh: float,
+ ) -> "tuple[torch.Tensor, torch.Tensor, torch.Tensor]":
+     """GPU version of _compute_masks. Returns (Ir, Icp, Icm) bool tensors."""
+     Icp = yc_t >= thresh
+     Icm = yc_t <= -thresh
+     Ir  = ~(Icp | Icm)
+     return Ir, Icp, Icm
+
+
+ def _dilate_masks_gpu(
+     Icp_t: "torch.Tensor",   # (L,) bool
+     Icm_t: "torch.Tensor",   # (L,) bool
+     yc_t: "torch.Tensor",    # (L,) float
+     rel_samp: int,
+ ) -> "tuple[torch.Tensor, torch.Tensor, torch.Tensor]":
+     """
+     GPU forward morphological dilation of soft-mode masks.
+
+     Replaces np.convolve(..., ones(rel_samp+1))[:N] > 0 with
+     F.max_pool1d(causal_pad, kernel=rel_samp+1, stride=1).
+
+     Causal dilation: each True in Icp/Icm infects the next rel_samp positions.
+     Equivalent to convolving with a boxcar of length rel_samp+1 (causal).
+     max_pool1d with left-padding of rel_samp achieves this.
+     """
+     import torch.nn.functional as F
+     if rel_samp <= 0:
+         return ~(Icp_t | Icm_t), Icp_t, Icm_t
+
+     L = yc_t.shape[0]
+     k = rel_samp + 1   # kernel size matching np.ones(rel_samp + 1)
+
+     def _dilate(mask_t):
+         # (1, 1, L) → pad left by rel_samp → max_pool(kernel=k, stride=1) → (1, 1, L)
+         x = mask_t.float().unsqueeze(0).unsqueeze(0)   # (1, 1, L)
+         x = F.pad(x, (rel_samp, 0), value=0.0)         # left-pad for causality
+         x = F.max_pool1d(x, kernel_size=k, stride=1)   # (1, 1, L)
+         return x.squeeze().bool()[:L]
+
+     dil_union = _dilate(Icp_t | Icm_t)   # any clipped → forward dilation
+     new_Icp = dil_union & (yc_t >= 0)
+     new_Icm = dil_union & (yc_t < 0)
+     new_Ir  = ~(new_Icp | new_Icm)
+     return new_Ir, new_Icp, new_Icm
+
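The convolution-to-max-pool equivalence the docstring claims can be verified with a plain-numpy sketch (the mask density and rel_samp value are arbitrary): a sliding max over a left-padded mask with window rel_samp+1 sees exactly the same causal window as the truncated boxcar convolution.

```python
import numpy as np

rng = np.random.default_rng(3)
mask = rng.random(200) < 0.05   # sparse "clipped sample" mask
rel = 7                          # rel_samp: forward infection length

# Reference: causal boxcar convolution, as in the CPU path
ref = np.convolve(mask.astype(float), np.ones(rel + 1))[:mask.size] > 0

# max_pool1d equivalent: left-pad by rel, sliding max with window rel+1
padded = np.concatenate([np.zeros(rel), mask.astype(float)])
pool = np.array([padded[i:i + rel + 1].max() for i in range(mask.size)]) > 0

print(np.array_equal(ref, pool))  # → True
```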
+
+ def _extract_frames_gpu(
+     yc_t: "torch.Tensor",    # (L,) float — DC-removed, normalised
+     Ir_t: "torch.Tensor",    # (L,) bool
+     Icp_t: "torch.Tensor",   # (L,) bool
+     Icm_t: "torch.Tensor",   # (L,) bool
+     M: int,
+     a: int,
+     win_t: "torch.Tensor",   # (M,) float
+     thresh: float,
+ ) -> "tuple":
+     """
+     GPU frame extraction using tensor.unfold — zero Python loops.
+
+     Returns
+     -------
+     yc_active  : (n_active, M) float — windowed frames for SPADE
+     Ir_active  : (n_active, M) bool
+     Icp_active : (n_active, M) bool
+     Icm_active : (n_active, M) bool
+     is_active  : (N,) bool — bypass mask for ALL N frames
+     N          : int — total number of frames
+     idx1s_t    : (N,) long — start indices for WOLA
+     """
+     import math
+     import torch
+     import torch.nn.functional as F
+     L = yc_t.shape[0]
+     N = math.ceil(L / a)
+     dev = yc_t.device
+
+     # Pad to N*a + M to ensure all frames are exactly M samples
+     pad_len = N * a + M - L
+     yc_pad  = F.pad(yc_t, (0, pad_len), value=0.0)
+     Ir_pad  = F.pad(Ir_t.float(),  (0, pad_len), value=1.0).bool()
+     Icp_pad = F.pad(Icp_t.float(), (0, pad_len), value=0.0).bool()
+     Icm_pad = F.pad(Icm_t.float(), (0, pad_len), value=0.0).bool()
+
+     # unfold: (L_padded,) → (N, M) — zero-copy strided view
+     yc_frames  = yc_pad.unfold(0, M, a)    # (N, M)
+     Ir_frames  = Ir_pad.unfold(0, M, a)    # (N, M) bool
+     Icp_frames = Icp_pad.unfold(0, M, a)
+     Icm_frames = Icm_pad.unfold(0, M, a)
+
+     # Per-frame peak → bypass decision (fully parallel)
+     frame_peaks = yc_frames.abs().amax(dim=-1)   # (N,)
+     is_active   = frame_peaks >= thresh          # (N,) bool
+
+     # Active frames
+     yc_active  = yc_frames [is_active] * win_t   # (n_active, M) windowed
+     Ir_active  = Ir_frames [is_active]
+     Icp_active = Icp_frames[is_active]
+     Icm_active = Icm_frames[is_active]
+
+     idx1s_t = torch.arange(N, device=dev, dtype=torch.long) * a   # (N,)
+
+     return yc_active, Ir_active, Icp_active, Icm_active, is_active, N, idx1s_t
+
720
+
721
+ def _wola_gpu(
722
+ x_active_t: "torch.Tensor", # (n_active, M) float — SPADE output
723
+ is_active: "torch.Tensor", # (N,) bool
724
+ idx1s_t: "torch.Tensor", # (N,) long — frame start indices
725
+ yc_t: "torch.Tensor", # (L,) float — original signal (for bypass)
726
+ win_t: "torch.Tensor", # (M,) float
727
+ L: int,
728
+ M: int,
729
+ ) -> "torch.Tensor":
730
+ """
731
+ GPU WOLA overlap-add via scatter_add_ — zero Python loops.
732
+
733
+ Bypassed frames accumulate yc * win^2.
734
+ Active frames accumulate x_spade * win.
735
+ norm_win accumulates win^2 for ALL frames.
736
+ """
737
+ import torch.nn.functional as F
738
+ dev = x_active_t.device
739
+ dt = x_active_t.dtype
740
+ win2 = win_t ** 2 # (M,)
741
+
742
+ N = idx1s_t.shape[0]
743
+
744
+ # Index matrix for ALL N frames: (N, M)
745
+ col = torch.arange(M, device=dev, dtype=torch.long)
746
+ idx_mat = idx1s_t.unsqueeze(1) + col.unsqueeze(0) # (N, M)
747
+
748
+ # Output buffers (L+M to avoid OOB)
749
+ x_out = torch.zeros(L + M, device=dev, dtype=dt)
750
+ norm_out = torch.zeros(L + M, device=dev, dtype=dt)
751
+
752
+ # norm_win: all N frames contribute win^2
753
+ norm_vals = win2.unsqueeze(0).expand(N, -1) # (N, M)
754
+ norm_out.scatter_add_(0, idx_mat.reshape(-1), norm_vals.reshape(-1))
755
+
756
+ # Bypassed frames: yc * win^2
757
+ byp_mask = ~is_active
758
+ if byp_mask.any():
759
+ byp_idx = idx_mat[byp_mask] # (n_byp, M)
760
+ yc_pad = F.pad(yc_t, (0, M)) # (L+M,)
761
+ byp_yc = yc_pad[byp_idx] # (n_byp, M) — gather
762
+ byp_val = byp_yc * win2.unsqueeze(0)
763
+ x_out.scatter_add_(0, byp_idx.reshape(-1), byp_val.reshape(-1))
764
+
765
+ # Active frames: x_spade * win
766
+ if is_active.any():
767
+ act_idx = idx_mat[is_active] # (n_active, M)
768
+ act_val = x_active_t.to(dt) * win_t.unsqueeze(0) # (n_active, M)
769
+ x_out.scatter_add_(0, act_idx.reshape(-1), act_val.reshape(-1))
770
+
771
+ norm_clamped = norm_out[:L].clamp(min=1e-12)
772
+ return x_out[:L] / norm_clamped
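The win² normalisation above makes the overlap-add exact wherever the accumulated window energy is non-zero, regardless of the COLA constant. A loop-form NumPy sketch of the same math (the `wola_reconstruct` helper is hypothetical; the GPU version replaces this loop with `scatter_add_`):

```python
import numpy as np

def wola_reconstruct(x, M=64, a=16):
    """Frame with sqrt-Hann, overlap-add, divide by accumulated win^2."""
    win = np.sqrt(np.hanning(M))          # analysis window = synthesis window
    L = len(x)
    out = np.zeros(L + M)                 # L+M buffer avoids OOB, like the GPU code
    norm = np.zeros(L + M)
    for start in range(0, L, a):          # loop form of the scatter_add_
        frame = np.zeros(M)
        seg = x[start:start + M]
        frame[:len(seg)] = seg
        out[start:start + M] += frame * win * win   # analysis * synthesis
        norm[start:start + M] += win * win          # accumulated win^2
    return out[:L] / np.maximum(norm[:L], 1e-12)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = wola_reconstruct(x)
# interior samples are recovered exactly; sample 0 falls on the window zero
```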
+
+
+ def _rms_match_gpu(
+     x_t:  "torch.Tensor",   # (L,) float — reconstructed signal
+     yc_t: "torch.Tensor",   # (L,) float — input (DC-removed)
+     Ir_t: "torch.Tensor",   # (L,) bool — reliable samples
+     M: int,
+ ) -> "torch.Tensor":
+     """
+     GPU reliable-sample RMS match (v12 safe-Ir).
+
+     Replaces np.convolve(...) for near-clip detection with F.max_pool1d.
+     All ops stay on GPU — returns the rescaled x_t tensor.
+     """
+     import torch.nn.functional as F
+
+     if Ir_t.sum() == 0:
+         return x_t
+
+     # Near-clip: any Ir sample within M of a clip boundary is "contaminated"
+     clip_f = (~Ir_t).float().unsqueeze(0).unsqueeze(0)   # (1, 1, L)
+     near = F.max_pool1d(clip_f, M, stride=1,
+                         padding=M // 2).squeeze()[:len(Ir_t)] > 0
+
+     safe_Ir = Ir_t & ~near
+     use_Ir  = safe_Ir if safe_Ir.sum() >= 100 else Ir_t
+
+     rms_in  = yc_t[use_Ir].pow(2).mean().sqrt()
+     rms_out = x_t[use_Ir].pow(2).mean().sqrt()
+     if rms_out > 1e-12 and rms_in > 1e-12:
+         x_t = x_t * (rms_in / rms_out)
+     return x_t
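Stripped of the safe-Ir mask logic, the RMS match is a single scalar gain. A NumPy sketch (the `rms_match` helper is hypothetical):

```python
import numpy as np

def rms_match(est, ref, mask):
    """Scale `est` so its RMS over `mask` equals that of `ref`."""
    rms_ref = np.sqrt(np.mean(ref[mask] ** 2))
    rms_est = np.sqrt(np.mean(est[mask] ** 2))
    return est * (rms_ref / rms_est) if rms_est > 1e-12 else est

ref = np.array([0.5, -0.5, 0.5, -0.5])
est = ref * 2.0                        # reconstruction came out 6 dB hot
mask = np.ones(4, dtype=bool)          # all samples "reliable"
matched = rms_match(est, ref, mask)
```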
+
+
+ def _cosine_sim_gpu(
+     gt_t:  "torch.Tensor",   # (L,) float — GT residual
+     est_t: "torch.Tensor",   # (L,) float — estimated residual
+     win_samples: int = 1024,
+     hop_samples: int = 256,
+ ) -> float:
+     """
+     GPU cosine similarity via torch.stft.
+
+     Replaces scipy.signal.stft + numpy band loops with a single GPU STFT
+     call and vectorised band computation. Magnitude spectrograms are
+     non-negative, so the result lies in [0, 1]; the short-signal
+     time-domain fallback can be negative.
+     """
+     import torch
+
+     L = min(gt_t.shape[0], est_t.shape[0])
+     g = gt_t[:L].float()
+     e = est_t[:L].float()
+     dev = g.device
+
+     win_s = min(win_samples, max(32, L // 4))
+     hop_s = min(hop_samples, win_s // 2)
+
+     if L < win_s or win_s < 32:
+         denom = g.norm() * e.norm() + 1e-12
+         return (g * e).sum().item() / denom.item()
+
+     window = torch.hann_window(win_s, device=dev)
+     # torch.stft: input (L,) → (F, T) complex
+     Zg = torch.stft(g, win_s, hop_s, window=window,
+                     return_complex=True, normalized=False)   # (F, T)
+     Ze = torch.stft(e, win_s, hop_s, window=window,
+                     return_complex=True, normalized=False)
+
+     Mg = Zg.abs()   # (F, T)
+     Me = Ze.abs()   # (F, T)
+
+     dot    = (Mg * Me).sum(dim=0)              # (T,)
+     norm_g = Mg.norm(dim=0).clamp(min=1e-12)   # (T,)
+     norm_e = Me.norm(dim=0).clamp(min=1e-12)   # (T,)
+
+     return (dot / (norm_g * norm_e)).mean().item()
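The same per-frame magnitude-spectrogram cosine can be written with a plain rFFT loop in NumPy; identical signals score 1 and, since magnitudes are non-negative, the score stays in [0, 1] (the `spec_cosine` helper is hypothetical):

```python
import numpy as np

def spec_cosine(a, b, win=256, hop=64):
    """Mean per-frame cosine similarity of two magnitude spectrograms."""
    w = np.hanning(win)
    frames = range(0, min(len(a), len(b)) - win + 1, hop)
    Ma = np.stack([np.abs(np.fft.rfft(a[i:i + win] * w)) for i in frames])
    Mb = np.stack([np.abs(np.fft.rfft(b[i:i + win] * w)) for i in frames])
    dot = (Ma * Mb).sum(axis=1)
    den = np.linalg.norm(Ma, axis=1) * np.linalg.norm(Mb, axis=1) + 1e-12
    return float((dot / den).mean())

rng = np.random.default_rng(1)
sig = rng.standard_normal(4096)
other = rng.standard_normal(4096)
```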
+
+
+ # ── GPU corpus cache — upload limited arrays to GPU once per build_corpus ──
+ # Keyed by (id(item), device_str). Cleared at program exit.
+ _GPU_CORPUS_CACHE: dict = {}
+
+
+ def evaluate_corpus_gpu_mega(
+     items: List[Dict],
+     params_dict: dict,
+     device: str,
+ ) -> List[Optional[float]]:
+     """
+     Fully-GPU pipeline — v13.
+
+     Pass 0  Load the GPU tensors from the cache (one-time upload per corpus build).
+     Pass 1  Per item: normalise + DC + masks + dilation + unfold (GPU).
+             Collects the active frames into the mega-tensor.
+     Pass 2  _sspade_batch_gpu — unchanged, already on GPU.
+     Pass 3  Per item: WOLA + RMS match + cosine sim (GPU).
+     No GPU→CPU transfer until the final scores.
+
+     Compared to v12:
+       - all Python loops removed from the hot path
+       - ThreadPoolExecutor removed (GPU serialisation is already the bottleneck)
+       - numpy used ONLY to initialise the corpus arrays (build_corpus)
+         and to collect the final scores (one .item() per file)
+     """
+     try:
+         import torch
+         import torch.nn.functional as F
+         from scipy.signal.windows import hann as _hann  # scipy.signal.hann was removed
+     except ImportError:
+         return [evaluate_one(item, dict(params_dict)) for item in items]
+
+     # ── Extract flags ─────────────────────────────────────────────────────
+     p2 = dict(params_dict)
+     multiband    = p2.pop("multiband", False)
+     macro_expand = p2.pop("macro_expand", False)
+     macro_ratio  = p2.pop("macro_ratio", 1.0)
+     lf_delta_db  = p2.pop("lf_delta_db", p2.get("delta_db", 1.5))
+
+     if multiband:
+         return [evaluate_one(item, dict(params_dict)) for item in items]
+
+     # ── Build DeclipParams ────────────────────────────────────────────────
+     sr_ref = items[0]["sr"]
+     spade_kw = dict(
+         macro_expand=macro_expand,
+         macro_ratio=macro_ratio if macro_expand else 1.0,
+         macro_release_ms=200.0,
+         macro_attack_ms=10.0,
+     )
+     try:
+         p = DeclipParams(sample_rate=sr_ref, **FIXED_SOLVER, **p2, **spade_kw)
+     except Exception as exc:
+         warnings.warn(f"evaluate_corpus_gpu_mega: DeclipParams error: {exc}")
+         return [None] * len(items)
+
+     M = p.window_length
+     a = p.hop_length
+     NORM_TGT = 0.9
+     win_np = np.sqrt(_hann(M, sym=False)).astype(np.float32)
+     win_t  = torch.from_numpy(win_np).to(device=device)   # (M,) on GPU
+
+     # ── LF mask tensor ────────────────────────────────────────────────────
+     lf_mask_t = None
+     if p.lf_cutoff_hz > 0.0 and p.lf_k_min > 0:
+         lf_mask_np = _build_lf_mask(M, p.frame, sr_ref, p.lf_cutoff_hz)
+         lf_mask_t = torch.tensor(lf_mask_np, dtype=torch.bool, device=device)
+
+     g_max = (10.0 ** (p.max_gain_db / 20.0) if p.max_gain_db > 0.0
+              else float("inf"))
+
+     # ── Pass 0: GPU corpus cache ──────────────────────────────────────────
+     # Upload item["limited"] to GPU once; reuse across trials.
+     limited_gpu: list = []
+     gt_res_gpu:  list = []
+     for item in items:
+         key = (id(item), device)
+         if key not in _GPU_CORPUS_CACHE:
+             ltd = np.asarray(item["limited"], dtype=np.float32)
+             if ltd.ndim == 2:
+                 ltd = ltd[:, 0]   # take L channel; corpus is mono-per-item
+             _GPU_CORPUS_CACHE[key] = torch.from_numpy(ltd).to(device=device)
+         limited_gpu.append(_GPU_CORPUS_CACHE[key])
+
+         gt_key = (id(item), device, "gt")
+         if gt_key not in _GPU_CORPUS_CACHE:
+             gt = np.asarray(item["gt_res"], dtype=np.float32)
+             if gt.ndim == 2:
+                 gt = gt[:, 0]
+             _GPU_CORPUS_CACHE[gt_key] = torch.from_numpy(gt).to(device=device)
+         gt_res_gpu.append(_GPU_CORPUS_CACHE[gt_key])
+
+     # ── Pass 1: GPU preprocessing + frame extraction ──────────────────────
+     # Items are processed sequentially; each step is a GPU kernel
+     # (no per-sample Python work).
+     item_states: list = []
+     all_yc_active:  list = []   # (n_i, M) tensors — will be cat'd
+     all_Ir_active:  list = []
+     all_Icp_active: list = []
+     all_Icm_active: list = []
+
+     for i, item in enumerate(items):
+         try:
+             sr = item["sr"]
+             yc_orig = limited_gpu[i]   # (L,) on GPU
+
+             # Normalise
+             gp = float(yc_orig.abs().max().item())
+             if gp > NORM_TGT:
+                 scale = NORM_TGT / gp
+                 yc = yc_orig * scale
+             else:
+                 scale = 1.0
+                 yc = yc_orig
+
+             # DC removal
+             dc = float(yc.mean().item())
+             yc = yc - dc
+
+             # Ceiling + threshold (GPU scalars)
+             ceiling = float(torch.maximum(yc.max(), (-yc).max()).item())
+             thresh = ceiling * (10.0 ** (-p.delta_db / 20.0))
+             if thresh <= 0.0:
+                 item_states.append(None)
+                 continue
+
+             # Masks — GPU boolean ops
+             Ir_t, Icp_t, Icm_t = _compute_masks_gpu(yc, thresh)
+
+             # Mask dilation — GPU max_pool
+             rs = 0
+             if p.release_ms > 0.0:
+                 rs = max(0, round(p.release_ms * sr / 1000.0))
+                 if rs > 0:
+                     Ir_t, Icp_t, Icm_t = _dilate_masks_gpu(Icp_t, Icm_t, yc, rs)
+
+             # Macro expand — still CPU via the imported function; runs on numpy
+             if macro_expand and macro_ratio > 1.0:
+                 yc_np = yc.cpu().numpy().astype(float)
+                 yc_np = _macro_expand_pass(yc_np, sr,
+                                            attack_ms=p.macro_attack_ms,
+                                            release_ms=p.macro_release_ms,
+                                            ratio=macro_ratio)
+                 yc = torch.from_numpy(yc_np.astype(np.float32)).to(device=device)
+                 Ir_t, Icp_t, Icm_t = _compute_masks_gpu(yc, thresh)
+                 if p.release_ms > 0.0 and rs > 0:
+                     Ir_t, Icp_t, Icm_t = _dilate_masks_gpu(Icp_t, Icm_t, yc, rs)
+
+             L = yc.shape[0]
+
+             # Frame extraction — GPU unfold
+             yc_act, Ir_act, Icp_act, Icm_act, is_active, N, idx1s_t = \
+                 _extract_frames_gpu(yc, Ir_t, Icp_t, Icm_t, M, a, win_t, thresh)
+
+             frame_offset = sum(s["n_active"] for s in item_states if s is not None)
+             n_active = int(is_active.sum().item())
+
+             item_states.append({
+                 "file":         item["file"],
+                 "yc":           yc,
+                 "scale":        scale,
+                 "Ir_t":         Ir_t,
+                 "L":            L,
+                 "is_active":    is_active,
+                 "N":            N,
+                 "idx1s_t":      idx1s_t,
+                 "frame_offset": frame_offset,
+                 "n_active":     n_active,
+                 "gt_t":         gt_res_gpu[i],
+                 "limited_t":    limited_gpu[i],
+                 "sr":           sr,
+             })
+
+             if n_active > 0:
+                 all_yc_active.append(yc_act)
+                 all_Ir_active.append(Ir_act)
+                 all_Icp_active.append(Icp_act)
+                 all_Icm_active.append(Icm_act)
+
+         except Exception as exc:
+             warnings.warn(f"evaluate_corpus_gpu_mega preprocess ({item['file']}): {exc}")
+             item_states.append(None)
+
+     if not all_yc_active:
+         return [None] * len(items)
+
+     # Concatenate into the mega-batch — single GPU allocation
+     yc_mega  = torch.cat(all_yc_active, dim=0)   # (total_active, M)
+     Ir_mega  = torch.cat(all_Ir_active, dim=0)
+     Icp_mega = torch.cat(all_Icp_active, dim=0)
+     Icm_mega = torch.cat(all_Icm_active, dim=0)
+
+     total_frames  = yc_mega.shape[0]
+     total_meta    = sum(s["N"] for s in item_states if s is not None)
+     bypass_frames = total_meta - total_frames
+     vram_mb = total_frames * M * 4 * 4 / 1024 ** 2
+     print(f"  [mega-batch] {total_frames} active / {total_meta} total frames "
+           f"({100*bypass_frames/max(total_meta,1):.0f}% bypassed) "
+           f"≈{vram_mb:.0f} MB GPU")
+
+     # ── Pass 2 (GPU): _sspade_batch_gpu — unchanged ───────────────────────
+     try:
+         x_mega, _ = _sspade_batch_gpu(
+             yc_mega, Ir_mega, Icp_mega, Icm_mega,
+             p.frame, p.s, p.r, p.eps, p.max_iter,
+             g_max=g_max, lf_mask_t=lf_mask_t, k_lf_min=p.lf_k_min,
+             gpu_dtype=getattr(p, "gpu_dtype", "float32"),
+         )
+     except Exception as exc:
+         warnings.warn(f"evaluate_corpus_gpu_mega GPU pass: {exc}")
+         return [None] * len(items)
+     finally:
+         del yc_mega, Ir_mega, Icp_mega, Icm_mega
+
+     # ── Pass 3 (GPU): WOLA + RMS match + cosine sim ───────────────────────
+     # All operations stay on GPU. Only .item() at the very end reads the score.
+     scores: List[Optional[float]] = []
+     NORM_LIN = 10.0 ** (RESIDUAL_DBFS / 20.0)
+
+     for state in item_states:
+         if state is None:
+             scores.append(None)
+             continue
+         try:
+             yc        = state["yc"]           # (L,) float GPU
+             scale     = state["scale"]
+             L         = state["L"]
+             Ir_t      = state["Ir_t"]
+             is_active = state["is_active"]    # (N,) bool
+             idx1s_t   = state["idx1s_t"]      # (N,) long
+             f_off     = state["frame_offset"]
+             n_act     = state["n_active"]
+             gt_t      = state["gt_t"]         # (L_gt,) float GPU
+             ltd_t     = state["limited_t"]    # (L,) float GPU
+             sr        = state["sr"]
+
+             # Slice this item's active frames
+             x_item = x_mega[f_off:f_off + n_act] if n_act > 0 \
+                      else torch.empty((0, M), device=device)
+
+             # GPU WOLA
+             x_t = _wola_gpu(x_item, is_active, idx1s_t, yc, win_t, L, M)
+
+             # GPU RMS match
+             x_t = _rms_match_gpu(x_t, yc, Ir_t, M)
+
+             # Un-scale
+             x_t = x_t / scale
+
+             # Residual — GPU subtraction
+             ltd_ch = ltd_t[:L]   # align lengths
+             res_raw = x_t - ltd_ch
+
+             # Normalise to RESIDUAL_DBFS (GPU)
+             pk = res_raw.abs().max().clamp(min=1e-12)
+             res_norm = res_raw * (NORM_LIN / pk)
+
+             # Align with gt_t
+             gt_ch = gt_t[:, 0] if gt_t.dim() == 2 else gt_t
+             Lmin = min(res_norm.shape[0], gt_ch.shape[0])
+
+             # GPU cosine sim via torch.stft
+             sc = _cosine_sim_gpu(gt_ch[:Lmin], res_norm[:Lmin],
+                                  win_samples=1024, hop_samples=256)
+             scores.append(sc)
+
+         except Exception as exc:
+             warnings.warn(f"evaluate_corpus_gpu_mega WOLA/score ({state.get('file', '?')}): {exc}")
+             scores.append(None)
+
+     return scores
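The `frame_offset` / `n_active` bookkeeping that maps mega-batch rows back to items reduces to a running prefix sum. A NumPy sketch of that slicing (names here are illustrative, not the module's):

```python
import numpy as np

# per-item frame batches of different sizes, as produced by Pass 1
batches = [np.full((2, 4), 0.0), np.full((3, 4), 1.0), np.full((1, 4), 2.0)]

offsets, acc = [], 0
for b in batches:
    offsets.append(acc)       # frame_offset for this item
    acc += b.shape[0]         # running total of active frames

mega = np.concatenate(batches, axis=0)   # (6, 4) "mega-batch"

# Pass 3 recovers item 1's rows by offset + count
item1 = mega[offsets[1]:offsets[1] + batches[1].shape[0]]
```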
+
+
+ # =============================================================================
+ # OPTUNA OBJECTIVE
+ # =============================================================================
+
+ def make_objective(corpus: List[Dict]):
+     def objective(trial: "optuna.Trial") -> float:
+         # ── Core parameters ───────────────────────────────────────────────
+         delta_db = trial.suggest_float("delta_db", 1.5, 3.5, step=0.05)
+         win_exp  = trial.suggest_int("win_exp", 9, 11)
+         win      = 2 ** win_exp
+         hop_div  = trial.suggest_categorical("hop_div", [4, 8])
+         hop      = win // hop_div
+         rel_ms   = trial.suggest_float("release_ms", 10.0, 200.0, step=5.0)
+         gain_db  = trial.suggest_float("max_gain_db", 2.0, 12.0, step=0.5)
+         eps      = trial.suggest_categorical("eps", [0.03, 0.05, 0.1])
+         max_iter = trial.suggest_categorical("max_iter", [250, 500, 1000])
+
+         # ── Multiband + macro expand ──────────────────────────────────────
+         # STATIC SPACE: lf_delta_db and macro_ratio are ALWAYS sampled by
+         # the TPE (fixed space) and then used conditionally at runtime.
+         # This removes the RandomSampler fallback that degraded multivariate
+         # TPE performance with dynamic spaces.
+         multiband    = trial.suggest_categorical("multiband", [False, True])
+         macro_expand = trial.suggest_categorical("macro_expand", [False, True])
+
+         # Always sampled (fixed range), used only when the flag is True:
+         lf_delta_db = trial.suggest_float("lf_delta_db", 0.5, 2.0, step=0.05)
+         macro_ratio = trial.suggest_float("macro_ratio", 1.1, 2.0, step=0.05)
+
+         # ── v12: frequency-stratified thresholding ────────────────────────
+         # lf_cutoff_hz: threshold in Hz separating the "guaranteed LF" bins
+         #   from the HF ones. With M=512, sr=44100: bin_k = k * sr / (2M)
+         #   → lf_cutoff=1000 Hz → 23 LF bins.
+         # lf_k_min: how many of those bins are guaranteed at every ADMM
+         #   iteration. 0 = disabled (behaviour identical to v11).
+         lf_cutoff_hz = trial.suggest_categorical("lf_cutoff_hz", [0.0, 500.0, 1000.0, 2000.0])
+         lf_k_min     = trial.suggest_int("lf_k_min", 0, 16)
+         # Note: when lf_cutoff_hz=0 or lf_k_min=0 the feature is disabled.
+         # The TPE learns on its own when enabling it pays off.
+
+         # If multiband=False, lf_delta_db is ignored in evaluate_one.
+         # If macro_expand=False, macro_ratio is ignored in evaluate_one.
+
+         params = dict(
+             delta_db      = delta_db,
+             window_length = win,
+             hop_length    = hop,
+             release_ms    = rel_ms,
+             max_gain_db   = gain_db,
+             eps           = eps,
+             max_iter      = max_iter,
+             # high-level flags (extracted in evaluate_one, not passed raw)
+             multiband     = multiband,
+             lf_delta_db   = lf_delta_db,
+             macro_expand  = macro_expand,
+             macro_ratio   = macro_ratio,
+             # v12: passed straight to DeclipParams (not extracted in evaluate_one)
+             lf_cutoff_hz  = lf_cutoff_hz,
+             lf_k_min      = lf_k_min,
+         )
+
+         # ── Per-trial shuffle with a reproducible seed ────────────────────
+         rng_shuffle = np.random.default_rng(trial.number)
+         shuffled_idx = rng_shuffle.permutation(len(corpus)).tolist()
+         midpoint = len(corpus) // 2
+         ordered_items = [corpus[idx] for idx in shuffled_idx]
+
+         # ── GPU mega-batch: all corpus frames in a single kernel ──────────
+         # Detect the GPU device available for this call.
+         # If unavailable, evaluate_corpus_gpu_mega falls back to evaluate_one.
+         _gpu_dev = "cpu"
+         try:
+             import torch
+             if torch.cuda.is_available():
+                 _gpu_dev = "cuda"
+         except ImportError:
+             pass
+
+         # First half of the corpus → prune check → second half
+         # (keeps the MedianPruner benefit without N separate kernels)
+         first_half  = ordered_items[:midpoint + 1]
+         second_half = ordered_items[midpoint + 1:]
+
+         scores_first = evaluate_corpus_gpu_mega(first_half, dict(params), _gpu_dev)
+         scores = [sc for sc in scores_first if sc is not None]
+
+         if scores:
+             trial.report(float(np.mean(scores)), step=midpoint)
+             if trial.should_prune():
+                 raise optuna.TrialPruned()
+
+         if second_half:
+             scores_second = evaluate_corpus_gpu_mega(second_half, dict(params), _gpu_dev)
+             scores.extend(sc for sc in scores_second if sc is not None)
+
+         if not scores:
+             return 0.0
+         mean_score = float(np.mean(scores))
+         trial.report(mean_score, step=len(corpus))
+         return mean_score
+
+     return objective
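The static-space pattern in the objective (sample unconditionally, apply conditionally) can be exercised without Optuna using a stand-in trial object; `FakeTrial` and `sample_params` are hypothetical illustrations, not part of the script:

```python
class FakeTrial:
    """Records every sampled parameter name; returns deterministic values."""
    def __init__(self):
        self.sampled = []
    def suggest_float(self, name, lo, hi, step=None):
        self.sampled.append(name)
        return lo
    def suggest_categorical(self, name, choices):
        self.sampled.append(name)
        return choices[0]

def sample_params(trial):
    macro_expand = trial.suggest_categorical("macro_expand", [False, True])
    # sampled unconditionally, so the sampler always sees a fixed space ...
    macro_ratio = trial.suggest_float("macro_ratio", 1.1, 2.0, step=0.05)
    # ... but applied conditionally at runtime
    return macro_ratio if macro_expand else 1.0

t = FakeTrial()
ratio = sample_params(t)
```

Both names appear in `t.sampled` even though `macro_ratio` is unused when the flag is off, which is exactly what keeps multivariate TPE away from its dynamic-space fallback.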
+
+
+ # =============================================================================
+ # REPORT + CSV
+ # =============================================================================
+
+ def print_report(study: "optuna.Study", top_n: int = 20):
+     trials = sorted(
+         [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE],
+         key=lambda t: t.value or 0, reverse=True,
+     )
+     if not trials:
+         print("No completed trials.")
+         return
+
+     if _HAS_RICH:
+         _console.rule("[bold cyan]BAYESIAN SWEEP RESULTS[/]")
+         tbl = Table(show_header=True, header_style="bold cyan", show_lines=False)
+         for col, w in [("#", 4), ("score", 9), ("ddb", 6), ("LFd", 5), ("win", 6),
+                        ("hop", 4), ("rel", 6), ("gain", 6), ("eps", 5), ("iter", 5),
+                        ("MB", 3), ("ME", 3), ("MR", 5), ("LFcut", 6), ("LFk", 4)]:
+             tbl.add_column(col, justify="right", width=w)
+         for rank, t in enumerate(trials[:top_n], 1):
+             p   = t.params
+             win = 2 ** p["win_exp"]
+             hop = win // p["hop_div"]
+             mb  = "Y" if p.get("multiband") else "n"
+             me  = "Y" if p.get("macro_expand") else "n"
+             lfc = p.get("lf_cutoff_hz", 0.0)
+             lfk = p.get("lf_k_min", 0)
+             sty = "bold green" if rank == 1 else ("yellow" if rank <= 3 else "")
+             tbl.add_row(
+                 str(rank), f"{t.value:.5f}",
+                 f"{p['delta_db']:.2f}",
+                 f"{p.get('lf_delta_db', p['delta_db']):.2f}",
+                 str(win), str(hop),
+                 f"{p['release_ms']:.0f}", f"{p['max_gain_db']:.1f}",
+                 str(p['eps']), str(p['max_iter']),
+                 mb, me, f"{p.get('macro_ratio', 1.0):.2f}",
+                 f"{lfc:.0f}", str(lfk),
+                 style=sty,
+             )
+         _console.print(tbl)
+     else:
+         hdr = (f"{'#':>3} {'score':>8} {'ddb':>5} {'LFd':>5} {'win':>5}"
+                f" {'hop':>4} {'rel':>6} {'gain':>5} {'eps':>5} {'iter':>5}"
+                f" {'MB':>3} {'ME':>3} {'MR':>5} {'LFcut':>6} {'LFk':>4}")
+         print(hdr); print("-" * len(hdr))
+         for rank, t in enumerate(trials[:top_n], 1):
+             p   = t.params
+             win = 2 ** p["win_exp"]
+             hop = win // p["hop_div"]
+             mb  = "Y" if p.get("multiband") else "n"
+             me  = "Y" if p.get("macro_expand") else "n"
+             lfc = p.get("lf_cutoff_hz", 0.0)
+             lfk = p.get("lf_k_min", 0)
+             print(f"{rank:>3} {t.value:>8.5f} {p['delta_db']:>5.2f}"
+                   f" {p.get('lf_delta_db', p['delta_db']):>5.2f} {win:>5}"
+                   f" {hop:>4} {p['release_ms']:>6.0f} {p['max_gain_db']:>5.1f}"
+                   f" {str(p['eps']):>5} {p['max_iter']:>5}"
+                   f" {mb:>3} {me:>3} {p.get('macro_ratio', 1.0):>5.2f}"
+                   f" {lfc:>6.0f} {lfk:>4}")
+
+     best = trials[0]
+     p    = best.params
+     win  = 2 ** p["win_exp"]
+     hop  = win // p["hop_div"]
+     n_pruned = sum(1 for t in study.trials
+                    if t.state == optuna.trial.TrialState.PRUNED)
+
+     print("\n" + "═" * 60)
+     print("OPTIMAL CONFIG")
+     print("═" * 60)
+     print(f"""
+ params = DeclipParams(
+     algo            = "sspade",
+     frame           = "rdft",
+     mode            = "soft",
+     delta_db        = {p['delta_db']:.2f},
+     window_length   = {win},
+     hop_length      = {hop},
+     release_ms      = {p['release_ms']:.1f},
+     max_gain_db     = {p['max_gain_db']:.1f},
+     eps             = {p['eps']},
+     max_iter        = {p['max_iter']},
+     sample_rate     = sr,
+     multiband       = {p.get('multiband', False)},
+     band_crossovers = ({BAND_CROSSOVER_HZ},),
+     band_delta_db   = ({p.get('lf_delta_db', p['delta_db']):.2f}, {p['delta_db']:.2f}),
+     macro_expand    = {p.get('macro_expand', False)},
+     macro_ratio     = {p.get('macro_ratio', 1.0):.2f},
+     lf_cutoff_hz    = {p.get('lf_cutoff_hz', 0.0):.1f},   # v12
+     lf_k_min        = {p.get('lf_k_min', 0)},             # v12
+     n_jobs          = -1,
+     show_progress   = True,
+ )""")
+     print(f"\n→ Best score  : {best.value:.5f}")
+     print(f"  Trials done : {len(trials)}")
+     print(f"  Pruned      : {n_pruned}")
+
+
+ # =============================================================================
+ # DEBUG EXPORT
+ # =============================================================================
+
+ # SPADE parameters used for debugging (best known from the previous grid sweep).
+ # If an Optuna DB exists and has completed trials, they are replaced by its best.
+ DEBUG_PARAMS = dict(
+     delta_db      = 1.5,
+     window_length = 1024,
+     hop_length    = 256,
+     release_ms    = 100.0,
+     max_gain_db   = 6.0,
+     eps           = 0.05,
+     max_iter      = 500,
+ )
+
+
+ def _pk_dbfs(a: np.ndarray) -> float:
+     pk = float(np.max(np.abs(a)))
+     return 20.0 * np.log10(pk) if pk > 1e-12 else -999.0
+
+
+ def _rms_dbfs(a: np.ndarray) -> float:
+     rms = float(np.sqrt(np.mean(a.astype(float) ** 2)))
+     return 20.0 * np.log10(rms) if rms > 1e-12 else -999.0
+
+
+ def _write_wav(path: Path, audio: np.ndarray, sr: int) -> None:
+     """Writes a float32 WAV without clipping. Warns if peak > 1.0."""
+     a2d = ensure_2d(audio).astype(np.float32)
+     pk  = float(np.max(np.abs(a2d)))
+     if pk > 1.0:
+         print(f"  [WARN] {path.name}: peak={pk:.4f} > 1.0 "
+               f"(+{20*np.log10(pk):.2f} dBFS) — float32, not clipped")
+     sf.write(str(path), a2d, sr, subtype="FLOAT")
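A quick sanity check of the dBFS helpers: a full-scale sine has a 0 dBFS peak and a -3.01 dBFS RMS (1/√2). The sketch below reimplements the two trivial helpers so it is self-contained:

```python
import numpy as np

def pk_dbfs(a):
    pk = float(np.max(np.abs(a)))
    return 20.0 * np.log10(pk) if pk > 1e-12 else -999.0

def rms_dbfs(a):
    rms = float(np.sqrt(np.mean(np.asarray(a, dtype=float) ** 2)))
    return 20.0 * np.log10(rms) if rms > 1e-12 else -999.0

# 100 Hz full-scale sine, one second at 48 kHz: an integer number of cycles,
# sampled so the grid hits the exact peak
t = np.arange(48000) / 48000.0
sine = np.sin(2 * np.pi * 100.0 * t)
```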
+
+
+ def debug_export(
+     corpus: list,
+     base_dir: Path,
+     out_dir: Path,
+     n_files: int,
+     spade_params: dict,
+ ) -> None:
+     """
+     Exports debug WAVs for the first n_files corpus items.
+
+     Six float32 WAVs are written per file:
+       01_orig_with_noise  drum + pink noise, normalised to 0 dBFS peak
+                           (signal before the limiter)
+       02_limited          synthetic limiter output (SPADE input)
+       03_gt_residual      orig_with_noise - limited, @RESIDUAL_DBFS peak
+       04_spade_output     SPADE output (float32, may exceed 0 dBFS)
+       05_res_iter         spade_output - limited, @RESIDUAL_DBFS peak
+       06_diff_residuals   gt_residual - res_iter
+                           ideal = silence = -inf dB
+
+     Prints a table with peak dBFS and RMS dBFS for every track.
+
+     EXPECTED levels:
+       01 peak  = 0.00 dBFS (normalised)
+       02 peak ~ -LIMITER_THRESHOLD_DB dBFS (e.g. -1.5 dBFS)
+       03 peak  = RESIDUAL_DBFS (e.g. -3.0 dBFS)
+       04 peak may exceed 0 dBFS (recovered transient)
+       05 peak  = RESIDUAL_DBFS (e.g. -3.0 dBFS)
+       06 peak << 0 dBFS (lower = SPADE closer to the GT)
+     """
+     out_dir.mkdir(parents=True, exist_ok=True)
+     items = corpus[:n_files]
+     col_w = max(len(it["file"]) for it in items) + 2
+
+     HDR = (f"  {'file':<{col_w}} {'track':<22}"
+            f" {'peak dBFS':>10} {'RMS dBFS':>9} note")
+     SEP = "  " + "-" * (len(HDR) - 2)
+
+     print()
+     if _HAS_RICH:
+         _console.rule("[bold cyan]DEBUG EXPORT[/]")
+     else:
+         print("=" * 65)
+         print("DEBUG EXPORT")
+         print("=" * 65)
+
+     print(f"  Output dir    : {out_dir}")
+     print(f"  SPADE params  : delta_db={spade_params['delta_db']}"
+           f" win={spade_params['window_length']}"
+           f" hop={spade_params['hop_length']}"
+           f" rel={spade_params['release_ms']}ms"
+           f" gain={spade_params['max_gain_db']}dB")
+     print(f"  Files exported: {len(items)}")
+     print()
+     print("  Expected levels:")
+     print(f"    01_orig_with_noise : ~  0.00 dBFS (normalised before the limiter)")
+     print(f"    02_limited         : ~ {-LIMITER_THRESHOLD_DB:+.2f} dBFS (limiter output)")
+     print(f"    03_gt_residual     : = {RESIDUAL_DBFS:+.2f} dBFS (normalised)")
+     print(f"    04_spade_output    : > 0 dBFS possible (recovered transient)")
+     print(f"    05_res_iter        : = {RESIDUAL_DBFS:+.2f} dBFS (normalised)")
+     print(f"    06_diff_residuals  : << 0 dBFS (lower = more correct pipeline)")
+     print()
+     print(HDR)
+
+     diff_peaks = []
+
+     for file_index, item in enumerate(items):
+         sr      = item["sr"]
+         limited = item["limited"].copy()
+         gt_res  = item["gt_res"]
+         stem    = Path(item["file"]).stem
+
+         # ── Rebuild orig_with_noise ───────────────────────────────────────
+         # Re-runs the same build_corpus pipeline with the identical seed
+         orig_with_noise = None
+         for folder in DRUM_DIRS:
+             candidate = base_dir / folder / item["file"]
+             if candidate.exists():
+                 try:
+                     raw, _ = sf.read(str(candidate), always_2d=True)
+                     raw = raw.astype(float)
+                     rng = np.random.default_rng(seed=file_index)
+                     orig_0 = normalize_to_0dBFS(raw)
+                     mixed = ensure_2d(mix_pink_noise(orig_0, sr,
+                                                      PINK_NOISE_LEVEL_DB, rng))
+                     orig_with_noise = ensure_2d(normalize_to_0dBFS(mixed))
+                 except Exception:
+                     pass
+                 break
+
+         if orig_with_noise is None:
+             # Fallback: rebuild from limited + gt_res (approximation)
+             gt_scale = 10 ** (RESIDUAL_DBFS / 20.0)            # gt_res peak
+             lim_peak = 10 ** (-LIMITER_THRESHOLD_DB / 20.0)    # expected limited peak
+             gt_raw   = gt_res * (lim_peak / (gt_scale + 1e-12))
+             orig_with_noise = ensure_2d(normalize_to_0dBFS(limited + gt_raw))
+
+         # ── Run SPADE ─────────────────────────────────────────────────────
+         try:
+             p = DeclipParams(sample_rate=sr, **FIXED_SOLVER, **spade_params)
+             fixed, _ = declip(limited.copy(), p)
+             fixed_2d = ensure_2d(fixed)
+         except Exception as exc:
+             print(f"  [SPADE ERROR] {item['file']}: {exc}")
+             continue
+
+         # ── Iteration residual (RAW scale, no normalisation) ──────────────
+         # IMPORTANT: the diff must be taken on the common scale BEFORE
+         # normalising the two residuals, otherwise independent normalisation
+         # destroys the relative-amplitude information.
+         #
+         # gt_res and res_raw are both derived from the same limited signal,
+         # so they share the same reference scale.
+         # gt_res was already normalised to RESIDUAL_DBFS in build_corpus;
+         # we must bring it back to the raw scale for the comparison.
+         #
+         # Common scale: we use the limited peak as the reference.
+         # limited peak ≈ 10^(-LIMITER_THRESHOLD_DB/20) → known absolute scale.
+         res_raw = fixed_2d - limited   # SPADE residual on the absolute scale
+
+         # gt_res_raw: rebuild from the normalised scale
+         #   gt_res = gt_res_raw / peak(gt_res_raw) * 10^(RESIDUAL_DBFS/20)
+         #   → gt_res_raw = gt_res * peak(gt_res_raw) / 10^(RESIDUAL_DBFS/20)
+         # Since peak(gt_res_raw) is not stored, we estimate it:
+         #   gt_res_raw ≈ orig_with_noise - limited (rebuilt)
+         gt_res_raw_approx = ensure_2d(orig_with_noise) - limited
+         L = min(gt_res_raw_approx.shape[0], res_raw.shape[0])
+
+         # ── Diff on the common (raw, unnormalised) scale ──────────────────
+         diff_raw = gt_res_raw_approx[:L] - res_raw[:L]
+
+         # ── Time-domain cosine similarity (scalar, on the L channel) ──────
+         g_flat = gt_res_raw_approx[:L, 0] if gt_res_raw_approx.ndim == 2 else gt_res_raw_approx[:L]
+         e_flat = res_raw[:L, 0] if res_raw.ndim == 2 else res_raw[:L]
+         cos_sim_td = float(
+             np.dot(g_flat, e_flat) /
+             (np.linalg.norm(g_flat) * np.linalg.norm(e_flat) + 1e-12)
+         )
+
+         # ── Theoretical diff floor due to the pink noise ──────────────────
+         # The limiter also attenuates the pink-noise peaks, so that part
+         # sits in GT_res but NOT in res_iter (SPADE does not recover it).
+         # We estimate how much noise is in GT_res as a proxy for the floor.
+         noise_gain_lin = 10 ** (PINK_NOISE_LEVEL_DB / 20.0)
+         # Noise amplitude relative to limited: noise_gain ≈ the fraction of
+         # GT_res that is unrecoverable by SPADE.
+         noise_floor_db = 20 * np.log10(noise_gain_lin + 1e-12) + RESIDUAL_DBFS
+         # In practice: the diff cannot fall below noise_floor by construction.
+
+         # ── diff dBFS relative to GT_res (SNR-like) ───────────────────────
+         diff_rms_db = _rms_dbfs(diff_raw[:L])
+         gt_rms_db   = _rms_dbfs(gt_res_raw_approx[:L])
+         # diff_vs_gt: how large the diff is relative to the GT (0 dB = diff = GT)
+         diff_vs_gt_db = diff_rms_db - gt_rms_db   # more negative = better
+
+         # Normalise for the WAV export
+         res_iter  = normalize_peak(res_raw, RESIDUAL_DBFS)
+         diff_norm = normalize_peak(diff_raw, RESIDUAL_DBFS) if np.max(np.abs(diff_raw)) > 1e-12 else diff_raw
+
+         diff_peaks.append((diff_vs_gt_db, cos_sim_td, diff_rms_db, gt_rms_db))
1520
+
1521
+        # ── Track definitions ─────────────────────────────────────────────
+        tracks = [
+            ("01_orig_with_noise",
+             orig_with_noise,
+             "drum+noise @0dBFS (pipeline input)"),
+            ("02_limited",
+             limited,
+             f"limiter output (SPADE input) expected: ~{-LIMITER_THRESHOLD_DB:+.2f}dBFS"),
+            ("03_gt_residual",
+             gt_res,
+             f"GT residual @{RESIDUAL_DBFS:.0f}dBFS (includes noise attenuation)"),
+            ("04_spade_output",
+             fixed_2d,
+             "SPADE output (float32, may exceed 0dBFS)"),
+            ("05_res_iter",
+             res_iter,
+             f"SPADE residual @{RESIDUAL_DBFS:.0f}dBFS (sparse component only)"),
+            ("06_diff_residuals",
+             diff_norm,
+             f"GT - iter @{RESIDUAL_DBFS:.0f}dBFS "
+             f"cos_sim={cos_sim_td:.3f} diff/GT={diff_vs_gt_db:+.1f}dB "
+             f"noise_floor≈{noise_floor_db:+.1f}dB"),
+        ]
+
+        # ── Realistic threshold for the diff ──────────────────────────────
+        # The diff cannot be < noise_floor by construction of the corpus.
+        # Calibrate the [OK] threshold to noise_floor + 6 dB (margin).
+        ok_threshold   = noise_floor_db + 6.0    # typically around -17 dBFS
+        warn_threshold = ok_threshold + 10.0     # anything above is genuinely anomalous
+
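The threshold calibration is plain dB arithmetic: the pink-noise level (relative to peak) plus the residual normalization level gives the floor, the [OK] band sits 6 dB above it, and the warning band a further 10 dB up. A sketch with hypothetical constants (`PINK_NOISE_LEVEL_DB = -5`, `RESIDUAL_DBFS = -18` are illustrative, not necessarily the script's actual values):

```python
import math

PINK_NOISE_LEVEL_DB = -5.0    # hypothetical: noise level relative to peak
RESIDUAL_DBFS       = -18.0   # hypothetical: residual export level

# Converting to linear and back is a no-op here, but mirrors the script's code path.
noise_gain_lin = 10 ** (PINK_NOISE_LEVEL_DB / 20.0)
noise_floor_db = 20 * math.log10(noise_gain_lin + 1e-12) + RESIDUAL_DBFS  # ≈ -23 dBFS

ok_threshold   = noise_floor_db + 6.0    # ≈ -17 dBFS
warn_threshold = ok_threshold + 10.0     # ≈ -7 dBFS
```

Since dB values add when linear gains multiply, the floor is simply the sum of the two dB constants.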
+        # ── Print table + write WAVs ──────────────────────────────────────
+        print(SEP)
+        for track_name, audio, note in tracks:
+            pk  = _pk_dbfs(audio)
+            rms = _rms_dbfs(audio)
+
+            flag = ""
+            if track_name == "06_diff_residuals":
+                if   diff_vs_gt_db < -12: flag = "[OK] good convergence"
+                elif diff_vs_gt_db < -6:  flag = "[~] partial convergence"
+                else:                     flag = "[WARN] diff large relative to GT"
+
+            row = (f"  {item['file']:<{col_w}} {track_name:<22}"
+                   f" {pk:>+10.2f} {rms:>+9.2f}  {note} {flag}")
+
+            if _HAS_RICH:
+                color = ("green"  if "[OK]"   in flag else
+                         "yellow" if "[~]"    in flag else
+                         "red"    if "[WARN]" in flag else "")
+                colored_row = row.replace(flag, f"[{color or 'dim'}]{flag}[/]") if flag else row
+                _console.print(colored_row)
+            else:
+                print(row)
+
+            wav_path = out_dir / f"{stem}__{track_name}.wav"
+            _write_wav(wav_path, audio, sr)
+
+        # ── Per-band spectral analysis: LF vs HF ──────────────────────────
+        # Answers the question: how much residual sits in the low
+        # frequencies, and how much of it does SPADE recover?
+        #
+        # Bands:
+        #   Sub-bass : 20 – 80 Hz     (kick fundamental, body)
+        #   Bass     : 80 – 250 Hz    (kick body, tail)
+        #   Low-mid  : 250 – 800 Hz   (presence)
+        #   High-mid : 800 – 4000 Hz  (attack, click)
+        #   High     : 4k – 20k Hz    (air, snap)
+        #
+        # For each band measure:
+        #   GT_energy   = energy of the GT residual (what the limiter removed)
+        #   iter_energy = energy recovered by SPADE
+        #   recovery %  = iter_energy / GT_energy × 100
+
+        def band_energy(audio_2d, sr, f_lo, f_hi):
+            """RMS energy in dB within the passband [f_lo, f_hi] Hz."""
+            mono = audio_2d[:, 0] if audio_2d.ndim == 2 else audio_2d
+            N = len(mono)
+            if N < 8:
+                return -999.0
+            # Butterworth bandpass (or lowpass/highpass at the edges)
+            nyq = sr / 2.0
+            lo = max(f_lo / nyq, 1e-4)
+            hi = min(f_hi / nyq, 0.9999)
+            if lo >= hi:
+                return -999.0
+            if lo < 1e-3:
+                b, a = sig.butter(4, hi, btype="low")
+            else:
+                b, a = sig.butter(4, [lo, hi], btype="band")
+            filtered = sig.filtfilt(b, a, mono)
+            return _rms_dbfs(filtered)
+
+        BANDS = [
+            ("Sub-bass ",   20,    80),
+            ("Bass     ",   80,   250),
+            ("Low-mid  ",  250,   800),
+            ("High-mid ",  800,  4000),
+            ("High     ", 4000, 20000),
+        ]
+
+        gt_mono = gt_res[:, 0]   if gt_res.ndim   == 2 else gt_res
+        ri_mono = res_iter[:, 0] if res_iter.ndim == 2 else res_iter
+
+        # Compare GT and iter on the same scale (use the raw signals, not the
+        # RESIDUAL_DBFS-normalized ones, so absolute energies are comparable)
+        gt_raw_for_bands   = gt_res_raw_approx
+        iter_raw_for_bands = res_raw
+
+        print()
+        band_hdr = f"  {'band':<12} {'GT_res RMS':>10} {'SPADE rec RMS':>13} {'recovery':>9}  {'limited?'}"
+        print(f"  Per-band spectral analysis — {item['file']}")
+        print(f"  {'─'*75}")
+        print(band_hdr)
+        print(f"  {'─'*75}")
+        for bname, f_lo, f_hi in BANDS:
+            gt_db   = band_energy(gt_raw_for_bands,   sr, f_lo, f_hi)
+            iter_db = band_energy(iter_raw_for_bands, sr, f_lo, f_hi)
+            if gt_db < -60:
+                recovery_str = "   — (silence)"
+                flag_b = ""
+            else:
+                diff_b = iter_db - gt_db   # positive = SPADE exceeds GT (over-recovery)
+                # recovery: 0 dB diff = perfect recovery, very negative = under-recovery
+                if diff_b > -3:
+                    flag_b = "OK"
+                elif diff_b > -9:
+                    flag_b = "~ partial"
+                else:
+                    flag_b = "!! under-recovery"
+                recovery_str = f"{diff_b:>+7.1f} dB  {flag_b}"
+            line = f"  {bname:<12} {gt_db:>+10.1f} {iter_db:>+13.1f} {recovery_str}"
+            if _HAS_RICH:
+                color = "green" if "OK" in recovery_str else (
+                        "yellow" if "~" in recovery_str else (
+                        "red" if "!!" in recovery_str else "dim"))
+                _console.print(f"[{color}]{line}[/]")
+            else:
+                print(line)
+        print()
+
+    print(SEP)
+    print()
+    if diff_peaks:
+        vs_gt_vals = [d[0] for d in diff_peaks]
+        cos_vals   = [d[1] for d in diff_peaks]
+        avg_vs_gt   = float(np.mean(vs_gt_vals))
+        best_vs_gt  = float(np.min(vs_gt_vals))
+        worst_vs_gt = float(np.max(vs_gt_vals))
+        avg_cos     = float(np.mean(cos_vals))
+
+        noise_floor_db = 20 * np.log10(10 ** (PINK_NOISE_LEVEL_DB / 20.0) + 1e-12) + RESIDUAL_DBFS
+
+        print("  SUMMARY 06_diff_residuals:")
+        print(f"    diff/GT_rms mean  : {avg_vs_gt:>+7.2f} dB  (0 dB = diff as large as GT)")
+        print(f"    diff/GT_rms best  : {best_vs_gt:>+7.2f} dB")
+        print(f"    diff/GT_rms worst : {worst_vs_gt:>+7.2f} dB")
+        print(f"    cos_sim TD mean   : {avg_cos:>8.4f}  (1.0 = identical)")
+        print()
+        print("  IMPORTANT NOTE:")
+        print(f"    The pink noise ({PINK_NOISE_LEVEL_DB} dB) is part of GT_res but")
+        print("    CANNOT be recovered by SPADE (it is not sparse).")
+        print(f"    Theoretical diff floor: ≈ {noise_floor_db:+.1f} dBFS — this is the")
+        print("    physical limit reachable with this corpus.")
+        print("    A diff/GT < -6 dB indicates good SPADE convergence.")
+        print()
+        if worst_vs_gt < -12:
+            verdict = "OK Excellent convergence — SPADE recovers the transients well"
+        elif worst_vs_gt < -6:
+            verdict = "~ Good convergence — residual consistent with the noise floor"
+        else:
+            verdict = "INFO diff dominated by the pink noise — expected, correct behaviour"
+        print(f"  Verdict: {verdict}")
+    print(f"\n  WAVs written to : {out_dir}/")
+    print("  Format          : float32, no clipping (use an editor that supports >0dBFS)")
+    print("  Naming          : <stem>__<N>_<track>.wav")
+
+
+def save_csv(study: "optuna.Study"):
+    import csv
+    trials = sorted(
+        [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE],
+        key=lambda t: t.value or 0, reverse=True,
+    )
+    with open(OUT_CSV, "w", newline="") as f:
+        w = csv.writer(f)
+        w.writerow(["rank", "score", "delta_db", "lf_delta_db",
+                    "window_length", "hop_length", "release_ms", "max_gain_db",
+                    "eps", "max_iter", "multiband", "macro_expand", "macro_ratio"])
+        for rank, t in enumerate(trials, 1):
+            p = t.params
+            win = 2 ** p["win_exp"]
+            hop = win // p["hop_div"]
+            w.writerow([
+                rank, round(t.value, 6),
+                p["delta_db"],
+                round(p.get("lf_delta_db", p["delta_db"]), 2),
+                win, hop,
+                p["release_ms"], p["max_gain_db"], p["eps"], p["max_iter"],
+                int(p.get("multiband", False)),
+                int(p.get("macro_expand", False)),
+                round(p.get("macro_ratio", 1.0), 2),
+            ])
+    print(f"\n  📄 CSV: {OUT_CSV}")
+
+
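The CSV that `save_csv` emits is easy to consume downstream with the stdlib alone. A sketch parsing rows of the same shape (using an in-memory string instead of `OUT_CSV`, whose path is defined elsewhere in the script):

```python
import csv
import io

# Two illustrative rows in the same column layout save_csv writes.
CSV_TEXT = """rank,score,delta_db,window_length,hop_length
1,0.915965,3.5,2048,512
2,0.906553,2.25,1024,256
"""

rows = list(csv.DictReader(io.StringIO(CSV_TEXT)))
# DictReader yields strings; cast before comparing numerically.
best = max(rows, key=lambda r: float(r["score"]))
```

Since the file is already sorted by score, `rows[0]` and `best` coincide; the `max` call is just defensive.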
+# =============================================================================
+# MAIN
+# =============================================================================
+
+def parse_args():
+    ap = argparse.ArgumentParser(description="Smart Bayesian sweep for S-SPADE v2")
+    ap.add_argument("--trials", type=int, default=200,
+                    help="Number of Optuna trials (default: 200)")
+    ap.add_argument("--resume", action="store_true",
+                    help="Load the existing study and add trials")
+    ap.add_argument("--report", action="store_true",
+                    help="Report only (no new trials)")
+    ap.add_argument("--base-dir", type=str, default=".",
+                    help="Root folder containing Kicks/Snares/Perc/Tops")
+    ap.add_argument("--corpus-size", type=int, default=None,
+                    help="Limit the corpus to N files (None = all)")
+    ap.add_argument("--top", type=int, default=20,
+                    help="How many trials to show in the ranking (default: 20)")
+    ap.add_argument("--no-prune", action="store_true",
+                    help="Disable the MedianPruner (slower but exhaustive)")
+    ap.add_argument("--debug-export", action="store_true",
+                    help="Export debug WAVs for the first N corpus files (no sweep)")
+    ap.add_argument("--debug-dir", type=str, default="debug_export",
+                    help="Output folder for the debug WAVs (default: debug_export)")
+    ap.add_argument("--debug-n", type=int, default=10,
+                    help="How many files to export in debug mode (default: 10)")
+    return ap.parse_args()
+
+
+def main():
+    args = parse_args()
+
+    missing = []
+    if not _HAS_OPTUNA: missing.append("optuna")
+    if not _HAS_SPADE:  missing.append("spade_declip_v11.py (in the same dir)")
+    if missing:
+        pip = [m for m in missing if not m.endswith(")")]
+        sys.exit("Missing:\n  pip install " + " ".join(pip)
+                 + ("\n  " + "\n  ".join(m for m in missing if m.endswith(")")) if any(m.endswith(")") for m in missing) else ""))
+
+    base_dir = Path(args.base_dir).resolve()
+    storage  = f"sqlite:///{STUDY_NAME}.db"
+    sampler  = TPESampler(seed=42, multivariate=True, warn_independent_sampling=False)
+    pruner   = (MedianPruner(n_startup_trials=10, n_warmup_steps=3)
+                if not args.no_prune else optuna.pruners.NopPruner())
+
+    if args.report:
+        try:
+            study = optuna.load_study(study_name=STUDY_NAME, storage=storage,
+                                      sampler=sampler, pruner=pruner)
+        except Exception:
+            sys.exit(f"No study found in {STUDY_NAME}.db")
+        print_report(study, top_n=args.top)
+        save_csv(study)
+        return
+
+    # ── Debug export ──────────────────────────────────────────────────────────
+    if args.debug_export:
+        # Use the best trial's parameters if a DB exists, otherwise DEBUG_PARAMS
+        spade_params = dict(DEBUG_PARAMS)
+        try:
+            study = optuna.load_study(study_name=STUDY_NAME, storage=storage,
+                                      sampler=sampler, pruner=pruner)
+            completed = [t for t in study.trials
+                         if t.state == optuna.trial.TrialState.COMPLETE]
+            if completed:
+                best_t = max(completed, key=lambda t: t.value or 0)
+                p   = best_t.params
+                win = 2 ** p["win_exp"]
+                hop = win // p["hop_div"]
+                spade_params = dict(
+                    delta_db      = p["delta_db"],
+                    window_length = win,
+                    hop_length    = hop,
+                    release_ms    = p["release_ms"],
+                    max_gain_db   = p["max_gain_db"],
+                    eps           = p["eps"],
+                    max_iter      = p["max_iter"],
+                )
+                print(f"  [DEBUG] Using best trial #{best_t.number}"
+                      f" (score={best_t.value:.5f}) from the DB.")
+        except Exception:
+            print("  [DEBUG] DB not found — using the default DEBUG_PARAMS.")
+
+        # Build the corpus (limited to debug_n files for speed)
+        corpus = build_corpus(base_dir, max_files=args.debug_n)
+        if not corpus:
+            sys.exit("Empty corpus. Check --base-dir.")
+        debug_export(
+            corpus       = corpus,
+            base_dir     = base_dir,
+            out_dir      = Path(args.debug_dir),
+            n_files      = args.debug_n,
+            spade_params = spade_params,
+        )
+        return
+
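The sweep never samples `window_length` and `hop_length` directly: it samples `win_exp` (a power-of-two exponent) and `hop_div` (a divisor), which guarantees power-of-two windows and integer hops that divide the window evenly. The decoding used above, in isolation:

```python
# Example trial params as Optuna would store them (illustrative values).
p = {"win_exp": 11, "hop_div": 4}

win = 2 ** p["win_exp"]    # 2**11 = 2048 samples
hop = win // p["hop_div"]  # 2048 // 4 = 512 samples (75 % overlap)
```

Parameterizing the search space this way keeps every sampled (window, hop) pair valid for the STFT, so no trial is wasted on inconsistent geometry.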
+    # ── Corpus ───────────────────────────────────────────────────────────────
+    print("\n" + "=" * 65)
+    print("CORPUS + SYNTHETIC LIMITER (Case 1 — threshold-based)")
+    print("=" * 65)
+    print(f"  Base dir   : {base_dir}")
+    print(f"  Threshold  : −{LIMITER_THRESHOLD_DB} dBFS")
+    print(f"  Release    : {LIMITER_RELEASE_MS} ms")
+    print("  Level align: NONE — loudness unchanged by construction")
+    print(f"  Pink noise : {PINK_NOISE_LEVEL_DB} dB rel. peak "
+          f"(simulates a musical bed under the transient)")
+
+    corpus = build_corpus(base_dir, max_files=args.corpus_size)
+    if not corpus:
+        sys.exit("Empty corpus. Check --base-dir and the folders.")
+
+    # ── GPU warm-up: force MCLK to max before the first trial ────────────────
+    # On RDNA2 (RX 6700 XT) the memory clock idles at 96 MHz and takes
+    # ~200 ms to climb to 1750 MHz. A small first batch would leave MCLK low
+    # for the whole trial. This dummy dispatch forces the ramp-up early.
+    try:
+        import torch
+        if torch.cuda.is_available():
+            _wd = "cuda"
+            _sz = 8192 * 1024   # 8 M samples (≈32 MB float32) → enough to trigger the MCLK ramp
+            _dummy  = torch.randn(_sz, device=_wd, dtype=torch.float32)
+            _dummy2 = _dummy * 2.0 + _dummy.roll(1)
+            torch.cuda.synchronize()
+            del _dummy, _dummy2
+            print("  ✓ GPU warm-up done (MCLK ramp forced)")
+    except Exception:
+        pass
+
+    print(f"\n  ✓ {len(corpus)} files in the corpus\n")
+    col_w = max(len(item["file"]) for item in corpus) + 2
+    for item in corpus:
+        rms  = float(np.sqrt(np.mean(item["gt_res"] ** 2)))
+        peak = float(np.max(np.abs(item["gt_res"])))
+        print(f"    {item['file']:<{col_w}} sr={item['sr']} "
+              f"GT rms={rms:.4f} peak={peak:.4f}")
+
+    # ── Study ─────────────────────────────────────────────────────────────────
+    print(f"\n{'='*65}")
+    print(f"BAYESIAN OPTIMIZATION — {args.trials} trials")
+    print(f"TPE (multivariate) + MedianPruner | storage: {STUDY_NAME}.db")
+    print(f"{'='*65}\n")
+
+    study = optuna.create_study(
+        study_name     = STUDY_NAME,
+        storage        = storage,
+        sampler        = sampler,
+        pruner         = pruner,
+        direction      = "maximize",
+        load_if_exists = True,
+    )
+
+    # ── Progress bar (rich → tqdm → plain fallback) ───────────────────────────
+    try:
+        from rich.progress import (
+            Progress, BarColumn, TextColumn,
+            TimeElapsedColumn, TimeRemainingColumn, MofNCompleteColumn,
+        )
+        _has_rich_progress = True
+    except ImportError:
+        _has_rich_progress = False
+
+    try:
+        import tqdm as _tqdm_mod
+        _has_tqdm = True
+    except ImportError:
+        _has_tqdm = False
+
+    # Shared state updated by the callback.
+    # Pre-populated with the trials already in the DB when using --resume,
+    # so the progress bar shows the correct count from the start.
+    _existing_complete = [t for t in study.trials
+                          if t.state == optuna.trial.TrialState.COMPLETE]
+    _existing_pruned   = [t for t in study.trials
+                          if t.state == optuna.trial.TrialState.PRUNED]
+
+    if _existing_complete:
+        _best_existing = max(_existing_complete, key=lambda t: t.value or 0)
+        _init_best   = _best_existing.value or 0.0
+        _init_best_p = dict(_best_existing.params)
+        _init_last   = _init_best
+    else:
+        _init_best, _init_best_p, _init_last = float("-inf"), {}, float("-inf")
+
+    _state = {
+        "done":    len(_existing_complete),
+        "pruned":  len(_existing_pruned),
+        "best":    _init_best,
+        "best_p":  _init_best_p,
+        "last":    _init_last,
+        "t0":      time.time(),
+        "n_total": len(_existing_complete) + len(_existing_pruned) + args.trials,
+    }
+
+    def _fmt_best(state: dict) -> str:
+        """Compact string with the current best trial's parameters."""
+        bp = state["best_p"]
+        if not bp:
+            return "—"
+        win = 2 ** bp.get("win_exp", 10)
+        hop = win // bp.get("hop_div", 4)
+        return (f"δ={bp.get('delta_db',0):.2f} "
+                f"win={win} hop={hop} "
+                f"rel={bp.get('release_ms',0):.0f}ms "
+                f"gain={bp.get('max_gain_db',0):.1f}dB")
+
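When resuming, the bar's total must count the trials already in the DB plus the newly requested ones, otherwise both the completion fraction and the ETA are wrong from the first update. The bookkeeping reduces to:

```python
# Hypothetical counts as they would be read from an existing study.
existing_complete = 37
existing_pruned   = 5
new_trials        = 200   # what --trials asks for in this run

# Total the bar should display, and how far along it already is.
n_total     = existing_complete + existing_pruned + new_trials
done_so_far = existing_complete + existing_pruned
```

Pruned trials count toward progress (they consumed a trial slot) even though they contribute no score, which is why both lists are summed.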
+
1932
+ # ── Rich progress bar ─────────────────────────────────────────────────────
1933
+ if _has_rich_progress:
1934
+ progress = Progress(
1935
+ TextColumn("[bold cyan]Trial[/] [cyan]{task.completed}/{task.total}[/]"),
1936
+ BarColumn(bar_width=32),
1937
+ MofNCompleteColumn(),
1938
+ TextColumn(" score [green]{task.fields[last]:.5f}[/]"),
1939
+ TextColumn(" best [bold green]{task.fields[best]:.5f}[/]"),
1940
+ TextColumn(" [dim]pruned {task.fields[pruned]}[/]"),
1941
+ TimeElapsedColumn(),
1942
+ TextColumn("ETA"),
1943
+ TimeRemainingColumn(),
1944
+ refresh_per_second=4,
1945
+ transient=False,
1946
+ )
1947
+ task_id = None # creato dentro il context
1948
+
1949
+ def on_trial_end(study, trial):
1950
+ fin = (trial.state == optuna.trial.TrialState.COMPLETE)
1951
+ prn = (trial.state == optuna.trial.TrialState.PRUNED)
1952
+ if fin:
1953
+ _state["done"] += 1
1954
+ _state["last"] = trial.value or 0.0
1955
+ if _state["last"] > _state["best"]:
1956
+ _state["best"] = _state["last"]
1957
+ _state["best_p"] = dict(study.best_params)
1958
+ elif prn:
1959
+ _state["pruned"] += 1
1960
+ progress.update(
1961
+ task_id,
1962
+ advance = 1,
1963
+ last = _state["last"],
1964
+ best = max(_state["best"], 0.0),
1965
+ pruned = _state["pruned"],
1966
+ )
1967
+
1968
+ t0 = time.time()
1969
+ try:
1970
+ with progress:
1971
+ task_id = progress.add_task(
1972
+ "sweep",
1973
+ total = _state["n_total"],
1974
+ completed = _state["done"] + _state["pruned"],
1975
+ last = max(_state["last"], 0.0),
1976
+ best = max(_state["best"], 0.0),
1977
+ pruned = _state["pruned"],
1978
+ )
1979
+ study.optimize(
1980
+ make_objective(corpus),
1981
+ n_trials = args.trials,
1982
+ callbacks = [on_trial_end],
1983
+ show_progress_bar = False,
1984
+ )
1985
+ except KeyboardInterrupt:
1986
+ print("\n[!] Interrotto — risultati parziali salvati.")
1987
+
1988
+    # ── tqdm fallback ─────────────────────────────────────────────────────────
+    elif _has_tqdm:
+        import tqdm
+        _already = _state["done"] + _state["pruned"]
+        pbar = tqdm.tqdm(
+            total      = _state["n_total"],
+            initial    = _already,
+            unit       = "trial",
+            bar_format = "{l_bar}{bar}| {n}/{total} [{elapsed}<{remaining}]",
+        )
+        if _already > 0:
+            pbar.set_postfix(
+                score  = f"{max(_state['last'], 0.0):.5f}",
+                best   = f"{max(_state['best'], 0.0):.5f}",
+                pruned = _state["pruned"],
+            )
+
+        def on_trial_end(study, trial):
+            fin = trial.state == optuna.trial.TrialState.COMPLETE
+            prn = trial.state == optuna.trial.TrialState.PRUNED
+            if fin:
+                _state["done"] += 1
+                _state["last"] = trial.value or 0.0
+                if _state["last"] > _state["best"]:
+                    _state["best"]   = _state["last"]
+                    _state["best_p"] = dict(study.best_params)
+            elif prn:
+                _state["pruned"] += 1
+            pbar.update(1)
+            pbar.set_postfix(
+                score  = f"{_state['last']:.5f}",
+                best   = f"{_state['best']:.5f}",
+                pruned = _state["pruned"],
+            )
+
+        t0 = time.time()
+        try:
+            study.optimize(
+                make_objective(corpus),
+                n_trials          = args.trials,
+                callbacks         = [on_trial_end],
+                show_progress_bar = False,
+            )
+        except KeyboardInterrupt:
+            print("\n[!] Interrupted — partial results saved.")
+        finally:
+            pbar.close()
+
+    # ── Plain fallback ────────────────────────────────────────────────────────
+    else:
+        def on_trial_end(study, trial):
+            fin = trial.state == optuna.trial.TrialState.COMPLETE
+            prn = trial.state == optuna.trial.TrialState.PRUNED
+            if fin:
+                _state["done"] += 1
+                _state["last"] = trial.value or 0.0
+                if _state["last"] > _state["best"]:
+                    _state["best"]   = _state["last"]
+                    _state["best_p"] = dict(study.best_params)
+                elapsed  = time.time() - _state["t0"]
+                done_tot = _state["done"] + _state["pruned"]
+                eta_s   = (elapsed / done_tot) * (_state["n_total"] - done_tot) if done_tot else 0
+                is_best = abs(_state["last"] - _state["best"]) < 1e-9
+                bar_n = int(32 * done_tot / max(_state["n_total"], 1))
+                bar   = "█" * bar_n + "░" * (32 - bar_n)
+                print(f"\r[{bar}] {done_tot}/{_state['n_total']}"
+                      f" {'★' if is_best else ' '}score={_state['last']:.5f}"
+                      f" best={_state['best']:.5f}"
+                      f" pruned={_state['pruned']}"
+                      f" ETA {eta_s/60:.1f}min ", end="", flush=True)
+            elif prn:
+                _state["pruned"] += 1
+
+        t0 = time.time()
+        try:
+            study.optimize(
+                make_objective(corpus),
+                n_trials          = args.trials,
+                callbacks         = [on_trial_end],
+                show_progress_bar = False,
+            )
+        except KeyboardInterrupt:
+            print("\n[!] Interrupted — partial results saved.")
+        print()   # newline after the \r line
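The plain fallback renders its own bar with block characters and a linear-extrapolation ETA (elapsed time per finished trial times the number remaining). The same arithmetic in isolation:

```python
def render_bar(done_tot, n_total, elapsed_s, width=32):
    """Progress bar string plus a linear-extrapolation ETA in seconds."""
    bar_n = int(width * done_tot / max(n_total, 1))
    bar = "█" * bar_n + "░" * (width - bar_n)
    eta_s = (elapsed_s / done_tot) * (n_total - done_tot) if done_tot else 0
    return bar, eta_s

# 50 of 200 trials done in 100 s → 2 s/trial → 300 s remain.
bar, eta = render_bar(done_tot=50, n_total=200, elapsed_s=100.0)
```

The `max(n_total, 1)` and the `if done_tot` guard keep the math safe before the first trial finishes, when both counters can be zero.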
+
+    elapsed = time.time() - t0
+    n_done  = sum(1 for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE)
+    n_prune = sum(1 for t in study.trials if t.state == optuna.trial.TrialState.PRUNED)
+    print(f"\n  Completed: {n_done} | Pruned: {n_prune}"
+          f" | Total time: {elapsed/60:.1f} min"
+          f" | Mean: {elapsed/max(n_done+n_prune,1):.1f} s/trial")
+
+    print_report(study, top_n=args.top)
+    save_csv(study)
+    print("\nDone.")
+
+
+if __name__ == "__main__":
+    main()
run_test.py ADDED
@@ -0,0 +1,115 @@
+"""
+run_test_v6.py — S-SPADE RDFT · brickwall limiter recovery
+===============================================================
+Uses spade_declip_v11 in mode='soft' to recover the dynamics
+compressed by a brickwall limiter on mastered tracks.
+
+What's new in v6 vs v5
+-----------------------
+NEW — Progress bar (rich → tqdm → plain fallback):
+    During processing, a per-channel progress bar is shown with ETA,
+    % of bypassed frames and a no_conv counter.
+    Install rich for the best UI:  pip install rich
+    Alternative:                   pip install tqdm
+    Also works without either (plain % printout).
+
+How to read delta_db from Waveform Statistics (RX / Audition / iZotope):
+    Find the level below which the limiter did NOT act,
+    e.g. "from −∞ up to −2.5 dB" → delta_db = 2.5
+    Alternatively: Max RMS ≈ −1.3 dB → try delta_db between 1.0 and 2.5.
+
+Output:
+    The saved file is a FLOAT32 WAV — it may contain samples > 1.0
+    (correct: those are the transients recovered above the limited ceiling).
+    Apply a gain of −20·log10(peak) dB to bring it back to 0 dBFS.
+"""
+import numpy as np
+import soundfile as sf
+from spade_declip_v11 import declip, DeclipParams
+
+# ── Files to process ──────────────────────────────────────────────────────
+# Format: (filename, delta_db)
+# delta_db = dB from the ceiling (0 dBFS) down to the limiter threshold
+FILES_SOFT = [
+    ("test.flac", 2.5),
+    # ("mastering.flac", 1.5),   # try tighter values if you hear artifacts
+    # ("mastering.flac", 3.0),   # try wider values if the transients do not change
+]
+
+ALGO  = "sspade"
+FRAME = "rdft"
+
+# ─────────────────────────────────────────────────────────────────────────
+print("\n" + "=" * 65)
+print("MODE: SOFT (brickwall limiter recovery)")
+print("=" * 65)
+
+for filepath, delta_db in FILES_SOFT:
+    print(f"\nFile : {filepath} | delta_db={delta_db} dB")
+
+    try:
+        yc, sr_val = sf.read(filepath, always_2d=True)
+    except Exception as e:
+        print(f"  [ERROR] {e}")
+        continue
+
+    yc = yc.astype(float)
+    n_samp, n_ch = yc.shape
+    labels = ["L", "R"] if n_ch == 2 else ["Ch" + str(c) for c in range(n_ch)]
+
+    print(f"  SR={sr_val} Hz | dur={round(n_samp/sr_val, 2)}s | channels={n_ch}")
+    for c, lbl in enumerate(labels):
+        peak_c = float(np.max(np.abs(yc[:, c])))
+        print(f"  [{lbl}] peak={round(peak_c, 4)}")
+
+    params = DeclipParams(
+        algo="sspade",
+        frame="rdft",
+        window_length=1024,
+        hop_length=256,
+        s=1, r=1, eps=0.1, max_iter=1000,
+        mode="soft",
+        delta_db=delta_db,
+        # --- NEW IN V11 ---
+        sample_rate=sr_val,    # essential for the ms-based computations
+        release_ms=250.0,      # helps reduce post-peak pumping
+        max_gain_db=6.0,       # avoids unnatural "ice-pick" transients
+        multiband=False,       # set True if the original limiter was multiband
+        macro_expand=False,    # set True to recover "body" (RMS)
+        # ------------------
+        n_jobs=-1,
+        verbose=True,
+        show_progress=True,
+    )
+
+    fixed, masks = declip(yc, params)
+
+    fixed_2d = fixed[:, None] if fixed.ndim == 1 else fixed
+    peak_out = float(np.max(np.abs(fixed_2d)))
+
+    # Build the output filename
+    for ext in (".flac", ".wav", ".aif", ".aiff"):
+        if filepath.lower().endswith(ext):
+            base = filepath[:-len(ext)]
+            break
+    else:
+        base = filepath
+    out_name = f"{base}_soft_d{str(delta_db).replace('.','p')}_{ALGO}_{FRAME}.wav"
+
+    # ── Write as 32-bit float WAV ─────────────────────────────────────────
+    # CRITICAL: subtype='FLOAT' preserves sample values > 1.0 (recovered
+    # transients). The v4 default (PCM_16) silently truncated anything
+    # outside ±1.0 to exactly ±1.0, re-clipping all recovered peaks.
+    sf.write(out_name, fixed_2d.astype(np.float32), sr_val, subtype='FLOAT')
+
+    peaks = [round(float(np.max(np.abs(fixed_2d[:, c]))), 4) for c in range(n_ch)]
+    print(f"  → {out_name}")
+    print("  peak out: " + " ".join(f"{lbl}={p}" for lbl, p in zip(labels, peaks)))
+    if peak_out > 1.0:
+        gain_db = round(-20 * np.log10(peak_out), 2)
+        print(f"  ⚠ Peak > 1.0 — apply {gain_db} dB to bring it back to 0 dBFS")
+    else:
+        print("  ✓ Peak ≤ 1.0 — no normalization needed")
+
+print("\nDone.")
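The −20·log10(peak) correction the script suggests is exactly the gain that maps the output's highest recovered sample back to full scale. Worked in isolation:

```python
import math

def gain_to_0dbfs(peak):
    """dB gain that brings a linear peak value (> 0) back to exactly 0 dBFS."""
    return -20 * math.log10(peak)

peak = 1.4125                              # e.g. a recovered transient above full scale
gain_db = gain_to_0dbfs(peak)              # ≈ -3.0 dB
new_peak = peak * 10 ** (gain_db / 20.0)   # applying the gain lands on 1.0
```

Since the output is float32, applying this gain afterwards is lossless; nothing was clipped at write time thanks to `subtype='FLOAT'`.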
smart_sweep_results.csv ADDED
@@ -0,0 +1,11 @@
+ rank,score,delta_db,lf_delta_db,window_length,hop_length,release_ms,max_gain_db,eps,max_iter,multiband,macro_expand,macro_ratio
+ 1,0.915965,3.5,1.0,2048,512,165.0,9.0,0.05,1000,0,1,1.75
+ 2,0.906553,2.25,1.15,1024,256,110.0,8.0,0.05,1000,0,1,1.2
+ 3,0.864981,2.75,1.05,1024,256,170.0,5.0,0.1,250,1,1,1.95
+ 4,0.830788,3.05,0.85,1024,256,110.0,6.0,0.05,250,0,1,1.15
+ 5,0.809859,2.1,1.75,1024,256,195.0,10.0,0.03,250,1,0,1.4
+ 6,0.809082,3.25,0.8,2048,256,115.0,10.0,0.03,1000,0,0,1.2
+ 7,0.784889,2.2,2.0,2048,512,105.0,5.0,0.1,250,0,1,1.3
+ 8,0.744222,2.85,1.05,512,64,140.0,8.5,0.05,500,0,0,1.35
+ 9,0.740836,2.25,0.95,2048,512,40.0,3.5,0.05,1000,0,1,1.55
+ 10,0.645003,2.0,1.45,1024,256,110.0,4.5,0.1,250,1,0,1.15
spade_declip_v11.py ADDED
@@ -0,0 +1,2234 @@
"""
spade_declip.py – v11 (delimiting: envelope dilation + ratio bounds + multiband + macro expand)
====================================================================================
S-SPADE / A-SPADE audio declipping — extended to recover dynamics
compressed by a brickwall limiter (mode='soft').

GPU acceleration (v10)
----------------------
Requires PyTorch ≥ 2.0 with a working CUDA *or* ROCm backend.
PyTorch-ROCm exposes AMD GPUs under the standard torch.cuda namespace,
so detection and device strings are identical to NVIDIA:

Device auto-detection order: CUDA → ROCm → CPU fallback
Device string: "auto"   — first available GPU (cuda / cuda:0)
               "cuda:0" — explicit device index
               "cpu"    — force CPU (disables GPU path)

GPU strategy:
CPU path (v8/v9): processes frames one-by-one with ThreadPoolExecutor.
GPU path (v10):   packs ALL active frames into a single (F, M) batch
                  tensor and runs S-SPADE entirely on the GPU in one
                  kernel sweep — DCT, hard-threshold, IDCT, proj_Γ are
                  all vectorised over F simultaneously.

Convergence is tracked per frame with a bool mask; converged frames
are frozen (their dual variable stops updating) while the rest keep
iterating. The GPU loop exits as soon as every frame has converged
or max_iter is reached.

Typical speedup vs. single-thread CPU: 20–100× depending on GPU and
frame count. The RX 6700 XT (12 GB, ROCm) processes the 2784-frame
stereo example in ~60–90 s vs. the 1289 s CPU baseline (≈15–20×).

DCT implementation on GPU:
Uses a verified FFT-based Makhoul (1980) algorithm that exactly matches
scipy.fft.dct(x, type=2, norm='ortho') to float32 precision.
Runs in float64 internally for numerical safety, cast to float32 on
output. Both DCT and IDCT are batch-safe: input shape (..., N).
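
As a quick cross-check, the Makhoul mapping can be reproduced in a few lines
of NumPy and compared against SciPy (illustrative sketch only; `dct2_makhoul`
is a hypothetical helper name, not part of this module):

```python
import numpy as np
from scipy.fft import dct

def dct2_makhoul(x):
    # Makhoul (1980): orthonormal DCT-II over the last axis via a single FFT.
    N = x.shape[-1]
    # Even-index samples first, then odd-index samples in reverse order.
    v = np.concatenate([x[..., ::2], x[..., 1::2][..., ::-1]], axis=-1)
    V = np.fft.fft(v, axis=-1)
    k = np.arange(N)
    C = (np.exp(-1j * np.pi * k / (2 * N)) * V).real * np.sqrt(2.0 / N)
    C[..., 0] /= np.sqrt(2.0)   # ortho normalisation for the DC bin
    return C

x = np.random.default_rng(0).standard_normal((4, 64))
assert np.allclose(dct2_makhoul(x), dct(x, type=2, norm='ortho'), atol=1e-12)
```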

Limitations:
• algo='aspade' is still CPU-only (A-SPADE GPU not yet implemented).
  Set use_gpu=False or switch to algo='sspade' for GPU acceleration.
• Very long files (> ~2 h at 48 kHz) may require chunked batching;
  add a gpu_batch_frames parameter if VRAM is exhausted.

References
----------
[1] Kitić, Bertin, Gribonval — "Sparsity and cosparsity for audio declipping:
    a flexible non-convex approach", LVA/ICA 2015. (arXiv:1506.01830)
[2] Záviška, Rajmic, Průša, Veselý — "Revisiting Synthesis Model in Sparse
    Audio Declipper", 2018. (arXiv:1807.03612)

Algorithms
----------
S-SPADE → Algorithm 1 in [2] (synthesis, coefficient-domain ADMM) [DEFAULT]
          Projection uses the closed-form Lemma / eq.(12) from [2].
A-SPADE → Algorithm 2 in [2] (analysis, signal-domain ADMM)

Transforms
----------
'dct'  Orthonormal DCT-II (tight Parseval frame, bound = 1, P = N)
'rdft' Redundant real frame [DCT-II/√2 ‖ DST-II/√2] (tight, bound = 1, P = 2N)
       [DEFAULT] Best empirical quality; mimics the oversampled DFT from [1][2].

Operating modes
---------------
mode='hard' (default)
    Standard hard-clipping recovery. The mask detects samples exactly at the
    digital ceiling (±tau). Same behaviour as v5.

mode='soft' (introduced v6, frame-adaptive bypass in v7)
    Brickwall-limiter recovery. Any sample above the limiter threshold
    (ceiling − delta_db dB) is treated as potentially attenuated; its true
    value is constrained to be ≥ its current value (lower bound, not equality).

v7 frame-adaptive bypass
------------------------
Before processing each frame, the raw un-windowed peak is compared to
the global threshold:

    frame_peak = max(|yc[idx1:idx2]|)

    frame_peak <  threshold → bypass: WOLA accumulation with win²,
                              SPADE never called, zero artefact risk.
    frame_peak >= threshold → normal SPADE processing.

The bypass uses identical win²/norm_win bookkeeping to the SPADE path,
so the WOLA reconstruction is numerically transparent.
Verbose output reports active/bypassed/no-conv frame counts and speedup.

Mathematical basis:
proj_Γ implements v[Icp] = max(v[Icp], yc[Icp]), where yc[Icp]
is the actual limited sample value — the lower-bound constraint is
exact. proj_gamma, tight_sspade, tight_aspade are UNCHANGED.
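
A minimal NumPy sketch of this projection (hypothetical helper name, shown
without the optional v11 gain cap):

```python
import numpy as np

def proj_gamma_soft(v, yc, Ir, Icp, Icm):
    p = v.copy()
    p[Ir] = yc[Ir]                        # reliable samples: pinned exactly
    p[Icp] = np.maximum(p[Icp], yc[Icp])  # limited positive: true value >= yc
    p[Icm] = np.minimum(p[Icm], yc[Icm])  # limited negative: true value <= yc
    return p

yc = np.array([0.2, 0.9, -0.9])           # observed (limited) signal
v  = np.array([0.3, 0.5, -1.4])           # ADMM proposal
Ir  = np.array([True,  False, False])
Icp = np.array([False, True,  False])
Icm = np.array([False, False, True])
out = proj_gamma_soft(v, yc, Ir, Icp, Icm)
assert np.allclose(out, [0.2, 0.9, -1.4])
```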

Practical parameter guidance:
delta_db = dB from 0 dBFS to the limiter threshold.
Read it from Waveform Statistics: find the level below which the limiter
did NOT intervene → delta_db = that level (as a positive number).
Typical brickwall masterings: 1.0 – 3.0 dB.

Limitations:
• Attack/release pumping attenuates samples just outside the threshold;
  those are pinned as reliable — unavoidable without the limiter's curve.
• Macro-dynamics cannot be restored; only transient peaks are recovered.

Verified bugs fixed (inherited from v5/v6)
------------------------------------------
BUG-1 frsyn/RDFT: flip the output, not the input, in DST synthesis
BUG-2 tight_aspade: dual variable in the coefficient domain, not the signal domain
BUG-3 _declip_mono: per-channel WOLA gain drift (stereo L/R balance)
BUG-4 _declip_mono: DC offset breaks half-wave mask detection

Dependencies: pip install numpy scipy soundfile

Usage (API)
-----------
from spade_declip_v11 import declip, DeclipParams

params = DeclipParams(mode="soft", delta_db=2.5)   # GPU used automatically
fixed, masks = declip(limited_master, params)

# Explicit GPU device (ROCm / CUDA):
params = DeclipParams(mode="soft", delta_db=2.5, use_gpu=True, gpu_device="cuda:0")

# Force CPU (disable GPU):
params = DeclipParams(mode="soft", delta_db=2.5, use_gpu=False)

New in v11 — Delimiting features
--------------------------------
Four new DeclipParams knobs that move the tool from declipping to genuine delimiting.
All are disabled by default for full backward compatibility.

1. Envelope-Based Mask Dilation (release_ms > 0)
------------------------------------------------
A limiter's release time attenuates not just the peak sample but all samples
for the next 10–50 ms. By default, _compute_masks marks those post-peak
samples as "reliable" (Ir), pinning the ADMM solver to artificially low values
and causing the pumping artefact.

Fix: _dilate_masks_soft() forward-dilates Icp and Icm by `release_samples =
round(release_ms * sample_rate / 1000)` samples using convolution. Any newly
flagged sample within the release window is reclassified:
    yc[n] ≥ 0 → Icp (true value ≥ yc[n])
    yc[n] < 0 → Icm (true value ≤ yc[n])
The constraint is always satisfied under the limiter model: the true value can
only be larger in magnitude than the gain-reduced sample.

Parameters: release_ms (float, default 0.0), sample_rate (int, default 44100).
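
The forward dilation itself reduces to a one-sided boolean convolution. A
minimal sketch of the idea (`dilate_forward` is a hypothetical name, not the
actual _dilate_masks_soft):

```python
import numpy as np

def dilate_forward(mask, release_samples):
    # Spread each True flag forward over the next `release_samples` samples.
    kernel = np.ones(release_samples + 1)
    # Full convolution, then truncation keeps only the causal (forward) spread.
    return np.convolve(mask.astype(float), kernel)[: len(mask)] > 0

limited = np.array([0, 0, 1, 0, 0, 0, 0], dtype=bool)
dilated = dilate_forward(limited, 2)
assert dilated.tolist() == [False, False, True, True, True, False, False]

# Newly flagged samples are then reclassified by sign, e.g.:
#   new = dilated & ~limited
#   Icp |= new & (yc >= 0); Icm |= new & (yc < 0)
```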

2. Ratio-Aware Upper Bound (max_gain_db > 0)
--------------------------------------------
Without an upper bound, the L0-ADMM can generate "ice-pick" transients that
exceed any physical limiter's ratio. max_gain_db caps the recovery:

    v[Icp] = clip(max(v[Icp], yc[Icp]), yc[Icp], yc[Icp] * G_max)
    v[Icm] = clip(min(v[Icm], yc[Icm]), yc[Icm] * G_max, yc[Icm])

where G_max = 10^(max_gain_db / 20). Implemented in both proj_gamma (CPU)
and the inline GPU projection in _sspade_batch_gpu.

Parameters: max_gain_db (float, default 0.0 = disabled; e.g. 6.0 for +6 dB max).
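
Numerically, with max_gain_db = 6 the cap works out as follows (standalone
NumPy sketch of the Icp branch only):

```python
import numpy as np

max_gain_db = 6.0
g_max = 10 ** (max_gain_db / 20)     # linear gain cap, ~1.995

yc_p = np.array([0.5, 0.8])          # observed (limited) positive samples
v_p  = np.array([0.6, 2.5])          # ADMM proposal (the second is an "ice-pick")
proj = np.clip(np.maximum(v_p, yc_p), yc_p, yc_p * g_max)

# 0.6 is already inside [0.5, ~1.0] and passes through;
# 2.5 is clamped down to 0.8 * g_max.
assert np.allclose(proj, [0.6, 0.8 * g_max])
```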

3. Sub-band (Multi-band) SPADE (multiband=True)
-----------------------------------------------
Multi-band limiters (FabFilter Pro-L 2, etc.) apply independent gain reduction
per frequency range. Running broadband SPADE on such material "un-ducks"
frequency bands that were never attenuated, causing harshness.

Fix: _lr_split() builds a phase-perfect crossover (LP via scipy Butterworth
sosfiltfilt + HP = x − LP) at each crossover frequency. Each band is
declipped independently with its own delta_db threshold, then summed back.

The GPU batch path naturally handles multiple bands — each band contributes
its own frames to the (F, M) batch with no added latency.

Parameters: multiband (bool), band_crossovers (tuple of Hz, default (250, 4000)),
band_delta_db (tuple of floats; empty = use delta_db for all bands).
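
Because the high band is defined as the residual x − LP, the split is
complementary by construction, which is easy to verify (sketch assuming a
4th-order Butterworth low-pass; the actual _lr_split internals may differ):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lr_split(x, fc, fs):
    # Zero-phase low-pass (sosfiltfilt); the high band is the exact residual.
    sos = butter(4, fc, btype='low', fs=fs, output='sos')
    lp = sosfiltfilt(sos, x)
    return lp, x - lp

fs = 44100
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 8000 * t)
lp, hp = lr_split(x, 250.0, fs)
assert np.allclose(lp + hp, x)   # the bands always sum back to the input
```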

4. Macro-Dynamics Upward Expansion Pre-pass (macro_expand=True)
---------------------------------------------------------------
SPADE operates on ≈21 ms WOLA windows and cannot undo the slow 200–500 ms
RMS squash ("body" compression) a mastering limiter imposes.

Fix: _macro_expand_pass() runs a causal peak-envelope follower (attack +
release IIR) over the full signal, estimates where the level is held below
the long-term 80th-percentile envelope, and applies gentle upward expansion:

    g(n) = (env(n) / threshold)^(1/ratio − 1)   if env(n) < threshold
         = 1.0                                  otherwise

SPADE then corrects the microscopic waveform peaks that the expander cannot
interpolate. The two passes are complementary by design.

Parameters: macro_expand (bool), macro_attack_ms (float, default 10.0),
macro_release_ms (float, default 200.0), macro_ratio (float, default 1.2).
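
The follower and gain law can be sketched as (hypothetical `peak_env` helper;
parameter names mirror macro_attack_ms / macro_release_ms):

```python
import numpy as np

def peak_env(x, fs, attack_ms=10.0, release_ms=200.0):
    # One-pole peak follower with separate attack and release coefficients.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.empty_like(x)
    e = 0.0
    for n, v in enumerate(np.abs(x)):
        a = a_att if v > e else a_rel
        e = a * e + (1.0 - a) * v
        env[n] = e
    return env

fs = 44100
x = np.concatenate([0.9 * np.ones(2000), 0.3 * np.ones(2000)])  # loud, then squashed
env = peak_env(x, fs)
thr = np.percentile(env, 80)                  # long-term reference level
ratio = 1.2
g = np.where(env < thr, (env / thr) ** (1.0 / ratio - 1.0), 1.0)
assert g.min() >= 1.0 and g.max() > 1.0       # quiet region is lifted, never cut
```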


Usage (CLI)
-----------
python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5
python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5 --gpu-device cuda:0
python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5 --no-gpu

# v11 delimiting features
python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5 \
    --release-ms 30 --max-gain-db 6 --multiband --band-crossovers 250 4000

python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5 \
    --macro-expand --macro-release-ms 200 --macro-ratio 1.2
"""

from __future__ import annotations

try:
    import torch as _torch_module
    import torch
    _TORCH_AVAILABLE = True
except ImportError:
    _TORCH_AVAILABLE = False

import argparse
import os
import time
import warnings
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import List, Literal, Tuple, Union

import numpy as np
from scipy.fft import dct, idct
from scipy.signal.windows import hann


# ============================================================================
# Progress-bar backend (rich → tqdm → plain fallback, zero hard deps)
# ============================================================================
# Three concrete backends implement the same interface:
#
#   ctx = _make_progress(n_channels)
#   with ctx:
#       task = ctx.add_task(label, total=N)
#       ctx.advance(task, n_bypassed, n_noconv, n_done, n_total)
#       # advance() marks one more frame done and refreshes the live counters.
#
# The module-level _PROGRESS_LOCK serialises add_task() calls so that two
# channel threads don't interleave their header prints.

import threading
_PROGRESS_LOCK = threading.Lock()

try:
    from rich.progress import (
        Progress, BarColumn, TextColumn, TimeRemainingColumn,
        TimeElapsedColumn, MofNCompleteColumn, SpinnerColumn,
    )
    from rich.console import Console
    from rich.panel import Panel
    from rich import print as rprint
    _RICH = True
except ImportError:
    _RICH = False

try:
    import tqdm as _tqdm_mod
    _TQDM = True
except ImportError:
    _TQDM = False


class _RichProgress:
    """Thin wrapper around a shared rich.Progress instance."""

    def __init__(self, n_channels: int):
        self._progress = Progress(
            SpinnerColumn(),
            TextColumn("[bold cyan]{task.fields[ch_label]:<4}[/]"),
            BarColumn(bar_width=36),
            MofNCompleteColumn(),
            TextColumn("[green]{task.fields[eta_str]}[/]"),
            TextColumn("[dim]{task.fields[bypass_str]}[/]"),
            TextColumn("[yellow]{task.fields[noconv_str]}[/]"),
            TimeElapsedColumn(),
            TimeRemainingColumn(),
            refresh_per_second=10,
        )

    def __enter__(self):
        self._progress.__enter__()
        return self

    def __exit__(self, *args):
        self._progress.__exit__(*args)

    def add_task(self, ch_label: str, total: int) -> object:
        return self._progress.add_task(
            "", total=total,
            ch_label=ch_label,
            eta_str="",
            bypass_str="",
            noconv_str="",
        )

    def advance(self, task_id, n_bypassed: int, n_noconv: int, n_done: int, n_total: int):
        bypass_pct = 100.0 * n_bypassed / n_done if n_done else 0.0
        self._progress.update(
            task_id,
            advance=1,
            bypass_str=f"bypassed {bypass_pct:.0f}%" if n_bypassed else "",
            noconv_str=f"no_conv {n_noconv}" if n_noconv else "",
        )


class _TqdmProgress:
    """Thin wrapper around tqdm, one bar per channel."""

    def __init__(self, n_channels: int):
        self._bars: dict = {}

    def __enter__(self):
        return self

    def __exit__(self, *args):
        for bar in self._bars.values():
            bar.close()

    def add_task(self, ch_label: str, total: int) -> str:
        import tqdm
        bar = tqdm.tqdm(
            total=total,
            desc=f"[{ch_label}]",
            unit="fr",
            dynamic_ncols=True,
            leave=True,
        )
        self._bars[ch_label] = bar
        return ch_label

    def advance(self, task_id, n_bypassed: int, n_noconv: int, n_done: int, n_total: int):
        bar = self._bars[task_id]
        bypass_pct = 100.0 * n_bypassed / n_done if n_done else 0.0
        parts = []
        if n_bypassed:
            parts.append(f"bypass={bypass_pct:.0f}%")
        if n_noconv:
            parts.append(f"no_conv={n_noconv}")
        bar.set_postfix_str(" ".join(parts))
        bar.update(1)


class _PlainProgress:
    """Last-resort fallback: prints a percentage line per channel."""

    def __init__(self, n_channels: int):
        self._state: dict = {}

    def __enter__(self):
        return self

    def __exit__(self, *args):
        pass

    def add_task(self, ch_label: str, total: int) -> str:
        self._state[ch_label] = {"total": total, "done": 0, "last_pct": -1}
        return ch_label

    def advance(self, task_id, n_bypassed: int, n_noconv: int, n_done: int, n_total: int):
        s = self._state[task_id]
        s["done"] += 1
        pct = int(100 * s["done"] / s["total"])
        # Print only at each 5% step to avoid flooding stdout
        if pct // 5 > s["last_pct"] // 5:
            s["last_pct"] = pct
            print(f"  [{task_id}] {pct:3d}% ({s['done']}/{s['total']} frames"
                  + (f" bypassed={n_bypassed}" if n_bypassed else "")
                  + (f" no_conv={n_noconv}" if n_noconv else "")
                  + ")")


def _make_progress(n_channels: int):
    """Return the best available progress backend."""
    if _RICH:
        return _RichProgress(n_channels)
    if _TQDM:
        return _TqdmProgress(n_channels)
    return _PlainProgress(n_channels)


# ============================================================================
# Data structures
# ============================================================================

@dataclass
class ClippingMasks:
    """
    Boolean index masks identifying the three sample categories of a clipped signal.

    Attributes
    ----------
    Ir  : reliable (unclipped) samples — must be preserved exactly
    Icp : positively clipped (flat at +τ) — true signal ≥ τ
    Icm : negatively clipped (flat at −τ) — true signal ≤ −τ
    """
    Ir: np.ndarray
    Icp: np.ndarray
    Icm: np.ndarray


@dataclass
class DeclipParams:
    """
    Parameters controlling the declipping pipeline.

    Attributes
    ----------
    algo : 'sspade' | 'aspade'
        Core per-frame algorithm. Default: 'sspade' (best empirical results).
    window_length : int
        Frame size in samples. Powers of 2 recommended (e.g. 1024, 2048).
        Per [2]: A-SPADE works best ≈ 2048; S-SPADE is robust to longer windows.
    hop_length : int
        Hop between consecutive frames. Minimum 50% overlap recommended ([2] §4.4).
        Typical: window_length // 4 (75% overlap, best quality per [2]).
    frame : 'dct' | 'rdft'
        Sparse transform.
        'dct'  — orthonormal DCT-II (no redundancy, P = N).
        'rdft' — redundant real tight frame DCT‖DST (redundancy 2, P = 2N);
                 mimics the oversampled DFT used in [1][2]. [DEFAULT — best quality]
    s : int
        Initial and incremental sparsity step (k starts at s, increases by s
        every r iterations). [2] uses s = 100 whole-signal; s = 1 block-by-block.
    r : int
        Sparsity relaxation period (k is incremented every r iterations).
    eps : float
        Convergence threshold ε. The loop stops when the residual norm ≤ ε.
        [1][2] use ε = 0.1 in their experiments.
    max_iter : int
        Hard upper limit on iterations per frame.
    verbose : bool
        Print per-signal diagnostics (DC offset, threshold, mask sizes, timing).
    n_jobs : int
        Number of parallel workers for multi-channel processing.
         1 = sequential (default, always safe).
        -1 = use all available CPU cores.
    mode : 'hard' | 'soft'
        Detection mode.
        'hard' — standard hard-clipping recovery (default).
                 Marks samples exactly at ±tau as clipped.
        'soft' — brickwall limiter recovery (new in v6).
                 Marks all samples above the limiter threshold as potentially
                 attenuated. The threshold is ceiling − delta_db dB, where
                 ceiling = max(|yc|) after DC removal.
                 The lower-bound constraint true_value ≥ yc[n] is already
                 implemented by proj_gamma — no algorithmic changes needed.
    delta_db : float
        [soft mode only] Distance in dB from 0 dBFS to the limiter threshold.
        Read from Waveform Statistics: find the level below which the limiter
        did NOT intervene, e.g. "from −∞ up to −2.5 dB" → delta_db = 2.5.
        Typical brickwall masterings: 1.0 – 3.0 dB.
        Ignored when mode='hard'.
    use_gpu : bool
        Enable GPU acceleration via PyTorch (CUDA or ROCm). Default: True.
        Falls back to CPU automatically if PyTorch is not installed, no GPU
        is present, or algo='aspade' (A-SPADE GPU not yet implemented).
    gpu_device : str
        PyTorch device string. Default: "auto" (first available GPU).
        Examples: "cuda", "cuda:0", "cuda:1", "cpu".
        AMD ROCm GPUs appear as "cuda" in PyTorch-ROCm — use the same syntax.
    sample_rate : int
        [v11] Sample rate of the audio in Hz. Required when release_ms > 0 or
        multiband=True. Set automatically from the file header when using the CLI.
        Default: 44100.
    release_ms : float
        [v11, soft mode] Limiter release time in milliseconds. When > 0, the
        clipping masks are forward-dilated by this many samples so that post-peak
        samples attenuated by the limiter's release phase are treated as constrained
        (not reliable). 0 = disabled (v10 behaviour). Typical: 10–50 ms.
    max_gain_db : float
        [v11, soft mode] Maximum recovery in dB above the limited sample value.
        Caps proj_Γ to prevent ADMM from generating unphysical transients.
        0 = disabled (unbounded, v10 behaviour). Typical: 3–6 dB.
    multiband : bool
        [v11, soft mode] Enable Linkwitz-Riley sub-band processing. The signal is
        split at band_crossovers Hz, each band is processed with its own delta_db,
        then summed. Addresses multi-band limiting (FabFilter Pro-L 2 etc.).
    band_crossovers : tuple[float, ...]
        [v11] Crossover frequencies in Hz (ascending). Produces len+1 bands.
        Default: (250, 4000) → Low / Mid / High.
    band_delta_db : tuple[float, ...]
        [v11] Per-band delta_db values. If empty, delta_db is used for all bands.
        Must have length len(band_crossovers) + 1 when non-empty.
    macro_expand : bool
        [v11, soft mode] Enable the macro-dynamics upward expansion pre-pass. A
        causal peak-envelope follower detects where the limiter's release held the
        level down, then applies gentle upward expansion before SPADE restores the
        peaks.
    macro_attack_ms : float
        [v11] Expander attack time in ms. Default: 10.0.
    macro_release_ms : float
        [v11] Expander release time in ms. Default: 200.0.
    macro_ratio : float
        [v11] Expansion ratio. 1.0 = bypass; >1 = upward expansion.
        g(n) = (env(n)/threshold)^(1/ratio - 1) when below threshold. Default: 1.2.
    """
    algo: Literal["sspade", "aspade"] = "sspade"
    window_length: int = 1024
    hop_length: int = 256
    frame: Literal["dct", "rdft"] = "rdft"
    s: int = 1
    r: int = 1
    eps: float = 0.1
    max_iter: int = 1000
    verbose: bool = False
    n_jobs: int = 1
    mode: Literal["hard", "soft"] = "hard"
    delta_db: float = 1.0
    show_progress: bool = True
    use_gpu: bool = True              # v10: GPU acceleration
    gpu_device: str = "auto"          # v10: device string
    # ── v11: delimiting features ─────────────────────────────────────────
    sample_rate: int = 44100          # required for release_ms and multiband
    release_ms: float = 0.0           # mask dilation (0 = disabled)
    max_gain_db: float = 0.0          # ratio-aware cap (0 = disabled)
    multiband: bool = False           # Linkwitz-Riley sub-band processing
    band_crossovers: tuple = (250, 4000)  # Hz crossover frequencies
    band_delta_db: tuple = ()         # per-band delta_db; empty = use delta_db
    macro_expand: bool = False        # upward expansion pre-pass
    macro_attack_ms: float = 10.0     # expander attack (ms)
    macro_release_ms: float = 200.0   # expander release (ms)
    macro_ratio: float = 1.2          # expansion ratio (1.0 = bypass)


# ============================================================================
# Sparse transform — Analysis (A) and Synthesis (D = A^H) operators
# ============================================================================

def _frame_size(M: int, frame: str) -> int:
    """Number of transform coefficients P for a frame of M samples."""
    if frame == "dct":
        return M
    if frame == "rdft":
        return 2 * M
    raise ValueError(f"Unknown frame '{frame}'")


# ============================================================================
# GPU engine (PyTorch — CUDA or ROCm)
# ============================================================================
# All GPU functions are defined unconditionally but only called when torch is
# available. Type annotations use strings to avoid NameError at import time.

import math as _math

def _resolve_gpu_device(params: "DeclipParams") -> "str | None":
    """
    Return a torch device string if the GPU is usable, else None.

    AMD ROCm GPUs are exposed by PyTorch-ROCm under the torch.cuda namespace
    (torch.cuda.is_available() returns True, devices appear as "cuda" / "cuda:0").
    Detection is therefore identical for NVIDIA CUDA and AMD ROCm.

    Returns None if:
      • params.use_gpu is False
      • PyTorch is not installed
      • No CUDA/ROCm device is present or accessible
      • algo='aspade' (A-SPADE GPU not yet implemented)
    """
    if not params.use_gpu:
        return None
    if params.algo != "sspade":
        return None  # A-SPADE GPU not implemented; fall through to CPU path
    try:
        import torch
        if not torch.cuda.is_available():
            return None
        dev = "cuda" if params.gpu_device == "auto" else params.gpu_device
        torch.zeros(1, device=dev)  # warm-up / validity check
        return dev
    except Exception:
        return None


def _dct2_gpu(x: "torch.Tensor") -> "torch.Tensor":
    """
    Batched orthonormal DCT-II on GPU. x: (..., N) — float32 or float64.
    Returns the same dtype as the input.
    Numerically matches scipy.fft.dct(x, type=2, norm='ortho') to ~1e-14.

    Algorithm: Makhoul (1980) FFT-based DCT-II.
      1. Reorder x into v = [x[0], x[2], …, x[N-1], x[N-3], …, x[1]]
      2. V = FFT(v)  (computed in float64 for accuracy)
      3. C = Re( exp(−jπk/(2N)) · V ) · √(2/N)
      4. C[0] /= √2  (ortho normalisation for the DC bin)
    """
    import torch
    in_dtype = x.dtype
    N = x.shape[-1]
    v = torch.cat([x[..., ::2], x[..., 1::2].flip(-1)], dim=-1)
    V = torch.fft.fft(v.double(), dim=-1)
    k = torch.arange(N, device=x.device, dtype=torch.float64)
    tw = torch.exp(-1j * _math.pi * k / (2.0 * N))
    C = (tw * V).real * _math.sqrt(2.0 / N)
    C = C.clone()
    C[..., 0] /= _math.sqrt(2.0)
    return C.to(in_dtype)


def _idct2_gpu(X: "torch.Tensor") -> "torch.Tensor":
    """
    Batched orthonormal IDCT-II on GPU. X: (..., N) — float32 or float64.
    Returns the same dtype as the input.
    Numerically matches scipy.fft.idct(X, type=2, norm='ortho') to ~1e-14.

    Inverse of _dct2_gpu via conjugate-twiddle + IFFT (Makhoul 1980):
      1. Undo ortho scaling: C = X·√(N/2); C[0] ·= √2
      2. Build W[k] = C[k] − j·C[N−k] for k = 1…N−1,
         with the special case W[0] = C[0] (ipart[0] = 0).
         ipart[k] = −C[N−k] for k = 1…N−1
         ↳ BUG FIX: use C.flip(-1)[..., :-1], which gives C[N-1], C[N-2], …, C[1].
           The old code used Cf[1:] = C[N-2], C[N-3], …, C[0] — off by one.
      3. Recover V: V = W · exp(+jπk/(2N))
      4. v = Re(IFFT(V))
      5. Un-interleave: x[2n] = v[n], x[2n+1] = v[N−1−n]
    """
    import torch
    in_dtype = X.dtype
    N = X.shape[-1]
    C = X.double() * _math.sqrt(N / 2.0)
    C = C.clone()  # avoid in-place modification of the input
    C[..., 0] *= _math.sqrt(2.0)
    # ── BUG-GPU-3 FIX ────────────────────────────────────────────────────
    # ipart[k] must equal -C[N-k] for k = 1..N-1.
    #   C.flip(-1)           = [C[N-1], C[N-2], ..., C[1], C[0]]
    #   C.flip(-1)[..., :-1] = [C[N-1], C[N-2], ..., C[1]]   ← correct
    # (old buggy code: -Cf[..., 1:] = -[C[N-2], C[N-3], ..., C[0]] ← off by one)
    ipart = torch.zeros_like(C)
    ipart[..., 1:] = -C.flip(-1)[..., :-1]
    W = torch.view_as_complex(torch.stack([C, ipart], dim=-1))
    k = torch.arange(N, device=X.device, dtype=torch.float64)
    V = W * torch.exp(1j * _math.pi * k / (2.0 * N))
    v = torch.fft.ifft(V, dim=-1).real
    half = (N + 1) // 2
    x = torch.empty_like(v)
    x[..., ::2] = v[..., :half]
    x[..., 1::2] = v[..., half:].flip(-1)
    return x.to(in_dtype)


def _frana_gpu(x: "torch.Tensor", frame: str) -> "torch.Tensor":
    """
    Batched analysis operator A: (..., M) → (..., P).
      DCT frame:  P = M  → orthonormal DCT-II
      RDFT frame: P = 2M → [DCT(x)/√2 ‖ DST(x)/√2]
                  DST-II(x) = DCT-II(x[::-1])
    """
    import torch
    if frame == "dct":
        return _dct2_gpu(x)
    s2 = _math.sqrt(2.0)
    return torch.cat([_dct2_gpu(x) / s2, _dct2_gpu(x.flip(-1)) / s2], dim=-1)


def _frsyn_gpu(z: "torch.Tensor", frame: str, M: int) -> "torch.Tensor":
    """
    Batched synthesis operator D = A^H: (..., P) → (..., M).
    Adjoint of _frana_gpu. For RDFT the DST adjoint flips the OUTPUT.
    """
    import torch
    if frame == "dct":
        return _idct2_gpu(z)
    s2 = _math.sqrt(2.0)
    cos_part = _idct2_gpu(z[..., :M]) / s2
    sin_part = _idct2_gpu(z[..., M:]).flip(-1) / s2
    return cos_part + sin_part


def _hard_thresh_gpu(u: "torch.Tensor", k: int) -> "torch.Tensor":
    """
    Batched hard thresholding: keep the k largest-magnitude coefficients per row.
    u: (F, P). Returns the same shape with all but the top-k magnitudes zeroed.
    """
    k = int(max(1, min(k, u.shape[-1])))
    kth = torch.topk(u.abs(), k, dim=-1, sorted=True).values[..., -1:]  # (F, 1)
    return u * (u.abs() >= kth)


def _sspade_batch_gpu(
    yc_w: "torch.Tensor",   # (F, M) windowed frames, already on device
    Ir: "torch.Tensor",     # (F, M) bool — reliable samples
    Icp: "torch.Tensor",    # (F, M) bool — positively limited
    Icm: "torch.Tensor",    # (F, M) bool — negatively limited
    frame: str,
    s: int,
    r: int,
    eps: float,
    max_iter: int,
    g_max: float = float("inf"),  # v11: ratio-aware upper bound (linear)
) -> "Tuple[torch.Tensor, torch.Tensor]":
    """
    Batched S-SPADE on GPU — all F frames processed simultaneously.

    Determinism guarantees
    ----------------------
    BUG-GPU-2 fix: ADMM runs in float64 throughout (yc_w is upcast on entry,
    downcast to float32 on output). This matches the CPU path, which also runs
    in float64 via numpy/scipy. float32 would accumulate ~2.3 units of error
    after 500 iterations vs float64's ~1e-14 — causing divergent ADMM trajectories.

    BUG-GPU-1 fix: zi_final captures zi at the exact convergence iteration for
    each frame. Without this, zi keeps being overwritten in subsequent iterations
    for already-converged frames (the dual ui stops updating, but the zi update
    expression keeps running for all frames). The CPU tight_sspade breaks
    immediately on convergence; the GPU batch loop cannot break early, so
    zi_final is the equivalent mechanism.

    Convergence mask
    ----------------
    A per-frame `active` bool mask marks frames still iterating.
      - conv[f]   = True once frame f has met the stopping criterion
      - active[f] = ~conv[f]
      - ui is updated only for active frames (correct — matches the CPU path,
        which exits before updating ui on the convergence iteration)
      - zi_final[f] is frozen at the first iteration where conv[f] becomes True

    Returns
    -------
    x_frames  : (F, M) float32 — time-domain restored frames (on device)
    converged : (F,) bool — True where ADMM converged within max_iter
    """
    import torch
    # ── BUG-GPU-2 FIX: upcast to float64 to match the CPU float64 path ───
    yc_w64 = yc_w.double()
    F, M = yc_w64.shape

    zi = _frana_gpu(yc_w64, frame)  # (F, P) float64
    ui = torch.zeros_like(zi)       # float64
    k = s
    active = torch.ones(F, dtype=torch.bool, device=yc_w.device)
    conv = torch.zeros(F, dtype=torch.bool, device=yc_w.device)

    # ── BUG-GPU-1 FIX: zi_final captures zi at the convergence iteration ─
    # Frames that never converge will have zi_final = zi at loop exit.
    zi_final = zi.clone()

    for i in range(1, max_iter + 1):
        # ── Step 2: sparsity (all frames) ────────────────────────────────
        zb = _hard_thresh_gpu(zi + ui, k)  # (F, P)

        # ── Step 3: project onto Γ via eq.(12) ───────────────────────────
        v_c = zb - ui                      # (F, P)
        Dv = _frsyn_gpu(v_c, frame, M)     # (F, M)

        pDv = Dv.clone()
        pDv[Ir] = yc_w64[Ir]
        # Ratio-aware projection (v11): lower bound max(v, yc) AND optional upper bound.
        # Use a finite g_max check to avoid 0 * inf = nan when g_max = inf (disabled).
        lower_p = yc_w64[Icp]
        if _math.isfinite(g_max):
            upper_p = (lower_p * g_max).clamp(min=lower_p)
        else:
            upper_p = torch.full_like(lower_p, _math.inf)
        pDv[Icp] = torch.clamp(torch.maximum(pDv[Icp], lower_p), max=upper_p)
        lower_m = yc_w64[Icm]  # negative values
        if _math.isfinite(g_max):
            lower_m_cap = (lower_m * g_max).clamp(max=lower_m)
        else:
            lower_m_cap = torch.full_like(lower_m, -_math.inf)
        pDv[Icm] = torch.clamp(torch.minimum(pDv[Icm], lower_m), min=lower_m_cap)

        zi = v_c - _frana_gpu(Dv - pDv, frame)  # (F, P)

        # ── Step 4: convergence check for still-active frames ────────────
        norms = (zi - zb).norm(dim=-1)  # (F,)
        new_conv = active & (norms <= eps)

        if new_conv.any():
            # Freeze zi at the convergence point — equivalent to the CPU 'break'
            zi_final[new_conv] = zi[new_conv]
            conv |= new_conv
            active = ~conv

        if not active.any():
            break

        # ── Step 7: dual update for active frames only ───────────────────
        # The CPU tight_sspade updates ui AFTER the convergence check,
        # meaning ui is NOT updated on the convergence iteration.
        # Matching that: only active (not yet converged) frames update ui.
        ui[active] = ui[active] + zi[active] - zb[active]

        if i % r == 0:
            k += s

    # Frames that never converged: use their final zi
    if active.any():
        zi_final[active] = zi[active]

    # Downcast the output to float32 for WOLA accumulation
    return _frsyn_gpu(zi_final, frame, M).float(), conv
800
+
801
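The freeze-at-convergence bookkeeping used above (BUG-GPU-1) can be illustrated without torch. The sketch below uses a hypothetical `batched_fixed_point` helper (not part of this module) that runs the same `active`/`conv`/`z_final` pattern in NumPy on a dummy contraction step, freezing each row's result at its own convergence iteration:

```python
import numpy as np

def batched_fixed_point(z0, step, eps=1e-8, max_iter=100):
    """Per-row fixed-point iteration; each row's result is frozen at the
    first iteration where its update norm drops below eps (hypothetical
    helper, mirroring the batched ADMM bookkeeping above)."""
    z = z0.copy()
    F = z.shape[0]
    active = np.ones(F, dtype=bool)
    conv = np.zeros(F, dtype=bool)
    z_final = z.copy()
    for _ in range(max_iter):
        z_new = step(z)                       # all rows advance together
        norms = np.linalg.norm(z_new - z, axis=-1)
        new_conv = active & (norms <= eps)
        z_final[new_conv] = z_new[new_conv]   # freeze at convergence point
        conv |= new_conv
        active = ~conv
        z = z_new
        if not active.any():
            break
    z_final[active] = z[active]               # rows that never converged
    return z_final, conv

# A contraction toward zero: every row converges well within max_iter
zf, conv = batched_fixed_point(np.ones((4, 8)), lambda z: 0.5 * z)
assert conv.all()
assert np.all(np.abs(zf) < 1e-6)
```

The key point mirrored from the GPU code: converged rows keep iterating inside the batch (the kernel is uniform), but their *recorded* result never changes after the freeze, which is equivalent to the CPU path's per-frame `break`.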
def _declip_mono_gpu(
    yc: np.ndarray,
    params: "DeclipParams",
    tau: float,
    ch_label: str,
    device: str,
    progress_ctx = None,
    task_id = None,
) -> "Tuple[np.ndarray, ClippingMasks]":
    """
    GPU-accelerated mono declipping pipeline.

    Three-pass strategy
    -------------------
    Pass 1 (CPU): extract all frames, compute bypass decisions and masks.
    Pass 2 (GPU): pack active frames into a batch tensor and run
                  _sspade_batch_gpu — all frames in one GPU kernel sweep.
    Pass 3 (CPU): sequential WOLA accumulation + RMS level match.

    Progress behaviour
    ------------------
    Bypassed frames advance the progress bar in real time during Pass 1.
    Active (GPU-processed) frames advance the bar immediately after Pass 2
    returns; this appears as a single jump, mirroring how the GPU processes
    the whole batch at once.
    """
    import torch

    # ── DC removal (BUG-4 fix) ───────────────────────────────────────────
    dc_offset = float(np.mean(yc))
    yc = yc - dc_offset

    # ── Ceiling and threshold ────────────────────────────────────────────
    ceiling_pos = float(np.max(yc))
    ceiling_neg = float(-np.min(yc))

    if params.mode == "hard":
        threshold = min(ceiling_pos, ceiling_neg)
    else:
        ceiling = max(ceiling_pos, ceiling_neg)
        threshold = ceiling * (10.0 ** (-params.delta_db / 20.0))

    if threshold <= 0.0:
        return yc.copy(), _compute_masks(yc, 0.0)

    masks = _compute_masks(yc, threshold)

    # ── v11 Feature 1: envelope-based mask dilation (GPU path) ───────────
    if params.mode == "soft" and params.release_ms > 0.0:
        rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0))
        if rel_samp > 0:
            masks = _dilate_masks_soft(masks, yc, rel_samp)

    # ── v11 Feature 4: macro-dynamics upward expansion pre-pass ──────────
    if params.mode == "soft" and params.macro_expand and params.macro_ratio > 1.0:
        yc = _macro_expand_pass(
            yc, params.sample_rate,
            attack_ms=params.macro_attack_ms,
            release_ms=params.macro_release_ms,
            ratio=params.macro_ratio,
        )
        masks = _compute_masks(yc, threshold)
        if params.release_ms > 0.0:
            rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0))
            if rel_samp > 0:
                masks = _dilate_masks_soft(masks, yc, rel_samp)

    n_clipped = int(np.sum(~masks.Ir))
    L = len(yc)

    # ── v11 Feature 2: ratio-aware upper bound (linear) ──────────────────
    g_max = (10.0 ** (params.max_gain_db / 20.0)
             if params.mode == "soft" and params.max_gain_db > 0.0
             else float("inf"))

    if params.verbose:
        ch = f" [{ch_label}]" if ch_label else ""
        tag = "threshold" if params.mode == "soft" else "tau"
        print(f"[declip{ch}] Length    : {L} samples [device: {device}]")
        print(f"[declip{ch}] DC offset : {dc_offset:+.6f} ({dc_offset*100:+.4f}%) → removed")
        if params.mode == "hard":
            print(f"[declip{ch}] {tag:<9} : {threshold:.6f} "
                  f"(pos_peak={ceiling_pos:.6f} neg_peak={ceiling_neg:.6f} using min)")
        else:
            print(f"[declip{ch}] ceiling   : {max(ceiling_pos, ceiling_neg):.6f} "
                  f"(pos={ceiling_pos:.6f} neg={ceiling_neg:.6f})")
            print(f"[declip{ch}] {tag:<9} : {threshold:.6f} "
                  f"(ceiling − {params.delta_db:.2f} dB = "
                  f"{20*np.log10(threshold/max(ceiling_pos, ceiling_neg)):.2f} dBFS)")
        print(f"[declip{ch}] Detected  : {n_clipped}/{L} "
              f"({100*n_clipped/L:.1f}%) "
              f"Icp={int(masks.Icp.sum())} Icm={int(masks.Icm.sum())}")
        print(f"[declip{ch}] Algorithm : {params.algo.upper()} "
              f"frame={params.frame.upper()} mode={params.mode.upper()} "
              f"win={params.window_length} hop={params.hop_length} "
              f"({100*(1-params.hop_length/params.window_length):.0f}% overlap) "
              f"[GPU BATCH on {device}]")
        if params.mode == "soft":
            feats = []
            if params.release_ms > 0: feats.append(f"release_ms={params.release_ms}")
            if params.max_gain_db > 0: feats.append(f"max_gain_db={params.max_gain_db}")
            if params.macro_expand: feats.append(f"macro_expand(ratio={params.macro_ratio})")
            if feats:
                print(f"[declip{ch}] v11 feats : " + " ".join(feats))

    M = params.window_length
    a = params.hop_length
    N = int(np.ceil(L / a))
    win = np.sqrt(hann(M, sym=False))
    t0 = time.time()

    # ── Pass 1 (CPU): frame extraction, bypass filter, mask build ────────
    # wola_meta[i] = (idx1, idx2, seg_len, is_bypassed)
    # active_*     = lists for non-bypassed frames only, in order
    wola_meta       : list = []
    active_yc_w     : list = []  # windowed frames for SPADE
    active_Ir       : list = []
    active_Icp      : list = []
    active_Icm      : list = []
    active_orig_idx : list = []  # original frame index i → maps back into wola_meta

    skipped = 0
    for i in range(N):
        idx1 = i * a
        idx2 = min(idx1 + M, L)
        seg_len = idx2 - idx1
        pad = M - seg_len

        yc_frame = np.zeros(M)
        yc_frame[:seg_len] = yc[idx1:idx2]

        if params.mode == "soft":
            fp = float(np.max(np.abs(yc_frame[:seg_len]))) if seg_len else 0.0
            if fp < threshold:
                wola_meta.append((idx1, idx2, seg_len, True))
                skipped += 1
                if progress_ctx is not None and task_id is not None:
                    progress_ctx.advance(task_id, n_bypassed=skipped,
                                         n_noconv=0, n_done=i + 1, n_total=N)
                continue

        wola_meta.append((idx1, idx2, seg_len, False))
        active_yc_w.append(yc_frame * win)
        active_Ir .append(np.concatenate([masks.Ir [idx1:idx2], np.ones (pad, dtype=bool)]))
        active_Icp.append(np.concatenate([masks.Icp[idx1:idx2], np.zeros(pad, dtype=bool)]))
        active_Icm.append(np.concatenate([masks.Icm[idx1:idx2], np.zeros(pad, dtype=bool)]))
        active_orig_idx.append(len(wola_meta) - 1)  # index into wola_meta

    n_active = len(active_yc_w)
    n_noconv = 0
    x_active_results: dict = {}  # wola_meta_index → x_frame (M,) numpy

    # ── Pass 2 (GPU): batched S-SPADE ────────────────────────────────────
    if n_active > 0:
        yc_batch  = torch.tensor(np.stack(active_yc_w), dtype=torch.float64, device=device)
        Ir_batch  = torch.tensor(np.stack(active_Ir),  dtype=torch.bool, device=device)
        Icp_batch = torch.tensor(np.stack(active_Icp), dtype=torch.bool, device=device)
        Icm_batch = torch.tensor(np.stack(active_Icm), dtype=torch.bool, device=device)

        if params.verbose:
            ch = f" [{ch_label}]" if ch_label else ""
            vmem = ""
            try:
                alloc = torch.cuda.memory_allocated(device) / 1024**2
                vmem = f"  VRAM used ≈ {alloc:.0f} MB"
            except Exception:
                pass
            print(f"[declip{ch}] GPU pass  : {n_active} active frames → "
                  f"{yc_batch.shape} batch{vmem}")

        x_batch, conv_batch = _sspade_batch_gpu(
            yc_batch, Ir_batch, Icp_batch, Icm_batch,
            params.frame, params.s, params.r, params.eps, params.max_iter,
            g_max=g_max,
        )

        x_np    = x_batch.cpu().numpy()
        conv_np = conv_batch.cpu().numpy()
        n_noconv = int((~conv_np).sum())

        for j, meta_idx in enumerate(active_orig_idx):
            x_active_results[meta_idx] = x_np[j]

        # Advance progress bar for GPU-processed frames (bulk update)
        if progress_ctx is not None and task_id is not None:
            for j in range(n_active):
                progress_ctx.advance(task_id, n_bypassed=skipped,
                                     n_noconv=n_noconv,
                                     n_done=skipped + j + 1, n_total=N)

    # ── Pass 3 (CPU): WOLA accumulation ──────────────────────────────────
    x = np.zeros(L)
    norm_win = np.zeros(L)

    for meta_idx, (idx1, idx2, seg_len, is_bypassed) in enumerate(wola_meta):
        if is_bypassed:
            x       [idx1:idx2] += yc[idx1:idx2] * win[:seg_len] ** 2
            norm_win[idx1:idx2] += win[:seg_len] ** 2
        else:
            xf = x_active_results[meta_idx]
            x       [idx1:idx2] += xf[:seg_len] * win[:seg_len]
            norm_win[idx1:idx2] += win[:seg_len] ** 2

    norm_win = np.where(norm_win < 1e-12, 1.0, norm_win)
    x /= norm_win

    # ── Reliable-sample RMS match (BUG-3 fix) ────────────────────────────
    Ir = masks.Ir
    if Ir.sum() > 0:
        rms_in  = float(np.sqrt(np.mean(yc[Ir] ** 2)))
        rms_out = float(np.sqrt(np.mean(x [Ir] ** 2)))
        if rms_out > 1e-12 and rms_in > 1e-12:
            x *= rms_in / rms_out

    if params.verbose:
        ch = f" [{ch_label}]" if ch_label else ""
        skip_pct = 100.0 * skipped / N if N else 0.0
        print(f"[declip{ch}] Frames    : {N} total | "
              f"active={n_active} (GPU) bypassed={skipped} ({skip_pct:.1f}%) "
              f"no_conv={n_noconv} | time: {time.time()-t0:.1f}s")

    return x, masks
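The division by `norm_win` in Pass 3 is safe because a periodic sqrt-Hann window satisfies the constant-overlap-add (COLA) condition: the accumulated squared-window envelope is flat away from the signal edges. A self-contained check (the window and hop sizes here are illustrative, not necessarily the module's defaults):

```python
import numpy as np
from scipy.signal.windows import hann

M, a = 1024, 256                       # window length, hop (75% overlap)
win = np.sqrt(hann(M, sym=False))      # sqrt-Hann analysis = synthesis window
L = 8192
norm = np.zeros(L)
for idx1 in range(0, L - M + 1, a):
    norm[idx1:idx1 + M] += win ** 2    # accumulate squared-window envelope

# Away from the edges, four overlapping Hann phases sum to the constant 2.0
interior = norm[M:-M]
assert np.allclose(interior, 2.0)
```

With `win ** 2` equal to a periodic Hann window and hop M/4, the cosine terms of the four overlapping phases cancel exactly, leaving a constant, so `x /= norm_win` undoes the windowing without ripple.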


def frana(x: np.ndarray, frame: str) -> np.ndarray:
    """
    Analysis operator A : R^N → R^P.

    For a tight Parseval frame A, the synthesis operator is D = A^H, and
    A^H A = I_N (perfect reconstruction property).

    DCT frame (P = N):
        A = orthonormal DCT-II.  A^H = A^{-1} = IDCT.

    RDFT frame (P = 2N, redundancy 2):
        A = [A₁; A₂] where A₁ = DCT-II/√2 and A₂ = DST-II/√2.
        DST-II(x) is computed as DCT-II(x[::-1]).
        Tight frame property: A₁^H A₁ + A₂^H A₂ = I/2 + I/2 = I.  ✓
    """
    if frame == "dct":
        return dct(x, type=2, norm="ortho")
    if frame == "rdft":
        cos_part = dct(x,       type=2, norm="ortho") / np.sqrt(2)  # DCT-II / √2
        sin_part = dct(x[::-1], type=2, norm="ortho") / np.sqrt(2)  # DST-II / √2
        return np.concatenate([cos_part, sin_part])
    raise ValueError(f"Unknown frame '{frame}'")


def frsyn(z: np.ndarray, frame: str, M: int) -> np.ndarray:
    """
    Synthesis operator D = A^H : R^P → R^N.

    DCT frame:
        D = IDCT (same matrix as A for orthonormal DCT).

    RDFT frame:
        D = [A₁^H, A₂^H] applied to [z₁; z₂]:
            A₁^H z₁ = IDCT(z₁) / √2
            A₂^H z₂ = IDCT(z₂)[::-1] / √2   ← correct: flip the OUTPUT
        Note: the original v1 had the bug idct(z₂[::-1]) — flipping the INPUT.
        The correct adjoint of DST-II requires IDCT(z₂)[::-1], NOT IDCT(z₂[::-1]).
    """
    if frame == "dct":
        return idct(z, type=2, norm="ortho")
    if frame == "rdft":
        cos_part = idct(z[:M], type=2, norm="ortho") / np.sqrt(2)
        sin_part = idct(z[M:], type=2, norm="ortho")[::-1] / np.sqrt(2)  # BUG-1 fix
        return cos_part + sin_part
    raise ValueError(f"Unknown frame '{frame}'")


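The tight-frame claims in the two docstrings are easy to verify numerically. The sketch below re-implements `frana`/`frsyn` standalone (same formulas as above, using `scipy.fft`) and checks perfect reconstruction D(A(x)) = x for both frames:

```python
import numpy as np
from scipy.fft import dct, idct

def frana(x, frame):
    if frame == "dct":
        return dct(x, type=2, norm="ortho")
    cos_part = dct(x,       type=2, norm="ortho") / np.sqrt(2)  # A₁ = DCT-II/√2
    sin_part = dct(x[::-1], type=2, norm="ortho") / np.sqrt(2)  # A₂ = DST-II/√2
    return np.concatenate([cos_part, sin_part])

def frsyn(z, frame, M):
    if frame == "dct":
        return idct(z, type=2, norm="ortho")
    cos_part = idct(z[:M], type=2, norm="ortho") / np.sqrt(2)
    sin_part = idct(z[M:], type=2, norm="ortho")[::-1] / np.sqrt(2)  # flip OUTPUT
    return cos_part + sin_part

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
for frame in ("dct", "rdft"):
    # Parseval tight frame: D A = I for both the orthonormal DCT (P = N)
    # and the redundant DCT/DST pair (P = 2N, each half contributing x/2)
    assert np.allclose(frsyn(frana(x, frame), frame, len(x)), x)
```

Note that flipping the input instead of the output in `sin_part` (the v1 bug the docstring mentions) breaks this round-trip for "rdft", which is exactly how BUG-1 was caught.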
# ============================================================================
# Hard thresholding H_k
# ============================================================================

def hard_thresh(u: np.ndarray, k: int) -> np.ndarray:
    """
    Hard-thresholding operator H_k.

    Keeps the k largest-magnitude components of u; sets all others to zero.
    Corresponds to step 2 of both Algorithm 1 and Algorithm 2 in [1][2].

    Parameters
    ----------
    u : coefficient vector (in R^P)
    k : number of non-zero coefficients to retain

    Notes
    -----
    The papers remark that for real signals represented with a complex DFT,
    thresholding should act on conjugate pairs to preserve the real-signal
    structure. Since our RDFT frame uses real DCT/DST, all coefficients
    are real-valued and standard element-wise thresholding is appropriate.
    """
    k = int(np.clip(k, 1, len(u)))
    alpha = np.sort(np.abs(u))[::-1][k - 1]  # k-th largest magnitude
    return u * (np.abs(u) >= alpha)


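A quick worked example of H_k (a standalone copy of the function above). One behavioural detail worth knowing: because the keep-mask uses `>= alpha`, exact magnitude ties at the k-th position retain more than k coefficients; for generic real-valued inputs this does not occur.

```python
import numpy as np

def hard_thresh(u, k):
    k = int(np.clip(k, 1, len(u)))
    alpha = np.sort(np.abs(u))[::-1][k - 1]  # k-th largest magnitude
    return u * (np.abs(u) >= alpha)

u = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
out = hard_thresh(u, 2)
# keeps the two largest magnitudes (-3.0 and 2.0), zeroes the rest
assert np.array_equal(out, np.array([0.0, -3.0, 0.0, 2.0, 0.0]))
```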
# ============================================================================
# Projection onto the consistency set Γ
# ============================================================================

def proj_gamma(
    w: np.ndarray,
    yc: np.ndarray,
    masks: ClippingMasks,
    g_max: float = float("inf"),  # v11: ratio-aware upper bound (linear)
) -> np.ndarray:
    """
    Orthogonal projection onto Γ(y) in the time domain.

    Implements eq. (6) of [2] / eq. (2) of [1]:

        [proj_Γ(w)]_n = y_n           if n ∈ R (reliable)
                      = max{w_n, τ}   if n ∈ H (positive clip, i.e. ≥ τ)
                      = min{w_n, −τ}  if n ∈ L (negative clip, i.e. ≤ −τ)

    Equivalently, using bounding vectors b_L, b_H as in eq. (7)/(9) of [2]:
        proj_{[b_L, b_H]}(w) = min{max{b_L, w}, b_H}

    v11 — ratio-aware upper bound (g_max > 0, default disabled = inf):
        [proj_Γ(w)]_n = clip(max(w_n, yc_n), yc_n, yc_n · g_max)  for n ∈ Icp
                      = clip(min(w_n, yc_n), yc_n · g_max, yc_n)  for n ∈ Icm
    This prevents ADMM from generating transients above the limiter's
    expected maximum gain reduction while still honouring the lower bound.

    Parameters
    ----------
    w     : time-domain signal to project (R^N)
    yc    : original clipped signal (R^N), provides boundary values
    masks : clipping masks (Ir, Icp, Icm)
    g_max : linear gain ceiling (default: inf = no cap, i.e. v10 behaviour).
            Compute from max_gain_db as: g_max = 10 ** (max_gain_db / 20).
    """
    v = w.copy()
    v[masks.Ir] = yc[masks.Ir]  # reliable: fix exactly
    # Positive clipped: lower bound ≥ yc, optional upper bound ≤ yc * g_max
    lo_p = yc[masks.Icp]
    if np.isfinite(g_max):
        hi_p = lo_p * g_max
    else:
        hi_p = np.full_like(lo_p, np.inf)  # avoid 0 * inf = nan
    v[masks.Icp] = np.clip(np.maximum(v[masks.Icp], lo_p), lo_p, hi_p)
    # Negative clipped: upper bound ≤ yc, optional lower bound ≥ yc * g_max
    lo_m = yc[masks.Icm]  # negative values
    if np.isfinite(g_max):
        lo_m_cap = lo_m * g_max  # more negative than lo_m
    else:
        lo_m_cap = np.full_like(lo_m, -np.inf)  # avoid 0 * inf = nan
    v[masks.Icm] = np.clip(np.minimum(v[masks.Icm], lo_m), lo_m_cap, lo_m)
    return v


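The projection bounds are easiest to see on a three-sample toy signal. `Masks` below is a hypothetical namedtuple standing in for this module's `ClippingMasks`; with g_max = 2.0 (a +6 dB cap), candidate values that overshoot are pulled back to the cap:

```python
import numpy as np
from collections import namedtuple

Masks = namedtuple("Masks", "Ir Icp Icm")  # stand-in for ClippingMasks

def proj_gamma(w, yc, masks, g_max=np.inf):
    v = w.copy()
    v[masks.Ir] = yc[masks.Ir]                         # reliable: fix exactly
    lo_p = yc[masks.Icp]
    hi_p = lo_p * g_max if np.isfinite(g_max) else np.full_like(lo_p, np.inf)
    v[masks.Icp] = np.clip(np.maximum(v[masks.Icp], lo_p), lo_p, hi_p)
    lo_m = yc[masks.Icm]
    lo_m_cap = lo_m * g_max if np.isfinite(g_max) else np.full_like(lo_m, -np.inf)
    v[masks.Icm] = np.clip(np.minimum(v[masks.Icm], lo_m), lo_m_cap, lo_m)
    return v

yc = np.array([0.2, 1.0, -1.0])                 # reliable, clipped+, clipped-
m  = Masks(Ir=np.array([True, False, False]),
           Icp=np.array([False, True, False]),
           Icm=np.array([False, False, True]))
w  = np.array([0.9, 5.0, -5.0])                 # candidate estimate
v  = proj_gamma(w, yc, m, g_max=2.0)            # cap growth at 2x (+6 dB)
assert np.allclose(v, [0.2, 2.0, -2.0])
```

Without the cap (g_max = inf), the same call would return [0.2, 5.0, -5.0]: the one-sided bounds alone never pull an overshooting estimate back toward yc.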
# ============================================================================
# S-SPADE  (Algorithm 1 in [2])
# ============================================================================

def tight_sspade(
    yc: np.ndarray,
    masks: ClippingMasks,
    frame: str,
    s: int,
    r: int,
    eps: float,
    max_iter: int,
    g_max: float = float("inf"),  # v11: ratio-aware upper bound
) -> Tuple[np.ndarray, bool]:
    """
    S-SPADE for one windowed audio frame.

    Implements Algorithm 1 from [2], which uses the closed-form projection
    lemma (eq. 12) to make the per-iteration cost equal to A-SPADE:

        ẑ^(i) = v - D^* ( D v - proj_{[b_L,b_H]}(D v) )
        where v = z̄^(i) - u^(i-1)

    State variables
    ---------------
    zi : current estimate in coefficient domain (R^P)
    ui : dual / guidance variable (R^P) — coefficient domain
    k  : current sparsity level (number of non-zero coefficients)

    Convergence criterion (Algorithm 1, row 4 in [2])
    -------------------------------------------------
        ‖ẑ^(i) - z̄^(i)‖₂ ≤ ε
    """
    M = len(yc)
    zi = frana(yc, frame)   # ẑ^(0) = A^H y
    ui = np.zeros_like(zi)  # u^(0) = 0
    k = s
    converged = False

    for i in range(1, max_iter + 1):

        # ── Step 2 : enforce sparsity ────────────────────────────────────
        #     z̄^(i) = H_k( ẑ^(i-1) + u^(i-1) )
        zb = hard_thresh(zi + ui, k)

        # ── Step 3 : project onto Γ via eq.(12) from [2] ─────────────────
        #     v = z̄^(i) - u^(i-1)   (coefficient domain)
        v_coeff = zb - ui
        #     D v                    (time domain)
        Dv = frsyn(v_coeff, frame, M)
        #     proj_Γ(D v)
        proj_Dv = proj_gamma(Dv, yc, masks, g_max=g_max)
        #     ẑ^(i) = v - D^*( D v - proj(D v) )
        zi = v_coeff - frana(Dv - proj_Dv, frame)

        # ── Step 4 : convergence check ───────────────────────────────────
        #     ‖ẑ^(i) - z̄^(i)‖₂ ≤ ε
        if np.linalg.norm(zi - zb) <= eps:
            converged = True
            break

        # ── Step 7 : update dual variable ────────────────────────────────
        #     u^(i) = u^(i-1) + ẑ^(i) - z̄^(i)
        ui = ui + zi - zb

        # ── Sparsity relaxation (rows 9-11 in [2]) ───────────────────────
        if i % r == 0:
            k += s

    # Return time-domain estimate: x̂ = D ẑ^(i)
    return frsyn(zi, frame, M), converged


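A minimal end-to-end run of the iteration: the sketch below is a simplified, DCT-only, single-frame variant of `tight_sspade` (no g_max, helpers inlined; not the module's implementation) applied to a hard-clipped DCT atom. Because the returned estimate equals the projected signal D(ẑ) = proj_Γ(Dv), the Γ constraints hold exactly on the output:

```python
import numpy as np
from scipy.fft import dct, idct

def hard_thresh(u, k):
    k = int(np.clip(k, 1, len(u)))
    alpha = np.sort(np.abs(u))[::-1][k - 1]
    return u * (np.abs(u) >= alpha)

def sspade_dct(yc, Ir, Icp, Icm, s=1, r=1, eps=1e-6, max_iter=300):
    zi = dct(yc, type=2, norm="ortho")          # ẑ^(0) = A y
    ui = np.zeros_like(zi)
    k = s
    for i in range(1, max_iter + 1):
        zb = hard_thresh(zi + ui, k)            # step 2: sparsify
        v = zb - ui
        Dv = idct(v, type=2, norm="ortho")      # D v (time domain)
        p = Dv.copy()                           # step 3: project onto Γ
        p[Ir] = yc[Ir]
        p[Icp] = np.maximum(p[Icp], yc[Icp])
        p[Icm] = np.minimum(p[Icm], yc[Icm])
        zi = v - dct(Dv - p, type=2, norm="ortho")
        if np.linalg.norm(zi - zb) <= eps:      # step 4: converged
            break
        ui = ui + zi - zb                       # step 7: dual update
        if i % r == 0:
            k += s                              # sparsity relaxation
    return idct(zi, type=2, norm="ortho")

n = np.arange(256)
x_true = 0.9 * np.cos(np.pi * 8 * (n + 0.5) / 256)  # a single DCT-II atom
tau = 0.6
yc = np.clip(x_true, -tau, tau)                     # hard-clip at ±0.6
Icp, Icm = yc >= tau, yc <= -tau
Ir = ~(Icp | Icm)
x_hat = sspade_dct(yc, Ir, Icp, Icm)
# Γ constraints hold exactly: reliable samples fixed, clipped samples bounded
assert np.allclose(x_hat[Ir], yc[Ir], atol=1e-9)
assert np.all(x_hat[Icp] >= yc[Icp] - 1e-9)
assert np.all(x_hat[Icm] <= yc[Icm] + 1e-9)
```

On this 1-sparse input the restored peaks typically climb back toward the true 0.9 amplitude, but the hard guarantees asserted above are the ones that hold by construction regardless of convergence.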
# ============================================================================
# A-SPADE  (Algorithm 2 in [2])
# ============================================================================

def tight_aspade(
    yc: np.ndarray,
    masks: ClippingMasks,
    frame: str,
    s: int,
    r: int,
    eps: float,
    max_iter: int,
    g_max: float = float("inf"),  # v11: ratio-aware upper bound
) -> Tuple[np.ndarray, bool]:
    """
    A-SPADE for one windowed audio frame.

    Implements Algorithm 2 from [2]. The projection step uses the
    closed-form formula from eq.(5)/(8) of [2]:

        x̂^(i) = proj_{[b_L, b_H]}( A^H ( z̄^(i) − u^(i-1) ) )
               = proj_Γ( D ( z̄^(i) − u^(i-1) ) )
               = proj_Γ( frsyn(zb − ui) )

    State variables
    ---------------
    xi : current estimate in signal domain (R^N)
    ui : dual / guidance variable (R^P) — COEFFICIENT domain [BUG-2 fix]
    k  : current sparsity level

    Convergence criterion (Algorithm 2, row 4 in [2])
    -------------------------------------------------
        ‖A x̂^(i) − z̄^(i)‖₂ ≤ ε   (coefficient-domain norm) [BUG-2c fix]
    """
    M = len(yc)
    P = _frame_size(M, frame)

    xi = yc.copy()    # x̂^(0) = y
    ui = np.zeros(P)  # u^(0) = 0 — coefficient domain R^P [BUG-2 fix]
    k = s
    converged = False

    for i in range(1, max_iter + 1):

        # ── Step 2 : enforce sparsity ────────────────────────────────────
        #     z̄^(i) = H_k( A x̂^(i-1) + u^(i-1) )
        #     Note: frana(xi) + ui, NOT frana(xi + frsyn(ui))  [BUG-2a fix]
        zb = hard_thresh(frana(xi, frame) + ui, k)

        # ── Step 3 : project onto Γ ──────────────────────────────────────
        #     x̂^(i) = proj_Γ( A^H( z̄^(i) - u^(i-1) ) )
        #            = proj_Γ( frsyn( zb - ui ) )            [BUG-2b fix]
        xi_new = proj_gamma(frsyn(zb - ui, frame, M), yc, masks, g_max=g_max)

        # ── Step 4 : convergence check ───────────────────────────────────
        #     ‖A x̂^(i) - z̄^(i)‖₂ ≤ ε  (coefficient-domain norm) [BUG-2c fix]
        if np.linalg.norm(frana(xi_new, frame) - zb) <= eps:
            converged = True
            xi = xi_new
            break

        # ── Step 7 : update dual variable ────────────────────────────────
        #     u^(i) = u^(i-1) + A x̂^(i) - z̄^(i)             [BUG-2d fix]
        ui = ui + frana(xi_new, frame) - zb
        xi = xi_new

        # ── Sparsity relaxation ──────────────────────────────────────────
        if i % r == 0:
            k += s

    return xi, converged


# ============================================================================
# Main declipping pipeline
# ============================================================================

def _compute_masks(yc: np.ndarray, threshold: float) -> ClippingMasks:
    """
    Compute clipping/limiting masks from a 1-D signal and a detection threshold.

    Works for both modes:
        hard (mode='hard'): threshold = tau (samples exactly at the digital ceiling)
        soft (mode='soft'): threshold = tau * 10^(-delta_db/20) (limiter threshold)

    In soft mode, samples above the threshold have their TRUE value constrained
    to be ≥ their current (limited) value. proj_gamma already implements this
    correctly via v[Icp] = max(v[Icp], yc[Icp]) — since yc[Icp] is the actual
    limited value, not tau. No change to the projection operator is needed.
    """
    Icp = yc >= threshold
    Icm = yc <= -threshold
    Ir = ~(Icp | Icm)
    return ClippingMasks(Ir=Ir, Icp=Icp, Icm=Icm)


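A toy check of the mask construction, with a hypothetical `Masks` namedtuple standing in for `ClippingMasks`: for any positive threshold, every sample lands in exactly one of Ir/Icp/Icm, which is what the WOLA and RMS-match passes rely on.

```python
import numpy as np
from collections import namedtuple

Masks = namedtuple("Masks", "Ir Icp Icm")  # stand-in for ClippingMasks

def compute_masks(yc, threshold):
    Icp = yc >= threshold
    Icm = yc <= -threshold
    Ir = ~(Icp | Icm)
    return Masks(Ir=Ir, Icp=Icp, Icm=Icm)

yc = np.array([0.0, 0.5, 0.95, -0.95, -0.5])
m = compute_masks(yc, 0.9)
assert np.array_equal(m.Icp, [False, False, True, False, False])
assert np.array_equal(m.Icm, [False, False, False, True, False])
# the three masks partition every sample exactly once
assert np.all(m.Ir ^ m.Icp ^ m.Icm)
assert not np.any(m.Icp & m.Icm)
```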
# ============================================================================
# v11 — Delimiting helper functions
# ============================================================================

def _dilate_masks_soft(
    masks: ClippingMasks,
    yc: np.ndarray,
    release_samples: int,
) -> ClippingMasks:
    """
    Forward morphological dilation of the soft-mode clipping masks.

    A mastering limiter does not merely clip the peak sample; its release
    time causes gain reduction to persist for `release_samples` samples after
    each peak. Without dilation, those post-peak samples are pinned as
    "reliable" (Ir), forcing the ADMM solver to anchor the reconstruction to
    artificially attenuated values and producing the pumping artifact.

    Algorithm
    ---------
    For each True position in Icp or Icm, the following `release_samples`
    positions are also flagged as constrained (Icp/Icm). Implemented as a
    causal linear convolution:

        dilated = convolve(mask, ones(release_samples + 1))[:N] > 0

    Newly flagged samples are reclassified by polarity:
        yc[n] >= 0 → Icp (true value ≥ yc[n], always satisfied by the limiter model)
        yc[n] <  0 → Icm (true value ≤ yc[n], same reasoning)

    This is mathematically valid because a gain-reducing limiter always
    produces |yc[n]| ≤ |true[n]| on every attenuated sample.

    Parameters
    ----------
    masks           : original ClippingMasks from _compute_masks
    yc              : DC-removed signal (same length as masks)
    release_samples : dilation width = round(release_ms * sr / 1000)

    Returns
    -------
    ClippingMasks with expanded Icp, Icm and correspondingly shrunk Ir.
    """
    if release_samples <= 0:
        return masks

    N = len(yc)
    kern = np.ones(release_samples + 1, dtype=np.float64)

    # Causal forward dilation: each True position infects the next
    # release_samples positions (conv[:N] gives the causal output).
    dil_cp = np.convolve(masks.Icp.astype(np.float64), kern)[:N] > 0
    dil_cm = np.convolve(masks.Icm.astype(np.float64), kern)[:N] > 0

    # Union of original and dilated masks
    new_Icp = dil_cp | dil_cm  # will be filtered by polarity below
    new_Icm = dil_cp | dil_cm

    # Assign dilated samples by polarity of the limited signal
    new_Icp = new_Icp & (yc >= 0)  # positive half
    new_Icm = new_Icm & (yc < 0)   # negative half

    # Reliable = everything not in Icp or Icm
    new_Ir = ~(new_Icp | new_Icm)

    return ClippingMasks(Ir=new_Ir, Icp=new_Icp, Icm=new_Icm)


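The causal convolution trick can be demonstrated directly: a single flagged peak at index 3 with release_samples = 2 also flags indices 4 and 5, and nothing before the peak (the dilation is strictly forward in time):

```python
import numpy as np

def dilate_forward(mask, release_samples):
    """Causal dilation: each True position also flags the next
    release_samples positions (linear convolution, truncated to N)."""
    kern = np.ones(release_samples + 1)
    return np.convolve(mask.astype(float), kern)[:len(mask)] > 0

m = np.zeros(10, dtype=bool)
m[3] = True
d = dilate_forward(m, 2)  # flag the 2 samples following each peak
assert np.array_equal(d, np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 0], dtype=bool))
```

Truncating the full convolution to the first N samples is what makes the operation causal: mass only spills forward from each flagged position, matching a limiter's release behaviour.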
def _lr_split(x: np.ndarray, fc: float, sr: int) -> "Tuple[np.ndarray, np.ndarray]":
    """
    Phase-perfect Linkwitz-Riley crossover at frequency `fc` Hz.

    Returns (lp, hp) such that lp + hp == x exactly (perfect reconstruction
    by construction: hp = x - lp). The LP is a zero-phase 4th-order
    Butterworth realised with sosfiltfilt.

    A 4th-order zero-phase Butterworth (sosfiltfilt of 2nd-order coefficients)
    has the same amplitude response as LR4 at the crossover point (−6 dB at
    fc) and is computationally convenient. Summing LP + HP = x eliminates
    any phase-cancellation artifact at the crossover frequency.

    Parameters
    ----------
    x  : 1-D signal array
    fc : crossover frequency in Hz (clamped to [1, sr/2 − 1])
    sr : sample rate in Hz
    """
    from scipy.signal import butter, sosfiltfilt
    fc_safe = float(np.clip(fc, 1.0, sr / 2.0 - 1.0))
    sos = butter(2, fc_safe, btype="low", fs=sr, output="sos")
    lp = sosfiltfilt(sos, x)
    hp = x - lp  # perfect reconstruction: no leakage at any frequency
    return lp, hp


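The perfect-reconstruction claim is checkable directly: lp + hp equals x by construction, and a two-tone input splits as expected, with the 100 Hz component landing mostly in lp and the 5 kHz component mostly in hp. A standalone sketch of the same construction:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lr_split(x, fc, sr):
    fc_safe = float(np.clip(fc, 1.0, sr / 2.0 - 1.0))
    sos = butter(2, fc_safe, btype="low", fs=sr, output="sos")
    lp = sosfiltfilt(sos, x)       # zero-phase 4th-order Butterworth LP
    return lp, x - lp              # hp = x - lp → lp + hp == x exactly

sr = 48000
t = np.arange(4096) / sr
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)
lp, hp = lr_split(x, 1000.0, sr)
assert np.allclose(lp + hp, x)     # perfect reconstruction by construction
assert np.std(lp) > np.std(hp)     # low tone dominates lp, high tone hp
```

Note the asymmetry of the design: the HP branch is defined as the residual, so reconstruction is exact regardless of the LP filter's shape; only the spectral split depends on the Butterworth response.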
def _macro_expand_pass(
    yc: np.ndarray,
    sr: int,
    attack_ms: float = 10.0,
    release_ms: float = 200.0,
    ratio: float = 1.2,
) -> np.ndarray:
    """
    Macro-dynamics upward expansion pre-pass.

    Restores the slow (>21 ms) amplitude modulation suppressed by a mastering
    limiter's release time — the "body compression" that SPADE cannot undo
    because it operates frame-by-frame at ~21 ms windows.

    Algorithm
    ---------
    1. Compute a zero-phase smoothed peak envelope using sosfiltfilt.
       The attack and release IIR time constants map to Butterworth LP cutoffs:
           fc_att = 2.2 / (2π · attack_s)   [−3 dB at attack cutoff]
           fc_rel = 2.2 / (2π · release_s)
       Two passes (attack on rising, release on falling) are approximated by
       using the *slower* of the two for the LP filter (conservative choice).

    2. Threshold: 80th percentile of the non-silent envelope values.
       Above the threshold the signal is already "loud" → no expansion.
       Below the threshold it was compressed → apply upward expansion gain.

    3. Expansion gain (standard upward-expander transfer function):
           g(n) = (env(n) / threshold)^(1/ratio − 1)   env < threshold
                = 1.0                                  otherwise
       For ratio > 1, (1/ratio − 1) < 0, so g > 1 when env < threshold
       (quiet sections get boosted).

    4. The gain is smoothed with a 20 Hz LP to prevent clicks, then
       hard-clipped to [1.0, ∞) so the pre-pass only expands — it never
       attenuates.

    Parameters
    ----------
    yc         : 1-D float signal (DC-removed, level-normalised)
    sr         : sample rate in Hz
    attack_ms  : expander attack time constant (ms); typically 5–20 ms
    release_ms : expander release time constant (ms); typically 100–300 ms
    ratio      : expansion ratio > 1.0; 1.0 = bypass, 1.2 = gentle

    Returns
    -------
    Expanded signal with the same length as yc.
    """
    from scipy.signal import butter, sosfiltfilt

    if ratio <= 1.0:
        return yc.copy()

    x_abs = np.abs(yc)

    # ── Envelope follower ────────────────────────────────────────────────
    # Use the *slower* time constant (release) for the zero-phase LP filter.
    # This approximates a peak-hold envelope that attacks fast and releases
    # slowly.
    rel_s = max(release_ms, attack_ms) / 1000.0
    fc_env = min(2.2 / (2.0 * np.pi * rel_s), sr / 2.0 - 1.0)
    sos_e = butter(2, fc_env, fs=sr, output="sos")
    env = sosfiltfilt(sos_e, x_abs)
    env = np.maximum(env, 1e-10)

    # ── Threshold: 80th percentile of non-silent samples ─────────────────
    mask_sig = env > 1e-6
    if not mask_sig.any():
        return yc.copy()
    thresh = float(np.percentile(env[mask_sig], 80))
    thresh = max(thresh, 1e-8)

    # ── Expansion gain ───────────────────────────────────────────────────
    exponent = 1.0 / ratio - 1.0  # negative for ratio > 1
    g = np.where(env >= thresh,
                 1.0,
                 (env / thresh) ** exponent)

    # ── Smooth gain to avoid clicks (~20 Hz LP) ──────────────────────────
    fc_g = min(20.0, sr / 2.0 - 1.0)
    sos_g = butter(2, fc_g, fs=sr, output="sos")
    g = sosfiltfilt(sos_g, g)
    g = np.maximum(g, 1.0)  # upward only — never attenuate

    return yc * g


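Step 3's transfer function can be evaluated on a few envelope values in isolation. With ratio = 1.2, an envelope 6 dB below threshold receives roughly 1 dB of upward gain; the 20 Hz gain smoothing and the final `yc * g` application are omitted here to keep the curve itself visible:

```python
import numpy as np

def expander_gain(env, thresh, ratio):
    """Upward-expander gain curve: boost below threshold, unity above."""
    exponent = 1.0 / ratio - 1.0  # negative for ratio > 1
    return np.where(env >= thresh, 1.0, (env / thresh) ** exponent)

env = np.array([0.05, 0.1, 0.2, 0.4])
g = expander_gain(env, thresh=0.2, ratio=1.2)
assert np.allclose(g[2:], 1.0)        # at/above threshold: unity gain
assert np.all(g[:2] > 1.0)            # below threshold: boosted
assert g[0] > g[1]                    # quieter → more boost
g_db = 20 * np.log10(g[1])            # env at -6 dB relative to threshold
assert 0.9 < g_db < 1.1               # ≈ 1 dB of upward gain at ratio 1.2
```

This matches the usual downward-compressor intuition run in reverse: the exponent 1/ratio − 1 is the slope deviation from unity on a dB-dB plot, so gentler ratios give proportionally smaller boosts.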
def _declip_mono(
    yc: np.ndarray,
    params: DeclipParams,
    tau: float,              # pre-computed global ceiling — used only as a hint;
                             # always recomputed internally after DC removal.
    ch_label: str = "",
    frame_workers: int = 1,  # v8: intra-channel frame-level parallelism
    progress_ctx = None,     # v9: shared _*Progress instance (or None)
    task_id = None,          # v9: task handle returned by progress_ctx.add_task
) -> Tuple[np.ndarray, ClippingMasks]:
    """
    Core mono declipping / delimiting pipeline (internal).

    Parameters
    ----------
    yc       : 1-D float array — one channel of the input signal
    params   : DeclipParams
    tau      : ceiling hint (pre-computed in declip()); kept for API compat,
               recomputed internally after DC removal.
    ch_label : string used in verbose output, e.g. "L" or "R"

    DC removal (BUG-4 fix, v5)
    --------------------------
    A DC offset as small as 0.3% makes the global peak asymmetric, causing
    the lower-polarity ceiling to fall just below tau and be misclassified as
    reliable. Fix: subtract the per-channel mean before all threshold
    computations. The DC is discarded on output (a recording artefact, not
    musical content).

    Soft mode (v6)
    --------------
    When params.mode == 'soft', the threshold is set to:
        threshold = ceiling * 10^(-delta_db / 20)
    where ceiling = max(|yc|) after DC removal.
    This marks all samples above the limiter threshold as potentially
    attenuated. The BUG-4 half-wave issue is inherently avoided in soft mode
    because the threshold sits delta_db dB BELOW the ceiling; small DC
    asymmetries (typically < 0.05 dB) cannot push the opposite polarity's
    ceiling below the threshold. DC removal is still performed for
    cleanliness.

    proj_gamma correctness in soft mode
    -----------------------------------
    For limited samples, the true value satisfies: true ≥ yc[n] (one-sided).
    proj_gamma already implements exactly this:
        v[Icp] = max(v[Icp], yc[Icp])
    Since yc[Icp] here is the *actual limited value* (not tau), the
    constraint is correct. No change to tight_sspade or tight_aspade is
    needed.
    """
    # ── DC removal (BUG-4 fix, applies to both modes) ────────────────────
    dc_offset = float(np.mean(yc))
    yc = yc - dc_offset  # DC-free working copy

    # ── Ceiling and threshold ────────────────────────────────────────────
    ceiling_pos = float(np.max(yc))   # positive peak after DC removal
    ceiling_neg = float(-np.min(yc))  # negative peak (absolute value)

    if params.mode == "hard":
        # BUG-4 fix: use min(pos, neg) so both half-waves are always detected
        threshold = min(ceiling_pos, ceiling_neg)
    else:
        # soft mode: threshold = ceiling − delta_db dB
        # Use max(pos, neg) for the ceiling — we want the actual brickwall
        # level, and the threshold is well below it so small asymmetries
        # don't matter.
        ceiling = max(ceiling_pos, ceiling_neg)
        threshold = ceiling * (10.0 ** (-params.delta_db / 20.0))

    if threshold <= 0.0:
        return yc.copy(), _compute_masks(yc, 0.0)

    masks = _compute_masks(yc, threshold)

    # ── v11 Feature 1: envelope-based mask dilation ──────────────────────
    if params.mode == "soft" and params.release_ms > 0.0:
        rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0))
        if rel_samp > 0:
            masks = _dilate_masks_soft(masks, yc, rel_samp)

    # ── v11 Feature 4: macro-dynamics upward expansion pre-pass ──────────
    if params.mode == "soft" and params.macro_expand and params.macro_ratio > 1.0:
        yc = _macro_expand_pass(
            yc, params.sample_rate,
            attack_ms=params.macro_attack_ms,
            release_ms=params.macro_release_ms,
            ratio=params.macro_ratio,
        )
        # Recompute masks on the expanded signal so Ir values are correct
        masks = _compute_masks(yc, threshold)
        if params.release_ms > 0.0:
            rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0))
            if rel_samp > 0:
                masks = _dilate_masks_soft(masks, yc, rel_samp)

    n_clipped = int(np.sum(~masks.Ir))
    L = len(yc)

    # ── v11 Feature 2: ratio-aware upper bound (linear) ──────────────────
    g_max = (10.0 ** (params.max_gain_db / 20.0)
             if params.mode == "soft" and params.max_gain_db > 0.0
             else float("inf"))

    if params.verbose:
        ch = (" [" + ch_label + "]") if ch_label else ""
        tag = "threshold" if params.mode == "soft" else "tau"
        print(f"[declip{ch}] Length    : {L} samples")
        print(f"[declip{ch}] DC offset : {dc_offset:+.6f} ({dc_offset*100:+.4f}%) → removed")
        if params.mode == "hard":
            print(f"[declip{ch}] {tag:<9} : {threshold:.6f} "
                  f"(pos_peak={ceiling_pos:.6f} neg_peak={ceiling_neg:.6f} using min)")
        else:
+ print(f"[declip{ch}] ceiling : {max(ceiling_pos, ceiling_neg):.6f} "
1616
+ f"(pos={ceiling_pos:.6f} neg={ceiling_neg:.6f})")
1617
+ print(f"[declip{ch}] {tag:<9} : {threshold:.6f} "
+ f"(ceiling − {params.delta_db:.2f} dB = "
+ f"{20*np.log10(threshold/max(ceiling_pos,ceiling_neg)):.2f} dB re ceiling)")
+ print(f"[declip{ch}] Detected : {n_clipped}/{L} "
+ f"({100*n_clipped/L:.1f}%) "
+ f"Icp={int(masks.Icp.sum())} Icm={int(masks.Icm.sum())}")
+ print(f"[declip{ch}] Algorithm : {params.algo.upper()} "
+ f"frame={params.frame.upper()} mode={params.mode.upper()} "
+ f"win={params.window_length} hop={params.hop_length} "
+ f"({100*(1-params.hop_length/params.window_length):.0f}% overlap)")
+ if params.mode == "soft":
+ feats = []
+ if params.release_ms > 0: feats.append(f"release_ms={params.release_ms}")
+ if params.max_gain_db > 0: feats.append(f"max_gain_db={params.max_gain_db}")
+ if params.macro_expand: feats.append(f"macro_expand(ratio={params.macro_ratio})")
+ if feats:
+ print(f"[declip{ch}] v11 feats : " + " ".join(feats))
+
+ spade_fn = tight_sspade if params.algo == "sspade" else tight_aspade
+
+ M = params.window_length
+ a = params.hop_length
+ N = int(np.ceil(L / a))
+ win = np.sqrt(hann(M, sym=False)) # sqrt-Hann: satisfies COLA
+ x = np.zeros(L)
+ norm_win = np.zeros(L)
+ no_conv = 0
+ skipped = 0 # frames bypassed by frame-adaptive threshold (soft only)
+ t0 = time.time()
+
+ # ── Per-frame worker (pure computation, no shared-state writes) ──────
+ # Returns all data needed for WOLA accumulation; the accumulation itself
+ # is always done sequentially to avoid race conditions on x / norm_win.
+ def _process_frame(i: int):
+ idx1 = i * a
+ idx2 = min(idx1 + M, L)
+ seg_len = idx2 - idx1
+ pad = M - seg_len
+
+ yc_frame = np.zeros(M)
+ yc_frame[:seg_len] = yc[idx1:idx2]
+
+ # ── Frame-adaptive bypass (soft mode only, v7) ───────────────────
+ if params.mode == "soft":
+ frame_peak = float(np.max(np.abs(yc_frame[:seg_len]))) if seg_len > 0 else 0.0
+ if frame_peak < threshold:
+ return idx1, idx2, seg_len, None, False, True # bypassed=True
+
+ yc_frame_w = yc_frame * win
+
+ fm = ClippingMasks(
+ Ir = np.concatenate([masks.Ir [idx1:idx2], np.ones (pad, dtype=bool)]),
+ Icp = np.concatenate([masks.Icp[idx1:idx2], np.zeros(pad, dtype=bool)]),
+ Icm = np.concatenate([masks.Icm[idx1:idx2], np.zeros(pad, dtype=bool)]),
+ )
+
+ x_frame, conv = spade_fn(
+ yc_frame_w, fm,
+ params.frame, params.s, params.r, params.eps, params.max_iter,
+ g_max=g_max,
+ )
+ return idx1, idx2, seg_len, x_frame, conv, False # bypassed=False
+
+ # ── Parallel SPADE compute (v8) + live progress (v9) ─────────────────
+ # scipy.fft.dct releases the GIL → threads run truly in parallel on DCT.
+ # WOLA accumulation (cheap) is kept sequential to avoid data races.
+ #
+ # Progress strategy:
+ # parallel: pool.submit + as_completed → advance bar as each frame lands
+ # sequential: plain loop with advance after each frame
+ #
+ # frame_results[i] is stored by *original index* so WOLA order is preserved.
+ frame_results: list = [None] * N
+ _n_bypassed = 0
+ _n_noconv = 0
+
+ def _advance(n_done: int):
+ if progress_ctx is not None and task_id is not None:
+ progress_ctx.advance(task_id,
+ n_bypassed=_n_bypassed,
+ n_noconv=_n_noconv,
+ n_done=n_done,
+ n_total=N)
+
+ if frame_workers > 1:
+ from concurrent.futures import as_completed
+ with ThreadPoolExecutor(max_workers=frame_workers) as pool:
+ future_to_idx = {pool.submit(_process_frame, i): i for i in range(N)}
+ n_done = 0
+ for future in as_completed(future_to_idx):
+ i = future_to_idx[future]
+ frame_results[i] = future.result()
+ n_done += 1
+ # Peek at result to update live counters before advancing bar
+ *_, conv, bypassed = frame_results[i]
+ if bypassed:
+ _n_bypassed += 1
+ elif not conv:
+ _n_noconv += 1
+ _advance(n_done)
+ else:
+ for i in range(N):
+ frame_results[i] = _process_frame(i)
+ *_, conv, bypassed = frame_results[i]
+ if bypassed:
+ _n_bypassed += 1
+ elif not conv:
+ _n_noconv += 1
+ _advance(i + 1)
+
+ # ── Sequential WOLA accumulation ─────────────────────────────────────
+ for idx1, idx2, seg_len, x_frame, conv, bypassed in frame_results:
+ if bypassed:
+ yc_seg = yc[idx1:idx2]
+ x [idx1:idx2] += yc_seg * win[:seg_len] ** 2
+ norm_win[idx1:idx2] += win[:seg_len] ** 2
+ skipped += 1
+ else:
+ if not conv:
+ no_conv += 1
+ x [idx1:idx2] += x_frame[:seg_len] * win[:seg_len]
+ norm_win[idx1:idx2] += win[:seg_len] ** 2
+
+ # WOLA normalisation
+ norm_win = np.where(norm_win < 1e-12, 1.0, norm_win)
+ x /= norm_win
+
+ # ── Reliable-sample level matching (BUG-3 fix) ───────────────────────
+ # Rescale output so its RMS on reliable samples matches the input RMS.
+ # Eliminates per-channel WOLA gain drift (up to 5 dB in stereo material).
+ Ir = masks.Ir
+ if Ir.sum() > 0:
+ rms_in = float(np.sqrt(np.mean(yc[Ir] ** 2)))
+ rms_out = float(np.sqrt(np.mean(x[Ir] ** 2)))
+ if rms_out > 1e-12 and rms_in > 1e-12:
+ x *= rms_in / rms_out
+
+ if params.verbose:
+ ch = (" [" + ch_label + "]") if ch_label else ""
+ active = N - skipped
+ skip_pct = 100.0 * skipped / N if N > 0 else 0.0
+ if params.mode == "soft" and skipped > 0:
+ print(f"[declip{ch}] Frames : {N} total | "
+ f"active={active} bypassed={skipped} ({skip_pct:.1f}%) "
+ f"no_conv={no_conv} | time: {time.time()-t0:.1f}s")
+ else:
+ print(f"[declip{ch}] Frames : {N} (no conv: {no_conv}) "
+ f"time: {time.time()-t0:.1f}s")
+
+ return x, masks
+
+
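The hard/soft threshold rule used at the top of this function can be exercised on its own. The sketch below is illustrative, not the function itself; `detection_threshold` is a hypothetical standalone helper whose names mirror `_declip_mono`:

```python
# Standalone sketch of the detection-threshold rule described above.
# "hard" uses min(pos, neg) peaks (the BUG-4 fix); "soft" sits delta_db dB
# below the larger peak. Hypothetical helper, for illustration only.
import numpy as np

def detection_threshold(yc: np.ndarray, mode: str, delta_db: float = 1.0) -> float:
    yc = yc - np.mean(yc)                     # DC removal first (BUG-4 fix)
    ceiling_pos = float(np.max(yc))
    ceiling_neg = float(-np.min(yc))
    if mode == "hard":
        return min(ceiling_pos, ceiling_neg)  # both half-waves detected
    ceiling = max(ceiling_pos, ceiling_neg)
    return ceiling * 10.0 ** (-delta_db / 20.0)

# A symmetric sine clipped at ±0.9 (10 full periods → zero mean)
sig = np.clip(np.sin(2 * np.pi * np.arange(1000) / 100), -0.9, 0.9)
print(detection_threshold(sig, "hard"))        # → 0.9
print(detection_threshold(sig, "soft", 2.5))   # ≈ 0.9 · 10^(-0.125) ≈ 0.675
```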
+ def declip(
+ yc: np.ndarray,
+ params: "DeclipParams | None" = None,
+ ) -> "Tuple[np.ndarray, Union[ClippingMasks, List[ClippingMasks]]]":
+ """
+ Declip a hard-clipped audio signal — mono or multi-channel.
+
+ Accepts either:
+ * a 1-D array (N_samples,) — mono
+ * a 2-D array (N_samples, N_channels) — stereo / surround
+
+ For multi-channel input, tau is detected from the global peak across
+ ALL channels, modelling the single hardware clipping threshold correctly.
+ Each channel is then processed independently. Parallel processing is
+ controlled by params.n_jobs.
+
+ Parameters
+ ----------
+ yc : float array, shape (N,) or (N, C)
+ params : DeclipParams (defaults used if None)
+
+ Returns
+ -------
+ x : declipped signal, same shape as yc
+ masks : ClippingMasks (mono input)
+ list of ClippingMasks (multi-channel input, one per channel)
+ """
+ if params is None:
+ params = DeclipParams()
+
+ yc = np.asarray(yc, dtype=float)
+
+ # ── v11 Feature 3: Multiband (Linkwitz-Riley) routing ───────────────────
+ # When multiband=True, we split the signal into frequency bands, process
+ # each independently with its own delta_db, then sum back. The split
+ # uses perfect-reconstruction LP+HP pairs (HP = input − LP), so the sum
+ # always reconstructs the original without leakage artifacts.
+ # This wrapper recurses into declip() with multiband=False for each band.
+ if params.multiband and params.mode == "soft":
+ from dataclasses import replace as _dc_replace
+ crossovers = list(params.band_crossovers)
+ n_bands = len(crossovers) + 1
+ sr = params.sample_rate
+
+ # Per-band delta_db: use band_delta_db if fully specified, else fall back
+ if len(params.band_delta_db) == n_bands:
+ band_deltas = list(params.band_delta_db)
+ else:
+ band_deltas = [params.delta_db] * n_bands
+
+ # Split signal into bands using cascaded LP / HP = input − LP
+ if yc.ndim == 2:
+ # Process each channel’s bands independently, same crossovers
+ n_samp, n_ch = yc.shape
+ out = np.zeros_like(yc)
+ all_masks = []
+ for c in range(n_ch):
+ ch_sig = yc[:, c]
+ ch_out = np.zeros(n_samp)
+ ch_masks = []
+ remainder = ch_sig.copy()
+ for b, (fc, d_db) in enumerate(zip(crossovers, band_deltas[:-1])):
+ lp, remainder = _lr_split(remainder, fc, sr)
+ band_params = _dc_replace(params, multiband=False, delta_db=d_db)
+ band_fixed, band_mask = declip(lp, band_params)
+ ch_out += band_fixed
+ ch_masks.append(band_mask)
+ # Last band (remainder is the HP)
+ band_params = _dc_replace(params, multiband=False, delta_db=band_deltas[-1])
+ band_fixed, band_mask = declip(remainder, band_params)
+ ch_out += band_fixed
+ ch_masks.append(band_mask)
+ out[:, c] = ch_out
+ all_masks.append(ch_masks)
+ return out, all_masks
+ else:
+ # Mono multiband
+ out = np.zeros_like(yc)
+ all_masks = []
+ remainder = yc.copy()
+ for b, (fc, d_db) in enumerate(zip(crossovers, band_deltas[:-1])):
+ lp, remainder = _lr_split(remainder, fc, sr)
+ band_params = _dc_replace(params, multiband=False, delta_db=d_db)
+ band_fixed, band_mask = declip(lp, band_params)
+ out += band_fixed
+ all_masks.append(band_mask)
+ # Last band
+ band_params = _dc_replace(params, multiband=False, delta_db=band_deltas[-1])
+ band_fixed, band_mask = declip(remainder, band_params)
+ out += band_fixed
+ all_masks.append(band_mask)
+ return out, all_masks
+
+ # ── Normalisation fix ────────────────────────────────────────────────
+ # SPADE recovers values *above* tau at formerly-clipped positions.
+ # If the input is at the digital ceiling (tau = 1.0), those recovered
+ # values exceed 1.0 and any hard np.clip(-1,1) by the caller destroys
+ # all improvement, making the output identical to the input.
+ # Fix: normalise so tau < 1.0 before processing; undo normalisation after.
+ NORM_TARGET = 0.9
+ global_peak = float(np.max(np.abs(yc)))
+ if global_peak > NORM_TARGET:
+ scale = NORM_TARGET / global_peak # < 1
+ yc_norm = yc * scale
+ else:
+ scale = 1.0
+ yc_norm = yc
+
+ # ── GPU detection (once, shared across all channels) ─────────────────
+ gpu_dev = _resolve_gpu_device(params)
+
+ # ── Mono path ────────────────────────────────────────────────────────
+ if yc_norm.ndim == 1:
+ tau = float(np.max(np.abs(yc_norm)))
+ if tau == 0.0:
+ warnings.warn("Input signal is all zeros.")
+ return yc.copy(), _compute_masks(yc, 0.0)
+
+ if gpu_dev is not None:
+ # GPU path: single channel, no threading needed
+ if params.show_progress:
+ N_frames = int(np.ceil(len(yc_norm) / params.hop_length))
+ prog = _make_progress(1)
+ with prog:
+ task = prog.add_task("mono", total=N_frames)
+ fixed, masks = _declip_mono_gpu(
+ yc_norm, params, tau, ch_label="mono",
+ device=gpu_dev, progress_ctx=prog, task_id=task,
+ )
+ else:
+ fixed, masks = _declip_mono_gpu(
+ yc_norm, params, tau, ch_label="mono", device=gpu_dev,
+ )
+ else:
+ # CPU path (v8/v9)
+ n_workers = params.n_jobs if params.n_jobs > 0 else os.cpu_count() or 1
+ if params.show_progress:
+ N_frames = int(np.ceil(len(yc_norm) / params.hop_length))
+ prog = _make_progress(1)
+ with prog:
+ task = prog.add_task("mono", total=N_frames)
+ fixed, masks = _declip_mono(
+ yc_norm, params, tau,
+ frame_workers=n_workers,
+ progress_ctx=prog, task_id=task,
+ )
+ else:
+ fixed, masks = _declip_mono(yc_norm, params, tau, frame_workers=n_workers)
+ return fixed / scale, masks
+
+ # ── Multi-channel path ───────────────────────────────────────────────
+ if yc_norm.ndim != 2:
+ raise ValueError(
+ f"yc must be 1-D (mono) or 2-D (samples x channels), got shape {yc.shape}"
+ )
+
+ n_samples, n_ch = yc_norm.shape
+
+ # Global tau: same hardware threshold for all channels
+ tau = float(np.max(np.abs(yc_norm)))
+ if tau == 0.0:
+ warnings.warn("Input signal is all zeros.")
+ empty_masks = [_compute_masks(yc[:, c], 0.0) for c in range(n_ch)]
+ return yc.copy(), empty_masks
+
+ # Channel labels: L/R for stereo, Ch0/Ch1/… for more
+ if n_ch == 2:
+ labels = ["L", "R"]
+ else:
+ labels = ["Ch" + str(c) for c in range(n_ch)]
+
+ if params.verbose:
+ print(f"[declip] {n_ch}-channel signal | "
+ f"tau={tau:.4f} | mode={params.mode.upper()} | "
+ + (f"device={gpu_dev}" if gpu_dev else f"n_jobs={params.n_jobs}")
+ + (f" | delta_db={params.delta_db:.2f}" if params.mode == "soft" else ""))
+
+ # ── Parallel / sequential dispatch ───────────────────────────────────
+ N_frames = int(np.ceil(n_samples / params.hop_length))
+ prog = _make_progress(n_ch) if params.show_progress else None
+
+ if gpu_dev is not None:
+ # GPU path: channels processed sequentially (GPU already uses all VRAM
+ # for the frame batch; no benefit from running channels concurrently)
+ def _process_channel(c: int, task_id=None):
+ return _declip_mono_gpu(
+ yc_norm[:, c], params, tau,
+ ch_label=labels[c], device=gpu_dev,
+ progress_ctx=prog, task_id=task_id,
+ )
+ else:
+ # CPU path: two-level parallelism (channel-workers × frame-workers)
+ total_workers = params.n_jobs if params.n_jobs > 0 else os.cpu_count() or 1
+ channel_workers = min(total_workers, n_ch)
+ frame_workers_ch = max(1, total_workers // channel_workers)
+
+ def _process_channel(c: int, task_id=None):
+ return _declip_mono(
+ yc_norm[:, c], params, tau,
+ ch_label=labels[c],
+ frame_workers=frame_workers_ch,
+ progress_ctx=prog,
+ task_id=task_id,
+ )
+
+ # Channel-level concurrency: GPU uses 1 worker (sequential), CPU uses n_ch
+ ch_workers = 1 if gpu_dev is not None else min(
+ params.n_jobs if params.n_jobs > 0 else os.cpu_count() or 1, n_ch
+ )
+
+ def _run():
+ if prog is not None:
+ task_ids = [prog.add_task(labels[c], total=N_frames) for c in range(n_ch)]
+ else:
+ task_ids = [None] * n_ch
+
+ if ch_workers == 1:
+ return [_process_channel(c, task_ids[c]) for c in range(n_ch)]
+ else:
+ with ThreadPoolExecutor(max_workers=ch_workers) as pool:
+ futures = [pool.submit(_process_channel, c, task_ids[c]) for c in range(n_ch)]
+ return [f.result() for f in futures]
+
+ if prog is not None:
+ with prog:
+ results = _run()
+ else:
+ results = _run()
+
+ # Reassemble into (N_samples, N_channels)
+ fixed_channels = [r[0] for r in results]
+ masks_list = [r[1] for r in results]
+ x_out = np.column_stack(fixed_channels) / scale
+
+ return x_out, masks_list
+
+
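The split/sum identity the multiband comment relies on (HP = input − LP, so LP + HP reconstructs the input exactly for any linear lowpass) can be checked standalone. The moving-average FIR below is only a stand-in for the `_lr_split` helper, which is defined elsewhere in the file:

```python
# Check the perfect-reconstruction property used by the multiband wrapper:
# hp = x - lp by construction, so lp + hp == x for ANY lowpass filter.
# The crude moving-average lowpass here is a stand-in, not _lr_split.
import numpy as np

def toy_split(x: np.ndarray, taps: int = 31):
    kernel = np.ones(taps) / taps              # crude FIR lowpass
    lp = np.convolve(x, kernel, mode="same")   # low band
    hp = x - lp                                # high band = residual
    return lp, hp

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
lp, hp = toy_split(x)
assert np.allclose(lp + hp, x)                 # sums back exactly
```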
2007
+ # ============================================================================
2008
+ # Quality metrics
2009
+ # ============================================================================
2010
+
2011
+ def sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
2012
+ """
2013
+ Signal-to-Distortion Ratio (dB).
2014
+
2015
+ Definition from eq.(14) in [2]:
2016
+ SDR(u, v) = 10 log₁₀( ‖u‖² / ‖u − v‖² )
2017
+ """
2018
+ noise = reference - estimate
2019
+ denom = np.sum(noise ** 2)
2020
+ if denom < 1e-20:
2021
+ return float("inf")
2022
+ return 10.0 * np.log10(np.sum(reference ** 2) / denom)
2023
+
2024
+
2025
+ def delta_sdr(
2026
+ reference: np.ndarray,
2027
+ clipped: np.ndarray,
2028
+ estimate: np.ndarray,
2029
+ ) -> float:
2030
+ """
2031
+ ΔSDR improvement (dB) — eq.(13) in [2]:
2032
+ ΔSDR = SDR(x, x̂) − SDR(x, y)
2033
+ """
2034
+ return sdr(reference, estimate) - sdr(reference, clipped)
2035
+
2036
+
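The two metrics above can be sanity-checked on toy data; the definitions are re-stated below so the sketch runs without importing this file:

```python
# Toy check of the SDR / ΔSDR definitions (re-implemented here so the
# sketch is self-contained; formulas match eq.(13)-(14) of [2]).
import numpy as np

def sdr(reference, estimate):
    noise = reference - estimate
    denom = np.sum(noise ** 2)
    return float("inf") if denom < 1e-20 else 10.0 * np.log10(np.sum(reference ** 2) / denom)

def delta_sdr(reference, clipped, estimate):
    return sdr(reference, estimate) - sdr(reference, clipped)

ref = np.sin(2 * np.pi * np.arange(1000) / 100)    # clean sine
halved = 0.5 * ref                                 # error energy = ‖ref‖²/4
print(round(sdr(ref, halved), 2))                  # → 6.02  (10·log10 4)

clipped = np.clip(ref, -0.6, 0.6)                  # badly clipped
better = np.clip(ref, -0.9, 0.9)                   # closer to ref
assert delta_sdr(ref, clipped, better) > 0.0       # restoration improves SDR
```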
+ # ============================================================================
+ # Command-line interface
+ # ============================================================================
+
+ def _build_parser() -> argparse.ArgumentParser:
+ p = argparse.ArgumentParser(
+ description="SPADE Audio Declipping / Limiter Recovery (v11)",
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+ )
+ p.add_argument("input", help="Input clipped / limited audio file (WAV, FLAC, ...)")
+ p.add_argument("output", help="Output restored audio file")
+ p.add_argument("--algo", choices=["sspade", "aspade"], default="sspade")
+ p.add_argument("--window-length", type=int, default=1024, dest="window_length")
+ p.add_argument("--hop-length", type=int, default=256, dest="hop_length")
+ p.add_argument("--frame", choices=["dct", "rdft"], default="rdft")
+ p.add_argument("--s", type=int, default=1)
+ p.add_argument("--r", type=int, default=1)
+ p.add_argument("--eps", type=float, default=0.1)
+ p.add_argument("--max-iter", type=int, default=1000, dest="max_iter")
+ p.add_argument("--n-jobs", type=int, default=1, dest="n_jobs",
+ help="CPU parallel workers for multi-channel (-1 = all cores). "
+ "Ignored when GPU is active.")
+ p.add_argument("--mode", choices=["hard", "soft"], default="hard",
+ help="'hard' = standard clipping recovery; "
+ "'soft' = brickwall limiter recovery")
+ p.add_argument("--delta-db", type=float, default=1.0, dest="delta_db",
+ help="[soft mode] dB below the detected ceiling at which the limiter "
+ "threshold sits (e.g. 2.5 → threshold 2.5 dB below the peak)")
+ p.add_argument("--gpu-device", type=str, default="auto", dest="gpu_device",
+ help="PyTorch device for GPU path: 'auto', 'cuda', 'cuda:0', 'cpu'. "
+ "AMD ROCm GPUs appear as 'cuda' in PyTorch-ROCm.")
+ p.add_argument("--no-gpu", action="store_true", dest="no_gpu",
+ help="Disable GPU acceleration; use CPU (v8/v9 threading) path instead.")
+ # v11 delimiting features
+ p.add_argument("--release-ms", type=float, default=0.0, dest="release_ms",
+ help="[v11, soft] Limiter release time in ms for mask dilation "
+ "(0 = disabled, typical 10-50 ms)")
+ p.add_argument("--max-gain-db", type=float, default=0.0, dest="max_gain_db",
+ help="[v11, soft] Max transient recovery in dB above limited value "
+ "(0 = disabled, e.g. 6 for +6 dB cap)")
+ p.add_argument("--multiband", action="store_true",
+ help="[v11, soft] Enable Linkwitz-Riley sub-band processing")
+ p.add_argument("--band-crossovers", type=float, nargs="+", default=[250.0, 4000.0],
+ dest="band_crossovers",
+ help="[v11] Crossover frequencies in Hz (e.g. 250 4000)")
+ p.add_argument("--band-delta-db", type=float, nargs="+", default=[],
+ dest="band_delta_db",
+ help="[v11] Per-band delta_db values (must match number of bands)")
+ p.add_argument("--macro-expand", action="store_true", dest="macro_expand",
+ help="[v11, soft] Enable macro-dynamics upward expansion pre-pass")
+ p.add_argument("--macro-attack-ms", type=float, default=10.0, dest="macro_attack_ms",
+ help="[v11] Expander attack time (ms, default 10)")
+ p.add_argument("--macro-release-ms", type=float, default=200.0, dest="macro_release_ms",
+ help="[v11] Expander release time (ms, default 200)")
+ p.add_argument("--macro-ratio", type=float, default=1.2, dest="macro_ratio",
+ help="[v11] Expansion ratio >1.0 (default 1.2; 1.0 = bypass)")
+ p.add_argument("--verbose", action="store_true")
+ p.add_argument("--reference", default=None,
+ help="Clean reference file for delta-SDR measurement")
+ return p
+
+
+ def main() -> None:
+ try:
+ import soundfile as sf
+ except ImportError:
+ raise SystemExit("Install soundfile: pip install soundfile")
+
+ args = _build_parser().parse_args()
+ yc, sr = sf.read(args.input, always_2d=True) # shape: (N, C) always
+ yc = yc.astype(float)
+ n_samp, n_ch = yc.shape
+
+ print("Input :", args.input,
+ "|", n_samp, "samples @", sr, "Hz |", n_ch, "channel(s)")
+
+ params = DeclipParams(
+ algo=args.algo, window_length=args.window_length,
+ hop_length=args.hop_length, frame=args.frame,
+ s=args.s, r=args.r, eps=args.eps, max_iter=args.max_iter,
+ verbose=args.verbose, n_jobs=args.n_jobs,
+ mode=args.mode, delta_db=args.delta_db,
+ use_gpu=not args.no_gpu, gpu_device=args.gpu_device,
+ # v11: delimiting features
+ sample_rate=sr,
+ release_ms=args.release_ms,
+ max_gain_db=args.max_gain_db,
+ multiband=args.multiband,
+ band_crossovers=tuple(args.band_crossovers),
+ band_delta_db=tuple(args.band_delta_db),
+ macro_expand=args.macro_expand,
+ macro_attack_ms=args.macro_attack_ms,
+ macro_release_ms=args.macro_release_ms,
+ macro_ratio=args.macro_ratio,
+ )
+
+ # Pass 1-D array for mono so return type stays ClippingMasks (not list)
+ yc_in = yc[:, 0] if n_ch == 1 else yc
+ fixed, masks = declip(yc_in, params)
+ # NOTE: do NOT clip to [-1, 1] — recovered transients may legitimately
+ # exceed 1.0. Write as 32-bit float to preserve them.
+
+ # soundfile always wants 2-D for write
+ fixed_2d = fixed[:, None] if fixed.ndim == 1 else fixed
+ sf.write(args.output, fixed_2d.astype(np.float32), sr, subtype="FLOAT")
+ print("Output :", args.output)
+
+ # Per-channel clipping summary
+ masks_iter = [masks] if n_ch == 1 else masks
+ labels = ["L", "R"] if n_ch == 2 else ["Ch" + str(c) for c in range(n_ch)]
+ for m, lbl in zip(masks_iter, labels):
+ n_clip = int(np.sum(~m.Ir))
+ pct = 100.0 * n_clip / n_samp
+ print(" [" + lbl + "] clipped:", n_clip, "/", n_samp,
+ "samples (" + str(round(pct, 1)) + "%)")
+
+ # Optional SDR vs. clean reference
+ if args.reference:
+ ref, _ = sf.read(args.reference, always_2d=True)
+ ref = ref.astype(float)
+ L = min(ref.shape[0], fixed_2d.shape[0])
+ for c in range(min(n_ch, ref.shape[1])):
+ lbl = labels[c]
+ r_c = ref[:L, c]
+ y_c = yc[:L, c]
+ f_c = fixed_2d[:L, c]
+ print(" [" + lbl + "]"
+ " SDR clipped=" + str(round(sdr(r_c, y_c), 2)) + " dB"
+ " declipped=" + str(round(sdr(r_c, f_c), 2)) + " dB"
+ " delta=" + str(round(delta_sdr(r_c, y_c, f_c), 2)) + " dB")
+
+
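The WOLA machinery in `_declip_mono` rests on one property: `win` is a sqrt-Hann, so `win**2` is a periodic Hann, and at 75 % overlap (hop = M/4) the overlap-added Hann sums to a constant 2.0, which the `norm_win` division removes exactly. A quick standalone check (numpy only; the sizes are the file's defaults):

```python
# Verify the COLA property behind the WOLA normalisation in _declip_mono:
# win = sqrt(periodic Hann), so win**2 overlap-added at hop M/4 is constant.
import numpy as np

M, a = 1024, 256                                   # window and hop (defaults)
n = np.arange(M)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / M)       # periodic Hann (sym=False)
win = np.sqrt(hann)

L = 8 * M
norm_win = np.zeros(L)
for start in range(0, L - M + 1, a):
    norm_win[start:start + M] += win ** 2          # same accumulation as the code

interior = norm_win[M:L - M]                       # skip the edge ramp-in/out
assert np.allclose(interior, 2.0)                  # constant → division is exact
```

At the signal edges fewer windows overlap, which is why the code guards the division with `np.where(norm_win < 1e-12, 1.0, norm_win)`.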
+ # =============================================================================
+ # Demo / self-test (mono + stereo)
+ # =============================================================================
+
+ def _demo() -> None:
+ """
+ Self-test: mono and stereo synthetic signals, both algorithms, both frames.
+ """
+ print("=" * 65)
+ print("SPADE Declipping v11 — Self-Test (mono + stereo)")
+ print("=" * 65)
+
+ sr = 16_000
+ t = np.linspace(0, 1, sr, endpoint=False)
+
+ def make_tonal(freqs_amps):
+ sig = sum(a * np.sin(2 * np.pi * f * t) for f, a in freqs_amps)
+ return sig / np.max(np.abs(sig))
+
+ clean_L = make_tonal([(440, 0.5), (880, 0.3), (1320, 0.15)])
+ clean_R = make_tonal([(550, 0.5), (1100, 0.3), (2200, 0.1)])
+ clean_stereo = np.column_stack([clean_L, clean_R]) # (N, 2)
+
+ theta_c = 0.6
+ clipped_stereo = np.clip(clean_stereo, -theta_c, theta_c)
+ n_clip_L = np.mean(np.abs(clipped_stereo[:, 0]) >= theta_c) * 100
+ n_clip_R = np.mean(np.abs(clipped_stereo[:, 1]) >= theta_c) * 100
+ print("\ntheta_c =", theta_c,
+ " | L clipped:", str(round(n_clip_L, 1)) + "%",
+ " R clipped:", str(round(n_clip_R, 1)) + "%")
+
+ for algo in ("sspade", "aspade"):
+ for fr in ("dct", "rdft"):
+ params = DeclipParams(
+ algo=algo, frame=fr,
+ window_length=1024, hop_length=256,
+ s=1, r=1, eps=0.1, max_iter=500,
+ n_jobs=2, # process L and R in parallel
+ verbose=False,
+ )
+ fixed, masks_list = declip(clipped_stereo, params)
+ dsdr_L = delta_sdr(clean_stereo[:, 0], clipped_stereo[:, 0], fixed[:, 0])
+ dsdr_R = delta_sdr(clean_stereo[:, 1], clipped_stereo[:, 1], fixed[:, 1])
+ tag = algo.upper() + " + " + fr.upper()
+ print(" " + tag + " | L DSDR=" + str(round(dsdr_L, 1)) + " dB"
+ " R DSDR=" + str(round(dsdr_R, 1)) + " dB")
+
+ # Quick mono sanity check
+ print("\n--- Mono sanity check ---")
+ clipped_mono = np.clip(clean_L, -theta_c, theta_c)
+ params_mono = DeclipParams(algo="sspade", frame="rdft",
+ window_length=1024, hop_length=256,
+ s=1, r=1, eps=0.1, max_iter=500)
+ fixed_mono, _ = declip(clipped_mono, params_mono)
+ print(" SSPADE+RDFT mono DSDR =",
+ str(round(delta_sdr(clean_L, clipped_mono, fixed_mono), 1)), "dB")
+
+ print("\nSelf-test complete.")
+
+
+ if __name__ == "__main__":
+ import sys
+ if "--demo" in sys.argv:
+ _demo()
+ else:
+ main()
spade_declip_v12.py ADDED
 
spade_declip_v12old.py ADDED
 
spade_declip_v12old2.py ADDED
 
spade_declip_v13.py ADDED
 
spade_unrolled.py ADDED
@@ -0,0 +1,1484 @@
"""
spade_unrolled.py — SPADE Unrolled (Algorithm Unrolling + Context Encoder)
================================================================================

Replaces the fixed S-SPADE solver (v12) with a learned parameter predictor.

Architecture
------------

Input (limited audio frame + K context frames)

SpectralFeatureExtractor
    • log-mel spectrogram (n_mels=32) per frame
    • short-time loudness proxy (RMS in dB)
    → shape: (B, K+1, n_mels+1)

ContextEncoder (causal GRU)
    • 2-layer GRU, hidden_size=128
    • Only K previous frames are seen (strict causality)
    → h_t: (B, 128)

ParameterHead (linear → 5 outputs per frame)
    • lambda_lf   : soft-threshold for LF bins (≥ 0)
    • lambda_hf   : soft-threshold for HF bins (≥ 0)
    • delta_factor: scales delta_db ∈ [0.5, 1.5]
    • gmax_factor : scales max_gain_db ∈ [0.2, 2.0]
    • eps_factor  : scales convergence eps ∈ [0.5, 1.5]

UnrolledADMM (K_unroll fixed layers, fully differentiable; default K_unroll=4)
    Each layer:
        1. Analysis:    z = frana(x, frame)           — DCT / RDFT
        2. Soft-thresh: z̃ = S_λ(z + u)                — stratified LF/HF
        3. Synthesis:   Dv = frsyn(z̃ - u, frame, M)   — reconstruction
        4. Projection:  pDv = proj_Γ(Dv, yc, masks, g_max)
        5. Residual:    z ← z̃ - u - frana(Dv - pDv, frame)
        6. Dual update: u ← u + z - z̃
    → x̂ = frsyn(z_K, frame, M)

Output: restored audio frame (B, M)

Key differences from v12 (classical SPADE)
-------------------------------------------
• Hard thresholding H_k (L0) → differentiable soft thresholding S_λ (L1 proxy)
• Fixed hyperparameters → predicted per-frame by ContextEncoder
• Fixed iteration count → exactly K_unroll unrolled layers (no convergence loop)
• Global sparsity level k → independent LF/HF soft-threshold budgets

Transform operators (GPU-compatible, differentiable)
------------------------------------------------------
The DCT-II / RDFT analysis-synthesis operators from spade_declip_v12 are
re-implemented in PyTorch so gradients flow through them. Numerically they
match scipy to float32 precision.

Projection operator
-------------------
proj_Γ is already differentiable (clamp + max/min). Gradients flow through
the Icp / Icm branches; Ir samples are pinned (zero gradient, correct).

WOLA (Weighted Overlap-Add) integration
----------------------------------------
The model processes individual frames. The full WOLA loop lives in
SPADEUnrolledInference, which wraps UnrolledSPADE with frame extraction +
accumulation. Training uses individual frames to allow per-sample gradient
computation without materialising the full signal in the graph.

References
----------
[1] Gregor & LeCun, "Learning Fast Approximations of Sparse Coding", ICML 2010.
[2] Adler et al., "Learned Primal-Dual Reconstruction", IEEE TMI 2018.
[3] Kitić et al., "SPADE", LVA/ICA 2015 (arXiv:1506.01830).
[4] Záviška et al., "Revisiting SPADE", 2018 (arXiv:1807.03612).
"""

from __future__ import annotations

import math
from dataclasses import dataclass, field
from typing import Literal, Optional, Tuple

import numpy as np

try:
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    _TORCH_OK = True
except ImportError:
    _TORCH_OK = False
    raise ImportError("PyTorch is required for spade_unrolled.py (pip install torch)")


# =============================================================================
# Config dataclass
# =============================================================================

@dataclass
class UnrolledConfig:
    """All hyperparameters for the SPADE-Unrolled model."""

    # ── Signal / transform ────────────────────────────────────────────────
    # Defaults from run_smart_sweep.py rank-1 result:
    #   window=2048, hop=512 (score=0.916, the best-performing WOLA config)
    window_length: int = 2048            # M — samples per WOLA frame
    hop_length: int = 512                # a — WOLA hop
    frame: Literal["dct", "rdft"] = "rdft"  # transform type
    sample_rate: int = 44100

    # ── Unrolling ─────────────────────────────────────────────────────────
    # K_unroll=4: the algorithm-unrolling literature (LISTA, ALISTA, ISTA-Net)
    # shows 3-5 layers as the sweet spot. With K=8 and soft-thresh, the
    # product of Jacobians through the dead zones → grad ≈ 0 at the GRU.
    K_unroll: int = 4                    # number of ADMM layers per frame

    # ── Context encoder ───────────────────────────────────────────────────
    K_context: int = 8                   # # of past frames fed to GRU
    n_mels: int = 32                     # mel bands for feature extraction
    gru_hidden: int = 128                # GRU hidden size
    gru_layers: int = 2                  # GRU depth

    # ── Per-frame parameter bounds ────────────────────────────────────────
    # All outputs of the head are mapped to these ranges.
    #
    # Lambda calibration rationale (POST-NORMALISATION)
    # ---------------------------------------------------
    # UnrolledADMM normalises each frame by its max DCT coefficient magnitude,
    # so inside the ADMM loop ALL coefficients are in [-1, 1] with max = 1.0.
    # Lambda must therefore be expressed relative to this [0, 1] scale.
    #
    # For a typical kick drum (M=1024, 44100 Hz), the 47 LF bins (≤ 1 kHz)
    # have post-normalisation magnitudes distributed roughly as 1/k:
    #   p25 ≈ 0.028   median ≈ 0.042   p75 ≈ 0.083
    # A lambda in (0.01, 0.50) spans the full meaningful sparsification range
    # from "keep almost all" to "keep only the dominant few" coefficients.
    #
    # Previous range (1e-6, 0.015) was 10-100× too small: even at the maximum
    # λ=0.015, ZERO LF coefficients were ever thresholded → pure all-pass →
    # encoder collapsed to identity (λ→0) because the loss gradient for λ
    # was essentially zero everywhere in (0, 0.015).
    lambda_lf_range: Tuple[float, float] = (1e-3, 0.50)
    # Upper bound reduced 0.30 → 0.08.
    # For M=2048, sr=44100: HF DCT coefficients (>1kHz, post-normalisation)
    # have typical magnitudes 0.01–0.06. With λ_hf=0.10 (old upper range median)
    # *all* HF content was zeroed → dsdr_high < −1 dB at every epoch.
    # Cap at 0.08 ensures only sub-noise coefficients are thresholded.
    lambda_hf_range: Tuple[float, float] = (1e-3, 0.08)
    delta_factor_range: Tuple[float, float] = (0.5, 1.5)
    # [FIX] Range widened to (0.2, 2.0): with base_max_gain_db=6 this maps
    # to g_max ∈ [1.2, 12] dB, giving the Parameter Head full freedom to be
    # conservative in the mid band (low g_fac) or aggressive in sub-bass
    # (g_fac≈2.0 → 12 dB) without hitting a hard floor/ceiling.
    # The old (0.85, 1.5) floor was causing mid regression: the model could
    # not reduce gain selectively in the 500–2000 Hz region.
    gmax_factor_range: Tuple[float, float] = (0.2, 2.0)
    eps_factor_range: Tuple[float, float] = (0.5, 1.5)

    # ── LF/HF split ──────────────────────────────────────────────────────
    # 8000 Hz is the crossover between:
    #   LF (0–8 kHz):  learned reconstruction — this is where v11 S-SPADE
    #                  struggled (kick body, transient fundamental, sub-bass).
    #                  The model learns a content-adaptive sparse prior here.
    #   HF (8–22 kHz): v11 S-SPADE hard thresholding H_k is used unchanged
    #                  via HybridSPADEInference — v11 already recovers HF
    #                  transients (cymbal snap, hi-hat attack) accurately.
    # During training, the model processes the full LF-bandpassed signal.
    # At inference, HybridSPADEInference handles the LR split and v11 HF.
    lf_cutoff_hz: float = 8000.0         # bins below this → LF soft thresh (learned)

    # ── Base SPADE params (encoder predicts *multipliers* of these) ───────
    # Initialised from run_smart_sweep.py rank-1 result (score=0.916):
    #   delta_db=3.5, eps=0.05
    # max_gain_db: sweep rank-1 = 9.0, but Phase-1 training showed the model
    # converges to g_fac≈0.50 (factor range lower bound) which gives 4.5 dB.
    # Using base=6.0 dB instead: g_fac=0.75 (mid-range) → 4.5 dB, and the
    # model can still explore up to 9.0 dB (g_fac=1.5) if needed.
    base_delta_db: float = 3.5           # rank-1: delta_db
    base_max_gain_db: float = 6.0        # calibrated: g_fac=0.75 → 4.5 dB (Phase-1 optimum)
    base_eps: float = 0.05               # rank-1: eps

    # lf_delta_db from rank-1 = 1.0 vs delta_db = 3.5 → ratio ≈ 0.286
    # Used to derive a softer lambda_lf initialisation relative to lambda_hf:
    # lower lf_delta means LF region is recovered more aggressively (fewer
    # coefficients zeroed), so lambda_lf_init < lambda_hf_init.
    lf_delta_ratio: float = 0.286        # lf_delta_db / delta_db (rank-1: 1.0/3.5)


# =============================================================================
# Transform operators (differentiable, GPU-compatible)
# =============================================================================

def _dct2(x: torch.Tensor) -> torch.Tensor:
    """Batched orthonormal DCT-II. x: (..., N) → (..., N).
    Matches scipy.fft.dct(x, type=2, norm='ortho') to float32.
    Makhoul (1980) FFT-based algorithm.
    """
    N = x.shape[-1]
    v = torch.cat([x[..., ::2], x[..., 1::2].flip(-1)], dim=-1)
    V = torch.fft.fft(v.double(), dim=-1)
    k = torch.arange(N, device=x.device, dtype=torch.float64)
    tw = torch.exp(-1j * math.pi * k / (2.0 * N))
    C = (tw * V).real * math.sqrt(2.0 / N)
    C = C.clone()
    C[..., 0] /= math.sqrt(2.0)
    return C.to(x.dtype)


def _idct2(X: torch.Tensor) -> torch.Tensor:
    """Batched orthonormal IDCT-II. X: (..., N) → (..., N).
    Inverse of _dct2. BUG-GPU-3 fix included.
    """
    N = X.shape[-1]
    C = X.double() * math.sqrt(N / 2.0)
    C = C.clone()
    C[..., 0] *= math.sqrt(2.0)
    ipart = torch.zeros_like(C)
    ipart[..., 1:] = -C.flip(-1)[..., :-1]
    W = torch.view_as_complex(torch.stack([C, ipart], dim=-1))
    k = torch.arange(N, device=X.device, dtype=torch.float64)
    V = W * torch.exp(1j * math.pi * k / (2.0 * N))
    v = torch.fft.ifft(V, dim=-1).real
    half = (N + 1) // 2
    x = torch.empty_like(v)
    x[..., ::2] = v[..., :half]
    x[..., 1::2] = v[..., half:].flip(-1)
    return x.to(X.dtype)
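The Makhoul even-odd permutation + FFT twiddle used by `_dct2` can be checked against the direct O(N²) definition of the orthonormal DCT-II. A minimal numpy sketch (independent of the torch code above; function names here are illustrative):

```python
import math
import numpy as np

def dct2_direct(x: np.ndarray) -> np.ndarray:
    """Orthonormal DCT-II straight from its definition (O(N^2), reference only)."""
    N = len(x)
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))  # C[k, n]
    out = math.sqrt(2.0 / N) * (C @ x)
    out[0] /= math.sqrt(2.0)     # DC row gets the extra 1/sqrt(2)
    return out

def dct2_makhoul(x: np.ndarray) -> np.ndarray:
    """Same transform via the FFT trick mirrored from _dct2."""
    N = len(x)
    v = np.concatenate([x[::2], x[1::2][::-1]])   # even samples, then reversed odds
    V = np.fft.fft(v)
    k = np.arange(N)
    tw = np.exp(-1j * np.pi * k / (2.0 * N))      # half-sample phase twiddle
    C = (tw * V).real * math.sqrt(2.0 / N)
    C[0] /= math.sqrt(2.0)
    return C

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
assert np.allclose(dct2_direct(x), dct2_makhoul(x), atol=1e-10)
```

The same check, run against `scipy.fft.dct(x, type=2, norm='ortho')`, is what the docstring of `_dct2` refers to.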


def frana(x: torch.Tensor, frame: str) -> torch.Tensor:
    """Analysis operator A: (..., M) → (..., P).
    DCT: P = M; RDFT: P = 2M.
    Differentiable.
    """
    if frame == "dct":
        return _dct2(x)
    s2 = math.sqrt(2.0)
    return torch.cat([_dct2(x) / s2, _dct2(x.flip(-1)) / s2], dim=-1)


def frsyn(z: torch.Tensor, frame: str, M: int) -> torch.Tensor:
    """Synthesis operator D = A^H: (..., P) → (..., M).
    Adjoint of frana. BUG-1 fix: flip output (not input) for DST part.
    Differentiable.
    """
    if frame == "dct":
        return _idct2(z)
    s2 = math.sqrt(2.0)
    cos_part = _idct2(z[..., :M]) / s2
    sin_part = _idct2(z[..., M:]).flip(-1) / s2
    return cos_part + sin_part


def build_lf_mask(M: int, frame: str, sr: int, lf_cutoff_hz: float,
                  device: torch.device) -> torch.Tensor:
    """Boolean mask: True for LF bins (freq < lf_cutoff_hz). Shape: (P,)."""
    P = M if frame == "dct" else 2 * M
    mask = torch.zeros(P, dtype=torch.bool, device=device)
    k_cut = int(math.ceil(lf_cutoff_hz * 2.0 * M / sr))
    k_cut = max(1, min(k_cut, M))
    if frame == "dct":
        mask[:k_cut] = True
    else:
        mask[:k_cut] = True
        mask[M:M + k_cut] = True
    return mask
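A tiny worked example of the bin-cut formula in `build_lf_mask` (pure Python, illustrative helper name): a DCT-II bin k sits at f_k = k · sr / (2M), so the first bin at or above `lf_cutoff_hz` is ceil(cutoff · 2M / sr).

```python
import math

def lf_bin_cut(M: int, sr: int, lf_cutoff_hz: float) -> int:
    """Number of DCT-II bins below the LF/HF crossover (mirrors build_lf_mask)."""
    k_cut = int(math.ceil(lf_cutoff_hz * 2.0 * M / sr))
    return max(1, min(k_cut, M))

# Default config: M=2048, sr=44100, cutoff=8 kHz
print(lf_bin_cut(2048, 44100, 8000.0))   # → 744 (bins 0..743 are LF)
```

For the "rdft" frame the same count is applied twice, once to the cosine half and once to the sine half of the coefficient vector.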


# =============================================================================
# Differentiable Projection onto Γ
# =============================================================================

def proj_gamma_torch(
    w: torch.Tensor,          # (..., M) — time-domain estimate
    yc: torch.Tensor,         # (..., M) — limited signal
    Ir: torch.Tensor,         # (..., M) bool — reliable
    Icp: torch.Tensor,        # (..., M) bool — positive-clipped
    Icm: torch.Tensor,        # (..., M) bool — negative-clipped
    g_max: float = float("inf"),
) -> torch.Tensor:
    """
    Differentiable projection onto the consistency set Γ.

    Reliable samples:  pin to yc (zero gradient — correct for training).
    Positive clipped:  lower bound max(w, yc), optional upper bound yc*g_max.
    Negative clipped:  upper bound min(w, yc), optional lower bound yc*g_max.

    NOTE: The gradient through Ir positions is zero by construction — the
    model cannot change reliable samples. This is the physically correct
    inductive bias: SPADE must be transparent on non-limited regions.
    """
    v = w.clone()

    # Reliable: pin exactly — no gradient contribution from these
    v = torch.where(Ir, yc, v)

    # Positive clipped: lower-bound constraint ≥ yc
    lower_p = yc * Icp.float()
    if math.isfinite(g_max):
        upper_p = (lower_p * g_max).clamp(min=lower_p)
        v = torch.where(Icp, torch.clamp(torch.maximum(v, lower_p),
                                         min=lower_p, max=upper_p), v)
    else:
        v = torch.where(Icp, torch.maximum(v, yc), v)

    # Negative clipped: upper-bound constraint ≤ yc
    upper_m = yc * Icm.float()  # negative values
    if math.isfinite(g_max):
        lower_m_cap = (upper_m * g_max).clamp(max=upper_m)
        v = torch.where(Icm, torch.clamp(torch.minimum(v, upper_m),
                                         min=lower_m_cap, max=upper_m), v)
    else:
        v = torch.where(Icm, torch.minimum(v, yc), v)

    return v
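The branch logic above is easiest to see on a toy signal. A numpy mirror of `proj_gamma_torch` (illustrative only — the helper name and the four-sample signal are made up): reliable samples are pinned, positive-clipped samples are pushed up to at least the clipping level `yc` and capped at `yc * g_max`, symmetrically for negative-clipped samples.

```python
import numpy as np

def proj_gamma_np(w, yc, Ir, Icp, Icm, g_max=np.inf):
    v = w.copy()
    v[Ir] = yc[Ir]                                   # pin reliable samples
    v[Icp] = np.clip(np.maximum(v[Icp], yc[Icp]),    # lower bound: yc
                     yc[Icp], yc[Icp] * g_max)       # upper bound: yc * g_max
    v[Icm] = np.clip(np.minimum(v[Icm], yc[Icm]),    # upper bound: yc
                     yc[Icm] * g_max, yc[Icm])       # lower bound: yc * g_max
    return v

yc  = np.array([ 0.2,  1.0, -1.0,  0.3])   # observed (limited) signal
w   = np.array([ 0.5,  0.8, -3.0,  0.3])   # current estimate
Ir  = np.array([True, False, False, True])
Icp = np.array([False, True, False, False])
Icm = np.array([False, False, True, False])

out = proj_gamma_np(w, yc, Ir, Icp, Icm, g_max=2.0)
# reliable → yc; pos-clipped 0.8 < 1.0 → raised to 1.0;
# neg-clipped -3.0 overshoots the -2.0 cap → clamped to -2.0
print(out)   # → [ 0.2  1.  -2.   0.3]
```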


# =============================================================================
# Differentiable stratified soft thresholding
# =============================================================================

def soft_thresh_stratified(
    z: torch.Tensor,          # (..., P) — coefficient vector
    u: torch.Tensor,          # (..., P) — dual variable (same shape)
    lambda_lf: torch.Tensor,  # (..., 1) — LF threshold
    lambda_hf: torch.Tensor,  # (..., 1) — HF threshold
    lf_mask: torch.Tensor,    # (P,) — True = LF bin
) -> torch.Tensor:
    """
    Differentiable soft-thresholding S_λ(z+u) with separate LF/HF budgets.

        S_λ(x) = sign(x) * max(|x| - λ, 0)

    LF bins (lf_mask=True) : threshold = lambda_lf
    HF bins (lf_mask=False): threshold = lambda_hf

    Replaces the hard (non-differentiable) H_k thresholding in classical SPADE.
    """
    x = z + u
    # Broadcast lf_mask to match x shape
    lf = lf_mask.view(*([1] * (x.dim() - 1)), -1)   # (..., P)
    lam = torch.where(lf, lambda_lf, lambda_hf)     # (..., P)
    return torch.sign(x) * F.relu(x.abs() - lam)
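The stratified rule reduces to plain element-wise arithmetic; a numpy sketch with hand-picked values (illustrative, not part of the model):

```python
import numpy as np

def soft_thresh_stratified_np(x, lam_lf, lam_hf, lf_mask):
    """S_lambda with a per-bin threshold chosen by the LF/HF mask."""
    lam = np.where(lf_mask, lam_lf, lam_hf)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x       = np.array([0.50, -0.03, 0.20, 0.05])    # z + u
lf_mask = np.array([True,  True, False, False])  # first two bins are LF

out = soft_thresh_stratified_np(x, lam_lf=0.04, lam_hf=0.08, lf_mask=lf_mask)
# LF: 0.50 shrinks to 0.46, -0.03 is below lambda_lf → 0
# HF: 0.20 shrinks to 0.12,  0.05 is below lambda_hf → 0
print(out)   # → [0.46 0.   0.12 0.  ]
```

Unlike the hard H_k operator, the output varies continuously with both the input and the thresholds, which is what lets gradients reach the lambda predictions.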


# =============================================================================
# Spectral Feature Extractor (for Context Encoder input)
# =============================================================================

class SpectralFeatureExtractor(nn.Module):
    """
    Converts a raw audio frame (shape: B × M) into a feature vector
    suitable for the ContextEncoder.

    Features per frame:
        • log-mel spectrogram: n_mels values (shape of spectral envelope)
        • short-time loudness: 1 value (RMS in dB, proxy for LUFS)
        → total: n_mels + 1 features

    Implementation note:
        Uses a fixed (non-trained) triangular mel filterbank computed from
        the DCT-II power spectrum. Mel filters are registered as buffers so
        they move with the module to the correct device automatically.
    """

    def __init__(self, cfg: UnrolledConfig):
        super().__init__()
        self.M = cfg.window_length
        self.sr = cfg.sample_rate
        self.n_mels = cfg.n_mels
        self.P = self.M if cfg.frame == "dct" else 2 * self.M

        # ── Build mel filterbank (fixed, not trained) ─────────────────────
        # Map DCT frequency bins → mel scale using triangular filters.
        # We use only the DCT-part (first M bins of RDFT) for the spectrogram.
        mel_filters = self._build_mel_filterbank()   # (n_mels, M)
        self.register_buffer("mel_filters", mel_filters)

    def _build_mel_filterbank(self) -> torch.Tensor:
        """Triangular mel filterbank as a (n_mels, M) matrix."""
        def hz_to_mel(f):
            return 2595.0 * math.log10(1.0 + f / 700.0)

        def mel_to_hz(m):
            return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

        M = self.M
        sr = self.sr
        n_mels = self.n_mels

        mel_lo = hz_to_mel(20.0)
        mel_hi = hz_to_mel(min(sr / 2.0, 20000.0))
        mel_pts = torch.linspace(mel_lo, mel_hi, n_mels + 2)
        hz_pts = torch.tensor([mel_to_hz(m.item()) for m in mel_pts])

        # DCT-II bin frequencies: f_k = k * sr / (2M)
        freqs = torch.arange(M, dtype=torch.float32) * sr / (2.0 * M)
        filters = torch.zeros(n_mels, M)

        for m in range(n_mels):
            f_lo = hz_pts[m].item()
            f_c = hz_pts[m + 1].item()
            f_hi = hz_pts[m + 2].item()
            # Rising flank: lo → c
            mask_r = (freqs >= f_lo) & (freqs <= f_c)
            if (f_c - f_lo) > 0:
                filters[m][mask_r] = (freqs[mask_r] - f_lo) / (f_c - f_lo)
            # Falling flank: c → hi
            mask_f = (freqs > f_c) & (freqs <= f_hi)
            if (f_hi - f_c) > 0:
                filters[m][mask_f] = (f_hi - freqs[mask_f]) / (f_hi - f_c)

        # Normalise each filter to unit area (power-preserving)
        area = filters.sum(dim=-1, keepdim=True).clamp(min=1e-8)
        return filters / area

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
        x: (B, M) — raw audio frame (windowed or not)
        returns: (B, n_mels + 1) — spectral features
        """
        B, M = x.shape

        # ── Log-mel spectrogram ───────────────────────────────────────────
        dct_coeff = _dct2(x.float())                 # (B, M)
        power_spec = dct_coeff[:, :M] ** 2           # (B, M) — DCT-part power
        mel_spec = torch.matmul(power_spec,
                                self.mel_filters.T)  # (B, n_mels)
        log_mel = torch.log(mel_spec.clamp(min=1e-10))

        # ── Short-time loudness (RMS in dB) ───────────────────────────────
        rms = x.pow(2).mean(dim=-1, keepdim=True).clamp(min=1e-10).sqrt()
        lufs = 20.0 * torch.log10(rms.clamp(min=1e-10))  # (B, 1) — dBFS approx

        return torch.cat([log_mel, lufs], dim=-1)    # (B, n_mels+1)
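The filterbank builder relies on the HTK-style mel pair 2595·log10(1 + f/700) and its inverse being exact inverses of each other; a quick pure-Python check (standalone, mirrors the nested helpers above):

```python
import math

def hz_to_mel(f: float) -> float:
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m: float) -> float:
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Exact inverse pair across the audible band
for f in (20.0, 1000.0, 8000.0, 20000.0):
    assert abs(mel_to_hz(hz_to_mel(f)) - f) < 1e-6

# The scale is anchored so 1 kHz lands near 1000 mel
assert 999.9 < hz_to_mel(1000.0) < 1000.1
```

Placing the `n_mels + 2` breakpoints uniformly in mel and mapping them back to Hz is what makes the triangular filters narrow at low frequencies and wide at high ones.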


# =============================================================================
# Context Encoder (causal GRU → per-frame parameters)
# =============================================================================

class ContextEncoder(nn.Module):
    """
    Causal GRU encoder that predicts per-frame SPADE parameters from
    the spectral context of K previous frames + the current frame.

    Input:  (B, K_context+1, n_feats) — spectral features, last dim = current
    Output: (B, 5) — [lambda_lf, lambda_hf, delta_factor, gmax_factor, eps_factor]
            All values are in their configured physical range.

    Architecture:
        Input linear projection → 2-layer GRU → last hidden state → ParameterHead

    Causality:
        The GRU processes the context sequence [frame_{t-K}, …, frame_{t-1}, frame_t]
        in forward order. Only the hidden state at the LAST position (frame_t) is
        used to predict parameters for frame_t. No future frames are seen.

    Parameter ~count:
        input_proj:  (n_feats, 64)         → 64 * (n_feats+1)      ≈  2 K
        GRU layer 1: input=64, hidden=128  → 3 * 128 * (64+128+1)  ≈ 74 K
        GRU layer 2: input=128, hidden=128 → 3 * 128 * (128+128+1) ≈ 98 K
        head:        128 → 64 → 5          → ~ 8 K
        ─────────────────────────────────────────────────────────────────
        Total: ~ 182 K (target ≤ 200 K)
    """

    def __init__(self, cfg: UnrolledConfig):
        super().__init__()
        self.cfg = cfg
        n_feats = cfg.n_mels + 1   # spectral + loudness
        proj_dim = 64

        self.input_proj = nn.Sequential(
            nn.Linear(n_feats, proj_dim),
            nn.LayerNorm(proj_dim),
            nn.GELU(),
        )
        self.gru = nn.GRU(
            input_size=proj_dim,
            hidden_size=cfg.gru_hidden,
            num_layers=cfg.gru_layers,
            batch_first=True,
            dropout=0.1 if cfg.gru_layers > 1 else 0.0,
        )
        self.head = nn.Sequential(
            nn.Linear(cfg.gru_hidden, 64),
            nn.GELU(),
            nn.Linear(64, 5),   # 5 output parameters
        )
        # ── Bias initialisation ───────────────────────────────────────────
        # delta/eps: with range (0.5, 1.5), sigmoid(0) = 0.5 → factor = 1.0
        # (identity: start exactly at the sweep-optimised base values).
        # gmax: with range (0.2, 2.0), sigmoid(0) = 0.5 → factor = 1.1,
        # close enough to identity. bias[2,3,4] remain at 0 → correct.
        #
        # lambda_lf / lambda_hf: start near median post-normalised coeff (~0.042).
        # The sweep rank-1 has lf_delta_ratio=0.286, meaning LF recovery is softer
        # (lower effective threshold). We encode this as lambda_lf starting lower
        # than lambda_hf, so fewer LF coefficients are zeroed on iteration 1.
        #
        # For the lambda_lf range (lo, hi) = (1e-3, 0.5):
        #   reference bias -1.4 → sigmoid(-1.4)=0.198 → λ ≈ 0.001+0.499*0.198 = 0.10
        #   bias_lf = -1.4 + ln(lf_delta_ratio) ≈ -1.4 + (-1.25) = -2.65
        #     → sigmoid(-2.65)=0.066 → lambda_lf ≈ 0.001+0.499*0.066 = 0.034
        lf_ratio = getattr(cfg, "lf_delta_ratio", 1.0)
        lf_bias_offset = math.log(max(lf_ratio, 1e-3))   # negative → lower lambda_lf
        with torch.no_grad():
            self.head[-1].bias[0] = -1.4 + lf_bias_offset   # lambda_lf init ≈ 0.034 (range 0.001–0.50)
            # lambda_hf: range changed to (0.001, 0.08), span=0.079.
            # Target init ≈ 0.03 (passes real transient energy, zeros only noise).
            #   sigmoid(-0.54) = 0.368 → λ_hf = 0.001 + 0.079*0.368 ≈ 0.030
            # Previous bias=-1.4 with old range (0.001, 0.30) gave 0.10,
            # but 0.10 zeroed all HF DCT bins for M=2048 → dsdr_high always < 0.
            self.head[-1].bias[1] = -0.54                   # lambda_hf init ≈ 0.030

    def forward(self, feat_seq: torch.Tensor) -> torch.Tensor:
        """
        feat_seq: (B, K_context+1, n_feats) — spectral features, ordered t-K … t
        returns:  (B, 5) — physical parameter values
        """
        B, T, _ = feat_seq.shape
        projected = self.input_proj(feat_seq)   # (B, T, 64)
        gru_out, _ = self.gru(projected)        # (B, T, 128)
        h_t = gru_out[:, -1, :]                 # (B, 128) — last step only

        raw = self.head(h_t)                    # (B, 5) — unconstrained
        params = self._scale_outputs(raw)       # (B, 5) — physical ranges
        return params

    def _scale_outputs(self, raw: torch.Tensor) -> torch.Tensor:
        """Apply sigmoid + affine rescaling to map raw logits → physical ranges."""
        s = torch.sigmoid(raw)   # (B, 5) in (0, 1)

        def rescale(x_01, lo, hi):
            return lo + (hi - lo) * x_01

        cfg = self.cfg
        lambda_lf = rescale(s[:, 0], *cfg.lambda_lf_range)
        lambda_hf = rescale(s[:, 1], *cfg.lambda_hf_range)
        delta_factor = rescale(s[:, 2], *cfg.delta_factor_range)
        gmax_factor = rescale(s[:, 3], *cfg.gmax_factor_range)
        eps_factor = rescale(s[:, 4], *cfg.eps_factor_range)

        return torch.stack([lambda_lf, lambda_hf, delta_factor,
                            gmax_factor, eps_factor], dim=-1)
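The bias-initialisation arithmetic in `ContextEncoder.__init__` can be reproduced in pure Python (standalone sketch of the sigmoid + affine mapping; the constants are the ones from the comments above):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def rescale(x01: float, lo: float, hi: float) -> float:
    """Map a (0, 1) sigmoid output onto a physical parameter range."""
    return lo + (hi - lo) * x01

# lambda_lf: bias = -1.4 + ln(0.286) ≈ -2.65, range (1e-3, 0.50)
lam_lf0 = rescale(sigmoid(-1.4 + math.log(0.286)), 1e-3, 0.50)

# lambda_hf: bias = -0.54, range (1e-3, 0.08)
lam_hf0 = rescale(sigmoid(-0.54), 1e-3, 0.08)

print(round(lam_lf0, 3), round(lam_hf0, 3))   # → 0.034 0.03
```

The zero biases on the three factor outputs land each factor at the midpoint of its range, so training starts from (approximately) the sweep-optimised base values.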


# =============================================================================
# Unrolled ADMM (K_unroll differentiable SPADE layers)
# =============================================================================

class UnrolledADMM(nn.Module):
    """
    K_unroll unrolled S-SPADE ADMM layers with differentiable soft thresholding.

    Each layer follows the S-SPADE update equations from [4] eq.(12),
    with hard thresholding H_k replaced by stratified soft thresholding S_λ:

        z̄^(l) = S_{λ_LF, λ_HF}( z^(l-1) + u^(l-1) )   ← stratified soft thresh
        v^(l)  = z̄^(l) - u^(l-1)
        Dv     = frsyn(v^(l), frame, M)
        pDv    = proj_Γ(Dv, yc, masks, g_max)
        z^(l)  = v^(l) - frana(Dv - pDv, frame)
        u^(l)  = u^(l-1) + z^(l) - z̄^(l)              ← dual update

    The frame parameters (lambda_lf, lambda_hf, g_max) are computed ONCE before
    the loop from the ContextEncoder output and held constant across all layers
    for the current frame.

    Learnable per-layer scalings
    ----------------------------
    Following Gregor & LeCun (2010), we add a learnable scale per layer for
    both the threshold and the dual step:
        layer_lf_scale[l]   : multiplied on lambda_lf before thresholding
        layer_hf_scale[l]   : multiplied on lambda_hf before thresholding
        layer_dual_scale[l] : multiplied on the dual update magnitude

    These are initialised per layer (threshold scales on a 1.5 → 0.3 ramp,
    dual scales at 1.0) and learned jointly with the encoder. They allow the
    unrolled ADMM to adapt the effective threshold per layer (e.g. coarser
    thresh early, finer late) without changing the architecture.

    Total learnable params here: 3 × K_unroll scalars (12 for K_unroll=4 —
    negligible)
    """

    def __init__(self, cfg: UnrolledConfig):
        super().__init__()
        self.cfg = cfg
        self.M = cfg.window_length
        self.frame = cfg.frame
        self.K = cfg.K_unroll

        # Per-layer learnable scale factors.
        # Mirror S-SPADE where k increases over iterations (threshold decreases).
        # Range [1.5 → 0.3]: aggressive early thresholding, fine late.
        # Kept smaller than before (was [3.0, 0.6]) to avoid over-killing the
        # gradient in the first layers.
        n = self.K
        init_scales = torch.linspace(1.5, 0.3, n)
        self.layer_lf_scale = nn.Parameter(init_scales.clone())
        self.layer_hf_scale = nn.Parameter(init_scales.clone())
        self.layer_dual_scale = nn.Parameter(torch.ones(n))

    def forward(
        self,
        yc_w: torch.Tensor,       # (B, M) — windowed limited frame
        Ir: torch.Tensor,         # (B, M) bool — reliable
        Icp: torch.Tensor,        # (B, M) bool — pos-clipped
        Icm: torch.Tensor,        # (B, M) bool — neg-clipped
        lambda_lf: torch.Tensor,  # (B,) — LF soft-threshold
        lambda_hf: torch.Tensor,  # (B,) — HF soft-threshold
        g_max: torch.Tensor,      # (B,) — linear gain cap
        lf_mask: torch.Tensor,    # (P,) bool — LF bin mask
    ) -> Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]:
        """
        Returns
        -------
        x_hat     : (B, M) — restored frame (final layer)
        z_thresh  : (B, P) — thresholded coefficients at final layer
                    (used for sparsity loss: L1 penalty)
        x_hat_mid : (B, M) | None — restored frame at K//2 layer
                    (used for deep supervision auxiliary loss)
        """
        B, M = yc_w.shape
        P = M if self.frame == "dct" else 2 * M

        # ── Per-frame input normalisation ─────────────────────────────────
        # Scale yc_w so that the max DCT coefficient magnitude ≈ 1.0.
        # IMPORTANT: .detach() — frame_scale is a normalisation constant
        # computed from the INPUT (not a learned parameter). Without detach,
        # the gradient flows back through amax/clamp into the DCT of yc_w,
        # creating a confusing secondary path that competes with ∂loss/∂λ.
        yc_d = yc_w.double()
        z_init = frana(yc_d, self.frame)   # (B, P)
        frame_scale = z_init.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8).detach()
        yc_d_norm = yc_d / frame_scale     # normalised frame

        # ── Initialise ADMM state ─────────────────────────────────────────
        zi = frana(yc_d_norm, self.frame)  # (B, P) float64
        ui = torch.zeros_like(zi)

        # Expand lambda tensors for broadcasting: (B,) → (B, 1)
        lam_lf = lambda_lf.unsqueeze(-1).double()   # (B, 1)
        lam_hf = lambda_hf.unsqueeze(-1).double()   # (B, 1)

        # Normalise masks and yc to the same frame scale
        Ir_d = Ir.bool()
        Icp_d = Icp.bool()
        Icm_d = Icm.bool()
        yc_norm_d = yc_d_norm   # already computed above

        # g_max is scale-invariant (ratio), no adjustment needed

        # ── Unrolled ADMM layers ──────────────────────────────────────────
        mid_layer = self.K // 2
        x_hat_mid: Optional[torch.Tensor] = None
        zb_last = torch.zeros_like(zi)   # will hold last thresholded coefficients

        for l in range(self.K):
            scale_lf = self.layer_lf_scale[l].double().clamp(min=0.1)   # prevent negative
            scale_hf = self.layer_hf_scale[l].double().clamp(min=0.1)
            scale_dual = self.layer_dual_scale[l].double()

            # Step 2: stratified soft thresholding
            zb = soft_thresh_stratified(
                zi, ui,
                lam_lf * scale_lf,
                lam_hf * scale_hf,
                lf_mask,
            )   # (B, P)
            zb_last = zb   # track for sparsity loss

            # Step 3: projection onto Γ via eq.(12)
            v_c = zb - ui                      # (B, P)
            Dv = frsyn(v_c, self.frame, M)     # (B, M)

            # Differentiable proj_Γ on the normalised domain
            pDv = Dv.clone()
            pDv = torch.where(Ir_d, yc_norm_d, pDv)

            # Positive clipped
            lo_p = yc_norm_d
            hi_p = lo_p * g_max.unsqueeze(-1).double().clamp(min=1.0)
            pDv = torch.where(Icp_d, torch.clamp(torch.maximum(pDv, lo_p),
                                                 min=lo_p, max=hi_p), pDv)

            # Negative clipped
            up_m = yc_norm_d
            lo_m = up_m * g_max.unsqueeze(-1).double().clamp(min=1.0)
            lo_m_c = torch.minimum(lo_m, up_m)
            pDv = torch.where(Icm_d, torch.clamp(torch.minimum(pDv, up_m),
                                                 min=lo_m_c, max=up_m), pDv)

            # ADMM coefficient update — eq.(12) from [4]
            zi = v_c - frana(Dv - pDv, self.frame)   # (B, P)

            # Dual update
            ui = ui + (zi - zb) * scale_dual

            # ── Deep supervision: record mid-layer reconstruction ─────────
            if l == mid_layer - 1:
                x_mid_norm = frsyn(zi, self.frame, M)
                x_hat_mid = (x_mid_norm * frame_scale).float()

        # Synthesise output and invert the per-frame normalisation
        x_hat_norm = frsyn(zi, self.frame, M)        # (B, M)
        x_hat = (x_hat_norm * frame_scale).float()   # back to original scale

        # Return thresholded coefficients (float32, original scale) for sparsity loss
        z_thresh = (zb_last * frame_scale).float()

        return x_hat, z_thresh, x_hat_mid
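The six-step loop can be exercised end-to-end on a toy declipping problem. A self-contained numpy rendition under simplifying assumptions (single global λ, an orthonormal DCT matrix as the frame, no gain cap, no per-layer scales — none of this is the model's exact configuration):

```python
import numpy as np

# Orthonormal DCT-II matrix as the analysis frame
N = 64
n, k = np.arange(N), np.arange(N)[:, None]
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0] /= np.sqrt(2.0)
frana = lambda x: C @ x        # analysis
frsyn = lambda z: C.T @ z      # synthesis = adjoint = inverse (orthonormal)

# Toy observation: a sine, hard-clipped at ±0.6
t = np.arange(N) / N
x_true = np.sin(2 * np.pi * 3 * t)
yc = np.clip(x_true, -0.6, 0.6)
Ir = np.abs(x_true) < 0.6          # reliable samples
Icp, Icm = x_true >= 0.6, x_true <= -0.6

z, u, lam = frana(yc), np.zeros(N), 0.05
for _ in range(4):                                               # K_unroll layers
    zb = np.sign(z + u) * np.maximum(np.abs(z + u) - lam, 0.0)   # soft thresh
    v = zb - u
    Dv = frsyn(v)
    pDv = Dv.copy()                 # projection onto the consistency set
    pDv[Ir] = yc[Ir]
    pDv[Icp] = np.maximum(pDv[Icp], yc[Icp])
    pDv[Icm] = np.minimum(pDv[Icm], yc[Icm])
    z = v - frana(Dv - pDv)         # residual step
    u = u + (z - zb)                # dual update

x_hat = frsyn(z)
```

Because the toy frame is orthonormal, `frana(Dv)` equals `v`, so the residual step collapses to `z = frana(pDv)` and the synthesised output satisfies the consistency constraints exactly: reliable samples match `yc`, clipped samples stay at or beyond the clipping level.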
710
+
711
+
712
+ # =============================================================================
713
+ # Full SPADE-Unrolled model
714
+ # =============================================================================
715
+
716
+ class SPADEUnrolled(nn.Module):
717
+ """
718
+ Full SPADE-Unrolled model.
719
+
720
+ Combines:
721
+ 1. SpectralFeatureExtractor — raw frames → spectral features
722
+ 2. ContextEncoder — spectral context → per-frame SPADE params
723
+ 3. UnrolledADMM — K differentiable ADMM layers
724
+
725
+ Forward pass (single-frame mode for training):
726
+ • Takes a batch of (limited frame, K context frames, clipping masks)
727
+ • Returns the restored frame and the predicted parameters (for logging)
728
+
729
+ Inference mode (WOLA loop):
730
+ • Use SPADEUnrolledInference wrapper to process full signals
731
+ """
732
+
733
+ def __init__(self, cfg: UnrolledConfig):
734
+ super().__init__()
735
+ self.cfg = cfg
736
+
737
+ self.feature_extractor = SpectralFeatureExtractor(cfg)
738
+ self.context_encoder = ContextEncoder(cfg)
739
+ self.unrolled_admm = UnrolledADMM(cfg)
740
+
741
+ # LF mask — registered as buffer (moves with module.to(device))
742
+ # Built lazily on first forward call (needs device info)
743
+ self._lf_mask: Optional[torch.Tensor] = None
744
+
745
+ def _get_lf_mask(self, device: torch.device) -> torch.Tensor:
746
+ if self._lf_mask is None or self._lf_mask.device != device:
747
+ self._lf_mask = build_lf_mask(
748
+ self.cfg.window_length, self.cfg.frame,
749
+ self.cfg.sample_rate, self.cfg.lf_cutoff_hz,
750
+ device,
751
+ )
752
+ return self._lf_mask
753
+
754
+ def forward(
755
+ self,
756
+ yc_w: torch.Tensor, # (B, M) — current windowed limited frame
757
+ ctx_frames: torch.Tensor, # (B, K_ctx, M) — K previous frames (limited, windowed)
758
+ Ir: torch.Tensor, # (B, M) bool
759
+ Icp: torch.Tensor, # (B, M) bool
760
+ Icm: torch.Tensor, # (B, M) bool
761
+ ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, Optional[torch.Tensor]]:
762
+ """
763
+ Returns
764
+ -------
765
+ x_hat : (B, M) — restored frame (float32)
766
+ params : (B, 5) — predicted per-frame parameters
767
+ z_thresh : (B, P) — thresholded coefficients (for sparsity loss)
768
+ x_hat_mid: (B, M)|None — mid-layer reconstruction (for deep supervision)
769
+ """
770
+ B = yc_w.shape[0]
771
+ device = yc_w.device
772
+
773
+ # ── 1. Extract spectral features ──────────────────────────────────
774
+ # Current frame features
775
+ feat_curr = self.feature_extractor(yc_w) # (B, n_feats)
776
+
777
+ # Context frames features: process all at once for efficiency
778
+ B, K, M = ctx_frames.shape
779
+ ctx_flat = ctx_frames.reshape(B * K, M)
780
+ feat_ctx_flat = self.feature_extractor(ctx_flat) # (B*K, n_feats)
781
+ feat_ctx = feat_ctx_flat.reshape(B, K, -1) # (B, K, n_feats)
782
+
783
+ # Concatenate: [ctx_{t-K}, …, ctx_{t-1}, current_t]
784
+ feat_seq = torch.cat([feat_ctx, feat_curr.unsqueeze(1)], dim=1) # (B, K+1, n_feats)
785
+
786
+ # ── 2. Predict per-frame parameters ───────────────────────────────
787
+ params = self.context_encoder(feat_seq) # (B, 5)
788
+
789
+ lambda_lf = params[:, 0] # (B,)
790
+ lambda_hf = params[:, 1] # (B,)
791
+ delta_factor = params[:, 2] # (B,)
792
+ gmax_factor = params[:, 3] # (B,)
793
+ # eps_factor = params[:, 4] — used for convergence check (inference only)
794
+
795
+ # Compute physical gain cap (linear) from predicted multiplier
796
+ g_max_db = self.cfg.base_max_gain_db * gmax_factor # (B,) dB
797
+ g_max = 10.0 ** (g_max_db / 20.0) # (B,) linear
798
+
799
+ # ── 3. Run unrolled ADMM ───────────────────────────────────────────
800
+ lf_mask = self._get_lf_mask(device) # (P,)
801
+
802
+ x_hat, z_thresh, x_hat_mid = self.unrolled_admm(
803
+ yc_w=yc_w,
804
+ Ir=Ir, Icp=Icp, Icm=Icm,
805
+ lambda_lf=lambda_lf,
806
+ lambda_hf=lambda_hf,
807
+ g_max=g_max,
808
+ lf_mask=lf_mask,
809
+ ) # (B, M), (B, P), (B, M)|None
810
+
811
+ return x_hat, params, z_thresh, x_hat_mid
812
+
813
+ def parameter_count(self) -> int:
814
+ """Total trainable parameters."""
815
+ return sum(p.numel() for p in self.parameters() if p.requires_grad)
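The dB-to-linear conversion in step 2 of `forward` (`g_max = 10 ** (g_max_db / 20)`) can be sketched in isolation. A minimal, hypothetical helper (the function name is an illustration, not part of this module):

```python
def gain_cap_linear(base_max_gain_db: float, gmax_factor: float) -> float:
    """Map the predicted gmax multiplier to a linear gain cap.

    Mirrors g_max = 10 ** (base_max_gain_db * gmax_factor / 20) in forward().
    """
    g_max_db = base_max_gain_db * gmax_factor
    return 10.0 ** (g_max_db / 20.0)

# With base_max_gain_db = 6.0:
#   gmax_factor = 1.00 -> 6.0 dB, roughly 2.0x linear
#   gmax_factor = 0.85 -> 5.1 dB, roughly 1.8x linear (the floor case in the loss docs)
print(gain_cap_linear(6.0, 1.0))
```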
816
+
817
+
818
+ # =============================================================================
819
+ # Loss functions
820
+ # =============================================================================
821
+
822
+ class SPADEUnrolledLoss(nn.Module):
823
+ """
824
+ Composite loss for SPADE-Unrolled training.
825
+
826
+ Components
827
+ ----------
828
+ 1. Mask MSE (w_mask=2.0) MSE on Icp|Icm — recovery region only
829
+ 2. Transparency (w_transp=0.1) MSE on Ir — non-limited must be unchanged
830
+ 3. STFT (w_stft=0.05) Multi-scale L1.
831
+ 4. LF coeff MSE (w_lf_coeff=2.0) PRIMARY LF loss.
832
+ 5. LF energy (w_lf_energy=0.5) One-sided RMS under-recovery penalty.
833
+ 6. Over-recovery (w_over=0.3) LF energy > GT+3dB.
834
+ 7. λ reg (w_reg=5.0) Anti-saturation: L2 from target center.
835
+ Centers: λ_lf=0.034, λ_hf=0.03 — calibrated for M=2048, sr=44100.
836
+ IMPORTANT: λ_hf_center was previously 0.10 (calibrated for M=1024).
837
+ For M=2048, HF DCT coefficients post-normalisation are 0.01–0.05;
838
+ λ_hf=0.10 zeroed all of them → dsdr_high < −1.8 dB every epoch.
839
+ λ_hf_center=0.03 allows real HF transient content to pass through.
840
+ 7b. g_fac floor (w_gfac_floor=3.0) ReLU(floor - g_fac)^2 penalty.
841
+ Blocks the attenuative shortcut: without this, g_fac collapses toward
842
+ the range lower bound (0.5) making g_max ≈ 3 dB, which attenuates the
843
+ signal rather than declipping it (observed: dsdr_high/mid both < 0 from
844
+ epoch 1, worsening steadily to −1.2 / −0.7 dB by ep22).
845
+ floor=0.85 → g_max ≥ 5.1 dB, conservative but blocks the shortcut.
+ (NB: the constructor default is gfac_floor=0.5, matching the new
+ gmax_factor_range lower bound; 0.85 is the stricter calibrated value.)
846
+ 8. Sparsity (w_sparsity=0.5) L1 of thresholded coefficients z_thresh.
847
+ Penalises passing too many coefficients: forces λ to actually zero things out.
848
+ Combined with λ-reg this creates a stable equilibrium where the ADMM does
849
+ real sparse solving rather than degenerating to an identity mapping.
850
+ 9. Deep supervision (w_ds=0.15) Auxiliary mask+LF loss at mid-layer (K//2).
851
+ Directly injects gradient into the GRU, preventing vanishing gradients
852
+ across K unrolled ADMM layers.
853
+ """
854
+
855
+ def __init__(
856
+ self,
857
+ w_mask: float = 2.0,
858
+ w_transp: float = 0.1,
859
+ w_stft: float = 0.05,
860
+ w_lf_coeff: float = 2.0,
861
+ w_lf_energy: float = 0.5,
862
+ w_over: float = 0.3,
863
+ w_reg: float = 5.0,
864
+ w_sparsity: float = 0.5,
865
+ w_ds: float = 0.15, # deep supervision — guide gradient early, don't dominate
866
+ w_gfac_floor: float = 3.0, # g_fac floor penalty: ReLU(g_floor - g_fac)^2
867
+ gfac_floor: float = 0.5, # [FIX] matches new gmax_factor_range lower bound (0.5, 2.0)
868
+ sample_rate: int = 44100,
869
+ lf_cutoff_hz: float = 500.0,
870
+ lambda_lf_center: float = 0.034, # softer LF: reflects lf_delta_ratio=0.286
871
+ # hf_center corrected 0.10 → 0.03.
872
+ # 0.10 was calibrated for M=1024 but model uses M=2048. For M=2048,
873
+ # HF post-normalised magnitudes are 0.01–0.05; λ_hf=0.10 zeros them all
874
+ # (confirmed: dsdr_high stable at −1.8 dB while λ-reg held λ_hf=0.10).
875
+ lambda_hf_center: float = 0.03,
876
+ ):
877
+ super().__init__()
878
+ self.w_mask = w_mask
879
+ self.w_transp = w_transp
880
+ self.w_stft = w_stft
881
+ self.w_lf_coeff = w_lf_coeff
882
+ self.w_lf_energy = w_lf_energy
883
+ self.w_over = w_over
884
+ self.w_reg = w_reg
885
+ self.w_sparsity = w_sparsity
886
+ self.w_ds = w_ds
887
+ self.w_gfac_floor = w_gfac_floor
888
+ self.gfac_floor = gfac_floor
889
+ self.sr = sample_rate
890
+ self.lf_cutoff = lf_cutoff_hz
891
+ self.lf_center = lambda_lf_center
892
+ self.hf_center = lambda_hf_center
893
+ self.stft_wins = [256, 512, 1024]
894
+
895
+ def _frame_loss(
896
+ self,
897
+ x_hat: torch.Tensor, # (B, M)
898
+ x_clean: torch.Tensor, # (B, M)
899
+ yc_w: torch.Tensor, # (B, M)
900
+ Ir: torch.Tensor, # (B, M) bool
901
+ Icp: torch.Tensor, # (B, M) bool
902
+ Icm: torch.Tensor, # (B, M) bool
903
+ ) -> Tuple[torch.Tensor, dict]:
904
+ """Compute mask + transparency + STFT + LF losses for one frame estimate."""
905
+ B, M = x_hat.shape
906
+ losses = {}
907
+
908
+ mask_active = (Icp | Icm).float()
909
+ mask_ir = Ir.float()
910
+ n_active = mask_active.sum(dim=-1).clamp(min=1)
911
+ n_ir = mask_ir.sum(dim=-1).clamp(min=1)
912
+
913
+ # 1. Mask MSE
914
+ sq_err_active = ((x_hat - x_clean) ** 2) * mask_active
915
+ loss_mask = (sq_err_active.sum(dim=-1) / n_active).mean()
916
+ losses["mask"] = loss_mask.item()
917
+
918
+ # 2. Transparency
919
+ sq_err_ir = ((x_hat - x_clean) ** 2) * mask_ir
920
+ loss_transp = (sq_err_ir.sum(dim=-1) / n_ir).mean()
921
+ losses["transp"] = loss_transp.item()
922
+
923
+ # 3. Multi-scale STFT
924
+ loss_stft = x_hat.new_zeros(1)
925
+ for win in self.stft_wins:
926
+ hop = win // 4
927
+ wnd = torch.hann_window(win, device=x_hat.device)
928
+ def _stft(x, _w=wnd, _win=win, _hop=hop):
929
+ return torch.stft(x, n_fft=_win, hop_length=_hop,
930
+ win_length=_win, window=_w, return_complex=True)
931
+ loss_stft = loss_stft + F.l1_loss(_stft(x_hat.float()).abs(),
932
+ _stft(x_clean.float()).abs())
933
+ loss_stft = loss_stft / len(self.stft_wins)
934
+ losses["stft"] = loss_stft.item()
935
+
936
+ # 4. LF coefficient MSE (PRIMARY)
937
+ k_cut = int(math.ceil(self.lf_cutoff * 2.0 * M / self.sr))
938
+ k_cut = max(1, min(k_cut, M))
939
+ dct_res_hat = _dct2(x_hat.float() - yc_w.float())[:, :k_cut]
940
+ dct_res_clean = _dct2(x_clean.float() - yc_w.float())[:, :k_cut]
941
+ loss_lf_coeff = F.mse_loss(dct_res_hat, dct_res_clean)
942
+ losses["lf_coeff"] = loss_lf_coeff.item()
943
+
944
+ # 5. LF energy asymmetric
945
+ dct_hat = _dct2(x_hat.float())[:, :k_cut]
946
+ dct_clean = _dct2(x_clean.float())[:, :k_cut]
947
+ rms_lf_hat = dct_hat.pow(2).mean(dim=-1).clamp(min=1e-10).sqrt()
948
+ rms_lf_clean = dct_clean.pow(2).mean(dim=-1).clamp(min=1e-10).sqrt()
949
+ loss_lf_energy = F.relu(rms_lf_clean - rms_lf_hat).pow(2).mean()
950
+ losses["lf_energy"] = loss_lf_energy.item()
951
+
952
+ # 6. Over-recovery
953
+ loss_over = F.relu(rms_lf_hat - rms_lf_clean * 10.0 ** (3.0/20.0)).pow(2).mean()
954
+ losses["over"] = loss_over.item()
955
+
956
+ total = (self.w_mask * loss_mask
957
+ + self.w_transp * loss_transp
958
+ + self.w_stft * loss_stft.squeeze()
959
+ + self.w_lf_coeff * loss_lf_coeff
960
+ + self.w_lf_energy * loss_lf_energy
961
+ + self.w_over * loss_over)
962
+ return total, losses
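The cutoff-to-DCT-bin mapping in step 4 (`k_cut = ceil(lf_cutoff * 2M / sr)`, clamped) is worth a standalone sketch; the helper name below is an assumption for illustration:

```python
import math

def lf_bin_cutoff(lf_cutoff_hz: float, M: int, sr: int) -> int:
    # DCT-II bin k covers roughly k * sr / (2 * M) Hz, so the first bin at or
    # above the cutoff is ceil(lf_cutoff * 2M / sr), clamped to [1, M].
    k = int(math.ceil(lf_cutoff_hz * 2.0 * M / sr))
    return max(1, min(k, M))

# A 500 Hz cutoff keeps the first 47 of 2048 bins at 44.1 kHz, 24 of 1024 bins.
print(lf_bin_cutoff(500.0, 2048, 44100), lf_bin_cutoff(500.0, 1024, 44100))
```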
963
+
964
+ def forward(
965
+ self,
966
+ x_hat: torch.Tensor, # (B, M)
967
+ x_clean: torch.Tensor, # (B, M)
968
+ yc_w: torch.Tensor, # (B, M)
969
+ Ir: torch.Tensor, # (B, M) bool
970
+ Icp: torch.Tensor, # (B, M) bool
971
+ Icm: torch.Tensor, # (B, M) bool
972
+ params: Optional[torch.Tensor] = None, # (B, 5)
973
+ z_thresh: Optional[torch.Tensor] = None, # (B, P) thresholded coeffs
974
+ x_hat_mid: Optional[torch.Tensor] = None, # (B, M) mid-layer x_hat
975
+ ) -> Tuple[torch.Tensor, dict]:
976
+ losses = {}
977
+
978
+ # ── Primary frame losses ──────────────────────────────────────────
979
+ total, frame_losses = self._frame_loss(x_hat, x_clean, yc_w, Ir, Icp, Icm)
980
+ losses.update(frame_losses)
981
+
982
+ # ── 7. λ anti-saturation regularization (STRONGER, CORRECT CENTERS) ─
983
+ loss_reg = x_hat.new_zeros(1)
984
+ if params is not None and self.w_reg > 0:
985
+ loss_reg = ((params[:, 0] - self.lf_center).pow(2).mean() +
986
+ (params[:, 1] - self.hf_center).pow(2).mean())
987
+ losses["reg"] = loss_reg.item()
988
+ total = total + self.w_reg * loss_reg.squeeze()
989
+
990
+ # ── 7b. g_fac floor penalty ───────────────────────────────────────
991
+ # Prevents the model from using gain attenuation as a shortcut to
992
+ # satisfy the mask/over losses. Without this, g_fac drifts toward
993
+ # the range lower bound (observed: 0.5 → ~0.54 by ep22) causing
994
+ # dsdr_high < 0 and dsdr_mid < 0 — the model makes the signal worse.
995
+ #
996
+ # Loss = w_gfac_floor * mean( ReLU(floor - g_fac)^2 )
997
+ # Zero when g_fac >= floor. Quadratic below floor → smooth gradient.
998
+ loss_gfac_floor = x_hat.new_zeros(1)
999
+ if params is not None and self.w_gfac_floor > 0:
1000
+ g_fac = params[:, 3] # (B,)
1001
+ loss_gfac_floor = F.relu(self.gfac_floor - g_fac).pow(2).mean()
1002
+ losses["gfac_floor"] = loss_gfac_floor.item()
1003
+ total = total + self.w_gfac_floor * loss_gfac_floor.squeeze()
1004
+
1005
+ # ── 8. Sparsity loss: L1 of z_thresh ─────────────────────────────
1006
+ # Penalises z_thresh being large: encourages λ to zero out coefficients.
1007
+ # This breaks the identity-mapping local minimum (λ→0 = all coeffs pass).
1008
+ loss_sparsity = x_hat.new_zeros(1)
1009
+ if z_thresh is not None and self.w_sparsity > 0:
1010
+ loss_sparsity = z_thresh.abs().mean()
1011
+ losses["sparsity"] = loss_sparsity.item()
1012
+ total = total + self.w_sparsity * loss_sparsity.squeeze()
1013
+
1014
+ # ── 9. Deep supervision: auxiliary loss at mid-layer ──────────────
1015
+ # Same primary losses applied to x_hat_mid (reconstruction at K//2).
1016
+ # Injects gradient directly into the GRU, preventing vanishing gradients.
1017
+ loss_ds = x_hat.new_zeros(1)
1018
+ if x_hat_mid is not None and self.w_ds > 0:
1019
+ ds_total, _ = self._frame_loss(x_hat_mid, x_clean, yc_w, Ir, Icp, Icm)
1020
+ loss_ds = ds_total
1021
+ losses["ds"] = loss_ds.item()
1022
+ total = total + self.w_ds * loss_ds.squeeze()
1023
+
1024
+ losses["total"] = total.item()
1025
+ return total, losses
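The 7b floor penalty above is a one-liner; here is a NumPy stand-in for the torch expression `F.relu(gfac_floor - g_fac).pow(2).mean()` (sketch only, not the module's code):

```python
import numpy as np

def gfac_floor_penalty(g_fac, floor: float = 0.85) -> float:
    # Zero when g_fac >= floor; grows quadratically below it, giving a smooth
    # gradient that pushes g_fac back above the floor.
    g = np.asarray(g_fac, dtype=np.float64)
    return float(np.mean(np.maximum(floor - g, 0.0) ** 2))

print(gfac_floor_penalty([0.90, 1.20]))  # 0.0: both above the floor
print(gfac_floor_penalty([0.55]))        # (0.85 - 0.55)^2 = 0.09
```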
1026
+
1027
+
1028
+
1029
+ # =============================================================================
1030
+ # WOLA-based inference wrapper
1031
+ # =============================================================================
1032
+
1033
+ class SPADEUnrolledInference:
1034
+ """
1035
+ Wraps SPADEUnrolled to process a full audio signal via WOLA.
1036
+
1037
+ Equivalent to _declip_mono_gpu but calls the learned model instead of
1038
+ the classical SPADE solver. Used at test time only (not differentiable
1039
+ end-to-end because of the frame-level sliding window).
1040
+
1041
+ Usage
1042
+ -----
1043
+ model = SPADEUnrolled(cfg)
1044
+ model.load_state_dict(...)
1045
+ model.eval()
1046
+
1047
+ infer = SPADEUnrolledInference(model, delta_db=2.5, device="cuda")
1048
+ x_hat = infer.process(y_limited, sample_rate=44100)
1049
+ """
1050
+
1051
+ def __init__(
1052
+ self,
1053
+ model: SPADEUnrolled,
1054
+ delta_db: float = 2.5,
1055
+ max_gain_db: float = 6.0,
1056
+ device: str = "cuda",
1057
+ batch_frames: int = 256, # GPU batch size for frame processing (reserved: the WOLA loop below currently runs frame-by-frame)
1058
+ ):
1059
+ self.model = model.to(device)
1060
+ self.model.eval()
1061
+ self.cfg = model.cfg
1062
+ self.delta_db = delta_db
1063
+ self.max_gain_db = max_gain_db
1064
+ self.device = device
1065
+ self.batch_frames = batch_frames
1066
+
1067
+ @torch.no_grad()
1068
+ def process(self, y_limited: np.ndarray, sample_rate: int = 44100) -> np.ndarray:
1069
+ """
1070
+ y_limited : (N,) or (N, C) — limited audio
1071
+ returns : (N,) or (N, C) — restored audio
1072
+ """
1073
+ from scipy.signal.windows import hann as _hann
1074
+ try:
1075
+ from spade_declip_v12 import _compute_masks, _dilate_masks_soft
1076
+ except ImportError:
1077
+ raise ImportError("spade_declip_v12.py must be in the Python path")
1078
+
1079
+ mono = y_limited.ndim == 1
1080
+ if mono:
1081
+ y_limited = y_limited[:, None]
1082
+ _, C = y_limited.shape
1083
+ outputs = []
1084
+
1085
+ for ch in range(C):
1086
+ yc = y_limited[:, ch].astype(np.float64)
1087
+ dc = float(np.mean(yc))
1088
+ yc -= dc
1089
+
1090
+ ceiling = float(np.max(np.abs(yc)))
1091
+ thresh = ceiling * (10.0 ** (-self.delta_db / 20.0))
1092
+ if thresh <= 0:
1093
+ outputs.append(yc + dc) # restore the DC offset removed above
1094
+ continue
1095
+
1096
+ masks_obj = _compute_masks(yc, thresh)
1097
+
1098
+ M = self.cfg.window_length
1099
+ a = self.cfg.hop_length
1100
+ N = int(np.ceil(len(yc) / a))
1101
+ win = np.sqrt(_hann(M, sym=False))
1102
+
1103
+ out_buf = np.zeros(len(yc) + M)
1104
+ norm_buf = np.zeros(len(yc) + M)
1105
+ L = len(yc)
1106
+
1107
+ # Build context buffer: circular buffer of K_ctx frames
1108
+ K_ctx = self.cfg.K_context
1109
+ ctx_buf = np.zeros((K_ctx, M), dtype=np.float32)
1110
+
1111
+ for i in range(N):
1112
+ idx1 = i * a
1113
+ idx2 = min(idx1 + M, L)
1114
+ seg_len = idx2 - idx1
1115
+
1116
+ yc_frame = np.zeros(M)
1117
+ yc_frame[:seg_len] = yc[idx1:idx2]
1118
+ win_frame = yc_frame * win
1119
+
1120
+ # Bypass: no limiting in this frame
1121
+ frame_peak = np.max(np.abs(yc[idx1:idx2]))
1122
+ if frame_peak < thresh:
1123
+ out_buf[idx1:idx1+M] += win_frame * win
1124
+ norm_buf[idx1:idx1+M] += win ** 2
1125
+ ctx_buf = np.roll(ctx_buf, -1, axis=0)
1126
+ ctx_buf[-1] = win_frame.astype(np.float32)
1127
+ continue
1128
+
1129
+ # Extract masks for this frame
1130
+ Ir_f = masks_obj.Ir[idx1:idx2]
1131
+ Icp_f = masks_obj.Icp[idx1:idx2]
1132
+ Icm_f = masks_obj.Icm[idx1:idx2]
1133
+
1134
+ # Pad masks to M
1135
+ Ir_p = np.zeros(M, dtype=bool); Ir_p[:seg_len] = Ir_f
1136
+ Icp_p = np.zeros(M, dtype=bool); Icp_p[:seg_len] = Icp_f
1137
+ Icm_p = np.zeros(M, dtype=bool); Icm_p[:seg_len] = Icm_f
1138
+ Ir_p[seg_len:] = True # padded region = reliable
1139
+
1140
+ # To tensors
1141
+ def _t(arr, dtype=torch.float32):
1142
+ return torch.tensor(arr, dtype=dtype,
1143
+ device=self.device).unsqueeze(0)
1144
+
1145
+ yc_t = _t(win_frame.astype(np.float32))
1146
+ ctx_t = torch.tensor(ctx_buf, dtype=torch.float32,
1147
+ device=self.device).unsqueeze(0) # (1, K, M)
1148
+ Ir_t = _t(Ir_p, dtype=torch.bool)
1149
+ Icp_t = _t(Icp_p, dtype=torch.bool)
1150
+ Icm_t = _t(Icm_p, dtype=torch.bool)
1151
+
1152
+ with torch.no_grad():
1153
+ x_hat_t, _, _, _ = self.model(yc_t, ctx_t, Ir_t, Icp_t, Icm_t)
1154
+
1155
+ x_hat = x_hat_t.squeeze(0).cpu().numpy()
1156
+
1157
+ out_buf[idx1:idx1+M] += x_hat * win
1158
+ norm_buf[idx1:idx1+M] += win ** 2
1159
+
1160
+ # Update context buffer
1161
+ ctx_buf = np.roll(ctx_buf, -1, axis=0)
1162
+ ctx_buf[-1] = win_frame.astype(np.float32)
1163
+
1164
+ # Normalise WOLA
1165
+ safe_norm = np.where(norm_buf > 1e-8, norm_buf, 1.0)
1166
+ recovered = out_buf / safe_norm
1167
+ recovered = recovered[:L] + dc
1168
+ outputs.append(recovered)
1169
+
1170
+ result = np.column_stack(outputs)
1171
+ return result[:, 0] if mono else result
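The sqrt-Hann window with hop = M/4 used in the loop above satisfies COLA: the accumulated `win ** 2` normaliser is flat (value 2.0) in steady state, so dividing `out_buf` by `norm_buf` undoes the windowing exactly. A quick check, assuming SciPy's periodic Hann as used here:

```python
import numpy as np
from scipy.signal.windows import hann

M, hop = 2048, 512                      # window and hop as in the WOLA loop
win = np.sqrt(hann(M, sym=False))       # sqrt-Hann analysis/synthesis window

n_frames = 16
buf = np.zeros(hop * n_frames + M)
for i in range(n_frames):
    buf[i * hop : i * hop + M] += win ** 2   # what norm_buf accumulates

steady = buf[M : hop * n_frames]        # region covered by 4 frames everywhere
print(np.allclose(steady, 2.0))         # True: periodic Hann at M/4 hop sums to 2
```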
1172
+
1173
+
1174
+
1175
+ # =============================================================================
1176
+ # Hybrid inference: v11 S-SPADE (HF unchanged) + SPADEUnrolled (LF learned)
1177
+ # =============================================================================
1178
+
1179
+ class HybridSPADEInference:
1180
+ """
1181
+ Hybrid audio de-limiting (limiter removal) inference.
1182
+
1183
+ Architecture
1184
+ ------------
1185
+ 1. LR crossover split at `crossover_hz` (default 8000 Hz).
1186
+ Uses the same phase-perfect Butterworth HP = x − LP formula as v11
1187
+ `_lr_split`, ensuring lf + hf == x exactly (no energy loss or leakage).
1188
+
1189
+ 2. HF band (≥ crossover_hz):
1190
+ → ``spade_declip_v11._sspade_batch_gpu`` (GPU) or
1191
+ ``spade_declip_v11.tight_sspade`` (CPU)
1192
+ Algorithm is BYTE-FOR-BYTE identical to v11. Hard thresholding H_k
1193
+ with progressive relaxation (k starts at hf_s, increments by hf_s
1194
+ every hf_r iterations up to hf_max_iter).
1195
+
1196
+ 3. LF band (< crossover_hz):
1197
+ → ``SPADEUnrolledInference.process()`` — learned reconstruction.
1198
+ ContextEncoder predicts per-frame lambda_lf, g_max, delta from the
1199
+ K previous frames; UnrolledADMM applies K_unroll differentiable
1200
+ soft-threshold layers.
1201
+
1202
+ 4. Output = lf_recovered + hf_recovered.
1203
+
1204
+ Rationale for 8 kHz crossover
1205
+ ------------------------------
1206
+ v11 S-SPADE recovers HF transients (cymbal snap, hi-hat attack) well:
1207
+ the DCT coefficients above 8 kHz are sparse and hard thresholding with
1208
+ small k finds them reliably. Below 8 kHz (kick body, bass fundamental)
1209
+ v11 under-recovers because:
1210
+ • The "true" sparsity level k is content-dependent and poorly set
1211
+ by the fixed s/r schedule in the time available (K_unroll layers).
1212
+ • Tonal/sustain content is not globally sparse, so H_k wastes budget
1213
+ zeroing low-energy HF coefficients instead of recovering LF energy.
1214
+ The learned model addresses both issues via adaptive lambda_lf and g_max.
1215
+
1216
+ Parameters
1217
+ ----------
1218
+ model : trained SPADEUnrolled (loaded from checkpoint)
1219
+ crossover_hz : LR crossover frequency (default 8000 Hz)
1220
+ lf_delta_db : threshold for LF band mask detection (dB below ceiling)
1221
+ lf_max_gain_db : gain cap for LF band recovery
1222
+ lf_release_ms : mask dilation for LF band (limiter release smear)
1223
+ hf_delta_db : threshold for HF band mask detection
1224
+ hf_s : v11 sparsity step (k starts at hf_s, increments by hf_s)
1225
+ hf_r : v11 sparsity relaxation period (k incremented every hf_r iter)
1226
+ hf_eps : v11 convergence threshold
1227
+ hf_max_iter : v11 max iterations per frame
1228
+ hf_max_gain_db : v11 ratio-aware gain cap for HF band
1229
+ hf_release_ms : v11 mask dilation for HF band
1230
+ hf_window_length : v11 WOLA window for HF band (default 2048)
1231
+ hf_hop_length : v11 WOLA hop for HF band (default 512)
1232
+ device : 'cuda' | 'cpu' | 'auto'
1233
+ batch_frames : GPU batch size for SPADEUnrolled LF processing
1234
+ """
1235
+
1236
+ def __init__(
1237
+ self,
1238
+ model: "SPADEUnrolled",
1239
+ crossover_hz: float = 8000.0,
1240
+ lf_delta_db: float = 1.5,
1241
+ lf_max_gain_db: float = 6.0,
1242
+ lf_release_ms: float = 0.0,
1243
+ hf_delta_db: float = 1.5,
1244
+ hf_s: int = 1,
1245
+ hf_r: int = 1,
1246
+ hf_eps: float = 0.05,
1247
+ hf_max_iter: int = 500,
1248
+ hf_max_gain_db: float = 6.0,
1249
+ hf_release_ms: float = 0.0,
1250
+ hf_window_length: int = 2048,
1251
+ hf_hop_length: int = 512,
1252
+ device: str = "auto",
1253
+ batch_frames: int = 256,
1254
+ ):
1255
+ if device == "auto":
1256
+ try:
1257
+ import torch as _t
1258
+ device = "cuda" if _t.cuda.is_available() else "cpu"
1259
+ except ImportError:
1260
+ device = "cpu"
1261
+
1262
+ self.model = model.to(device)
1263
+ self.model.eval()
1264
+ self.cfg = model.cfg
1265
+
1266
+ self.crossover_hz = crossover_hz
1267
+ self.lf_delta_db = lf_delta_db
1268
+ self.lf_max_gain_db = lf_max_gain_db
1269
+ self.lf_release_ms = lf_release_ms
1270
+ self.hf_delta_db = hf_delta_db
1271
+ self.hf_s = hf_s
1272
+ self.hf_r = hf_r
1273
+ self.hf_eps = hf_eps
1274
+ self.hf_max_iter = hf_max_iter
1275
+ self.hf_max_gain_db = hf_max_gain_db
1276
+ self.hf_release_ms = hf_release_ms
1277
+ self.hf_window_length = hf_window_length
1278
+ self.hf_hop_length = hf_hop_length
1279
+ self.device = device
1280
+ self.batch_frames = batch_frames
1281
+
1282
+ # Cached LF inference wrapper (re-used across calls)
1283
+ self._lf_infer = SPADEUnrolledInference(
1284
+ model,
1285
+ delta_db = lf_delta_db,
1286
+ max_gain_db = lf_max_gain_db,
1287
+ device = device,
1288
+ batch_frames = batch_frames,
1289
+ )
1290
+
1291
+ @staticmethod
1292
+ def _lr_split(
1293
+ x: np.ndarray,
1294
+ crossover_hz: float,
1295
+ sr: int,
1296
+ ) -> "Tuple[np.ndarray, np.ndarray]":
1297
+ """
1298
+ Phase-perfect Linkwitz-Riley crossover. lp + hp == x exactly.
1299
+ Identical to spade_declip_v11._lr_split.
1300
+ """
1301
+ from scipy.signal import butter, sosfiltfilt
1302
+ fc = float(np.clip(crossover_hz, 1.0, sr / 2.0 - 1.0))
1303
+ sos = butter(2, fc, btype="low", fs=sr, output="sos")
1304
+ lp = sosfiltfilt(sos, x)
1305
+ hp = x - lp
1306
+ return lp, hp
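The `hp = x - lp` construction guarantees exact reconstruction regardless of the filter's phase response. A quick numerical check of the same split (assumes SciPy; standalone re-statement, not the method itself):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lr_split(x, crossover_hz, sr):
    # Zero-phase low-pass, then define HP as the residual: lp + hp == x.
    sos = butter(2, crossover_hz, btype="low", fs=sr, output="sos")
    lp = sosfiltfilt(sos, x)
    return lp, x - lp

rng = np.random.default_rng(0)
x = rng.standard_normal(44100)
lp, hp = lr_split(x, 8000.0, 44100)
print(float(np.max(np.abs(lp + hp - x))))  # ~0: exact up to float rounding
```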
1307
+
1308
+ def _process_hf_band(
1309
+ self,
1310
+ hf_mono: np.ndarray,
1311
+ sr: int,
1312
+ ) -> np.ndarray:
1313
+ """
1314
+ Run v11 S-SPADE on the HF band. Algorithm is identical to v11.
1315
+ Imports lazily so spade_declip_v11 is only required at inference.
1316
+ """
1317
+ try:
1318
+ from spade_declip_v11 import (
1319
+ declip as _v11_declip,
1320
+ DeclipParams as _V11Params,
1321
+ )
1322
+ except ImportError:
1323
+ raise ImportError(
1324
+ "spade_declip_v11.py must be in the Python path for HF processing."
1325
+ )
1326
+
1327
+ params = _V11Params(
1328
+ algo = "sspade",
1329
+ frame = self.cfg.frame,
1330
+ mode = "soft",
1331
+ delta_db = self.hf_delta_db,
1332
+ window_length = self.hf_window_length,
1333
+ hop_length = self.hf_hop_length,
1334
+ s = self.hf_s,
1335
+ r = self.hf_r,
1336
+ eps = self.hf_eps,
1337
+ max_iter = self.hf_max_iter,
1338
+ max_gain_db = self.hf_max_gain_db,
1339
+ release_ms = self.hf_release_ms,
1340
+ sample_rate = sr,
1341
+ use_gpu = (self.device != "cpu"),
1342
+ gpu_device = (self.device if self.device != "cpu" else "auto"),
1343
+ show_progress = False,
1344
+ verbose = False,
1345
+ )
1346
+ fixed, _ = _v11_declip(hf_mono, params)
1347
+ return fixed
1348
+
1349
+ @torch.no_grad()
1350
+ def process(
1351
+ self,
1352
+ y_limited: np.ndarray,
1353
+ sample_rate: int = 44100,
1354
+ ) -> np.ndarray:
1355
+ """
1356
+ y_limited : (N,) or (N, C) — limited audio at any sample rate
1357
+ returns : (N,) or (N, C) — hybrid-recovered audio
1358
+
1359
+ Pipeline per channel
1360
+ --------------------
1361
+ 1. LR crossover split at self.crossover_hz
1362
+ → lf_band (0 – crossover_hz)
1363
+ → hf_band (crossover_hz – Nyquist)
1364
+ 2. HF: spade_declip_v11 S-SPADE (identical algorithm, unchanged)
1365
+ 3. LF: SPADEUnrolledInference (learned soft-threshold ADMM)
1366
+ 4. hf_recovered + lf_recovered = full signal
1367
+ """
1368
+ mono = y_limited.ndim == 1
1369
+ if mono:
1370
+ y_limited = y_limited[:, None]
1371
+ _, C = y_limited.shape
1372
+ out_channels = []
1373
+
1374
+ for ch in range(C):
1375
+ yc = y_limited[:, ch].astype(np.float64)
1376
+
1377
+ # ── Phase-perfect LR split ────────────────────────────────────
1378
+ lf_band, hf_band = self._lr_split(yc, self.crossover_hz, sample_rate)
1379
+
1380
+ # ── HF: v11 S-SPADE (unchanged) ────────────────────────────────
1381
+ hf_rec = self._process_hf_band(hf_band.astype(np.float64), sample_rate)
1382
+
1383
+ # ── LF: SPADEUnrolled (learned) ────────────────────────────────
1384
+ lf_rec = self._lf_infer.process(
1385
+ lf_band.astype(np.float32), sample_rate
1386
+ )
1387
+
1388
+ # ── Recombine ─────────────────────────────────────────────────
1389
+ L = min(len(lf_rec), len(hf_rec))
1390
+ combined = lf_rec[:L].astype(np.float64) + hf_rec[:L]
1391
+ out_channels.append(combined)
1392
+
1393
+ result = np.column_stack(out_channels)
1394
+ return result[:, 0] if mono else result
1395
+
1396
+
1397
+ # =============================================================================
1398
+ # Model factory
1399
+ # =============================================================================
1400
+
1401
+ def build_model(cfg: Optional[UnrolledConfig] = None) -> SPADEUnrolled:
1402
+ """Construct a SPADEUnrolled model with default or custom config."""
1403
+ if cfg is None:
1404
+ cfg = UnrolledConfig()
1405
+ model = SPADEUnrolled(cfg)
1406
+ n = model.parameter_count()
1407
+ print(f"[SPADEUnrolled] Built model: {n:,} trainable parameters")
1408
+ return model
1409
+
1410
+
1411
+ # =============================================================================
1412
+ # Quick sanity check
1413
+ # =============================================================================
1414
+
1415
+ def _smoke_test():
1416
+ """Run a forward pass with random data to verify shapes and dtypes."""
1417
+ print("=" * 60)
1418
+ print("SPADE Unrolled — Smoke Test")
1419
+ print("=" * 60)
1420
+
1421
+ cfg = UnrolledConfig(
1422
+ window_length=512,
1423
+ hop_length=128,
1424
+ K_unroll=4,
1425
+ K_context=4,
1426
+ n_mels=16,
1427
+ gru_hidden=64,
1428
+ gru_layers=1,
1429
+ )
1430
+ model = build_model(cfg)
1431
+ model.eval()
1432
+
1433
+ B = 4
1434
+ M = cfg.window_length
1435
+ K = cfg.K_context
1436
+
1437
+ # Random limited frames + masks
1438
+ yc = torch.randn(B, M) * 0.5
1439
+ ctx = torch.randn(B, K, M) * 0.5
1440
+ thresh = 0.3
1441
+ Ir = yc.abs() < thresh
1442
+ Icp = yc >= thresh
1443
+ Icm = yc <= -thresh
1444
+
1445
+ with torch.no_grad():
1446
+ x_hat, params, z_thresh, x_hat_mid = model(yc, ctx, Ir, Icp, Icm)
1447
+
1448
+ print(f" Input yc: {tuple(yc.shape)} dtype={yc.dtype}")
1449
+ print(f" Output x_hat: {tuple(x_hat.shape)} dtype={x_hat.dtype}")
1450
+ print(f" Params: {tuple(params.shape)} dtype={params.dtype}")
1451
+ print(f" Param ranges:")
1452
+ print(f" lambda_lf ∈ [{params[:,0].min():.4f}, {params[:,0].max():.4f}]")
1453
+ print(f" lambda_hf ∈ [{params[:,1].min():.4f}, {params[:,1].max():.4f}]")
1454
+ print(f" delta_fac ∈ [{params[:,2].min():.4f}, {params[:,2].max():.4f}]")
1455
+ print(f" gmax_fac ∈ [{params[:,3].min():.4f}, {params[:,3].max():.4f}]")
1456
+ print(f" eps_fac ∈ [{params[:,4].min():.4f}, {params[:,4].max():.4f}]")
1457
+
1458
+ # Loss test
1459
+ x_clean = yc + torch.randn_like(yc) * 0.1
1460
+ loss_fn = SPADEUnrolledLoss()
1461
+ loss, details = loss_fn(x_hat, x_clean, yc, Ir, Icp, Icm)
1462
+ print(f"\n Loss: {loss.item():.6f}")
1463
+ for k, v in details.items():
1464
+ print(f" {k:12s}: {v:.6f}")
1465
+
1466
+ # Check gradients
1467
+ model.train()
1468
+ x_hat2, _, z2, xm2 = model(yc, ctx, Ir, Icp, Icm)
1469
+ loss2, _ = loss_fn(x_hat2, x_clean, yc, Ir, Icp, Icm, z_thresh=z2, x_hat_mid=xm2)
1470
+ loss2.backward()
1471
+ grad_norms = {
1472
+ name: p.grad.norm().item()
1473
+ for name, p in model.named_parameters()
1474
+ if p.grad is not None
1475
+ }
1476
+ print(f"\n Gradient norms (sample):")
1477
+ for k, v in list(grad_norms.items())[:6]:
1478
+ print(f" {k:40s}: {v:.6f}")
1479
+
1480
+ print("\n ✓ Smoke test passed.")
1481
+
1482
+
1483
+ if __name__ == "__main__":
1484
+ _smoke_test()
thr_lin ADDED
File without changes
train_spade_unrolled.py ADDED
@@ -0,0 +1,1485 @@
1
+ """
2
+ train_spade_unrolled.py — Two-phase training for SPADE Unrolled (v13 integration)
3
+ =================================================================
4
+
5
+ Phase 1 — Isolated drum samples + pink noise
6
+ • Corpus: Kicks / Snares / Perc / Tops (same as run_smart_sweep.py)
7
+ • Augmentation: limiter threshold ∈ [−4.5, −1.5] dBFS, release ∈ [40, 120] ms
8
+ • Noise: pink noise @ −20 dBFS (background proxy, prevents silent-region exploit)
9
+ • Purpose: learn limiter signature from clean transients with exact GT
10
+
11
+ Phase 2 — Full mix (Strategy A: synthetic limiter on real mixes)
12
+ • Corpus: WAV/FLAC stems or premix files (user-provided, see --mix-dir)
13
+ • Same limiter as Phase 1 applied to full mix → GT = premix
14
+ • Mixed batching: fraction α from Phase-1, (1−α) from Phase-2 (default α=0.3)
15
+ • Loss weights: higher w_transp (Ir regions critical in polyphonic content)
16
+
17
+ Training strategy
18
+ -----------------
19
+ Phase 1 → checkpoint → Phase 2 (fine-tune, mixed batching)
20
+ ↑ saved separately to detect catastrophic forgetting
21
+
22
+ Scheduler
23
+ ---------
24
+ OneCycleLR in both phases. Phase 2 starts at one tenth of the Phase-1 peak LR.
25
+
26
+ Pruning (Phase 2)
27
+ -----------------
28
+ Validation on BOTH Phase-1 and Phase-2 val sets. Training stops when
29
+ Phase-2 composite score stops improving OR Phase-1 score degrades > threshold.
30
+
31
+ CLI
32
+ ---
33
+ # Phase 1 only (test)
34
+ python train_spade_unrolled.py --phase 1 --epochs 30 --drum-dir ./Samples
35
+
36
+ # Phase 2 (requires Phase-1 checkpoint)
37
+ python train_spade_unrolled.py --phase 2 --epochs 20 --drum-dir ./Samples \
38
+ --mix-dir ./FullMix --ckpt-phase1 phase1_best.pt
39
+
40
+ # Full two-phase run
41
+ python train_spade_unrolled.py --phase both --epochs-p1 50 --epochs-p2 30 \
42
+ --drum-dir ./Samples --mix-dir ./FullMix
43
+
44
+ # Resume
45
+ python train_spade_unrolled.py --phase 2 --resume checkpoints/epoch_10.pt \
46
+ --drum-dir ./Samples --mix-dir ./FullMix
47
+
48
+ # v13 integration (recommended): match HybridSPADEInference deployment
49
+ # --lf-band-hz 8000 : LP-filter frames to 8kHz (crossover_hz default)
50
+ # --loss-lf-cutoff 500: focus LF coefficient loss on sub-bass/bass failure zone
51
+ # mask dilation is ON by default (disable with --no-mask-dilation)
52
+ python train_spade_unrolled.py --phase both --epochs-p1 50 --epochs-p2 30 \
53
+ --drum-dir ./Samples --mix-dir ./FullMix \
54
+ --lf-band-hz 8000 --loss-lf-cutoff 500
55
+ """
56

from __future__ import annotations

import argparse
import json
import math
import os
import random
import time
import warnings
from dataclasses import dataclass, asdict
from pathlib import Path
from typing import Dict, List, Optional, Tuple

import numpy as np
import scipy.signal as sig

try:
    import soundfile as sf
    _HAS_SF = True
except ImportError:
    _HAS_SF = False
    warnings.warn("soundfile not found — pip install soundfile")

try:
    import torch
    import torch.nn as nn
    import torch.optim as optim
    from torch.utils.data import Dataset, DataLoader, ConcatDataset, WeightedRandomSampler
    _HAS_TORCH = True
except ImportError:
    _HAS_TORCH = False
    raise ImportError("PyTorch required — pip install torch")

from spade_unrolled import (
    SPADEUnrolled, UnrolledConfig, SPADEUnrolledLoss, build_model,
)

# ── Try importing SPADE internals for corpus preparation ─────────────────────
try:
    from spade_declip_v13 import (_compute_masks, _dilate_masks_soft,
                                  _lr_split as _v13_lr_split)
    _HAS_SPADE = True
except ImportError:
    _HAS_SPADE = False
    warnings.warn("spade_declip_v13.py not found — mask computation disabled")
102

# =============================================================================
# Constants
# =============================================================================

DRUM_DIRS = ["Kicks", "Snares", "Perc", "Tops"]
SAMPLE_RATE = 44100

# Limiter augmentation ranges (Phase 1)
AUG_THRESH_RANGE = (-1.5, -4.5)     # dBFS — limiter threshold
AUG_RELEASE_RANGE = (40.0, 120.0)   # ms
PINK_NOISE_DB = -20.0               # dB relative to peak

# Default training hyperparameters
BATCH_SIZE = 32
LR_PHASE1 = 3e-4
LR_PHASE2 = 3e-5        # Phase 2 starts at 1/10 of Phase 1
WEIGHT_DECAY = 1e-4
GRAD_CLIP = 1.0

# Mixed batching: fraction of Phase-1 samples in each Phase-2 batch
PHASE1_MIX_FRAC = 0.30

# Validation: catastrophic forgetting threshold
FORGETTING_THRESHOLD_DB = 2.0   # Phase-1 composite score must not drop > this dB


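The two-sided stopping rule described in the module docstring (stop when the Phase-2 score plateaus OR the Phase-1 score drops by more than `FORGETTING_THRESHOLD_DB`) can be sketched as a small predicate. The function and argument names here are hypothetical illustrations, not part of this script:

```python
def should_stop_phase2(p1_best, p1_now, p2_best, p2_now,
                       epochs_since_p2_improve,
                       patience=15, forgetting_thr_db=2.0):
    # Hypothetical sketch of the Phase-2 pruning rule from the docstring.
    # Catastrophic forgetting: Phase-1 composite fell too far below its best.
    forgetting = (p1_best - p1_now) > forgetting_thr_db
    # Plateau: Phase-2 composite has not improved for `patience` epochs.
    plateau = epochs_since_p2_improve >= patience and p2_now <= p2_best
    return forgetting or plateau
```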
129
# =============================================================================
# Pink noise generator (causal IIR, only used for corpus generation)
# =============================================================================

def generate_pink_noise(n_samples: int, rng: np.random.Generator) -> np.ndarray:
    """3-pole IIR approximation of 1/f (pink) noise, normalised to unit RMS."""
    b = np.array([0.049922035, -0.095993537, 0.050612699, -0.004408786])
    a = np.array([1.0, -2.494956002, 2.017265875, -0.522189400])
    white = rng.standard_normal(n_samples)
    pink = sig.lfilter(b, a, white)
    rms = np.sqrt(np.mean(pink ** 2))
    return pink / (rms + 1e-12)
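A quick standalone sanity check of the generator above (same filter coefficients, applied with an explicit direct-form loop so scipy is not required here): the output should have unit RMS and markedly more energy at low frequencies than near Nyquist.

```python
import numpy as np

def pink_like(n: int, seed: int = 0) -> np.ndarray:
    # Same 3-pole IIR coefficients as generate_pink_noise, direct-form I loop.
    b = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
    a = [1.0, -2.494956002, 2.017265875, -0.522189400]
    x = np.random.default_rng(seed).standard_normal(n)
    y = np.zeros(n)
    for i in range(n):
        acc = sum(b[j] * x[i - j] for j in range(4) if i - j >= 0)
        acc -= sum(a[j] * y[i - j] for j in range(1, 4) if i - j >= 0)
        y[i] = acc
    return y / (np.sqrt(np.mean(y ** 2)) + 1e-12)

p = pink_like(20_000)
power = np.abs(np.fft.rfft(p)) ** 2   # 1/f-like: low bins dominate
```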
142

# =============================================================================
# Synthetic brickwall limiter
# =============================================================================

def apply_brickwall_limiter(
    audio: np.ndarray,
    sr: int,
    threshold_db: float,
    release_ms: float,
) -> np.ndarray:
    """
    Brickwall limiter with 1-sample attack, exponential release.
    audio: (N,) — normalised to 0 dBFS peak.
    Returns: (N,) — limited signal (peak ≈ threshold_db dBFS).
    """
    thr_lin = 10 ** (-abs(threshold_db) / 20.0)
    rc = np.exp(-1.0 / max(release_ms * sr / 1000.0, 1e-9))

    out = np.empty_like(audio)
    env = 1.0
    for n in range(len(audio)):
        pk = abs(audio[n])
        target = thr_lin / pk if pk > thr_lin else 1.0
        env = target if target < env else rc * env + (1.0 - rc) * target
        out[n] = audio[n] * env
    return out
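A minimal numpy sketch of the envelope recursion above (the body mirrors `apply_brickwall_limiter`): a 0 dBFS burst is pinned to the −6 dBFS threshold within one sample, and the gain recovers exponentially once the input falls back below threshold.

```python
import numpy as np

def brickwall(audio, sr, threshold_db, release_ms):
    # Same recursion as apply_brickwall_limiter: 1-sample attack, exp release.
    thr = 10 ** (-abs(threshold_db) / 20.0)
    rc = np.exp(-1.0 / max(release_ms * sr / 1000.0, 1e-9))
    out = np.empty(len(audio))
    env = 1.0
    for n, x in enumerate(audio):
        pk = abs(x)
        target = thr / pk if pk > thr else 1.0
        env = target if target < env else rc * env + (1.0 - rc) * target
        out[n] = x * env
    return out

x = np.concatenate([np.ones(10), np.full(200, 0.1)])  # burst, then quiet tail
y = brickwall(x, sr=1000, threshold_db=-6.0, release_ms=10.0)
```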
170

# =============================================================================
# Corpus preparation: build list of (clean, limited) sample pairs
# =============================================================================

@dataclass
class AudioSample:
    """A single training sample: original + limited pair."""
    clean: np.ndarray       # (N,) float64 — original at 0 dBFS
    limited: np.ndarray     # (N,) float64 — after limiter
    sr: int
    threshold_db: float
    release_ms: float
    source: str             # file path for debugging


def load_audio_file(path: Path, target_sr: int = SAMPLE_RATE) -> Optional[np.ndarray]:
    """Load an audio file, downmix to mono float64, resample, peak-normalise."""
    if not _HAS_SF:
        return None
    try:
        audio, sr = sf.read(str(path), always_2d=True)
        audio = audio.mean(axis=1)  # mono
        if sr != target_sr:
            # Polyphase resample via scipy; if unavailable, keep original rate
            try:
                from scipy.signal import resample_poly
                from math import gcd
                g = gcd(target_sr, sr)
                audio = resample_poly(audio, target_sr // g, sr // g)
            except Exception:
                pass
        peak = np.max(np.abs(audio))
        if peak < 1e-8:
            return None
        return audio.astype(np.float64) / peak  # 0 dBFS peak
    except Exception as e:
        warnings.warn(f"Could not load {path}: {e}")
        return None
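Worked example of the rational resampling ratio computed above: converting 48 kHz material to the 44.1 kHz target reduces to an upsample-by-147 / downsample-by-160 polyphase stage.

```python
from math import gcd

# Same ratio reduction as in load_audio_file.
target_sr, sr = 44100, 48000
g = gcd(target_sr, sr)              # common factor
up, down = target_sr // g, sr // g  # resample_poly(audio, up, down)
```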
210

def build_drum_corpus(
    base_dir: Path,
    rng: np.random.Generator,
    augment: bool = True,
) -> List[AudioSample]:
    """
    Build corpus of drum samples with synthetic limiting.
    Mirrors the corpus in run_smart_sweep.py + augmentation.

    augment=True:  randomise threshold and release per sample.
    augment=False: fixed threshold −3 dBFS, release 80 ms.
    """
    samples = []
    drum_dirs = [base_dir / d for d in DRUM_DIRS if (base_dir / d).exists()]
    if not drum_dirs:
        # Flat structure fallback
        drum_dirs = [base_dir]

    audio_files = []
    for d in drum_dirs:
        for ext in ["*.wav", "*.WAV", "*.flac", "*.FLAC", "*.aif", "*.aiff"]:
            audio_files.extend(d.glob(ext))

    if not audio_files:
        warnings.warn(f"No audio files found in {base_dir}")
        return samples

    for path in audio_files:
        audio = load_audio_file(path, SAMPLE_RATE)
        if audio is None:
            continue

        # Add pink noise background (same logic as run_smart_sweep.py)
        pink = generate_pink_noise(len(audio), rng)
        gain = float(np.max(np.abs(audio))) * (10 ** (PINK_NOISE_DB / 20.0))
        mixed = audio + pink * gain
        peak = np.max(np.abs(mixed))
        if peak > 1e-8:
            mixed /= peak  # re-normalise to 0 dBFS

        # Limiter parameters
        if augment:
            thr_db = rng.uniform(*sorted(AUG_THRESH_RANGE))  # −4.5 to −1.5 dBFS
            rel_ms = rng.uniform(*AUG_RELEASE_RANGE)
        else:
            thr_db = -3.0
            rel_ms = 80.0

        limited = apply_brickwall_limiter(mixed, SAMPLE_RATE, thr_db, rel_ms)

        samples.append(AudioSample(
            clean=mixed, limited=limited,
            sr=SAMPLE_RATE,
            threshold_db=thr_db, release_ms=rel_ms,
            source=str(path),
        ))

    return samples
270

def build_fullmix_corpus(
    mix_dir: Path,
    rng: np.random.Generator,
    augment: bool = True,
) -> List[AudioSample]:
    """
    Build corpus of full-mix samples (Strategy A from checklist).
    Applies the same synthetic limiter as Phase 1 to real mix material.
    NO pink noise added: the mix content itself provides the background.
    """
    samples = []
    audio_files = []
    for ext in ["*.wav", "*.WAV", "*.flac", "*.FLAC"]:
        audio_files.extend(mix_dir.rglob(ext))

    for path in audio_files:
        audio = load_audio_file(path, SAMPLE_RATE)
        if audio is None:
            continue

        if augment:
            thr_db = rng.uniform(*sorted(AUG_THRESH_RANGE))
            rel_ms = rng.uniform(*AUG_RELEASE_RANGE)
        else:
            thr_db = -3.0
            rel_ms = 80.0

        limited = apply_brickwall_limiter(audio, SAMPLE_RATE, thr_db, rel_ms)
        samples.append(AudioSample(
            clean=audio, limited=limited,
            sr=SAMPLE_RATE,
            threshold_db=thr_db, release_ms=rel_ms,
            source=str(path),
        ))

    return samples
308

# =============================================================================
# Frame dataset: extract random frames from AudioSample list
# =============================================================================

class FrameDataset(Dataset):
    """
    Random-frame dataset: returns (yc_w, ctx, x_clean_w, Ir, Icp, Icm) tuples.

    Each call to __getitem__ draws a RANDOM frame from a RANDOM AudioSample.
    This means len(dataset) is a virtual epoch length (n_virtual), not the
    number of samples. Caller sets n_virtual = n_samples * avg_frames.

    Context frames are the K windows immediately BEFORE the current frame.
    For the first K frames, context is zero-padded.

    Mask computation requires delta_db, which is derived from the sample's
    threshold_db. delta_db = abs(threshold_db) is the v12 convention.
    """

    def __init__(
        self,
        samples: List[AudioSample],
        cfg: UnrolledConfig,
        n_virtual: int = 10000,
        rng_seed: int = 42,
        lf_band_hz: float = 0.0,            # v13: LP-filter training data to match inference LR split
        apply_mask_dilation: bool = False,  # v13: dilate masks with sample release_ms
    ):
        if not samples:
            raise ValueError("Empty sample list passed to FrameDataset")
        self.samples = samples
        self.cfg = cfg
        self.M = cfg.window_length
        self.a = cfg.hop_length
        self.K_ctx = cfg.K_context
        self.n_virtual = n_virtual
        self.rng = np.random.default_rng(rng_seed)
        self.lf_band_hz = lf_band_hz
        self.apply_mask_dilation = apply_mask_dilation

        from scipy.signal.windows import hann
        self.win = np.sqrt(hann(self.M, sym=False)).astype(np.float32)

        # Pre-build LR-split LP filter if requested (same as v13 _lr_split)
        self._lp_sos = None
        if lf_band_hz > 0.0 and _HAS_SPADE:
            from scipy.signal import butter
            fc = float(np.clip(lf_band_hz, 1.0, cfg.sample_rate / 2.0 - 1.0))
            self._lp_sos = butter(2, fc, btype="low", fs=cfg.sample_rate, output="sos")

    def __len__(self):
        return self.n_virtual

    def __getitem__(self, _idx: int):
        # ── Pick a random sample ──────────────────────────────────────────
        s = self.samples[self.rng.integers(len(self.samples))]

        # ── v13: apply LR split to match HybridSPADEInference inference distribution
        # When lf_band_hz > 0, LP-filter both clean and limited to the LF band
        # before frame extraction. This eliminates the train/inference mismatch
        # where the model trained on full-bandwidth drums but was deployed on the
        # 0–8 kHz LP band.
        if self._lp_sos is not None:
            from scipy.signal import sosfiltfilt
            clean_sig = sosfiltfilt(self._lp_sos, s.clean).astype(np.float64)
            limited_sig = sosfiltfilt(self._lp_sos, s.limited).astype(np.float64)
        else:
            clean_sig = s.clean
            limited_sig = s.limited

        L = len(clean_sig)
        if L < self.M:
            # Too short: zero-pad
            pad = self.M - L
            clean = np.pad(clean_sig, (0, pad))
            limited = np.pad(limited_sig, (0, pad))
            i = 0
        else:
            # Pick a random frame index (at least partially within signal)
            max_idx = max(0, (L - self.M) // self.a)
            i = self.rng.integers(0, max_idx + 1) if max_idx > 0 else 0
            clean = clean_sig
            limited = limited_sig

        # Extract current frame
        idx1 = i * self.a
        idx2 = min(idx1 + self.M, L)
        seg_len = idx2 - idx1

        yc_w = np.zeros(self.M, dtype=np.float32)
        x_clean = np.zeros(self.M, dtype=np.float32)
        yc_w[:seg_len] = (limited[idx1:idx2] * self.win[:seg_len]).astype(np.float32)
        x_clean[:seg_len] = (clean[idx1:idx2] * self.win[:seg_len]).astype(np.float32)

        # ── Compute masks ─────────────────────────────────────────────────
        # v13: optionally apply _dilate_masks_soft so training masks match the
        # dilated masks used at inference (release_ms dilation in _declip_mono_gpu).
        # Without dilation, the model is trained on sharp-edge masks but evaluated
        # on masks that include the limiter release tail — a systematic distribution
        # mismatch that causes the model to under-recover post-peak samples.
        delta_db = abs(s.threshold_db)
        ceiling = float(np.max(np.abs(limited)))
        thresh = ceiling * (10.0 ** (-delta_db / 20.0))
        thresh = max(thresh, 1e-8)

        yc_raw = limited[idx1:idx2]
        Ir_s = np.abs(yc_raw) < thresh
        Icp_s = yc_raw >= thresh
        Icm_s = yc_raw <= -thresh

        Ir = np.ones(self.M, dtype=bool)
        Icp = np.zeros(self.M, dtype=bool)
        Icm = np.zeros(self.M, dtype=bool)
        Ir[:seg_len] = Ir_s
        Icp[:seg_len] = Icp_s
        Icm[:seg_len] = Icm_s

        # v13: dilate masks with sample-specific release_ms if enabled
        if self.apply_mask_dilation and _HAS_SPADE and s.release_ms > 0.0:
            from spade_declip_v13 import ClippingMasks as _CM
            # Apply dilation on the FULL frame (not just the active segment)
            # to get temporally consistent masks matching the inference path.
            masks_obj = _CM(Ir=Ir, Icp=Icp, Icm=Icm)
            rel_samp = max(0, round(s.release_ms * self.cfg.sample_rate / 1000.0))
            if rel_samp > 0:
                yc_for_dil = np.zeros(self.M, dtype=np.float64)
                yc_for_dil[:seg_len] = yc_raw
                masks_obj = _dilate_masks_soft(masks_obj, yc_for_dil, rel_samp)
            Ir = masks_obj.Ir
            Icp = masks_obj.Icp
            Icm = masks_obj.Icm

        # Extract K context frames (strictly causal: before frame i)
        ctx = np.zeros((self.K_ctx, self.M), dtype=np.float32)
        for k in range(self.K_ctx):
            ci = i - (self.K_ctx - k)  # context frame index: i-K, i-K+1, …, i-1
            if ci < 0:
                continue  # zero-pad before signal start
            c_idx1 = ci * self.a
            c_idx2 = min(c_idx1 + self.M, L)
            c_seg = c_idx2 - c_idx1
            if c_seg <= 0:
                continue
            ctx[k, :c_seg] = (limited[c_idx1:c_idx2] * self.win[:c_seg]).astype(np.float32)

        return (
            torch.from_numpy(yc_w),
            torch.from_numpy(ctx),
            torch.from_numpy(x_clean),
            torch.from_numpy(Ir),
            torch.from_numpy(Icp),
            torch.from_numpy(Icm),
        )
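To make the threshold and mask derivation in `__getitem__` concrete, here is the same arithmetic on a hypothetical 4-sample frame with a 3 dB limiter depth:

```python
import numpy as np

limited = np.array([0.10, 0.90, -0.95, 0.20])             # hypothetical frame
delta_db = 3.0                                            # abs(threshold_db)
ceiling = float(np.max(np.abs(limited)))                  # 0.95
thresh = max(ceiling * 10.0 ** (-delta_db / 20.0), 1e-8)  # ≈ 0.673

Ir  = np.abs(limited) < thresh    # reliable (below threshold)
Icp = limited >=  thresh          # clipped positive
Icm = limited <= -thresh          # clipped negative
```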
463

# =============================================================================
# Mixed Batch Sampler for Phase 2
# =============================================================================

class MixedBatchDataset(Dataset):
    """
    Samples from two datasets with a fixed mixing ratio.
    Used in Phase 2: PHASE1_MIX_FRAC from Phase-1, rest from Phase-2.

    Implements the mixed batching strategy from the checklist (option B):
    "keep contact with Phase-1 distribution during Phase-2 training".
    """

    def __init__(
        self,
        dataset_p1: FrameDataset,
        dataset_p2: FrameDataset,
        p1_frac: float = PHASE1_MIX_FRAC,
        n_virtual: int = 20000,
    ):
        self.ds_p1 = dataset_p1
        self.ds_p2 = dataset_p2
        self.p1_frac = p1_frac
        self.n_virtual = n_virtual
        self.rng = random.Random(0)

    def __len__(self):
        return self.n_virtual

    def __getitem__(self, idx: int):
        if self.rng.random() < self.p1_frac:
            return self.ds_p1[idx % len(self.ds_p1)]
        else:
            return self.ds_p2[idx % len(self.ds_p2)]
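The mixing ratio can be sanity-checked in isolation: drawing from the same kind of seeded `random.Random(0)` stream that `MixedBatchDataset` uses, about 30% of 10,000 virtual items should come from the Phase-1 side.

```python
import random

rng = random.Random(0)   # same seeded stream the class uses
p1_frac = 0.30
n_p1 = sum(1 for _ in range(10_000) if rng.random() < p1_frac)
frac = n_p1 / 10_000     # empirical Phase-1 fraction
```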
499

# =============================================================================
# Composite evaluation score (mirrors run_smart_sweep.py)
# =============================================================================

def compute_composite_score(
    x_hat: np.ndarray,      # (N,) — restored signal
    x_clean: np.ndarray,    # (N,) — ground truth
    limited: np.ndarray,    # (N,) — limited signal (for GT residual)
    sr: int = SAMPLE_RATE,
) -> Dict[str, float]:
    """
    Compute the composite score from run_smart_sweep.py.

    cosine       — cosine similarity of the residual DCT spectra
    energy_lf_db — RMS(hat) − RMS(GT) in 20–500 Hz, in dB
    composite    — cosine × pen_lf^0.5 × pen_hf^0.2
    """
    eps = 1e-10

    gt_res = x_clean - limited   # ground truth residual
    est_res = x_hat - limited    # estimated residual

    # Normalise residuals for cosine sim
    gt_n = gt_res / (np.max(np.abs(gt_res)) + eps)
    est_n = est_res / (np.max(np.abs(est_res)) + eps)

    # Global cosine sim on DCT coefficients
    from scipy.fft import dct as scipy_dct
    G = scipy_dct(gt_n, type=2, norm='ortho')
    E = scipy_dct(est_n, type=2, norm='ortho')
    cosine = float(np.dot(G, E) / (np.linalg.norm(G) * np.linalg.norm(E) + eps))

    # LF band RMS (20–500 Hz) on ORIGINAL scale
    N = len(x_hat)
    k_cut = int(math.ceil(500.0 * 2.0 * N / sr))
    k_cut = max(1, min(k_cut, N))

    rms_lf_gt = np.sqrt(np.mean(scipy_dct(x_clean, type=2, norm='ortho')[:k_cut]**2) + eps)
    rms_lf_hat = np.sqrt(np.mean(scipy_dct(x_hat, type=2, norm='ortho')[:k_cut]**2) + eps)
    energy_lf_db = 20.0 * math.log10(rms_lf_hat / rms_lf_gt + eps)

    # HF band RMS (2k–20k Hz)
    k_hf_lo = int(math.ceil(2000.0 * 2.0 * N / sr))
    k_hf_hi = int(math.ceil(20000.0 * 2.0 * N / sr))
    k_hf_lo = max(1, min(k_hf_lo, N))
    k_hf_hi = max(k_hf_lo + 1, min(k_hf_hi, N))

    rms_hf_gt = np.sqrt(np.mean(scipy_dct(x_clean, type=2, norm='ortho')[k_hf_lo:k_hf_hi]**2) + eps)
    rms_hf_hat = np.sqrt(np.mean(scipy_dct(x_hat, type=2, norm='ortho')[k_hf_lo:k_hf_hi]**2) + eps)
    energy_hf_db = 20.0 * math.log10(rms_hf_hat / rms_hf_gt + eps)

    pen_lf = math.exp(min(0.0, energy_lf_db) / 6.0)
    pen_hf = math.exp(min(0.0, energy_hf_db) / 10.0)
    composite = cosine * (pen_lf ** 0.5) * (pen_hf ** 0.2)

    return {
        "cosine": cosine,
        "energy_lf_db": energy_lf_db,
        "energy_hf_db": energy_hf_db,
        "composite": composite,
    }
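Two pieces of the score above, worked by hand: DCT-II bin k of an N-sample signal sits near k·sr/(2N) Hz, so the 500 Hz LF cut with sr = N = 44100 lands at bin 1000; and a −6 dB LF energy deficit multiplies the composite by exp(−1)^0.5 via pen_lf.

```python
import math

# Bin index for the 500 Hz LF cut on a 1-second signal at 44.1 kHz.
sr, N = 44100, 44100
k_cut = max(1, min(int(math.ceil(500.0 * 2.0 * N / sr)), N))

# LF penalty for a −6 dB energy deficit (positive deficits are not penalised).
pen_lf = math.exp(min(0.0, -6.0) / 6.0)
```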
562

# =============================================================================
# Delta SDR metrics (global + multiband)
# =============================================================================

def _sdr_batch(ref: torch.Tensor, est: torch.Tensor,
               eps: float = 1e-10) -> torch.Tensor:
    """SDR in dB per sample. ref, est: (B, N) → (B,)."""
    return 10.0 * torch.log10(
        ref.pow(2).sum(-1) / ((ref - est).pow(2).sum(-1) + eps) + eps
    )


def _delta_sdr_batch(
    clean: torch.Tensor,     # (B, N) — GT
    limited: torch.Tensor,   # (B, N) — model input
    enhanced: torch.Tensor,  # (B, N) — model output
    eps: float = 1e-10,
) -> torch.Tensor:
    """ΔSDR = SDR(enhanced, clean) − SDR(limited, clean) per sample. (B,)"""
    return _sdr_batch(clean, enhanced, eps) - _sdr_batch(clean, limited, eps)
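The same ΔSDR arithmetic in plain numpy for a single signal: a limiter output sitting 6 dB down scores ≈6 dB SDR against the reference, a restoration at 90% amplitude scores 20 dB, so ΔSDR ≈ +14 dB.

```python
import numpy as np

def sdr_db(ref, est, eps=1e-10):
    # Single-signal version of _sdr_batch above.
    return 10.0 * np.log10(np.sum(ref ** 2) / (np.sum((ref - est) ** 2) + eps) + eps)

ref = np.array([1.0, -1.0, 1.0, -1.0])
limited = 0.5 * ref    # model input: 6 dB down → SDR ≈ 6.02 dB
enhanced = 0.9 * ref   # model output: SDR = 20 dB
delta_sdr = sdr_db(ref, enhanced) - sdr_db(ref, limited)
```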
584

def _bandpass_fft(x: torch.Tensor, sr: int,
                  f_lo: float, f_hi: float) -> torch.Tensor:
    """Zero-phase FFT bandpass filter on last dimension."""
    N = x.shape[-1]
    X = torch.fft.rfft(x, n=N)
    freqs = torch.fft.rfftfreq(N, d=1.0 / sr)
    X_filt = X * ((freqs >= f_lo) & (freqs < f_hi)).to(x.device)
    return torch.fft.irfft(X_filt, n=N)
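An equivalent numpy check of the FFT masking logic: a 100 Hz tone survives a 50–150 Hz band essentially untouched and is annihilated by a 200–300 Hz band (the tone falls on an exact rfft bin here, so there is no leakage).

```python
import numpy as np

def bandpass_fft(x, sr, f_lo, f_hi):
    # numpy analogue of _bandpass_fft: keep rfft bins with f_lo <= f < f_hi.
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return np.fft.irfft(X * ((freqs >= f_lo) & (freqs < f_hi)), n=len(x))

sr, N = 1000, 1000
tone = np.sin(2 * np.pi * 100.0 * np.arange(N) / sr)   # exactly bin 100
passed = bandpass_fft(tone, sr, 50.0, 150.0)
blocked = bandpass_fft(tone, sr, 200.0, 300.0)
```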
594

# Bands aligned with v13 diagnostic zones.
# The ML model processes only the LF band (0–lf_band_hz Hz, default 8000 Hz).
# Monitoring the 4–22 kHz "high" band of the MODEL output is meaningless
# (that content is handled by classical SPADE and never passes through the model).
# New bands focus on the sub-bass under-recovery problem identified in the
# v13 analysis: sub-bass (0–250 Hz) and bass (250–500 Hz) were −13 to −22 dB
# below GT while mid-frequency bins were already well-recovered.
_BANDS = {
    "sub_bass": (0.0, 250.0),      # Primary failure zone (v13 analysis)
    "bass":     (250.0, 500.0),    # lf_split_hz boundary zone
    "low_mid":  (500.0, 2000.0),   # Previously OK in v11 ML model
    "mid":      (2000.0, 8000.0),  # Upper LF band (model ceiling at inference)
}
609

def _multiband_delta_sdr_batch(
    clean: torch.Tensor,     # (B, N)
    limited: torch.Tensor,   # (B, N)
    enhanced: torch.Tensor,  # (B, N)
    sr: int = SAMPLE_RATE,
    eps: float = 1e-10,
) -> Dict[str, float]:
    """
    ΔSDR per frequency band. Returns a dict keyed dsdr_<band> for each band
    in _BANDS (dsdr_sub_bass, dsdr_bass, dsdr_low_mid, dsdr_mid).

    Diagnostic guide
    ----------------
    dsdr_mid ↑ but dsdr_sub_bass flat/neg → spectral bias: raise w_lf_coeff
    dsdr_mid < 0                          → transient smearing (g_fac too low,
                                            or w_transp too high)
    dsdr_global < 0                       → model degrading signal overall
    """
    results: Dict[str, float] = {}
    for band_name, (f_lo, f_hi) in _BANDS.items():
        c_b = _bandpass_fft(clean, sr, f_lo, f_hi)
        l_b = _bandpass_fft(limited, sr, f_lo, f_hi)
        e_b = _bandpass_fft(enhanced, sr, f_lo, f_hi)
        if c_b.pow(2).mean().item() < 1e-12:
            results[f"dsdr_{band_name}"] = 0.0
        else:
            results[f"dsdr_{band_name}"] = _delta_sdr_batch(
                c_b, l_b, e_b, eps).mean().item()
    return results
639

# =============================================================================
# Training loop
# =============================================================================

@dataclass
class TrainConfig:
    """Training hyperparameters."""
    # Directories
    drum_dir: str = "./Samples"
    mix_dir: str = ""            # required for Phase 2
    ckpt_dir: str = "./checkpoints"

    # Training phases
    phase: str = "both"          # "1", "2", or "both"
    epochs_p1: int = 50
    epochs_p2: int = 30

    # Optimisation
    batch_size: int = BATCH_SIZE
    lr_phase1: float = LR_PHASE1
    lr_phase2: float = LR_PHASE2
    weight_decay: float = WEIGHT_DECAY
    grad_clip: float = GRAD_CLIP

    # Mixed batching
    p1_mix_frac: float = PHASE1_MIX_FRAC

    # Virtual epoch size
    frames_per_epoch: int = 8000

    # Phase-2 mixed-epoch size
    frames_per_epoch_p2: int = 16000

    # Validation split
    val_frac: float = 0.15

    # Catastrophic forgetting threshold (Phase 2)
    forgetting_thr: float = FORGETTING_THRESHOLD_DB

    # Loss weights — Phase 1
    # w_stft kept low (0.05): when dominant, it creates a conflicting gradient
    # that locks λ_hf at maximum regardless of recovery quality.
    # w_lf_coeff is the primary LF signal (coefficient MSE, direct gradient).
    # w_reg penalises λ saturation to zero/max.
    loss_w_mask: float = 2.0
    loss_w_transp_p1: float = 0.1   # sparse Ir in Phase 1 (drums)
    loss_w_transp_p2: float = 1.0   # Ir = majority in Phase 2 (full mix)
    loss_w_stft: float = 0.05
    loss_w_lf_coeff: float = 2.0    # NEW primary LF loss
    loss_w_lf_energy: float = 0.5   # secondary LF loss
    loss_w_over: float = 0.3
    loss_w_reg: float = 5.0         # λ anti-saturation (stronger; wider lambda range)
    loss_w_sparsity: float = 0.5    # L1(z_thresh): force real sparsification
    loss_w_ds: float = 0.15         # deep supervision — guide gradient, don't dominate
                                    # (0.5 caused DS to be 3.8× primary losses → plateau)
    # g_fac floor penalty: prevents attenuative shortcut (g_fac collapsing toward 0)
    # [FIX] gmax_factor_range expanded to (0.5, 2.0); floor lowered accordingly.
    loss_w_gfac_floor: float = 3.0
    loss_gfac_floor: float = 0.5    # [FIX] must match new gmax_factor_range[0] = 0.5

    # Early stopping
    early_stop_patience: int = 15   # stop if val_loss doesn't improve for this many epochs

    # ── v13 integration ───────────────────────────────────────────────────
    # lf_band_hz: match the LR split applied by HybridSPADEInference at inference.
    # When > 0, FrameDataset applies an LP filter at this frequency before
    # feeding the frame to the model, eliminating the distribution mismatch
    # between training (full-bandwidth drums) and inference (0–lf_band_hz band).
    # Set to 0.0 to disable (legacy behaviour: train on full-bandwidth signal).
    lf_band_hz: float = 8000.0      # Hz — must match HybridSPADEInference.crossover_hz

    # apply_mask_dilation: apply _dilate_masks_soft with each sample's release_ms
    # so the training masks match the dilated masks used at inference.
    apply_mask_dilation: bool = True

    # loss_lf_cutoff_hz: frequency passed to SPADEUnrolledLoss.lf_cutoff_hz.
    # Controls which DCT bins count as "LF" for loss_lf_coeff and loss_lf_energy.
    # [FIX] Raised from 500 Hz → 2500 Hz. At 500 Hz, the primary LF losses
    # (w=2.0) were blind to the bass+low-mid band (500–2000 Hz), leaving only
    # the weak STFT loss (w=0.05) to cover that region — root cause of the
    # persistent mid regression (ΔSDR mid ≈ −2.0 dB across all epochs).
    # 2500 Hz covers sub-bass + bass + low-mids in the primary loss term.
    loss_lf_cutoff_hz: float = 2500.0   # Hz — [FIX] was 500.0

    # Device
    device: str = "cuda"

    # Resume
    resume: str = ""
    ckpt_phase1: str = ""

    # Logging
    log_every: int = 50   # steps
    val_every: int = 1    # epochs
    save_every: int = 5   # epochs
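To illustrate the loss_lf_cutoff_hz fix with a hypothetical 2048-sample window at 44.1 kHz (the actual window_length lives in UnrolledConfig): raising the cutoff from 500 Hz to 2500 Hz grows the DCT-bin coverage of the primary LF losses roughly five-fold.

```python
import math

sr, M = 44100, 2048   # M is a hypothetical window_length for illustration

def lf_bins(f_cut_hz: float) -> int:
    # Number of DCT bins below f_cut_hz for an M-sample frame at sr.
    return math.ceil(f_cut_hz * 2 * M / sr)

bins_500, bins_2500 = lf_bins(500.0), lf_bins(2500.0)
```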
736

def _make_loss_fn(tc: "TrainConfig", w_transp: float,
                  sample_rate: int, device: str) -> "SPADEUnrolledLoss":
    """
    Construct SPADEUnrolledLoss from TrainConfig.

    Centralises loss construction so Phase 1 and Phase 2 are guaranteed
    to use the same weights (except w_transp, which differs by design).
    Also threads tc.loss_lf_cutoff_hz to SPADEUnrolledLoss.lf_cutoff_hz
    so the loss targets the same frequency boundary that was applied to
    training data via lf_band_hz.
    """
    return SPADEUnrolledLoss(
        w_mask       = tc.loss_w_mask,
        w_transp     = w_transp,
        w_stft       = tc.loss_w_stft,
        w_lf_coeff   = tc.loss_w_lf_coeff,
        w_lf_energy  = tc.loss_w_lf_energy,
        w_over       = tc.loss_w_over,
        w_reg        = tc.loss_w_reg,
        w_sparsity   = tc.loss_w_sparsity,
        w_ds         = tc.loss_w_ds,
        w_gfac_floor = tc.loss_w_gfac_floor,
        gfac_floor   = tc.loss_gfac_floor,
        sample_rate  = sample_rate,
        lf_cutoff_hz = tc.loss_lf_cutoff_hz,
    ).to(device)
764

def _make_loaders(
    samples_tr: List[AudioSample],
    samples_val: List[AudioSample],
    cfg_model: UnrolledConfig,
    tc: TrainConfig,
    phase: int,
    samples_p1_tr: Optional[List[AudioSample]] = None,
) -> Tuple[DataLoader, DataLoader]:
    """Build train and validation DataLoaders for a given phase."""

    n_tr = tc.frames_per_epoch if phase == 1 else tc.frames_per_epoch_p2
    n_val = max(500, n_tr // 8)

    # v13: pass lf_band_hz and apply_mask_dilation so training data
    # matches the inference distribution (HybridSPADEInference LR split
    # and _dilate_masks_soft).
    ds_kwargs = dict(
        lf_band_hz = tc.lf_band_hz,
        apply_mask_dilation = tc.apply_mask_dilation,
    )

    ds_val = FrameDataset(samples_val, cfg_model, n_virtual=n_val, rng_seed=999,
                          **ds_kwargs)

    if phase == 1:
        ds_tr = FrameDataset(samples_tr, cfg_model, n_virtual=n_tr, rng_seed=0,
                             **ds_kwargs)
    else:
        # Phase 2: mixed batching
        if not samples_p1_tr:
            raise ValueError("Phase 2 requires Phase-1 training samples (mixed batching)")
        ds_p2 = FrameDataset(samples_tr, cfg_model, n_virtual=n_tr, rng_seed=1, **ds_kwargs)
        ds_p1 = FrameDataset(samples_p1_tr, cfg_model, n_virtual=n_tr//2, rng_seed=2, **ds_kwargs)
        ds_tr = MixedBatchDataset(ds_p1, ds_p2, p1_frac=tc.p1_mix_frac, n_virtual=n_tr)

    loader_tr = DataLoader(ds_tr, batch_size=tc.batch_size, shuffle=True,
                           num_workers=2, pin_memory=True, drop_last=True,
                           persistent_workers=True)  # [FIX] prevents SIGSEGV at worker re-spawn between epochs
    loader_val = DataLoader(ds_val, batch_size=tc.batch_size, shuffle=False,
                            num_workers=1, pin_memory=True,
                            persistent_workers=True)  # [FIX] idem
    return loader_tr, loader_val
807
+
808
+
809
+ def _diag_encoder_params(
810
+ model: SPADEUnrolled,
811
+ loader: DataLoader,
812
+ device: str,
813
+ epoch: int,
814
+ n_batches: int = 3,
815
+ ):
816
+ """
817
+ Stampa i valori mediani delle predizioni del ContextEncoder su un sottoinsieme
818
+ del validation set. Serve per diagnosticare se il modello sta imparando a
819
+ variare i parametri o se collassa su valori costanti.
820
+
821
+ Output atteso dopo convergenza:
822
+ lambda_lf ∈ [0.0001, 0.005] — soglia LF bassa per recupero aggressivo
823
+ lambda_hf ∈ [0.001, 0.010] — soglia HF più permissiva
824
+ delta_fac ∈ [0.8, 1.5] — adatta il threshold del limiter al frame
825
+ gmax_fac ∈ [1.0, 1.8] — cap guadagno proporzionale al transiente
826
+ """
827
+ model.eval()
828
+ all_params = []
829
+ with torch.no_grad():
830
+ for i, batch in enumerate(loader):
831
+ if i >= n_batches:
832
+ break
833
+ yc_w, ctx, _, _, _, _ = [b.to(device) for b in batch]
834
+ _, params, _, _ = model(yc_w, ctx,
835
+ torch.zeros_like(yc_w, dtype=torch.bool),
836
+ torch.zeros_like(yc_w, dtype=torch.bool),
837
+ torch.zeros_like(yc_w, dtype=torch.bool))
838
+ all_params.append(params.cpu())
839
+
840
+ if not all_params:
841
+ return
842
+
843
+ p = torch.cat(all_params, dim=0) # (N, 5)
844
+ med = p.median(dim=0).values
845
+ lo = p.quantile(0.1, dim=0)
846
+ hi = p.quantile(0.9, dim=0)
847
+ names = ["λ_lf", "λ_hf", "δ_fac", "g_fac", "ε_fac"]
848
+
849
+ print(f" [diag ep{epoch}] encoder params (median | p10–p90 | std):")
850
+ for n, m, l, h in zip(names, med, lo, hi):
851
+ std = p[:, names.index(n)].std().item()
852
+ collapsed = "⚠ COLLAPSED" if std < 1e-5 else ""
853
+ print(f" {n:6s}: {m:.5f} [{l:.5f} – {h:.5f}] std={std:.6f} {collapsed}")
854
+
855
+ # Anche: gradient norm medio delle ultime epoche
856
+ total_gnorm = 0.0
857
+ n_params = 0
858
+ for p_tensor in model.parameters():
859
+ if p_tensor.grad is not None:
860
+ total_gnorm += p_tensor.grad.norm().item() ** 2
861
+ n_params += 1
862
+ if n_params > 0:
863
+ print(f" grad_norm_rms: {math.sqrt(total_gnorm / n_params):.5f}")
864
+
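The collapse flag used above can be exercised in isolation; a minimal sketch with synthetic predictions (no project code assumed):

```python
import torch

# Column 0 is a "collapsed" parameter (constant output); column 1 is healthy.
params = torch.stack([torch.full((100,), 0.5), torch.rand(100)], dim=1)
stds = params.std(dim=0)
flags = [bool(s.item() < 1e-5) for s in stds]
print(flags)  # → [True, False]
```

A parameter whose batch-wise std is effectively zero means the encoder ignores its input and always emits the same value, which is exactly what the `⚠ COLLAPSED` marker reports.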
865
+
866
+ def _train_epoch(
867
+ model: SPADEUnrolled,
868
+ loader: DataLoader,
869
+ optimizer: torch.optim.Optimizer,
870
+ scheduler: object,
871
+ loss_fn: SPADEUnrolledLoss,
872
+ tc: TrainConfig,
873
+ epoch: int,
874
+ device: str,
875
+ ) -> Dict[str, float]:
876
+ model.train()
877
+ running = {k: 0.0 for k in ["total", "mask", "transp", "stft",
878
+ "lf_coeff", "lf_energy", "over", "reg",
879
+ "sparsity", "ds", "gfac_floor"]}
880
+ n_steps = 0
881
+ t0 = time.time()
882
+
883
+ for step, batch in enumerate(loader):
884
+ yc_w, ctx, x_clean, Ir, Icp, Icm = [b.to(device) for b in batch]
885
+
886
+ optimizer.zero_grad(set_to_none=True)
887
+
888
+ x_hat, params, z_thresh, x_hat_mid = model(yc_w, ctx, Ir, Icp, Icm)
889
+ loss, details = loss_fn(x_hat, x_clean, yc_w, Ir, Icp, Icm,
890
+ params=params, z_thresh=z_thresh, x_hat_mid=x_hat_mid)
891
+
892
+ loss.backward()
893
+ nn.utils.clip_grad_norm_(model.parameters(), tc.grad_clip)
894
+ optimizer.step()
895
+ if scheduler is not None:
896
+ scheduler.step()
897
+
898
+ for k, v in details.items():
899
+ running[k] += v
900
+ n_steps += 1
901
+
902
+ if step > 0 and step % tc.log_every == 0:
903
+ elapsed = time.time() - t0
904
+ sps = step * tc.batch_size / elapsed
905
+ r, n = running, n_steps
906
+ print(f" [Ep {epoch:3d} | {step:4d}/{len(loader)}] "
907
+ f"loss={r['total']/n:.4f} "
908
+ f"mask={r['mask']/n:.4f} "
909
+ f"lf_c={r['lf_coeff']/n:.4f} "
910
+ f"lf_e={r['lf_energy']/n:.4f} "
911
+ f"stft={r['stft']/n:.4f} "
912
+ f"spar={r['sparsity']/n:.4f} "
913
+ f"ds={r['ds']/n:.4f} "
914
+ f"gf={r['gfac_floor']/n:.5f} "
915
+ f"reg={r['reg']/n:.5f} "
916
+ f"{sps:.0f} sa/s")
917
+
918
+ return {k: v / max(n_steps, 1) for k, v in running.items()}
919
+
920
+
921
+ @torch.no_grad()
922
+ def _validate_epoch(
923
+ model: SPADEUnrolled,
924
+ loader: DataLoader,
925
+ loss_fn: SPADEUnrolledLoss,
926
+ device: str,
927
+ ) -> Dict[str, float]:
928
+ """
+ Validation loop.
+
+ Returns loss sub-components AND Delta SDR metrics (global + multiband):
+
+ Keys added beyond the standard loss dict
+ -----------------------------------------
+ gfac_floor : floor-penalty value — should converge toward 0 as g_fac ≥ floor
+ dsdr_global : ΔSDR averaged over all frames (dB)
+ > 0 → model improves fidelity over the limited input
+ < 0 → model HURTS the signal (regression / artefacts)
+ dsdr_sub_bass : ΔSDR in the sub-bass band — kick body, bass fundamentals
+ Flat/negative while dsdr_mid grows → spectral LF bias
+ → raise w_lf_coeff or increase lambda_lf range
+ dsdr_bass : ΔSDR in the bass band
+ dsdr_low_mid : ΔSDR in the low-mid band — snare body, tonal content
+ dsdr_mid : ΔSDR in the 2–8 kHz band (ceiling of the LF model) — transient
+ attack. Negative → model smearing attacks / ice-pick artefacts
+ → check g_fac floor, lower w_transp
+ cos_sim : DCT-domain cosine similarity between estimated and GT residual
+ """
947
+ model.eval()
948
+ running = {k: 0.0 for k in ["total", "mask", "transp", "stft",
949
+ "lf_coeff", "lf_energy", "over", "reg",
950
+ "sparsity", "ds", "gfac_floor"]}
951
+ dsdr_running = {"dsdr_global": 0.0,
952
+ "dsdr_sub_bass": 0.0, "dsdr_bass": 0.0,
953
+ "dsdr_low_mid": 0.0, "dsdr_mid": 0.0,
954
+ "cos_sim": 0.0}
955
+ n_steps = 0
956
+
957
+ for batch in loader:
958
+ yc_w, ctx, x_clean, Ir, Icp, Icm = [b.to(device) for b in batch]
959
+ x_hat, params_v, z_thresh_v, xm_v = model(yc_w, ctx, Ir, Icp, Icm)
960
+ _, details = loss_fn(x_hat, x_clean, yc_w, Ir, Icp, Icm,
961
+ params=params_v, z_thresh=z_thresh_v, x_hat_mid=xm_v)
962
+ for k, v in details.items():
963
+ running[k] += v
964
+
965
+ # ── Delta SDR — global ────────────────────────────────────────────
966
+ dsdr_running["dsdr_global"] += _delta_sdr_batch(
967
+ x_clean, yc_w, x_hat).mean().item()
968
+
969
+ # ── Delta SDR — multiband ─────────────────────────────────────────
970
+ for k, v in _multiband_delta_sdr_batch(x_clean, yc_w, x_hat,
971
+ sr=SAMPLE_RATE).items():
972
+ dsdr_running[k] += v
973
+
974
+ # ── Cosine similarity (residual, DCT domain) ─────────────────────
975
+ # Mirrors the cosine_sim metric in run_smart_sweep.py:
976
+ # how well the estimated residual matches the GT residual in shape.
977
+ # 1.0 = perfect; 0.0 = orthogonal; target: > 0.20 by ep10.
978
+ gt_res = (x_clean - yc_w).cpu().float() # (B, M)
979
+ est_res = (x_hat - yc_w).cpu().float()
980
+ eps_cos = 1e-10
981
+ from scipy.fft import dct as _dct_np
982
+ gt_np = gt_res.numpy()
983
+ est_np = est_res.numpy()
984
+ # Vectorised DCT over rows (scipy.fft.dct handles 2-D input via axis)
+ G = _dct_np(gt_np, type=2, norm="ortho", axis=-1)
+ E = _dct_np(est_np, type=2, norm="ortho", axis=-1)
987
+ cos_batch = (G * E).sum(-1) / (
988
+ np.sqrt((G**2).sum(-1)) * np.sqrt((E**2).sum(-1)) + eps_cos)
989
+ dsdr_running["cos_sim"] += float(cos_batch.mean())
990
+
991
+ n_steps += 1
992
+
993
+ metrics = {k: v / max(n_steps, 1) for k, v in running.items()}
994
+ metrics.update({k: v / max(n_steps, 1) for k, v in dsdr_running.items()})
995
+ return metrics
996
+
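The ΔSDR helpers (`_delta_sdr_batch`, `_multiband_delta_sdr_batch`) are defined elsewhere in the file; as a hedged sketch of the metric itself, assuming the standard definition described in the docstrings above:

```python
import torch

def delta_sdr(x_clean: torch.Tensor, y_lim: torch.Tensor,
              x_hat: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    """dSDR = SDR(clean, restored) - SDR(clean, limited), per item, in dB."""
    def sdr(ref, est):
        num = (ref ** 2).sum(dim=-1)
        den = ((ref - est) ** 2).sum(dim=-1) + eps
        return 10.0 * torch.log10(num / den + eps)
    return sdr(x_clean, x_hat) - sdr(x_clean, y_lim)

x = torch.randn(4, 1024)
y = 0.5 * x                      # crude stand-in for a limited input
better = 0.9 * x                 # estimate closer to the clean reference
print(bool((delta_sdr(x, y, better) > 0).all()))  # → True
```

A positive value means the restored signal sits closer (in energy-ratio terms) to the clean reference than the limited input did, matching the `> 0 → model improves fidelity` reading used in the epoch logs.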
997
+
998
+ def _save_checkpoint(model, optimizer, epoch, val_loss, path: Path, extra=None):
999
+ state = {
1000
+ "epoch": epoch,
1001
+ "val_loss": val_loss,
1002
+ "model": model.state_dict(),
1003
+ "optimizer": optimizer.state_dict(),
1004
+ "cfg": asdict(model.cfg),
1005
+ }
1006
+ if extra:
1007
+ state.update(extra)
1008
+ torch.save(state, path)
1009
+ print(f" ✓ Checkpoint saved → {path} (val_loss={val_loss:.4f})")
1010
+
1011
+
1012
+ def _load_checkpoint(model, optimizer, path: Path, device: str):
1013
+ ckpt = torch.load(path, map_location=device)
1014
+ model.load_state_dict(ckpt["model"])
1015
+ if optimizer is not None and "optimizer" in ckpt:
1016
+ optimizer.load_state_dict(ckpt["optimizer"])
1017
+ return ckpt.get("epoch", 0), ckpt.get("val_loss", float("inf"))
1018
+
1019
+
1020
+ # =============================================================================
1021
+ # Phase 1 training
1022
+ # =============================================================================
1023
+
1024
+ def train_phase1(
1025
+ model: SPADEUnrolled,
1026
+ tc: TrainConfig,
1027
+ device: str,
1028
+ ) -> Path:
1029
+ """
1030
+ Train Phase 1 on isolated drum samples + pink noise.
1031
+ Returns path to best checkpoint.
1032
+ """
1033
+ print("\n" + "="*70)
1034
+ print("PHASE 1 — Isolated drum samples + pink noise")
1035
+ print("="*70)
1036
+
1037
+ drum_dir = Path(tc.drum_dir)
1038
+ if not drum_dir.exists():
1039
+ raise FileNotFoundError(f"Drum directory not found: {drum_dir}")
1040
+
1041
+ rng = np.random.default_rng(42)
1042
+ print(f" Loading drum corpus from {drum_dir} …")
1043
+ all_samples = build_drum_corpus(drum_dir, rng, augment=True)
1044
+
1045
+ if not all_samples:
1046
+ raise RuntimeError("No drum samples found — check --drum-dir path")
1047
+
1048
+ print(f" Total samples: {len(all_samples)}")
1049
+
1050
+ # Train / val split (by file, not frame)
1051
+ n_val = max(1, int(len(all_samples) * tc.val_frac))
1052
+ rng.shuffle(all_samples) # type: ignore
1053
+ samples_val = all_samples[:n_val]
1054
+ samples_tr = all_samples[n_val:]
1055
+ print(f" Train: {len(samples_tr)} Val: {len(samples_val)}")
1056
+
1057
+ cfg_model = model.cfg
1058
+ loss_fn = _make_loss_fn(tc, w_transp=tc.loss_w_transp_p1,
1059
+ sample_rate=cfg_model.sample_rate, device=device)
1060
+
1061
+ loader_tr, loader_val = _make_loaders(samples_tr, samples_val, cfg_model, tc, phase=1)
1062
+
1063
+ optimizer = optim.AdamW(model.parameters(), lr=tc.lr_phase1,
1064
+ weight_decay=tc.weight_decay, betas=(0.9, 0.999))
1065
+ total_steps = tc.epochs_p1 * len(loader_tr)
1066
+ scheduler = optim.lr_scheduler.OneCycleLR(
1067
+ optimizer, max_lr=tc.lr_phase1, total_steps=total_steps,
1068
+ pct_start=0.1, anneal_strategy="cos",
1069
+ )
1070
+
1071
+ ckpt_dir = Path(tc.ckpt_dir)
1072
+ ckpt_dir.mkdir(parents=True, exist_ok=True)
1073
+
1074
+ # Resume support
1075
+ start_epoch = 1
1076
+ best_val = float("inf")
1077
+ if tc.resume:
1078
+ ep, val = _load_checkpoint(model, optimizer, Path(tc.resume), device)
1079
+ start_epoch = ep + 1
1080
+ best_val = val
1081
+ print(f" Resumed from epoch {ep} (val={val:.4f})")
1082
+
1083
+ best_ckpt = ckpt_dir / "phase1_best.pt"
1084
+ history = []
1085
+ no_improve = 0 # early stopping counter
1086
+
1087
+ for epoch in range(start_epoch, tc.epochs_p1 + 1):
1088
+ t_ep = time.time()
1089
+ tr_m = _train_epoch(model, loader_tr, optimizer, scheduler, loss_fn, tc, epoch, device)
1090
+ val_m = _validate_epoch(model, loader_val, loss_fn, device)
1091
+
1092
+ elapsed = time.time() - t_ep
1093
+
1094
+ dsdr_g = val_m.get("dsdr_global", 0.0)
1095
+ dsdr_sb = val_m.get("dsdr_sub_bass", 0.0)
1096
+ dsdr_ba = val_m.get("dsdr_bass", 0.0)
1097
+ dsdr_lm = val_m.get("dsdr_low_mid", 0.0)
1098
+ dsdr_hi = val_m.get("dsdr_mid", 0.0) # "mid" = 2-8kHz = ceiling of LF model
1099
+ gf = val_m.get("gfac_floor", 0.0)
1100
+
1101
+ # Actionable flags
1102
+ flag = ""
1103
+ if dsdr_hi > 0.5 and dsdr_sb < 0.1:
1104
+ flag = " ⚠ SPECTRAL BIAS ↑mid/flat-sub-bass → raise w_lf_coeff or w_lf_energy"
1105
+ elif dsdr_hi < -0.5:
1106
+ flag = " ⚠ mid regression → g_fac too low or w_transp too high"
1107
+ elif dsdr_g < 0:
1108
+ flag = " ⚠ Global ΔSDR<0 → model degrading signal"
1109
+ if gf > 1e-4:
1110
+ flag += f" ⚠ gfac_floor={gf:.5f} > 0 → g_fac still below floor"
1111
+
1112
+ print(f"Epoch {epoch:3d}/{tc.epochs_p1} "
1113
+ f"tr={tr_m['total']:.4f} val={val_m['total']:.4f} "
1114
+ f"[mask={val_m['mask']:.4f} lf_c={val_m['lf_coeff']:.4f} "
1115
+ f"lf_e={val_m['lf_energy']:.4f} stft={val_m['stft']:.4f} "
1116
+ f"gf={gf:.5f} reg={val_m['reg']:.5f}] "
1117
+ f"({elapsed:.1f}s)")
1118
+ print(f" ΔSDR global={dsdr_g:+.2f}dB "
1119
+ f"sub_bass={dsdr_sb:+.2f}dB bass={dsdr_ba:+.2f}dB "
1120
+ f"low_mid={dsdr_lm:+.2f}dB mid={dsdr_hi:+.2f}dB "
1121
+ f"cos_sim={val_m.get('cos_sim', 0.0):.3f}{flag}")
1122
+
1123
+ # ── Encoder parameter diagnostics ────────────────────────────────
1124
+ if epoch % 2 == 0 or epoch == 1:
1125
+ _diag_encoder_params(model, loader_val, device, epoch)
1126
+
1127
+ history.append({"epoch": epoch, "train": tr_m, "val": val_m,
1128
+ "dsdr_global": dsdr_g, "dsdr_sub_bass": dsdr_sb,
1129
+ "dsdr_bass": dsdr_ba, "dsdr_low_mid": dsdr_lm,
1130
+ "dsdr_mid": dsdr_hi,
1131
+ "cos_sim": val_m.get("cos_sim", 0.0)})
1132
+
1133
+ if val_m["total"] < best_val:
1134
+ best_val = val_m["total"]
1135
+ no_improve = 0
1136
+ _save_checkpoint(model, optimizer, epoch, best_val, best_ckpt,
1137
+ extra={"phase": 1, "history": history})
1138
+ else:
1139
+ no_improve += 1
1140
+
1141
+ if epoch % tc.save_every == 0:
1142
+ ep_ckpt = ckpt_dir / f"phase1_epoch{epoch:03d}.pt"
1143
+ _save_checkpoint(model, optimizer, epoch, val_m["total"], ep_ckpt)
1144
+
1145
+ # ── Early stopping ────────────────────────────────────────────────
1146
+ if tc.early_stop_patience > 0 and no_improve >= tc.early_stop_patience:
1147
+ print(f"\n ⏹ Early stopping at epoch {epoch} "
1148
+ f"(no improvement for {no_improve} epochs).")
1149
+ break
1150
+
1151
+ print(f"\n Phase 1 complete. Best val loss: {best_val:.4f}")
1152
+ print(f" Best checkpoint: {best_ckpt}")
1153
+ return best_ckpt
1154
+
1155
+
1156
+ # =============================================================================
1157
+ # Phase 2 training
1158
+ # =============================================================================
1159
+
1160
+ def train_phase2(
1161
+ model: SPADEUnrolled,
1162
+ tc: TrainConfig,
1163
+ device: str,
1164
+ p1_ckpt: Optional[Path] = None,
1165
+ samples_p1_tr: Optional[List[AudioSample]] = None,
1166
+ ) -> Path:
1167
+ """
1168
+ Train Phase 2 on full-mix material with Phase-1 mixed batching.
1169
+
1170
+ If p1_ckpt is given, load Phase-1 weights first.
1171
+ If samples_p1_tr is given, include them in mixed batching.
1172
+ """
1173
+ print("\n" + "="*70)
1174
+ print("PHASE 2 — Full mix (Strategy A) + mixed batching")
1175
+ print("="*70)
1176
+
1177
+ # ── Load Phase-1 weights ──────────────────────────────────────────────
1178
+ if p1_ckpt and p1_ckpt.exists():
1179
+ ep, val = _load_checkpoint(model, None, p1_ckpt, device)
1180
+ print(f" Loaded Phase-1 checkpoint: {p1_ckpt} (epoch={ep}, val={val:.4f})")
1181
+ elif tc.ckpt_phase1:
1182
+ ep, val = _load_checkpoint(model, None, Path(tc.ckpt_phase1), device)
1183
+ print(f" Loaded Phase-1 checkpoint: {tc.ckpt_phase1} (epoch={ep}, val={val:.4f})")
1184
+ else:
1185
+ print(" [WARNING] No Phase-1 checkpoint provided — training from scratch")
1186
+
1187
+ # ── Load full-mix corpus ──────────────────────────────────────────────
1188
+ mix_dir = Path(tc.mix_dir) if tc.mix_dir else None
1189
+ if mix_dir is None or not mix_dir.exists():
1190
+ raise FileNotFoundError(
1191
+ f"Full-mix directory not found: {tc.mix_dir}\n"
1192
+ "Pass --mix-dir /path/to/full/mix/files (Strategy A: pre-limiter stems)"
1193
+ )
1194
+
1195
+ rng = np.random.default_rng(123)
1196
+ print(f" Loading full-mix corpus from {mix_dir} …")
1197
+ all_mix = build_fullmix_corpus(mix_dir, rng, augment=True)
1198
+
1199
+ if not all_mix:
1200
+ raise RuntimeError("No full-mix files found — check --mix-dir path")
1201
+
1202
+ print(f" Full-mix samples: {len(all_mix)}")
1203
+
1204
+ n_val = max(1, int(len(all_mix) * tc.val_frac))
1205
+ rng.shuffle(all_mix) # type: ignore
1206
+ mix_val = all_mix[:n_val]
1207
+ mix_tr = all_mix[n_val:]
1208
+
1209
+ # ── Also load Phase-1 drum corpus for mixed batching ─────────────────
1210
+ if samples_p1_tr is None:
1211
+ drum_dir = Path(tc.drum_dir)
1212
+ if drum_dir.exists():
1213
+ p1_rng = np.random.default_rng(42)
1214
+ p1_all = build_drum_corpus(drum_dir, p1_rng, augment=True)
1215
+ n_val_p1 = max(1, int(len(p1_all) * tc.val_frac))
1216
+ p1_rng.shuffle(p1_all) # type: ignore
1217
+ samples_p1_val = p1_all[:n_val_p1]
1218
+ samples_p1_tr = p1_all[n_val_p1:]
1219
+ print(f" Phase-1 drum samples for mixed batching: {len(samples_p1_tr)}")
1220
+ else:
1221
+ print(" [WARNING] No drum directory for mixed batching — Phase-2 only training")
1222
+ samples_p1_tr = mix_tr[:max(1, len(mix_tr)//5)] # fallback: subset of mix
1223
+
1224
+ cfg_model = model.cfg
1225
+
1226
+ # Phase-2 uses higher transparency weight (Ir = majority in full mix).
1227
+ # _make_loss_fn also threads tc.loss_lf_cutoff_hz so the LF coefficient
1228
+ # loss targets the same frequency boundary as the training data LR split.
1229
+ # NOTE: w_lf_coeff is intentionally kept equal to Phase-1 (2.0) — the
1230
+ # sub-bass under-recovery problem requires strong LF gradient in both phases.
1231
+ loss_fn_p2 = _make_loss_fn(tc, w_transp=tc.loss_w_transp_p2,
1232
+ sample_rate=cfg_model.sample_rate, device=device)
1233
+
1234
+ # Phase-1 loss function (used only for the catastrophic-forgetting probe
1235
+ # on the Phase-1 validation set — must match Phase-1 training exactly).
1236
+ loss_fn_p1 = _make_loss_fn(tc, w_transp=tc.loss_w_transp_p1,
1237
+ sample_rate=cfg_model.sample_rate, device=device)
1238
+
1239
+ loader_tr, loader_val_p2 = _make_loaders(
1240
+ mix_tr, mix_val, cfg_model, tc, phase=2,
1241
+ samples_p1_tr=samples_p1_tr,
1242
+ )
1243
+
1244
+ # Phase-1 validation loader (forgetting monitor).
1245
+ # ds_kwargs ensures masks + LR split match inference — same as Phase-1
1246
+ # training datasets, so the forgetting probe is on a comparable distribution.
1247
+ _p1_val_samples = p1_all[:n_val_p1] if 'p1_all' in locals() else mix_val
1248
+ ds_val_p1 = FrameDataset(
1249
+ _p1_val_samples, cfg_model, n_virtual=800, rng_seed=888,
1250
+ **ds_kwargs,
1251
+ )
1252
+ loader_val_p1 = DataLoader(ds_val_p1, batch_size=tc.batch_size,
1253
+ shuffle=False, num_workers=0) # [FIX] single-thread: only 11 samples, avoids shared-mem SIGSEGV
1254
+
1255
+ optimizer = optim.AdamW(model.parameters(), lr=tc.lr_phase2,
1256
+ weight_decay=tc.weight_decay, betas=(0.9, 0.999))
1257
+ total_steps = tc.epochs_p2 * len(loader_tr)
1258
+ scheduler = optim.lr_scheduler.OneCycleLR(
1259
+ optimizer, max_lr=tc.lr_phase2, total_steps=total_steps,
1260
+ pct_start=0.05, anneal_strategy="cos",
1261
+ )
1262
+
1263
+ ckpt_dir = Path(tc.ckpt_dir)
1264
+ best_ckpt = ckpt_dir / "phase2_best.pt"
1265
+ best_val = float("inf")
1266
+ p1_val_baseline = None # set on first epoch
1267
+ history = []
1268
+
1269
+ for epoch in range(1, tc.epochs_p2 + 1):
1270
+ t_ep = time.time()
1271
+ tr_m = _train_epoch(model, loader_tr, optimizer, scheduler,
1272
+ loss_fn_p2, tc, epoch, device)
1273
+
1274
+ # Validate on Phase-2 val set
1275
+ val_m_p2 = _validate_epoch(model, loader_val_p2, loss_fn_p2, device)
1276
+
1277
+ # Monitor Phase-1 forgetting
1278
+ val_m_p1 = _validate_epoch(model, loader_val_p1, loss_fn_p1, device)
1279
+ if p1_val_baseline is None:
1280
+ p1_val_baseline = val_m_p1["total"]
1281
+
1282
+ forgetting = val_m_p1["total"] - p1_val_baseline
1283
+ elapsed = time.time() - t_ep
1284
+
1285
+ dsdr_g = val_m_p2.get("dsdr_global", 0.0)
1286
+ dsdr_sb = val_m_p2.get("dsdr_sub_bass", 0.0)
1287
+ dsdr_ba = val_m_p2.get("dsdr_bass", 0.0)
1288
+ dsdr_lm = val_m_p2.get("dsdr_low_mid", 0.0)
1289
+ dsdr_hi = val_m_p2.get("dsdr_mid", 0.0) # 2-8kHz (LF model ceiling)
1290
+ dsdr_g_p1 = val_m_p1.get("dsdr_global", 0.0)
1291
+ dsdr_sb_p1 = val_m_p1.get("dsdr_sub_bass", 0.0) # key forgetting indicator
1292
+ gf = val_m_p2.get("gfac_floor", 0.0)
1293
+ cos_p2 = val_m_p2.get("cos_sim", 0.0)
1294
+ cos_p1 = val_m_p1.get("cos_sim", 0.0)
1295
+
1296
+ flag = ""
1297
+ if dsdr_hi > 0.5 and dsdr_sb < 0.1:
1298
+ flag = " ⚠ SPECTRAL BIAS ↑mid/flat-sub-bass → raise w_lf_coeff or w_lf_energy"
1299
+ elif dsdr_hi < -0.5:
1300
+ flag = " ⚠ mid regression → g_fac too low or w_transp too high"
1301
+ elif dsdr_g < 0:
1302
+ flag = " ⚠ Global ΔSDR<0 → model degrading signal"
1303
+ if gf > 1e-4:
1304
+ flag += f" ⚠ gfac_floor={gf:.5f}>0"
1305
+ if dsdr_sb_p1 < dsdr_sb - 1.5:
1306
+ flag += " ⚠ sub-bass forgetting P1 vs P2 >1.5dB"
1307
+
1308
+ print(f"Epoch {epoch:3d}/{tc.epochs_p2} "
1309
+ f"tr={tr_m['total']:.4f} "
1310
+ f"val_p2={val_m_p2['total']:.4f} "
1311
+ f"val_p1={val_m_p1['total']:.4f} "
1312
+ f"forgetting={forgetting:+.4f} "
1313
+ f"gf={gf:.5f} ({elapsed:.1f}s)")
1314
+ print(f" ΔSDR(P2) global={dsdr_g:+.2f}dB "
1315
+ f"sub_bass={dsdr_sb:+.2f}dB bass={dsdr_ba:+.2f}dB "
1316
+ f"low_mid={dsdr_lm:+.2f}dB mid={dsdr_hi:+.2f}dB "
1317
+ f"cos={cos_p2:.3f}{flag}")
1318
+ print(f" ΔSDR(P1) global={dsdr_g_p1:+.2f}dB "
1319
+ f"sub_bass={dsdr_sb_p1:+.2f}dB cos={cos_p1:.3f} [forgetting probe]")
1320
+
1321
+ if forgetting > tc.forgetting_thr:
1322
+ print(f" ⚠ Catastrophic forgetting detected "
1323
+ f"(Δ={forgetting:.4f} > {tc.forgetting_thr}) "
1324
+ f"Consider increasing p1_mix_frac or reducing lr")
1325
+
1326
+ history.append({
1327
+ "epoch": epoch,
1328
+ "train": tr_m,
1329
+ "val_p2": val_m_p2,
1330
+ "val_p1": val_m_p1,
1331
+ "forgetting": forgetting,
1332
+ "dsdr_global": dsdr_g,
1333
+ "dsdr_sub_bass": dsdr_sb,
1334
+ "dsdr_bass": dsdr_ba,
1335
+ "dsdr_low_mid": dsdr_lm,
1336
+ "dsdr_mid": dsdr_hi,
1337
+ "cos_sim_p2": cos_p2,
1338
+ "dsdr_global_p1": dsdr_g_p1,
1339
+ "dsdr_sub_bass_p1": dsdr_sb_p1,
1340
+ "cos_sim_p1": cos_p1,
1341
+ })
1342
+
1343
+ if val_m_p2["total"] < best_val:
1344
+ best_val = val_m_p2["total"]
1345
+ _save_checkpoint(model, optimizer, epoch, best_val, best_ckpt,
1346
+ extra={"phase": 2, "forgetting": forgetting,
1347
+ "history": history})
1348
+
1349
+ if epoch % tc.save_every == 0:
1350
+ ep_ckpt = ckpt_dir / f"phase2_epoch{epoch:03d}.pt"
1351
+ _save_checkpoint(model, optimizer, epoch, val_m_p2["total"], ep_ckpt)
1352
+
1353
+ print(f"\n Phase 2 complete. Best P2 val loss: {best_val:.4f}")
1354
+ print(f" Best checkpoint: {best_ckpt}")
1355
+ return best_ckpt
1356
+
1357
+
1358
+ # =============================================================================
1359
+ # CLI
1360
+ # =============================================================================
1361
+
1362
+ def _build_parser() -> argparse.ArgumentParser:
1363
+ p = argparse.ArgumentParser(
1364
+ description="Train SPADE-Unrolled (two-phase curriculum)",
1365
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter,
1366
+ )
1367
+ p.add_argument("--phase", choices=["1", "2", "both"], default="both",
1368
+ help="Training phase to run")
1369
+ p.add_argument("--epochs-p1", type=int, default=50, dest="epochs_p1")
1370
+ p.add_argument("--epochs-p2", type=int, default=30, dest="epochs_p2")
1371
+ p.add_argument("--drum-dir", type=str, default="./Samples", dest="drum_dir",
1372
+ help="Root directory with Kicks / Snares / Perc / Tops subdirs")
1373
+ p.add_argument("--mix-dir", type=str, default="", dest="mix_dir",
1374
+ help="Directory with full-mix WAV/FLAC files (Phase 2)")
1375
+ p.add_argument("--ckpt-dir", type=str, default="./checkpoints", dest="ckpt_dir")
1376
+ p.add_argument("--ckpt-phase1", type=str, default="", dest="ckpt_phase1",
1377
+ help="Phase-1 checkpoint to load at start of Phase 2")
1378
+ p.add_argument("--resume", type=str, default="",
1379
+ help="Resume from checkpoint path")
1380
+ p.add_argument("--batch-size", type=int, default=BATCH_SIZE, dest="batch_size")
1381
+ p.add_argument("--lr-p1", type=float, default=LR_PHASE1, dest="lr_phase1")
1382
+ p.add_argument("--lr-p2", type=float, default=LR_PHASE2, dest="lr_phase2")
1383
+ p.add_argument("--p1-mix-frac", type=float, default=PHASE1_MIX_FRAC, dest="p1_mix_frac",
1384
+ help="Fraction of Phase-1 samples in each Phase-2 batch")
1385
+ p.add_argument("--device", type=str, default="cuda",
1386
+ help="PyTorch device: 'cuda', 'cuda:0', 'cpu', 'mps'")
1387
+ p.add_argument("--window", type=int, default=2048,
1388
+ help="WOLA window length (samples). Default: 2048 (sweep rank-1)")
1389
+ p.add_argument("--hop", type=int, default=512,
1390
+ help="WOLA hop length (samples). Default: 512 (sweep rank-1)")
1391
+ p.add_argument("--k-unroll", type=int, default=4, dest="k_unroll",
1392
+ help="Number of unrolled ADMM layers")
1393
+ p.add_argument("--k-context", type=int, default=8, dest="k_context",
1394
+ help="Number of context frames for the GRU encoder")
1395
+ # ── Base SPADE parameters (sweep optima, encoder predicts multipliers of these) ─
1396
+ p.add_argument("--base-delta-db", type=float, default=3.5, dest="base_delta_db",
1397
+ help="Base delta_db (sweep rank-1=3.5, rank-2=2.25, rank-3=2.75)")
1398
+ p.add_argument("--base-max-gain-db", type=float, default=6.0, dest="base_max_gain_db",
1399
+ help="Base max_gain_db. Default 6.0: g_fac=0.75→4.5dB (Phase-1 optimum). "
1400
+ "Sweep rank-1=9.0 led to g_fac collapsing to lower bound.")
1401
+ p.add_argument("--base-eps", type=float, default=0.05, dest="base_eps",
1402
+ help="Base eps (sweep rank-1=0.05, rank-2=0.05, rank-3=0.1)")
1403
+ p.add_argument("--lf-delta-ratio", type=float, default=0.286, dest="lf_delta_ratio",
1404
+ help="lf_delta_db / delta_db ratio for lambda_lf init (rank-1: 1.0/3.5≈0.286)")
1405
+ p.add_argument("--early-stop", type=int, default=15, dest="early_stop_patience",
1406
+ help="Early stopping patience (epochs without val improvement). 0 = disabled.")
1407
+ # ── v13 integration ───────────────────────────────────────────────────────
1408
+ p.add_argument("--lf-band-hz", type=float, default=8000.0, dest="lf_band_hz",
1409
+ help="[v13] LP-filter training frames to this frequency (Hz) before "
1410
+ "feeding to the model. Must match HybridSPADEInference.crossover_hz "
1411
+ "to eliminate train/inference distribution mismatch. "
1412
+ "0 = disabled (full-bandwidth, legacy v12 behaviour).")
1413
+ p.add_argument("--no-mask-dilation", action="store_true", dest="no_mask_dilation",
1414
+ help="[v13] Disable mask dilation with sample release_ms in FrameDataset. "
1415
+ "By default dilation is ON to match the dilated masks used at inference.")
1416
+ p.add_argument("--loss-lf-cutoff", type=float, default=2500.0, dest="loss_lf_cutoff_hz",
1417
+ help="[v13] Frequency (Hz) up to which DCT bins count as LF for "
1418
+ "loss_lf_coeff and loss_lf_energy. "
1419
+ "[FIX] Default raised 500→2500 Hz: at 500 Hz the primary LF losses "
1420
+ "were blind to the 500–2000 Hz mid band, causing persistent mid regression. "
1421
+ "2500 Hz covers sub-bass+bass+low-mids in the high-weight loss term (w=2.0).")
1422
+ return p
1423
+
1424
+
1425
+ def main():
1426
+ args = _build_parser().parse_args()
1427
+
1428
+ # ── Model config ──────────────────────────────────────────────────────
1429
+ cfg = UnrolledConfig(
1430
+ window_length=args.window,
1431
+ hop_length=args.hop,
1432
+ K_unroll=args.k_unroll,
1433
+ K_context=args.k_context,
1434
+ base_delta_db=args.base_delta_db,
1435
+ base_max_gain_db=args.base_max_gain_db,
1436
+ base_eps=args.base_eps,
1437
+ lf_delta_ratio=args.lf_delta_ratio,
1438
+ )
1439
+ model = build_model(cfg)
1440
+ device = args.device
1441
+
1442
+ # Check device availability
1443
+ if device.startswith("cuda") and not torch.cuda.is_available():
1444
+ print(" [WARNING] CUDA not available — falling back to CPU")
1445
+ device = "cpu"
1446
+ model = model.to(device)
1447
+
1448
+ # ── Training config ───────────────────────────────────────────────────
1449
+ tc = TrainConfig(
1450
+ drum_dir = args.drum_dir,
1451
+ mix_dir = args.mix_dir,
1452
+ ckpt_dir = args.ckpt_dir,
1453
+ ckpt_phase1 = args.ckpt_phase1,
1454
+ phase = args.phase,
1455
+ epochs_p1 = args.epochs_p1,
1456
+ epochs_p2 = args.epochs_p2,
1457
+ batch_size = args.batch_size,
1458
+ lr_phase1 = args.lr_phase1,
1459
+ lr_phase2 = args.lr_phase2,
1460
+ p1_mix_frac = args.p1_mix_frac,
1461
+ device = device,
1462
+ resume = args.resume,
1463
+ early_stop_patience = args.early_stop_patience,
1464
+ # v13 integration
1465
+ lf_band_hz = args.lf_band_hz,
1466
+ apply_mask_dilation = not args.no_mask_dilation,
1467
+ loss_lf_cutoff_hz = args.loss_lf_cutoff_hz,
1468
+ )
1469
+
1470
+ # ── Run phases ────────────────────────────────────────────────────────
1471
+ p1_ckpt = None
1472
+ p1_tr_samples = None
1473
+
1474
+ if args.phase in ("1", "both"):
1475
+ p1_ckpt = train_phase1(model, tc, device)
1476
+
1477
+ if args.phase in ("2", "both"):
1478
+ train_phase2(model, tc, device, p1_ckpt=p1_ckpt,
1479
+ samples_p1_tr=p1_tr_samples)
1480
+
1481
+ print("\n ✓ Training complete.")
1482
+
1483
+
1484
+ if __name__ == "__main__":
1485
+ main()
train_transient_net.py ADDED
@@ -0,0 +1,1509 @@
1
+ """
2
+ train_transient_net.py — Two-phase training for TransientNet
3
+ ===============================================================
4
+
5
+ Drop-in replacement for train_spade_unrolled.py.
6
+ Reuses the same corpus pipeline (drum samples + full mix), same synthetic
7
+ brickwall limiter, same evaluation metrics — but trains TransientNet
8
+ (direct residual prediction) instead of SPADE-Unrolled.
9
+
10
+ What changed vs train_spade_unrolled.py
11
+ ---------------------------------------
12
+ ✗ NO binary masks (Ir, Icp, Icm) — the limiter has no "reliable" samples
13
+ ✗ NO SPADE imports — no spade_declip_v13, no _compute_masks, no _dilate_masks
14
+ ✗ NO conflicting loss terms — 3 clean terms vs 9 with competing gradients
15
+ ✗ NO soft-thresholding dead zones — gradients flow freely through all layers
16
+ ✗ NO g_fac collapse — no gain parameter to get stuck at floor
17
+
18
+ ✓ Same corpus pipeline (drum samples + pink noise + full mix)
19
+ ✓ Same synthetic limiter (1-sample attack, exponential release)
20
+ ✓ Same WOLA frame extraction (sqrt-Hann windows)
21
+ ✓ Same two-phase curriculum (Phase 1: drums, Phase 2: full mix + mixed batching)
22
+ ✓ Same ΔSDR evaluation metrics (global + multiband)
23
+ ✓ Same CLI interface (--phase, --epochs-p1, --drum-dir, etc.)
24
+ ✓ LF band filtering for hybrid inference compatibility
25
+
26
+ CLI
27
+ ---
28
+ # Phase 1 only
29
+ python train_transient_net.py --phase 1 --epochs-p1 50 --drum-dir ./Samples
30
+
31
+ # Full two-phase
32
+ python train_transient_net.py --phase both --epochs-p1 50 --epochs-p2 30 \
33
+ --drum-dir ./Samples --mix-dir ./FullMix
34
+
35
+ # Resume
36
+ python train_transient_net.py --phase 1 --resume checkpoints_tnet/phase1_best.pt \
37
+ --drum-dir ./Samples
38
+ """
39
+
40
+ from __future__ import annotations
41
+
42
+ import argparse
43
+ import math
44
+ import os
45
+ import random
46
+ import time
47
+ import warnings
48
+ from dataclasses import dataclass, asdict
49
+ from pathlib import Path
50
+ from typing import Dict, List, Optional, Tuple
51
+
52
+ import numpy as np
53
+ import scipy.signal as sig
54
+
55
+ try:
56
+ import soundfile as sf
57
+ _HAS_SF = True
58
+ except ImportError:
59
+ _HAS_SF = False
60
+ warnings.warn("soundfile not found — pip install soundfile")
61
+
62
+ try:
63
+ import torch
64
+ import torch.nn as nn
65
+ import torch.optim as optim
66
+ from torch.utils.data import Dataset, DataLoader
67
+ except ImportError:
68
+ raise ImportError("PyTorch required — pip install torch")
69
+
70
+ from transient_net import (
71
+ TransientNet, TransientNetConfig, TransientLoss, build_model,
72
+ )
73
+
74
+
75
+ # =============================================================================
76
+ # Constants
77
+ # =============================================================================
78
+
79
+ DRUM_DIRS = ["Kicks", "Snares", "Perc", "Tops"]
80
+ SAMPLE_RATE = 44100
81
+
82
+ # Limiter augmentation ranges
83
+ AUG_THRESH_RANGE = (-1.5, -4.5) # dBFS
84
+ AUG_RELEASE_RANGE = (40.0, 120.0) # ms
85
+ PINK_NOISE_DB = -20.0 # dB relative to peak
86
+
87
+ # Default training hyperparameters
88
+ BATCH_SIZE = 32
89
+ LR_PHASE1 = 3e-4
90
+ LR_PHASE2 = 3e-5
91
+ WEIGHT_DECAY = 1e-4
92
+ GRAD_CLIP = 1.0
93
+
94
+ PHASE1_MIX_FRAC = 0.30
95
+ FORGETTING_THR_DB = 2.0
96
+
97
+
98
+ # =============================================================================
99
+ # Pink noise generator
100
+ # =============================================================================
101
+
102
+ def generate_pink_noise(n_samples: int, rng: np.random.Generator) -> np.ndarray:
103
+ """Voss-McCartney 5-pole IIR approximation of 1/f noise."""
104
+ b = np.array([ 0.049922035, -0.095993537, 0.050612699, -0.004408786])
105
+ a = np.array([ 1.0, -2.494956002, 2.017265875, -0.522189400])
106
+ white = rng.standard_normal(n_samples)
107
+ pink = sig.lfilter(b, a, white)
108
+ rms = np.sqrt(np.mean(pink ** 2))
109
+ return pink / (rms + 1e-12)
110
+
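As a standalone sanity check of the filter above (a sketch mirroring `generate_pink_noise`, with the IIR written out as an explicit difference equation so no scipy is needed), the output should have unit RMS and more energy in the lower half of the spectrum:

```python
import numpy as np

# Same 3-pole / 3-zero pink-noise coefficients as generate_pink_noise.
B = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
A = [1.0, -2.494956002, 2.017265875, -0.522189400]

def pink_noise(n: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    p = np.zeros(n)
    for i in range(n):
        # Direct-form difference equation: a[0]*p[i] = sum(b*w) - sum(a[1:]*p)
        acc = sum(B[k] * w[i - k] for k in range(4) if i - k >= 0)
        acc -= sum(A[k] * p[i - k] for k in range(1, 4) if i - k >= 0)
        p[i] = acc
    rms = np.sqrt(np.mean(p ** 2))
    return p / (rms + 1e-12)

noise = pink_noise(20000)
rms = float(np.sqrt(np.mean(noise ** 2)))
spec = np.abs(np.fft.rfft(noise)) ** 2
low_heavy = bool(spec[: len(spec) // 2].sum() > spec[len(spec) // 2 :].sum())
```

The `low_heavy` check is a weak but robust pinkness test: equal energy per octave concentrates most energy in the lower half of a linear frequency axis.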
111
+
112
+ # =============================================================================
113
+ # Synthetic brickwall limiter (Numba-accelerated, Python fallback)
114
+ # =============================================================================
115
+
116
+ # Try to JIT-compile the inner loop with Numba for ~200× speedup.
117
+ # Falls back transparently to pure Python if Numba is not installed.
118
+ try:
119
+ from numba import njit as _njit
120
+
121
+ @_njit(cache=True, fastmath=True)
122
+ def _limiter_loop(audio: np.ndarray, thr_lin: float, rc: float) -> np.ndarray:
123
+ out = np.empty_like(audio)
124
+ env = 1.0
125
+ for n in range(len(audio)):
126
+ pk = abs(audio[n])
127
+ target = thr_lin / pk if pk > thr_lin else 1.0
128
+ env = target if target < env else rc * env + (1.0 - rc) * target
129
+ out[n] = audio[n] * env
130
+ return out
131
+
132
+ _NUMBA_OK = True
133
+
134
+ except ImportError:
135
+ _NUMBA_OK = False
136
+
137
+ def _limiter_loop(audio: np.ndarray, thr_lin: float, rc: float) -> np.ndarray: # type: ignore[misc]
138
+ """Pure-Python fallback (slow for large arrays)."""
139
+ out = np.empty_like(audio)
140
+ env = 1.0
141
+ for n in range(len(audio)):
142
+ pk = abs(audio[n])
143
+ target = thr_lin / pk if pk > thr_lin else 1.0
144
+ env = target if target < env else rc * env + (1.0 - rc) * target
145
+ out[n] = audio[n] * env
146
+ return out
147
+
148
+
149
+ def apply_brickwall_limiter(
150
+ audio: np.ndarray,
151
+ sr: int,
152
+ threshold_db: float,
153
+ release_ms: float,
154
+ ) -> np.ndarray:
155
+ """
156
+ Brickwall limiter: 1-sample attack, exponential release.
157
+ Uses Numba JIT when available (install with: pip install numba).
158
+ """
159
+ thr_lin = 10 ** (-abs(threshold_db) / 20.0)
160
+ rc = np.exp(-1.0 / max(release_ms * sr / 1000.0, 1e-9))
161
+ return _limiter_loop(audio.astype(np.float64), thr_lin, rc)
162
+
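A quick property test of the limiter semantics (instant attack, exponential release): a standalone re-implementation of the same inner loop never lets the output exceed the linear threshold, because the attack path snaps the envelope to `thr/pk` in a single sample:

```python
import numpy as np

def limiter(audio: np.ndarray, thr_lin: float, rc: float) -> np.ndarray:
    # Same loop as _limiter_loop: 1-sample attack, exponential release.
    out = np.empty_like(audio)
    env = 1.0
    for n in range(len(audio)):
        pk = abs(audio[n])
        target = thr_lin / pk if pk > thr_lin else 1.0
        env = target if target < env else rc * env + (1.0 - rc) * target
        out[n] = audio[n] * env
    return out

sr = 44100
thr_db, release_ms = -3.0, 80.0
thr_lin = 10 ** (-abs(thr_db) / 20.0)            # ≈ 0.708
rc = np.exp(-1.0 / (release_ms * sr / 1000.0))
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 100.0 * t)                # full-scale 100 Hz tone
y = limiter(x, thr_lin, rc)
peak_out = float(np.max(np.abs(y)))
```

Above threshold the envelope equals exactly `thr/pk`, so the ceiling is hit but never crossed; below threshold the (≤ 1) released envelope cannot push samples over it either.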
163
+
164
+ # =============================================================================
165
+ # Corpus preparation — LAZY LOADING
166
+ # =============================================================================
167
+ # Instead of loading all audio into RAM, we store lightweight metadata and
168
+ # load/limit on-the-fly in __getitem__. This reduces RAM from O(corpus_size)
169
+ # to O(cache_size) — critical when expanding from 76 to 2000+ samples.
170
+ #
171
+ # Architecture:
172
+ # SampleMeta — lightweight descriptor (path, chunk bounds, flags)
173
+ # AudioCache — LRU cache for loaded audio files (avoids re-reading disk)
174
+ # FrameDataset — draws random frames, applies limiter on-the-fly
175
+
176
+ import threading
178
+
179
+ @dataclass
180
+ class SampleMeta:
181
+ """Lightweight sample descriptor — NO audio arrays, just metadata."""
182
+ path: str # file path
183
+ chunk_start: int = 0 # sample offset (for loop chunks)
184
+ chunk_end: int = -1 # -1 = entire file
185
+ add_pink: bool = True # one-shots get pink noise
186
+ is_fullmix: bool = False
187
+
188
+
189
+ def _load_and_cache_audio(path: str, target_sr: int = SAMPLE_RATE) -> Optional[np.ndarray]:
190
+ """Load audio file to mono float64, normalised to 0 dBFS peak.
191
+ Results are cached by the AudioCache wrapper below."""
192
+ if not _HAS_SF:
193
+ return None
194
+ try:
195
+ audio, sr = sf.read(path, always_2d=True)
196
+ audio = audio.mean(axis=1)
197
+ if sr != target_sr:
198
+ try:
199
+ from scipy.signal import resample_poly
200
+ from math import gcd
201
+ g = gcd(target_sr, sr)
202
+ audio = resample_poly(audio, target_sr // g, sr // g)
203
+ except Exception:
204
+ pass
205
+ peak = np.max(np.abs(audio))
206
+ if peak < 1e-8:
207
+ return None
208
+ return audio.astype(np.float64) / peak
209
+ except Exception:
210
+ return None
211
+
212
+
213
+ class AudioCache:
214
+ """
215
+ Thread-safe LRU cache for loaded audio files.
216
+
217
+ Holds at most `max_files` normalised audio arrays in RAM.
+ With the default max_files=512 and ~5 s average files → 512 × 220K × 8 bytes ≈ 880 MB.
+ Still much less than the 8+ GB of the eager approach; tune via --cache-files.
220
+ """
221
+
222
+ def __init__(self, max_files: int = 512):
223
+ self._max = max_files
224
+ self._cache: Dict[str, np.ndarray] = {}
225
+ self._order: List[str] = [] # LRU order (most recent at end)
226
+ self._lock = threading.Lock()
227
+
228
+ def get(self, path: str) -> Optional[np.ndarray]:
229
+ with self._lock:
230
+ if path in self._cache:
231
+ # Move to end (most recently used)
232
+ self._order.remove(path)
233
+ self._order.append(path)
234
+ return self._cache[path]
235
+
236
+ # Load outside lock (I/O bound)
237
+ audio = _load_and_cache_audio(path)
238
+ if audio is None:
239
+ return None
240
+
241
+ with self._lock:
242
+ if path not in self._cache:
243
+ # Evict oldest if full
244
+ while len(self._cache) >= self._max:
245
+ oldest = self._order.pop(0)
246
+ del self._cache[oldest]
247
+ self._cache[path] = audio
248
+ self._order.append(path)
249
+
250
+ return audio
251
+
252
+
253
+ # Global cache shared by all datasets (avoids reloading same files
254
+ # across train/val/p1/p2 datasets).
255
+ # 512 files × ~5s average × 44100 × 8 bytes ≈ 880 MB — adjust via --cache-files CLI arg.
256
+ _AUDIO_CACHE = AudioCache(max_files=512)
257
+
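The eviction policy used by `AudioCache` can be illustrated with a tiny standalone version (same list-based LRU bookkeeping and get-or-load pattern, with a stub loader instead of soundfile):

```python
import threading
from typing import Dict, List, Optional

class TinyLRU:
    """Minimal sketch of AudioCache's LRU behaviour (strings, not audio)."""
    def __init__(self, max_items: int, loader):
        self._max = max_items
        self._cache: Dict[str, str] = {}
        self._order: List[str] = []          # LRU order, most recent at end
        self._lock = threading.Lock()
        self._loader = loader

    def get(self, key: str) -> Optional[str]:
        with self._lock:
            if key in self._cache:
                self._order.remove(key)
                self._order.append(key)      # refresh recency
                return self._cache[key]
        value = self._loader(key)            # load outside the lock (I/O bound)
        if value is None:
            return None
        with self._lock:
            if key not in self._cache:
                while len(self._cache) >= self._max:
                    del self._cache[self._order.pop(0)]  # evict oldest
                self._cache[key] = value
                self._order.append(key)
        return value

loads = []
cache = TinyLRU(2, lambda k: loads.append(k) or f"data:{k}")
cache.get("a"); cache.get("b")
cache.get("a")                # refresh "a" → "b" becomes the oldest entry
cache.get("c")                # capacity 2 → evicts "b"
cached_keys = set(cache._cache)
```

Refreshing an entry on hit is what distinguishes LRU from plain FIFO: "a" survives the eviction even though it was loaded first.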
258
+
259
+ class ChunkCache:
260
+ """
261
+ Thread-safe LRU cache for *preprocessed* clean audio chunks.
262
+
263
+ Stores the result of (_extract_chunk) — i.e. the chunk after:
264
+ - slicing, pink-noise mixing, and peak-normalisation.
265
+ Keyed by (path, chunk_start, chunk_end, add_pink).
266
+
267
+ This avoids re-reading the file AND re-generating pink noise on every
268
+ __getitem__ call. Only the random limiter (fast with Numba) and the
269
+ random frame crop still run per-access.
270
+
271
+ Memory estimate: 512 chunks × 4 s avg × 44100 Hz × 4 bytes ≈ 360 MB.
272
+ Tune with --chunk-cache-size (default 512; set 0 to disable).
273
+ """
274
+
275
+ def __init__(self, max_chunks: int = 512):
276
+ self._max = max_chunks
277
+ self._cache: Dict[tuple, np.ndarray] = {}
278
+ self._order: List[tuple] = []
279
+ self._lock = threading.Lock()
280
+
281
+ def get(self, key: tuple) -> Optional[np.ndarray]:
282
+ if self._max == 0:
283
+ return None
284
+ with self._lock:
285
+ if key in self._cache:
286
+ self._order.remove(key)
287
+ self._order.append(key)
288
+ return self._cache[key]
289
+ return None
290
+
291
+ def put(self, key: tuple, chunk: np.ndarray) -> None:
292
+ if self._max == 0:
293
+ return
294
+ with self._lock:
295
+ if key not in self._cache:
296
+ while len(self._cache) >= self._max:
297
+ oldest = self._order.pop(0)
298
+ del self._cache[oldest]
299
+ self._cache[key] = chunk.copy()
300
+ self._order.append(key)
301
+
302
+
303
+ _CHUNK_CACHE = ChunkCache(max_chunks=512)
304
+
305
+
306
+ def _extract_chunk(
307
+ audio: np.ndarray,
308
+ meta: SampleMeta,
309
+ rng: np.random.Generator,  # currently unused (pink noise uses a path-derived seed)
310
+ ) -> np.ndarray:
311
+ """
312
+ Extract the relevant chunk from a loaded audio file, with LRU caching.
313
+
314
+ For one-shots: returns full audio (with deterministic pink noise).
315
+ For loop chunks: returns the slice [chunk_start:chunk_end].
316
+
317
+ The result is cached in _CHUNK_CACHE — disk I/O and pink-noise synthesis
318
+ are skipped on repeated accesses to the same chunk. The random limiter
319
+ (which is the remaining per-call work) is applied outside this function.
320
+ """
321
+ cache_key = (meta.path, meta.chunk_start, meta.chunk_end, meta.add_pink)
322
+ cached = _CHUNK_CACHE.get(cache_key)
323
+ if cached is not None:
324
+ return cached
325
+
326
+ # ── Compute ───────────────────────────────────────────────────────────
327
+ if meta.chunk_end > 0:
328
+ chunk = audio[meta.chunk_start : meta.chunk_end].copy()
329
+ else:
330
+ chunk = audio.copy()
331
+
332
+ # Truncate very long files (safety)
333
+ max_samples = int(_LOOP_MAX_SEC * SAMPLE_RATE)
334
+ if len(chunk) > max_samples:
335
+ chunk = chunk[:max_samples]
336
+
337
+ # Add pink noise for one-shots.
338
+ # Use a deterministic seed derived from the path so the cached chunk
339
+ # is always identical (the random limiter still varies every access).
340
+ if meta.add_pink:
341
+ seed_rng = np.random.default_rng(abs(hash(meta.path)) & 0xFFFFFFFF)
342
+ pink = generate_pink_noise(len(chunk), seed_rng)
343
+ gain = float(np.max(np.abs(chunk))) * (10 ** (PINK_NOISE_DB / 20.0))
344
+ chunk = chunk + pink * gain
345
+
346
+ # Re-normalise to 0 dBFS
347
+ peak = np.max(np.abs(chunk))
348
+ if peak > 1e-8:
349
+ chunk = chunk / peak
350
+
351
+ _CHUNK_CACHE.put(cache_key, chunk)
352
+ return chunk
353
+
354
+
355
+ # =============================================================================
356
+ # Corpus builders — now return List[SampleMeta] instead of loaded audio arrays
357
+ # =============================================================================
358
+
359
+ AUDIO_EXTENSIONS = ["*.wav", "*.WAV", "*.flac", "*.FLAC", "*.aif", "*.aiff",
360
+ "*.AIF", "*.AIFF", "*.mp3", "*.ogg"]
361
+
362
+
363
+ def _collect_audio_files(directory: Path) -> List[Path]:
364
+ """Recursively collect audio files from a directory."""
365
+ files = []
366
+ if not directory.exists():
367
+ return files
368
+ for ext in AUDIO_EXTENSIONS:
369
+ files.extend(directory.rglob(ext))
370
+ return sorted(files)
371
+
372
+
373
+ def _scan_audio_file(path: Path) -> Optional[int]:
374
+ """
375
+ Quick scan: get duration in samples without loading full audio.
376
+ Returns None if the file cannot be read (the caller skips it).
377
+ """
378
+ try:
379
+ info = sf.info(str(path))
380
+ return int(info.frames)
381
+ except Exception:
382
+ return None
383
+
384
+
385
+ def build_drum_corpus(
386
+ base_dir: Path,
387
+ rng: np.random.Generator,
388
+ augment: bool = True,
389
+ ) -> List[SampleMeta]:
390
+ """Build corpus of drum one-shot metadata (lazy — no audio loaded)."""
391
+ metas = []
392
+ drum_dirs = [base_dir / d for d in DRUM_DIRS if (base_dir / d).exists()]
393
+ if not drum_dirs:
394
+ drum_dirs = [base_dir]
395
+
396
+ audio_files = []
397
+ for d in drum_dirs:
398
+ for ext in ["*.wav", "*.WAV", "*.flac", "*.FLAC", "*.aif", "*.aiff"]:
399
+ audio_files.extend(d.glob(ext))
400
+
401
+ if not audio_files:
402
+ warnings.warn(f"No audio files found in {base_dir}")
403
+ return metas
404
+
405
+ for path in audio_files:
406
+ # Quick existence check (don't load audio yet)
407
+ if not path.exists() or path.stat().st_size < 100:
408
+ continue
409
+ metas.append(SampleMeta(
410
+ path=str(path),
411
+ chunk_start=0,
412
+ chunk_end=-1,
413
+ add_pink=True,
414
+ is_fullmix=False,
415
+ ))
416
+
417
+ return metas
418
+
419
+
420
+ def build_fullmix_corpus(
421
+ mix_dir: Path,
422
+ rng: np.random.Generator,
423
+ augment: bool = True,
424
+ ) -> List[SampleMeta]:
425
+ """Build corpus of full-mix metadata."""
426
+ metas = []
427
+ audio_files = []
428
+ for ext in ["*.wav", "*.WAV", "*.flac", "*.FLAC"]:
429
+ audio_files.extend(mix_dir.rglob(ext))
430
+
431
+ for path in audio_files:
432
+ if not path.exists() or path.stat().st_size < 100:
433
+ continue
434
+ metas.append(SampleMeta(
435
+ path=str(path),
436
+ chunk_start=0,
437
+ chunk_end=-1,
438
+ add_pink=False,
439
+ is_fullmix=True,
440
+ ))
441
+
442
+ return metas
443
+
444
+
445
+ # =============================================================================
446
+ # Extended corpus: drum loops + one-shots from additional directories
447
+ # =============================================================================
448
+
449
+ _ONESHOT_MAX_SEC = 1.5
450
+ _LOOP_MAX_SEC = 30.0
451
+ _LOOP_CHUNK_SEC = 4.0
452
+ _LOOP_N_AUG = 3 # number of SampleMeta entries per chunk
453
+ # (limiter params are randomised at access time,
454
+ # so this just controls sampling weight)
455
+
456
+
457
+ def parse_extra_dirs_csv(csv_path: Path) -> List[Tuple[Path, str]]:
458
+ """Parse CSV with columns: Percorso Directory, Tipo."""
459
+ import csv
460
+ entries = []
461
+ base = csv_path.parent
462
+
463
+ with open(csv_path, "r", encoding="utf-8") as f:
464
+ reader = csv.reader(f)
465
+ header = next(reader, None)
466
+ for row in reader:
467
+ if len(row) < 2:
468
+ continue
469
+ raw_path = row[0].strip().strip('"')
470
+ tipo = row[1].strip().strip('"')
471
+ p = Path(raw_path)
472
+ if not p.is_absolute():
473
+ p = base / p
474
+ entries.append((p, tipo))
475
+
476
+ return entries
477
+
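A minimal round-trip shows the path-resolution rule (relative entries are resolved against the CSV's own directory). The sketch below mirrors the parsing logic of `parse_extra_dirs_csv` so it runs standalone:

```python
import csv
import tempfile
from pathlib import Path
from typing import List, Tuple

def parse_dirs_csv(csv_path: Path) -> List[Tuple[Path, str]]:
    # Mirrors parse_extra_dirs_csv: skip header row, strip quotes,
    # resolve relative paths against the CSV's parent directory.
    entries = []
    base = csv_path.parent
    with open(csv_path, "r", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader, None)                       # header row
        for row in reader:
            if len(row) < 2:
                continue
            p = Path(row[0].strip().strip('"'))
            if not p.is_absolute():
                p = base / p
            entries.append((p, row[1].strip().strip('"')))
    return entries

with tempfile.TemporaryDirectory() as tmp:
    csv_path = Path(tmp) / "dirs.csv"
    csv_path.write_text(
        "Percorso Directory,Tipo\n"
        "Loops/Drums,loop\n"
        "OneShots/Kicks,one shot\n",
        encoding="utf-8",
    )
    entries = parse_dirs_csv(csv_path)
    tipos = [t for _, t in entries]
    all_absolute = all(p.is_absolute() for p, _ in entries)
```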
478
+
479
+ def build_extended_corpus(
480
+ extra_entries: List[Tuple[Path, str]],
481
+ rng: np.random.Generator,
482
+ ) -> List[SampleMeta]:
483
+ """
484
+ Build lazy corpus from extended directories.
485
+
486
+ For loops: scan file duration, create chunk metas with start/end offsets.
487
+ For one-shots: single meta per file.
488
+
489
+ No audio is loaded — just file scanning for duration.
490
+ """
491
+ metas = []
492
+ n_files = 0
493
+ n_oneshots = 0
494
+ n_loops = 0
495
+ n_skipped = 0
496
+
497
+ chunk_samples = int(_LOOP_CHUNK_SEC * SAMPLE_RATE)
498
+ chunk_hop = chunk_samples // 2 # 50% overlap
499
+
500
+ for directory, tipo in extra_entries:
501
+ if not directory.exists():
502
+ warnings.warn(f"Directory not found, skipping: {directory}")
503
+ n_skipped += 1
504
+ continue
505
+
506
+ files = _collect_audio_files(directory)
507
+ if not files:
508
+ continue
509
+
510
+ is_oneshot = ("one" in tipo.lower() and "shot" in tipo.lower())
511
+
512
+ for path in files:
513
+ if not path.exists() or path.stat().st_size < 100:
514
+ continue
515
+
516
+ # Get duration without loading audio
517
+ n_samples = _scan_audio_file(path)
518
+ if n_samples is None or n_samples < 1000:
519
+ continue
520
+
521
+ n_files += 1
522
+ duration_sec = n_samples / SAMPLE_RATE
523
+
524
+ if is_oneshot or duration_sec <= _ONESHOT_MAX_SEC:
525
+ # One-shot
526
+ metas.append(SampleMeta(
527
+ path=str(path), chunk_start=0, chunk_end=-1,
528
+ add_pink=True, is_fullmix=False,
529
+ ))
530
+ n_oneshots += 1
531
+ else:
532
+ # Loop: create chunk metas
533
+ max_samples = min(n_samples, int(_LOOP_MAX_SEC * SAMPLE_RATE))
534
+ pos = 0
535
+ while pos + chunk_samples <= max_samples:
536
+ # Create N_AUG entries per chunk (increases sampling weight;
537
+ # actual limiter params are randomised every access)
538
+ for _ in range(_LOOP_N_AUG):
539
+ metas.append(SampleMeta(
540
+ path=str(path),
541
+ chunk_start=pos,
542
+ chunk_end=pos + chunk_samples,
543
+ add_pink=False,
544
+ is_fullmix=False,
545
+ ))
546
+ pos += chunk_hop
547
+
548
+ # Last partial chunk
549
+ remaining = max_samples - pos
550
+ if remaining > chunk_samples // 4:
551
+ for _ in range(_LOOP_N_AUG):
552
+ metas.append(SampleMeta(
553
+ path=str(path),
554
+ chunk_start=pos,
555
+ chunk_end=max_samples,
556
+ add_pink=False,
557
+ is_fullmix=False,
558
+ ))
559
+
560
+ n_loops += 1
561
+
562
+ print(f" Extended corpus: {n_files} files scanned "
563
+ f"({n_oneshots} one-shots, {n_loops} loops) "
564
+ f"→ {len(metas)} SampleMeta entries [lazy — 0 bytes audio in RAM]")
565
+ if n_skipped > 0:
566
+ print(f" ⚠ {n_skipped} directories not found (skipped)")
567
+
568
+ return metas
569
+
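The chunking arithmetic in `build_extended_corpus` can be checked in isolation. With hypothetical numbers — a 10 s loop, 4 s chunks, 50 % overlap, `_LOOP_N_AUG = 3` — the walk yields 4 full chunks plus 1 partial tail, i.e. 15 metadata entries:

```python
SR = 44100
CHUNK_SEC, MAX_SEC, N_AUG = 4.0, 30.0, 3   # matches the module constants

def count_chunk_metas(n_samples: int) -> int:
    # Same walk as build_extended_corpus: 50% hop, partial tail kept
    # only if it is longer than a quarter chunk.
    chunk = int(CHUNK_SEC * SR)
    hop = chunk // 2
    max_samples = min(n_samples, int(MAX_SEC * SR))
    metas, pos = 0, 0
    while pos + chunk <= max_samples:
        metas += N_AUG
        pos += hop
    if max_samples - pos > chunk // 4:
        metas += N_AUG
    return metas

ten_sec_metas = count_chunk_metas(10 * SR)
```

Full chunks start at 0, 2, 4 and 6 s (the 8 s start would overrun), and the 2 s tail exceeds the 1 s quarter-chunk floor, so 5 × 3 = 15 entries.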
570
+
571
+ # =============================================================================
572
+ # Frame Dataset — LAZY LOADING, no masks
573
+ # =============================================================================
574
+
575
+ class FrameDataset(Dataset):
576
+ """
577
+ Random-frame dataset for TransientNet with LAZY audio loading.
578
+
579
+ Returns (yc_w, ctx, x_clean_w) — no masks needed.
580
+
581
+ Audio is loaded from disk on-demand via AudioCache (LRU, ~110 MB).
582
+ Limiter is applied on-the-fly with random params per access.
583
+ This keeps RAM usage constant regardless of corpus size.
584
+
585
+ Augmentation (training only)
586
+ ----------------------------
587
+ - Random limiter params: threshold and release randomised every access
588
+ - Random gain: ±6 dB uniform
589
+ - Polarity flip: 50%
590
+ """
591
+
592
+ def __init__(
593
+ self,
594
+ samples: List[SampleMeta],
595
+ cfg: TransientNetConfig,
596
+ n_virtual: int = 10000,
597
+ rng_seed: int = 42,
598
+ lf_band_hz: float = 0.0,
599
+ augment: bool = True,
600
+ ):
601
+ if not samples:
602
+ raise ValueError("Empty sample list")
603
+ self.samples = samples
604
+ self.M = cfg.window_length
605
+ self.a = cfg.hop_length
606
+ self.K_ctx = cfg.K_context
607
+ self.sr = cfg.sample_rate
608
+ self.n_virtual = n_virtual
609
+ # NOTE: DataLoader workers fork a copy of this generator, so with
+ # num_workers > 0 all workers draw the same sequence unless reseeded
+ # via a worker_init_fn.
+ self.rng = np.random.default_rng(rng_seed)
610
+ self.lf_band_hz = lf_band_hz
611
+ self.augment = augment
612
+ self.cache = _AUDIO_CACHE
613
+
614
+ from scipy.signal.windows import hann
615
+ self.win = np.sqrt(hann(self.M, sym=False)).astype(np.float32)
616
+
617
+ self._lp_sos = None
618
+ if lf_band_hz > 0.0:
619
+ from scipy.signal import butter
620
+ fc = float(np.clip(lf_band_hz, 1.0, cfg.sample_rate / 2.0 - 1.0))
621
+ self._lp_sos = butter(2, fc, btype="low", fs=cfg.sample_rate, output="sos")
622
+
623
+ def __len__(self):
624
+ return self.n_virtual
625
+
626
+ def __getitem__(self, _idx: int):
627
+ # ── Pick a random sample meta ─────────────────────────────────────
628
+ meta = self.samples[self.rng.integers(len(self.samples))]
629
+
630
+ # ── Load audio from cache ─────────────────────────────────────────
631
+ raw_audio = self.cache.get(meta.path)
632
+ if raw_audio is None:
633
+ # File unreadable — pick another (rare)
634
+ for _ in range(10):
635
+ meta = self.samples[self.rng.integers(len(self.samples))]
636
+ raw_audio = self.cache.get(meta.path)
637
+ if raw_audio is not None:
638
+ break
639
+ if raw_audio is None:
640
+ # Return zeros as last resort
641
+ return (torch.zeros(self.M), torch.zeros(self.K_ctx, self.M),
642
+ torch.zeros(self.M))
643
+
644
+ # ── Extract chunk + add pink noise if needed ──────────────────────
645
+ clean_sig = _extract_chunk(raw_audio, meta, self.rng)
646
+
647
+ # ── Apply limiter with random params ──────────────────────────────
648
+ if self.augment:
649
+ thr_db = self.rng.uniform(*sorted(AUG_THRESH_RANGE))
650
+ rel_ms = self.rng.uniform(*AUG_RELEASE_RANGE)
651
+ else:
652
+ thr_db = -3.0
653
+ rel_ms = 80.0
654
+
655
+ limited_sig = apply_brickwall_limiter(clean_sig, self.sr, thr_db, rel_ms)
656
+
657
+ # ── Optional LF band filtering ────────────────────────────────────
658
+ if self._lp_sos is not None:
659
+ from scipy.signal import sosfiltfilt
660
+ clean_sig = sosfiltfilt(self._lp_sos, clean_sig).astype(np.float64)
661
+ limited_sig = sosfiltfilt(self._lp_sos, limited_sig).astype(np.float64)
662
+
663
+ L = len(clean_sig)
664
+ if L < self.M:
665
+ pad = self.M - L
666
+ clean_sig = np.pad(clean_sig, (0, pad))
667
+ limited_sig = np.pad(limited_sig, (0, pad))
668
+ i = 0
669
+ else:
670
+ max_idx = max(0, (L - self.M) // self.a)
671
+ i = self.rng.integers(0, max_idx + 1) if max_idx > 0 else 0
672
+
673
+ # Current frame (windowed)
674
+ idx1 = i * self.a
675
+ idx2 = min(idx1 + self.M, L)
676
+ seg_len = idx2 - idx1
677
+
678
+ yc_w = np.zeros(self.M, dtype=np.float32)
679
+ x_clean = np.zeros(self.M, dtype=np.float32)
680
+ yc_w[:seg_len] = (limited_sig[idx1:idx2] * self.win[:seg_len]).astype(np.float32)
681
+ x_clean[:seg_len] = (clean_sig[idx1:idx2] * self.win[:seg_len]).astype(np.float32)
682
+
683
+ # K context frames (strictly causal)
684
+ ctx = np.zeros((self.K_ctx, self.M), dtype=np.float32)
685
+ for k in range(self.K_ctx):
686
+ ci = i - (self.K_ctx - k)
687
+ if ci < 0:
688
+ continue
689
+ c_idx1 = ci * self.a
690
+ c_idx2 = min(c_idx1 + self.M, L)
691
+ c_seg = c_idx2 - c_idx1
692
+ if c_seg <= 0:
693
+ continue
694
+ ctx[k, :c_seg] = (limited_sig[c_idx1:c_idx2] * self.win[:c_seg]).astype(np.float32)
695
+
696
+ # ── Augmentation ──────────────────────────────────────────────────
697
+ if self.augment:
698
+ gain_db = self.rng.uniform(-6.0, 0.0)
699
+ gain_lin = np.float32(10 ** (gain_db / 20.0))
700
+ yc_w *= gain_lin
701
+ x_clean *= gain_lin
702
+ ctx *= gain_lin
703
+
704
+ if self.rng.random() < 0.5:
705
+ yc_w *= -1
706
+ x_clean *= -1
707
+ ctx *= -1
708
+
709
+ return (
710
+ torch.from_numpy(yc_w),
711
+ torch.from_numpy(ctx),
712
+ torch.from_numpy(x_clean),
713
+ )
714
+
715
+
716
+ class MixedBatchDataset(Dataset):
717
+ """Phase-2 mixed batching: α% Phase-1 + (1−α)% Phase-2."""
718
+
719
+ def __init__(self, ds_p1, ds_p2, p1_frac=PHASE1_MIX_FRAC, n_virtual=20000):
720
+ self.ds_p1 = ds_p1
721
+ self.ds_p2 = ds_p2
722
+ self.p1_frac = p1_frac
723
+ self.n_virtual = n_virtual
724
+ self.rng = random.Random(0)
725
+
726
+ def __len__(self):
727
+ return self.n_virtual
728
+
729
+ def __getitem__(self, idx):
730
+ if self.rng.random() < self.p1_frac:
731
+ return self.ds_p1[idx % len(self.ds_p1)]
732
+ return self.ds_p2[idx % len(self.ds_p2)]
733
+
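The sampling ratio of `MixedBatchDataset` can be verified empirically with the same seeded `random.Random(0)` it uses — over many virtual indices the Phase-1 fraction converges to `p1_frac`:

```python
import random

p1_frac, draws = 0.30, 20000
rng = random.Random(0)                 # same seed as MixedBatchDataset
n_p1 = sum(rng.random() < p1_frac for _ in range(draws))
observed = n_p1 / draws                # empirical Phase-1 fraction
```

With 20 000 draws the standard error is ≈ 0.003, so the observed fraction sits well inside ±0.02 of the target.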
734
+
735
+ # =============================================================================
736
+ # Evaluation metrics (reused from train_spade_unrolled.py)
737
+ # =============================================================================
738
+
739
+ def _sdr_batch(ref, est, eps=1e-10):
740
+ return 10.0 * torch.log10(
741
+ ref.pow(2).sum(-1) / ((ref - est).pow(2).sum(-1) + eps) + eps)
742
+
743
+ def _delta_sdr_batch(clean, limited, enhanced, eps=1e-10):
744
+ return _sdr_batch(clean, enhanced, eps) - _sdr_batch(clean, limited, eps)
745
+
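The metric is easy to sanity-check with numpy on synthetic vectors (this sketch restates `_sdr_batch` outside torch; an enhanced signal closer to clean than the limited one must give a positive ΔSDR):

```python
import numpy as np

def sdr_db(ref: np.ndarray, est: np.ndarray, eps: float = 1e-10) -> float:
    # Same formula as _sdr_batch, in numpy.
    return 10.0 * np.log10(np.sum(ref ** 2) / (np.sum((ref - est) ** 2) + eps) + eps)

clean = np.sin(2 * np.pi * np.linspace(0.0, 10.0, 4096))
limited = 0.7 * clean      # heavy static gain loss: SDR ≈ 10.46 dB
enhanced = 0.9 * clean     # partially restored:     SDR = 20.00 dB
dsdr = sdr_db(clean, enhanced) - sdr_db(clean, limited)
```

For pure scaling errors the SDR is just `10·log10(1/(1−g)²)`, so here ΔSDR = 20 − 10·log10(1/0.09) ≈ 9.54 dB.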
746
+ def _bandpass_fft(x, sr, f_lo, f_hi):
747
+ N = x.shape[-1]
748
+ X = torch.fft.rfft(x, n=N)
749
+ freqs = torch.fft.rfftfreq(N, d=1.0 / sr)
750
+ X_filt = X * ((freqs >= f_lo) & (freqs < f_hi)).to(x.device)
751
+ return torch.fft.irfft(X_filt, n=N)
752
+
753
+ _BANDS = {
754
+ "sub_bass": (0.0, 250.0),
755
+ "bass": (250.0, 500.0),
756
+ "low_mid": (500.0, 2000.0),
757
+ "mid": (2000.0, 8000.0),
758
+ }
759
+
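The brickwall FFT band split can be reproduced in numpy (an analogue of `_bandpass_fft`): a 1 kHz tone survives the `low_mid` band (500–2000 Hz) essentially intact, while a 100 Hz tone is rejected:

```python
import numpy as np

def bandpass_fft(x: np.ndarray, sr: int, f_lo: float, f_hi: float) -> np.ndarray:
    # numpy analogue of _bandpass_fft: zero all bins outside [f_lo, f_hi).
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return np.fft.irfft(X * ((freqs >= f_lo) & (freqs < f_hi)), n=len(x))

sr, N = 44100, 4410                        # 10 Hz bin spacing
t = np.arange(N) / sr
tone_1k = np.sin(2 * np.pi * 1000.0 * t)   # lands exactly on a bin
tone_100 = np.sin(2 * np.pi * 100.0 * t)
kept = bandpass_fft(tone_1k, sr, 500.0, 2000.0)
cut = bandpass_fft(tone_100, sr, 500.0, 2000.0)
kept_ratio = float(np.sum(kept ** 2) / np.sum(tone_1k ** 2))
cut_ratio = float(np.sum(cut ** 2) / np.sum(tone_100 ** 2))
```

Choosing N so both tones land exactly on FFT bins avoids spectral leakage, which is why the pass/stop ratios are so clean.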
760
+ def _multiband_delta_sdr_batch(clean, limited, enhanced, sr=SAMPLE_RATE, eps=1e-10):
761
+ results = {}
762
+ for band_name, (f_lo, f_hi) in _BANDS.items():
763
+ c_b = _bandpass_fft(clean, sr, f_lo, f_hi)
764
+ l_b = _bandpass_fft(limited, sr, f_lo, f_hi)
765
+ e_b = _bandpass_fft(enhanced, sr, f_lo, f_hi)
766
+ if c_b.pow(2).mean().item() < 1e-12:
767
+ results[f"dsdr_{band_name}"] = 0.0
768
+ else:
769
+ results[f"dsdr_{band_name}"] = _delta_sdr_batch(
770
+ c_b, l_b, e_b, eps).mean().item()
771
+ return results
772
+
773
+
774
+ # =============================================================================
775
+ # Training config
776
+ # =============================================================================
777
+
778
+ @dataclass
779
+ class TrainConfig:
780
+ """All training hyperparameters."""
781
+ # Directories
782
+ drum_dir: str = "./Samples"
783
+ mix_dir: str = ""
784
+ ckpt_dir: str = "./checkpoints_tnet"
785
+ extra_dirs_csv: str = "" # CSV with additional drum dirs (loops + one-shots)
786
+
787
+ # Phases
788
+ phase: str = "both"
789
+ epochs_p1: int = 50
790
+ epochs_p2: int = 30
791
+
792
+ # Optimisation
793
+ batch_size: int = BATCH_SIZE
794
+ lr_phase1: float = LR_PHASE1
795
+ lr_phase2: float = LR_PHASE2
796
+ weight_decay: float = WEIGHT_DECAY
797
+ grad_clip: float = GRAD_CLIP
798
+
799
+ # Mixed batching
800
+ p1_mix_frac: float = PHASE1_MIX_FRAC
801
+
802
+ # Virtual epoch sizes
803
+ frames_per_epoch: int = 8000
804
+ frames_per_epoch_p2: int = 16000
805
+
806
+ # Validation
807
+ val_frac: float = 0.15
808
+ forgetting_thr: float = FORGETTING_THR_DB
809
+
810
+ # Loss weights (4 terms, no conflicts)
811
+ loss_w_time: float = 1.0
812
+ loss_w_stft: float = 0.5
813
+ loss_w_residual: float = 0.01 # light anti-hallucination
814
+ loss_w_energy: float = 0.1 # residual amplitude calibration (clamped + warmup)
815
+
816
+ # Early stopping
817
+ early_stop_patience: int = 15
818
+
819
+ # LF band filter (for hybrid inference compatibility)
820
+ lf_band_hz: float = 8000.0
821
+
822
+ # Device / resume
823
+ device: str = "cuda"
824
+ resume: str = ""
825
+ ckpt_phase1: str = ""
826
+
827
+ # Logging
828
+ log_every: int = 50
829
+ save_every: int = 5
830
+ num_workers: int = 4 # DataLoader worker processes
831
+
832
+
833
+ # =============================================================================
834
+ # Data loaders
835
+ # =============================================================================
836
+
837
+ def _make_loaders(
838
+ samples_tr: List[SampleMeta],
839
+ samples_val: List[SampleMeta],
840
+ cfg: TransientNetConfig,
841
+ tc: TrainConfig,
842
+ phase: int,
843
+ samples_p1_tr: Optional[List[SampleMeta]] = None,
844
+ ) -> Tuple[DataLoader, DataLoader]:
845
+
846
+ n_tr = tc.frames_per_epoch if phase == 1 else tc.frames_per_epoch_p2
847
+ n_val = max(500, n_tr // 8)
848
+
849
+ ds_kw = dict(lf_band_hz=tc.lf_band_hz)
850
+
851
+ ds_val = FrameDataset(samples_val, cfg, n_virtual=n_val, rng_seed=999,
852
+ augment=False, **ds_kw)
853
+
854
+ if phase == 1:
855
+ ds_tr = FrameDataset(samples_tr, cfg, n_virtual=n_tr, rng_seed=0,
856
+ augment=True, **ds_kw)
857
+ else:
858
+ if not samples_p1_tr:
859
+ raise ValueError("Phase 2 requires Phase-1 samples (mixed batching)")
860
+ ds_p2 = FrameDataset(samples_tr, cfg, n_virtual=n_tr, rng_seed=1,
861
+ augment=True, **ds_kw)
862
+ ds_p1 = FrameDataset(samples_p1_tr, cfg, n_virtual=n_tr//2, rng_seed=2,
863
+ augment=True, **ds_kw)
864
+ ds_tr = MixedBatchDataset(ds_p1, ds_p2, p1_frac=tc.p1_mix_frac, n_virtual=n_tr)
865
+
866
+ loader_tr = DataLoader(ds_tr, batch_size=tc.batch_size, shuffle=True,
+ num_workers=tc.num_workers, pin_memory=True, drop_last=True,
+ persistent_workers=(tc.num_workers > 0))
869
+ loader_val = DataLoader(ds_val, batch_size=tc.batch_size, shuffle=False,
870
+ num_workers=max(1, tc.num_workers // 2), pin_memory=True,
871
+ persistent_workers=True)
872
+ return loader_tr, loader_val
873
+
874
+
875
+ # =============================================================================
876
+ # Diagnostics
877
+ # =============================================================================
878
+
879
+ def _diag_model(
880
+ model: TransientNet,
881
+ loader: DataLoader,
882
+ device: str,
883
+ epoch: int,
884
+ n_batches: int = 5,
885
+ ):
886
+ """
887
+ Print diagnostics: residual magnitude, gradient norms, parameter stats.
888
+
889
+ What to look for:
890
+ res_abs_mean ↑ → model is generating correction (good, if ΔSDR also ↑)
891
+ res_abs_mean → 0 → model collapsed to identity (bad)
892
+ grad_norm_rms → 0 → vanishing gradients (bad — but shouldn't happen here)
893
+ res_std/res_mean → diversity of corrections across frames
894
+ """
895
+ model.eval()
896
+ all_res = []
897
+ with torch.no_grad():
898
+ for i, batch in enumerate(loader):
899
+ if i >= n_batches:
900
+ break
901
+ yc_w, ctx, _ = [b.to(device) for b in batch]
902
+ _, residual = model(yc_w, ctx)
903
+ all_res.append(residual.cpu())
904
+
905
+ if not all_res:
906
+ return
907
+
908
+ R = torch.cat(all_res, dim=0) # (N, M)
909
+ res_abs = R.abs()
910
+
911
+ print(f" [diag ep{epoch}] residual stats:")
912
+ print(f" mean|r̂|: {res_abs.mean():.6f}")
913
+ print(f" max|r̂|: {res_abs.max():.6f}")
914
+ print(f" std|r̂|: {res_abs.std():.6f}")
915
+ print(f" % > 0.001: {(res_abs > 0.001).float().mean():.3f}")
916
+ print(f" % > 0.01: {(res_abs > 0.01).float().mean():.3f}")
917
+
918
+ # Gradient norm
919
+ total_gnorm = 0.0
920
+ n_params = 0
921
+ for p in model.parameters():
922
+ if p.grad is not None:
923
+ total_gnorm += p.grad.norm().item() ** 2
924
+ n_params += 1
925
+ if n_params > 0:
926
+ rms = math.sqrt(total_gnorm / n_params)
927
+ print(f" grad_norm_rms: {rms:.6f}"
928
+ + (" ⚠ WEAK" if rms < 1e-5 else ""))
929
+
930
+ # Check for collapsed output conv
931
+ oc_weight = model.frame_processor.output_conv.weight
932
+ print(f" output_conv.weight norm: {oc_weight.norm():.6f}"
933
+ + (" ⚠ NEAR-ZERO (identity mode)" if oc_weight.norm() < 1e-5 else ""))
934
+
935
+
936
+ # =============================================================================
937
+ # Training loop
938
+ # =============================================================================
939
+
940
+ def _train_epoch(
941
+ model: TransientNet,
942
+ loader: DataLoader,
943
+ optimizer: torch.optim.Optimizer,
944
+ scheduler: object,
945
+ loss_fn: TransientLoss,
946
+ tc: TrainConfig,
947
+ epoch: int,
948
+ device: str,
949
+ ) -> Dict[str, float]:
950
+ model.train()
951
+ running = {k: 0.0 for k in ["total", "time_mse", "stft", "res_l1", "energy_ratio", "w_energy_eff"]}
952
+ n_steps = 0
953
+ t0 = time.time()
954
+
955
+ for step, batch in enumerate(loader):
956
+ yc_w, ctx, x_clean = [b.to(device) for b in batch]
957
+
958
+ optimizer.zero_grad(set_to_none=True)
959
+
960
+ x_hat, residual = model(yc_w, ctx)
961
+        loss, details = loss_fn(x_hat, x_clean, yc_w, residual)
+
+        loss.backward()
+        nn.utils.clip_grad_norm_(model.parameters(), tc.grad_clip)
+        optimizer.step()
+        if scheduler is not None:
+            scheduler.step()
+
+        for k, v in details.items():
+            running[k] += v
+        n_steps += 1
+
+        if step > 0 and step % tc.log_every == 0:
+            elapsed = time.time() - t0
+            sps = step * tc.batch_size / elapsed
+            r, n = running, n_steps
+            print(f"  [Ep {epoch:3d} | {step:4d}/{len(loader)}] "
+                  f"loss={r['total']/n:.4f} "
+                  f"mse={r['time_mse']/n:.4f} "
+                  f"stft={r['stft']/n:.4f} "
+                  f"ergy={r['energy_ratio']/n:.3f} "
+                  f"res={r['res_l1']/n:.5f} "
+                  f"{sps:.0f} sa/s")
+
+    return {k: v / max(n_steps, 1) for k, v in running.items()}
+
+
+ @torch.no_grad()
+ def _validate_epoch(
+     model: TransientNet,
+     loader: DataLoader,
+     loss_fn: TransientLoss,
+     device: str,
+ ) -> Dict[str, float]:
+     """
+     Validation loop.
+
+     Returns loss sub-components + ΔSDR metrics.
+
+     Keys
+     ----
+     total, time_mse, stft, res_l1 : loss components
+     dsdr_global : ΔSDR averaged over all frames (dB)
+     dsdr_sub_bass, dsdr_bass, dsdr_low_mid, dsdr_mid : per-band ΔSDR
+     cos_sim : cosine similarity of residual (DCT domain)
+     res_abs_mean : mean |residual| (identity-collapse indicator)
+     """
+     model.eval()
+     running = {k: 0.0 for k in ["total", "time_mse", "stft", "res_l1", "energy_ratio", "w_energy_eff"]}
+     dsdr_running = {
+         "dsdr_global": 0.0,
+         "dsdr_sub_bass": 0.0, "dsdr_bass": 0.0,
+         "dsdr_low_mid": 0.0, "dsdr_mid": 0.0,
+         "cos_sim": 0.0, "res_abs_mean": 0.0,
+     }
+     n_steps = 0
+     from scipy.fft import dct as _dct_np  # hoisted out of the batch loop
+
+     for batch in loader:
+         yc_w, ctx, x_clean = [b.to(device) for b in batch]
+         x_hat, residual = model(yc_w, ctx)
+         _, details = loss_fn(x_hat, x_clean, yc_w, residual)
+
+         for k, v in details.items():
+             running[k] += v
+
+         # ΔSDR global
+         dsdr_running["dsdr_global"] += _delta_sdr_batch(
+             x_clean, yc_w, x_hat).mean().item()
+
+         # ΔSDR multiband
+         for k, v in _multiband_delta_sdr_batch(x_clean, yc_w, x_hat, sr=SAMPLE_RATE).items():
+             dsdr_running[k] += v
+
+         # Cosine similarity (DCT domain, residual vs GT residual)
+         gt_res = (x_clean - yc_w).cpu().float()
+         est_res = residual.cpu().float()
+         eps = 1e-10
+         G = np.array([_dct_np(g.numpy(), type=2, norm="ortho") for g in gt_res])
+         E = np.array([_dct_np(e.numpy(), type=2, norm="ortho") for e in est_res])
+         cos_batch = (G * E).sum(-1) / (
+             np.sqrt((G**2).sum(-1)) * np.sqrt((E**2).sum(-1)) + eps)
+         dsdr_running["cos_sim"] += float(cos_batch.mean())
+
+         # Identity-collapse indicator
+         dsdr_running["res_abs_mean"] += residual.abs().mean().item()
+
+         n_steps += 1
+
+     metrics = {k: v / max(n_steps, 1) for k, v in running.items()}
+     metrics.update({k: v / max(n_steps, 1) for k, v in dsdr_running.items()})
+     return metrics
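The ΔSDR bookkeeping above relies on `_delta_sdr_batch`, which is defined elsewhere in this file. As a standalone sketch (an assumed, conventional definition of ΔSDR, not necessarily the module's exact one), the metric is the SDR gain of the restoration over the limited input:

```python
import numpy as np

def delta_sdr(clean: np.ndarray, limited: np.ndarray, restored: np.ndarray) -> float:
    """ΔSDR = SDR(restored) - SDR(limited), in dB. Positive = improvement."""
    def sdr(ref, est):
        noise = ref - est
        return 10.0 * np.log10(np.sum(ref**2) / (np.sum(noise**2) + 1e-12))
    return sdr(clean, restored) - sdr(clean, limited)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)       # "clean" frame
y = np.clip(x, -0.5, 0.5)           # crude stand-in for a limited frame
x_hat = y + 0.9 * (x - y)           # restoration recovering 90% of the residual
print(delta_sdr(x, y, x_hat))       # ≈ +20 dB (residual energy shrinks 100×)
```

Recovering a fraction `f` of the residual shrinks the error energy by `(1-f)²`, so `f = 0.9` gives a 20 dB gain regardless of the signal.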
+
+ def _save_checkpoint(model, optimizer, epoch, val_loss, path: Path, extra=None):
+     state = {
+         "epoch": epoch,
+         "val_loss": val_loss,
+         "model": model.state_dict(),
+         "optimizer": optimizer.state_dict(),
+         "cfg": asdict(model.cfg),
+         "arch": "TransientNet",
+     }
+     if extra:
+         state.update(extra)
+     torch.save(state, path)
+     print(f"  ✓ Checkpoint saved → {path} (val_loss={val_loss:.4f})")
+
+
+ def _load_checkpoint(model, optimizer, path: Path, device: str):
+     ckpt = torch.load(path, map_location=device, weights_only=False)
+     model.load_state_dict(ckpt["model"])
+     if optimizer is not None and "optimizer" in ckpt:
+         optimizer.load_state_dict(ckpt["optimizer"])
+     return ckpt.get("epoch", 0), ckpt.get("val_loss", float("inf"))
+
+
+ # =============================================================================
+ # Phase 1
+ # =============================================================================
+
+ def train_phase1(
+     model: TransientNet,
+     tc: TrainConfig,
+     device: str,
+ ) -> Path:
+     print("\n" + "="*70)
+     print("PHASE 1 — Isolated drum samples + pink noise [TransientNet]")
+     print("="*70)
+
+     drum_dir = Path(tc.drum_dir)
+     if not drum_dir.exists():
+         raise FileNotFoundError(f"Drum directory not found: {drum_dir}")
+
+     rng = np.random.default_rng(42)
+     print(f"  Loading drum corpus from {drum_dir} …")
+     all_samples = build_drum_corpus(drum_dir, rng, augment=True)
+
+     # ── Extended corpus (loops + extra one-shots) ─────────────────────────
+     if tc.extra_dirs_csv:
+         csv_path = Path(tc.extra_dirs_csv)
+         if csv_path.exists():
+             print(f"  Loading extended corpus from {csv_path} …")
+             extra_entries = parse_extra_dirs_csv(csv_path)
+             print(f"  Found {len(extra_entries)} directories in CSV")
+             extra_samples = build_extended_corpus(extra_entries, rng)
+             all_samples.extend(extra_samples)
+         else:
+             print(f"  ⚠ Extra dirs CSV not found: {csv_path}")
+
+     if not all_samples:
+         raise RuntimeError("No drum samples found — check --drum-dir and --extra-dirs")
+
+     print(f"  Total samples (original + extended): {len(all_samples)}")
+
+     # ── Scale virtual epoch size to corpus size ───────────────────────────
+     # With 76 samples, 8000 frames/epoch = ~105 frames/sample (reasonable).
+     # With 1000+ samples, keep the same ratio but cap at 40K to avoid
+     # excessively long epochs.
+     base_ratio = tc.frames_per_epoch / 76.0  # ~105
+     scaled_epoch = min(40000, max(tc.frames_per_epoch,
+                                   int(len(all_samples) * base_ratio)))
+     if scaled_epoch != tc.frames_per_epoch:
+         print(f"  Auto-scaled frames_per_epoch: {tc.frames_per_epoch} → {scaled_epoch}")
+     tc.frames_per_epoch = scaled_epoch
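The auto-scaling rule above keeps the frames-per-sample ratio of the reference 76-sample corpus and clamps the result. A standalone sketch of the arithmetic (8000 is the default mentioned in the comment; the actual `TrainConfig` default is assumed):

```python
# Virtual-epoch scaling: keep ~105 frames per sample, floor at the configured
# default, cap at 40K frames per epoch.
frames_per_epoch = 8000                  # assumed TrainConfig default
base_ratio = frames_per_epoch / 76.0     # ≈ 105.26 frames per sample

def scaled(n_samples: int) -> int:
    return min(40000, max(frames_per_epoch, int(n_samples * base_ratio)))

print(scaled(76))    # 8000  — unchanged at the reference corpus size
print(scaled(500))   # 40000 — 500 × 105.26 ≈ 52631 hits the cap
```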
+
+     n_val = max(1, int(len(all_samples) * tc.val_frac))
+     rng.shuffle(all_samples)
+     samples_val = all_samples[:n_val]
+     samples_tr = all_samples[n_val:]
+     print(f"  Train: {len(samples_tr)}  Val: {len(samples_val)}")
+
+     loss_fn = TransientLoss(
+         w_time=tc.loss_w_time,
+         w_stft=tc.loss_w_stft,
+         w_residual=tc.loss_w_residual,
+         w_energy=tc.loss_w_energy,
+     ).to(device)
+
+     loader_tr, loader_val = _make_loaders(samples_tr, samples_val, model.cfg, tc, phase=1)
+
+     optimizer = optim.AdamW(model.parameters(), lr=tc.lr_phase1,
+                             weight_decay=tc.weight_decay, betas=(0.9, 0.999))
+     total_steps = tc.epochs_p1 * len(loader_tr)
+     scheduler = optim.lr_scheduler.OneCycleLR(
+         optimizer, max_lr=tc.lr_phase1, total_steps=total_steps,
+         pct_start=0.1, anneal_strategy="cos",
+     )
+
+     ckpt_dir = Path(tc.ckpt_dir)
+     ckpt_dir.mkdir(parents=True, exist_ok=True)
+
+     start_epoch = 1
+     best_val = float("inf")
+     if tc.resume:
+         ep, val = _load_checkpoint(model, optimizer, Path(tc.resume), device)
+         start_epoch = ep + 1
+         best_val = val
+         print(f"  Resumed from epoch {ep} (val={val:.4f})")
+
+     best_ckpt = ckpt_dir / "phase1_best.pt"
+     history = []
+     no_improve = 0
+
+     for epoch in range(start_epoch, tc.epochs_p1 + 1):
+         t_ep = time.time()
+         loss_fn._current_epoch = epoch  # warmup ramp for energy loss
+         tr_m = _train_epoch(model, loader_tr, optimizer, scheduler, loss_fn, tc, epoch, device)
+         val_m = _validate_epoch(model, loader_val, loss_fn, device)
+         elapsed = time.time() - t_ep
+
+         dsdr_g = val_m.get("dsdr_global", 0.0)
+         dsdr_sb = val_m.get("dsdr_sub_bass", 0.0)
+         dsdr_ba = val_m.get("dsdr_bass", 0.0)
+         dsdr_lm = val_m.get("dsdr_low_mid", 0.0)
+         dsdr_hi = val_m.get("dsdr_mid", 0.0)
+         cos_sim = val_m.get("cos_sim", 0.0)
+         res_abs = val_m.get("res_abs_mean", 0.0)
+
+         # Actionable flags
+         flag = ""
+         if dsdr_hi > 0.5 and dsdr_sb < 0.1:
+             flag = "  ⚠ SPECTRAL BIAS ↑mid/flat-sub"
+         elif dsdr_hi < -0.5:
+             flag = "  ⚠ mid regression"
+         elif dsdr_g < 0:
+             flag = "  ⚠ ΔSDR<0 → degrading signal"
+         if res_abs < 1e-5:
+             flag += "  ⚠ IDENTITY COLLAPSE (|r̂|→0)"
+
+         print(f"Epoch {epoch:3d}/{tc.epochs_p1} "
+               f"tr={tr_m['total']:.4f} val={val_m['total']:.4f} "
+               f"[mse={val_m['time_mse']:.4f} stft={val_m['stft']:.4f} "
+               f"ergy={val_m['energy_ratio']:.3f}×{val_m.get('w_energy_eff',0):.2f} "
+               f"res={val_m['res_l1']:.5f}] "
+               f"({elapsed:.1f}s)")
+         print(f"  ΔSDR global={dsdr_g:+.2f}dB "
+               f"sub_bass={dsdr_sb:+.2f}dB bass={dsdr_ba:+.2f}dB "
+               f"low_mid={dsdr_lm:+.2f}dB mid={dsdr_hi:+.2f}dB "
+               f"cos={cos_sim:.3f} |r̂|={res_abs:.5f}{flag}")
+
+         # Diagnostics every 2 epochs
+         if epoch % 2 == 0 or epoch == 1:
+             _diag_model(model, loader_val, device, epoch)
+
+         history.append({
+             "epoch": epoch, "train": tr_m, "val": val_m,
+             "dsdr_global": dsdr_g, "dsdr_sub_bass": dsdr_sb,
+             "dsdr_bass": dsdr_ba, "dsdr_low_mid": dsdr_lm,
+             "dsdr_mid": dsdr_hi, "cos_sim": cos_sim,
+         })
+
+         if val_m["total"] < best_val:
+             best_val = val_m["total"]
+             no_improve = 0
+             _save_checkpoint(model, optimizer, epoch, best_val, best_ckpt,
+                              extra={"phase": 1, "history": history})
+         else:
+             no_improve += 1
+
+         if epoch % tc.save_every == 0:
+             ep_ckpt = ckpt_dir / f"phase1_epoch{epoch:03d}.pt"
+             _save_checkpoint(model, optimizer, epoch, val_m["total"], ep_ckpt)
+
+         if tc.early_stop_patience > 0 and no_improve >= tc.early_stop_patience:
+             print(f"\n  ⏹ Early stopping at epoch {epoch} "
+                   f"(no improvement for {no_improve} epochs).")
+             break
+
+     print(f"\n  Phase 1 complete. Best val loss: {best_val:.4f}")
+     print(f"  Best checkpoint: {best_ckpt}")
+     return best_ckpt
+
+
+ # =============================================================================
+ # Phase 2
+ # =============================================================================
+
+ def train_phase2(
+     model: TransientNet,
+     tc: TrainConfig,
+     device: str,
+     p1_ckpt: Optional[Path] = None,
+     samples_p1_tr: Optional[List[SampleMeta]] = None,
+ ) -> Path:
+     print("\n" + "="*70)
+     print("PHASE 2 — Full mix + mixed batching [TransientNet]")
+     print("="*70)
+
+     # Load Phase-1 weights
+     if p1_ckpt and p1_ckpt.exists():
+         ep, val = _load_checkpoint(model, None, p1_ckpt, device)
+         print(f"  Loaded Phase-1 checkpoint: {p1_ckpt} (epoch={ep}, val={val:.4f})")
+     elif tc.ckpt_phase1:
+         ep, val = _load_checkpoint(model, None, Path(tc.ckpt_phase1), device)
+         print(f"  Loaded Phase-1 checkpoint: {tc.ckpt_phase1}")
+     else:
+         print("  [WARNING] No Phase-1 checkpoint — training from scratch")
+
+     # Full-mix corpus
+     mix_dir = Path(tc.mix_dir) if tc.mix_dir else None
+     if mix_dir is None or not mix_dir.exists():
+         raise FileNotFoundError(f"Full-mix directory not found: {tc.mix_dir}")
+
+     rng = np.random.default_rng(123)
+     print(f"  Loading full-mix corpus from {mix_dir} …")
+     all_mix = build_fullmix_corpus(mix_dir, rng, augment=True)
+
+     if not all_mix:
+         raise RuntimeError("No full-mix files found")
+
+     print(f"  Full-mix samples: {len(all_mix)}")
+     n_val = max(1, int(len(all_mix) * tc.val_frac))
+     rng.shuffle(all_mix)
+     mix_val = all_mix[:n_val]
+     mix_tr = all_mix[n_val:]
+
+     # Phase-1 drum corpus for mixed batching
+     if samples_p1_tr is None:
+         drum_dir = Path(tc.drum_dir)
+         if drum_dir.exists():
+             p1_rng = np.random.default_rng(42)
+             p1_all = build_drum_corpus(drum_dir, p1_rng, augment=True)
+             n_val_p1 = max(1, int(len(p1_all) * tc.val_frac))
+             p1_rng.shuffle(p1_all)
+             samples_p1_val = p1_all[:n_val_p1]
+             samples_p1_tr = p1_all[n_val_p1:]
+             print(f"  Phase-1 drum samples for mixed batching: {len(samples_p1_tr)}")
+         else:
+             print("  [WARNING] No drum directory — Phase-2 only")
+             samples_p1_tr = mix_tr[:max(1, len(mix_tr)//5)]
+
+     loss_fn = TransientLoss(
+         w_time=tc.loss_w_time,
+         w_stft=tc.loss_w_stft,
+         w_residual=tc.loss_w_residual,
+         w_energy=tc.loss_w_energy,
+     ).to(device)
+
+     loader_tr, loader_val_p2 = _make_loaders(
+         mix_tr, mix_val, model.cfg, tc, phase=2, samples_p1_tr=samples_p1_tr)
+
+     # Phase-1 val loader (forgetting monitor)
+     _p1_val = samples_p1_val if 'samples_p1_val' in locals() else mix_val
+     ds_val_p1 = FrameDataset(
+         _p1_val, model.cfg, n_virtual=800, rng_seed=888,
+         lf_band_hz=tc.lf_band_hz, augment=False)
+     loader_val_p1 = DataLoader(ds_val_p1, batch_size=tc.batch_size,
+                                shuffle=False, num_workers=0)
+
+     optimizer = optim.AdamW(model.parameters(), lr=tc.lr_phase2,
+                             weight_decay=tc.weight_decay, betas=(0.9, 0.999))
+     total_steps = tc.epochs_p2 * len(loader_tr)
+     scheduler = optim.lr_scheduler.OneCycleLR(
+         optimizer, max_lr=tc.lr_phase2, total_steps=total_steps,
+         pct_start=0.05, anneal_strategy="cos",
+     )
+
+     ckpt_dir = Path(tc.ckpt_dir)
+     best_ckpt = ckpt_dir / "phase2_best.pt"
+     best_val = float("inf")
+     p1_baseline = None
+     history = []
+
+     for epoch in range(1, tc.epochs_p2 + 1):
+         t_ep = time.time()
+         # Phase 2: energy loss fully ramped (add warmup_epochs to ensure ramp=1.0)
+         loss_fn._current_epoch = epoch + loss_fn.energy_warmup_epochs
+         tr_m = _train_epoch(model, loader_tr, optimizer, scheduler, loss_fn, tc, epoch, device)
+
+         val_p2 = _validate_epoch(model, loader_val_p2, loss_fn, device)
+         val_p1 = _validate_epoch(model, loader_val_p1, loss_fn, device)
+
+         if p1_baseline is None:
+             p1_baseline = val_p1["total"]
+         forgetting = val_p1["total"] - p1_baseline
+         elapsed = time.time() - t_ep
+
+         dsdr_g = val_p2.get("dsdr_global", 0.0)
+         dsdr_sb = val_p2.get("dsdr_sub_bass", 0.0)
+         dsdr_ba = val_p2.get("dsdr_bass", 0.0)
+         dsdr_lm = val_p2.get("dsdr_low_mid", 0.0)
+         dsdr_hi = val_p2.get("dsdr_mid", 0.0)
+         cos_p2 = val_p2.get("cos_sim", 0.0)
+         dsdr_g_p1 = val_p1.get("dsdr_global", 0.0)
+
+         flag = ""
+         if dsdr_g < 0:
+             flag = "  ⚠ ΔSDR<0"
+         if val_p2.get("res_abs_mean", 0.0) < 1e-5:
+             flag += "  ⚠ IDENTITY COLLAPSE"
+
+         print(f"Epoch {epoch:3d}/{tc.epochs_p2} "
+               f"tr={tr_m['total']:.4f} "
+               f"val_p2={val_p2['total']:.4f} "
+               f"val_p1={val_p1['total']:.4f} "
+               f"forgetting={forgetting:+.4f} ({elapsed:.1f}s)")
+         print(f"  ΔSDR(P2) global={dsdr_g:+.2f}dB "
+               f"sub_bass={dsdr_sb:+.2f}dB bass={dsdr_ba:+.2f}dB "
+               f"low_mid={dsdr_lm:+.2f}dB mid={dsdr_hi:+.2f}dB "
+               f"cos={cos_p2:.3f}{flag}")
+         print(f"  ΔSDR(P1) global={dsdr_g_p1:+.2f}dB [forgetting probe]")
+
+         if forgetting > tc.forgetting_thr:
+             print(f"  ⚠ Catastrophic forgetting (Δ={forgetting:.4f} > {tc.forgetting_thr})")
+
+         history.append({
+             "epoch": epoch, "train": tr_m,
+             "val_p2": val_p2, "val_p1": val_p1,
+             "forgetting": forgetting,
+         })
+
+         if val_p2["total"] < best_val:
+             best_val = val_p2["total"]
+             _save_checkpoint(model, optimizer, epoch, best_val, best_ckpt,
+                              extra={"phase": 2, "forgetting": forgetting,
+                                     "history": history})
+
+         if epoch % tc.save_every == 0:
+             ep_ckpt = ckpt_dir / f"phase2_epoch{epoch:03d}.pt"
+             _save_checkpoint(model, optimizer, epoch, val_p2["total"], ep_ckpt)
+
+     print(f"\n  Phase 2 complete. Best P2 val loss: {best_val:.4f}")
+     return best_ckpt
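The forgetting probe in `train_phase2` is simple bookkeeping: the Phase-1 validation loss at the first Phase-2 epoch becomes the baseline, and any later rise beyond a threshold is flagged. A standalone sketch (the `forgetting_thr` value here is an assumed example, not the actual `TrainConfig` default):

```python
# Forgetting probe: baseline = Phase-1 val loss at the first Phase-2 epoch;
# forgetting = current Phase-1 val loss - baseline; flag if above threshold.
forgetting_thr = 0.05  # assumed example threshold

def check_forgetting(p1_val_losses):
    baseline = p1_val_losses[0]
    return [(v - baseline, (v - baseline) > forgetting_thr)
            for v in p1_val_losses]

print(check_forgetting([0.40, 0.41, 0.48]))
```

Only the third epoch exceeds the threshold (0.08 > 0.05), so only it would trigger the warning.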
+
+
+ # =============================================================================
+ # CLI
+ # =============================================================================
+
+ def _build_parser() -> argparse.ArgumentParser:
+     p = argparse.ArgumentParser(
+         description="Train TransientNet (two-phase curriculum)",
+         formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+     )
+     p.add_argument("--phase", choices=["1", "2", "both"], default="both")
+     p.add_argument("--epochs-p1", type=int, default=50, dest="epochs_p1")
+     p.add_argument("--epochs-p2", type=int, default=30, dest="epochs_p2")
+     p.add_argument("--drum-dir", type=str, default="./Samples", dest="drum_dir")
+     p.add_argument("--extra-dirs", type=str, default="", dest="extra_dirs_csv",
+                    help="CSV file listing additional drum directories (loops + one-shots). "
+                         "Format: 'Percorso Directory,Tipo' (i.e. directory path, type) "
+                         "with types 'Drum Loop' or 'Drum One Shot'")
+     p.add_argument("--mix-dir", type=str, default="", dest="mix_dir")
+     p.add_argument("--ckpt-dir", type=str, default="./checkpoints_tnet", dest="ckpt_dir")
+     p.add_argument("--ckpt-phase1", type=str, default="", dest="ckpt_phase1")
+     p.add_argument("--resume", type=str, default="")
+     p.add_argument("--batch-size", type=int, default=BATCH_SIZE, dest="batch_size")
+     p.add_argument("--lr-p1", type=float, default=LR_PHASE1, dest="lr_phase1")
+     p.add_argument("--lr-p2", type=float, default=LR_PHASE2, dest="lr_phase2")
+     p.add_argument("--p1-mix-frac", type=float, default=PHASE1_MIX_FRAC, dest="p1_mix_frac")
+     p.add_argument("--device", type=str, default="cuda")
+     p.add_argument("--window", type=int, default=2048)
+     p.add_argument("--hop", type=int, default=512)
+     p.add_argument("--k-context", type=int, default=8, dest="k_context")
+     p.add_argument("--lf-band-hz", type=float, default=8000.0, dest="lf_band_hz")
+     p.add_argument("--early-stop", type=int, default=15, dest="early_stop_patience")
+
+     # Model architecture
+     p.add_argument("--conv-channels", type=int, default=48, dest="conv_channels")
+     p.add_argument("--gru-hidden", type=int, default=128, dest="gru_hidden")
+     p.add_argument("--cond-dim", type=int, default=64, dest="cond_dim")
+
+     # Loss weights
+     p.add_argument("--w-time", type=float, default=1.0, dest="loss_w_time")
+     p.add_argument("--w-stft", type=float, default=0.5, dest="loss_w_stft")
+     p.add_argument("--w-residual", type=float, default=0.01, dest="loss_w_residual")
+     p.add_argument("--w-energy", type=float, default=0.1, dest="loss_w_energy",
+                    help="Residual energy calibration weight (clamped log-ratio + warmup)")
+     p.add_argument("--audio-cache-files", type=int, default=512, dest="audio_cache_files",
+                    help="LRU audio file cache size (number of decoded files held in RAM). "
+                         "Increase on machines with >8 GB free RAM.")
+     p.add_argument("--chunk-cache-size", type=int, default=512, dest="chunk_cache_size",
+                    help="LRU preprocessed-chunk cache size. "
+                         "Set 0 to disable. ~700 KB per 4-second chunk (≈ 350 MB default).")
+     p.add_argument("--num-workers", type=int, default=4, dest="num_workers",
+                    help="DataLoader worker processes (default 4; set 2 on low-core machines).")
+
+     return p
+
+
+ def main():
+     args = _build_parser().parse_args()
+
+     # ── Configure caches from CLI args ────────────────────────────────────
+     global _AUDIO_CACHE, _CHUNK_CACHE
+     _AUDIO_CACHE = AudioCache(max_files=args.audio_cache_files)
+     _CHUNK_CACHE = ChunkCache(max_chunks=args.chunk_cache_size)
+
+     numba_status = "✓ Numba JIT" if _NUMBA_OK else "✗ no Numba (pip install numba for ~200× limiter speedup)"
+     print(f"  [cache] audio={args.audio_cache_files} files "
+           f"chunk={args.chunk_cache_size} chunks "
+           f"workers={args.num_workers} limiter={numba_status}")
+
+     # Model config
+     cfg = TransientNetConfig(
+         window_length=args.window,
+         hop_length=args.hop,
+         K_context=args.k_context,
+         lf_cutoff_hz=args.lf_band_hz,
+         conv_channels=args.conv_channels,
+         gru_hidden=args.gru_hidden,
+         cond_dim=args.cond_dim,
+     )
+     model = build_model(cfg)
+     device = args.device
+
+     if device.startswith("cuda") and not torch.cuda.is_available():
+         print("  [WARNING] CUDA not available — falling back to CPU")
+         device = "cpu"
+     model = model.to(device)
+
+     # Training config
+     tc = TrainConfig(
+         drum_dir            = args.drum_dir,
+         extra_dirs_csv      = args.extra_dirs_csv,
+         mix_dir             = args.mix_dir,
+         ckpt_dir            = args.ckpt_dir,
+         ckpt_phase1         = args.ckpt_phase1,
+         phase               = args.phase,
+         epochs_p1           = args.epochs_p1,
+         epochs_p2           = args.epochs_p2,
+         batch_size          = args.batch_size,
+         lr_phase1           = args.lr_phase1,
+         lr_phase2           = args.lr_phase2,
+         p1_mix_frac         = args.p1_mix_frac,
+         device              = device,
+         resume              = args.resume,
+         early_stop_patience = args.early_stop_patience,
+         lf_band_hz          = args.lf_band_hz,
+         loss_w_time         = args.loss_w_time,
+         loss_w_stft         = args.loss_w_stft,
+         loss_w_residual     = args.loss_w_residual,
+         loss_w_energy       = args.loss_w_energy,
+         num_workers         = args.num_workers,
+     )
+
+     # Run phases
+     p1_ckpt = None
+     if args.phase in ("1", "both"):
+         p1_ckpt = train_phase1(model, tc, device)
+
+     if args.phase in ("2", "both"):
+         train_phase2(model, tc, device, p1_ckpt=p1_ckpt)
+
+     print("\n  ✓ Training complete.")
+
+
+ if __name__ == "__main__":
+     main()
transient_net.py ADDED
@@ -0,0 +1,815 @@
+ """
+ transient_net.py — Limiter Transient Restoration via Learned Gain Inversion
+ ================================================================================
+
+ Replaces the SPADE-Unrolled architecture with a direct residual prediction
+ model. No sparse priors, no ADMM, no binary masks — the model learns to
+ invert the limiter's gain envelope from data.
+
+ Why this works better than SPADE for limiters
+ ---------------------------------------------
+ A brickwall limiter applies time-varying gain g[n] ∈ (0, 1]:
+
+     y[n] = x[n] · g[n]
+
+ The original signal is x[n] = y[n] / g[n], so the residual is:
+
+     r[n] = x[n] − y[n] = y[n] · (1/g[n] − 1)
+
+ Key properties that SPADE cannot exploit:
+   • g[n] is a smooth envelope (attack/release dynamics) — NOT sparse in DCT
+   • All samples are affected during release (no "reliable" samples)
+   • The correction is multiplicative and proportional to |y[n]|
+
+ This model directly predicts r̂[n] for each WOLA frame.
+
+ Architecture
+ ------------
+
+ Input: limited audio frame y_w (B, M) + K context frames (B, K, M)
+
+ SpectralFeatureExtractor
+   • log-mel spectrogram (n_mels=32) per frame
+   • short-time loudness (RMS dB)
+   → (B, K+1, n_mels+1)
+
+ ContextEncoder (causal GRU)
+   • 2-layer GRU, hidden_size=128
+   • Last hidden state → Linear → h_cond (B, D_cond=64)
+   → Global conditioning vector capturing limiter dynamics
+
+ FrameProcessor (dilated 1D convolutions + FiLM conditioning)
+   • Input: y_w reshaped to (B, 1, M)
+   • 6 residual blocks with dilation 1, 2, 4, 8, 16, 32
+   • FiLM modulation injects h_cond every 2 blocks
+   • Output: r̂ (B, M) — predicted residual
+
+ Output: x̂_w = y_w + r̂
+
+ Loss function (3 terms, no conflicts)
+ ---------------------------------------
+ 1. Time MSE: ‖x̂ − x_clean‖² (auto-focuses on affected regions)
+ 2. Multi-scale STFT: L1 on |STFT| at 3 scales (spectral fidelity)
+ 3. Residual L1: α · ‖r̂‖₁ (light sparsity on correction, prevents hallucination)
+
+ Total trainable parameters: ~265K
+
+ References
+ ----------
+ [1] Perraudin et al., "A fast Griffin-Lim algorithm", WASPAA 2013.
+ [2] Défossez et al., "Real Time Speech Enhancement in the Waveform Domain", 2020.
+ [3] Perez et al., "FiLM: Visual Reasoning with a General Conditioning Layer", AAAI 2018.
+ """
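The gain-inversion identity in the docstring can be sanity-checked numerically. A minimal standalone numpy sketch (synthetic signal and gain envelope, not the module's own code): since y = x·g, the residual y·(1/g − 1) reconstructs x exactly when added back.

```python
import numpy as np

# Sanity check of the limiter identity:
#   y[n] = x[n]·g[n]  ⇒  r[n] = x[n] − y[n] = y[n]·(1/g[n] − 1)
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                             # "clean" signal
g = 0.3 + 0.7 * np.abs(np.sin(np.linspace(0, 3, 1024)))   # smooth gain in (0, 1]
y = x * g                                                 # limited signal
r = y * (1.0 / g - 1.0)                                   # residual from the gain envelope
assert np.allclose(x, y + r)                              # exact reconstruction
```

This is why the network only needs to predict the residual r̂: the additive skip x̂ = y + r̂ recovers the clean frame once r̂ matches the true residual.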
+
+ from __future__ import annotations
+
+ import math
+ from dataclasses import dataclass, field
+ from typing import Literal, Optional, Tuple
+
+ import numpy as np
+
+ try:
+     import torch
+     import torch.nn as nn
+     import torch.nn.functional as F
+     _TORCH_OK = True
+ except ImportError:
+     raise ImportError("PyTorch is required (pip install torch)")
+
+
+ # =============================================================================
+ # Config
+ # =============================================================================
+
+ @dataclass
+ class TransientNetConfig:
+     """All hyperparameters for TransientNet."""
+
+     # ── Signal / WOLA ─────────────────────────────────────────────────────
+     window_length: int = 2048        # M — samples per WOLA frame
+     hop_length: int = 512            # WOLA hop
+     sample_rate: int = 44100
+
+     # ── Context encoder ───────────────────────────────────────────────────
+     K_context: int = 8               # past frames fed to GRU
+     n_mels: int = 32                 # mel bands for feature extraction
+     gru_hidden: int = 128            # GRU hidden size
+     gru_layers: int = 2              # GRU depth
+     cond_dim: int = 64               # conditioning vector dimension
+
+     # ── Frame processor ───────────────────────────────────────────────────
+     # Dilated conv stack with FiLM conditioning.
+     # Receptive field = 1 + (k−1)·Σ(dilations) = 1 + 2·63 = 127 samples
+     # (133 counting the kernel-7 input conv) ≈ 3 ms at 44.1 kHz,
+     # enough to capture limiter attack transients.
+     # Longer-scale release dynamics are handled by GRU conditioning.
+     conv_channels: int = 48          # width of conv blocks
+     n_res_blocks: int = 6            # residual blocks
+     kernel_size: int = 3             # conv kernel size in res blocks
+     dilations: Tuple[int, ...] = (1, 2, 4, 8, 16, 32)
+     film_every: int = 2              # inject FiLM conditioning every N blocks
+
+     # ── LF/HF split (for hybrid inference) ────────────────────────────────
+     lf_cutoff_hz: float = 8000.0     # crossover for hybrid processing
+
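The receptive-field bookkeeping for the dilated stack can be computed directly. Sketch under the assumption (true of `ResBlock` below) that each block contains a single dilated conv, which widens the field by (k−1)·d samples per block:

```python
# Receptive field of a dilated conv stack: each block with kernel k and
# dilation d adds (k-1)*d samples; the kernel-7 input projection adds 6 more.
k = 3
dilations = (1, 2, 4, 8, 16, 32)
rf_blocks = 1 + sum((k - 1) * d for d in dilations)
rf_total = rf_blocks + (7 - 1)
print(rf_blocks, rf_total)   # 127 133
print(rf_total / 44100)      # ≈ 0.003 s at 44.1 kHz
```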
+
+ # =============================================================================
+ # Spectral Feature Extractor (reused from spade_unrolled.py, identical)
+ # =============================================================================
+
+ def _dct2(x: torch.Tensor) -> torch.Tensor:
+     """Batched orthonormal DCT-II. x: (..., N) → (..., N)."""
+     N = x.shape[-1]
+     v = torch.cat([x[..., ::2], x[..., 1::2].flip(-1)], dim=-1)
+     V = torch.fft.fft(v.double(), dim=-1)
+     k = torch.arange(N, device=x.device, dtype=torch.float64)
+     tw = torch.exp(-1j * math.pi * k / (2.0 * N))
+     C = (tw * V).real * math.sqrt(2.0 / N)
+     C = C.clone()
+     C[..., 0] /= math.sqrt(2.0)
+     return C.to(x.dtype)
+
+
+ class SpectralFeatureExtractor(nn.Module):
+     """Converts raw audio frame → log-mel + loudness features.
+     Identical to the one in spade_unrolled.py — no trainable params."""
+
+     def __init__(self, cfg: TransientNetConfig):
+         super().__init__()
+         self.M = cfg.window_length
+         self.sr = cfg.sample_rate
+         self.n_mels = cfg.n_mels
+
+         mel_filters = self._build_mel_filterbank()
+         self.register_buffer("mel_filters", mel_filters)
+
+     def _build_mel_filterbank(self) -> torch.Tensor:
+         def hz_to_mel(f):
+             return 2595.0 * math.log10(1.0 + f / 700.0)
+         def mel_to_hz(m):
+             return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
+
+         mel_lo = hz_to_mel(0.0)
+         mel_hi = hz_to_mel(self.sr / 2.0)
+         mels = torch.linspace(mel_lo, mel_hi, self.n_mels + 2)
+         hz_pts = torch.tensor([mel_to_hz(m) for m in mels])
+         bin_pts = (hz_pts / self.sr * self.M).long().clamp(0, self.M - 1)
+
+         filters = torch.zeros(self.n_mels, self.M)
+         for i in range(self.n_mels):
+             lo, mid, hi = bin_pts[i], bin_pts[i + 1], bin_pts[i + 2]
+             if mid > lo:
+                 filters[i, lo:mid] = torch.linspace(0, 1, int(mid - lo))
+             if hi > mid:
+                 filters[i, mid:hi] = torch.linspace(1, 0, int(hi - mid))
+
+         area = filters.sum(dim=1, keepdim=True).clamp(min=1e-8)
+         return filters / area
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """x: (B, M) → (B, n_mels+1)"""
+         dct_coeff = _dct2(x.float())
+         power_spec = dct_coeff[:, :self.M] ** 2
+         mel_spec = torch.matmul(power_spec, self.mel_filters.T)
+         log_mel = torch.log(mel_spec.clamp(min=1e-10))
+
+         rms = x.pow(2).mean(dim=-1, keepdim=True).clamp(min=1e-10).sqrt()
+         lufs = 20.0 * torch.log10(rms.clamp(min=1e-10))
+
+         return torch.cat([log_mel, lufs], dim=-1)
+
+
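The HTK-style mel formulas used in `_build_mel_filterbank` are exact inverses of each other, and are scaled so that 1 kHz sits near 1000 mel. A quick standalone check:

```python
import math

# Same mel <-> Hz conversions as in _build_mel_filterbank.
def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

assert abs(mel_to_hz(hz_to_mel(440.0)) - 440.0) < 1e-9   # exact round trip
print(hz_to_mel(1000.0))   # ≈ 1000 mel by construction of the scale
```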
+ # =============================================================================
+ # Context Encoder (GRU → conditioning vector)
+ # =============================================================================
+
+ class ContextEncoder(nn.Module):
+     """
+     Causal GRU that produces a global conditioning vector from K+1 frames.
+
+     Unlike the SPADE version (which output 5 constrained parameters), this
+     outputs a general D_cond-dimensional vector used for FiLM conditioning.
+     The FrameProcessor decides how to use this information.
+
+     ~190K parameters (GRU dominates).
+     """
+
+     def __init__(self, cfg: TransientNetConfig):
+         super().__init__()
+         n_feats = cfg.n_mels + 1
+         proj_dim = 64
+
+         self.input_proj = nn.Sequential(
+             nn.Linear(n_feats, proj_dim),
+             nn.LayerNorm(proj_dim),
+             nn.GELU(),
+         )
+         self.gru = nn.GRU(
+             input_size=proj_dim,
+             hidden_size=cfg.gru_hidden,
+             num_layers=cfg.gru_layers,
+             batch_first=True,
+             dropout=0.1 if cfg.gru_layers > 1 else 0.0,
+         )
+         self.head = nn.Sequential(
+             nn.Linear(cfg.gru_hidden, cfg.cond_dim),
+             nn.GELU(),
+         )
+
+     def forward(self, feat_seq: torch.Tensor) -> torch.Tensor:
+         """
+         feat_seq: (B, K+1, n_feats) — spectral features, last = current frame
+         returns:  (B, D_cond) — conditioning vector
+         """
+         projected = self.input_proj(feat_seq)
+         gru_out, _ = self.gru(projected)
+         h_t = gru_out[:, -1, :]  # last step = current frame
+         return self.head(h_t)
+
+
+ # =============================================================================
+ # FiLM (Feature-wise Linear Modulation)
+ # =============================================================================
+
+ class FiLM(nn.Module):
+     """Condition feature maps on a global vector via affine transform.
+     γ, β = Linear(h_cond)  →  out = γ * x + β"""
+
+     def __init__(self, cond_dim: int, channels: int):
+         super().__init__()
+         self.gamma = nn.Linear(cond_dim, channels)
+         self.beta = nn.Linear(cond_dim, channels)
+         # Init: γ=1, β=0 (identity at start): zero the weights and set the
+         # γ bias to one, so the output equals x regardless of h_cond.
+         nn.init.zeros_(self.gamma.weight)
+         nn.init.ones_(self.gamma.bias)
+         nn.init.zeros_(self.beta.weight)
+         nn.init.zeros_(self.beta.bias)
+
+     def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
+         """
+         x: (B, C, T) — feature maps
+         h: (B, D_cond) — conditioning vector
+         """
+         g = self.gamma(h).unsqueeze(-1)  # (B, C, 1)
+         b = self.beta(h).unsqueeze(-1)   # (B, C, 1)
+         return g * x + b
+
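The per-channel affine with (B, C, 1) factors broadcasting over (B, C, T) maps, and the identity behaviour at initialization, can be mimicked with a small numpy stand-in (toy shapes, not the module itself):

```python
import numpy as np

# FiLM application as in forward(): per-channel affine, broadcast over time.
B, C, T = 2, 4, 8
x = np.random.default_rng(0).standard_normal((B, C, T))
gamma = np.ones((B, C, 1))    # identity init: γ = 1 for every channel
beta = np.zeros((B, C, 1))    # identity init: β = 0
out = gamma * x + beta        # (B, C, 1) broadcasts across the T axis
assert np.allclose(out, x)    # FiLM starts as a no-op
```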
+
+ # =============================================================================
+ # Residual Block (dilated 1D conv)
+ # =============================================================================
+
+ class ResBlock(nn.Module):
+     """Residual block: dilated depthwise-separable conv + pointwise + skip.
+
+     Depthwise-separable saves parameters while maintaining receptive field.
+     """
+
+     def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
+         super().__init__()
+         padding = dilation * (kernel_size - 1) // 2
+         self.net = nn.Sequential(
+             # Dilated depthwise conv (groups=channels for efficiency)
+             nn.Conv1d(channels, channels, kernel_size, padding=padding,
+                       dilation=dilation, groups=channels, bias=False),
+             nn.Conv1d(channels, channels, 1, bias=True),  # pointwise
+             nn.GELU(),
+             nn.Conv1d(channels, channels, 1, bias=True),  # second pointwise
+         )
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         return x + self.net(x)
+
+
285
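The receptive field of a stack of dilated convolutions grows as 1 + Σ d·(k − 1); for k = 3 and the default dilations (1, 2, 4, 8) that is 31 samples. A standalone sketch that also probes this empirically via the gradient of one output sample (all-ones weights are used so no terms cancel):

```python
import torch
import torch.nn as nn

def receptive_field(kernel_size: int, dilations) -> int:
    # Each conv with dilation d and kernel k adds d*(k-1) samples of context.
    return 1 + sum(d * (kernel_size - 1) for d in dilations)

# Chain of dilated convs mirroring the ResBlock dilation schedule
convs = nn.Sequential(*[
    nn.Conv1d(1, 1, 3, padding=d, dilation=d, bias=False)
    for d in (1, 2, 4, 8)
])
for p in convs.parameters():
    nn.init.ones_(p)

x = torch.zeros(1, 1, 101, requires_grad=True)
convs(x)[0, 0, 50].backward()          # gradient marks the receptive field
touched = int((x.grad[0, 0] != 0).sum())
assert touched == receptive_field(3, (1, 2, 4, 8)) == 31
```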
+# =============================================================================
+# Frame Processor (dilated TCN + FiLM)
+# =============================================================================
+
+class FrameProcessor(nn.Module):
+    """
+    Processes a single WOLA frame to predict the additive residual.
+
+    Architecture: dilated TCN with FiLM conditioning from the GRU.
+
+        y_w (B, 1, M)
+        → Conv1d(1, C, k=7)
+        → [ResBlock(C, d=1), ResBlock(C, d=2), FiLM, ...] × 3 groups
+        → Conv1d(C, 1, k=1)
+        → r̂ (B, M)
+
+    The dilated convolutions capture local transient structure.
+    FiLM conditioning injects global limiter-dynamics awareness.
+    The residual connection (x̂ = y + r̂) is handled outside this module.
+
+    ~75K parameters.
+    """
+
+    def __init__(self, cfg: TransientNetConfig):
+        super().__init__()
+        C = cfg.conv_channels
+        D = cfg.cond_dim
+
+        # Input projection
+        self.input_conv = nn.Sequential(
+            nn.Conv1d(1, C, kernel_size=7, padding=3, bias=True),
+            nn.GELU(),
+        )
+
+        # Residual blocks with interleaved FiLM
+        self.blocks = nn.ModuleList()
+        self.films = nn.ModuleDict()
+
+        for i, d in enumerate(cfg.dilations):
+            self.blocks.append(ResBlock(C, kernel_size=cfg.kernel_size, dilation=d))
+            # FiLM every N blocks
+            if (i + 1) % cfg.film_every == 0:
+                self.films[str(i)] = FiLM(D, C)
+
+        # Output projection → residual
+        self.output_conv = nn.Conv1d(C, 1, kernel_size=1, bias=True)
+
+        # Init output near zero (start as identity: x̂ ≈ y)
+        nn.init.zeros_(self.output_conv.weight)
+        nn.init.zeros_(self.output_conv.bias)
+
+    def forward(self, y_w: torch.Tensor, h_cond: torch.Tensor) -> torch.Tensor:
+        """
+        y_w:    (B, M) — windowed limited frame
+        h_cond: (B, D_cond) — conditioning from GRU
+        returns: (B, M) — predicted residual r̂
+        """
+        x = y_w.unsqueeze(1)    # (B, 1, M)
+        x = self.input_conv(x)  # (B, C, M)
+
+        for i, block in enumerate(self.blocks):
+            x = block(x)
+            if str(i) in self.films:
+                x = self.films[str(i)](x, h_cond)
+
+        r = self.output_conv(x)  # (B, 1, M)
+        return r.squeeze(1)      # (B, M)
+
+
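Because `output_conv` is zero-initialised, the processor emits r̂ = 0 for any input at the start of training, so x̂ = y exactly. A standalone sketch of that property using only the output head (shapes are illustrative):

```python
import torch
import torch.nn as nn

# Miniature of the zero-initialised 1x1 output head
head = nn.Conv1d(32, 1, kernel_size=1, bias=True)
nn.init.zeros_(head.weight)
nn.init.zeros_(head.bias)

feats = torch.randn(4, 32, 512)          # whatever the TCN produced
r_hat = head(feats).squeeze(1)           # (B, M)
assert torch.count_nonzero(r_hat) == 0   # residual is exactly zero

y = torch.randn(4, 512)
assert torch.allclose(y + r_hat, y)      # x_hat == y: identity at init
```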
+# =============================================================================
+# Full TransientNet model
+# =============================================================================
+
+class TransientNet(nn.Module):
+    """
+    Full transient restoration model.
+
+    Forward: (y_w, ctx_frames) → x̂_w = y_w + r̂
+    """
+
+    def __init__(self, cfg: TransientNetConfig):
+        super().__init__()
+        self.cfg = cfg
+
+        self.feature_extractor = SpectralFeatureExtractor(cfg)
+        self.context_encoder = ContextEncoder(cfg)
+        self.frame_processor = FrameProcessor(cfg)
+
+    def forward(
+        self,
+        yc_w: torch.Tensor,        # (B, M) — windowed limited frame
+        ctx_frames: torch.Tensor,  # (B, K, M) — K previous windowed frames
+    ) -> Tuple[torch.Tensor, torch.Tensor]:
+        """
+        Returns
+        -------
+        x_hat : (B, M) — restored frame
+        residual : (B, M) — predicted residual (for regularisation)
+        """
+        B, M = yc_w.shape
+        K = ctx_frames.shape[1]
+
+        # ── Extract spectral features for all frames ──────────────────────
+        all_frames = torch.cat([ctx_frames, yc_w.unsqueeze(1)], dim=1)  # (B, K+1, M)
+        feats = torch.stack(
+            [self.feature_extractor(all_frames[:, t, :]) for t in range(K + 1)],
+            dim=1,
+        )  # (B, K+1, n_feats)
+
+        # ── Context encoding → conditioning vector ────────────────────────
+        h_cond = self.context_encoder(feats)  # (B, D_cond)
+
+        # ── Frame processing → residual ───────────────────────────────────
+        residual = self.frame_processor(yc_w, h_cond)  # (B, M)
+
+        # ── Output: additive residual ─────────────────────────────────────
+        x_hat = yc_w + residual
+
+        return x_hat, residual
+
+    def parameter_count(self) -> int:
+        return sum(p.numel() for p in self.parameters() if p.requires_grad)
+
+
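The per-frame list comprehension in `forward` is easy to read; when the extractor operates purely per frame, the same result can be obtained with one call by folding time into the batch dimension. A sketch with a hypothetical stand-in extractor (`fake_extractor` is illustration only, not the real `SpectralFeatureExtractor`):

```python
import torch

B, K, M, n_feats = 4, 4, 512, 35

def fake_extractor(frames):  # stand-in: (N, M) -> (N, n_feats)
    return frames[:, :n_feats] * 2.0

all_frames = torch.randn(B, K + 1, M)

# Loop version (as in TransientNet.forward)
looped = torch.stack(
    [fake_extractor(all_frames[:, t, :]) for t in range(K + 1)], dim=1
)

# Batched version: fold time into the batch dim, one extractor call
flat = all_frames.reshape(B * (K + 1), M)
batched = fake_extractor(flat).reshape(B, K + 1, n_feats)

assert torch.equal(looped, batched)
```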
+# =============================================================================
+# Loss function (4 terms — clean, no conflicts)
+# =============================================================================
+
+class TransientLoss(nn.Module):
+    """
+    4-term loss for transient restoration.
+
+    1. Time MSE: ‖x̂ − x_clean‖²
+       PRIMARY signal. Auto-focuses on affected regions (unaffected regions
+       have near-zero error already, so the gradient is negligible there).
+
+    2. Multi-scale STFT: L1 on magnitude spectrograms.
+       Captures spectral-envelope fidelity across time-frequency resolutions.
+
+    3. Residual L1: α · mean(|r̂|)
+       Light regularisation preventing the model from hallucinating energy
+       where no correction is needed. Much weaker than SPADE's sparsity
+       loss because the physical prior is different: corrections are NOT
+       expected to be sparse in frequency — they are sparse in TIME.
+
+    4. Energy ratio: β · mean(log²(E_pred / E_gt))
+       Penalises residual-magnitude miscalibration. The model can achieve
+       high cos_sim (correct shape) while over/under-shooting amplitude.
+       This term forces energy alignment between the predicted and GT
+       residual, preventing the ΔSDR oscillation observed in early training
+       where cos_sim was 0.96+ but ΔSDR swung ±3 dB between epochs.
+    """
+
+    def __init__(
+        self,
+        w_time: float = 1.0,
+        w_stft: float = 0.5,
+        w_residual: float = 0.01,  # very light: just anti-hallucination
+        w_energy: float = 0.1,     # residual energy calibration (ramped via warmup)
+        stft_wins: Tuple[int, ...] = (512, 1024, 2048),
+        energy_warmup_epochs: int = 5,  # ramp energy loss over this many epochs
+        energy_log_clamp: float = 4.0,  # clamp |log(ratio)| to prevent init explosion
+    ):
+        super().__init__()
+        self.w_time = w_time
+        self.w_stft = w_stft
+        self.w_residual = w_residual
+        self.w_energy = w_energy
+        self.stft_wins = stft_wins
+        self.energy_warmup_epochs = energy_warmup_epochs
+        self.energy_log_clamp = energy_log_clamp
+        self._current_epoch = 0  # set by training loop
+
+    def forward(
+        self,
+        x_hat: torch.Tensor,     # (B, M) — model output
+        x_clean: torch.Tensor,   # (B, M) — ground truth
+        yc_w: torch.Tensor,      # (B, M) — limited input (for diagnostics)
+        residual: torch.Tensor,  # (B, M) — predicted residual
+    ) -> Tuple[torch.Tensor, dict]:
+
+        details = {}
+
+        # ── 1. Time-domain MSE ────────────────────────────────────────────
+        loss_time = F.mse_loss(x_hat, x_clean)
+        details["time_mse"] = loss_time.item()
+
+        # ── 2. Multi-scale STFT L1 ───────────────────────────────────────
+        loss_stft = x_hat.new_zeros(1)
+        for win in self.stft_wins:
+            if win > x_hat.shape[-1]:
+                continue
+            hop = win // 4
+            wnd = torch.hann_window(win, device=x_hat.device, dtype=x_hat.dtype)
+            S_hat = torch.stft(x_hat.float(), n_fft=win, hop_length=hop,
+                               win_length=win, window=wnd, return_complex=True)
+            S_clean = torch.stft(x_clean.float(), n_fft=win, hop_length=hop,
+                                 win_length=win, window=wnd, return_complex=True)
+            loss_stft = loss_stft + F.l1_loss(S_hat.abs(), S_clean.abs())
+        loss_stft = loss_stft / max(1, len(self.stft_wins))
+        details["stft"] = loss_stft.item()
+
+        # ── 3. Residual L1 (anti-hallucination) ──────────────────────────
+        loss_res = residual.abs().mean()
+        details["res_l1"] = loss_res.item()
+
+        # ── 4. Residual energy ratio (amplitude calibration) ─────────────
+        # Clamped log²(E_pred / E_gt) with linear warmup.
+        #
+        # Problem solved: at init, residual ≈ 0 → E_pred ≈ ε → log → -∞.
+        # The unclamped version produced loss ≈ 130, saturating grad_clip
+        # and drowning the MSE/STFT signal. The model learned to push
+        # energy in ANY direction (cos_sim < 0) instead of the right one.
+        #
+        # Fix: (a) clamp |log(ratio)| ≤ 4 (≈ ±17 dB in energy, generous
+        #          but bounded)
+        #      (b) warmup ramp: w_energy × min(1, epoch/warmup_epochs)
+        #          → first epochs: direction only (MSE+STFT)
+        #            later epochs: direction + amplitude (all 4 terms)
+        gt_residual = x_clean - yc_w
+        E_pred = residual.pow(2).sum(-1) + 1e-8     # (B,)
+        E_gt = gt_residual.pow(2).sum(-1) + 1e-8    # (B,)
+        log_ratio = (E_pred / E_gt).log().clamp(-self.energy_log_clamp,
+                                                self.energy_log_clamp)
+        loss_energy = log_ratio.pow(2).mean()
+        details["energy_ratio"] = loss_energy.item()
+
+        # Warmup ramp (0 at epoch 0, 1.0 at epoch >= warmup_epochs)
+        if self.energy_warmup_epochs > 0:
+            ramp = min(1.0, self._current_epoch / self.energy_warmup_epochs)
+        else:
+            ramp = 1.0
+        w_energy_eff = self.w_energy * ramp
+        details["w_energy_eff"] = w_energy_eff
+
+        # ── Total ─────────────────────────────────────────────────────────
+        total = (self.w_time * loss_time
+                 + self.w_stft * loss_stft.squeeze()
+                 + self.w_residual * loss_res
+                 + w_energy_eff * loss_energy)
+        details["total"] = total.item()
+
+        return total, details
+
+
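The clamp and the warmup are plain arithmetic; a standalone sketch with made-up energies shows both the bounded behaviour at init and the linear ramp (the helper name `energy_term` is illustrative, not from the module):

```python
import math

def energy_term(E_pred: float, E_gt: float, clamp: float = 4.0,
                eps: float = 1e-8) -> float:
    """Clamped squared log energy ratio, mirroring loss term 4."""
    log_ratio = math.log((E_pred + eps) / (E_gt + eps))
    log_ratio = max(-clamp, min(clamp, log_ratio))
    return log_ratio ** 2

# At init the predicted residual is ~0, so the raw log ratio would blow up;
# the clamp caps the term at 4**2 = 16.
assert energy_term(0.0, 1.0) == 16.0

# Linear warmup: effective weight ramps from 0 to w_energy over 5 epochs.
w_energy, warmup = 0.1, 5
ramps = [w_energy * min(1.0, epoch / warmup) for epoch in range(7)]
assert ramps[0] == 0.0 and ramps[5] == ramps[6] == 0.1
```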
+# =============================================================================
+# WOLA inference wrapper
+# =============================================================================
+
+class TransientNetInference:
+    """
+    Process a full audio signal via WOLA (Weighted Overlap-Add).
+
+    Usage
+    -----
+    model = TransientNet(cfg)
+    model.load_state_dict(...)
+    model.eval()
+
+    infer = TransientNetInference(model, device="cuda")
+    x_hat = infer.process(y_limited, sample_rate=44100)
+    """
+
+    def __init__(
+        self,
+        model: TransientNet,
+        device: str = "cuda",
+        batch_frames: int = 256,
+    ):
+        self.model = model.to(device)
+        self.model.eval()
+        self.cfg = model.cfg
+        self.device = device
+        self.batch_frames = batch_frames
+
+    @torch.no_grad()
+    def process(self, y_limited: np.ndarray, sample_rate: int = 44100) -> np.ndarray:
+        """
+        y_limited : (N,) — limited mono audio (float32 or float64)
+        returns   : (N,) — restored audio (float64)
+        """
+        from scipy.signal.windows import hann as _hann
+
+        M = self.cfg.window_length
+        a = self.cfg.hop_length
+        K = self.cfg.K_context
+
+        y = y_limited.astype(np.float64)
+        dc = float(np.mean(y))
+        y -= dc
+
+        N_sig = len(y)
+
+        # Pad to full frames
+        n_frames = max(1, int(np.ceil(N_sig / a)))
+        N_pad = (n_frames - 1) * a + M
+        y_pad = np.zeros(N_pad, dtype=np.float64)
+        y_pad[:N_sig] = y
+
+        # WOLA window (sqrt-Hann for analysis + synthesis)
+        win = np.sqrt(_hann(M, sym=False)).astype(np.float64)
+
+        # Extract all frames
+        frames_np = np.array([
+            y_pad[i * a : i * a + M] * win
+            for i in range(n_frames)
+        ], dtype=np.float32)  # (F, M)
+
+        # Process in batches
+        out_frames = np.zeros_like(frames_np)
+        F_total = len(frames_np)
+
+        for b_start in range(0, F_total, self.batch_frames):
+            b_end = min(b_start + self.batch_frames, F_total)
+
+            # Current frames
+            yc_batch = torch.from_numpy(frames_np[b_start:b_end]).to(self.device)
+
+            # Context frames (K previous per frame)
+            ctx_list = []
+            for fi in range(b_start, b_end):
+                ctx = np.zeros((K, M), dtype=np.float32)
+                for k in range(K):
+                    ci = fi - (K - k)
+                    if 0 <= ci < F_total:
+                        ctx[k] = frames_np[ci]
+                ctx_list.append(ctx)
+            ctx_batch = torch.from_numpy(np.array(ctx_list)).to(self.device)
+
+            x_hat_batch, _ = self.model(yc_batch, ctx_batch)
+            out_frames[b_start:b_end] = x_hat_batch.cpu().numpy()
+
+        # WOLA synthesis (overlap-add with window)
+        output = np.zeros(N_pad, dtype=np.float64)
+        norm = np.zeros(N_pad, dtype=np.float64)
+
+        for i in range(n_frames):
+            idx = i * a
+            output[idx:idx + M] += out_frames[i].astype(np.float64) * win
+            norm[idx:idx + M] += win ** 2
+
+        norm = np.maximum(norm, 1e-12)
+        output /= norm
+        output[:N_sig] += dc
+
+        return output[:N_sig]
+
+
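With the sqrt-Hann analysis/synthesis pair at 75% overlap (the defaults M = 512, hop = 128), win² overlap-adds to the constant 2 everywhere except the ramp-in/ramp-out edges, so the `norm` division only matters at the signal boundaries. A quick numeric check:

```python
import numpy as np
from scipy.signal.windows import hann

M, a = 512, 128                  # window and hop (75% overlap)
win = np.sqrt(hann(M, sym=False))

n_frames = 32
norm = np.zeros((n_frames - 1) * a + M)
for i in range(n_frames):
    norm[i * a : i * a + M] += win ** 2

interior = norm[M:-M]            # away from the edges
# Periodic Hann at hop M/4 sums to exactly 2 (the cosine phases cancel)
assert np.allclose(interior, 2.0)
```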
+# =============================================================================
+# Hybrid inference (LF: TransientNet, HF: classical SPADE)
+# =============================================================================
+
+class HybridTransientInference:
+    """
+    LR crossover split:
+        HF (> crossover_hz): classical SPADE v11/v13 (unchanged, works well)
+        LF (< crossover_hz): TransientNet (learned gain inversion)
+    """
+
+    def __init__(
+        self,
+        model: TransientNet,
+        crossover_hz: float = 8000.0,
+        delta_db: float = 3.5,
+        device: str = "cuda",
+        # HF SPADE params (passed through to v11/v13)
+        hf_delta_db: float = 3.5,
+        hf_max_gain_db: float = 9.0,
+        hf_release_ms: float = 80.0,
+        hf_max_iter: int = 200,
+        hf_window_length: int = 2048,
+        hf_hop_length: int = 512,
+    ):
+        self.crossover_hz = crossover_hz
+        self.delta_db = delta_db
+        self.device = device
+        self._lf_infer = TransientNetInference(model, device=device)
+
+        # Store HF params
+        self.hf_delta_db = hf_delta_db
+        self.hf_max_gain_db = hf_max_gain_db
+        self.hf_release_ms = hf_release_ms
+        self.hf_max_iter = hf_max_iter
+        self.hf_window_length = hf_window_length
+        self.hf_hop_length = hf_hop_length
+
+    @staticmethod
+    def _lr_split(x, crossover_hz, sr):
+        from scipy.signal import butter, sosfiltfilt
+        fc = float(np.clip(crossover_hz, 1.0, sr / 2.0 - 1.0))
+        sos = butter(2, fc, btype="low", fs=sr, output="sos")
+        lp = sosfiltfilt(sos, x)
+        hp = x - lp
+        return lp, hp
+
+    def _process_hf(self, hf_mono, sr):
+        try:
+            from spade_declip_v13 import declip as _declip, DeclipParams
+        except ImportError:
+            try:
+                from spade_declip_v11 import declip as _declip, DeclipParams
+            except ImportError:
+                # No SPADE available — return HF unchanged
+                return hf_mono
+
+        params = DeclipParams(
+            algo="sspade",
+            frame="rdft",
+            mode="soft",
+            delta_db=self.hf_delta_db,
+            window_length=self.hf_window_length,
+            hop_length=self.hf_hop_length,
+            max_gain_db=self.hf_max_gain_db,
+            release_ms=self.hf_release_ms,
+            max_iter=self.hf_max_iter,
+            sample_rate=sr,
+            use_gpu=(self.device != "cpu"),
+            show_progress=False,
+            verbose=False,
+        )
+        fixed, _ = _declip(hf_mono, params)
+        return fixed
+
+    @torch.no_grad()
+    def process(self, y_limited: np.ndarray, sample_rate: int = 44100) -> np.ndarray:
+        mono = y_limited.ndim == 1
+        if mono:
+            y_limited = y_limited[:, None]
+        _, C = y_limited.shape
+        out_channels = []
+
+        for ch in range(C):
+            yc = y_limited[:, ch].astype(np.float64)
+            lf, hf = self._lr_split(yc, self.crossover_hz, sample_rate)
+
+            hf_rec = self._process_hf(hf, sample_rate)
+            lf_rec = self._lf_infer.process(lf.astype(np.float32), sample_rate)
+
+            L = min(len(lf_rec), len(hf_rec))
+            combined = lf_rec[:L] + hf_rec[:L]
+            out_channels.append(combined)
+
+        result = np.column_stack(out_channels)
+        return result[:, 0] if mono else result
+
+
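Because the high band is defined as hp = x − lp, the two bands always sum back to the input exactly, whatever the lowpass does; the split itself is lossless, and all error comes from per-band processing. A standalone check with a made-up test signal:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

sr, fc = 44100, 8000.0
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)

sos = butter(2, fc, btype="low", fs=sr, output="sos")
lp = sosfiltfilt(sos, x)  # zero-phase low band
hp = x - lp               # complementary high band by construction

# Perfect reconstruction: the split introduces no error at all
assert np.allclose(lp + hp, x)
```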
+# =============================================================================
+# Model factory
+# =============================================================================
+
+def build_model(cfg: Optional[TransientNetConfig] = None) -> TransientNet:
+    if cfg is None:
+        cfg = TransientNetConfig()
+    model = TransientNet(cfg)
+    n = model.parameter_count()
+    print(f"[TransientNet] Built model: {n:,} trainable parameters")
+    return model
+
+
+# =============================================================================
+# Smoke test
+# =============================================================================
+
+def _smoke_test():
+    print("=" * 60)
+    print("TransientNet — Smoke Test")
+    print("=" * 60)
+
+    cfg = TransientNetConfig(
+        window_length=512,
+        hop_length=128,
+        K_context=4,
+        n_mels=16,
+        gru_hidden=64,
+        gru_layers=1,
+        cond_dim=32,
+        conv_channels=32,
+        n_res_blocks=4,
+        dilations=(1, 2, 4, 8),
+    )
+    model = build_model(cfg)
+    model.eval()
+
+    B = 4
+    M = cfg.window_length
+    K = cfg.K_context
+
+    yc_w = torch.randn(B, M) * 0.3
+    ctx = torch.randn(B, K, M) * 0.3
+
+    with torch.no_grad():
+        x_hat, residual = model(yc_w, ctx)
+
+    print(f"  Input yc_w:     {tuple(yc_w.shape)}")
+    print(f"  Output x_hat:   {tuple(x_hat.shape)}")
+    print(f"  Residual:       {tuple(residual.shape)}")
+    print(f"  Residual range: [{residual.min():.4f}, {residual.max():.4f}]")
+    print(f"  Residual mean:  {residual.mean():.6f} (should be ~0 at init)")
+
+    # Loss test
+    x_clean = yc_w + torch.randn_like(yc_w) * 0.05
+    loss_fn = TransientLoss()
+    loss, details = loss_fn(x_hat, x_clean, yc_w, residual)
+    print(f"\n  Loss: {loss.item():.6f}")
+    for k, v in details.items():
+        print(f"    {k:12s}: {v:.6f}")
+
+    # Gradient test
+    model.train()
+    x_hat2, res2 = model(yc_w, ctx)
+    loss2, _ = loss_fn(x_hat2, x_clean, yc_w, res2)
+    loss2.backward()
+
+    grad_norms = {
+        name: p.grad.norm().item()
+        for name, p in model.named_parameters()
+        if p.grad is not None
+    }
+    print("\n  Gradient norms (first 6):")
+    for k, v in list(grad_norms.items())[:6]:
+        print(f"    {k:45s}: {v:.6f}")
+
+    zero_grad = sum(1 for v in grad_norms.values() if v < 1e-10)
+    print(f"\n  Zero-gradient params: {zero_grad}/{len(grad_norms)} "
+          f"({'OK' if zero_grad == 0 else 'WARNING'})")
+
+    print("\n  ✓ Smoke test passed.")
+
+
+if __name__ == "__main__":
+    _smoke_test()