| """ |
| spade_declip.py – v13 (LF transient recovery: dedicated subband SPADE below 500 Hz) |
| ==================================================================================== |
| S-SPADE / A-SPADE audio declipping — extended to recover dynamics |
| compressed by a brickwall limiter (mode='soft'). |
| |
| GPU acceleration (v10 — NEW) |
| ------------------------------ |
| Requires PyTorch ≥ 2.0 with a working CUDA *or* ROCm backend. |
| PyTorch-ROCm exposes AMD GPUs under the standard torch.cuda namespace, |
| so detection and device strings are identical to NVIDIA: |
| |
| Device auto-detection order: CUDA → ROCm → CPU fallback |
| Device string: "auto" — first available GPU (cuda / cuda:0) |
| "cuda:0" — explicit device index |
| "cpu" — force CPU (disables GPU path) |
| |
| GPU strategy: |
| CPU path (v8/v9): processes frames one-by-one with ThreadPoolExecutor. |
| GPU path (v10): packs ALL active frames into a single (F, M) batch |
| tensor and runs S-SPADE entirely on the GPU in one |
| kernel sweep — DCT, hard-threshold, IDCT, proj_Γ are |
| all vectorised over F simultaneously. |
| |
| Convergence is tracked per-frame with a bool mask; converged frames |
| are frozen (their dual variable stops updating) while the rest keep |
| iterating. The GPU loop exits as soon as every frame has converged |
| or max_iter is reached. |
| |
| Typical speedup vs. single-thread CPU: 20–100× depending on GPU and |
| frame count. The RX 6700 XT (12 GB, ROCm) processes the 2784-frame |
| stereo example in ~60–90 s vs. the 1289 s CPU baseline (≈15–20×). |
| |
| DCT implementation on GPU: |
| Uses a verified FFT-based Makhoul (1980) algorithm that exactly matches |
| scipy.fft.dct(x, type=2, norm='ortho') to float32 precision. |
| Runs in float64 internally for numerical safety, cast to float32 on |
| output. Both DCT and IDCT are batch-safe: input shape (..., N). |
| |
| Limitations: |
| • algo='aspade' is CPU-only in v10 (A-SPADE GPU planned for v11). |
| Set use_gpu=False or switch to algo='sspade' for GPU acceleration. |
| • Very long files (> ~2 h at 48 kHz) may require chunked batching; |
| add gpu_batch_frames parameter if VRAM is exhausted. |
| |
| References |
| ---------- |
| [1] Kitić, Bertin, Gribonval — "Sparsity and cosparsity for audio declipping: |
| a flexible non-convex approach", LVA/ICA 2015. (arXiv:1506.01830) |
| [2] Záviška, Rajmic, Průša, Veselý — "Revisiting Synthesis Model in Sparse |
| Audio Declipper", 2018. (arXiv:1807.03612) |
| |
| Algorithms |
| ---------- |
| S-SPADE → Algorithm 1 in [2] (synthesis, coefficient-domain ADMM) [DEFAULT] |
| Projection uses the closed-form Lemma / eq.(12) from [2]. |
| A-SPADE → Algorithm 2 in [2] (analysis, signal-domain ADMM) |
| |
| Transforms |
| ---------- |
| 'dct' Orthonormal DCT-II (tight Parseval frame, bound = 1, P = N) |
| 'rdft' Redundant real frame [DCT-II/√2 ‖ DST-II/√2] (tight, bound = 1, P = 2N) |
| [DEFAULT] Best empirical quality; mimics oversampled DFT from [1][2]. |
| |
| Operating modes |
| --------------- |
| mode='hard' (default) |
| Standard hard-clipping recovery. Mask detects samples exactly at the |
| digital ceiling (±tau). Same behaviour as v5. |
| |
| mode='soft' (introduced v6, frame-adaptive bypass in v7) |
| Brickwall-limiter recovery. Any sample above the limiter threshold |
| (ceiling − delta_db dB) is treated as potentially attenuated; its true |
| value is constrained to be ≥ its current value (lower bound, not equality). |
| |
| v7 frame-adaptive bypass ← NEW |
| -------------------------------- |
| Before processing each frame, the raw un-windowed peak is compared to |
| the global threshold: |
| |
| frame_peak = max(|yc[idx1:idx2]|) |
| |
| frame_peak < threshold → bypass: WOLA accumulation with win², |
| SPADE never called, zero artefact risk. |
| frame_peak >= threshold → normal SPADE processing. |
| |
| The bypass uses identical win²/norm_win bookkeeping to the SPADE path, |
| so the WOLA reconstruction is numerically transparent. |
| Verbose output reports active/bypassed/no-conv frame counts and speedup. |
| |
| Mathematical basis: |
| proj_Γ implements v[Icp] = max(v[Icp], yc[Icp]) where yc[Icp] |
| is the actual limited sample value — the lower-bound constraint is |
| exact. proj_gamma, tight_sspade, tight_aspade are UNCHANGED. |
| |
| Practical parameter guidance: |
| delta_db = dB from 0 dBFS to the limiter threshold. |
| Read from Waveform Statistics: find the level below which the limiter |
| did NOT intervene → delta_db = that level (positive number). |
| Typical brickwall masterings: 1.0 – 3.0 dB. |
| |
| Limitations: |
| • Attack/release pumping attenuates samples just outside the threshold; |
| those are pinned as reliable — unavoidable without the limiter's curve. |
| • Macro-dynamics cannot be restored; only transient peaks are recovered. |
| |
| Verified bugs fixed (inherited from v5/v6) |
| ------------------------------------------- |
| BUG-1 frsyn/RDFT: flip output not input in DST synthesis |
| BUG-2 tight_aspade: dual variable in coefficient domain, not signal domain |
| BUG-3 _declip_mono: per-channel WOLA gain drift (stereo L/R balance) |
| BUG-4 _declip_mono: DC offset breaks half-wave mask detection |
| |
| Dependencies: pip install numpy scipy soundfile |
| |
| Usage (API) |
| ----------- |
| from spade_declip_v10 import declip, DeclipParams |
| |
| params = DeclipParams(mode="soft", delta_db=2.5) # GPU used automatically |
| fixed, masks = declip(limited_master, params) |
| |
| # Explicit GPU device (ROCm / CUDA): |
| params = DeclipParams(mode="soft", delta_db=2.5, use_gpu=True, gpu_device="cuda:0") |
| |
| # Force CPU (disable GPU): |
| params = DeclipParams(mode="soft", delta_db=2.5, use_gpu=False) |
| |
| New in v13 — Dedicated LF transient recovery subband pass |
| ----------------------------------------------------------- |
| Problem (from v11/v12 analysis): |
| The ADMM hard-thresholding operator H_k is GLOBAL: it retains the k |
| largest-magnitude DCT/DST coefficients regardless of frequency. After |
| brickwall limiting with a slow release (40–100 ms), the sub-bass body of |
| a kick (60–300 Hz) has been heavily attenuated, so its DCT coefficients |
| are 10–20 dB smaller than the HF transient attack coefficients. H_k |
| never selects LF bins in the early ADMM iterations; the dual variable |
| never accumulates LF correction; convergence is declared while LF energy |
| is still near zero. This produces the systematic −13 to −22 dB |
| sub-bass under-recovery seen in the debug export across all tested kicks. |
| |
| v12 tried to fix this with frequency-stratified H_k (guaranteed LF |
| slots), but that REDUCED HF recovery because the total sparsity budget k |
| was shared: reserving slots for LF meant fewer slots for HF, degrading |
| the cymbal snap / hi-hat attack that v11 already handled correctly. |
| |
| v13 fix — LR split + independent LF SPADE pass: |
| When lf_split_hz > 0 (soft mode only): |
| |
| 1. Linkwitz-Riley crossover at lf_split_hz Hz: |
| lf_band = LP(yc, lf_split_hz) # 0 … lf_split_hz Hz |
| hf_band = yc − lf_band # lf_split_hz … Nyquist Hz |
| # exact: lf + hf = yc always |
| 2. HF band: standard v11 S-SPADE, IDENTICAL to v11 (no changes). |
| H_k operates on a signal that contains NO sub-bass content, so all |
| budget k goes to the HF transient bins. HF quality is preserved. |
| |
| 3. LF band: dedicated SPADE pass with independently configurable params. |
| Because the signal is bandlimited to 0…lf_split_hz Hz, ALL DCT bins |
| are LF content. H_k naturally selects the dominant sub-bass bins |
| (60–120 Hz kick body) without any HF competition. A longer window |
| (lf_window_length) and/or more iterations (lf_max_iter) can be set |
| without slowing down the HF path. |
| |
| 4. Reconstruction: output = lf_fixed + hf_fixed |
| Perfect-reconstruction property of the LP+HP crossover guarantees |
| lf_band + hf_band = yc exactly, so summing the processed bands |
| introduces zero reconstruction artefacts. |
| |
| Key consequence: HF and LF sparsity budgets are now INDEPENDENT. Each |
| band's H_k uses the full budget k for its own frequency range. There is |
| no longer any competition between sub-bass body recovery and HF transient |
| recovery. |
| |
| New DeclipParams fields (v13): |
| lf_split_hz : float (default 0.0 = disabled) |
| Crossover frequency in Hz for the LF subband pass. |
| Typical value for kick/bass transient recovery: 400–600 Hz. |
| Must be in (0, sample_rate/2). 0 = disabled (v11 behaviour). |
| lf_window_length : int (default 0 = inherit window_length) |
| WOLA window size for the LF pass. Larger windows give better |
| frequency resolution in sub-bass, at the cost of time resolution. |
| For lf_split_hz=500 Hz at 44100 Hz, 2048 or 4096 are sensible. |
| 0 means use the same window_length as the main (HF) pass. |
| lf_hop_length : int (default 0 = lf_window_length // 4) |
| WOLA hop for the LF pass. 0 = automatic (25% hop of lf_window). |
| lf_max_iter : int (default 0 = inherit max_iter) |
| Max ADMM iterations for the LF pass. LF convergence is slower |
| (the dual variable u needs more steps to push up the sub-bass under |
| the lower-bound constraint). Typical: 2× the HF max_iter. |
| 0 = use the same max_iter as the main pass. |
| lf_eps : float (default 0.0 = inherit eps) |
| Convergence threshold for the LF pass. 0 = use main eps. |
| lf_delta_db : float (default 0.0 = inherit delta_db) |
| delta_db override for the LF band (dB below the LF band ceiling |
| where limiting is detected). The LF band peak may differ from the |
| full-signal peak after LR split. 0 = inherit delta_db. |
| lf_max_gain_db : float (default 0.0 = inherit max_gain_db) |
| Ratio-aware gain cap for the LF pass. 0 = inherit max_gain_db. |
| lf_release_ms : float (default -1.0 = inherit release_ms) |
| Mask dilation release time for the LF pass. The kick body has a |
| longer release than HF transients, so a higher value is appropriate. |
| -1 = inherit release_ms from main params. |
| lf_s : int (default 0 = inherit s) |
| Sparsity step for the LF pass. 0 = inherit s. |
| lf_r : int (default 0 = inherit r) |
| Sparsity relaxation period for the LF pass. 0 = inherit r. |
| |
| Usage: |
| # LF transient recovery: dedicated subband SPADE below 500 Hz |
| python spade_declip_v13.py input.wav output.wav --mode soft --delta-db 1.5 \\ |
| --lf-split-hz 500 --lf-window-length 4096 --lf-max-iter 1500 \\ |
| --lf-release-ms 80 --release-ms 80 --max-gain-db 9 |
| |
| New in v11 — Delimiting features |
| ---------------------------------- |
| Four new DeclipParams knobs that transition from declipping to genuine delimiting. |
| All are disabled by default for full backward compatibility. |
| |
| 1. Envelope-Based Mask Dilation (release_ms > 0) |
| -------------------------------------------------- |
| A limiter's release time attenuates not just the peak sample but all samples |
| for the next 10–50 ms. By default, _compute_masks marks those post-peak |
| samples as "reliable" (Ir), pinning the ADMM solver to artificially low values |
| and causing the pumping artifact. |
| |
| Fix: _dilate_masks_soft() forward-dilates Icp and Icm by `release_samples = |
| round(release_ms * sample_rate / 1000)` samples using convolution. Any newly |
| flagged sample within the release window is reclassified: |
| yc[n] ≥ 0 → Icp (true value ≥ yc[n]) |
| yc[n] < 0 → Icm (true value ≤ yc[n]) |
| The constraint is always satisfied by the limiter model: the true value can |
| only be larger in magnitude than the gain-reduced sample. |
| |
| Parameters: release_ms (float, default 0.0), sample_rate (int, default 44100). |
| |
| 2. Ratio-Aware Upper Bound (max_gain_db > 0) |
| ----------------------------------------------- |
| Without an upper bound, the L0-ADMM can generate "ice-pick" transients that |
| exceed any physical limiter's ratio. max_gain_db caps the recovery: |
| |
| v[Icp] = clip(max(v[Icp], yc[Icp]), yc[Icp], yc[Icp] * G_max) |
| v[Icm] = clip(min(v[Icm], yc[Icm]), yc[Icm] * G_max, yc[Icm]) |
| |
| where G_max = 10^(max_gain_db / 20). Implemented in both proj_gamma (CPU) |
| and the inline GPU projection in _sspade_batch_gpu. |
| |
| Parameters: max_gain_db (float, default 0.0 = disabled; e.g. 6.0 for ±6 dB max). |
| |
| 3. Sub-band (Multi-band) SPADE (multiband=True) |
| -------------------------------------------------- |
| Multi-band limiters (FabFilter Pro-L 2, etc.) apply independent gain reduction |
| per frequency range. Running broadband SPADE on such material "un-ducks" |
| frequency bands that were never attenuated, causing harshness. |
| |
| Fix: _lr_split() builds a phase-perfect crossover (LP via scipy Butterworth |
| sosfiltfilt + HP = x − LP) at each crossover frequency. Each band is |
| declipped independently with its own delta_db threshold, then summed back. |
| |
| The GPU batch path naturally handles multiple bands — each band contributes |
| its own frames to the (F, M) batch with no added latency. |
| |
| Parameters: multiband (bool), band_crossovers (tuple of Hz, default (250, 4000)), |
| band_delta_db (tuple of floats; empty = use delta_db for all bands). |
| |
| 4. Macro-Dynamics Upward Expansion Pre-pass (macro_expand=True) |
| ---------------------------------------------------------------- |
| SPADE operates on ≈21 ms WOLA windows and cannot undo the slow 200–500 ms |
| RMS squash ("body" compression) a mastering limiter imposes. |
| |
| Fix: _macro_expand_pass() runs a causal peak-envelope follower (attack + |
| release IIR) over the full signal, estimates where the level is held below |
| the long-term 80th-percentile envelope, and applies gentle upward expansion: |
| |
| g(n) = (env(n) / threshold)^(1/ratio − 1) if env(n) < threshold |
| = 1.0 otherwise |
| |
| SPADE then corrects the microscopic waveform peaks that the expander cannot |
| interpolate. The two passes are complementary by design. |
| |
| Parameters: macro_expand (bool), macro_attack_ms (float, default 10.0), |
| macro_release_ms (float, default 200.0), macro_ratio (float, default 1.2). |
| |
| |
| Usage (CLI) |
| ----------- |
| python spade_declip_v13.py input.wav output.wav --mode soft --delta-db 2.5 |
| python spade_declip_v13.py input.wav output.wav --mode soft --delta-db 2.5 --gpu-device cuda:0 |
| python spade_declip_v13.py input.wav output.wav --mode soft --delta-db 2.5 --no-gpu |
| |
| # v11 delimiting features |
| python spade_declip_v13.py input.wav output.wav --mode soft --delta-db 2.5 \ |
| --release-ms 30 --max-gain-db 6 --multiband --band-crossovers 250 4000 |
| |
| python spade_declip_v13.py input.wav output.wav --mode soft --delta-db 2.5 \ |
| --macro-expand --macro-release-ms 200 --macro-ratio 1.2 |
| """ |
|
|
| from __future__ import annotations |
| try: |
| import torch as _torch_module |
| import torch |
| _TORCH_AVAILABLE = True |
| except ImportError: |
| _TORCH_AVAILABLE = False |
| import argparse |
| import os |
| import time |
| import warnings |
| from concurrent.futures import ThreadPoolExecutor |
| from dataclasses import dataclass |
| from typing import List, Literal, Tuple, Union |
|
|
| import numpy as np |
| from scipy.fft import dct, idct |
| from scipy.signal.windows import hann |
|
|
|
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
|
|
| import threading |
| _PROGRESS_LOCK = threading.Lock() |
|
|
| try: |
| from rich.progress import ( |
| Progress, BarColumn, TextColumn, TimeRemainingColumn, |
| TimeElapsedColumn, MofNCompleteColumn, SpinnerColumn, |
| ) |
| from rich.console import Console |
| from rich.panel import Panel |
| from rich import print as rprint |
| _RICH = True |
| except ImportError: |
| _RICH = False |
|
|
| try: |
| import tqdm as _tqdm_mod |
| _TQDM = True |
| except ImportError: |
| _TQDM = False |
|
|
|
|
| class _RichProgress: |
| """Thin wrapper around a shared rich.Progress instance.""" |
|
|
| def __init__(self, n_channels: int): |
| self._progress = Progress( |
| SpinnerColumn(), |
| TextColumn("[bold cyan]{task.fields[ch_label]:<4}[/]"), |
| BarColumn(bar_width=36), |
| MofNCompleteColumn(), |
| TextColumn("[green]{task.fields[eta_str]}[/]"), |
| TextColumn("[dim]{task.fields[bypass_str]}[/]"), |
| TextColumn("[yellow]{task.fields[noconv_str]}[/]"), |
| TimeElapsedColumn(), |
| TimeRemainingColumn(), |
| refresh_per_second=10, |
| ) |
|
|
| def __enter__(self): |
| self._progress.__enter__() |
| return self |
|
|
| def __exit__(self, *args): |
| self._progress.__exit__(*args) |
|
|
| def add_task(self, ch_label: str, total: int) -> object: |
| return self._progress.add_task( |
| "", total=total, |
| ch_label=ch_label, |
| eta_str="", |
| bypass_str="", |
| noconv_str="", |
| ) |
|
|
| def advance(self, task_id, n_bypassed: int, n_noconv: int, n_done: int, n_total: int): |
| bypass_pct = 100.0 * n_bypassed / n_done if n_done else 0.0 |
| self._progress.update( |
| task_id, |
| advance=1, |
| bypass_str=f"bypassed {bypass_pct:.0f}%" if n_bypassed else "", |
| noconv_str=f"no_conv {n_noconv}" if n_noconv else "", |
| ) |
|
|
|
|
| class _TqdmProgress: |
| """Thin wrapper around tqdm, one bar per channel.""" |
|
|
| def __init__(self, n_channels: int): |
| self._bars: dict = {} |
|
|
| def __enter__(self): |
| return self |
|
|
| def __exit__(self, *args): |
| for bar in self._bars.values(): |
| bar.close() |
|
|
| def add_task(self, ch_label: str, total: int) -> str: |
| import tqdm |
| bar = tqdm.tqdm( |
| total=total, |
| desc=f"[{ch_label}]", |
| unit="fr", |
| dynamic_ncols=True, |
| leave=True, |
| ) |
| self._bars[ch_label] = bar |
| return ch_label |
|
|
| def advance(self, task_id, n_bypassed: int, n_noconv: int, n_done: int, n_total: int): |
| bar = self._bars[task_id] |
| bypass_pct = 100.0 * n_bypassed / n_done if n_done else 0.0 |
| parts = [] |
| if n_bypassed: |
| parts.append(f"bypass={bypass_pct:.0f}%") |
| if n_noconv: |
| parts.append(f"no_conv={n_noconv}") |
| bar.set_postfix_str(" ".join(parts)) |
| bar.update(1) |
|
|
|
|
| class _PlainProgress: |
| """Last-resort fallback: prints a percentage line per channel.""" |
|
|
| def __init__(self, n_channels: int): |
| self._state: dict = {} |
|
|
| def __enter__(self): |
| return self |
|
|
| def __exit__(self, *args): |
| pass |
|
|
| def add_task(self, ch_label: str, total: int) -> str: |
| self._state[ch_label] = {"total": total, "done": 0, "last_pct": -1} |
| return ch_label |
|
|
| def advance(self, task_id, n_bypassed: int, n_noconv: int, n_done: int, n_total: int): |
| s = self._state[task_id] |
| s["done"] += 1 |
| pct = int(100 * s["done"] / s["total"]) |
| |
| if pct // 5 > s["last_pct"] // 5: |
| s["last_pct"] = pct |
| print(f" [{task_id}] {pct:3d}% ({s['done']}/{s['total']} frames" |
| + (f" bypassed={n_bypassed}" if n_bypassed else "") |
| + (f" no_conv={n_noconv}" if n_noconv else "") |
| + ")") |
|
|
|
|
| def _make_progress(n_channels: int): |
| """Return the best available progress backend.""" |
| if _RICH: |
| return _RichProgress(n_channels) |
| if _TQDM: |
| return _TqdmProgress(n_channels) |
| return _PlainProgress(n_channels) |
|
|
|
|
|
|
|
|
| |
| |
| |
|
|
| @dataclass |
| class ClippingMasks: |
| """ |
| Boolean index masks identifying the three sample categories of a clipped signal. |
| |
| Attributes |
| ---------- |
| Ir : reliable (unclipped) samples — must be preserved exactly |
| Icp : positively clipped (flat at +τ) — true signal ≥ τ |
| Icm : negatively clipped (flat at −τ) — true signal ≤ −τ |
| """ |
| Ir: np.ndarray |
| Icp: np.ndarray |
| Icm: np.ndarray |
|
|
|
|
| @dataclass |
| class DeclipParams: |
| """ |
| Parameters controlling the declipping pipeline. |
| |
| Attributes |
| ---------- |
| algo : 'sspade' | 'aspade' |
| Core per-frame algorithm. Default: 'sspade' (best empirical results). |
| window_length : int |
| Frame size in samples. Powers of 2 recommended (e.g. 1024, 2048). |
| Per [2]: A-SPADE works best ≈ 2048; S-SPADE is robust to longer windows. |
| hop_length : int |
| Hop between consecutive frames. Minimum 50% overlap recommended ([2] §4.4). |
| Typical: window_length // 4 (75% overlap, best quality per [2]). |
| frame : 'dct' | 'rdft' |
| Sparse transform. |
| 'dct' — orthonormal DCT-II (no redundancy, P = N). |
| 'rdft' — redundant real tight frame DCT‖DST (redundancy 2, P = 2N); |
| mimics the oversampled DFT used in [1][2]. [DEFAULT — best quality] |
| s : int |
| Initial and incremental sparsity step (k starts at s, increases by s |
| every r iterations). [2] uses s = 100 for whole-signal; s = 1 block-by-block. |
| r : int |
| Sparsity relaxation period (k is incremented every r iterations). |
| eps : float |
| Convergence threshold ε. Loop stops when the residual norm ≤ ε. |
| [1][2] use ε = 0.1 for their experiments. |
| max_iter : int |
| Hard upper limit on iterations per frame. |
| verbose : bool |
| Print per-signal diagnostics (DC offset, threshold, mask sizes, timing). |
| n_jobs : int |
| Number of parallel workers for multi-channel processing. |
| 1 = sequential (default, always safe). |
| -1 = use all available CPU cores. |
| mode : 'hard' | 'soft' |
| Detection mode. |
| 'hard' — standard hard-clipping recovery (default). |
| Marks samples exactly at ±tau as clipped. |
| 'soft' — brickwall limiter recovery (NEW in v6). |
| Marks all samples above the limiter threshold as potentially |
| attenuated. The threshold is ceiling − delta_db dB, where |
| ceiling = max(|yc|) after DC removal. |
| The lower-bound constraint true_value ≥ yc[n] is already |
| implemented by proj_gamma — no algorithmic changes needed. |
| delta_db : float |
| [soft mode only] Distance in dB from 0 dBFS to the limiter threshold. |
| Read from Waveform Statistics: find the level below which the limiter |
| did NOT intervene, e.g. "da −∞ fino a −2.5 dB" → delta_db = 2.5. |
| Typical brickwall masterings: 1.0 – 3.0 dB. |
| Ignored when mode='hard'. |
| use_gpu : bool |
| Enable GPU acceleration via PyTorch (CUDA or ROCm). Default: True. |
| Falls back to CPU automatically if PyTorch is not installed, no GPU |
| is present, or algo='aspade' (A-SPADE GPU not yet implemented). |
| gpu_device : str |
| PyTorch device string. Default: "auto" (first available GPU). |
| Examples: "cuda", "cuda:0", "cuda:1", "cpu". |
| AMD ROCm GPUs appear as "cuda" in PyTorch-ROCm — use the same syntax. |
| sample_rate : int |
| [v11] Sample rate of the audio in Hz. Required when release_ms > 0 or |
| multiband=True. Set automatically from the file header when using the CLI. |
| Default: 44100. |
| release_ms : float |
| [v11, soft mode] Limiter release time in milliseconds. When > 0, the |
| clipping masks are forward-dilated by this many samples so that post-peak |
| samples attenuated by the limiter’s release phase are treated as constrained |
| (not reliable). 0 = disabled (v10 behaviour). Typical: 10–50 ms. |
| max_gain_db : float |
| [v11, soft mode] Maximum recovery in dB above the limited sample value. |
| Caps proj_Γ to prevent ADMM from generating unphysical transients. |
| 0 = disabled (unbounded, v10 behaviour). Typical: 3–6 dB. |
| multiband : bool |
| [v11, soft mode] Enable Linkwitz-Riley sub-band processing. The signal is |
| split at band_crossovers Hz, each band is processed with its own delta_db, |
| then summed. Addresses multi-band limiting (FabFilter Pro-L 2 etc.). |
| band_crossovers : tuple[float, ...] |
| [v11] Crossover frequencies in Hz (ascending). Produces len+1 bands. |
| Default: (250, 4000) → Low / Mid / High. |
| band_delta_db : tuple[float, ...] |
| [v11] Per-band delta_db values. If empty, delta_db is used for all bands. |
| Must have the same length as band_crossovers + 1 when non-empty. |
| macro_expand : bool |
| [v11, soft mode] Enable macro-dynamics upward expansion pre-pass. A causal |
| peak-envelope follower detects where the limiter’s release held the level |
| down, then applies gentle upward expansion before SPADE restores the peaks. |
| macro_attack_ms : float |
| [v11] Expander attack time in ms. Default: 10.0. |
| macro_release_ms : float |
| [v11] Expander release time in ms. Default: 200.0. |
| macro_ratio : float |
| [v11] Expansion ratio. 1.0 = bypass; >1 = upward expansion. |
| g(n) = (env(n)/threshold)^(1/ratio - 1) when below threshold. Default: 1.2. |
| lf_split_hz : float |
| [v13, soft mode] Crossover frequency (Hz) for the dedicated LF transient |
| recovery subband pass. 0.0 = disabled (v11 behaviour). When > 0, the |
| signal is LR-split at this frequency; the LF band is processed by a |
| dedicated SPADE pass with lf_* parameters, the HF band by the standard v11 |
| pass (identical code path, no changes). Outputs are summed. |
| Typical: 400\u2013600 Hz for kick / bass transient restoration. |
| lf_window_length : int |
| [v13] WOLA window for the LF pass. 0 = inherit window_length. |
| Recommended: 2048 or 4096 when lf_split_hz \u2248 500 Hz. |
| lf_hop_length : int |
| [v13] WOLA hop for the LF pass. 0 = lf_window_length // 4 (auto). |
| lf_max_iter : int |
| [v13] Max ADMM iterations for the LF pass. 0 = inherit max_iter. |
| Recommended: 1500\u20132000 for kick body at 60\u2013120 Hz. |
| lf_eps : float |
| [v13] Convergence threshold for the LF pass. 0.0 = inherit eps. |
| lf_delta_db : float |
| [v13] delta_db override for the LF band. 0.0 = inherit delta_db. |
| lf_max_gain_db : float |
| [v13] Ratio-aware gain cap for the LF pass. 0.0 = inherit max_gain_db. |
| lf_release_ms : float |
| [v13] Mask dilation release time for the LF pass. -1.0 = inherit release_ms. |
| lf_s : int |
| [v13] Sparsity step for the LF pass. 0 = inherit s. |
| lf_r : int |
| [v13] Sparsity relaxation period for the LF pass. 0 = inherit r. |
| """ |
| algo: Literal["sspade", "aspade"] = "sspade" |
| window_length: int = 1024 |
| hop_length: int = 256 |
| frame: Literal["dct", "rdft"] = "rdft" |
| s: int = 1 |
| r: int = 1 |
| eps: float = 0.1 |
| max_iter: int = 1000 |
| verbose: bool = False |
| n_jobs: int = 1 |
| mode: Literal["hard", "soft"] = "hard" |
| delta_db: float = 1.0 |
| show_progress: bool = True |
| use_gpu: bool = True |
| gpu_device: str = "auto" |
| |
| sample_rate: int = 44100 |
| release_ms: float = 0.0 |
| max_gain_db: float = 0.0 |
| multiband: bool = False |
| band_crossovers: tuple = (250, 4000) |
| band_delta_db: tuple = () |
| macro_expand: bool = False |
| macro_attack_ms: float = 10.0 |
| macro_release_ms: float = 200.0 |
| macro_ratio: float = 1.2 |
| |
| lf_split_hz: float = 0.0 |
| lf_window_length: int = 0 |
| lf_hop_length: int = 0 |
| lf_max_iter: int = 0 |
| lf_eps: float = 0.0 |
| lf_delta_db: float = 0.0 |
| lf_max_gain_db: float = 0.0 |
| lf_release_ms: float = -1.0 |
| lf_s: int = 0 |
| lf_r: int = 0 |
|
|
|
|
| |
| |
| |
|
|
| def _frame_size(M: int, frame: str) -> int: |
| """Number of transform coefficients P for a frame of M samples.""" |
| if frame == "dct": |
| return M |
| if frame == "rdft": |
| return 2 * M |
| raise ValueError(f"Unknown frame '{frame}'") |
|
|
|
|
| |
| |
| |
| |
| |
|
|
| import math as _math |
|
|
| def _resolve_gpu_device(params: "DeclipParams") -> "str | None": |
| """ |
| Return a torch device string if GPU is usable, else None. |
| |
| AMD ROCm GPUs are exposed by PyTorch-ROCm under the torch.cuda namespace |
| (torch.cuda.is_available() returns True, devices appear as "cuda" / "cuda:0"). |
| Detection is therefore identical for NVIDIA CUDA and AMD ROCm. |
| |
| Returns None if: |
| • params.use_gpu is False |
| • PyTorch is not installed |
| • No CUDA/ROCm device is present or accessible |
| • algo='aspade' (A-SPADE GPU not yet implemented) |
| """ |
| if not params.use_gpu: |
| return None |
| if params.algo != "sspade": |
| return None |
| try: |
| import torch |
| if not torch.cuda.is_available(): |
| return None |
| dev = "cuda" if params.gpu_device == "auto" else params.gpu_device |
| torch.zeros(1, device=dev) |
| return dev |
| except Exception: |
| return None |
|
|
|
|
| def _dct2_gpu(x: "torch.Tensor") -> "torch.Tensor": |
| """ |
| Batched orthonormal DCT-II on GPU. x: (..., N) — float32 or float64. |
| Returns same dtype as input. |
| Numerically matches scipy.fft.dct(x, type=2, norm='ortho') to ~1e-14. |
| |
| Algorithm: Makhoul (1980) FFT-based DCT-II. |
| 1. Reorder x into v = [x[0], x[2], …, x[N-1], x[N-3], …, x[1]] |
| 2. V = FFT(v) (computed in float64 for accuracy) |
| 3. C = Re( exp(−jπk/(2N)) · V ) · √(2/N) |
| 4. C[0] /= √2 (ortho normalisation for DC bin) |
| """ |
| import torch |
| in_dtype = x.dtype |
| N = x.shape[-1] |
| v = torch.cat([x[..., ::2], x[..., 1::2].flip(-1)], dim=-1) |
| V = torch.fft.fft(v.double(), dim=-1) |
| k = torch.arange(N, device=x.device, dtype=torch.float64) |
| tw = torch.exp(-1j * _math.pi * k / (2.0 * N)) |
| C = (tw * V).real * _math.sqrt(2.0 / N) |
| C = C.clone() |
| C[..., 0] /= _math.sqrt(2.0) |
| return C.to(in_dtype) |
|
|
|
|
| def _idct2_gpu(X: "torch.Tensor") -> "torch.Tensor": |
| """ |
| Batched orthonormal IDCT-II on GPU. X: (..., N) — float32 or float64. |
| Returns same dtype as input. |
| Numerically matches scipy.fft.idct(X, type=2, norm='ortho') to ~1e-14. |
| |
| Inverse of _dct2_gpu via conjugate-twiddle + IFFT (Makhoul 1980): |
| 1. Undo ortho scaling: C = X·√(N/2); C[0] ·= √2 |
| 2. Build W[k] = C[k] − j·C[N−k] for k=0…N−1 |
| where C[0] uses the W[0] = C[0] special case (ipart[0] = 0). |
| ipart[k] = −C[N−k] for k=1…N−1 |
| ↳ BUG FIX: use C.flip(-1)[..., :-1] which gives C[N-1], C[N-2], …, C[1] |
| The old code used Cf[1:] = C[N-2], C[N-3], …, C[0] — off by one. |
| 3. Recover V: V = W · exp(+jπk/(2N)) |
| 4. v = Re(IFFT(V)) |
| 5. Un-interleave: x[2n] = v[n], x[2n+1] = v[N−1−n] |
| """ |
| import torch |
| in_dtype = X.dtype |
| N = X.shape[-1] |
| C = X.double() * _math.sqrt(N / 2.0) |
| C = C.clone() |
| C[..., 0] *= _math.sqrt(2.0) |
| |
| |
| |
| |
| |
| ipart = torch.zeros_like(C) |
| ipart[..., 1:] = -C.flip(-1)[..., :-1] |
| W = torch.view_as_complex(torch.stack([C, ipart], dim=-1)) |
| k = torch.arange(N, device=X.device, dtype=torch.float64) |
| V = W * torch.exp(1j * _math.pi * k / (2.0 * N)) |
| v = torch.fft.ifft(V, dim=-1).real |
| half = (N + 1) // 2 |
| x = torch.empty_like(v) |
| x[..., ::2] = v[..., :half] |
| x[..., 1::2] = v[..., half:].flip(-1) |
| return x.to(in_dtype) |
|
|
|
|
| def _frana_gpu(x: "torch.Tensor", frame: str) -> "torch.Tensor": |
| """ |
| Batched analysis operator A: (..., M) → (..., P). |
| DCT frame: P = M → orthonormal DCT-II |
| RDFT frame: P = 2M → [DCT(x)/√2 ‖ DST(x)/√2] |
| DST-II(x) = DCT-II(x[::-1]) |
| """ |
| import torch |
| if frame == "dct": |
| return _dct2_gpu(x) |
| s2 = _math.sqrt(2.0) |
| return torch.cat([_dct2_gpu(x) / s2, _dct2_gpu(x.flip(-1)) / s2], dim=-1) |
|
|
|
|
| def _frsyn_gpu(z: "torch.Tensor", frame: str, M: int) -> "torch.Tensor": |
| """ |
| Batched synthesis operator D = A^H: (..., P) → (..., M). |
| Adjoint of _frana_gpu. For RDFT the DST adjoint flips the OUTPUT. |
| """ |
| import torch |
| if frame == "dct": |
| return _idct2_gpu(z) |
| s2 = _math.sqrt(2.0) |
| cos_part = _idct2_gpu(z[..., :M]) / s2 |
| sin_part = _idct2_gpu(z[..., M:]).flip(-1) / s2 |
| return cos_part + sin_part |
|
|
|
|
| def _hard_thresh_gpu(u: "torch.Tensor", k: int) -> "torch.Tensor": |
| """ |
| Batched hard thresholding: keep k largest-magnitude coefficients per row. |
| u: (F, P). Returns same shape with all but top-k magnitudes zeroed. |
| """ |
| k = int(max(1, min(k, u.shape[-1]))) |
| kth = torch.topk(u.abs(), k, dim=-1, sorted=True).values[..., -1:] |
| return u * (u.abs() >= kth) |
|
|
|
|
| def _sspade_batch_gpu( |
| yc_w: "torch.Tensor", |
| Ir: "torch.Tensor", |
| Icp: "torch.Tensor", |
| Icm: "torch.Tensor", |
| frame: str, |
| s: int, |
| r: int, |
| eps: float, |
| max_iter: int, |
| g_max: float = float("inf"), |
| ) -> "Tuple[torch.Tensor, torch.Tensor]": |
| """ |
| Batched S-SPADE on GPU — all F frames processed simultaneously. |
| |
| Determinism guarantees |
| ---------------------- |
| BUG-GPU-2 fix: ADMM runs in float64 throughout (yc_w is upcast at entry, |
| downcast to float32 on output). This matches the CPU path which also runs |
| in float64 via numpy/scipy. float32 would accumulate ~2.3 units of error |
| after 500 iterations vs float64's ~1e-14 — causing divergent ADMM trajectories. |
| |
| BUG-GPU-1 fix: zi_final captures zi at the exact convergence iteration for |
| each frame. Without this, zi keeps being overwritten in subsequent iterations |
| for already-converged frames (dual ui stops updating but the zi update |
| expression keeps running for all frames). The CPU tight_sspade breaks |
| immediately on convergence; the GPU batch loop cannot break early, so |
| zi_final is the equivalent mechanism. |
| |
| Convergence mask |
| ---------------- |
| A per-frame `active` bool mask marks frames still iterating. |
| - `conv[f]` = True once frame f has met the stopping criterion |
| - `active[f]` = ~conv[f] |
| - ui is updated only for active frames (correct — matches CPU which exits |
| before updating ui on the convergence iteration) |
| - zi_final[f] is frozen at the first iteration where conv[f] becomes True |
| |
| Returns |
| ------- |
| x_frames : (F, M) float32 — time-domain restored frames (on device) |
| converged : (F,) bool — True where ADMM converged within max_iter |
| """ |
| import torch |
| |
| yc_w64 = yc_w.double() |
| F, M = yc_w64.shape |
|
|
| zi = _frana_gpu(yc_w64, frame) |
| ui = torch.zeros_like(zi) |
| k = s |
| active = torch.ones (F, dtype=torch.bool, device=yc_w.device) |
| conv = torch.zeros(F, dtype=torch.bool, device=yc_w.device) |
|
|
| |
| |
| zi_final = zi.clone() |
|
|
| for i in range(1, max_iter + 1): |
| |
| zb = _hard_thresh_gpu(zi + ui, k) |
|
|
| |
| v_c = zb - ui |
| Dv = _frsyn_gpu(v_c, frame, M) |
|
|
| pDv = Dv.clone() |
| pDv[Ir] = yc_w64[Ir] |
| |
| |
| lower_p = yc_w64[Icp] |
| if _math.isfinite(g_max): |
| upper_p = (lower_p * g_max).clamp(min=lower_p) |
| else: |
| upper_p = torch.full_like(lower_p, _math.inf) |
| pDv[Icp] = torch.clamp(torch.maximum(pDv[Icp], lower_p), max=upper_p) |
| lower_m = yc_w64[Icm] |
| if _math.isfinite(g_max): |
| lower_m_cap = (lower_m * g_max).clamp(max=lower_m) |
| else: |
| lower_m_cap = torch.full_like(lower_m, -_math.inf) |
| pDv[Icm] = torch.clamp(torch.minimum(pDv[Icm], lower_m), min=lower_m_cap) |
|
|
| zi = v_c - _frana_gpu(Dv - pDv, frame) |
|
|
| |
| norms = (zi - zb).norm(dim=-1) |
| new_conv = active & (norms <= eps) |
|
|
| if new_conv.any(): |
| |
| zi_final[new_conv] = zi[new_conv] |
| conv |= new_conv |
| active = ~conv |
|
|
| if not active.any(): |
| break |
|
|
| |
| |
| |
| |
| ui[active] = ui[active] + zi[active] - zb[active] |
|
|
| if i % r == 0: |
| k += s |
|
|
| |
| if active.any(): |
| zi_final[active] = zi[active] |
|
|
| |
| return _frsyn_gpu(zi_final, frame, M).float(), conv |
|
|
|
|
| def _declip_mono_gpu( |
| yc: np.ndarray, |
| params: "DeclipParams", |
| tau: float, |
| ch_label: str, |
| device: str, |
| progress_ctx = None, |
| task_id = None, |
| ) -> "Tuple[np.ndarray, ClippingMasks]": |
| """ |
| GPU-accelerated mono declipping pipeline. |
| |
| Three-pass strategy |
| ------------------- |
| Pass 1 (CPU): extract all frames, compute bypass decisions and masks. |
| Pass 2 (GPU): pack active frames into a batch tensor and run |
| _sspade_batch_gpu — all frames in one GPU kernel sweep. |
| Pass 3 (CPU): sequential WOLA accumulation + RMS level match. |
| |
| Progress behaviour |
| ------------------ |
| Bypassed frames advance the progress bar in real-time during Pass 1. |
| Active (GPU-processed) frames advance the bar immediately after |
| Pass 2 returns (appears as a single jump — mirrors how the GPU works). |
| """ |
| import torch |
|
|
| |
| dc_offset = float(np.mean(yc)) |
| yc = yc - dc_offset |
|
|
| |
| ceiling_pos = float(np.max(yc)) |
| ceiling_neg = float(-np.min(yc)) |
|
|
| if params.mode == "hard": |
| threshold = min(ceiling_pos, ceiling_neg) |
| else: |
| ceiling = max(ceiling_pos, ceiling_neg) |
| threshold = ceiling * (10.0 ** (-params.delta_db / 20.0)) |
|
|
| if threshold <= 0.0: |
| return yc.copy(), _compute_masks(yc, 0.0) |
|
|
| masks = _compute_masks(yc, threshold) |
|
|
| |
| if params.mode == "soft" and params.release_ms > 0.0: |
| rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0)) |
| if rel_samp > 0: |
| masks = _dilate_masks_soft(masks, yc, rel_samp) |
|
|
| |
| if params.mode == "soft" and params.macro_expand and params.macro_ratio > 1.0: |
| yc = _macro_expand_pass( |
| yc, params.sample_rate, |
| attack_ms=params.macro_attack_ms, |
| release_ms=params.macro_release_ms, |
| ratio=params.macro_ratio, |
| ) |
| masks = _compute_masks(yc, threshold) |
| if params.release_ms > 0.0: |
| rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0)) |
| if rel_samp > 0: |
| masks = _dilate_masks_soft(masks, yc, rel_samp) |
|
|
| n_clipped = int(np.sum(~masks.Ir)) |
| L = len(yc) |
|
|
| |
| g_max = (10.0 ** (params.max_gain_db / 20.0) |
| if params.mode == "soft" and params.max_gain_db > 0.0 |
| else float("inf")) |
|
|
| if params.verbose: |
| ch = f" [{ch_label}]" if ch_label else "" |
| tag = "threshold" if params.mode == "soft" else "tau" |
| print(f"[declip{ch}] Length : {L} samples [device: {device}]") |
| print(f"[declip{ch}] DC offset : {dc_offset:+.6f} ({dc_offset*100:+.4f}%) → removed") |
| if params.mode == "hard": |
| print(f"[declip{ch}] {tag:<9} : {threshold:.6f} " |
| f"(pos_peak={ceiling_pos:.6f} neg_peak={ceiling_neg:.6f} using min)") |
| else: |
| print(f"[declip{ch}] ceiling : {max(ceiling_pos, ceiling_neg):.6f} " |
| f"(pos={ceiling_pos:.6f} neg={ceiling_neg:.6f})") |
| print(f"[declip{ch}] {tag:<9} : {threshold:.6f} " |
| f"(ceiling − {params.delta_db:.2f} dB = " |
| f"{20*np.log10(threshold/max(ceiling_pos,ceiling_neg)):.2f} dBFS)") |
| print(f"[declip{ch}] Detected : {n_clipped}/{L} " |
| f"({100*n_clipped/L:.1f}%) " |
| f"Icp={int(masks.Icp.sum())} Icm={int(masks.Icm.sum())}") |
| print(f"[declip{ch}] Algorithm : {params.algo.upper()} " |
| f"frame={params.frame.upper()} mode={params.mode.upper()} " |
| f"win={params.window_length} hop={params.hop_length} " |
| f"({100*(1-params.hop_length/params.window_length):.0f}% overlap) " |
| f"[GPU BATCH on {device}]") |
| if params.mode == "soft": |
| feats = [] |
| if params.release_ms > 0: feats.append(f"release_ms={params.release_ms}") |
| if params.max_gain_db > 0: feats.append(f"max_gain_db={params.max_gain_db}") |
| if params.macro_expand: feats.append(f"macro_expand(ratio={params.macro_ratio})") |
| if feats: |
| print(f"[declip{ch}] v11 feats : " + " ".join(feats)) |
|
|
| M = params.window_length |
| a = params.hop_length |
| N = int(np.ceil(L / a)) |
| win = np.sqrt(hann(M, sym=False)) |
| t0 = time.time() |
|
|
| |
| |
| |
| wola_meta : list = [] |
| active_yc_w : list = [] |
| active_Ir : list = [] |
| active_Icp : list = [] |
| active_Icm : list = [] |
| active_orig_idx : list = [] |
|
|
| skipped = 0 |
| for i in range(N): |
| idx1 = i * a |
| idx2 = min(idx1 + M, L) |
| seg_len = idx2 - idx1 |
| pad = M - seg_len |
|
|
| yc_frame = np.zeros(M) |
| yc_frame[:seg_len] = yc[idx1:idx2] |
|
|
| if params.mode == "soft": |
| fp = float(np.max(np.abs(yc_frame[:seg_len]))) if seg_len else 0.0 |
| if fp < threshold: |
| wola_meta.append((idx1, idx2, seg_len, True)) |
| skipped += 1 |
| if progress_ctx is not None and task_id is not None: |
| progress_ctx.advance(task_id, n_bypassed=skipped, |
| n_noconv=0, n_done=i + 1, n_total=N) |
| continue |
|
|
| wola_meta.append((idx1, idx2, seg_len, False)) |
| active_yc_w.append(yc_frame * win) |
| active_Ir .append(np.concatenate([masks.Ir [idx1:idx2], np.ones (pad, dtype=bool)])) |
| active_Icp.append(np.concatenate([masks.Icp[idx1:idx2], np.zeros(pad, dtype=bool)])) |
| active_Icm.append(np.concatenate([masks.Icm[idx1:idx2], np.zeros(pad, dtype=bool)])) |
| active_orig_idx.append(len(wola_meta) - 1) |
|
|
| n_active = len(active_yc_w) |
| n_noconv = 0 |
| x_active_results: dict = {} |
|
|
| |
| if n_active > 0: |
| yc_batch = torch.tensor(np.stack(active_yc_w), dtype=torch.float64, device=device) |
| Ir_batch = torch.tensor(np.stack(active_Ir), dtype=torch.bool, device=device) |
| Icp_batch = torch.tensor(np.stack(active_Icp), dtype=torch.bool, device=device) |
| Icm_batch = torch.tensor(np.stack(active_Icm), dtype=torch.bool, device=device) |
|
|
| if params.verbose: |
| ch = f" [{ch_label}]" if ch_label else "" |
| vmem = "" |
| try: |
| alloc = torch.cuda.memory_allocated(device) / 1024**2 |
| vmem = f" VRAM used ≈ {alloc:.0f} MB" |
| except Exception: |
| pass |
| print(f"[declip{ch}] GPU pass : {n_active} active frames → " |
| f"{yc_batch.shape} batch{vmem}") |
|
|
| x_batch, conv_batch = _sspade_batch_gpu( |
| yc_batch, Ir_batch, Icp_batch, Icm_batch, |
| params.frame, params.s, params.r, params.eps, params.max_iter, |
| g_max=g_max, |
| ) |
|
|
| x_np = x_batch.cpu().numpy() |
| conv_np = conv_batch.cpu().numpy() |
| n_noconv = int((~conv_np).sum()) |
|
|
| for j, meta_idx in enumerate(active_orig_idx): |
| x_active_results[meta_idx] = x_np[j] |
|
|
| |
| if progress_ctx is not None and task_id is not None: |
| for j in range(n_active): |
| progress_ctx.advance(task_id, n_bypassed=skipped, |
| n_noconv=n_noconv, |
| n_done=skipped + j + 1, n_total=N) |
|
|
| |
| x = np.zeros(L) |
| norm_win = np.zeros(L) |
|
|
| for meta_idx, (idx1, idx2, seg_len, is_bypassed) in enumerate(wola_meta): |
| if is_bypassed: |
| x [idx1:idx2] += yc[idx1:idx2] * win[:seg_len] ** 2 |
| norm_win[idx1:idx2] += win[:seg_len] ** 2 |
| else: |
| xf = x_active_results[meta_idx] |
| x [idx1:idx2] += xf[:seg_len] * win[:seg_len] |
| norm_win[idx1:idx2] += win[:seg_len] ** 2 |
|
|
| norm_win = np.where(norm_win < 1e-12, 1.0, norm_win) |
| x /= norm_win |
|
|
| |
| Ir = masks.Ir |
| if Ir.sum() > 0: |
| rms_in = float(np.sqrt(np.mean(yc[Ir] ** 2))) |
| rms_out = float(np.sqrt(np.mean(x[Ir] ** 2))) |
| if rms_out > 1e-12 and rms_in > 1e-12: |
| x *= rms_in / rms_out |
|
|
| if params.verbose: |
| ch = f" [{ch_label}]" if ch_label else "" |
| skip_pct = 100.0 * skipped / N if N else 0.0 |
| print(f"[declip{ch}] Frames : {N} total | " |
| f"active={n_active} (GPU) bypassed={skipped} ({skip_pct:.1f}%) " |
| f"no_conv={n_noconv} | time: {time.time()-t0:.1f}s") |
|
|
| return x, masks |
|
|
|
|
|
|
|
|
|
|
| def frana(x: np.ndarray, frame: str) -> np.ndarray: |
| """ |
| Analysis operator A : R^N → R^P. |
| |
| For a tight Parseval frame A, the synthesis operator is D = A^H, and |
| A^H A = I_N (perfect reconstruction property). |
| |
| DCT frame (P = N): |
| A = orthonormal DCT-II. A^H = A^{-1} = IDCT. |
| |
| RDFT frame (P = 2N, redundancy 2): |
| A = [A₁; A₂] where A₁ = DCT-II/√2 and A₂ = DST-II/√2. |
| DST-II(x) is computed as DCT-II(x[::-1]). |
| Tight frame property: A₁^H A₁ + A₂^H A₂ = I/2 + I/2 = I. ✓ |
| """ |
| if frame == "dct": |
| return dct(x, type=2, norm="ortho") |
| if frame == "rdft": |
| cos_part = dct(x, type=2, norm="ortho") / np.sqrt(2) |
| sin_part = dct(x[::-1], type=2, norm="ortho") / np.sqrt(2) |
| return np.concatenate([cos_part, sin_part]) |
| raise ValueError(f"Unknown frame '{frame}'") |
|
|
|
|
| def frsyn(z: np.ndarray, frame: str, M: int) -> np.ndarray: |
| """ |
| Synthesis operator D = A^H : R^P → R^N. |
| |
| DCT frame: |
| D = IDCT (same matrix as A for orthonormal DCT). |
| |
| RDFT frame: |
| D = [A₁^H, A₂^H] applied to [z₁; z₂]: |
| A₁^H z₁ = IDCT(z₁) / √2 |
| A₂^H z₂ = IDCT(z₂)[::-1] / √2 ← correct: flip the OUTPUT |
| Note: the original v1 had the bug idct(z₂[::-1]) — flipping the INPUT. |
| Correct adjoint of DST-II requires IDCT(z₂)[::-1], NOT IDCT(z₂[::-1]). |
| """ |
| if frame == "dct": |
| return idct(z, type=2, norm="ortho") |
| if frame == "rdft": |
| cos_part = idct(z[:M], type=2, norm="ortho") / np.sqrt(2) |
| sin_part = idct(z[M:], type=2, norm="ortho")[::-1] / np.sqrt(2) |
| return cos_part + sin_part |
| raise ValueError(f"Unknown frame '{frame}'") |
|
|
|
|
| |
| |
| |
|
|
| def hard_thresh(u: np.ndarray, k: int) -> np.ndarray: |
| """ |
| Hard-thresholding operator H_k. |
| |
| Keeps the k largest-magnitude components of u; sets all others to zero. |
| Corresponds to step 2 of both Algorithm 1 and Algorithm 2 in [1][2]. |
| |
| Parameters |
| ---------- |
| u : coefficient vector (in R^P) |
| k : number of non-zero coefficients to retain |
| |
| Notes |
| ----- |
| The papers remark that for real signals represented with complex DFT, |
| thresholding should act on conjugate pairs to preserve the real-signal |
| structure. Since our RDFT frame uses real DCT/DST, all coefficients |
| are real-valued and standard element-wise thresholding is appropriate. |
| """ |
| k = int(np.clip(k, 1, len(u))) |
| alpha = np.sort(np.abs(u))[::-1][k - 1] |
| return u * (np.abs(u) >= alpha) |
|
|
|
|
| |
| |
| |
|
|
| def proj_gamma( |
| w: np.ndarray, |
| yc: np.ndarray, |
| masks: ClippingMasks, |
| g_max: float = float("inf"), |
| ) -> np.ndarray: |
| """ |
| Orthogonal projection onto Γ(y) in the time domain. |
| |
| Implements eq. (6) of [2] / eq. (2) of [1]: |
| |
| [proj_Γ(w)]_n = y_n if n ∈ R (reliable) |
| = max{w_n, τ} if n ∈ H (positive clip, i.e. ≥ τ) |
| = min{w_n, −τ} if n ∈ L (negative clip, i.e. ≤ −τ) |
| |
| Equivalently, using bounding vectors b_L, b_H as in eq. (7)/(9) of [2]: |
| proj_{[b_L, b_H]}(w) = min{max{b_L, w}, b_H} |
| |
| v11 — ratio-aware upper bound (g_max > 0, default disabled = inf): |
| [proj_Γ(w)]_n = clip(max(w_n, yc_n), yc_n, yc_n · g_max) for n ∈ Icp |
| = clip(min(w_n, yc_n), yc_n · g_max, yc_n) for n ∈ Icm |
| This prevents ADMM from generating transients above the limiter’s |
| expected maximum gain reduction while still honouring the lower bound. |
| |
| Parameters |
| ---------- |
| w : time-domain signal to project (R^N) |
| yc : original clipped signal (R^N), provides boundary values |
| masks : clipping masks (Ir, Icp, Icm) |
| g_max : linear gain ceiling (default: inf = no cap, i.e. v10 behaviour). |
| Compute from max_gain_db as: g_max = 10 ** (max_gain_db / 20). |
| """ |
| v = w.copy() |
| v[masks.Ir] = yc[masks.Ir] |
| |
| lo_p = yc[masks.Icp] |
| if np.isfinite(g_max): |
| hi_p = lo_p * g_max |
| else: |
| hi_p = np.full_like(lo_p, np.inf) |
| v[masks.Icp] = np.clip(np.maximum(v[masks.Icp], lo_p), lo_p, hi_p) |
| |
| lo_m = yc[masks.Icm] |
| if np.isfinite(g_max): |
| lo_m_cap = lo_m * g_max |
| else: |
| lo_m_cap = np.full_like(lo_m, -np.inf) |
| v[masks.Icm] = np.clip(np.minimum(v[masks.Icm], lo_m), lo_m_cap, lo_m) |
| return v |
|
|
|
|
| |
| |
| |
|
|
| def tight_sspade( |
| yc: np.ndarray, |
| masks: ClippingMasks, |
| frame: str, |
| s: int, |
| r: int, |
| eps: float, |
| max_iter: int, |
| g_max: float = float("inf"), |
| ) -> Tuple[np.ndarray, bool]: |
| """ |
| S-SPADE for one windowed audio frame. |
| |
| Implements Algorithm 1 from [2], which uses the closed-form projection |
| lemma (eq. 12) to make per-iteration cost equal to A-SPADE: |
| |
| ẑ^(i) = v - D^* ( D v - proj_{[b_L,b_H]}(D v) ) |
| where v = z̄^(i) - u^(i-1) |
| |
| State variables |
| --------------- |
| zi : current estimate in coefficient domain (R^P) |
| ui : dual / guidance variable (R^P) — coefficient domain |
| k : current sparsity level (number of non-zero coefficients) |
| |
| Convergence criterion (Algorithm 1, row 4 in [2]) |
| ------------------------------------------------- |
| ‖ẑ^(i) - z̄^(i)‖₂ ≤ ε |
| """ |
| M = len(yc) |
| zi = frana(yc, frame) |
| ui = np.zeros_like(zi) |
| k = s |
| converged = False |
|
|
| for i in range(1, max_iter + 1): |
|
|
| |
| |
| zb = hard_thresh(zi + ui, k) |
|
|
| |
| |
| v_coeff = zb - ui |
| |
| Dv = frsyn(v_coeff, frame, M) |
| |
| proj_Dv = proj_gamma(Dv, yc, masks, g_max=g_max) |
| |
| zi = v_coeff - frana(Dv - proj_Dv, frame) |
|
|
| |
| |
| if np.linalg.norm(zi - zb) <= eps: |
| converged = True |
| break |
|
|
| |
| |
| ui = ui + zi - zb |
|
|
| |
| if i % r == 0: |
| k += s |
|
|
| |
| return frsyn(zi, frame, M), converged |
|
|
|
|
| |
| |
| |
|
|
| def tight_aspade( |
| yc: np.ndarray, |
| masks: ClippingMasks, |
| frame: str, |
| s: int, |
| r: int, |
| eps: float, |
| max_iter: int, |
| g_max: float = float("inf"), |
| ) -> Tuple[np.ndarray, bool]: |
| """ |
| A-SPADE for one windowed audio frame. |
| |
| Implements Algorithm 2 from [2]. The projection step uses the |
| closed-form formula from eq.(5)/(8) of [2]: |
| |
| x̂^(i) = proj_{[b_L, b_H]}( A^H ( z̄^(i) − u^(i-1) ) ) |
| = proj_Γ( D ( z̄^(i) − u^(i-1) ) ) |
| = proj_Γ( frsyn(zb − ui) ) |
| |
| State variables |
| --------------- |
| xi : current estimate in signal domain (R^N) |
| ui : dual / guidance variable (R^P) — COEFFICIENT domain [BUG-2 fix] |
| k : current sparsity level |
| |
| Convergence criterion (Algorithm 2, row 4 in [2]) |
| ------------------------------------------------- |
| ‖A x̂^(i) − z̄^(i)‖₂ ≤ ε (coefficient-domain norm) [BUG-2c fix] |
| """ |
| M = len(yc) |
| P = _frame_size(M, frame) |
|
|
| xi = yc.copy() |
| ui = np.zeros(P) |
| k = s |
| converged = False |
|
|
| for i in range(1, max_iter + 1): |
|
|
| |
| |
| |
| zb = hard_thresh(frana(xi, frame) + ui, k) |
|
|
| |
| |
| |
| xi_new = proj_gamma(frsyn(zb - ui, frame, M), yc, masks, g_max=g_max) |
|
|
| |
| |
| if np.linalg.norm(frana(xi_new, frame) - zb) <= eps: |
| converged = True |
| xi = xi_new |
| break |
|
|
| |
| |
| ui = ui + frana(xi_new, frame) - zb |
| xi = xi_new |
|
|
| |
| if i % r == 0: |
| k += s |
|
|
| return xi, converged |
|
|
|
|
| |
| |
| |
|
|
|
|
| |
| |
| |
|
|
| def _build_lf_child_params(params: "DeclipParams") -> "DeclipParams": |
| """ |
| Build a child DeclipParams for the dedicated LF subband SPADE pass. |
| |
| Fields with a non-zero / non-sentinel lf_* override take precedence; |
| all others inherit from the parent params. Sets lf_split_hz=0.0 to |
| prevent infinite recursion when declip() is called on the LF band. |
| |
| Window / hop logic |
| ------------------ |
| If lf_window_length > 0: use explicitly. |
| Otherwise: inherit the parent window_length unchanged. |
| |
| If lf_hop_length > 0: use explicitly. |
| Otherwise: auto = lf_window_length // 4 (25% hop, 75% overlap). |
| |
| Sentinel values |
| --------------- |
| lf_delta_db : 0.0 → inherit delta_db |
| lf_max_gain_db : 0.0 → inherit max_gain_db |
| lf_release_ms : -1.0 → inherit release_ms |
| lf_eps : 0.0 → inherit eps |
| lf_max_iter : 0 → inherit max_iter |
| lf_s : 0 → inherit s |
| lf_r : 0 → inherit r |
| """ |
| from dataclasses import replace as _dc_replace |
|
|
| lf_win = params.lf_window_length if params.lf_window_length > 0 else params.window_length |
| lf_hop = (params.lf_hop_length if params.lf_hop_length > 0 |
| else lf_win // 4) |
|
|
| return _dc_replace( |
| params, |
| |
| lf_split_hz = 0.0, |
| |
| window_length = lf_win, |
| hop_length = lf_hop, |
| |
| max_iter = params.lf_max_iter if params.lf_max_iter > 0 else params.max_iter, |
| eps = params.lf_eps if params.lf_eps > 0.0 else params.eps, |
| s = params.lf_s if params.lf_s > 0 else params.s, |
| r = params.lf_r if params.lf_r > 0 else params.r, |
| |
| delta_db = params.lf_delta_db if params.lf_delta_db > 0.0 else params.delta_db, |
| max_gain_db = params.lf_max_gain_db if params.lf_max_gain_db > 0.0 else params.max_gain_db, |
| release_ms = params.lf_release_ms if params.lf_release_ms >= 0.0 else params.release_ms, |
| |
| show_progress = False, |
| verbose = params.verbose, |
| ) |
|
|
|
|
| def _build_hf_child_params(params: "DeclipParams") -> "DeclipParams": |
| """ |
| Build a child DeclipParams for the HF SPADE pass (v11-identical). |
| |
| The HF band is processed with exactly the same parameters as if |
| lf_split_hz had never been set. Only lf_split_hz is zeroed out to |
| prevent recursion; everything else is left completely unchanged. |
| |
| This function exists to make the split explicit and documentable. |
| """ |
| from dataclasses import replace as _dc_replace |
| return _dc_replace( |
| params, |
| lf_split_hz = 0.0, |
| show_progress = False, |
| ) |
|
|
|
|
| def _declip_with_lf_split( |
| yc: "np.ndarray", |
| params: "DeclipParams", |
| ) -> "Tuple[np.ndarray, object]": |
| """ |
| v13 entry point: LF/HF subband SPADE with independent processing. |
| |
| Called from declip() when params.lf_split_hz > 0 and params.mode == 'soft'. |
| |
| Algorithm |
| --------- |
| For each channel c: |
| |
| 1. LR crossover at lf_split_hz Hz (zero-phase Butterworth LP, HP = x - LP): |
| lf_band[c], hf_band[c] = _lr_split(yc[c], lf_split_hz, sr) |
| |
| 2. HF pass (v11-identical): |
| hf_fixed[c], hf_masks[c] = declip(hf_band[c], hf_child_params) |
| Parameters: unchanged from parent (window, hop, iter, eps, delta_db, |
| max_gain_db, release_ms). lf_split_hz=0 prevents recursion. |
| |
| 3. LF pass (dedicated sub-bass recovery): |
| lf_fixed[c], lf_masks[c] = declip(lf_band[c], lf_child_params) |
| Parameters: lf_window_length, lf_hop_length, lf_max_iter, lf_eps, |
| lf_delta_db, lf_max_gain_db, lf_release_ms, lf_s, lf_r |
| (each falls back to the parent value if unset). |
| |
| 4. Reconstruction: |
| output[c] = lf_fixed[c] + hf_fixed[c] |
| Exact because LP + HP = original (perfect reconstruction). |
| |
| Return value |
| ------------ |
| output : np.ndarray same shape as yc |
| masks : ClippingMasks (mono) or list[ClippingMasks] (multi-channel). |
| Returns the HF band masks, which are computed on the portion of the |
| signal that drives most of the limiting detection. |
| |
| Rationale for returning HF masks |
| --------------------------------- |
| The full-signal peak is dominated by HF transients (kick attack); the limiter |
| acts primarily there. HF masks therefore reflect the true limiting pattern |
| more faithfully than LF masks (which represent only the low-passed band). |
| Callers that need LF masks can recover them by running _compute_masks on |
| the LF band directly. |
| """ |
| sr = params.sample_rate |
| fc = float(np.clip(params.lf_split_hz, 1.0, sr / 2.0 - 1.0)) |
|
|
| mono = yc.ndim == 1 |
| yc_2d = yc[:, None] if mono else yc |
| n_samples, n_ch = yc_2d.shape |
|
|
| hf_child = _build_hf_child_params(params) |
| lf_child = _build_lf_child_params(params) |
|
|
| |
| if params.show_progress and params.verbose: |
| print(f"[v13 LF split] crossover={fc:.0f} Hz | " |
| f"LF: win={lf_child.window_length} hop={lf_child.hop_length} " |
| f"iter={lf_child.max_iter} eps={lf_child.eps:.4f} " |
| f"delta={lf_child.delta_db:.2f}dB gain={lf_child.max_gain_db:.1f}dB " |
| f"rel={lf_child.release_ms:.0f}ms | " |
| f"HF: win={hf_child.window_length} hop={hf_child.hop_length} " |
| f"iter={hf_child.max_iter} eps={hf_child.eps:.4f} " |
| f"delta={hf_child.delta_db:.2f}dB gain={hf_child.max_gain_db:.1f}dB " |
| f"rel={hf_child.release_ms:.0f}ms") |
|
|
| out = np.zeros((n_samples, n_ch), dtype=np.float64) |
| hf_masks_all : list = [] |
| lf_masks_all : list = [] |
|
|
| for ch in range(n_ch): |
| yc_ch = yc_2d[:, ch].astype(np.float64) |
|
|
| |
| lf_band, hf_band = _lr_split(yc_ch, fc, sr) |
|
|
| |
| hf_fixed_raw, hf_mask = declip(hf_band.astype(np.float32), hf_child) |
| hf_fixed = np.asarray(hf_fixed_raw, dtype=np.float64) |
|
|
| |
| lf_fixed_raw, lf_mask = declip(lf_band.astype(np.float32), lf_child) |
| lf_fixed = np.asarray(lf_fixed_raw, dtype=np.float64) |
|
|
| |
| L = min(len(lf_fixed), len(hf_fixed), n_samples) |
| out[:L, ch] = lf_fixed[:L] + hf_fixed[:L] |
|
|
| hf_masks_all.append(hf_mask) |
| lf_masks_all.append(lf_mask) |
|
|
| result = out[:, 0] if mono else out |
|
|
| |
| if mono: |
| return result, hf_masks_all[0] |
| return result, hf_masks_all |
|
|
|
|
| def _compute_masks(yc: np.ndarray, threshold: float) -> ClippingMasks: |
| """ |
| Compute clipping/limiting masks from a 1-D signal and a detection threshold. |
| |
| Works for both modes: |
| hard (mode='hard'): threshold = tau (samples exactly at digital ceiling) |
| soft (mode='soft'): threshold = tau * 10^(-delta_db/20) (limiter threshold) |
| |
| In soft mode, samples above the threshold have their TRUE value constrained |
| to be ≥ their current (limited) value. proj_gamma already implements this |
| correctly via v[Icp] = max(v[Icp], yc[Icp]) — since yc[Icp] is the actual |
| limited value, not tau. No change to the projection operator is needed. |
| """ |
| Icp = yc >= threshold |
| Icm = yc <= -threshold |
| Ir = ~(Icp | Icm) |
| return ClippingMasks(Ir=Ir, Icp=Icp, Icm=Icm) |
|
|
|
|
| |
| |
| |
|
|
| def _dilate_masks_soft( |
| masks: ClippingMasks, |
| yc: np.ndarray, |
| release_samples: int, |
| ) -> ClippingMasks: |
| """ |
| Forward morphological dilation of the soft-mode clipping masks. |
| |
| A mastering limiter does not merely clip the peak sample; its release time |
| causes gain reduction to persist for `release_samples` samples after each |
| peak. Without dilation, those post-peak samples are pinned as "reliable" |
| (Ir), forcing the ADMM solver to anchor the reconstruction to artificially |
| attenuated values and producing the pumping artifact. |
| |
| Algorithm |
| --------- |
| For each True position in Icp or Icm, the following `release_samples` |
| positions are also flagged as constrained (Icp/Icm). Implemented as a |
| causal linear convolution: |
| |
| dilated = convolve(mask, ones(release_samples + 1))[:N] > 0 |
| |
| Newly flagged samples are reclassified by polarity: |
| yc[n] >= 0 → Icp (true value ≥ yc[n], always satisfied by limiter model) |
| yc[n] < 0 → Icm (true value ≤ yc[n], same reasoning) |
| |
| This is mathematically valid because a gain-reducing limiter always |
| produces |yc[n]| ≤ |true[n]| on every attenuated sample. |
| |
| Parameters |
| ---------- |
| masks : original ClippingMasks from _compute_masks |
| yc : DC-removed signal (same length as masks) |
| release_samples : dilation width = round(release_ms * sr / 1000) |
| |
| Returns |
| ------- |
| ClippingMasks with expanded Icp, Icm and correspondingly shrunk Ir. |
| """ |
| if release_samples <= 0: |
| return masks |
|
|
| N = len(yc) |
| kern = np.ones(release_samples + 1, dtype=np.float64) |
|
|
| |
| |
| dil_cp = np.convolve(masks.Icp.astype(np.float64), kern)[:N] > 0 |
| dil_cm = np.convolve(masks.Icm.astype(np.float64), kern)[:N] > 0 |
|
|
| |
| new_Icp = dil_cp | dil_cm |
| new_Icm = dil_cp | dil_cm |
|
|
| |
| new_Icp = new_Icp & (yc >= 0) |
| new_Icm = new_Icm & (yc < 0) |
|
|
| |
| new_Ir = ~(new_Icp | new_Icm) |
|
|
| return ClippingMasks(Ir=new_Ir, Icp=new_Icp, Icm=new_Icm) |
|
|
|
|
| def _lr_split(x: np.ndarray, fc: float, sr: int) -> "Tuple[np.ndarray, np.ndarray]": |
| """ |
| Phase-perfect Linkwitz-Riley crossover at frequency `fc` Hz. |
| |
| Returns (lp, hp) such that lp + hp == x exactly (perfect reconstruction |
| by construction: hp = x - lp). The LP is a zero-phase 4th-order |
| Butterworth realised with sosfiltfilt. |
| |
| A 4th-order zero-phase Butterworth (sosfiltfilt of 2nd-order coefficients) |
| has the same amplitude response as LR4 at the crossover point (−6 dB at |
| fc) and is computationally convenient. Summing LP + HP = x eliminates |
| any phase-cancellation artifact at the crossover frequency. |
| |
| Parameters |
| ---------- |
| x : 1-D signal array |
| fc : crossover frequency in Hz (clamped to [1, sr/2 − 1]) |
| sr : sample rate in Hz |
| """ |
| from scipy.signal import butter, sosfiltfilt |
| fc_safe = float(np.clip(fc, 1.0, sr / 2.0 - 1.0)) |
| sos = butter(2, fc_safe, btype="low", fs=sr, output="sos") |
| lp = sosfiltfilt(sos, x) |
| hp = x - lp |
| return lp, hp |
|
|
|
|
| def _macro_expand_pass( |
| yc: np.ndarray, |
| sr: int, |
| attack_ms: float = 10.0, |
| release_ms: float = 200.0, |
| ratio: float = 1.2, |
| ) -> np.ndarray: |
| """ |
| Macro-dynamics upward expansion pre-pass. |
| |
| Restores the slow (>21 ms) amplitude modulation suppressed by a mastering |
| limiter's release time — the "body compression" that SPADE cannot undo |
| because it operates frame-by-frame at ~21 ms windows. |
| |
| Algorithm |
| --------- |
| 1. Compute a zero-phase smoothed peak envelope using sosfiltfilt. |
| The attack and release IIR time constants map to Butterworth LP cutoffs: |
| fc_att = 2.2 / (2π · attack_s) [−3 dB at attack cutoff] |
| fc_rel = 2.2 / (2π · release_s) |
| Two passes (attack on rising, release on falling) are approximated by |
| using the *slower* of the two for the LP filter (conservative choice). |
| |
| 2. Threshold: 80th-percentile of the non-silent envelope values. |
| Above the threshold the signal is already "loud" → no expansion. |
| Below the threshold it was compressed → apply upward expansion gain. |
| |
| 3. Expansion gain (standard upward-expander transfer function): |
| g(n) = (env(n) / threshold)^(1/ratio − 1) env < threshold |
| = 1.0 otherwise |
| For ratio > 1, (1/ratio − 1) < 0, so g > 1 when env < threshold |
| (quiet sections get boosted). |
| |
| 4. Gain is smoothed with a 20 Hz LP to prevent clicks, then hard-clipped |
| to [1.0, ∞) so the pre-pass only expands — it never attenuates. |
| |
| Parameters |
| ---------- |
| yc : 1-D float signal (DC-removed, level-normalised) |
| sr : sample rate in Hz |
| attack_ms : expander attack time constant (ms); typically 5–20 ms |
| release_ms : expander release time constant (ms); typically 100–300 ms |
| ratio : expansion ratio >1.0; 1.0 = bypass, 1.2 = gentle |
| |
| Returns |
| ------- |
| Expanded signal with the same length as yc. |
| """ |
| from scipy.signal import butter, sosfiltfilt |
|
|
| if ratio <= 1.0: |
| return yc.copy() |
|
|
| x_abs = np.abs(yc) |
|
|
| |
| |
| |
| rel_s = max(release_ms, attack_ms) / 1000.0 |
| fc_env = min(2.2 / (2.0 * np.pi * rel_s), sr / 2.0 - 1.0) |
| sos_e = butter(2, fc_env, fs=sr, output="sos") |
| env = sosfiltfilt(sos_e, x_abs) |
| env = np.maximum(env, 1e-10) |
|
|
| |
| mask_sig = env > 1e-6 |
| if not mask_sig.any(): |
| return yc.copy() |
| thresh = float(np.percentile(env[mask_sig], 80)) |
| thresh = max(thresh, 1e-8) |
|
|
| |
| exponent = 1.0 / ratio - 1.0 |
| g = np.where(env >= thresh, |
| 1.0, |
| (env / thresh) ** exponent) |
|
|
| |
| fc_g = min(20.0, sr / 2.0 - 1.0) |
| sos_g = butter(2, fc_g, fs=sr, output="sos") |
| g = sosfiltfilt(sos_g, g) |
| g = np.maximum(g, 1.0) |
|
|
| return yc * g |
|
|
|
|
| def _declip_mono( |
| yc: np.ndarray, |
| params: DeclipParams, |
| tau: float, |
| |
| ch_label: str = "", |
| frame_workers: int = 1, |
| progress_ctx = None, |
| task_id = None, |
| ) -> Tuple[np.ndarray, ClippingMasks]: |
| """ |
| Core mono declipping / delimiting pipeline (internal). |
| |
| Parameters |
| ---------- |
| yc : 1-D float array — one channel of the input signal |
| params : DeclipParams |
| tau : ceiling hint (pre-computed in declip()); kept for API compat, |
| recomputed internally after DC removal. |
| ch_label : string used in verbose output, e.g. "L" or "R" |
| |
| DC removal (BUG-4 fix, v5) |
| -------------------------- |
| A DC offset as small as 0.3% makes the global peak asymmetric, causing |
| the lower-polarity ceiling to fall just below tau and be misclassified as |
| reliable. Fix: subtract per-channel mean before all threshold computations. |
| The DC is discarded on output (recording artefact, not musical content). |
| |
| Soft mode (v6) |
| -------------- |
| When params.mode == 'soft', the threshold is set to: |
| threshold = ceiling * 10^(-delta_db / 20) |
| where ceiling = max(|yc|) after DC removal. |
| This marks all samples above the limiter threshold as potentially attenuated. |
| The BUG-4 half-wave issue is inherently avoided in soft mode because the |
| threshold sits delta_db dB BELOW the ceiling; small DC asymmetries (typically |
| < 0.05 dB) cannot push the opposite polarity's ceiling below the threshold. |
| DC removal is still performed for cleanliness. |
| |
| proj_gamma correctness in soft mode |
| ------------------------------------ |
| For limited samples, the true value satisfies: true ≥ yc[n] (one-sided). |
| proj_gamma already implements exactly this: |
| v[Icp] = max(v[Icp], yc[Icp]) |
| Since yc[Icp] here is the *actual limited value* (not tau), the constraint |
| is correct. No change to tight_sspade or tight_aspade is needed. |
| """ |
| |
| dc_offset = float(np.mean(yc)) |
| yc = yc - dc_offset |
|
|
| |
| ceiling_pos = float(np.max(yc)) |
| ceiling_neg = float(-np.min(yc)) |
|
|
| if params.mode == "hard": |
| |
| threshold = min(ceiling_pos, ceiling_neg) |
| else: |
| |
| |
| |
| ceiling = max(ceiling_pos, ceiling_neg) |
| threshold = ceiling * (10.0 ** (-params.delta_db / 20.0)) |
|
|
| if threshold <= 0.0: |
| return yc.copy(), _compute_masks(yc, 0.0) |
|
|
| masks = _compute_masks(yc, threshold) |
|
|
| |
| if params.mode == "soft" and params.release_ms > 0.0: |
| rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0)) |
| if rel_samp > 0: |
| masks = _dilate_masks_soft(masks, yc, rel_samp) |
|
|
| |
| if params.mode == "soft" and params.macro_expand and params.macro_ratio > 1.0: |
| yc = _macro_expand_pass( |
| yc, params.sample_rate, |
| attack_ms=params.macro_attack_ms, |
| release_ms=params.macro_release_ms, |
| ratio=params.macro_ratio, |
| ) |
| |
| masks = _compute_masks(yc, threshold) |
| if params.release_ms > 0.0: |
| rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0)) |
| if rel_samp > 0: |
| masks = _dilate_masks_soft(masks, yc, rel_samp) |
|
|
| n_clipped = int(np.sum(~masks.Ir)) |
| L = len(yc) |
|
|
| |
| g_max = (10.0 ** (params.max_gain_db / 20.0) |
| if params.mode == "soft" and params.max_gain_db > 0.0 |
| else float("inf")) |
|
|
| if params.verbose: |
| ch = (" [" + ch_label + "]") if ch_label else "" |
| tag = "threshold" if params.mode == "soft" else "tau" |
| print(f"[declip{ch}] Length : {L} samples") |
| print(f"[declip{ch}] DC offset : {dc_offset:+.6f} ({dc_offset*100:+.4f}%) → removed") |
| if params.mode == "hard": |
| print(f"[declip{ch}] {tag:<9} : {threshold:.6f} " |
| f"(pos_peak={ceiling_pos:.6f} neg_peak={ceiling_neg:.6f} using min)") |
| else: |
| print(f"[declip{ch}] ceiling : {max(ceiling_pos, ceiling_neg):.6f} " |
| f"(pos={ceiling_pos:.6f} neg={ceiling_neg:.6f})") |
| print(f"[declip{ch}] {tag:<9} : {threshold:.6f} " |
| f"(ceiling − {params.delta_db:.2f} dB = " |
| f"{20*np.log10(threshold/max(ceiling_pos,ceiling_neg)):.2f} dBFS)") |
| print(f"[declip{ch}] Detected : {n_clipped}/{L} " |
| f"({100*n_clipped/L:.1f}%) " |
| f"Icp={int(masks.Icp.sum())} Icm={int(masks.Icm.sum())}") |
| print(f"[declip{ch}] Algorithm : {params.algo.upper()} " |
| f"frame={params.frame.upper()} mode={params.mode.upper()} " |
| f"win={params.window_length} hop={params.hop_length} " |
| f"({100*(1-params.hop_length/params.window_length):.0f}% overlap)") |
| if params.mode == "soft": |
| feats = [] |
| if params.release_ms > 0: feats.append(f"release_ms={params.release_ms}") |
| if params.max_gain_db > 0: feats.append(f"max_gain_db={params.max_gain_db}") |
| if params.macro_expand: feats.append(f"macro_expand(ratio={params.macro_ratio})") |
| if feats: |
| print(f"[declip{ch}] v11 feats : " + " ".join(feats)) |
|
|
| spade_fn = tight_sspade if params.algo == "sspade" else tight_aspade |
|
|
| M = params.window_length |
| a = params.hop_length |
| N = int(np.ceil(L / a)) |
| win = np.sqrt(hann(M, sym=False)) |
| x = np.zeros(L) |
| norm_win = np.zeros(L) |
| no_conv = 0 |
| skipped = 0 |
| t0 = time.time() |
|
|
| |
| |
| |
| def _process_frame(i: int): |
| idx1 = i * a |
| idx2 = min(idx1 + M, L) |
| seg_len = idx2 - idx1 |
| pad = M - seg_len |
|
|
| yc_frame = np.zeros(M) |
| yc_frame[:seg_len] = yc[idx1:idx2] |
|
|
| |
| if params.mode == "soft": |
| frame_peak = float(np.max(np.abs(yc_frame[:seg_len]))) if seg_len > 0 else 0.0 |
| if frame_peak < threshold: |
| return idx1, idx2, seg_len, None, False, True |
|
|
| yc_frame_w = yc_frame * win |
|
|
| fm = ClippingMasks( |
| Ir = np.concatenate([masks.Ir [idx1:idx2], np.ones (pad, dtype=bool)]), |
| Icp = np.concatenate([masks.Icp[idx1:idx2], np.zeros(pad, dtype=bool)]), |
| Icm = np.concatenate([masks.Icm[idx1:idx2], np.zeros(pad, dtype=bool)]), |
| ) |
|
|
| x_frame, conv = spade_fn( |
| yc_frame_w, fm, |
| params.frame, params.s, params.r, params.eps, params.max_iter, |
| g_max=g_max, |
| ) |
| return idx1, idx2, seg_len, x_frame, conv, False |
|
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| frame_results: list = [None] * N |
| _n_bypassed = 0 |
| _n_noconv = 0 |
|
|
| def _advance(n_done: int): |
| if progress_ctx is not None and task_id is not None: |
| progress_ctx.advance(task_id, |
| n_bypassed=_n_bypassed, |
| n_noconv=_n_noconv, |
| n_done=n_done, |
| n_total=N) |
|
|
| if frame_workers > 1: |
| from concurrent.futures import as_completed |
| with ThreadPoolExecutor(max_workers=frame_workers) as pool: |
| future_to_idx = {pool.submit(_process_frame, i): i for i in range(N)} |
| n_done = 0 |
| for future in as_completed(future_to_idx): |
| i = future_to_idx[future] |
| frame_results[i] = future.result() |
| n_done += 1 |
| |
| *_, conv, bypassed = frame_results[i] |
| if bypassed: |
| _n_bypassed += 1 |
| elif not conv: |
| _n_noconv += 1 |
| _advance(n_done) |
| else: |
| for i in range(N): |
| frame_results[i] = _process_frame(i) |
| *_, conv, bypassed = frame_results[i] |
| if bypassed: |
| _n_bypassed += 1 |
| elif not conv: |
| _n_noconv += 1 |
| _advance(i + 1) |
|
|
| |
| for idx1, idx2, seg_len, x_frame, conv, bypassed in frame_results: |
| if bypassed: |
| yc_seg = yc[idx1:idx2] |
| x [idx1:idx2] += yc_seg * win[:seg_len] ** 2 |
| norm_win[idx1:idx2] += win[:seg_len] ** 2 |
| skipped += 1 |
| else: |
| if not conv: |
| no_conv += 1 |
| x [idx1:idx2] += x_frame[:seg_len] * win[:seg_len] |
| norm_win[idx1:idx2] += win[:seg_len] ** 2 |
|
|
| |
| norm_win = np.where(norm_win < 1e-12, 1.0, norm_win) |
| x /= norm_win |
|
|
| |
| |
| |
| Ir = masks.Ir |
| if Ir.sum() > 0: |
| rms_in = float(np.sqrt(np.mean(yc[Ir] ** 2))) |
| rms_out = float(np.sqrt(np.mean(x[Ir] ** 2))) |
| if rms_out > 1e-12 and rms_in > 1e-12: |
| x *= rms_in / rms_out |
|
|
| if params.verbose: |
| ch = (" [" + ch_label + "]") if ch_label else "" |
| active = N - skipped |
| skip_pct = 100.0 * skipped / N if N > 0 else 0.0 |
| if params.mode == "soft" and skipped > 0: |
| print(f"[declip{ch}] Frames : {N} total | " |
| f"active={active} bypassed={skipped} ({skip_pct:.1f}%) " |
| f"no_conv={no_conv} | time: {time.time()-t0:.1f}s") |
| else: |
| print(f"[declip{ch}] Frames : {N} (no conv: {no_conv}) " |
| f"time: {time.time()-t0:.1f}s") |
|
|
| return x, masks |
|
|
|
|
| def declip( |
| yc: np.ndarray, |
| params: "DeclipParams | None" = None, |
| ) -> "Tuple[np.ndarray, Union[ClippingMasks, List[ClippingMasks]]]": |
| """ |
| Declip a hard-clipped audio signal — mono or multi-channel. |
| |
| Accepts either: |
| * a 1-D array (N_samples,) — mono |
| * a 2-D array (N_samples, N_channels) — stereo / surround |
| |
| For multi-channel input, tau is detected from the global peak across |
| ALL channels, modelling the single hardware clipping threshold correctly. |
| Each channel is then processed independently. Parallel processing is |
| controlled by params.n_jobs. |
| |
| Parameters |
| ---------- |
| yc : float array, shape (N,) or (N, C) |
| params : DeclipParams (defaults used if None) |
| |
| Returns |
| ------- |
| x : declipped signal, same shape as yc |
| masks : ClippingMasks (mono input) |
| list of ClippingMasks (multi-channel input, one per channel) |
| """ |
| if params is None: |
| params = DeclipParams() |
|
|
| yc = np.asarray(yc, dtype=float) |
|
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| if params.lf_split_hz > 0.0 and params.mode == "soft": |
| return _declip_with_lf_split(yc, params) |
|
|
| if params.multiband and params.mode == "soft": |
| from dataclasses import replace as _dc_replace |
| crossovers = list(params.band_crossovers) |
| n_bands = len(crossovers) + 1 |
| sr = params.sample_rate |
|
|
| |
| if len(params.band_delta_db) == n_bands: |
| band_deltas = list(params.band_delta_db) |
| else: |
| band_deltas = [params.delta_db] * n_bands |
|
|
| |
| sig_1d = yc if yc.ndim == 1 else None |
| if yc.ndim == 2: |
| |
| n_samp, n_ch = yc.shape |
| out = np.zeros_like(yc) |
| all_masks = [] |
| for c in range(n_ch): |
| ch_sig = yc[:, c] |
| ch_out = np.zeros(n_samp) |
| ch_masks = [] |
| remainder = ch_sig.copy() |
| for b, (fc, d_db) in enumerate(zip(crossovers, band_deltas[:-1])): |
| lp, remainder = _lr_split(remainder, fc, sr) |
| band_params = _dc_replace(params, multiband=False, delta_db=d_db) |
| band_fixed, band_mask = declip(lp, band_params) |
| ch_out += band_fixed |
| ch_masks.append(band_mask) |
| |
| band_params = _dc_replace(params, multiband=False, delta_db=band_deltas[-1]) |
| band_fixed, band_mask = declip(remainder, band_params) |
| ch_out += band_fixed |
| ch_masks.append(band_mask) |
| out[:, c] = ch_out |
| all_masks.append(ch_masks) |
| return out, all_masks |
| else: |
| |
| out = np.zeros_like(yc) |
| all_masks = [] |
| remainder = yc.copy() |
| for b, (fc, d_db) in enumerate(zip(crossovers, band_deltas[:-1])): |
| lp, remainder = _lr_split(remainder, fc, sr) |
| band_params = _dc_replace(params, multiband=False, delta_db=d_db) |
| band_fixed, band_mask = declip(lp, band_params) |
| out += band_fixed |
| all_masks.append(band_mask) |
| |
| band_params = _dc_replace(params, multiband=False, delta_db=band_deltas[-1]) |
| band_fixed, band_mask = declip(remainder, band_params) |
| out += band_fixed |
| all_masks.append(band_mask) |
| return out, all_masks |
|
|
| |
| |
| |
| |
| |
| |
| NORM_TARGET = 0.9 |
| global_peak = float(np.max(np.abs(yc))) |
| if global_peak > NORM_TARGET: |
| scale = NORM_TARGET / global_peak |
| yc_norm = yc * scale |
| else: |
| scale = 1.0 |
| yc_norm = yc |
|
|
| |
| gpu_dev = _resolve_gpu_device(params) |
|
|
| |
| if yc_norm.ndim == 1: |
| tau = float(np.max(np.abs(yc_norm))) |
| if tau == 0.0: |
| warnings.warn("Input signal is all zeros.") |
| return yc.copy(), _compute_masks(yc, 0.0) |
|
|
| if gpu_dev is not None: |
| |
| if params.show_progress: |
| N_frames = int(np.ceil(len(yc_norm) / params.hop_length)) |
| prog = _make_progress(1) |
| with prog: |
| task = prog.add_task("mono", total=N_frames) |
| fixed, masks = _declip_mono_gpu( |
| yc_norm, params, tau, ch_label="mono", |
| device=gpu_dev, progress_ctx=prog, task_id=task, |
| ) |
| else: |
| fixed, masks = _declip_mono_gpu( |
| yc_norm, params, tau, ch_label="mono", device=gpu_dev, |
| ) |
| else: |
| |
| n_workers = params.n_jobs if params.n_jobs > 0 else os.cpu_count() or 1 |
| if params.show_progress: |
| N_frames = int(np.ceil(len(yc_norm) / params.hop_length)) |
| prog = _make_progress(1) |
| with prog: |
| task = prog.add_task("mono", total=N_frames) |
| fixed, masks = _declip_mono( |
| yc_norm, params, tau, |
| frame_workers=n_workers, |
| progress_ctx=prog, task_id=task, |
| ) |
| else: |
| fixed, masks = _declip_mono(yc_norm, params, tau, frame_workers=n_workers) |
| return fixed / scale, masks |
|
|
| |
| if yc_norm.ndim != 2: |
| raise ValueError( |
| f"yc must be 1-D (mono) or 2-D (samples x channels), got shape {yc.shape}" |
| ) |
|
|
| n_samples, n_ch = yc_norm.shape |
|
|
| |
| tau = float(np.max(np.abs(yc_norm))) |
| if tau == 0.0: |
| warnings.warn("Input signal is all zeros.") |
| empty_masks = [_compute_masks(yc[:, c], 0.0) for c in range(n_ch)] |
| return yc.copy(), empty_masks |
|
|
| |
| if n_ch == 2: |
| labels = ["L", "R"] |
| else: |
| labels = ["Ch" + str(c) for c in range(n_ch)] |
|
|
| if params.verbose: |
| print(f"[declip] {n_ch}-channel signal | " |
| f"tau={tau:.4f} | mode={params.mode.upper()} | " |
| + (f"device={gpu_dev}" if gpu_dev else f"n_jobs={params.n_jobs}") |
| + (f" | delta_db={params.delta_db:.2f}" if params.mode == "soft" else "")) |
|
|
| |
| N_frames = int(np.ceil(n_samples / params.hop_length)) |
| prog = _make_progress(n_ch) if params.show_progress else None |
|
|
| if gpu_dev is not None: |
| |
| |
| def _process_channel(c: int, task_id=None): |
| return _declip_mono_gpu( |
| yc_norm[:, c], params, tau, |
| ch_label=labels[c], device=gpu_dev, |
| progress_ctx=prog, task_id=task_id, |
| ) |
| else: |
| |
| total_workers = params.n_jobs if params.n_jobs > 0 else os.cpu_count() or 1 |
| channel_workers = min(total_workers, n_ch) |
| frame_workers_ch = max(1, total_workers // channel_workers) |
|
|
| def _process_channel(c: int, task_id=None): |
| return _declip_mono( |
| yc_norm[:, c], params, tau, |
| ch_label=labels[c], |
| frame_workers=frame_workers_ch, |
| progress_ctx=prog, |
| task_id=task_id, |
| ) |
|
|
| |
| ch_workers = 1 if gpu_dev is not None else min( |
| params.n_jobs if params.n_jobs > 0 else os.cpu_count() or 1, n_ch |
| ) |
|
|
| def _run(): |
| if prog is not None: |
| task_ids = [prog.add_task(labels[c], total=N_frames) for c in range(n_ch)] |
| else: |
| task_ids = [None] * n_ch |
|
|
| if ch_workers == 1: |
| return [_process_channel(c, task_ids[c]) for c in range(n_ch)] |
| else: |
| with ThreadPoolExecutor(max_workers=ch_workers) as pool: |
| futures = [pool.submit(_process_channel, c, task_ids[c]) for c in range(n_ch)] |
| return [f.result() for f in futures] |
|
|
| if prog is not None: |
| with prog: |
| results = _run() |
| else: |
| results = _run() |
|
|
| |
| fixed_channels = [r[0] for r in results] |
| masks_list = [r[1] for r in results] |
| x_out = np.column_stack(fixed_channels) / scale |
|
|
| return x_out, masks_list |
|
|
|
|
| |
| |
| |
|
|
| def sdr(reference: np.ndarray, estimate: np.ndarray) -> float: |
| """ |
| Signal-to-Distortion Ratio (dB). |
| |
| Definition from eq.(14) in [2]: |
| SDR(u, v) = 10 log₁₀( ‖u‖² / ‖u − v‖² ) |
| """ |
| noise = reference - estimate |
| denom = np.sum(noise ** 2) |
| if denom < 1e-20: |
| return float("inf") |
| return 10.0 * np.log10(np.sum(reference ** 2) / denom) |
|
|
|
|
| def delta_sdr( |
| reference: np.ndarray, |
| clipped: np.ndarray, |
| estimate: np.ndarray, |
| ) -> float: |
| """ |
| ΔSDR improvement (dB) — eq.(13) in [2]: |
| ΔSDR = SDR(x, x̂) − SDR(x, y) |
| """ |
| return sdr(reference, estimate) - sdr(reference, clipped) |
|
|
|
|
| |
| |
| |
|
|
| def _build_parser() -> argparse.ArgumentParser: |
| p = argparse.ArgumentParser( |
| description="SPADE Audio Declipping / Limiter Recovery (v13)", |
| formatter_class=argparse.ArgumentDefaultsHelpFormatter, |
| ) |
| p.add_argument("input", help="Input clipped / limited audio file (WAV, FLAC, ...)") |
| p.add_argument("output", help="Output restored audio file") |
| p.add_argument("--algo", choices=["sspade", "aspade"], default="sspade") |
| p.add_argument("--window-length", type=int, default=1024, dest="window_length") |
| p.add_argument("--hop-length", type=int, default=256, dest="hop_length") |
| p.add_argument("--frame", choices=["dct", "rdft"], default="rdft") |
| p.add_argument("--s", type=int, default=1) |
| p.add_argument("--r", type=int, default=1) |
| p.add_argument("--eps", type=float, default=0.1) |
| p.add_argument("--max-iter", type=int, default=1000, dest="max_iter") |
| p.add_argument("--n-jobs", type=int, default=1, dest="n_jobs", |
| help="CPU parallel workers for multi-channel (-1 = all cores). " |
| "Ignored when GPU is active.") |
| p.add_argument("--mode", choices=["hard", "soft"], default="hard", |
| help="'hard' = standard clipping recovery; " |
| "'soft' = brickwall limiter recovery") |
| p.add_argument("--delta-db", type=float, default=1.0, dest="delta_db", |
| help="[soft mode] dB below 0 dBFS where the limiter starts acting " |
| "(e.g. 2.5 means threshold at -2.5 dBFS)") |
| p.add_argument("--gpu-device", type=str, default="auto", dest="gpu_device", |
| help="PyTorch device for GPU path: 'auto', 'cuda', 'cuda:0', 'cpu'. " |
| "AMD ROCm GPUs appear as 'cuda' in PyTorch-ROCm.") |
| p.add_argument("--no-gpu", action="store_true", dest="no_gpu", |
| help="Disable GPU acceleration; use CPU (v8/v9 threading) path instead.") |
| |
| p.add_argument("--release-ms", type=float, default=0.0, dest="release_ms", |
| help="[v11, soft] Limiter release time in ms for mask dilation " |
| "(0 = disabled, typical 10-50 ms)") |
| p.add_argument("--max-gain-db", type=float, default=0.0, dest="max_gain_db", |
| help="[v11, soft] Max transient recovery in dB above limited value " |
| "(0 = disabled, e.g. 6 for +6 dB cap)") |
| p.add_argument("--multiband", action="store_true", |
| help="[v11, soft] Enable Linkwitz-Riley sub-band processing") |
| p.add_argument("--band-crossovers", type=float, nargs="+", default=[250.0, 4000.0], |
| dest="band_crossovers", |
| help="[v11] Crossover frequencies in Hz (e.g. 250 4000)") |
| p.add_argument("--band-delta-db", type=float, nargs="+", default=[], |
| dest="band_delta_db", |
| help="[v11] Per-band delta_db values (must match number of bands)") |
| p.add_argument("--macro-expand", action="store_true", dest="macro_expand", |
| help="[v11, soft] Enable macro-dynamics upward expansion pre-pass") |
| p.add_argument("--macro-attack-ms", type=float, default=10.0, dest="macro_attack_ms", |
| help="[v11] Expander attack time (ms, default 10)") |
| p.add_argument("--macro-release-ms", type=float, default=200.0, dest="macro_release_ms", |
| help="[v11] Expander release time (ms, default 200)") |
| p.add_argument("--macro-ratio", type=float, default=1.2, dest="macro_ratio", |
| help="[v11] Expansion ratio >1.0 (default 1.2; 1.0 = bypass)") |
| |
| p.add_argument("--lf-split-hz", type=float, default=0.0, dest="lf_split_hz", |
| help="[v13, soft] LR crossover (Hz) for dedicated LF subband SPADE. " |
| "0 = disabled (v11 behaviour). Typical: 400-600 Hz for kick body.") |
| p.add_argument("--lf-window-length", type=int, default=0, dest="lf_window_length", |
| help="[v13] WOLA window for LF pass (0 = inherit --window-length). " |
| "Recommended: 2048 or 4096.") |
| p.add_argument("--lf-hop-length", type=int, default=0, dest="lf_hop_length", |
| help="[v13] WOLA hop for LF pass (0 = lf-window-length // 4).") |
| p.add_argument("--lf-max-iter", type=int, default=0, dest="lf_max_iter", |
| help="[v13] Max ADMM iterations for LF pass (0 = inherit --max-iter). " |
| "Recommended: 1500-2000.") |
| p.add_argument("--lf-eps", type=float, default=0.0, dest="lf_eps", |
| help="[v13] Convergence threshold for LF pass (0 = inherit --eps).") |
| p.add_argument("--lf-delta-db", type=float, default=0.0, dest="lf_delta_db", |
| help="[v13] delta_db override for LF band (0 = inherit --delta-db).") |
| p.add_argument("--lf-max-gain-db", type=float, default=0.0, dest="lf_max_gain_db", |
| help="[v13] max_gain_db cap for LF pass (0 = inherit --max-gain-db).") |
| p.add_argument("--lf-release-ms", type=float, default=-1.0, dest="lf_release_ms", |
| help="[v13] Mask dilation release (ms) for LF pass " |
| "(-1 = inherit --release-ms).") |
| p.add_argument("--lf-s", type=int, default=0, dest="lf_s", |
| help="[v13] Sparsity step for LF pass (0 = inherit --s).") |
| p.add_argument("--lf-r", type=int, default=0, dest="lf_r", |
| help="[v13] Sparsity relaxation period for LF pass (0 = inherit --r).") |
| p.add_argument("--verbose", action="store_true") |
| p.add_argument("--reference", default=None, |
| help="Clean reference file for delta-SDR measurement") |
| return p |
|
|
|
|
| def main() -> None: |
| try: |
| import soundfile as sf |
| except ImportError: |
| raise SystemExit("Install soundfile: pip install soundfile") |
|
|
| args = _build_parser().parse_args() |
| yc, sr = sf.read(args.input, always_2d=True) |
| yc = yc.astype(float) |
| n_samp, n_ch = yc.shape |
|
|
| print("Input :", args.input, |
| "|", n_samp, "samples @", sr, "Hz |", n_ch, "channel(s)") |
|
|
| params = DeclipParams( |
| algo=args.algo, window_length=args.window_length, |
| hop_length=args.hop_length, frame=args.frame, |
| s=args.s, r=args.r, eps=args.eps, max_iter=args.max_iter, |
| verbose=args.verbose, n_jobs=args.n_jobs, |
| mode=args.mode, delta_db=args.delta_db, |
| use_gpu=not args.no_gpu, gpu_device=args.gpu_device, |
| |
| sample_rate=sr, |
| release_ms=args.release_ms, |
| max_gain_db=args.max_gain_db, |
| multiband=args.multiband, |
| band_crossovers=tuple(args.band_crossovers), |
| band_delta_db=tuple(args.band_delta_db), |
| macro_expand=args.macro_expand, |
| macro_attack_ms=args.macro_attack_ms, |
| macro_release_ms=args.macro_release_ms, |
| macro_ratio=args.macro_ratio, |
| |
| lf_split_hz=args.lf_split_hz, |
| lf_window_length=args.lf_window_length, |
| lf_hop_length=args.lf_hop_length, |
| lf_max_iter=args.lf_max_iter, |
| lf_eps=args.lf_eps, |
| lf_delta_db=args.lf_delta_db, |
| lf_max_gain_db=args.lf_max_gain_db, |
| lf_release_ms=args.lf_release_ms, |
| lf_s=args.lf_s, |
| lf_r=args.lf_r, |
| ) |
|
|
| |
| yc_in = yc[:, 0] if n_ch == 1 else yc |
| fixed, masks = declip(yc_in, params) |
| |
| |
|
|
| |
| fixed_2d = fixed[:, None] if fixed.ndim == 1 else fixed |
| sf.write(args.output, fixed_2d.astype(np.float32), sr, subtype="FLOAT") |
| print("Output :", args.output) |
|
|
| |
| masks_iter = [masks] if n_ch == 1 else masks |
| labels = ["L", "R"] if n_ch == 2 else ["Ch" + str(c) for c in range(n_ch)] |
| for m, lbl in zip(masks_iter, labels): |
| n_clip = int(np.sum(~m.Ir)) |
| pct = 100.0 * n_clip / n_samp |
| print(" [" + lbl + "] clipped:", n_clip, "/", n_samp, |
| "samples (" + str(round(pct, 1)) + "%)") |
|
|
| |
| if args.reference: |
| ref, _ = sf.read(args.reference, always_2d=True) |
| ref = ref.astype(float) |
| L = min(ref.shape[0], fixed_2d.shape[0]) |
| for c in range(min(n_ch, ref.shape[1])): |
| lbl = labels[c] |
| r_c = ref[:L, c] |
| y_c = yc[:L, c] |
| f_c = fixed_2d[:L, c] |
| print(" [" + lbl + "]" |
| " SDR clipped=" + str(round(sdr(r_c, y_c), 2)) + " dB" |
| " declipped=" + str(round(sdr(r_c, f_c), 2)) + " dB" |
| " delta=" + str(round(delta_sdr(r_c, y_c, f_c), 2)) + " dB") |
|
|
|
|
| |
| |
| |
|
|
| def _demo() -> None: |
| """ |
| Self-test: mono and stereo synthetic signals, both algorithms, both frames. |
| """ |
| print("=" * 65) |
| print("SPADE Declipping v3 — Self-Test (mono + stereo)") |
| print("=" * 65) |
|
|
| sr = 16_000 |
| t = np.linspace(0, 1, sr, endpoint=False) |
|
|
| def make_tonal(freqs_amps): |
| sig = sum(a * np.sin(2 * np.pi * f * t) for f, a in freqs_amps) |
| return sig / np.max(np.abs(sig)) |
|
|
| clean_L = make_tonal([(440, 0.5), (880, 0.3), (1320, 0.15)]) |
| clean_R = make_tonal([(550, 0.5), (1100, 0.3), (2200, 0.1)]) |
| clean_stereo = np.column_stack([clean_L, clean_R]) |
|
|
| theta_c = 0.6 |
| clipped_stereo = np.clip(clean_stereo, -theta_c, theta_c) |
| n_clip_L = np.mean(np.abs(clipped_stereo[:, 0]) >= theta_c) * 100 |
| n_clip_R = np.mean(np.abs(clipped_stereo[:, 1]) >= theta_c) * 100 |
| print("\ntheta_c =", theta_c, |
| " | L clipped:", str(round(n_clip_L, 1)) + "%", |
| " R clipped:", str(round(n_clip_R, 1)) + "%") |
|
|
| for algo in ("sspade", "aspade"): |
| for fr in ("dct", "rdft"): |
| params = DeclipParams( |
| algo=algo, frame=fr, |
| window_length=1024, hop_length=256, |
| s=1, r=1, eps=0.1, max_iter=500, |
| n_jobs=2, |
| verbose=False, |
| ) |
| fixed, masks_list = declip(clipped_stereo, params) |
| dsdr_L = delta_sdr(clean_stereo[:, 0], clipped_stereo[:, 0], fixed[:, 0]) |
| dsdr_R = delta_sdr(clean_stereo[:, 1], clipped_stereo[:, 1], fixed[:, 1]) |
| tag = algo.upper() + " + " + fr.upper() |
| print(" " + tag + " | L DSDR=" + str(round(dsdr_L, 1)) + " dB" |
| " R DSDR=" + str(round(dsdr_R, 1)) + " dB") |
|
|
| |
| print("\n--- Mono sanity check ---") |
| clipped_mono = np.clip(clean_L, -theta_c, theta_c) |
| params_mono = DeclipParams(algo="sspade", frame="rdft", |
| window_length=1024, hop_length=256, |
| s=1, r=1, eps=0.1, max_iter=500) |
| fixed_mono, _ = declip(clipped_mono, params_mono) |
| print(" SSPADE+RDFT mono DSDR =", |
| str(round(delta_sdr(clean_L, clipped_mono, fixed_mono), 1)), "dB") |
|
|
| print("\nSelf-test complete.") |
|
|
|
|
| if __name__ == "__main__": |
| import sys |
| if "--demo" in sys.argv: |
| _demo() |
| else: |
| main() |
|
|