"""
spade_declip_v11.py – v11 (delimiting: envelope dilation + ratio bounds + multiband + macro expand)
====================================================================================
S-SPADE / A-SPADE audio declipping — extended to recover dynamics
compressed by a brickwall limiter (mode='soft').
GPU acceleration (introduced in v10)
------------------------------
Requires PyTorch ≥ 2.0 with a working CUDA *or* ROCm backend.
PyTorch-ROCm exposes AMD GPUs under the standard torch.cuda namespace,
so detection and device strings are identical to NVIDIA:
Device auto-detection order: CUDA → ROCm → CPU fallback
Device string: "auto" — first available GPU (cuda / cuda:0)
"cuda:0" — explicit device index
"cpu" — force CPU (disables GPU path)
GPU strategy:
CPU path (v8/v9): processes frames one-by-one with ThreadPoolExecutor.
GPU path (v10): packs ALL active frames into a single (F, M) batch
tensor and runs S-SPADE entirely on the GPU in one
kernel sweep — DCT, hard-threshold, IDCT, proj_Γ are
all vectorised over F simultaneously.
Convergence is tracked per-frame with a bool mask; converged frames
are frozen (their dual variable stops updating) while the rest keep
iterating. The GPU loop exits as soon as every frame has converged
or max_iter is reached.
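The mask-and-freeze scheme can be sketched in plain numpy (a toy iteration with illustrative values, not the real S-SPADE update):

```python
import numpy as np

# Toy version of the per-frame convergence mask: each "frame" halves its
# residual per iteration; rows are frozen as soon as they drop below eps,
# while the remaining rows keep iterating.
resid = np.array([8.0, 4.0, 2.0])
eps = 0.5
active = np.ones(3, dtype=bool)
for _ in range(100):
    resid[active] *= 0.5                 # update only unconverged rows
    active &= np.abs(resid) > eps        # freeze newly converged rows
    if not active.any():                 # loop exits once all rows converge
        break
```

The real batch loop applies the same pattern to the ADMM dual update and to freezing zi_final per frame.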
Typical speedup vs. single-thread CPU: 20–100× depending on GPU and
frame count. The RX 6700 XT (12 GB, ROCm) processes the 2784-frame
stereo example in ~60–90 s vs. the 1289 s CPU baseline (≈15–20×).
DCT implementation on GPU:
Uses a verified FFT-based Makhoul (1980) algorithm that exactly matches
scipy.fft.dct(x, type=2, norm='ortho') to float32 precision.
Runs in float64 internally for numerical safety and casts back to the
input dtype on output. Both DCT and IDCT are batch-safe: input shape (..., N).
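The Makhoul reordering + twiddle construction can be checked in a few lines of numpy against scipy's reference DCT (a standalone sketch of the algorithm, not the GPU code itself):

```python
import numpy as np
from scipy.fft import dct

def dct2_makhoul(x):
    # FFT-based orthonormal DCT-II (Makhoul 1980), intended to match
    # scipy.fft.dct(x, type=2, norm='ortho').
    N = x.shape[-1]
    # even samples first, then odd samples reversed
    v = np.concatenate([x[..., ::2], x[..., 1::2][..., ::-1]], axis=-1)
    V = np.fft.fft(v, axis=-1)
    k = np.arange(N)
    C = (np.exp(-1j * np.pi * k / (2.0 * N)) * V).real * np.sqrt(2.0 / N)
    C[..., 0] /= np.sqrt(2.0)  # ortho normalisation of the DC bin
    return C

x = np.random.default_rng(0).standard_normal(64)
```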
Limitations:
• algo='aspade' is CPU-only (A-SPADE GPU not yet implemented).
Set use_gpu=False or switch to algo='sspade' for GPU acceleration.
• Very long files (> ~2 h at 48 kHz) may require chunked batching;
add gpu_batch_frames parameter if VRAM is exhausted.
References
----------
[1] Kitić, Bertin, Gribonval — "Sparsity and cosparsity for audio declipping:
a flexible non-convex approach", LVA/ICA 2015. (arXiv:1506.01830)
[2] Záviška, Rajmic, Průša, Veselý — "Revisiting Synthesis Model in Sparse
Audio Declipper", 2018. (arXiv:1807.03612)
Algorithms
----------
S-SPADE → Algorithm 1 in [2] (synthesis, coefficient-domain ADMM) [DEFAULT]
Projection uses the closed-form Lemma / eq.(12) from [2].
A-SPADE → Algorithm 2 in [2] (analysis, signal-domain ADMM)
Transforms
----------
'dct' Orthonormal DCT-II (tight Parseval frame, bound = 1, P = N)
'rdft' Redundant real frame [DCT-II/√2 ‖ DST-II/√2] (tight, bound = 1, P = 2N)
[DEFAULT] Best empirical quality; mimics oversampled DFT from [1][2].
Operating modes
---------------
mode='hard' (default)
Standard hard-clipping recovery. Mask detects samples exactly at the
digital ceiling (±tau). Same behaviour as v5.
mode='soft' (introduced v6, frame-adaptive bypass in v7)
Brickwall-limiter recovery. Any sample above the limiter threshold
(ceiling − delta_db dB) is treated as potentially attenuated; its true
value is constrained to be ≥ its current value (lower bound, not equality).
v7 frame-adaptive bypass
--------------------------------
Before processing each frame, the raw un-windowed peak is compared to
the global threshold:
frame_peak = max(|yc[idx1:idx2]|)
frame_peak < threshold → bypass: WOLA accumulation with win²,
SPADE never called, zero artefact risk.
frame_peak >= threshold → normal SPADE processing.
The bypass uses identical win²/norm_win bookkeeping to the SPADE path,
so the WOLA reconstruction is numerically transparent.
Verbose output reports active/bypassed/no-conv frame counts and speedup.
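The transparency claim rests on the sqrt-Hann window pair satisfying constant overlap-add: at the default 75% overlap (window 1024, hop 256), the accumulated win² is a shifted sum of Hann windows, which is constant (= 2.0) in steady state. A quick numerical check:

```python
import numpy as np
from scipy.signal.windows import hann

# Accumulate win**2 over overlapping frames at hop M/4; the steady-state
# region (away from the ramp-in/ramp-out edges) must be flat, so dividing
# by the accumulated normalisation makes WOLA reconstruction transparent.
M, a = 1024, 256
win = np.sqrt(hann(M, sym=False))
acc = np.zeros(M + 8 * a)
for i in range(9):
    acc[i * a : i * a + M] += win ** 2
steady = acc[M:-M]   # every sample here is covered by 4 full windows
```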
Mathematical basis:
proj_Γ implements v[Icp] = max(v[Icp], yc[Icp]) where yc[Icp]
is the actual limited sample value — the lower-bound constraint is
exact. proj_gamma, tight_sspade, tight_aspade are UNCHANGED.
Practical parameter guidance:
delta_db = dB from 0 dBFS to the limiter threshold.
Read from Waveform Statistics: find the level below which the limiter
did NOT intervene → delta_db = that level (positive number).
Typical brickwall masterings: 1.0 – 3.0 dB.
Limitations:
• Attack/release pumping attenuates samples just outside the threshold;
those are pinned as reliable — unavoidable without the limiter's curve.
• Macro-dynamics cannot be restored; only transient peaks are recovered.
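Numerically, the delta_db reading converts to the linear detection threshold as follows (a small worked example; the ceiling value is illustrative):

```python
import numpy as np

# ceiling is max |sample| after DC removal; delta_db is the dB distance
# from the ceiling down to the limiter threshold.
ceiling = 0.98
delta_db = 2.5
threshold = ceiling * 10.0 ** (-delta_db / 20.0)
# samples with |y[n]| >= threshold are treated as potentially attenuated
```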
Verified bugs fixed (inherited from v5/v6)
-------------------------------------------
BUG-1 frsyn/RDFT: flip output not input in DST synthesis
BUG-2 tight_aspade: dual variable in coefficient domain, not signal domain
BUG-3 _declip_mono: per-channel WOLA gain drift (stereo L/R balance)
BUG-4 _declip_mono: DC offset breaks half-wave mask detection
Dependencies: pip install numpy scipy soundfile (plus torch for optional GPU acceleration)
Usage (API)
-----------
from spade_declip_v11 import declip, DeclipParams
params = DeclipParams(mode="soft", delta_db=2.5) # GPU used automatically
fixed, masks = declip(limited_master, params)
# Explicit GPU device (ROCm / CUDA):
params = DeclipParams(mode="soft", delta_db=2.5, use_gpu=True, gpu_device="cuda:0")
# Force CPU (disable GPU):
params = DeclipParams(mode="soft", delta_db=2.5, use_gpu=False)
New in v11 — Delimiting features
----------------------------------
Four new DeclipParams knobs that extend declipping into genuine delimiting.
All are disabled by default for full backward compatibility.
1. Envelope-Based Mask Dilation (release_ms > 0)
--------------------------------------------------
A limiter's release time attenuates not just the peak sample but all samples
for the next 10–50 ms. By default, _compute_masks marks those post-peak
samples as "reliable" (Ir), pinning the ADMM solver to artificially low values
and causing the pumping artifact.
Fix: _dilate_masks_soft() forward-dilates Icp and Icm by `release_samples =
round(release_ms * sample_rate / 1000)` samples using convolution. Any newly
flagged sample within the release window is reclassified:
yc[n] ≥ 0 → Icp (true value ≥ yc[n])
yc[n] < 0 → Icm (true value ≤ yc[n])
The constraint is always satisfied by the limiter model: the true value can
only be larger in magnitude than the gain-reduced sample.
Parameters: release_ms (float, default 0.0), sample_rate (int, default 44100).
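The dilation-and-reclassify step can be sketched in numpy (a hypothetical standalone version of the idea; the real helper is _dilate_masks_soft and may differ in detail):

```python
import numpy as np

def dilate_masks_soft_sketch(Ir, Icp, Icm, yc, release_samples):
    # Forward-dilate the flagged (non-reliable) set by release_samples
    # via causal convolution, then reclassify newly flagged samples by
    # the sign of the limited value yc[n].
    flagged = ~Ir
    kernel = np.ones(release_samples + 1)
    dilated = np.convolve(flagged.astype(float), kernel)[: len(flagged)] > 0
    newly = dilated & Ir                      # samples inside the release window
    Icp_new = Icp | (newly & (yc >= 0.0))     # true value >= yc[n]
    Icm_new = Icm | (newly & (yc < 0.0))      # true value <= yc[n]
    Ir_new = ~(Icp_new | Icm_new)
    return Ir_new, Icp_new, Icm_new

yc  = np.array([0.0, 0.1, 1.0, 0.5, -0.3, 0.0])
Ir  = np.array([1, 1, 0, 1, 1, 1], dtype=bool)   # sample 2 is limited
Icp = np.array([0, 0, 1, 0, 0, 0], dtype=bool)
Icm = np.zeros(6, dtype=bool)
Ir2, Icp2, Icm2 = dilate_masks_soft_sketch(Ir, Icp, Icm, yc, release_samples=2)
```

With release_samples=2, samples 3 and 4 fall in the release window of the peak at sample 2 and are moved to Icp/Icm according to their sign.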
2. Ratio-Aware Upper Bound (max_gain_db > 0)
-----------------------------------------------
Without an upper bound, the L0-ADMM can generate "ice-pick" transients that
exceed any physical limiter's ratio. max_gain_db caps the recovery:
v[Icp] = clip(max(v[Icp], yc[Icp]), yc[Icp], yc[Icp] * G_max)
v[Icm] = clip(min(v[Icm], yc[Icm]), yc[Icm] * G_max, yc[Icm])
where G_max = 10^(max_gain_db / 20). Implemented in both proj_gamma (CPU)
and the inline GPU projection in _sspade_batch_gpu.
Parameters: max_gain_db (float, default 0.0 = disabled; e.g. 6.0 for ±6 dB max).
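For the positive branch, the capped projection reduces to one clip over the ADMM estimate (a hedged numpy sketch with illustrative values; the real code lives in proj_gamma and _sspade_batch_gpu):

```python
import numpy as np

max_gain_db = 6.0
g_max = 10.0 ** (max_gain_db / 20.0)        # ~1.9953 linear
yc_p = np.array([0.5, 0.8])                 # limited sample values (Icp)
v_p  = np.array([0.3, 2.5])                 # current ADMM estimates
# lower bound: the true value is at least the limited value;
# upper bound: at most g_max times the limited value
proj = np.clip(np.maximum(v_p, yc_p), yc_p, yc_p * g_max)
```

The first sample is raised to its lower bound; the second, an "ice-pick" candidate, is capped at yc * g_max.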
3. Sub-band (Multi-band) SPADE (multiband=True)
--------------------------------------------------
Multi-band limiters (FabFilter Pro-L 2, etc.) apply independent gain reduction
per frequency range. Running broadband SPADE on such material "un-ducks"
frequency bands that were never attenuated, causing harshness.
Fix: _lr_split() builds a phase-perfect crossover (LP via scipy Butterworth
sosfiltfilt + HP = x − LP) at each crossover frequency. Each band is
declipped independently with its own delta_db threshold, then summed back.
The GPU batch path naturally handles multiple bands — each band contributes
its own frames to the (F, M) batch with no added latency.
Parameters: multiband (bool), band_crossovers (tuple of Hz, default (250, 4000)),
band_delta_db (tuple of floats; empty = use delta_db for all bands).
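The complementary-split idea guarantees perfect reconstruction by construction, since each high band is literally the residual of its low-pass. A standalone sketch in the spirit of _lr_split (not the exact helper):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lr_split_sketch(x, crossovers, fs):
    # Zero-phase band split: low band via Butterworth sosfiltfilt,
    # remainder (x - LP) carried into the next crossover, so the
    # bands always sum back to x exactly.
    bands, rest = [], x
    for fc in crossovers:
        sos = butter(4, fc, btype="low", fs=fs, output="sos")
        lp = sosfiltfilt(sos, rest)
        bands.append(lp)
        rest = rest - lp
    bands.append(rest)
    return bands

x = np.random.default_rng(1).standard_normal(4096)
bands = lr_split_sketch(x, (250.0, 4000.0), fs=44100)
```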
4. Macro-Dynamics Upward Expansion Pre-pass (macro_expand=True)
----------------------------------------------------------------
SPADE operates on ≈21 ms WOLA windows and cannot undo the slow 200–500 ms
RMS squash ("body" compression) a mastering limiter imposes.
Fix: _macro_expand_pass() runs a causal peak-envelope follower (attack +
release IIR) over the full signal, estimates where the level is held below
the long-term 80th-percentile envelope, and applies gentle upward expansion:
g(n) = (env(n) / threshold)^(1/ratio − 1) if env(n) < threshold
= 1.0 otherwise
SPADE then corrects the microscopic waveform peaks that the expander cannot
interpolate. The two passes are complementary by design.
Parameters: macro_expand (bool), macro_attack_ms (float, default 10.0),
macro_release_ms (float, default 200.0), macro_ratio (float, default 1.2).
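The follower-plus-expansion pre-pass can be sketched as below (an illustrative standalone version; the real helper is _macro_expand_pass and its envelope details may differ):

```python
import numpy as np

def macro_expand_sketch(x, fs, attack_ms=10.0, release_ms=200.0, ratio=1.2):
    # Causal one-pole peak follower: fast coefficient when the rectified
    # input rises (attack), slow when it falls (release).
    a_att = np.exp(-1.0 / (attack_ms * 1e-3 * fs))
    a_rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))
    env = np.empty_like(x)
    e = 0.0
    for n, s in enumerate(np.abs(x)):
        a = a_att if s > e else a_rel
        e = a * e + (1.0 - a) * s
        env[n] = e
    # Upward expansion below the 80th-percentile envelope level:
    # g = (env/thr)^(1/ratio - 1) >= 1 since the exponent is negative.
    thr = np.percentile(env, 80.0)
    gain = np.ones_like(x)
    below = (env < thr) & (env > 0.0)   # guard env == 0 against 0**negative
    gain[below] = (env[below] / thr) ** (1.0 / ratio - 1.0)
    return x * gain

fs = 44100
t = np.arange(fs // 10) / fs
x = np.sin(2 * np.pi * 220 * t) * np.where(t < 0.05, 1.0, 0.3)  # loud then quiet
y = macro_expand_sketch(x, fs)
```

The gain never falls below 1, so the pass only lifts held-down material; SPADE then repairs the fine peak structure.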
Usage (CLI)
-----------
python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5
python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5 --gpu-device cuda:0
python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5 --no-gpu
# v11 delimiting features
python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5 \
--release-ms 30 --max-gain-db 6 --multiband --band-crossovers 250 4000
python spade_declip_v11.py input.wav output.wav --mode soft --delta-db 2.5 \
--macro-expand --macro-release-ms 200 --macro-ratio 1.2
"""
from __future__ import annotations
try:
    import torch
    _torch_module = torch  # alias preserved for any external references
    _TORCH_AVAILABLE = True
except ImportError:
    _TORCH_AVAILABLE = False
import argparse
import os
import time
import warnings
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import List, Literal, Tuple, Union
import numpy as np
from scipy.fft import dct, idct
from scipy.signal.windows import hann
# ============================================================================
# Progress-bar backend (rich → tqdm → plain fallback, zero hard deps)
# ============================================================================
# Three concrete backends implement the same interface:
#
#     ctx = _make_progress(n_channels)
#     with ctx:
#         task = ctx.add_task(label, total=N)
#         ctx.advance(task, n_bypassed, n_noconv, n_done, n_total)
#             # +1 frame done; refreshes the live bypass/no-conv counters
#
# The module-level _PROGRESS_LOCK serialises add_task() calls so that two
# channel threads don't interleave their header prints.
import threading
_PROGRESS_LOCK = threading.Lock()
try:
from rich.progress import (
Progress, BarColumn, TextColumn, TimeRemainingColumn,
TimeElapsedColumn, MofNCompleteColumn, SpinnerColumn,
)
from rich.console import Console
from rich.panel import Panel
from rich import print as rprint
_RICH = True
except ImportError:
_RICH = False
try:
import tqdm as _tqdm_mod
_TQDM = True
except ImportError:
_TQDM = False
class _RichProgress:
"""Thin wrapper around a shared rich.Progress instance."""
def __init__(self, n_channels: int):
self._progress = Progress(
SpinnerColumn(),
TextColumn("[bold cyan]{task.fields[ch_label]:<4}[/]"),
BarColumn(bar_width=36),
MofNCompleteColumn(),
TextColumn("[green]{task.fields[eta_str]}[/]"),
TextColumn("[dim]{task.fields[bypass_str]}[/]"),
TextColumn("[yellow]{task.fields[noconv_str]}[/]"),
TimeElapsedColumn(),
TimeRemainingColumn(),
refresh_per_second=10,
)
def __enter__(self):
self._progress.__enter__()
return self
def __exit__(self, *args):
self._progress.__exit__(*args)
def add_task(self, ch_label: str, total: int) -> object:
return self._progress.add_task(
"", total=total,
ch_label=ch_label,
eta_str="",
bypass_str="",
noconv_str="",
)
def advance(self, task_id, n_bypassed: int, n_noconv: int, n_done: int, n_total: int):
bypass_pct = 100.0 * n_bypassed / n_done if n_done else 0.0
self._progress.update(
task_id,
advance=1,
bypass_str=f"bypassed {bypass_pct:.0f}%" if n_bypassed else "",
noconv_str=f"no_conv {n_noconv}" if n_noconv else "",
)
class _TqdmProgress:
"""Thin wrapper around tqdm, one bar per channel."""
def __init__(self, n_channels: int):
self._bars: dict = {}
def __enter__(self):
return self
def __exit__(self, *args):
for bar in self._bars.values():
bar.close()
def add_task(self, ch_label: str, total: int) -> str:
import tqdm
bar = tqdm.tqdm(
total=total,
desc=f"[{ch_label}]",
unit="fr",
dynamic_ncols=True,
leave=True,
)
self._bars[ch_label] = bar
return ch_label
def advance(self, task_id, n_bypassed: int, n_noconv: int, n_done: int, n_total: int):
bar = self._bars[task_id]
bypass_pct = 100.0 * n_bypassed / n_done if n_done else 0.0
parts = []
if n_bypassed:
parts.append(f"bypass={bypass_pct:.0f}%")
if n_noconv:
parts.append(f"no_conv={n_noconv}")
bar.set_postfix_str(" ".join(parts))
bar.update(1)
class _PlainProgress:
"""Last-resort fallback: prints a percentage line per channel."""
def __init__(self, n_channels: int):
self._state: dict = {}
def __enter__(self):
return self
def __exit__(self, *args):
pass
def add_task(self, ch_label: str, total: int) -> str:
self._state[ch_label] = {"total": total, "done": 0, "last_pct": -1}
return ch_label
def advance(self, task_id, n_bypassed: int, n_noconv: int, n_done: int, n_total: int):
s = self._state[task_id]
s["done"] += 1
pct = int(100 * s["done"] / s["total"])
# Print only at each 5% step to avoid flooding stdout
if pct // 5 > s["last_pct"] // 5:
s["last_pct"] = pct
print(f" [{task_id}] {pct:3d}% ({s['done']}/{s['total']} frames"
+ (f" bypassed={n_bypassed}" if n_bypassed else "")
+ (f" no_conv={n_noconv}" if n_noconv else "")
+ ")")
def _make_progress(n_channels: int):
"""Return the best available progress backend."""
if _RICH:
return _RichProgress(n_channels)
if _TQDM:
return _TqdmProgress(n_channels)
return _PlainProgress(n_channels)
# ============================================================================
# Data structures
# ============================================================================
@dataclass
class ClippingMasks:
"""
Boolean index masks identifying the three sample categories of a clipped signal.
Attributes
----------
Ir : reliable (unclipped) samples — must be preserved exactly
Icp : positively clipped (flat at +τ) — true signal ≥ τ
Icm : negatively clipped (flat at −τ) — true signal ≤ −τ
"""
Ir: np.ndarray
Icp: np.ndarray
Icm: np.ndarray
@dataclass
class DeclipParams:
"""
Parameters controlling the declipping pipeline.
Attributes
----------
algo : 'sspade' | 'aspade'
Core per-frame algorithm. Default: 'sspade' (best empirical results).
window_length : int
Frame size in samples. Powers of 2 recommended (e.g. 1024, 2048).
Per [2]: A-SPADE works best ≈ 2048; S-SPADE is robust to longer windows.
hop_length : int
Hop between consecutive frames. Minimum 50% overlap recommended ([2] §4.4).
Typical: window_length // 4 (75% overlap, best quality per [2]).
frame : 'dct' | 'rdft'
Sparse transform.
'dct' — orthonormal DCT-II (no redundancy, P = N).
'rdft' — redundant real tight frame DCT‖DST (redundancy 2, P = 2N);
mimics the oversampled DFT used in [1][2]. [DEFAULT — best quality]
s : int
Initial and incremental sparsity step (k starts at s, increases by s
every r iterations). [2] uses s = 100 for whole-signal; s = 1 block-by-block.
r : int
Sparsity relaxation period (k is incremented every r iterations).
eps : float
Convergence threshold ε. Loop stops when the residual norm ≤ ε.
[1][2] use ε = 0.1 for their experiments.
max_iter : int
Hard upper limit on iterations per frame.
verbose : bool
Print per-signal diagnostics (DC offset, threshold, mask sizes, timing).
n_jobs : int
Number of parallel workers for multi-channel processing.
1 = sequential (default, always safe).
-1 = use all available CPU cores.
mode : 'hard' | 'soft'
Detection mode.
'hard' — standard hard-clipping recovery (default).
Marks samples exactly at ±tau as clipped.
'soft' — brickwall limiter recovery (NEW in v6).
Marks all samples above the limiter threshold as potentially
attenuated. The threshold is ceiling − delta_db dB, where
ceiling = max(|yc|) after DC removal.
The lower-bound constraint true_value ≥ yc[n] is already
implemented by proj_gamma — no algorithmic changes needed.
delta_db : float
[soft mode only] Distance in dB from 0 dBFS to the limiter threshold.
Read from Waveform Statistics: find the level below which the limiter
did NOT intervene, e.g. "from −∞ up to −2.5 dB" → delta_db = 2.5.
Typical brickwall masterings: 1.0 – 3.0 dB.
Ignored when mode='hard'.
use_gpu : bool
Enable GPU acceleration via PyTorch (CUDA or ROCm). Default: True.
Falls back to CPU automatically if PyTorch is not installed, no GPU
is present, or algo='aspade' (A-SPADE GPU not yet implemented).
gpu_device : str
PyTorch device string. Default: "auto" (first available GPU).
Examples: "cuda", "cuda:0", "cuda:1", "cpu".
AMD ROCm GPUs appear as "cuda" in PyTorch-ROCm — use the same syntax.
sample_rate : int
[v11] Sample rate of the audio in Hz. Required when release_ms > 0 or
multiband=True. Set automatically from the file header when using the CLI.
Default: 44100.
release_ms : float
[v11, soft mode] Limiter release time in milliseconds. When > 0, the
clipping masks are forward-dilated by this many samples so that post-peak
samples attenuated by the limiter’s release phase are treated as constrained
(not reliable). 0 = disabled (v10 behaviour). Typical: 10–50 ms.
max_gain_db : float
[v11, soft mode] Maximum recovery in dB above the limited sample value.
Caps proj_Γ to prevent ADMM from generating unphysical transients.
0 = disabled (unbounded, v10 behaviour). Typical: 3–6 dB.
multiband : bool
[v11, soft mode] Enable Linkwitz-Riley sub-band processing. The signal is
split at band_crossovers Hz, each band is processed with its own delta_db,
then summed. Addresses multi-band limiting (FabFilter Pro-L 2 etc.).
band_crossovers : tuple[float, ...]
[v11] Crossover frequencies in Hz (ascending). Produces len+1 bands.
Default: (250, 4000) → Low / Mid / High.
band_delta_db : tuple[float, ...]
[v11] Per-band delta_db values. If empty, delta_db is used for all bands.
Must have the same length as band_crossovers + 1 when non-empty.
macro_expand : bool
[v11, soft mode] Enable macro-dynamics upward expansion pre-pass. A causal
peak-envelope follower detects where the limiter’s release held the level
down, then applies gentle upward expansion before SPADE restores the peaks.
macro_attack_ms : float
[v11] Expander attack time in ms. Default: 10.0.
macro_release_ms : float
[v11] Expander release time in ms. Default: 200.0.
macro_ratio : float
[v11] Expansion ratio. 1.0 = bypass; >1 = upward expansion.
g(n) = (env(n)/threshold)^(1/ratio - 1) when below threshold. Default: 1.2.
"""
algo: Literal["sspade", "aspade"] = "sspade"
window_length: int = 1024
hop_length: int = 256
frame: Literal["dct", "rdft"] = "rdft"
s: int = 1
r: int = 1
eps: float = 0.1
max_iter: int = 1000
verbose: bool = False
n_jobs: int = 1
mode: Literal["hard", "soft"] = "hard"
delta_db: float = 1.0
show_progress: bool = True
use_gpu: bool = True # v10: GPU acceleration
gpu_device: str = "auto" # v10: device string
# ── v11: delimiting features ─────────────────────────────────────────
sample_rate: int = 44100 # required for release_ms and multiband
release_ms: float = 0.0 # mask dilation (0 = disabled)
max_gain_db: float = 0.0 # ratio-aware cap (0 = disabled)
multiband: bool = False # Linkwitz-Riley sub-band processing
band_crossovers: tuple = (250, 4000) # Hz crossover frequencies
band_delta_db: tuple = () # per-band delta_db; empty = use delta_db
macro_expand: bool = False # upward expansion pre-pass
macro_attack_ms: float = 10.0 # expander attack (ms)
macro_release_ms: float = 200.0 # expander release (ms)
macro_ratio: float = 1.2 # expansion ratio (1.0 = bypass)
# ============================================================================
# Sparse transform — Analysis (A) and Synthesis (D = A^H) operators
# ============================================================================
def _frame_size(M: int, frame: str) -> int:
"""Number of transform coefficients P for a frame of M samples."""
if frame == "dct":
return M
if frame == "rdft":
return 2 * M
raise ValueError(f"Unknown frame '{frame}'")
# ============================================================================
# GPU engine (PyTorch — CUDA or ROCm)
# ============================================================================
# All GPU functions are defined unconditionally but only called when torch is
# available. Type annotations use strings to avoid NameError at import time.
import math as _math
def _resolve_gpu_device(params: "DeclipParams") -> "str | None":
"""
Return a torch device string if GPU is usable, else None.
AMD ROCm GPUs are exposed by PyTorch-ROCm under the torch.cuda namespace
(torch.cuda.is_available() returns True, devices appear as "cuda" / "cuda:0").
Detection is therefore identical for NVIDIA CUDA and AMD ROCm.
Returns None if:
• params.use_gpu is False
• PyTorch is not installed
• No CUDA/ROCm device is present or accessible
• algo='aspade' (A-SPADE GPU not yet implemented)
"""
if not params.use_gpu:
return None
if params.algo != "sspade":
return None # A-SPADE GPU not implemented; fall through to CPU path
try:
import torch
if not torch.cuda.is_available():
return None
dev = "cuda" if params.gpu_device == "auto" else params.gpu_device
torch.zeros(1, device=dev) # warm-up / validity check
return dev
except Exception:
return None
def _dct2_gpu(x: "torch.Tensor") -> "torch.Tensor":
"""
Batched orthonormal DCT-II on GPU. x: (..., N) — float32 or float64.
Returns same dtype as input.
Numerically matches scipy.fft.dct(x, type=2, norm='ortho') to ~1e-14.
Algorithm: Makhoul (1980) FFT-based DCT-II.
1. Reorder x into v = [x[0], x[2], …, x[N-1], x[N-3], …, x[1]]
2. V = FFT(v) (computed in float64 for accuracy)
3. C = Re( exp(−jπk/(2N)) · V ) · √(2/N)
4. C[0] /= √2 (ortho normalisation for DC bin)
"""
import torch
in_dtype = x.dtype
N = x.shape[-1]
v = torch.cat([x[..., ::2], x[..., 1::2].flip(-1)], dim=-1)
V = torch.fft.fft(v.double(), dim=-1)
k = torch.arange(N, device=x.device, dtype=torch.float64)
tw = torch.exp(-1j * _math.pi * k / (2.0 * N))
C = (tw * V).real * _math.sqrt(2.0 / N)
C = C.clone()
C[..., 0] /= _math.sqrt(2.0)
return C.to(in_dtype)
def _idct2_gpu(X: "torch.Tensor") -> "torch.Tensor":
"""
Batched orthonormal IDCT-II on GPU. X: (..., N) — float32 or float64.
Returns same dtype as input.
Numerically matches scipy.fft.idct(X, type=2, norm='ortho') to ~1e-14.
Inverse of _dct2_gpu via conjugate-twiddle + IFFT (Makhoul 1980):
1. Undo ortho scaling: C = X·√(N/2); C[0] ·= √2
2. Build W[k] = C[k] − j·C[N−k] for k=0…N−1
where C[0] uses the W[0] = C[0] special case (ipart[0] = 0).
ipart[k] = −C[N−k] for k=1…N−1
↳ BUG FIX: use C.flip(-1)[..., :-1] which gives C[N-1], C[N-2], …, C[1]
The old code used Cf[1:] = C[N-2], C[N-3], …, C[0] — off by one.
3. Recover V: V = W · exp(+jπk/(2N))
4. v = Re(IFFT(V))
5. Un-interleave: x[2n] = v[n], x[2n+1] = v[N−1−n]
"""
import torch
in_dtype = X.dtype
N = X.shape[-1]
C = X.double() * _math.sqrt(N / 2.0)
C = C.clone() # avoid in-place on original
C[..., 0] *= _math.sqrt(2.0)
# ── BUG-GPU-3 FIX ────────────────────────────────────────────────────
# ipart[k] must equal -C[N-k] for k=1..N-1.
# C.flip(-1) = [C[N-1], C[N-2], ..., C[1], C[0]]
# C.flip(-1)[..., :-1] = [C[N-1], C[N-2], ..., C[1]] ← correct
# (old buggy code: -Cf[..., 1:] = -[C[N-2], C[N-3], ..., C[0]] ← off by one)
ipart = torch.zeros_like(C)
ipart[..., 1:] = -C.flip(-1)[..., :-1]
W = torch.view_as_complex(torch.stack([C, ipart], dim=-1))
k = torch.arange(N, device=X.device, dtype=torch.float64)
V = W * torch.exp(1j * _math.pi * k / (2.0 * N))
v = torch.fft.ifft(V, dim=-1).real
half = (N + 1) // 2
x = torch.empty_like(v)
x[..., ::2] = v[..., :half]
x[..., 1::2] = v[..., half:].flip(-1)
return x.to(in_dtype)
def _frana_gpu(x: "torch.Tensor", frame: str) -> "torch.Tensor":
"""
Batched analysis operator A: (..., M) → (..., P).
DCT frame: P = M → orthonormal DCT-II
RDFT frame: P = 2M → [DCT(x)/√2 ‖ DST(x)/√2]
DST-II(x) = DCT-II(x[::-1])
"""
import torch
if frame == "dct":
return _dct2_gpu(x)
s2 = _math.sqrt(2.0)
return torch.cat([_dct2_gpu(x) / s2, _dct2_gpu(x.flip(-1)) / s2], dim=-1)
def _frsyn_gpu(z: "torch.Tensor", frame: str, M: int) -> "torch.Tensor":
"""
Batched synthesis operator D = A^H: (..., P) → (..., M).
Adjoint of _frana_gpu. For RDFT the DST adjoint flips the OUTPUT.
"""
import torch
if frame == "dct":
return _idct2_gpu(z)
s2 = _math.sqrt(2.0)
cos_part = _idct2_gpu(z[..., :M]) / s2
sin_part = _idct2_gpu(z[..., M:]).flip(-1) / s2
return cos_part + sin_part
def _hard_thresh_gpu(u: "torch.Tensor", k: int) -> "torch.Tensor":
"""
Batched hard thresholding: keep k largest-magnitude coefficients per row.
u: (F, P). Returns same shape with all but top-k magnitudes zeroed.
"""
    import torch
    k = int(max(1, min(k, u.shape[-1])))
kth = torch.topk(u.abs(), k, dim=-1, sorted=True).values[..., -1:] # (F,1)
return u * (u.abs() >= kth)
def _sspade_batch_gpu(
yc_w: "torch.Tensor", # (F, M) windowed frames, already on device
Ir: "torch.Tensor", # (F, M) bool — reliable samples
Icp: "torch.Tensor", # (F, M) bool — positively limited
Icm: "torch.Tensor", # (F, M) bool — negatively limited
frame: str,
s: int,
r: int,
eps: float,
max_iter: int,
g_max: float = float("inf"), # v11: ratio-aware upper bound (linear)
) -> "Tuple[torch.Tensor, torch.Tensor]":
"""
Batched S-SPADE on GPU — all F frames processed simultaneously.
Determinism guarantees
----------------------
BUG-GPU-2 fix: ADMM runs in float64 throughout (yc_w is upcast at entry,
downcast to float32 on output). This matches the CPU path which also runs
in float64 via numpy/scipy. float32 would accumulate ~2.3 units of error
after 500 iterations vs float64's ~1e-14 — causing divergent ADMM trajectories.
BUG-GPU-1 fix: zi_final captures zi at the exact convergence iteration for
each frame. Without this, zi keeps being overwritten in subsequent iterations
for already-converged frames (dual ui stops updating but the zi update
expression keeps running for all frames). The CPU tight_sspade breaks
immediately on convergence; the GPU batch loop cannot break early, so
zi_final is the equivalent mechanism.
Convergence mask
----------------
A per-frame `active` bool mask marks frames still iterating.
- `conv[f]` = True once frame f has met the stopping criterion
- `active[f]` = ~conv[f]
- ui is updated only for active frames (correct — matches CPU which exits
before updating ui on the convergence iteration)
- zi_final[f] is frozen at the first iteration where conv[f] becomes True
Returns
-------
x_frames : (F, M) float32 — time-domain restored frames (on device)
converged : (F,) bool — True where ADMM converged within max_iter
"""
import torch
# ── BUG-GPU-2 FIX: upcast to float64 to match CPU float64 path ───────
yc_w64 = yc_w.double()
F, M = yc_w64.shape
zi = _frana_gpu(yc_w64, frame) # (F, P) float64
ui = torch.zeros_like(zi) # float64
k = s
active = torch.ones (F, dtype=torch.bool, device=yc_w.device)
conv = torch.zeros(F, dtype=torch.bool, device=yc_w.device)
# ── BUG-GPU-1 FIX: zi_final captures zi at the convergence iteration ─
# Frames that never converge will have zi_final = zi at loop exit.
zi_final = zi.clone()
for i in range(1, max_iter + 1):
# ── Step 2: sparsity (all frames) ────────────────────────────────
zb = _hard_thresh_gpu(zi + ui, k) # (F, P)
# ── Step 3: project onto Γ via eq.(12) ───────────────────────────
v_c = zb - ui # (F, P)
Dv = _frsyn_gpu(v_c, frame, M) # (F, M)
pDv = Dv.clone()
pDv[Ir] = yc_w64[Ir]
# Ratio-aware projection (v11): lower bound max(v, yc) AND optional upper bound
# Use finite g_max check to avoid 0 * inf = nan when g_max=inf (disabled).
lower_p = yc_w64[Icp]
if _math.isfinite(g_max):
upper_p = (lower_p * g_max).clamp(min=lower_p)
else:
upper_p = torch.full_like(lower_p, _math.inf)
pDv[Icp] = torch.clamp(torch.maximum(pDv[Icp], lower_p), max=upper_p)
lower_m = yc_w64[Icm] # negative values
if _math.isfinite(g_max):
lower_m_cap = (lower_m * g_max).clamp(max=lower_m)
else:
lower_m_cap = torch.full_like(lower_m, -_math.inf)
pDv[Icm] = torch.clamp(torch.minimum(pDv[Icm], lower_m), min=lower_m_cap)
zi = v_c - _frana_gpu(Dv - pDv, frame) # (F, P)
# ── Step 4: convergence check for still-active frames ─────────────
norms = (zi - zb).norm(dim=-1) # (F,)
new_conv = active & (norms <= eps)
if new_conv.any():
# Freeze zi at the convergence point — equivalent to CPU 'break'
zi_final[new_conv] = zi[new_conv]
conv |= new_conv
active = ~conv
if not active.any():
break
# ── Step 7: dual update for active frames only ────────────────────
# CPU tight_sspade updates ui AFTER the convergence check,
# meaning ui is NOT updated on the convergence iteration.
# Matching that: only active frames (not yet converged) update ui.
ui[active] = ui[active] + zi[active] - zb[active]
if i % r == 0:
k += s
# Frames that never converged: use their final zi
if active.any():
zi_final[active] = zi[active]
# Downcast output to float32 for WOLA accumulation
return _frsyn_gpu(zi_final, frame, M).float(), conv
def _declip_mono_gpu(
yc: np.ndarray,
params: "DeclipParams",
tau: float,
ch_label: str,
device: str,
progress_ctx = None,
task_id = None,
) -> "Tuple[np.ndarray, ClippingMasks]":
"""
GPU-accelerated mono declipping pipeline.
Three-pass strategy
-------------------
Pass 1 (CPU): extract all frames, compute bypass decisions and masks.
Pass 2 (GPU): pack active frames into a batch tensor and run
_sspade_batch_gpu — all frames in one GPU kernel sweep.
Pass 3 (CPU): sequential WOLA accumulation + RMS level match.
Progress behaviour
------------------
Bypassed frames advance the progress bar in real-time during Pass 1.
Active (GPU-processed) frames advance the bar immediately after
Pass 2 returns (appears as a single jump — mirrors how the GPU works).
"""
import torch
# ── DC removal (BUG-4 fix) ───────────────────────────────────────────
dc_offset = float(np.mean(yc))
yc = yc - dc_offset
# ── Ceiling and threshold ────────────────────────────────────────────
ceiling_pos = float(np.max(yc))
ceiling_neg = float(-np.min(yc))
if params.mode == "hard":
threshold = min(ceiling_pos, ceiling_neg)
else:
ceiling = max(ceiling_pos, ceiling_neg)
threshold = ceiling * (10.0 ** (-params.delta_db / 20.0))
if threshold <= 0.0:
return yc.copy(), _compute_masks(yc, 0.0)
masks = _compute_masks(yc, threshold)
# ── v11 Feature 1: envelope-based mask dilation (GPU path) ────────────
if params.mode == "soft" and params.release_ms > 0.0:
rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0))
if rel_samp > 0:
masks = _dilate_masks_soft(masks, yc, rel_samp)
# ── v11 Feature 4: macro-dynamics upward expansion pre-pass ──────────
if params.mode == "soft" and params.macro_expand and params.macro_ratio > 1.0:
yc = _macro_expand_pass(
yc, params.sample_rate,
attack_ms=params.macro_attack_ms,
release_ms=params.macro_release_ms,
ratio=params.macro_ratio,
)
masks = _compute_masks(yc, threshold)
if params.release_ms > 0.0:
rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0))
if rel_samp > 0:
masks = _dilate_masks_soft(masks, yc, rel_samp)
n_clipped = int(np.sum(~masks.Ir))
L = len(yc)
# ── v11 Feature 2: ratio-aware upper bound (linear) ──────────────────
g_max = (10.0 ** (params.max_gain_db / 20.0)
if params.mode == "soft" and params.max_gain_db > 0.0
else float("inf"))
if params.verbose:
ch = f" [{ch_label}]" if ch_label else ""
tag = "threshold" if params.mode == "soft" else "tau"
print(f"[declip{ch}] Length : {L} samples [device: {device}]")
print(f"[declip{ch}] DC offset : {dc_offset:+.6f} ({dc_offset*100:+.4f}%) → removed")
if params.mode == "hard":
print(f"[declip{ch}] {tag:<9} : {threshold:.6f} "
f"(pos_peak={ceiling_pos:.6f} neg_peak={ceiling_neg:.6f} using min)")
else:
print(f"[declip{ch}] ceiling : {max(ceiling_pos, ceiling_neg):.6f} "
f"(pos={ceiling_pos:.6f} neg={ceiling_neg:.6f})")
print(f"[declip{ch}] {tag:<9} : {threshold:.6f} "
f"(ceiling − {params.delta_db:.2f} dB = "
f"{20*np.log10(threshold/max(ceiling_pos,ceiling_neg)):.2f} dBFS)")
print(f"[declip{ch}] Detected : {n_clipped}/{L} "
f"({100*n_clipped/L:.1f}%) "
f"Icp={int(masks.Icp.sum())} Icm={int(masks.Icm.sum())}")
print(f"[declip{ch}] Algorithm : {params.algo.upper()} "
f"frame={params.frame.upper()} mode={params.mode.upper()} "
f"win={params.window_length} hop={params.hop_length} "
f"({100*(1-params.hop_length/params.window_length):.0f}% overlap) "
f"[GPU BATCH on {device}]")
if params.mode == "soft":
feats = []
if params.release_ms > 0: feats.append(f"release_ms={params.release_ms}")
if params.max_gain_db > 0: feats.append(f"max_gain_db={params.max_gain_db}")
if params.macro_expand: feats.append(f"macro_expand(ratio={params.macro_ratio})")
if feats:
print(f"[declip{ch}] v11 feats : " + " ".join(feats))
M = params.window_length
a = params.hop_length
N = int(np.ceil(L / a))
    win = np.sqrt(hann(M, sym=False))  # sqrt-Hann: satisfies COLA
t0 = time.time()
# ── Pass 1 (CPU): frame extraction, bypass filter, mask build ────────
# wola_meta[i] = (idx1, idx2, seg_len, is_bypassed)
# active_* = lists for non-bypassed frames only, in order
wola_meta : list = []
active_yc_w : list = [] # windowed frames for SPADE
active_Ir : list = []
active_Icp : list = []
active_Icm : list = []
active_orig_idx : list = [] # original frame index i → maps back into wola_meta
skipped = 0
for i in range(N):
idx1 = i * a
idx2 = min(idx1 + M, L)
seg_len = idx2 - idx1
pad = M - seg_len
yc_frame = np.zeros(M)
yc_frame[:seg_len] = yc[idx1:idx2]
if params.mode == "soft":
fp = float(np.max(np.abs(yc_frame[:seg_len]))) if seg_len else 0.0
if fp < threshold:
wola_meta.append((idx1, idx2, seg_len, True))
skipped += 1
if progress_ctx is not None and task_id is not None:
progress_ctx.advance(task_id, n_bypassed=skipped,
n_noconv=0, n_done=i + 1, n_total=N)
continue
wola_meta.append((idx1, idx2, seg_len, False))
active_yc_w.append(yc_frame * win)
active_Ir .append(np.concatenate([masks.Ir [idx1:idx2], np.ones (pad, dtype=bool)]))
active_Icp.append(np.concatenate([masks.Icp[idx1:idx2], np.zeros(pad, dtype=bool)]))
active_Icm.append(np.concatenate([masks.Icm[idx1:idx2], np.zeros(pad, dtype=bool)]))
active_orig_idx.append(len(wola_meta) - 1) # index into wola_meta
n_active = len(active_yc_w)
n_noconv = 0
x_active_results: dict = {} # wola_meta_index → x_frame (M,) numpy
# ── Pass 2 (GPU): batched S-SPADE ────────────────────────────────────
if n_active > 0:
yc_batch = torch.tensor(np.stack(active_yc_w), dtype=torch.float64, device=device)
Ir_batch = torch.tensor(np.stack(active_Ir), dtype=torch.bool, device=device)
Icp_batch = torch.tensor(np.stack(active_Icp), dtype=torch.bool, device=device)
Icm_batch = torch.tensor(np.stack(active_Icm), dtype=torch.bool, device=device)
if params.verbose:
ch = f" [{ch_label}]" if ch_label else ""
vmem = ""
try:
alloc = torch.cuda.memory_allocated(device) / 1024**2
vmem = f" VRAM used ≈ {alloc:.0f} MB"
except Exception:
pass
print(f"[declip{ch}] GPU pass : {n_active} active frames → "
f"{yc_batch.shape} batch{vmem}")
x_batch, conv_batch = _sspade_batch_gpu(
yc_batch, Ir_batch, Icp_batch, Icm_batch,
params.frame, params.s, params.r, params.eps, params.max_iter,
g_max=g_max,
)
x_np = x_batch.cpu().numpy()
conv_np = conv_batch.cpu().numpy()
n_noconv = int((~conv_np).sum())
for j, meta_idx in enumerate(active_orig_idx):
x_active_results[meta_idx] = x_np[j]
# Advance progress bar for GPU-processed frames (bulk update)
if progress_ctx is not None and task_id is not None:
for j in range(n_active):
progress_ctx.advance(task_id, n_bypassed=skipped,
n_noconv=n_noconv,
n_done=skipped + j + 1, n_total=N)
# ── Pass 3 (CPU): WOLA accumulation ───────────────────────────────────
x = np.zeros(L)
norm_win = np.zeros(L)
for meta_idx, (idx1, idx2, seg_len, is_bypassed) in enumerate(wola_meta):
if is_bypassed:
x [idx1:idx2] += yc[idx1:idx2] * win[:seg_len] ** 2
norm_win[idx1:idx2] += win[:seg_len] ** 2
else:
xf = x_active_results[meta_idx]
x [idx1:idx2] += xf[:seg_len] * win[:seg_len]
norm_win[idx1:idx2] += win[:seg_len] ** 2
norm_win = np.where(norm_win < 1e-12, 1.0, norm_win)
x /= norm_win
# ── Reliable-sample RMS match (BUG-3 fix) ────────────────────────────
Ir = masks.Ir
if Ir.sum() > 0:
rms_in = float(np.sqrt(np.mean(yc[Ir] ** 2)))
rms_out = float(np.sqrt(np.mean(x[Ir] ** 2)))
if rms_out > 1e-12 and rms_in > 1e-12:
x *= rms_in / rms_out
if params.verbose:
ch = f" [{ch_label}]" if ch_label else ""
skip_pct = 100.0 * skipped / N if N else 0.0
print(f"[declip{ch}] Frames : {N} total | "
f"active={n_active} (GPU) bypassed={skipped} ({skip_pct:.1f}%) "
f"no_conv={n_noconv} | time: {time.time()-t0:.1f}s")
return x, masks
def frana(x: np.ndarray, frame: str) -> np.ndarray:
"""
Analysis operator A : R^N → R^P.
For a tight Parseval frame A, the synthesis operator is D = A^H, and
A^H A = I_N (perfect reconstruction property).
DCT frame (P = N):
A = orthonormal DCT-II. A^H = A^{-1} = IDCT.
RDFT frame (P = 2N, redundancy 2):
A = [A₁; A₂] where A₁ = DCT-II/√2 and A₂ = DST-II/√2.
DST-II(x) is computed as DCT-II(x[::-1]).
Tight frame property: A₁^H A₁ + A₂^H A₂ = I/2 + I/2 = I. ✓
"""
if frame == "dct":
return dct(x, type=2, norm="ortho")
if frame == "rdft":
cos_part = dct(x, type=2, norm="ortho") / np.sqrt(2) # DCT-II / √2
sin_part = dct(x[::-1], type=2, norm="ortho") / np.sqrt(2) # DST-II / √2
return np.concatenate([cos_part, sin_part])
raise ValueError(f"Unknown frame '{frame}'")
def frsyn(z: np.ndarray, frame: str, M: int) -> np.ndarray:
"""
Synthesis operator D = A^H : R^P → R^N.
DCT frame:
D = IDCT (same matrix as A for orthonormal DCT).
RDFT frame:
D = [A₁^H, A₂^H] applied to [z₁; z₂]:
A₁^H z₁ = IDCT(z₁) / √2
A₂^H z₂ = IDCT(z₂)[::-1] / √2 ← correct: flip the OUTPUT
Note: the original v1 had the bug idct(z₂[::-1]) — flipping the INPUT.
Correct adjoint of DST-II requires IDCT(z₂)[::-1], NOT IDCT(z₂[::-1]).
"""
if frame == "dct":
return idct(z, type=2, norm="ortho")
if frame == "rdft":
cos_part = idct(z[:M], type=2, norm="ortho") / np.sqrt(2)
sin_part = idct(z[M:], type=2, norm="ortho")[::-1] / np.sqrt(2) # BUG-1 fix
return cos_part + sin_part
raise ValueError(f"Unknown frame '{frame}'")
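The adjoint relationship documented above (including the BUG-1 rule that the IDCT *output* is flipped, not the input) is easy to verify numerically. This standalone sketch re-implements the `rdft` pair inline (the helper names `frana_rdft` / `frsyn_rdft` are illustrative) and checks both perfect reconstruction and Parseval energy preservation:

```python
import numpy as np
from scipy.fft import dct, idct

def frana_rdft(x):
    # A = [A1; A2] with A1 = DCT-II/sqrt(2), A2 = DST-II/sqrt(2),
    # where DST-II(x) is computed as DCT-II(x[::-1])
    cos_part = dct(x, type=2, norm="ortho") / np.sqrt(2)
    sin_part = dct(x[::-1], type=2, norm="ortho") / np.sqrt(2)
    return np.concatenate([cos_part, sin_part])

def frsyn_rdft(z, M):
    # D = A^H: the adjoint of DST-II flips the OUTPUT of the IDCT
    cos_part = idct(z[:M], type=2, norm="ortho") / np.sqrt(2)
    sin_part = idct(z[M:], type=2, norm="ortho")[::-1] / np.sqrt(2)
    return cos_part + sin_part

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
z = frana_rdft(x)
x_rec = frsyn_rdft(z, len(x))
# Tight Parseval frame: D A x == x and ||z||^2 == ||x||^2
print(np.allclose(x_rec, x), np.isclose(np.dot(z, z), np.dot(x, x)))
```

Flipping the input instead (the original v1 bug) breaks the reconstruction check immediately, which is how BUG-1 was confirmed.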
# ============================================================================
# Hard thresholding H_k
# ============================================================================
def hard_thresh(u: np.ndarray, k: int) -> np.ndarray:
"""
Hard-thresholding operator H_k.
Keeps the k largest-magnitude components of u; sets all others to zero.
Corresponds to step 2 of both Algorithm 1 and Algorithm 2 in [1][2].
Parameters
----------
u : coefficient vector (in R^P)
k : number of non-zero coefficients to retain
Notes
-----
The papers remark that for real signals represented with complex DFT,
thresholding should act on conjugate pairs to preserve the real-signal
structure. Since our RDFT frame uses real DCT/DST, all coefficients
    are real-valued and standard element-wise thresholding is appropriate.
    Tie behaviour: if several components share the k-th largest magnitude,
    all of them are kept, so the support size may exceed k.
"""
k = int(np.clip(k, 1, len(u)))
alpha = np.sort(np.abs(u))[::-1][k - 1] # k-th largest magnitude
return u * (np.abs(u) >= alpha)
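A minimal worked example of H_k (standalone copy of the operator above):

```python
import numpy as np

def hard_thresh(u, k):
    # Keep the k largest-magnitude components of u; zero the rest
    k = int(np.clip(k, 1, len(u)))
    alpha = np.sort(np.abs(u))[::-1][k - 1]   # k-th largest magnitude
    return u * (np.abs(u) >= alpha)

u = np.array([0.1, -3.0, 2.0, 0.5])
out = hard_thresh(u, 2)   # keeps the two largest magnitudes: -3.0 and 2.0
print(out)                # → [ 0. -3.  2.  0.]
```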
# ============================================================================
# Projection onto the consistency set Γ
# ============================================================================
def proj_gamma(
w: np.ndarray,
yc: np.ndarray,
masks: ClippingMasks,
g_max: float = float("inf"), # v11: ratio-aware upper bound (linear)
) -> np.ndarray:
"""
Orthogonal projection onto Γ(y) in the time domain.
Implements eq. (6) of [2] / eq. (2) of [1]:
[proj_Γ(w)]_n = y_n if n ∈ R (reliable)
= max{w_n, τ} if n ∈ H (positive clip, i.e. ≥ τ)
= min{w_n, −τ} if n ∈ L (negative clip, i.e. ≤ −τ)
Equivalently, using bounding vectors b_L, b_H as in eq. (7)/(9) of [2]:
proj_{[b_L, b_H]}(w) = min{max{b_L, w}, b_H}
v11 — ratio-aware upper bound (g_max > 0, default disabled = inf):
[proj_Γ(w)]_n = clip(max(w_n, yc_n), yc_n, yc_n · g_max) for n ∈ Icp
= clip(min(w_n, yc_n), yc_n · g_max, yc_n) for n ∈ Icm
This prevents ADMM from generating transients above the limiter’s
expected maximum gain reduction while still honouring the lower bound.
Parameters
----------
w : time-domain signal to project (R^N)
yc : original clipped signal (R^N), provides boundary values
masks : clipping masks (Ir, Icp, Icm)
g_max : linear gain ceiling (default: inf = no cap, i.e. v10 behaviour).
Compute from max_gain_db as: g_max = 10 ** (max_gain_db / 20).
"""
v = w.copy()
v[masks.Ir] = yc[masks.Ir] # reliable: fix exactly
# Positive clipped: lower bound ≥ yc, optional upper bound ≤ yc * g_max
lo_p = yc[masks.Icp]
if np.isfinite(g_max):
hi_p = lo_p * g_max
else:
hi_p = np.full_like(lo_p, np.inf) # avoid 0 * inf = nan
v[masks.Icp] = np.clip(np.maximum(v[masks.Icp], lo_p), lo_p, hi_p)
# Negative clipped: upper bound ≤ yc, optional lower bound ≥ yc * g_max
lo_m = yc[masks.Icm] # negative values
if np.isfinite(g_max):
lo_m_cap = lo_m * g_max # more negative than lo_m
else:
lo_m_cap = np.full_like(lo_m, -np.inf) # avoid 0 * inf = nan
v[masks.Icm] = np.clip(np.minimum(v[masks.Icm], lo_m), lo_m_cap, lo_m)
return v
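A small demonstration of the projection, including the v11 `g_max` cap (standalone sketch; `ClippingMasks` is mimicked here with a namedtuple):

```python
import numpy as np
from collections import namedtuple

ClippingMasks = namedtuple("ClippingMasks", "Ir Icp Icm")

def proj_gamma(w, yc, masks, g_max=float("inf")):
    v = w.copy()
    v[masks.Ir] = yc[masks.Ir]                      # reliable: fix exactly
    lo_p = yc[masks.Icp]                            # positive clips: >= yc
    hi_p = lo_p * g_max if np.isfinite(g_max) else np.full_like(lo_p, np.inf)
    v[masks.Icp] = np.clip(np.maximum(v[masks.Icp], lo_p), lo_p, hi_p)
    lo_m = yc[masks.Icm]                            # negative clips: <= yc
    lo_cap = lo_m * g_max if np.isfinite(g_max) else np.full_like(lo_m, -np.inf)
    v[masks.Icm] = np.clip(np.minimum(v[masks.Icm], lo_m), lo_cap, lo_m)
    return v

yc = np.array([0.2, 0.9, -0.9])                     # 0.9 = clip level tau
masks = ClippingMasks(Ir=np.array([True, False, False]),
                      Icp=np.array([False, True, False]),
                      Icm=np.array([False, False, True]))
w = np.array([0.5, 0.7, -3.0])
v1 = proj_gamma(w, yc, masks)                       # → [0.2, 0.9, -3.0]
v2 = proj_gamma(w, yc, masks, g_max=2.0)            # cap: -3.0 clipped to -1.8
print(v1, v2)
```

Note how the unbounded projection lets the negative sample run away to −3.0, while `g_max=2.0` (= +6 dB) confines it to twice the limited value.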
# ============================================================================
# S-SPADE (Algorithm 1 in [2])
# ============================================================================
def tight_sspade(
yc: np.ndarray,
masks: ClippingMasks,
frame: str,
s: int,
r: int,
eps: float,
max_iter: int,
g_max: float = float("inf"), # v11: ratio-aware upper bound
) -> Tuple[np.ndarray, bool]:
"""
S-SPADE for one windowed audio frame.
Implements Algorithm 1 from [2], which uses the closed-form projection
lemma (eq. 12) to make per-iteration cost equal to A-SPADE:
ẑ^(i) = v - D^* ( D v - proj_{[b_L,b_H]}(D v) )
where v = z̄^(i) - u^(i-1)
State variables
---------------
zi : current estimate in coefficient domain (R^P)
ui : dual / guidance variable (R^P) — coefficient domain
k : current sparsity level (number of non-zero coefficients)
Convergence criterion (Algorithm 1, row 4 in [2])
-------------------------------------------------
‖ẑ^(i) - z̄^(i)‖₂ ≤ ε
"""
M = len(yc)
    zi = frana(yc, frame)                 # ẑ^(0) = A y = D^* y (row 1 in [2])
ui = np.zeros_like(zi) # u^(0) = 0
k = s
converged = False
for i in range(1, max_iter + 1):
# ── Step 2 : enforce sparsity ─────────────────────────────────────
# z̄^(i) = H_k( ẑ^(i-1) + u^(i-1) )
zb = hard_thresh(zi + ui, k)
# ── Step 3 : project onto Γ via eq.(12) from [2] ─────────────────
# v = z̄^(i) - u^(i-1) (coefficient domain)
v_coeff = zb - ui
# D v (time domain)
Dv = frsyn(v_coeff, frame, M)
# proj_{Γ}(D v)
proj_Dv = proj_gamma(Dv, yc, masks, g_max=g_max)
# ẑ^(i) = v - D^*( D v - proj(D v) )
zi = v_coeff - frana(Dv - proj_Dv, frame)
# ── Step 4 : convergence check ────────────────────────────────────
# ‖ẑ^(i) - z̄^(i)‖₂ ≤ ε
if np.linalg.norm(zi - zb) <= eps:
converged = True
break
# ── Step 7 : update dual variable ────────────────────────────────
# u^(i) = u^(i-1) + ẑ^(i) - z̄^(i)
ui = ui + zi - zb
# ── Sparsity relaxation (rows 9-11 in [2]) ────────────────────────
if i % r == 0:
k += s
# Return time-domain estimate: x̂ = D ẑ^(i)
return frsyn(zi, frame, M), converged
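To see Algorithm 1 in action, the following self-contained miniature runs S-SPADE with the orthonormal DCT frame on a synthetic 1-sparse frame clipped at τ = 0.6. Function names are illustrative, and the simplified `proj_gamma` omits the v11 `g_max` cap:

```python
import numpy as np
from scipy.fft import dct, idct

def hard_thresh(u, k):
    k = int(np.clip(k, 1, len(u)))
    alpha = np.sort(np.abs(u))[::-1][k - 1]
    return u * (np.abs(u) >= alpha)

def proj_gamma(w, yc, Ir, Icp, Icm):
    v = w.copy()
    v[Ir] = yc[Ir]
    v[Icp] = np.maximum(v[Icp], yc[Icp])
    v[Icm] = np.minimum(v[Icm], yc[Icm])
    return v

def sspade_dct(yc, Ir, Icp, Icm, s=1, r=2, eps=1e-4, max_iter=500):
    zi = dct(yc, type=2, norm="ortho")            # ẑ^(0) = A y
    ui = np.zeros_like(zi)                        # u^(0) = 0
    k = s
    for i in range(1, max_iter + 1):
        zb = hard_thresh(zi + ui, k)              # z̄ = H_k(ẑ + u)
        v = zb - ui
        Dv = idct(v, type=2, norm="ortho")
        # ẑ = v - A(Dv - proj_Γ(Dv))  (eq. 12 closed-form projection)
        zi = v - dct(Dv - proj_gamma(Dv, yc, Ir, Icp, Icm),
                     type=2, norm="ortho")
        if np.linalg.norm(zi - zb) <= eps:
            return idct(zi, type=2, norm="ortho"), True
        ui = ui + zi - zb                         # dual update
        if i % r == 0:                            # sparsity relaxation
            k += s
    return idct(zi, type=2, norm="ortho"), False

# Synthetic frame: a single DCT atom (1-sparse), clipped at tau = 0.6
N = 64
n = np.arange(N)
x_true = np.cos(np.pi * (n + 0.5) * 3 / N)
tau = 0.6
yc = np.clip(x_true, -tau, tau)
Icp, Icm = yc >= tau, yc <= -tau
Ir = ~(Icp | Icm)
x_hat, conv = sspade_dct(yc, Ir, Icp, Icm)
err_in = np.linalg.norm(yc - x_true)
err_out = np.linalg.norm(x_hat - x_true)
print(f"clipped err={err_in:.3f}  recovered err={err_out:.3f}")
```

The output always lies in Γ (reliable samples are reproduced exactly and clipped samples respect their one-sided bounds); on this easy 1-sparse case it also gets substantially closer to the true signal than the clipped input.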
# ============================================================================
# A-SPADE (Algorithm 2 in [2])
# ============================================================================
def tight_aspade(
yc: np.ndarray,
masks: ClippingMasks,
frame: str,
s: int,
r: int,
eps: float,
max_iter: int,
g_max: float = float("inf"), # v11: ratio-aware upper bound
) -> Tuple[np.ndarray, bool]:
"""
A-SPADE for one windowed audio frame.
Implements Algorithm 2 from [2]. The projection step uses the
closed-form formula from eq.(5)/(8) of [2]:
x̂^(i) = proj_{[b_L, b_H]}( A^H ( z̄^(i) − u^(i-1) ) )
= proj_Γ( D ( z̄^(i) − u^(i-1) ) )
= proj_Γ( frsyn(zb − ui) )
State variables
---------------
xi : current estimate in signal domain (R^N)
ui : dual / guidance variable (R^P) — COEFFICIENT domain [BUG-2 fix]
k : current sparsity level
Convergence criterion (Algorithm 2, row 4 in [2])
-------------------------------------------------
‖A x̂^(i) − z̄^(i)‖₂ ≤ ε (coefficient-domain norm) [BUG-2c fix]
"""
M = len(yc)
P = _frame_size(M, frame)
xi = yc.copy() # x̂^(0) = y
ui = np.zeros(P) # u^(0) = 0 — coefficient domain R^P [BUG-2 fix]
k = s
converged = False
for i in range(1, max_iter + 1):
# ── Step 2 : enforce sparsity ─────────────────────────────────────
# z̄^(i) = H_k( A x̂^(i-1) + u^(i-1) )
# Note: frana(xi) + ui, NOT frana(xi + frsyn(ui)) [BUG-2a fix]
zb = hard_thresh(frana(xi, frame) + ui, k)
# ── Step 3 : project onto Γ ───────────────────────────────────────
# x̂^(i) = proj_Γ( A^H( z̄^(i) - u^(i-1) ) )
# = proj_Γ( frsyn( zb - ui ) ) [BUG-2b fix]
xi_new = proj_gamma(frsyn(zb - ui, frame, M), yc, masks, g_max=g_max)
# ── Step 4 : convergence check ────────────────────────────────────
# ‖A x̂^(i) - z̄^(i)‖₂ ≤ ε (coefficient-domain norm) [BUG-2c fix]
if np.linalg.norm(frana(xi_new, frame) - zb) <= eps:
converged = True
xi = xi_new
break
# ── Step 7 : update dual variable ────────────────────────────────
# u^(i) = u^(i-1) + A x̂^(i) - z̄^(i) [BUG-2d fix]
ui = ui + frana(xi_new, frame) - zb
xi = xi_new
# ── Sparsity relaxation ───────────────────────────────────────────
if i % r == 0:
k += s
return xi, converged
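The same miniature setup also exercises Algorithm 2. Note the structural difference from S-SPADE: the iterate x̂ lives in the signal domain while the dual variable stays in the coefficient domain throughout (the BUG-2 fix). Standalone sketch with the DCT frame and illustrative names:

```python
import numpy as np
from scipy.fft import dct, idct

def hard_thresh(u, k):
    k = int(np.clip(k, 1, len(u)))
    alpha = np.sort(np.abs(u))[::-1][k - 1]
    return u * (np.abs(u) >= alpha)

def proj_gamma(w, yc, Ir, Icp, Icm):
    v = w.copy()
    v[Ir] = yc[Ir]
    v[Icp] = np.maximum(v[Icp], yc[Icp])
    v[Icm] = np.minimum(v[Icm], yc[Icm])
    return v

def aspade_dct(yc, Ir, Icp, Icm, s=1, r=2, eps=1e-4, max_iter=500):
    xi = yc.copy()                 # x̂^(0) = y (signal domain)
    ui = np.zeros(len(yc))         # u^(0) = 0 — COEFFICIENT domain
    k = s
    for i in range(1, max_iter + 1):
        # z̄ = H_k(A x̂ + u)
        zb = hard_thresh(dct(xi, type=2, norm="ortho") + ui, k)
        # x̂ = proj_Γ(A^H(z̄ - u))
        xi_new = proj_gamma(idct(zb - ui, type=2, norm="ortho"),
                            yc, Ir, Icp, Icm)
        Axi = dct(xi_new, type=2, norm="ortho")
        if np.linalg.norm(Axi - zb) <= eps:   # coefficient-domain norm
            return xi_new, True
        ui = ui + Axi - zb                    # dual update in R^P
        xi = xi_new
        if i % r == 0:
            k += s
    return xi, False

N = 64
n = np.arange(N)
x_true = np.cos(np.pi * (n + 0.5) * 3 / N)   # 1-sparse in DCT
tau = 0.6
yc = np.clip(x_true, -tau, tau)
Icp, Icm = yc >= tau, yc <= -tau
Ir = ~(Icp | Icm)
x_hat, conv = aspade_dct(yc, Ir, Icp, Icm)
print(f"recovered err={np.linalg.norm(x_hat - x_true):.3f}")
```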
# ============================================================================
# Main declipping pipeline
# ============================================================================
def _compute_masks(yc: np.ndarray, threshold: float) -> ClippingMasks:
"""
Compute clipping/limiting masks from a 1-D signal and a detection threshold.
Works for both modes:
hard (mode='hard'): threshold = tau (samples exactly at digital ceiling)
soft (mode='soft'): threshold = tau * 10^(-delta_db/20) (limiter threshold)
In soft mode, samples above the threshold have their TRUE value constrained
to be ≥ their current (limited) value. proj_gamma already implements this
correctly via v[Icp] = max(v[Icp], yc[Icp]) — since yc[Icp] is the actual
limited value, not tau. No change to the projection operator is needed.
"""
Icp = yc >= threshold
Icm = yc <= -threshold
Ir = ~(Icp | Icm)
return ClippingMasks(Ir=Ir, Icp=Icp, Icm=Icm)
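A short sketch of the soft-mode detection path (`threshold = ceiling − delta_db dB`, here 1 dB; the free-standing `compute_masks` mirrors the function above without the dataclass):

```python
import numpy as np

def compute_masks(yc, threshold):
    Icp = yc >= threshold       # potentially attenuated, true value >= yc
    Icm = yc <= -threshold      # potentially attenuated, true value <= yc
    Ir = ~(Icp | Icm)           # reliable
    return Ir, Icp, Icm

yc = np.array([0.10, 0.95, -0.95, 0.50, 0.88])
ceiling = float(np.max(np.abs(yc)))           # 0.95
threshold = ceiling * 10.0 ** (-1.0 / 20.0)   # delta_db = 1 dB → ≈ 0.8467
Ir, Icp, Icm = compute_masks(yc, threshold)
print(Icp.tolist())  # → [False, True, False, False, True]
```

The 0.88 sample sits below the ceiling but above the limiter threshold, so it is correctly flagged as potentially attenuated rather than reliable.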
# ============================================================================
# v11 — Delimiting helper functions
# ============================================================================
def _dilate_masks_soft(
masks: ClippingMasks,
yc: np.ndarray,
release_samples: int,
) -> ClippingMasks:
"""
Forward morphological dilation of the soft-mode clipping masks.
A mastering limiter does not merely clip the peak sample; its release time
causes gain reduction to persist for `release_samples` samples after each
peak. Without dilation, those post-peak samples are pinned as "reliable"
(Ir), forcing the ADMM solver to anchor the reconstruction to artificially
attenuated values and producing the pumping artifact.
Algorithm
---------
For each True position in Icp or Icm, the following `release_samples`
positions are also flagged as constrained (Icp/Icm). Implemented as a
causal linear convolution:
dilated = convolve(mask, ones(release_samples + 1))[:N] > 0
Newly flagged samples are reclassified by polarity:
yc[n] >= 0 → Icp (true value ≥ yc[n], always satisfied by limiter model)
yc[n] < 0 → Icm (true value ≤ yc[n], same reasoning)
This is mathematically valid because a gain-reducing limiter always
produces |yc[n]| ≤ |true[n]| on every attenuated sample.
Parameters
----------
masks : original ClippingMasks from _compute_masks
yc : DC-removed signal (same length as masks)
release_samples : dilation width = round(release_ms * sr / 1000)
Returns
-------
ClippingMasks with expanded Icp, Icm and correspondingly shrunk Ir.
"""
if release_samples <= 0:
return masks
N = len(yc)
kern = np.ones(release_samples + 1, dtype=np.float64)
# Causal forward dilation: each True position infects the next
# release_samples positions (conv[:N] gives the causal output).
dil_cp = np.convolve(masks.Icp.astype(np.float64), kern)[:N] > 0
dil_cm = np.convolve(masks.Icm.astype(np.float64), kern)[:N] > 0
# Union of original and dilated masks
new_Icp = dil_cp | dil_cm # will be filtered by polarity below
new_Icm = dil_cp | dil_cm
# Assign dilated samples by polarity of the limited signal
new_Icp = new_Icp & (yc >= 0) # positive half
new_Icm = new_Icm & (yc < 0) # negative half
# Reliable = everything not in Icp or Icm
new_Ir = ~(new_Icp | new_Icm)
return ClippingMasks(Ir=new_Ir, Icp=new_Icp, Icm=new_Icm)
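The causal convolution trick reduces to a few lines; each flagged peak infects the following `release_samples` positions (standalone sketch of the dilation step only, before the polarity split):

```python
import numpy as np

def dilate_forward(mask, release_samples):
    # Causal dilation: conv with ones(release_samples + 1), truncated to N
    kern = np.ones(release_samples + 1, dtype=np.float64)
    return np.convolve(mask.astype(np.float64), kern)[:len(mask)] > 0

m = np.array([False, True, False, False, False, False])
d = dilate_forward(m, 2)
print(d.tolist())  # → [False, True, True, True, False, False]
```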
def _lr_split(x: np.ndarray, fc: float, sr: int) -> "Tuple[np.ndarray, np.ndarray]":
"""
Phase-perfect Linkwitz-Riley crossover at frequency `fc` Hz.
Returns (lp, hp) such that lp + hp == x exactly (perfect reconstruction
by construction: hp = x - lp). The LP is a zero-phase 4th-order
Butterworth realised with sosfiltfilt.
A 4th-order zero-phase Butterworth (sosfiltfilt of 2nd-order coefficients)
has the same amplitude response as LR4 at the crossover point (−6 dB at
fc) and is computationally convenient. Summing LP + HP = x eliminates
any phase-cancellation artifact at the crossover frequency.
Parameters
----------
x : 1-D signal array
fc : crossover frequency in Hz (clamped to [1, sr/2 − 1])
sr : sample rate in Hz
"""
from scipy.signal import butter, sosfiltfilt
fc_safe = float(np.clip(fc, 1.0, sr / 2.0 - 1.0))
sos = butter(2, fc_safe, btype="low", fs=sr, output="sos")
lp = sosfiltfilt(sos, x)
hp = x - lp # perfect reconstruction: no leakage at any frequency
return lp, hp
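Perfect reconstruction holds by construction, independent of the filter's accuracy, because the high band is defined as the residual. A quick check (standalone re-implementation of the split):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lr_split(x, fc, sr):
    fc_safe = float(np.clip(fc, 1.0, sr / 2.0 - 1.0))
    sos = butter(2, fc_safe, btype="low", fs=sr, output="sos")
    lp = sosfiltfilt(sos, x)          # zero-phase LP (4th-order magnitude)
    return lp, x - lp                 # hp = residual → exact reconstruction

rng = np.random.default_rng(1)
sr = 48_000
x = rng.standard_normal(sr // 10)     # 100 ms of white noise
lp, hp = lr_split(x, 200.0, sr)
print(np.allclose(lp + hp, x))        # → True, by construction
```

For white noise, the 200 Hz low band carries only a small fraction of the total energy, which is a cheap sanity check that the filter is actually doing something.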
def _macro_expand_pass(
yc: np.ndarray,
sr: int,
attack_ms: float = 10.0,
release_ms: float = 200.0,
ratio: float = 1.2,
) -> np.ndarray:
"""
Macro-dynamics upward expansion pre-pass.
Restores the slow (>21 ms) amplitude modulation suppressed by a mastering
limiter's release time — the "body compression" that SPADE cannot undo
because it operates frame-by-frame at ~21 ms windows.
Algorithm
---------
1. Compute a zero-phase smoothed peak envelope using sosfiltfilt.
The attack and release IIR time constants map to Butterworth LP cutoffs:
fc_att = 2.2 / (2π · attack_s) [−3 dB at attack cutoff]
fc_rel = 2.2 / (2π · release_s)
Two passes (attack on rising, release on falling) are approximated by
using the *slower* of the two for the LP filter (conservative choice).
2. Threshold: 80th-percentile of the non-silent envelope values.
Above the threshold the signal is already "loud" → no expansion.
Below the threshold it was compressed → apply upward expansion gain.
3. Expansion gain (standard upward-expander transfer function):
g(n) = (env(n) / threshold)^(1/ratio − 1) env < threshold
= 1.0 otherwise
For ratio > 1, (1/ratio − 1) < 0, so g > 1 when env < threshold
(quiet sections get boosted).
4. Gain is smoothed with a 20 Hz LP to prevent clicks, then hard-clipped
to [1.0, ∞) so the pre-pass only expands — it never attenuates.
Parameters
----------
yc : 1-D float signal (DC-removed, level-normalised)
sr : sample rate in Hz
attack_ms : expander attack time constant (ms); typically 5–20 ms
release_ms : expander release time constant (ms); typically 100–300 ms
ratio : expansion ratio >1.0; 1.0 = bypass, 1.2 = gentle
Returns
-------
Expanded signal with the same length as yc.
"""
from scipy.signal import butter, sosfiltfilt
if ratio <= 1.0:
return yc.copy()
x_abs = np.abs(yc)
# ── Envelope follower ─────────────────────────────────────────────────
# Use the *slower* time constant (release) for the zero-phase LP filter.
# This approximates a peak-hold envelope that attacks fast and releases slow.
rel_s = max(release_ms, attack_ms) / 1000.0
fc_env = min(2.2 / (2.0 * np.pi * rel_s), sr / 2.0 - 1.0)
sos_e = butter(2, fc_env, fs=sr, output="sos")
env = sosfiltfilt(sos_e, x_abs)
env = np.maximum(env, 1e-10)
# ── Threshold: 80th percentile of non-silent samples ─────────────────
mask_sig = env > 1e-6
if not mask_sig.any():
return yc.copy()
thresh = float(np.percentile(env[mask_sig], 80))
thresh = max(thresh, 1e-8)
# ── Expansion gain ────────────────────────────────────────────────────
exponent = 1.0 / ratio - 1.0 # negative for ratio > 1
g = np.where(env >= thresh,
1.0,
(env / thresh) ** exponent)
# ── Smooth gain to avoid clicks (~20 Hz LP) ───────────────────────────
fc_g = min(20.0, sr / 2.0 - 1.0)
sos_g = butter(2, fc_g, fs=sr, output="sos")
g = sosfiltfilt(sos_g, g)
g = np.maximum(g, 1.0) # upward only — never attenuate
return yc * g
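The transfer function in step 3 is worth isolating: for ratio > 1 the exponent (1/ratio − 1) is negative, so sub-threshold envelope values map to gains above unity. Standalone sketch of the gain curve only (envelope smoothing and the 20 Hz gain filter are omitted):

```python
import numpy as np

def expander_gain(env, thresh, ratio):
    # g = (env/thresh)^(1/ratio - 1) below threshold, 1 at/above it
    exponent = 1.0 / ratio - 1.0      # negative for ratio > 1
    return np.where(env >= thresh, 1.0, (env / thresh) ** exponent)

env = np.array([0.1, 0.25, 0.5, 1.0])
g = expander_gain(env, thresh=0.5, ratio=1.2)
print(g)   # quietest sample boosted the most; at/above threshold g = 1
```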
def _declip_mono(
yc: np.ndarray,
params: DeclipParams,
tau: float, # pre-computed global ceiling — used only as hint;
# always recomputed internally after DC removal.
ch_label: str = "",
frame_workers: int = 1, # v8: intra-channel frame-level parallelism
progress_ctx = None, # v9: shared _*Progress instance (or None)
task_id = None, # v9: task handle returned by progress_ctx.add_task
) -> Tuple[np.ndarray, ClippingMasks]:
"""
Core mono declipping / delimiting pipeline (internal).
Parameters
----------
yc : 1-D float array — one channel of the input signal
params : DeclipParams
tau : ceiling hint (pre-computed in declip()); kept for API compat,
recomputed internally after DC removal.
ch_label : string used in verbose output, e.g. "L" or "R"
DC removal (BUG-4 fix, v5)
--------------------------
A DC offset as small as 0.3% makes the global peak asymmetric, causing
the lower-polarity ceiling to fall just below tau and be misclassified as
reliable. Fix: subtract per-channel mean before all threshold computations.
The DC is discarded on output (recording artefact, not musical content).
Soft mode (v6)
--------------
When params.mode == 'soft', the threshold is set to:
threshold = ceiling * 10^(-delta_db / 20)
where ceiling = max(|yc|) after DC removal.
This marks all samples above the limiter threshold as potentially attenuated.
The BUG-4 half-wave issue is inherently avoided in soft mode because the
threshold sits delta_db dB BELOW the ceiling; small DC asymmetries (typically
< 0.05 dB) cannot push the opposite polarity's ceiling below the threshold.
DC removal is still performed for cleanliness.
proj_gamma correctness in soft mode
------------------------------------
For limited samples, the true value satisfies: true ≥ yc[n] (one-sided).
proj_gamma already implements exactly this:
v[Icp] = max(v[Icp], yc[Icp])
Since yc[Icp] here is the *actual limited value* (not tau), the constraint
is correct. No change to tight_sspade or tight_aspade is needed.
"""
# ── DC removal (BUG-4 fix, applies to both modes) ────────────────────
dc_offset = float(np.mean(yc))
yc = yc - dc_offset # DC-free working copy
# ── Ceiling and threshold ─────────────────────────────────────────────
ceiling_pos = float(np.max(yc)) # positive peak after DC removal
ceiling_neg = float(-np.min(yc)) # negative peak (absolute value)
if params.mode == "hard":
# BUG-4 fix: use min(pos, neg) so both half-waves are always detected
threshold = min(ceiling_pos, ceiling_neg)
else:
# soft mode: threshold = ceiling − delta_db dB
# Use max(pos, neg) for ceiling — we want the actual brickwall level,
# and the threshold is well below it so small asymmetries don't matter.
ceiling = max(ceiling_pos, ceiling_neg)
threshold = ceiling * (10.0 ** (-params.delta_db / 20.0))
if threshold <= 0.0:
return yc.copy(), _compute_masks(yc, 0.0)
masks = _compute_masks(yc, threshold)
# ── v11 Feature 1: envelope-based mask dilation ───────────────────────
if params.mode == "soft" and params.release_ms > 0.0:
rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0))
if rel_samp > 0:
masks = _dilate_masks_soft(masks, yc, rel_samp)
# ── v11 Feature 4: macro-dynamics upward expansion pre-pass ──────────
if params.mode == "soft" and params.macro_expand and params.macro_ratio > 1.0:
yc = _macro_expand_pass(
yc, params.sample_rate,
attack_ms=params.macro_attack_ms,
release_ms=params.macro_release_ms,
ratio=params.macro_ratio,
)
# Recompute masks on the expanded signal so Ir values are correct
masks = _compute_masks(yc, threshold)
if params.release_ms > 0.0:
rel_samp = max(0, round(params.release_ms * params.sample_rate / 1000.0))
if rel_samp > 0:
masks = _dilate_masks_soft(masks, yc, rel_samp)
n_clipped = int(np.sum(~masks.Ir))
L = len(yc)
# ── v11 Feature 2: ratio-aware upper bound (linear) ──────────────────
g_max = (10.0 ** (params.max_gain_db / 20.0)
if params.mode == "soft" and params.max_gain_db > 0.0
else float("inf"))
if params.verbose:
ch = (" [" + ch_label + "]") if ch_label else ""
tag = "threshold" if params.mode == "soft" else "tau"
print(f"[declip{ch}] Length : {L} samples")
print(f"[declip{ch}] DC offset : {dc_offset:+.6f} ({dc_offset*100:+.4f}%) → removed")
if params.mode == "hard":
print(f"[declip{ch}] {tag:<9} : {threshold:.6f} "
f"(pos_peak={ceiling_pos:.6f} neg_peak={ceiling_neg:.6f} using min)")
else:
print(f"[declip{ch}] ceiling : {max(ceiling_pos, ceiling_neg):.6f} "
f"(pos={ceiling_pos:.6f} neg={ceiling_neg:.6f})")
print(f"[declip{ch}] {tag:<9} : {threshold:.6f} "
f"(ceiling − {params.delta_db:.2f} dB = "
f"{20*np.log10(threshold/max(ceiling_pos,ceiling_neg)):.2f} dBFS)")
print(f"[declip{ch}] Detected : {n_clipped}/{L} "
f"({100*n_clipped/L:.1f}%) "
f"Icp={int(masks.Icp.sum())} Icm={int(masks.Icm.sum())}")
print(f"[declip{ch}] Algorithm : {params.algo.upper()} "
f"frame={params.frame.upper()} mode={params.mode.upper()} "
f"win={params.window_length} hop={params.hop_length} "
f"({100*(1-params.hop_length/params.window_length):.0f}% overlap)")
if params.mode == "soft":
feats = []
if params.release_ms > 0: feats.append(f"release_ms={params.release_ms}")
if params.max_gain_db > 0: feats.append(f"max_gain_db={params.max_gain_db}")
if params.macro_expand: feats.append(f"macro_expand(ratio={params.macro_ratio})")
if feats:
print(f"[declip{ch}] v11 feats : " + " ".join(feats))
spade_fn = tight_sspade if params.algo == "sspade" else tight_aspade
M = params.window_length
a = params.hop_length
N = int(np.ceil(L / a))
win = np.sqrt(hann(M, sym=False)) # sqrt-Hann: satisfies COLA
x = np.zeros(L)
norm_win = np.zeros(L)
no_conv = 0
skipped = 0 # frames bypassed by frame-adaptive threshold (soft only)
t0 = time.time()
# ── Per-frame worker (pure computation, no shared-state writes) ──────
# Returns all data needed for WOLA accumulation; the accumulation itself
# is always done sequentially to avoid race conditions on x / norm_win.
def _process_frame(i: int):
idx1 = i * a
idx2 = min(idx1 + M, L)
seg_len = idx2 - idx1
pad = M - seg_len
yc_frame = np.zeros(M)
yc_frame[:seg_len] = yc[idx1:idx2]
# ── Frame-adaptive bypass (soft mode only, v7) ───────────────────
if params.mode == "soft":
frame_peak = float(np.max(np.abs(yc_frame[:seg_len]))) if seg_len > 0 else 0.0
if frame_peak < threshold:
return idx1, idx2, seg_len, None, False, True # bypassed=True
yc_frame_w = yc_frame * win
fm = ClippingMasks(
Ir = np.concatenate([masks.Ir [idx1:idx2], np.ones (pad, dtype=bool)]),
Icp = np.concatenate([masks.Icp[idx1:idx2], np.zeros(pad, dtype=bool)]),
Icm = np.concatenate([masks.Icm[idx1:idx2], np.zeros(pad, dtype=bool)]),
)
x_frame, conv = spade_fn(
yc_frame_w, fm,
params.frame, params.s, params.r, params.eps, params.max_iter,
g_max=g_max,
)
return idx1, idx2, seg_len, x_frame, conv, False # bypassed=False
# ── Parallel SPADE compute (v8) + live progress (v9) ─────────────────
# scipy.fft.dct releases the GIL → threads run truly in parallel on DCT.
# WOLA accumulation (cheap) is kept sequential to avoid data races.
#
# Progress strategy:
# parallel: pool.submit + as_completed → advance bar as each frame lands
# sequential: plain loop with advance after each frame
#
# frame_results[i] is stored by *original index* so WOLA order is preserved.
frame_results: list = [None] * N
_n_bypassed = 0
_n_noconv = 0
def _advance(n_done: int):
if progress_ctx is not None and task_id is not None:
progress_ctx.advance(task_id,
n_bypassed=_n_bypassed,
n_noconv=_n_noconv,
n_done=n_done,
n_total=N)
if frame_workers > 1:
from concurrent.futures import as_completed
with ThreadPoolExecutor(max_workers=frame_workers) as pool:
future_to_idx = {pool.submit(_process_frame, i): i for i in range(N)}
n_done = 0
for future in as_completed(future_to_idx):
i = future_to_idx[future]
frame_results[i] = future.result()
n_done += 1
# Peek at result to update live counters before advancing bar
*_, conv, bypassed = frame_results[i]
if bypassed:
_n_bypassed += 1
elif not conv:
_n_noconv += 1
_advance(n_done)
else:
for i in range(N):
frame_results[i] = _process_frame(i)
*_, conv, bypassed = frame_results[i]
if bypassed:
_n_bypassed += 1
elif not conv:
_n_noconv += 1
_advance(i + 1)
# ── Sequential WOLA accumulation ─────────────────────────────────────
for idx1, idx2, seg_len, x_frame, conv, bypassed in frame_results:
if bypassed:
yc_seg = yc[idx1:idx2]
x [idx1:idx2] += yc_seg * win[:seg_len] ** 2
norm_win[idx1:idx2] += win[:seg_len] ** 2
skipped += 1
else:
if not conv:
no_conv += 1
x [idx1:idx2] += x_frame[:seg_len] * win[:seg_len]
norm_win[idx1:idx2] += win[:seg_len] ** 2
# WOLA normalisation
norm_win = np.where(norm_win < 1e-12, 1.0, norm_win)
x /= norm_win
# ── Reliable-sample level matching (BUG-3 fix) ───────────────────────
# Rescale output so its RMS on reliable samples matches the input RMS.
# Eliminates per-channel WOLA gain drift (up to 5 dB in stereo material).
Ir = masks.Ir
if Ir.sum() > 0:
rms_in = float(np.sqrt(np.mean(yc[Ir] ** 2)))
rms_out = float(np.sqrt(np.mean(x[Ir] ** 2)))
if rms_out > 1e-12 and rms_in > 1e-12:
x *= rms_in / rms_out
if params.verbose:
ch = (" [" + ch_label + "]") if ch_label else ""
active = N - skipped
skip_pct = 100.0 * skipped / N if N > 0 else 0.0
if params.mode == "soft" and skipped > 0:
print(f"[declip{ch}] Frames : {N} total | "
f"active={active} bypassed={skipped} ({skip_pct:.1f}%) "
f"no_conv={no_conv} | time: {time.time()-t0:.1f}s")
else:
print(f"[declip{ch}] Frames : {N} (no conv: {no_conv}) "
f"time: {time.time()-t0:.1f}s")
return x, masks
def declip(
yc: np.ndarray,
params: "DeclipParams | None" = None,
) -> "Tuple[np.ndarray, Union[ClippingMasks, List[ClippingMasks]]]":
"""
Declip a hard-clipped audio signal — mono or multi-channel.
Accepts either:
* a 1-D array (N_samples,) — mono
* a 2-D array (N_samples, N_channels) — stereo / surround
For multi-channel input, tau is detected from the global peak across
ALL channels, modelling the single hardware clipping threshold correctly.
Each channel is then processed independently. Parallel processing is
controlled by params.n_jobs.
Parameters
----------
yc : float array, shape (N,) or (N, C)
params : DeclipParams (defaults used if None)
Returns
-------
x : declipped signal, same shape as yc
masks : ClippingMasks (mono input)
list of ClippingMasks (multi-channel input, one per channel)
"""
if params is None:
params = DeclipParams()
yc = np.asarray(yc, dtype=float)
# ── v11 Feature 3: Multiband (Linkwitz-Riley) routing ───────────────────
# When multiband=True, we split the signal into frequency bands, process
# each independently with its own delta_db, then sum back. The split
# uses perfect-reconstruction LP+HP pairs (HP = input − LP), so the sum
# always reconstructs the original without leakage artifacts.
# This wrapper recurses into declip() with multiband=False for each band.
if params.multiband and params.mode == "soft":
from dataclasses import replace as _dc_replace
crossovers = list(params.band_crossovers)
n_bands = len(crossovers) + 1
sr = params.sample_rate
# Per-band delta_db: use band_delta_db if fully specified, else fall back
if len(params.band_delta_db) == n_bands:
band_deltas = list(params.band_delta_db)
else:
band_deltas = [params.delta_db] * n_bands
# Split signal into bands using cascaded LP / HP = input − LP
if yc.ndim == 2:
# Process each channel’s bands independently, same crossovers
n_samp, n_ch = yc.shape
out = np.zeros_like(yc)
all_masks = []
for c in range(n_ch):
ch_sig = yc[:, c]
ch_out = np.zeros(n_samp)
ch_masks = []
remainder = ch_sig.copy()
                for fc, d_db in zip(crossovers, band_deltas[:-1]):
lp, remainder = _lr_split(remainder, fc, sr)
band_params = _dc_replace(params, multiband=False, delta_db=d_db)
band_fixed, band_mask = declip(lp, band_params)
ch_out += band_fixed
ch_masks.append(band_mask)
# Last band (remainder is the HP)
band_params = _dc_replace(params, multiband=False, delta_db=band_deltas[-1])
band_fixed, band_mask = declip(remainder, band_params)
ch_out += band_fixed
ch_masks.append(band_mask)
out[:, c] = ch_out
all_masks.append(ch_masks)
return out, all_masks
else:
# Mono multiband
out = np.zeros_like(yc)
all_masks = []
remainder = yc.copy()
            for fc, d_db in zip(crossovers, band_deltas[:-1]):
lp, remainder = _lr_split(remainder, fc, sr)
band_params = _dc_replace(params, multiband=False, delta_db=d_db)
band_fixed, band_mask = declip(lp, band_params)
out += band_fixed
all_masks.append(band_mask)
# Last band
band_params = _dc_replace(params, multiband=False, delta_db=band_deltas[-1])
band_fixed, band_mask = declip(remainder, band_params)
out += band_fixed
all_masks.append(band_mask)
return out, all_masks
# ── Normalisation fix ────────────────────────────────────────────────
# SPADE recovers values *above* tau at formerly-clipped positions.
# If the input is at the digital ceiling (tau = 1.0), those recovered
# values exceed 1.0 and any hard np.clip(-1,1) by the caller destroys
# all improvement, making the output identical to the input.
# Fix: normalise so tau < 1.0 before processing; undo normalisation after.
NORM_TARGET = 0.9
global_peak = float(np.max(np.abs(yc)))
if global_peak > NORM_TARGET:
scale = NORM_TARGET / global_peak # < 1
yc_norm = yc * scale
else:
scale = 1.0
yc_norm = yc
# ── GPU detection (once, shared across all channels) ─────────────────
gpu_dev = _resolve_gpu_device(params)
# ── Mono path ────────────────────────────────────────────────────────
if yc_norm.ndim == 1:
tau = float(np.max(np.abs(yc_norm)))
if tau == 0.0:
warnings.warn("Input signal is all zeros.")
return yc.copy(), _compute_masks(yc, 0.0)
if gpu_dev is not None:
# GPU path: single channel, no threading needed
if params.show_progress:
N_frames = int(np.ceil(len(yc_norm) / params.hop_length))
prog = _make_progress(1)
with prog:
task = prog.add_task("mono", total=N_frames)
fixed, masks = _declip_mono_gpu(
yc_norm, params, tau, ch_label="mono",
device=gpu_dev, progress_ctx=prog, task_id=task,
)
else:
fixed, masks = _declip_mono_gpu(
yc_norm, params, tau, ch_label="mono", device=gpu_dev,
)
else:
# CPU path (v8/v9)
n_workers = params.n_jobs if params.n_jobs > 0 else os.cpu_count() or 1
if params.show_progress:
N_frames = int(np.ceil(len(yc_norm) / params.hop_length))
prog = _make_progress(1)
with prog:
task = prog.add_task("mono", total=N_frames)
fixed, masks = _declip_mono(
yc_norm, params, tau,
frame_workers=n_workers,
progress_ctx=prog, task_id=task,
)
else:
fixed, masks = _declip_mono(yc_norm, params, tau, frame_workers=n_workers)
return fixed / scale, masks
# ── Multi-channel path ───────────────────────────────────────────────
if yc_norm.ndim != 2:
raise ValueError(
f"yc must be 1-D (mono) or 2-D (samples x channels), got shape {yc.shape}"
)
n_samples, n_ch = yc_norm.shape
# Global tau: same hardware threshold for all channels
tau = float(np.max(np.abs(yc_norm)))
if tau == 0.0:
warnings.warn("Input signal is all zeros.")
empty_masks = [_compute_masks(yc[:, c], 0.0) for c in range(n_ch)]
return yc.copy(), empty_masks
# Channel labels: L/R for stereo, Ch0/Ch1/… for more
if n_ch == 2:
labels = ["L", "R"]
else:
labels = ["Ch" + str(c) for c in range(n_ch)]
if params.verbose:
print(f"[declip] {n_ch}-channel signal | "
f"tau={tau:.4f} | mode={params.mode.upper()} | "
+ (f"device={gpu_dev}" if gpu_dev else f"n_jobs={params.n_jobs}")
+ (f" | delta_db={params.delta_db:.2f}" if params.mode == "soft" else ""))
# ── Parallel / sequential dispatch ───────────────────────────────────
N_frames = int(np.ceil(n_samples / params.hop_length))
prog = _make_progress(n_ch) if params.show_progress else None
if gpu_dev is not None:
# GPU path: channels processed sequentially (GPU already uses all VRAM
# for the frame batch; no benefit from running channels concurrently)
def _process_channel(c: int, task_id=None):
return _declip_mono_gpu(
yc_norm[:, c], params, tau,
ch_label=labels[c], device=gpu_dev,
progress_ctx=prog, task_id=task_id,
)
else:
# CPU path: two-level parallelism (channel-workers × frame-workers)
total_workers = params.n_jobs if params.n_jobs > 0 else os.cpu_count() or 1
channel_workers = min(total_workers, n_ch)
frame_workers_ch = max(1, total_workers // channel_workers)
def _process_channel(c: int, task_id=None):
return _declip_mono(
yc_norm[:, c], params, tau,
ch_label=labels[c],
frame_workers=frame_workers_ch,
progress_ctx=prog,
task_id=task_id,
)
# Channel-level concurrency: GPU uses 1 worker (sequential), CPU uses n_ch
ch_workers = 1 if gpu_dev is not None else min(
params.n_jobs if params.n_jobs > 0 else os.cpu_count() or 1, n_ch
)
def _run():
if prog is not None:
task_ids = [prog.add_task(labels[c], total=N_frames) for c in range(n_ch)]
else:
task_ids = [None] * n_ch
if ch_workers == 1:
return [_process_channel(c, task_ids[c]) for c in range(n_ch)]
else:
with ThreadPoolExecutor(max_workers=ch_workers) as pool:
futures = [pool.submit(_process_channel, c, task_ids[c]) for c in range(n_ch)]
return [f.result() for f in futures]
if prog is not None:
with prog:
results = _run()
else:
results = _run()
# Reassemble into (N_samples, N_channels)
fixed_channels = [r[0] for r in results]
masks_list = [r[1] for r in results]
x_out = np.column_stack(fixed_channels) / scale
return x_out, masks_list
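# The multiband wrapper above relies on the identity HP = input − LP at every
# split stage, so the bands always telescope back to the input exactly. A
# minimal numpy-only sketch of that property; ``toy_lp`` (a moving average) is
# a hypothetical stand-in for the module's Linkwitz-Riley ``_lr_split``:

```python
import numpy as np

def toy_lp(x: np.ndarray, width: int) -> np.ndarray:
    """Hypothetical stand-in for a Linkwitz-Riley low-pass (moving average)."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def split_bands(x: np.ndarray, widths) -> list:
    """Cascaded split: each stage keeps LP(remainder); HP = remainder - LP."""
    bands, remainder = [], x.astype(float)
    for w in widths:
        lp = toy_lp(remainder, w)
        bands.append(lp)
        remainder = remainder - lp   # the high-pass is exactly the residual
    bands.append(remainder)          # last band = final high-pass residue
    return bands

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
bands = split_bands(x, widths=[64, 8])
# Because HP = input - LP at every stage, the bands sum back to x exactly.
assert np.allclose(sum(bands), x)
```

# Note that any LP filter works here: perfect reconstruction comes from the
# subtraction, not from the filter's shape — which is why the wrapper can
# recurse into declip() per band without leakage artifacts.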
# ============================================================================
# Quality metrics
# ============================================================================
def sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
"""
Signal-to-Distortion Ratio (dB).
Definition from eq.(14) in [2]:
SDR(u, v) = 10 log₁₀( ‖u‖² / ‖u − v‖² )
"""
noise = reference - estimate
denom = np.sum(noise ** 2)
if denom < 1e-20:
return float("inf")
return 10.0 * np.log10(np.sum(reference ** 2) / denom)
def delta_sdr(
reference: np.ndarray,
clipped: np.ndarray,
estimate: np.ndarray,
) -> float:
"""
ΔSDR improvement (dB) — eq.(13) in [2]:
ΔSDR = SDR(x, x̂) − SDR(x, y)
"""
return sdr(reference, estimate) - sdr(reference, clipped)
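# A quick standalone check of eq.(13)/(14): shrinking the clipping error by
# half in amplitude (4x in power) must raise SDR by exactly 20 log10(2)
# ≈ 6.02 dB. ``sdr`` is restated locally so the snippet runs on its own:

```python
import numpy as np

def sdr(reference, estimate):
    # eq.(14): SDR(u, v) = 10 log10(||u||^2 / ||u - v||^2)
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(noise**2))

t = np.linspace(0, 1, 8000, endpoint=False)
clean = 0.9 * np.sin(2 * np.pi * 440 * t)
clipped = np.clip(clean, -0.6, 0.6)
# A synthetic "estimate" that removes half of the clipping error:
estimate = clean + 0.5 * (clipped - clean)
d = sdr(clean, estimate) - sdr(clean, clipped)   # delta_sdr, eq.(13)
# Halving the error amplitude quarters its power: +10 log10(4) dB.
assert abs(d - 20 * np.log10(2)) < 1e-9
```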
# ============================================================================
# Command-line interface
# ============================================================================
def _build_parser() -> argparse.ArgumentParser:
p = argparse.ArgumentParser(
description="SPADE Audio Declipping / Limiter Recovery (v11)",
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
p.add_argument("input", help="Input clipped / limited audio file (WAV, FLAC, ...)")
p.add_argument("output", help="Output restored audio file")
p.add_argument("--algo", choices=["sspade", "aspade"], default="sspade")
p.add_argument("--window-length", type=int, default=1024, dest="window_length")
p.add_argument("--hop-length", type=int, default=256, dest="hop_length")
p.add_argument("--frame", choices=["dct", "rdft"], default="rdft")
p.add_argument("--s", type=int, default=1)
p.add_argument("--r", type=int, default=1)
p.add_argument("--eps", type=float, default=0.1)
p.add_argument("--max-iter", type=int, default=1000, dest="max_iter")
p.add_argument("--n-jobs", type=int, default=1, dest="n_jobs",
help="CPU parallel workers for multi-channel (-1 = all cores). "
"Ignored when GPU is active.")
p.add_argument("--mode", choices=["hard", "soft"], default="hard",
help="'hard' = standard clipping recovery; "
"'soft' = brickwall limiter recovery")
p.add_argument("--delta-db", type=float, default=1.0, dest="delta_db",
help="[soft mode] dB below 0 dBFS where the limiter starts acting "
"(e.g. 2.5 means threshold at -2.5 dBFS)")
p.add_argument("--gpu-device", type=str, default="auto", dest="gpu_device",
help="PyTorch device for GPU path: 'auto', 'cuda', 'cuda:0', 'cpu'. "
"AMD ROCm GPUs appear as 'cuda' in PyTorch-ROCm.")
p.add_argument("--no-gpu", action="store_true", dest="no_gpu",
help="Disable GPU acceleration; use CPU (v8/v9 threading) path instead.")
# v11 delimiting features
p.add_argument("--release-ms", type=float, default=0.0, dest="release_ms",
help="[v11, soft] Limiter release time in ms for mask dilation "
"(0 = disabled, typical 10-50 ms)")
p.add_argument("--max-gain-db", type=float, default=0.0, dest="max_gain_db",
help="[v11, soft] Max transient recovery in dB above limited value "
"(0 = disabled, e.g. 6 for +6 dB cap)")
p.add_argument("--multiband", action="store_true",
help="[v11, soft] Enable Linkwitz-Riley sub-band processing")
p.add_argument("--band-crossovers", type=float, nargs="+", default=[250.0, 4000.0],
dest="band_crossovers",
help="[v11] Crossover frequencies in Hz (e.g. 250 4000)")
p.add_argument("--band-delta-db", type=float, nargs="+", default=[],
dest="band_delta_db",
help="[v11] Per-band delta_db values (must match number of bands)")
p.add_argument("--macro-expand", action="store_true", dest="macro_expand",
help="[v11, soft] Enable macro-dynamics upward expansion pre-pass")
p.add_argument("--macro-attack-ms", type=float, default=10.0, dest="macro_attack_ms",
help="[v11] Expander attack time (ms, default 10)")
p.add_argument("--macro-release-ms", type=float, default=200.0, dest="macro_release_ms",
help="[v11] Expander release time (ms, default 200)")
p.add_argument("--macro-ratio", type=float, default=1.2, dest="macro_ratio",
help="[v11] Expansion ratio >1.0 (default 1.2; 1.0 = bypass)")
p.add_argument("--verbose", action="store_true")
p.add_argument("--reference", default=None,
help="Clean reference file for delta-SDR measurement")
return p
def main() -> None:
try:
import soundfile as sf
except ImportError:
raise SystemExit("Install soundfile: pip install soundfile")
args = _build_parser().parse_args()
yc, sr = sf.read(args.input, always_2d=True) # shape: (N, C) always
yc = yc.astype(float)
n_samp, n_ch = yc.shape
print("Input :", args.input,
"|", n_samp, "samples @", sr, "Hz |", n_ch, "channel(s)")
params = DeclipParams(
algo=args.algo, window_length=args.window_length,
hop_length=args.hop_length, frame=args.frame,
s=args.s, r=args.r, eps=args.eps, max_iter=args.max_iter,
verbose=args.verbose, n_jobs=args.n_jobs,
mode=args.mode, delta_db=args.delta_db,
use_gpu=not args.no_gpu, gpu_device=args.gpu_device,
# v11: delimiting features
sample_rate=sr,
release_ms=args.release_ms,
max_gain_db=args.max_gain_db,
multiband=args.multiband,
band_crossovers=tuple(args.band_crossovers),
band_delta_db=tuple(args.band_delta_db),
macro_expand=args.macro_expand,
macro_attack_ms=args.macro_attack_ms,
macro_release_ms=args.macro_release_ms,
macro_ratio=args.macro_ratio,
)
# Pass 1-D array for mono so return type stays ClippingMasks (not list)
yc_in = yc[:, 0] if n_ch == 1 else yc
fixed, masks = declip(yc_in, params)
# NOTE: do NOT clip to [-1, 1] — recovered transients may legitimately
# exceed 1.0. Write as 32-bit float to preserve them.
# soundfile always wants 2-D for write
fixed_2d = fixed[:, None] if fixed.ndim == 1 else fixed
sf.write(args.output, fixed_2d.astype(np.float32), sr, subtype="FLOAT")
print("Output :", args.output)
# Per-channel clipping summary
masks_iter = [masks] if n_ch == 1 else masks
labels = ["L", "R"] if n_ch == 2 else ["Ch" + str(c) for c in range(n_ch)]
for m, lbl in zip(masks_iter, labels):
n_clip = int(np.sum(~m.Ir))
pct = 100.0 * n_clip / n_samp
        print(f" [{lbl}] clipped: {n_clip} / {n_samp} samples ({pct:.1f}%)")
# Optional SDR vs. clean reference
if args.reference:
ref, _ = sf.read(args.reference, always_2d=True)
ref = ref.astype(float)
L = min(ref.shape[0], fixed_2d.shape[0])
for c in range(min(n_ch, ref.shape[1])):
lbl = labels[c]
r_c = ref[:L, c]
y_c = yc[:L, c]
f_c = fixed_2d[:L, c]
            print(f" [{lbl}] SDR clipped={sdr(r_c, y_c):.2f} dB"
                  f" declipped={sdr(r_c, f_c):.2f} dB"
                  f" delta={delta_sdr(r_c, y_c, f_c):.2f} dB")
# =============================================================================
# Demo / self-test (mono + stereo)
# =============================================================================
def _demo() -> None:
"""
Self-test: mono and stereo synthetic signals, both algorithms, both frames.
"""
print("=" * 65)
    print("SPADE Declipping v11 — Self-Test (mono + stereo)")
print("=" * 65)
sr = 16_000
t = np.linspace(0, 1, sr, endpoint=False)
def make_tonal(freqs_amps):
sig = sum(a * np.sin(2 * np.pi * f * t) for f, a in freqs_amps)
return sig / np.max(np.abs(sig))
clean_L = make_tonal([(440, 0.5), (880, 0.3), (1320, 0.15)])
clean_R = make_tonal([(550, 0.5), (1100, 0.3), (2200, 0.1)])
clean_stereo = np.column_stack([clean_L, clean_R]) # (N, 2)
theta_c = 0.6
clipped_stereo = np.clip(clean_stereo, -theta_c, theta_c)
    pct_clip_L = np.mean(np.abs(clipped_stereo[:, 0]) >= theta_c) * 100
    pct_clip_R = np.mean(np.abs(clipped_stereo[:, 1]) >= theta_c) * 100
    print(f"\ntheta_c = {theta_c}"
          f" | L clipped: {pct_clip_L:.1f}%"
          f"  R clipped: {pct_clip_R:.1f}%")
for algo in ("sspade", "aspade"):
for fr in ("dct", "rdft"):
params = DeclipParams(
algo=algo, frame=fr,
window_length=1024, hop_length=256,
s=1, r=1, eps=0.1, max_iter=500,
n_jobs=2, # process L and R in parallel
verbose=False,
)
fixed, masks_list = declip(clipped_stereo, params)
dsdr_L = delta_sdr(clean_stereo[:, 0], clipped_stereo[:, 0], fixed[:, 0])
dsdr_R = delta_sdr(clean_stereo[:, 1], clipped_stereo[:, 1], fixed[:, 1])
            tag = f"{algo.upper()} + {fr.upper()}"
            print(f" {tag} | L DSDR={dsdr_L:.1f} dB"
                  f" R DSDR={dsdr_R:.1f} dB")
# Quick mono sanity check
print("\n--- Mono sanity check ---")
clipped_mono = np.clip(clean_L, -theta_c, theta_c)
params_mono = DeclipParams(algo="sspade", frame="rdft",
window_length=1024, hop_length=256,
s=1, r=1, eps=0.1, max_iter=500)
fixed_mono, _ = declip(clipped_mono, params_mono)
    print(f" SSPADE+RDFT mono DSDR = "
          f"{delta_sdr(clean_L, clipped_mono, fixed_mono):.1f} dB")
print("\nSelf-test complete.")
if __name__ == "__main__":
import sys
if "--demo" in sys.argv:
_demo()
else:
main()