Spectral Normalization for Generative Adversarial Networks
One of the challenges in the study of generative adversarial networks is the instability of their training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on the CIFAR10, STL-10, and ILSVRC2012 datasets, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques.
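For readers who want a concrete picture of the mechanism, the sketch below shows the usual way spectral normalization is realized: a few steps of power iteration estimate the weight matrix's largest singular value, and the weight is divided by that estimate. This is a minimal illustration under our own function and argument names, not the authors' reference implementation (PyTorch ships a ready-made torch.nn.utils.spectral_norm wrapper).

```python
import torch
import torch.nn.functional as F

def spectral_normalize(weight, u, n_power_iterations=1, eps=1e-12):
    """Divide `weight` by a power-iteration estimate of its largest singular
    value (its spectral norm). `u` is a persistent estimate of the leading
    left singular vector, carried across training steps; assumes
    n_power_iterations >= 1. Illustrative sketch, not a reference implementation."""
    w = weight.reshape(weight.shape[0], -1)          # flatten to a 2-D matrix
    for _ in range(n_power_iterations):
        v = F.normalize(w.t() @ u, dim=0, eps=eps)   # right singular vector estimate
        u = F.normalize(w @ v, dim=0, eps=eps)       # left singular vector estimate
    sigma = torch.dot(u, w @ v)                      # estimated spectral norm
    return weight / sigma, u
```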
A noncommutative 2-sphere generated by the quantum complex plane
S. L. Woronowicz's theory of C*-algebras generated by unbounded elements is applied to q-normal operators satisfying the defining relation of the quantum complex plane. The unique non-degenerate C*-algebra of bounded operators generated by a q-normal operator is computed, and an abstract description is given using crossed product algebras. If the spectrum of the modulus of the q-normal operator is the positive half line, this C*-algebra can be considered as the algebra of continuous functions on the quantum complex plane vanishing at infinity, and its unitization can be viewed as the algebra of continuous functions on a quantum 2-sphere.
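For orientation, the defining relation mentioned in the abstract is usually written as below; conventions differ on whether the deformation factor is q or q² and on the admissible range of q, so this display is a paraphrase of the standard setup rather than a quotation from the paper.

```latex
% A q-normal operator z subject to the quantum complex plane relation
% (one common convention; the paper's normalization may differ):
z\, z^{*} \;=\; q^{2}\, z^{*} z, \qquad 0 < q < 1 .
```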
Normalizable fermion modes in a holographic superconductor
We consider fermions in a zero-temperature superconducting anti-de Sitter domain wall solution and find continuous bands of normal modes. These bands can be either partially filled or totally empty and gapped. We present a semi-classical argument which approximately captures the main features of the normal mode spectrum.
Best Proximity Point Results for Perimetric Contractions
This paper has two aims: the first is to introduce a special kind of proximal contraction guaranteeing a finite number of best proximity points, and the second is to derive best proximity point results for perimetric contractions. To meet these two aims, we introduce two new proximal contractions, perimetric proximal contractions of the first and the second kind, and derive best proximity point results for these mappings. We establish that for these particular mappings, best proximity points are not necessarily unique; however, we provide an upper bound, proving that at most two such points can exist. To establish the validity of our results, we provide illustrative examples demonstrating that these newly defined mappings can possess either a unique best proximity point or exactly two.
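As background, the abstract relies on the standard notion recalled below; the definitions of the new perimetric proximal contractions themselves are given in the paper and are not reproduced here.

```latex
% Standard definition: for nonempty subsets A, B of a metric space (X, d) and a
% mapping T : A -> B, a point x* in A is a best proximity point of T if
d(x^{*}, T x^{*}) \;=\; d(A, B) \;:=\; \inf\{\, d(a, b) : a \in A,\ b \in B \,\}.
```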
Spectral bipartite Turan problems on linear hypergraphs
Let F be a graph, and let B_r(F) be the class of r-uniform Berge-F hypergraphs. In this paper, we establish a relationship between the spectral radius of the adjacency tensor of a uniform hypergraph and its local structure through walks. Based on this relationship, we give a spectral asymptotic bound for B_{r}(C_3)-free linear r-uniform hypergraphs and upper bounds for the spectral radii of B_{r}(K_{2,t})-free or {B_{r}(K_{s,t}), B_{r}(C_{3})}-free linear r-uniform hypergraphs, where C_{3} and K_{s,t} denote, respectively, the triangle and the complete bipartite graph with one part having s vertices and the other having t vertices. Our work implies an upper bound on the number of edges of {B_{r}(K_{s,t}), B_{r}(C_{3})}-free linear r-uniform hypergraphs and extends some of the existing research on (spectral) extremal problems for hypergraphs.
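For context, the quantity being bounded is the spectral radius of the adjacency tensor of an r-uniform hypergraph; one common formulation is recalled below in our notation (the paper's normalization may differ).

```latex
% With the adjacency tensor A(H) of an r-uniform hypergraph H defined by
% a_{i_1 \dots i_r} = 1/(r-1)! when \{i_1, \dots, i_r\} \in E(H) and 0 otherwise,
% the spectral radius admits the Rayleigh-type characterization
\rho(H) \;=\; \max_{x \ge 0,\ \|x\|_r = 1} \; r \sum_{e \in E(H)} \prod_{v \in e} x_v .
```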
Enhancing LLM Training via Spectral Clipping
While spectral-based optimizers like Muon operate directly on the spectrum of updates, standard adaptive methods such as AdamW do not account for the global spectral structure of weights and gradients, leaving them vulnerable to two empirical issues in large language model (LLM) training: (i) the optimizer updates can have large spectral norms, potentially destabilizing training and degrading generalization; (ii) stochastic gradient noise can exhibit sparse spectral spikes, with a few dominant singular values much larger than the rest. We propose SPECTRA, a general framework addressing these issues by (i) post-spectral clipping of updates to enforce spectral-norm constraints; (ii) optional pre-spectral clipping of gradients to suppress spectral noise spikes. We prove that post-clipping constitutes a Composite Frank-Wolfe method with spectral-norm constraints and weight regularization, recovering Frobenius- and ℓ∞-norm regularization with SGD-based and sign-based methods. We further analyze how pre-clipping mitigates sparse spectral spikes. We propose efficient soft spectral clipping via Newton-Schulz iterations, avoiding an expensive SVD. Experiments on LLM pretraining show SPECTRA uniformly improves validation loss for various optimizers, including AdamW, Signum, and AdEMAMix, with the best-performing variants achieving state-of-the-art results. Models trained with SPECTRA exhibit smaller weight norms, confirming the link between spectral clipping and regularization.
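To make "spectral clipping of updates" concrete, the sketch below performs the exact, SVD-based version: every singular value of an update matrix is capped at a threshold, so the clipped update has spectral norm at most that threshold. The abstract's SPECTRA instead applies a soft clipping computed with Newton-Schulz iterations to avoid the SVD; the function name and threshold below are illustrative assumptions, not the paper's implementation.

```python
import torch

def clip_spectral_norm(update: torch.Tensor, tau: float) -> torch.Tensor:
    """Exact spectral clipping via SVD: cap each singular value of `update`
    at `tau`, so the result has spectral norm <= tau. Illustrative only;
    the paper's soft clipping avoids the expensive SVD."""
    u, s, vh = torch.linalg.svd(update, full_matrices=False)
    return u @ torch.diag(torch.clamp(s, max=tau)) @ vh
```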
Spectral Sufficient Conditions for Graph Factors
A {K_{1,1}, K_{1,2}, C_m : m ≥ 3}-factor of a graph is a spanning subgraph each of whose components is an element of {K_{1,1}, K_{1,2}, C_m : m ≥ 3}. In this paper, using spectral graph methods, we establish a lower bound on the signless Laplacian spectral radius and an upper bound on the distance spectral radius that determine whether a graph admits a {K_2}-factor. We also give a lower bound on the size (resp. the spectral radius) of G that guarantees G contains a {K_{1,1}, K_{1,2}, C_m : m ≥ 3}-factor, and we determine an upper bound on the distance spectral radius of G that ensures G has a {K_{1,1}, K_{1,2}, C_m : m ≥ 3}-factor. Furthermore, by constructing extremal graphs, we show that all of the above bounds are best possible.
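For readers less familiar with the two spectral parameters involved, the standard definitions are recalled below in our notation.

```latex
% Signless Laplacian spectral radius: largest eigenvalue of Q(G) = D(G) + A(G),
% where D(G) is the diagonal degree matrix and A(G) the adjacency matrix:
q(G) = \lambda_{\max}\bigl(D(G) + A(G)\bigr).
% Distance spectral radius: largest eigenvalue of the distance matrix
% \mathcal{D}(G) = (d_G(u,v))_{u,v \in V(G)}, d_G(u,v) being the distance in G:
\mu(G) = \lambda_{\max}\bigl(\mathcal{D}(G)\bigr).
```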
Modulate Your Spectrum in Self-Supervised Learning
Whitening loss offers a theoretical guarantee against feature collapse in self-supervised learning (SSL) with joint embedding architectures. Typically, it involves a hard whitening approach, transforming the embedding and applying the loss to the whitened output. In this work, we introduce Spectral Transformation (ST), a framework to modulate the spectrum of the embedding and to seek functions beyond whitening that can avoid dimensional collapse. We show that whitening is a special instance of ST by definition, and our empirical investigations unveil other ST instances capable of preventing collapse. Additionally, we propose a novel ST instance named IterNorm with trace loss (INTL). Theoretical analysis confirms INTL's efficacy in preventing collapse and modulating the spectrum of the embedding toward equal eigenvalues during optimization. Our experiments on ImageNet classification and COCO object detection demonstrate INTL's potential for learning superior representations. The code is available at https://github.com/winci-ai/INTL.
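The collapse criterion in the abstract is a statement about the eigenvalue spectrum of the embedding covariance. The toy sketch below computes that spectrum for a batch of embeddings and penalizes its deviation from equal eigenvalues; it is not the paper's INTL loss (which applies IterNorm together with a trace loss), and the function name and normalization choice are our own.

```python
import torch

def equal_eigenvalue_penalty(z: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Toy illustration of 'modulating the spectrum of the embedding': penalize
    how far the eigenvalues of the (scale-normalized) embedding covariance are
    from being equal. Not the paper's INTL loss; for illustration only."""
    z = z - z.mean(dim=0, keepdim=True)              # center the batch, shape (N, d)
    cov = (z.t() @ z) / (z.shape[0] - 1)             # d x d covariance matrix
    cov = cov / (cov.diagonal().mean() + eps)        # remove the overall scale
    eigvals = torch.linalg.eigvalsh(cov)             # spectrum of the embedding
    return ((eigvals - eigvals.mean()) ** 2).mean()  # zero iff all eigenvalues equal
```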
Solving High-Dimensional PDEs with Latent Spectral Models
Deep models have achieved impressive progress in solving partial differential equations (PDEs). A burgeoning paradigm is learning neural operators to approximate the input-output mappings of PDEs. While previous deep models have explored multiscale architectures and various operator designs, they are limited to learning the operators as a whole in the coordinate space. In real physical science problems, PDEs are complex coupled equations whose numerical solvers rely on discretization into a high-dimensional coordinate space, which cannot be precisely approximated by a single operator nor efficiently learned due to the curse of dimensionality. We present Latent Spectral Models (LSM), an efficient and precise solver for high-dimensional PDEs. Going beyond the coordinate space, LSM uses an attention-based hierarchical projection network to reduce the high-dimensional data into a compact latent space in linear time. Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space, which approximates complex input-output mappings via learning multiple basis operators and enjoys nice theoretical guarantees for convergence and approximation. Experimentally, LSM achieves consistent state-of-the-art performance and yields a relative gain of 11.5% averaged over seven benchmarks covering both solid and fluid physics. Code is available at https://github.com/thuml/Latent-Spectral-Models.
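As a rough mental model of the pipeline the abstract describes (project high-dimensional coordinate-space features into a compact latent space, apply several learned basis operators there, then project back), here is a heavily simplified sketch. The real LSM uses an attention-based hierarchical projection network and a dedicated neural spectral block, neither of which is reproduced here; every layer shape and name below is invented for illustration.

```python
import torch
import torch.nn as nn

class ToyLatentOperator(nn.Module):
    """Loose sketch of the encode -> latent basis-operator mixing -> decode
    pattern described in the abstract. Not the authors' LSM architecture."""
    def __init__(self, in_dim: int = 64, latent_dim: int = 16, num_bases: int = 4):
        super().__init__()
        self.encode = nn.Linear(in_dim, latent_dim)       # stand-in for the projection network
        self.bases = nn.ModuleList(
            [nn.Linear(latent_dim, latent_dim) for _ in range(num_bases)]
        )                                                 # stand-in for learned basis operators
        self.mix = nn.Parameter(torch.ones(num_bases) / num_bases)
        self.decode = nn.Linear(latent_dim, in_dim)       # back to coordinate space

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, points, in_dim)
        z = self.encode(x)
        z = sum(w * op(z) for w, op in zip(self.mix, self.bases))
        return self.decode(z)
```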
