Theoretical Characterization of How Neural Network Pruning Affects its Generalization

Hongru Yang∗   Yingbin Liang†   Xiaojie Guo‡   Lingfei Wu§   Zhangyang Wang¶

Abstract

It has been observed in practice that applying pruning-at-initialization methods to neural networks and training the sparsified networks can not only retain the test performance of the original dense models, but also sometimes even slightly boost the generalization performance. Theoretical understanding of such experimental observations is yet to be developed. This work makes the first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization. Specifically, this work considers a classification task for overparameterized two-layer neural networks, where the network is randomly pruned according to different rates at initialization. It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero and the network exhibits good generalization performance.
More surprisingly, the generalization bound gets better as the pruning fraction gets larger. To complement this positive result, this work further shows a negative result: there exists a large pruning fraction such that while gradient descent is still able to drive the training loss toward zero (by memorizing noise), the generalization performance is no better than random guessing. This further suggests that pruning can change the feature learning process, which leads to the performance drop of the pruned neural network. To the best of our knowledge, this is the first generalization result for pruned neural networks, suggesting that pruning can improve the neural network's generalization.

1 Introduction

Neural network pruning dates back to the early stage of the development of neural networks (LeCun et al., 1989). Since then, many research works have focused on using neural network pruning as a model compression technique, e.g., (Molchanov et al., 2019; Luo and Wu, 2017; Ye et al., 2020; Yang et al., 2021). However, all these works focused on pruning neural networks after training to reduce inference time, and thus the efficiency gain from pruning cannot be directly transferred to the training phase. It was not until recently that Frankle and Carbin (2018) showed a surprising phenomenon: a neural network pruned at initialization can be trained to achieve performance competitive with the dense model.
They called this phenomenon the lottery ticket hypothesis. The lottery ticket hypothesis states that there exists a sparse subnetwork inside a dense network at the random initialization stage such that, when trained in isolation, it can match the test accuracy of the original dense network after training for at most the same number of iterations. On the other hand, the algorithm Frankle and Carbin (2018) proposed to find the lottery ticket requires many rounds of pruning and retraining, which is computationally expensive.

∗Department of Computer Science, The University of Texas at Austin; e-mail: hy6385@utexas.edu
†Department of Electrical and Computer Engineering, The Ohio State University; e-mail: liang.889@osu.edu
‡IBM Thomas J. Watson Research Center; e-mail: xguo7@gmu.edu
§Pinterest; e-mail: lwu@email.wm.edu
¶Department of Electrical and Computer Engineering, The University of Texas at Austin; e-mail: atlaswang@utexas.edu

arXiv:2301.00335v1 [cs.LG] 1 Jan 2023
Many subsequent works focused on developing new methods to reduce the cost of finding such a network at initialization (Lee et al., 2018; Wang et al., 2019; Tanaka et al., 2020; Liu and Zenke, 2020; Chen et al., 2021a). A further investigation by Frankle et al. (2020) showed that some of these methods merely discover the layer-wise pruning ratio rather than the sparsity pattern.
The discovery of the lottery ticket hypothesis sparked further interest in understanding this phenomenon. Another line of research focused on finding a subnetwork inside a dense network at random initialization such that the subnetwork can achieve good performance (Zhou et al., 2019; Ramanujan et al., 2020). Shortly after that, Malach et al. (2020) formalized this phenomenon, which they called the strong lottery ticket hypothesis: under certain assumptions on the weight initialization distribution, a sufficiently overparameterized neural network at initialization contains a subnetwork with roughly the same accuracy as the target network. Later, Pensia et al. (2020) improved the overparameterization parameters, and Sreenivasan et al. (2021) showed that this type of result holds even if the weights are binary. Unsurprisingly, as pointed out by Malach et al. (2020), finding such a subnetwork is computationally hard. Nonetheless, all of these analyses are from a function approximation perspective, and none of the aforementioned works considered the effect of pruning on gradient descent dynamics, let alone the neural network's generalization. Interestingly, via empirical experiments, people have found that sparsity can further improve generalization in certain scenarios (Chen et al., 2021b; Ding et al., 2021; He et al., 2022).
There have also been empirical works showing that random pruning can be effective (Frankle et al., 2020; Su et al., 2020; Liu et al., 2021b). However, theoretical understanding of such benefits of pruning neural networks is still limited. In this work, we take the first step toward answering the following important open question from a theoretical perspective:

How does the pruning fraction affect the training dynamics and the model's generalization, if the model is pruned at initialization and trained by gradient descent?

We study this question using random pruning.
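Concretely, random pruning at initialization amounts to drawing a fixed mask over the initial weights and zeroing the masked entries before training starts, with the mask held fixed thereafter. The sketch below is a minimal illustration of this procedure; the dimensions, pruning rate, and names (`prune_at_init`, `alpha`) are our own illustrative choices, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_at_init(W, alpha, rng):
    """Randomly prune a weight matrix at initialization.

    Each entry is zeroed independently with probability alpha
    (the pruning fraction); the mask stays fixed during training.
    """
    mask = rng.random(W.shape) >= alpha  # keep each weight with prob. 1 - alpha
    return W * mask, mask

# First-layer weights of a toy two-layer network, Gaussian initialization.
W = rng.normal(0.0, 1.0, size=(64, 100))
W_pruned, mask = prune_at_init(W, alpha=0.3, rng=rng)

# Empirical pruned fraction is close to alpha = 0.3.
print(1.0 - mask.mean())
```

Training then proceeds by gradient descent on the surviving weights only, so the mask shapes the entire optimization trajectory rather than just the final model.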
We consider a classification task where the input data consists of class-dependent sparse signal and random noise. We analyze the training dynamics of a two-layer convolutional neural network pruned at initialization. Specifically, this work makes the following contributions:

Mild pruning. We prove that there indeed exists a range of small pruning fractions in which the generalization error bound gets better as the pruning fraction gets larger. In this case, the signal in the feature is well preserved and, because pruning purifies the feature, the effect of noise is reduced. We provide a detailed explanation in Section 3. To the best of our knowledge, this is the first theoretical result on generalization for pruned neural networks, suggesting that pruning can improve generalization in some settings. Further, we conduct experiments to verify our results.
Over pruning. To complement the above positive result, we also show a negative result: if the pruning fraction is larger than a certain threshold, then the generalization performance is no better than simple random guessing, although gradient descent is still able to drive the training loss toward zero. This further suggests that the performance drop of the pruned neural network is not solely caused by the pruned network's own lack of trainability or expressiveness, but also by the change of gradient descent dynamics due to pruning.

Figure 1: A pictorial demonstration of our results. The bell-shaped curves model the distribution of the signal in the features, where the mean represents the signal strength and the width of the curve indicates the variance of noise. Our results show that mild pruning preserves the signal strength and reduces the noise variance (and hence yields better generalization), whereas over pruning lowers the signal strength albeit reducing the noise variance.
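The mild-pruning intuition can be illustrated with a toy version of the sparse-signal-plus-noise picture. In the sketch below (a hedged illustration; the dimension, sparsity level, and distributions are our own assumptions, not the paper's exact data model), randomly pruning a small fraction of coordinates usually leaves a sparse signal fully intact, while the energy of dense noise shrinks by roughly the kept fraction:

```python
import numpy as np

rng = np.random.default_rng(1)

d, s = 1000, 5       # ambient dimension and signal sparsity (illustrative)
alpha = 0.05         # mild pruning fraction
trials = 2000

full_signal = 0
noise_energy = []
for _ in range(trials):
    mask = rng.random(d) >= alpha     # random pruning of coordinates
    full_signal += mask[:s].all()     # signal lives on the first s coordinates
    noise = rng.normal(size=d)        # dense isotropic noise
    noise_energy.append((mask * noise) @ (mask * noise))

# Fraction of trials where the whole signal survives: about (1 - alpha)**s,
# i.e. roughly 0.77 here, while the average surviving noise energy per
# coordinate drops to about 1 - alpha = 0.95.
print(full_signal / trials)
print(np.mean(noise_energy) / d)
```

When the pruning fraction grows large, the survival probability (1 - alpha)**s collapses as well, matching the over-pruning regime where the signal itself is destroyed.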
Technically, we develop a novel analysis to bound the effect of pruning on the weight-noise and weight-signal correlations. Further, in contrast to many previous works that considered only the binary case, our analysis handles multi-class classification with the general cross-entropy loss. Here, a key technical development is a gradient upper bound for the multi-class cross-entropy loss, which might be of independent interest. Pictorially, our result is summarized in Figure 1. We point out that the neural network training we consider is in the feature learning regime, where the weight parameters can go far away from their initialization. This is fundamentally different from the popular neural tangent kernel regime, where the neural network essentially behaves similarly to its linearization.

1.1 Related Works

The Lottery Ticket Hypothesis and Sparse Training.
The discovery of the lottery ticket hypothesis (Frankle and Carbin, 2018) has inspired further investigation and applications. One line of research has focused on developing computationally efficient methods to enable sparse training: static sparse training methods aim to identify a sparse mask at the initialization stage based on different criteria, such as SNIP (loss-based) (Lee et al., 2018), GraSP (gradient-based) (Wang et al., 2019), SynFlow (synaptic-strength-based) (Tanaka et al., 2020), a neural tangent kernel based method (Liu and Zenke, 2020), and one-shot pruning (Chen et al., 2021a). Random pruning has also been considered in static sparse training, including uniform pruning (Mariet and Sra, 2015; He et al., 2017; Gale et al., 2019; Suau et al., 2018), non-uniform pruning (Mocanu et al., 2016), expander-graph-related techniques (Prabhu et al., 2018; Kepner and Robinett, 2019), Erdős-Rényi (Mocanu et al., 2018), and Erdős-Rényi-Kernel (Evci et al., 2020). On the other hand, dynamic sparse training allows the sparse mask to be updated (Mocanu et al., 2018; Mostafa and Wang, 2019; Evci et al., 2020; Jayakumar et al., 2020; Liu et al., 2021c,d,a; Peste et al., 2021). The sparsity pattern can also be learned by using a sparsity-inducing regularizer (Yang et al., 2020). Recently, He et al. (2022) discovered that pruning can exhibit a double descent phenomenon when the dataset labels are corrupted. Another line of research has focused on pruning neural networks at random initialization to achieve good performance (Zhou et al., 2019; Ramanujan et al., 2020). In particular, Ramanujan et al. (2020) showed that it is possible to prune a randomly initialized wide ResNet-50 to match the performance of a ResNet-34 trained on ImageNet. This phenomenon is named the strong lottery ticket hypothesis. Later, Malach et al. (2020) proved that under certain assumptions on the initialization distribution, a target network of width d and depth l can be approximated by pruning a randomly initialized network that is wider by a polynomial factor (in d, l) and twice as deep, even without any further training. However, finding such a network is computationally hard, which can be shown by reducing the pruning problem to optimizing a neural network. Later, Pensia et al. (2020) improved the widening factor to logarithmic, and Sreenivasan et al. (2021) proved that with a polylogarithmic widening factor, such a result holds even if the network weights are binary. A follow-up work shows that it is possible to find a subnetwork achieving good performance at initialization and then fine-tune it (Sreenivasan et al., 2022).
Our work, on the other hand, analyzes the gradient descent dynamics of a pruned neural network and its generalization after training.

Analyses of Training Neural Networks by Gradient Descent. A series of works (Allen-Zhu et al., 2019; Du et al., 2019; Lee et al., 2019; Zou et al., 2020; Zou and Gu, 2019; Ji and Telgarsky, 2019; Chen et al., 2020b; Song and Yang, 2019; Oymak and Soltanolkotabi, 2020) has proved that if a deep neural network is wide enough, then (stochastic) gradient descent provably drives the training loss toward zero at a fast rate, based on the neural tangent kernel (NTK) (Jacot et al., 2018). Further, under certain assumptions on the data, the learned network is able to generalize (Cao and Gu, 2019; Arora et al., 2019).
However, as pointed out by Chizat et al. (2019), in the NTK regime the gradient descent dynamics of the neural network essentially behaves like its linearization, and the learned weights do not move far from their initialization, which prohibits the network from performing any useful feature learning. To go beyond the NTK regime, one line of research has focused on the mean-field limit (Song et al., 2018; Chizat and Bach, 2018; Rotskoff and Vanden-Eijnden, 2018; Wei et al., 2019; Chen et al., 2020a; Sirignano and Spiliopoulos, 2020; Fang et al., 2021). More recently, researchers have started to study neural network training dynamics in the feature learning regime, where data from different classes are defined by a set of low-rank, class-related signals (Allen-Zhu and Li, 2020, 2022; Cao et al., 2022; Shi et al., 2021; Telgarsky, 2022). However, none of these previous works considered the effect of pruning.
Our work also focuses on the aforementioned feature learning regime, but for the first time characterizes the impact of pruning on the generalization performance of neural networks.

2 Preliminaries and Problem Formulation

In this section, we introduce our notation, the data generation process, the neural network architecture, and the optimization algorithm.

Notations. We use lower case letters to denote scalars and boldface letters and symbols (e.g., x) to denote vectors and matrices. We use ⊙ to denote the element-wise product. For an integer n, we use [n] to denote the set of integers {1, 2, . . . , n}. We write x = O(y), x = Ω(y), x = Θ(y) to denote that there exists a constant C such that x ≤ Cy, x ≥ Cy, x = Cy, respectively. We use Õ, Ω̃ and Θ̃ to hide polylogarithmic factors in these notations. Finally, we write x = poly(y) if x = O(y^C) for some positive constant C, and x = poly log y if x = poly(log y).

2.1 Settings

Definition 2.1 (Data distribution of K classes). Suppose we are given the set of signal vectors {µe_i}_{i=1}^K, where µ > 0 denotes the strength of the signal and e_i denotes the i-th standard basis vector, with its i-th entry being 1 and all other coordinates being 0. Each data point (x, y) with x = [x_1^⊤, x_2^⊤]^⊤ ∈ ℝ^{2d} and y ∈ [K] is generated from the following distribution D:

1. The label y is generated from a uniform distribution over [K].
2. A noise vector ξ is generated from the Gaussian distribution N(0, σ_n^2 I).
3. With probability 1/2, assign x_1 = µ_y, x_2 = ξ; with probability 1/2, assign x_2 = µ_y, x_1 = ξ, where µ_y = µe_y.

The sparse signal model is motivated by the empirical observation that, during the training of neural networks, the output of each ReLU layer is usually sparse rather than dense. This is partially due to the fact that, in practice, the bias term in the linear layer is used (Song et al., 2021). For samples from different classes, usually a different set of neurons fires.
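The generation process in Definition 2.1 can be sketched as follows. This is a minimal NumPy version for concreteness; the 0-indexed labels and the particular constants K, d, µ, σ_n are our own illustrative choices, not the paper's.

```python
import numpy as np

def sample_data(K, d, mu, sigma_n, rng):
    """Draw one (x, y) pair from the distribution D of Definition 2.1."""
    y = rng.integers(K)                # 1. label uniform over the K classes (0-indexed)
    xi = rng.normal(0.0, sigma_n, d)   # 2. noise vector xi ~ N(0, sigma_n^2 I)
    mu_y = np.zeros(d)
    mu_y[y] = mu                       # 1-sparse signal mu_y = mu * e_y
    # 3. the signal lands in one of the two patches uniformly at random
    if rng.random() < 0.5:
        x1, x2 = mu_y, xi
    else:
        x1, x2 = xi, mu_y
    return np.concatenate([x1, x2]), y

rng = np.random.default_rng(0)
x, y = sample_data(K=5, d=100, mu=1.0, sigma_n=1.0 / np.sqrt(100), rng=rng)
```

Note that exactly one of the two patches carries the 1-sparse signal while the other is pure Gaussian noise, which is what makes the effect of pruning on the two components different.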
Our study can be seen as a formal analysis of pruning the second-to-last layer of a deep neural network in the layer-peeled model, as in Zhu et al. (2021); Zhou et al. (2022). We also point out that our assumption on the sparsity of the signal is necessary for our analysis. If we drop the sparsity assumption and only make an assumption on the ℓ2 norm of the signal, then in the extreme case the signal is uniformly distributed across all coordinates, and the effect of pruning on the signal and on the noise is essentially the same: their ℓ2 norms are both reduced by a factor of √p.

Network architecture and random pruning. We consider a two-layer convolutional neural network model with polynomial ReLU activation σ(z) = (max{0, z})^q, where we focus on the case q = 3.¹ The network is pruned at initialization by a mask M, where each entry of M is generated i.i.d. from Bernoulli(p). Let m_{j,r} denote the r-th row of M_j. Given the data (x, y), the output of the neural network can be written as F(W ⊙ M, x) = (F_1(W_1 ⊙ M_1, x), F_2(W_2 ⊙ M_2, x), . . . , F_K(W_K ⊙ M_K, x)), where the j-th output is given by

F_j(W_j ⊙ M_j, x) = ∑_{r=1}^m [σ(⟨w_{j,r} ⊙ m_{j,r}, x_1⟩) + σ(⟨w_{j,r} ⊙ m_{j,r}, x_2⟩)]
                  = ∑_{r=1}^m [σ(⟨w_{j,r} ⊙ m_{j,r}, µ_y⟩) + σ(⟨w_{j,r} ⊙ m_{j,r}, ξ⟩)].

The mask M is sampled only once at initialization and remains fixed through the entire training process. From now on, we use a tilde over a symbol to denote its masked version, e.g., W̃ = W ⊙ M and w̃_{j,r} = w_{j,r} ⊙ m_{j,r}. Since µ_j ⊙ m_{j,r} = 0 with probability 1 − p, some neurons will not receive the corresponding signal at all and will only learn noise. Therefore, for each class j ∈ [K], we split the neurons into two sets based on whether they receive the corresponding signal or not:

S^j_signal = {r ∈ [m] : µ_j ⊙ m_{j,r} ≠ 0},   S^j_noise = {r ∈ [m] : µ_j ⊙ m_{j,r} = 0}.

Gradient descent algorithm. We train the network with the cross-entropy loss with softmax. Writing logit_i(F, x) := e^{F_i(x)} / ∑_{j∈[K]} e^{F_j(x)}, the cross-entropy loss is ℓ(F(x, y)) = −log logit_y(F, x). The convolutional neural network is trained by minimizing the empirical cross-entropy loss

L_S(W) = (1/n) ∑_{i=1}^n ℓ[F(W ⊙ M; x_i, y_i)] = E_S ℓ[F(W ⊙ M; x_i, y_i)],

where S = {(x_i, y_i)}_{i=1}^n is the training data set.

¹ We point out that, as in many previous works (Allen-Zhu and Li, 2020; Zou et al., 2021; Cao et al., 2022), the polynomial ReLU activation simplifies the analysis of gradient descent, because it gives a much larger separation between signal and noise (and thus a cleaner analysis) than ReLU. Our analysis can be generalized to the ReLU activation using the arguments in Allen-Zhu and Li (2022).
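As a concreteness check, the masked forward pass F(W ⊙ M, x) and the empirical loss L_S above can be sketched as follows. The (K, m, d) array layout, the constants, and the function names are our own choices for illustration, not the paper's.

```python
import numpy as np

def forward(W, M, x, q=3):
    """Output F(W ⊙ M, x); W and M have shape (K, m, d), x = [x1, x2] has shape (2d,)."""
    d = W.shape[2]
    x1, x2 = x[:d], x[d:]
    Wt = W * M                                  # masked weights W-tilde = W ⊙ M
    act = lambda z: np.maximum(z, 0.0) ** q     # polynomial ReLU sigma(z) = max(0, z)^q
    # F_j = sum_r sigma(<w_{j,r} ⊙ m_{j,r}, x1>) + sigma(<w_{j,r} ⊙ m_{j,r}, x2>)
    return (act(Wt @ x1) + act(Wt @ x2)).sum(axis=1)

def empirical_loss(W, M, X, Y):
    """Cross-entropy loss L_S averaged over the training set S = {(x_i, y_i)}."""
    total = 0.0
    for x, y in zip(X, Y):
        F = forward(W, M, x)
        F = F - F.max()                          # stabilize the softmax numerically
        logit_y = np.exp(F[y]) / np.exp(F).sum() # logit_y(F, x)
        total += -np.log(logit_y)
    return total / len(X)

# Illustrative sizes: K classes, m neurons per class, patch width d, pruning rate p.
rng = np.random.default_rng(0)
K, m, d, p, sigma0 = 3, 16, 50, 0.9, 0.05
W = sigma0 * rng.normal(size=(K, m, d))          # i.i.d. N(0, sigma0^2) initialization
M = (rng.random((K, m, d)) < p).astype(float)    # Bernoulli(p) mask, fixed thereafter
```

The mask M is sampled once here and then held fixed, matching the setup where only W is updated during training.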
Similarly, we define the generalization loss as

L_D := E_{(x,y)∼D} [ℓ(F(W ⊙ M; x, y))].

The model weights are initialized i.i.d. from a Gaussian N(0, σ_0^2). The gradient of the cross-entropy loss is given by ℓ′_{j,i} := ℓ′_j(x_i, y_i) = logit_j(F, x_i) − I(j = y_i). Since ∇_{w_{j,r}} L_S(W ⊙ M) = ∇_{w_{j,r} ⊙ m_{j,r}} L_S(W ⊙ M) ⊙ m_{j,r} = ∇_{w̃_{j,r}} L_S(W̃) ⊙ m_{j,r}, we can write the full-batch gradient descent update of the weights as

w̃^{(t+1)}_{j,r} = w̃^{(t)}_{j,r} − η ∇_{w̃_{j,r}} L_S(W̃) ⊙ m_{j,r}
              = w̃^{(t)}_{j,r} − (η/n) ∑_{i=1}^n ℓ′^{(t)}_{j,i} · σ′(⟨w̃^{(t)}_{j,r}, ξ_i⟩) ξ̃_{j,r,i} − (η/n) ∑_{i=1}^n ℓ′^{(t)}_{j,i} σ′(⟨w̃^{(t)}_{j,r}, µ_{y_i}⟩) µ_{y_i} ⊙ m_{j,r},

for j ∈ [K] and r ∈ [m], where ξ̃_{j,r,i} = ξ_i ⊙ m_{j,r}.

Condition 2.2.
We consider the parameter regime described as follows:
(1) Number of classes: K = O(log d).
(2) Total number of training samples: n = poly log d.
(3) Dimension: d ≥ C_d for some sufficiently large constant C_d.
(4) Relationship between signal strength and noise strength: µ = Θ(σ_n √(d log d)) = Θ(1).
(5) Number of neurons in the network: m = Ω(poly log d).
(6) Initialization variance: σ_0 = Θ̃(m^{−4} n^{−1} µ^{−1}).
(7) Learning rate: Ω(1/poly(d)) ≤ η ≤ Õ(1/µ^2).
(8) Target training loss: ϵ = Θ(1/poly(d)).

Conditions (1) and (2) ensure that there are enough samples in each class with high probability. Condition (3) ensures that our setting is in the high-dimensional regime. Condition (4) ensures that the full model can be trained to exhibit good generalization. Conditions (5), (6) and (7) ensure that the neural network is sufficiently overparameterized and can be optimized efficiently by gradient descent. Conditions (7) and (8) further ensure that the training time is polynomial in d. We further discuss practical considerations for η and ϵ that justify their conditions in Remark D.9.

3 Mild Pruning

3.1 Main result

The first main result shows that there exists a threshold on the pruning fraction p such that pruning helps the neural network's generalization.

Theorem 3.1 (Main Theorem for Mild Pruning, Informal). Under Condition 2.2, if p ∈ [C_1 log d / m, 1] for some constant C_1, then with probability at least 1 − O(d^{−1}) over the randomness in the data, network initialization and pruning, there exists T = Õ(K η^{−1} σ_0^{2−q} µ^{−q} + K^2 m^4 µ^{−2} η^{−1} ϵ^{−1}) such that

1. The training loss is below ϵ: L_S(W̃^{(T)}) ≤ ϵ.
2. The generalization loss can be bounded by L_D(W̃^{(T)}) ≤ O(Kϵ) + exp(−n^2/p).

Theorem 3.1 indicates that there exists a threshold on the order of Θ(log d / m) such that if p is above this threshold (i.e., the fraction of pruned weights is small), gradient descent is able to drive the training loss toward zero (as item 1 claims) and the overparameterized network achieves good testing performance (as item 2 claims). In the next subsection, we explain why pruning can help generalization via an outline of our proof; all detailed proofs are deferred to Appendix D.

3.2 Proof Outline

Our proof establishes the following two properties. First, we show that after mild pruning the network is still able to learn the signal, and the magnitude of the signal in the feature is preserved. Then we show that, given a new sample, pruning reduces the noise effect in the feature, which leads to the improvement in generalization. We first show these properties for three stages of gradient descent: initialization, the feature growing phase, and the converging phase, and then establish the generalization property.

Initialization.
First of all, readers might wonder why pruning can even preserve the signal at all. Intuitively, a network will achieve good performance if its weights are highly correlated with the signal (i.e., their inner product is large). Two intuitive but misleading heuristics are the following:

• Consider a fixed neuron weight. At random initialization, in expectation, the signal correlation with the weights is E_{w,m}[|⟨w ⊙ m, µ⟩|] ≤ pσ_0µ, while the noise correlation with the weights is E_{w,m,ξ}[|⟨w ⊙ m, ξ⟩|] ≤ √(E_{w,m,ξ}[⟨w ⊙ m, ξ⟩^2]) = σ_0σ_n√(pd) by Jensen's inequality. Based on this argument, summing over all neurons, pruning hurts the weight-signal correlation more than the weight-noise correlation.

• Since we prune with Bernoulli(p), a given neuron receives no signal at all with probability 1 − p. Thus, roughly a p fraction of the neurons receive the signal, and the remaining 1 − p fraction learn purely from noise. Even though for every neuron roughly a √p portion of the ℓ2 mass of the noise is removed, pruning at the same time creates a 1 − p fraction of neurons that do not receive the signal at all and will output pure noise after training. Summing the contributions over all neurons, the signal strength is reduced by a factor of p while the noise strength is reduced by a factor of √p. We again reach the conclusion that pruning at any rate hurts the signal more than the noise.

The above analysis suggests that, at initialization, pruning at any rate can only hurt the signal more than the noise. Such an analysis would be indicative if network training were in the neural tangent kernel regime, where the weight of each neuron does not travel far from its initialization, so the above analysis still holds approximately after training. However, when neural network training is in the feature learning regime, this average-type analysis becomes misleading. Namely, in such a regime, the weights with large correlation with the signal at initialization quickly evolve into singleton neurons, while the weights with small correlation remain small. In our proof, we focus on the feature learning regime and analyze how the network weights change and what the effects of pruning are during the various stages of gradient descent.

We now analyze the effect of pruning on the weight-signal correlation and the weight-noise correlation at initialization. Our first lemma leverages the sparsity of our signal and shows that if the pruning is mild, then it does not hurt the maximum weight-signal correlation much at initialization. On the other hand, the maximum weight-noise correlation is reduced by a factor of √p.

Lemma 3.2 (Initialization).
With probability at least 1 − 2/d, for all i ∈ [n],
$\sigma_0 \sigma_n \sqrt{pd} \le \max_r \langle \widetilde{w}^{(0)}_{j,r}, \xi_i \rangle \le \sqrt{2\log(Kmd)}\, \sigma_0 \sigma_n \sqrt{pd}.$
Further, suppose pm ≥ Ω(log(Kd)); then with probability 1 − 2/d, for all j ∈ [K],
$\sigma_0 \|\mu_j\|_2 \le \max_{r \in S_j^{\mathrm{signal}}} \langle \widetilde{w}^{(0)}_{j,r}, \mu_j \rangle \le \sqrt{2\log(8pmKd)}\, \sigma_0 \|\mu_j\|_2.$
Given this lemma, we now prove that there exists at least one neuron that is heavily aligned with the signal after training. Similarly to previous works (Allen-Zhu and Li, 2020; Zou et al., 2021; Cao et al., 2022), the analysis is divided into two phases: a feature growing phase and a converging phase. Feature Growing Phase.
In this phase, the gradient of the cross-entropy is large and the weight-signal correlation grows much more quickly than the weight-noise correlation thanks to the polynomial ReLU. We show that the signal strength is relatively unaffected by pruning while the noise level is reduced by a factor of √p. Lemma 3.3 (Feature Growing Phase, Informal). Under Condition 2.2, there exists a time T1 such that 1. The max weight-signal correlation is large: $\max_r \langle \widetilde{w}^{(T_1)}_{j,r}, \mu_j \rangle \ge m^{-1/q}$ for j ∈ [K]. 2. The weight-noise and cross-class weight-signal correlations are small: if j ≠ yi, then $\max_{j,r,i} |\langle \widetilde{w}^{(T_1)}_{j,r}, \xi_i \rangle| \le O(\sigma_0 \sigma_n \sqrt{pd})$ and $\max_{j,r,k} |\langle \widetilde{w}^{(T_1)}_{j,r}, \mu_k \rangle| \le \widetilde{O}(\sigma_0 \mu)$.
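The "large correlations grow much faster" effect of the polynomial activation can be illustrated with a toy one-dimensional dynamic. Assuming, as a cartoon of the lemma rather than its proof, that a correlation c evolves as ċ ∝ c^(q−1) with q = 3, a neuron whose initial correlation is twice as large crosses a constant threshold far sooner:

```python
import numpy as np

q, lr, steps = 3, 0.01, 2000
c = np.array([0.2, 0.1])         # large vs. small initial correlation (toy values)
hist = []
for _ in range(steps):
    c = np.minimum(c + lr * c ** (q - 1), 1.0)  # Euler step of c' = c^(q-1), capped at 1
    hist.append(c.copy())
hist = np.array(hist)

# first iteration at which each correlation crosses the 0.5 threshold
t_big = int(np.argmax(hist[:, 0] >= 0.5))
t_small = int(np.argmax(hist[:, 1] >= 0.5))
```

The gap between t_big and t_small is much larger than the factor-of-two gap in initial correlations, which is the mechanism behind singleton neurons emerging from the best-aligned initial weights.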
Converging Phase. We show that gradient descent can drive the training loss toward zero while the signal in the feature remains large. An important intermediate step in our argument is the following gradient upper bound for the multi-class cross-entropy loss, which introduces an extra factor of K. Lemma 3.4 (Gradient Upper Bound, Informal). Under Condition 2.2, we have $\|\nabla L_S(\widetilde{W}^{(t)}) \odot M\|_F^2 \le O(K m^{2/q} \mu^2)\, L_S(\widetilde{W}^{(t)})$. Proof Sketch. To prove this upper bound, note that for a given input (xi, yi), $\ell'^{(t)}_{y_i,i} \nabla F_{y_i}(x_i)$ should make the major contribution to $\|\nabla \ell(\widetilde{W}; x_i, y_i)\|_F$.
Further note that
$|\ell'^{(t)}_{y_i,i}| = 1 - \mathrm{logit}_{y_i}(F; x_i) = \frac{\sum_{j \ne y_i} e^{F_j(x_i)}}{\sum_j e^{F_j(x_i)}} \le \frac{\sum_{j \ne y_i} e^{F_j(x_i)}}{e^{F_{y_i}(x_i)}}.$
Now, applying the property that Fj(xi) is small for j ≠ yi (which we prove in the appendix), the numerator contributes a factor of K. To bound the rest, we utilize the special property of the multi-class cross-entropy loss: $|\ell'^{(t)}_{j,i}| \le |\ell'^{(t)}_{y_i,i}| \le \ell^{(t)}_i$. However, a naive application of this inequality would result in a factor of K³ instead of K in our bound. The trick is to further use the fact that $\sum_{j \ne y_i} |\ell'^{(t)}_{j,i}| = |\ell'^{(t)}_{y_i,i}|$. Using the above gradient upper bound, we can show that the objective can be minimized. Lemma 3.5 (Converging Phase, Informal).
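The softmax identities used in this sketch are easy to verify numerically for a single sample: the off-class derivatives sum to exactly $|\ell'_{y_i}|$, and $|\ell'_{j}| \le |\ell'_{y_i}| \le \ell_i$ follows from 1 − x ≤ −log x. The logits below are arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(0)
K, y = 10, 3
F = rng.normal(size=K)                   # arbitrary logits for one sample

s = np.exp(F - F.max())
s /= s.sum()                             # numerically stable softmax
loss = -np.log(s[y])                     # cross-entropy loss

grad = s.copy()
grad[y] -= 1.0                           # dloss/dF_j = softmax_j - 1{j = y}

lp_y = abs(grad[y])                      # |l'_y| = 1 - softmax_y
off_sum = np.abs(np.delete(grad, y)).sum()   # sum of off-class |l'_j|
```

Here off_sum equals lp_y exactly, which is the identity that removes the extra K² from the naive bound.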
Under Condition 2.2, there exists T2 such that for some time t ∈ [T1, T2] we have 1. The results from the feature growing phase (Lemma 3.3) hold up to constant factors. 2. The training loss is small: $L_S(\widetilde{W}^{(t)}) \le \epsilon$. Notice that the weight-noise correlation remains reduced by a factor of √p after training. Lemma 3.5 proves the statement about the training loss in Theorem 3.1. Generalization Analysis.
Finally, we show that pruning can purify the feature by reducing the variance of the noise by a factor of p when a new sample is given. The lemma below shows that the variance of the weight-noise correlation for the trained weights is reduced by a factor of p. Lemma 3.6. The neural network weight $\widetilde{W}^\star$ after training satisfies
$\mathbb{P}_{\xi}\Big[\max_{j,r} |\langle \widetilde{w}^\star_{j,r}, \xi \rangle| \ge (2m)^{-2/q}\Big] \le 2Km \exp\Big(-\frac{(2m)^{-4/q}}{O(\sigma_0^2 \sigma_n^2 p d)}\Big).$
Using this lemma, we can show that pruning yields the better generalization bound (i.e., the bound on the generalization loss) claimed in Theorem 3.1.

4 Over Pruning

Our second result shows that there exists a relatively large pruning fraction (i.e., small p) such that the learned model yields poor generalization, although gradient descent is still able to drive the training error toward zero. The full proof is deferred to Appendix E. Theorem 4.1 (Main Theorem for Over Pruning, Informal). Under Condition 2.2, if p = Θ(1/(Km log d)), then with probability at least 1 − 1/poly log d over the randomness in the data, network initialization, and pruning, there exists $T = O(\eta^{-1} n \sigma_0^{q-2} \sigma_n^{-q} (pd)^{-q/2} + \eta^{-1} \epsilon^{-1} m^4 n \sigma_n^{-2} (pd)^{-1})$ such that 1. The training loss is below ϵ: $L_S(\widetilde{W}^{(T)}) \le \epsilon$. 2. The generalization loss is large: $L_D(\widetilde{W}^{(T)}) \ge \Omega(\log K)$.
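The Ω(log K) benchmark in item 2 is exactly the cross-entropy of uniform random guessing; a one-line check (K = 10 is an arbitrary choice):

```python
import numpy as np

K = 10
probs = np.full(K, 1.0 / K)      # uniform prediction over K classes
ce = -np.log(probs[0])           # cross-entropy, the same for any true label
```

A model whose generalization loss plateaus at log K has therefore learned nothing about the labels.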
Remark 4.2. The above theorem indicates that in the over-pruning case, the training loss can still go to zero. However, the generalization loss of the neural network is no better than random guessing: given any sample, random guessing assigns each class probability 1/K, which yields a generalization loss of log K. The reader might wonder why the condition for this to happen is p = Θ(1/(Km log d)) instead of O(1/(Km log d)). Indeed, the generalization will still be bad if p is too small. However, in that case the neural network is not only unable to learn the signal but also cannot efficiently memorize the noise via gradient descent. Proof Outline. Now we analyze the over-pruning case.
We first show that there is a good chance that the model will not receive any signal after pruning, due to the sparse signal assumption and the mild overparameterization of the neural network. Then, leveraging this property, we bound the weight-signal and weight-noise correlations during the feature growing and converging phases of gradient descent, as stated in the following two lemmas, respectively. Our result indicates that the training loss can still be driven toward zero by letting the neural network memorize the noise; the proof further exploits the fact that high-dimensional Gaussian noise vectors are nearly orthogonal. Lemma 4.3 (Feature Growing Phase, Informal). Under Condition 2.2, there exists T1 such that some weights have large correlation with the noise: $\max_r \langle \widetilde{w}^{(T_1)}_{y_i,r}, \xi_i \rangle \ge m^{-1/q}$ for all i ∈ [n].
The cross-class weight-noise and weight-signal correlations are small: if j ≠ yi, then $\max_{j,r,i} |\langle \widetilde{w}^{(T_1)}_{j,r}, \xi_i \rangle| = \widetilde{O}(\sigma_0 \sigma_n \sqrt{pd})$ and $\max_{j,r,k} |\langle \widetilde{w}^{(T_1)}_{j,r}, \mu_k \rangle| \le \widetilde{O}(\sigma_0 \mu)$. Lemma 4.4 (Converging Phase, Informal). Under Condition 2.2, there exists a time T2 such that for some t ∈ [T1, T2], the results from phase 1 still hold (up to constant factors) and $L_S(\widetilde{W}^{(t)}) \le \epsilon$. Finally, since the above lemmas show that the network is purely memorizing the noise, we further show that such a network yields poor generalization performance, as stated in Theorem 4.1.

5 Experiments

5.1 Simulations to Verify Our Results

In this section, we conduct simulations to verify our results.
We conduct our experiment on a binary classification task and show that our result holds for ReLU networks. Our experiment settings are as follows: we choose the input to be x = [x1, x2] = [ye1, ξ] ∈ R^800 with x1, x2 ∈ R^400, where ξ is sampled from a Gaussian distribution. The class labels y are {±1}. We use 100 training examples and 100 testing examples. The network has width 150 and is initialized from a random Gaussian distribution with variance 0.01. Then, a p fraction of the weights is randomly pruned. We use a learning rate of 0.001 and train the network for 1000 iterations by gradient descent. The observations are summarized as follows.
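A minimal NumPy sketch of this setup is below. It is our reconstruction from the description above, with unstated details filled in as assumptions: a fixed ±1/m second layer, logistic loss, and fewer iterations to keep it fast. The paper's exact training recipe may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_half, m, p, sigma_n, lr = 100, 400, 150, 0.5, 0.5, 0.001

# x = [y*e1, xi]: 1-sparse signal in the first half, Gaussian noise in the second
y = rng.choice([-1.0, 1.0], size=n)
X = np.zeros((n, 2 * d_half))
X[:, 0] = y
X[:, d_half:] = sigma_n * rng.normal(size=(n, d_half))

W = 0.1 * rng.normal(size=(m, 2 * d_half))        # Gaussian init, variance 0.01
M = (rng.random(W.shape) < p).astype(float)       # random mask, keep w.p. p
W *= M                                            # prune at initialization
a = rng.choice([-1.0, 1.0], size=m) / m           # fixed second layer (assumption)

def forward(W):
    return np.maximum(W @ X.T, 0.0).T @ a         # two-layer ReLU network output

losses = []
for _ in range(300):
    out = forward(W)
    losses.append(np.log1p(np.exp(-y * out)).mean())   # logistic loss
    g = -y / (1.0 + np.exp(y * out)) / n               # dloss/dout per sample
    gate = (W @ X.T > 0).astype(float)                 # ReLU derivative, m x n
    grad = (gate * np.outer(a, g)) @ X                 # chain rule to first layer
    W -= lr * grad * M                                 # masked gradient step
```

Masking both the initialization and every gradient step keeps the pruned coordinates exactly zero throughout training, which is what "pruning at initialization" means here.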
In Figure 2a, when the noise level is σn = 0.5, the pruned network usually performs at a similar level to the full model when p ≤ 0.5, and noticeably better when p = 0.3. When p > 0.5, the test error increases dramatically while the training accuracy still remains perfect. On the other hand, when the noise level becomes large, σn = 1 (Figure 2b), the full model can no longer achieve good testing performance, but mild pruning can improve the model's generalization. Note that the training accuracy in this case is still perfect (omitted in the figure). We observe in both settings that when the model's test error is large, the variance is also large.
However, in Figure 2b, despite the large variance, the mean curve is already smooth. In particular, Figure 2c plots the testing error over the training iterations under a pruning rate of p = 0.5. This suggests that pruning can be beneficial even when the input noise is large.

5.2 On the Real World Dataset

To further demonstrate the mild/over-pruning phenomenon, we conduct experiments on the MNIST (Deng, 2012) and CIFAR-10 (Krizhevsky et al., 2009) datasets. We consider neural network architectures including an MLP with 2 hidden layers of width 1024, VGG, ResNets (He et al., 2016), and wide ResNet (Zagoruyko and Komodakis, 2016).
[Figure 2 omitted: panels (a) and (b) plot training/testing error against pruning rates; panel (c) plots testing error over training iterations for the full and pruned models.]

Figure 2: Figure (a) shows the relationship between pruning rates p and training/testing error under noise variance σn = 0.5. Figure (b) shows the relationship between pruning rates p and testing error under noise variance σn = 1. The training error is omitted since it stays effectively at zero across all pruning rates. Figure (c) shows a particular training curve under pruning rate p = 50% and noise variance σn = 1. Each data point is created by taking an average over 10 independent runs.

In addition to random pruning, we also add
[Figure 3, panels (a) and (b): accuracy versus sparsity for random pruning and IMP (train and test) on MLP/MNIST and VGG-16/CIFAR-10; axis ticks omitted.]
Figure 3: Figure (a) shows the relationship between the pruning rate p and accuracy for MLP-1024-1024 on MNIST. Figure (b) shows the result for VGG-16 on CIFAR-10. Each data point is created by taking an average over 3 independent runs.
iterative-magnitude-based pruning (Frankle and Carbin, 2018) into our experiments. Both pruning methods are prune-at-initialization methods. Our implementation is based on Chen et al. (2021c). In the real-world setting, we do not expect our theorem to hold exactly.
Instead, our theorem implies that (1) there exists a threshold such that the testing performance is not much worse than (and sometimes slightly better than) that of its dense counterpart; and (2) the training error decreases later than the testing error does. Our experiments on MLP (Figure 3a) and VGG-16 (Figure 3b) show that this is the case: the test accuracy remains competitive with that of the dense counterpart when the sparsity is below 79% for MLP and below 36% for VGG-16. We further provide experiments on ResNet in the appendix to validate our theoretical results.
6 Discussion and Future Direction
In this work, we provide theory on the generalization performance of pruned neural networks trained by gradient descent under different pruning rates.
Our results characterize the effect of pruning under different pruning rates: in the mild pruning case, the signal in the feature is well preserved and the noise level is reduced, which leads to improvement in the trained network's generalization; on the other hand, over-pruning significantly destroys the signal strength despite reducing the noise variance. One open problem on this topic still appears challenging. In this paper, we characterize two cases of pruning: in mild pruning the signal is preserved, while in over-pruning the signal is completely destroyed. However, the transition between these two cases is not well understood. Further, it would be interesting to consider more general data distributions and to understand how pruning affects the training of multi-layer neural networks. We leave these interesting directions for future work.
References
Allen-Zhu, Z.
and Li, Y. (2020). Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. arXiv preprint arXiv:2012.09816.
Allen-Zhu, Z. and Li, Y. (2022). Feature purification: How adversarial training performs robust deep learning. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS). IEEE.
Allen-Zhu, Z., Li, Y. and Song, Z. (2019). A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning. PMLR.
Arora, S., Du, S., Hu, W., Li, Z. and Wang, R. (2019). Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning. PMLR.
Cao, Y., Chen, Z., Belkin, M. and Gu, Q. (2022). Benign overfitting in two-layer convolutional neural networks. arXiv preprint arXiv:2202.06526.
Cao, Y. and Gu, Q. (2019). Generalization bounds of stochastic gradient descent for wide and deep neural networks. Advances in Neural Information Processing Systems 32.
Chen, T., Ji, B., Ding, T., Fang, B., Wang, G., Zhu, Z., Liang, L., Shi, Y., Yi, S. and Tu, X. (2021a). Only train once: A one-shot neural network training and pruning framework. Advances in Neural Information Processing Systems 34.
Chen, T., Zhang, Z., Balachandra, S., Ma, H., Wang, Z., Wang, Z. et al. (2021b). Sparsity winning twice: Better robust generalization from more efficient training. In International Conference on Learning Representations.
Chen, X., Cheng, Y., Wang, S., Gan, Z., Liu, J. and Wang, Z. (2021c). The elastic lottery ticket hypothesis. Github Repository, MIT License.
Chen, Z., Cao, Y., Gu, Q. and Zhang, T. (2020a). A generalized neural tangent kernel analysis for two-layer neural networks. Advances in Neural Information Processing Systems 33 13363–13373.
Chen, Z., Cao, Y., Zou, D. and Gu, Q. (2020b). How much over-parameterization is sufficient to learn deep ReLU networks? In International Conference on Learning Representations.
Chizat, L. and Bach, F. (2018). On the global convergence of gradient descent for over-parameterized models using optimal transport. Advances in Neural Information Processing Systems 31.
Chizat, L., Oyallon, E. and Bach, F. (2019). On lazy training in differentiable programming. Advances in Neural Information Processing Systems 32.
Deng, L. (2012). The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Processing Magazine 29 141–142.
Ding, S., Chen, T. and Wang, Z. (2021). Audio lottery: Speech recognition made ultra-lightweight, noise-robust, and transferable. In International Conference on Learning Representations.
Du, S., Lee, J., Li, H., Wang, L. and Zhai, X. (2019). Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning. PMLR.
Evci, U., Gale, T., Menick, J., Castro, P. S. and Elsen, E. (2020). Rigging the lottery: Making all tickets winners. In International Conference on Machine Learning. PMLR.
Fang, C., Lee, J., Yang, P. and Zhang, T. (2021). Modeling from features: a mean-field framework for over-parameterized deep neural networks. In Conference on Learning Theory. PMLR.
Frankle, J. and Carbin, M. (2018). The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations.
Frankle, J., Dziugaite, G. K., Roy, D. and Carbin, M. (2020). Pruning neural networks at initialization: Why are we missing the mark? In International Conference on Learning Representations.
Gale, T., Elsen, E. and Hooker, S. (2019). The state of sparsity in deep neural networks. arXiv preprint arXiv:1902.09574.
He, K., Zhang, X., Ren, S. and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
He, Y., Zhang, X. and Sun, J. (2017). Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision.
He, Z., Xie, Z., Zhu, Q. and Qin, Z. (2022). Sparse double descent: Where network pruning aggravates overfitting. In International Conference on Machine Learning. PMLR.
Jacot, A., Gabriel, F. and Hongler, C. (2018). Neural tangent kernel: Convergence and generalization in neural networks.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Advances in neural information processing systems 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Jayakumar, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Pascanu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Rae, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Osindero, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Elsen, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Top-kast: Top-k always sparse training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Advances in Neural Information Processing Systems 33 20744–20754.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Ji, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Telgarsky, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2019).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' In International Conference on Learning Representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' 13 Kepner, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Robinett, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Radix-net: Structured sparse matrices for deep neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' In 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' IEEE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Krizhevsky, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Hinton, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2009).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Learning multiple layers of features from tiny images .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' LeCun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Denker, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Solla, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (1989).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Optimal brain damage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Advances in neural information processing systems 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Lee, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Xiao, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Schoenholz, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Bahri, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Novak, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Sohl-Dickstein, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Pen- nington, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Wide neural networks of any depth evolve as linear models under gradient descent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Advances in neural information processing systems 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Lee, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Ajanthan, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Torr, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Snip: Single-shot network pruning based on connection sensitivity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' In International Conference on Learning Representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Liu, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Chen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Atashgahi, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Yin, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Kou, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Shen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Pechenizkiy, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Mocanu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2021a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Sparse training via boosting pruning plasticity with neuroregeneration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Advances in Neural Information Processing Systems 34.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Chen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Shen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Mocanu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Pechenizkiy, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2021b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' The unreasonable effectiveness of random pruning: Return of the most naive baseline for sparse training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' In International Conference on Learning Representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Liu, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Mocanu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Matavalam, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Pei, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Pechenizkiy, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2021c).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Neural Computing and Applications 33 2589–2604.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Yin, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Mocanu, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Pechenizkiy, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2021d).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Do we actually need dense over- parameterization?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' in-time over-parameterization in sparse training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' In International Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Liu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Zenke, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Finding trainable sparse networks through neural tangent transfer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' In International Conference on Machine Learning.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Luo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Wu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' An entropy-based pruning method for cnn compression.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' arXiv preprint arXiv:1706.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content='05791 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Malach, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Yehudai, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Shalev-Schwartz, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Shamir, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2020).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Proving the lottery ticket hypothesis: Pruning is all you need.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' In International Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Mariet, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Sra, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Diversity networks: Neural network compression using determi- nantal point processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' arXiv preprint arXiv:1511.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content='05077 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Mocanu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Mocanu, E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Nguyen, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Gibescu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Liotta, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' A topological insight into restricted boltzmann machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Machine Learning 104 243–270.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Mocanu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Mocanu, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Stone, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Nguyen, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Gibescu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Liotta, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Nature communications 9 1–12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' 14 Molchanov, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Tyree, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Karras, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=', Aila, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Kautz, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Pruning convolutional neural networks for resource efficient inference.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' In 5th International Conference on Learning Representations, ICLR 2017-Conference Track Proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Mostafa, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Wang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Parameter efficient training of deep convolutional neural net- works by dynamic sparse reparameterization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' In International Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Oymak, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' and Soltanolkotabi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Toward moderate overparameterization: Global convergence guarantees for training shallow neural networks.' 
IEEE Journal on Selected Areas in Information Theory 1 84–105.
Pensia, A., Rajput, S., Nagle, A., Vishwakarma, H. and Papailiopoulos, D. (2020). Optimal lottery tickets via subset sum: Logarithmic over-parameterization is sufficient. Advances in Neural Information Processing Systems 33 2599–2610.
Peste, A., Iofinova, E., Vladu, A. and Alistarh, D. (2021). AC/DC: Alternating compressed/decompressed training of deep neural networks. Advances in Neural Information Processing Systems 34.
Prabhu, A., Varma, G. and Namboodiri, A. (2018). Deep expander networks: Efficient deep networks from graph theory. In Proceedings of the European Conference on Computer Vision (ECCV).
Ramanujan, V., Wortsman, M., Kembhavi, A., Farhadi, A. and Rastegari, M. (2020). What's hidden in a randomly weighted neural network? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Rotskoff, G. M. and Vanden-Eijnden, E. (2018). Neural networks as interacting particle systems: Asymptotic convexity of the loss landscape and universal scaling of the approximation error. stat 1050 22.
Shi, Z., Wei, J. and Liang, Y. (2021). A theoretical analysis on feature learning in neural networks: Emergence from inputs and advantage over fixed features. In International Conference on Learning Representations.
Sirignano, J. and Spiliopoulos, K. (2020). Mean field analysis of neural networks: A law of large numbers. SIAM Journal on Applied Mathematics 80 725–752.
Song, M., Montanari, A. and Nguyen, P. (2018). A mean field view of the landscape of two-layers neural networks. Proceedings of the National Academy of Sciences 115 E7665–E7671.
Song, Z., Yang, S. and Zhang, R. (2021). Does preprocessing help training over-parameterized neural networks? Advances in Neural Information Processing Systems 34.
Song, Z. and Yang, X. (2019). Quadratic suffices for over-parametrization via matrix Chernoff bound. arXiv preprint arXiv:1906.03593.
Sreenivasan, K., Rajput, S., Sohn, J.-y. and Papailiopoulos, D. (2021). Finding everything within random binary networks. arXiv preprint arXiv:2110.08996.
Sreenivasan, K., Sohn, J.-y., Yang, L., Grinde, M., Nagle, A., Wang, H., Lee, K. and Papailiopoulos, D. (2022). Rare gems: Finding lottery tickets at initialization. arXiv preprint arXiv:2202.12002.
Su, J., Chen, Y., Cai, T., Wu, T., Gao, R., Wang, L. and Lee, J. D. (2020). Sanity-checking pruning methods: Random tickets can win the jackpot. Advances in Neural Information Processing Systems 33 20390–20401.
Suau, X., Zappella, L. and Apostoloff, N. (2018). Network compression using correlation analysis of layer responses.
Tanaka, H., Kunin, D., Yamins, D. L. and Ganguli, S. (2020). Pruning neural networks without any data by iteratively conserving synaptic flow. Advances in Neural Information Processing Systems 33 6377–6389.
Telgarsky, M. (2022). Feature selection with gradient descent on two-layer networks in low-rotation regimes. arXiv preprint arXiv:2208.02789.
Wang, C., Zhang, G. and Grosse, R. (2019). Picking winning tickets before training by preserving gradient flow. In International Conference on Learning Representations.
Wei, C., Lee, J. D., Liu, Q. and Ma, T. (2019). Regularization matters: Generalization and optimization of neural nets vs their induced kernel. Advances in Neural Information Processing Systems 32.
Yang, H., Wen, W. and Li, H. (2020). DeepHoyer: Learning sparser neural network with differentiable scale-invariant sparsity measures. In International Conference on Learning Representations.
Yang, Q., Mao, J., Wang, Z. and Hai, H. L. (2021). Dynamic regularization on activation sparsity for neural network efficiency improvement. ACM Journal on Emerging Technologies in Computing Systems (JETC) 17 1–16.
Ye, M., Gong, C., Nie, L., Zhou, D., Klivans, A. and Liu, Q. (2020). Good subnetworks provably exist: Pruning via greedy forward selection. In International Conference on Machine Learning. PMLR.
Zagoruyko, S. and Komodakis, N. (2016). Wide residual networks. In British Machine Vision Conference 2016. British Machine Vision Association.
Zhou, H., Lan, J., Liu, R. and Yosinski, J. (2019). Deconstructing lottery tickets: Zeros, signs, and the supermask. Advances in Neural Information Processing Systems 32.
Zhou, J., Li, X., Ding, T., You, C., Qu, Q. and Zhu, Z. (2022). On the optimization landscape of neural collapse under MSE loss: Global optimality with unconstrained features. arXiv preprint arXiv:2203.01238.
Zhu, Z., Ding, T., Zhou, J., Li, X., You, C., Sulam, J. and Qu, Q. (2021). A geometric analysis of neural collapse with unconstrained features. Advances in Neural Information Processing Systems 34.
Zou, D., Cao, Y., Li, Y. and Gu, Q. (2021). Understanding the generalization of Adam in learning neural networks with proper regularization. arXiv preprint arXiv:2108.11371.
Zou, D., Cao, Y., Zhou, D. and Gu, Q. (2020). Gradient descent optimizes over-parameterized deep ReLU networks. Machine Learning 109 467–492.
Zou, D. and Gu, Q. (2019). An improved analysis of training over-parameterized deep neural networks. Advances in Neural Information Processing Systems 32.
A Experiment Details

The experiments on the MLP, VGG, and ResNet-32 models are run on an NVIDIA A5000, while ResNet-50 and ResNet-20-128 are run on 4 NVIDIA V100s. We list the hyperparameters used in training. All of our models are trained with SGD, and the detailed settings are summarized below.

Table 1: Summary of architectures, datasets and training hyperparameters

Model    Data      Epoch  Batch Size  LR   Momentum  LR Decay, Epoch   Weight Decay
LeNet    MNIST     120    128         0.1  0         0                 0
VGG      CIFAR-10  160    128         0.1  0.9       0.1 × [80, 120]   0.0001
ResNets  CIFAR-10  160    128         0.1  0.9       0.1 × [80, 120]   0.0001

B Further Experiment Results

We plot the experiment results of ResNet-20-128 in Figure 4. This figure further verifies our finding that there exists a pruning-rate threshold such that the testing performance of the pruned network is on par with the testing performance of the dense model while the training accuracy remains perfect.
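The random-pruning baseline used in these experiments can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (the function name `random_prune`, the seed, and the layer shape are hypothetical), not the authors' code: each weight is kept independently, so a sparsity of s zeroes out roughly a fraction s of the entries at initialization.

```python
import numpy as np

def random_prune(weights: np.ndarray, sparsity: float, seed: int = 0) -> np.ndarray:
    """Zero out a random fraction `sparsity` of the entries, each kept
    independently with probability 1 - sparsity."""
    rng = np.random.default_rng(seed)
    keep = rng.random(weights.shape) >= sparsity  # Boolean keep-mask
    return weights * keep

# Example: prune a randomly initialized 256x256 layer to 80% sparsity
w = np.random.default_rng(1).normal(size=(256, 256))
pruned = random_prune(w, sparsity=0.8)
frac_zero = float(np.mean(pruned == 0.0))  # close to 0.8, up to sampling noise
```

The IMP baseline in Figure 4 instead prunes iteratively by weight magnitude after training; only the random-pruning-at-initialization baseline is sketched here.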
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content='0 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content='2 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content='6 Sparsity 95 96 97 98 99 100 Accuracy ResNet-20-128 CIFAR-10 Accuracy vs Sparsity Random (Train) Random (Test) IMP (Train) IMP (Test) Figure 4: The figure shows the experiment results of ResNet-20-128 under various sparsity by random pruning and IMP.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Each data point is averaged over 2 runs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' C Preliminary for Analysis In this section, we introduce the following signal-noise decomposition of each neuron weight from Cao et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' (2022), and some useful properties for the terms in such a decomposition, which are useful in our analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Definition C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content='1 (signal-noise decomposition).' 
For each neuron weight $j \in [K]$, $r \in [m]$, there exist coefficients $\gamma^{(t)}_{j,r,k}$, $\zeta^{(t)}_{j,r,i}$, $\omega^{(t)}_{j,r,i}$ such that
$$\widetilde{w}^{(t)}_{j,r} = \widetilde{w}^{(0)}_{j,r} + \sum_{k=1}^{K} \gamma^{(t)}_{j,r,k} \cdot \|\mu_k\|_2^{-2}\, \mu_k \odot m_{j,r} + \sum_{i=1}^{n} \zeta^{(t)}_{j,r,i} \cdot \big\|\widetilde{\xi}_{j,r,i}\big\|_2^{-2}\, \widetilde{\xi}_{j,r,i} + \sum_{i=1}^{n} \omega^{(t)}_{j,r,i} \cdot \big\|\widetilde{\xi}_{j,r,i}\big\|_2^{-2}\, \widetilde{\xi}_{j,r,i},$$
where $\gamma^{(t)}_{j,r,j} \ge 0$, $\gamma^{(t)}_{j,r,k} \le 0$ for $k \ne j$, $\zeta^{(t)}_{j,r,i} \ge 0$, and $\omega^{(t)}_{j,r,i} \le 0$.

It is straightforward to see the following: $\gamma^{(0)}_{j,r,k} = \zeta^{(0)}_{j,r,i} = \omega^{(0)}_{j,r,i} = 0$, and
$$\gamma^{(t+1)}_{j,r,j} = \gamma^{(t)}_{j,r,j} - \mathbb{I}\big(r \in S^{j}_{\mathrm{signal}}\big)\, \frac{\eta}{n} \sum_{i=1}^{n} \ell'^{(t)}_{j,i} \cdot \sigma'\big(\big\langle \widetilde{w}^{(t)}_{j,r}, \mu_{y_i} \big\rangle\big)\, \|\mu_{y_i}\|_2^2\, \mathbb{I}(y_i = j),$$
$$\gamma^{(t+1)}_{j,r,k} = \gamma^{(t)}_{j,r,k} - \mathbb{I}\big((m_{j,r})_k = 1\big)\, \frac{\eta}{n} \sum_{i=1}^{n} \ell'^{(t)}_{j,i} \cdot \sigma'\big(\big\langle \widetilde{w}^{(t)}_{j,r}, \mu_{y_i} \big\rangle\big)\, \|\mu_{y_i}\|_2^2\, \mathbb{I}(y_i = k), \quad \forall j \ne k,$$
$$\zeta^{(t+1)}_{j,r,i} = \zeta^{(t)}_{j,r,i} - \frac{\eta}{n} \cdot \ell'^{(t)}_{j,i} \cdot \sigma'\big(\big\langle \widetilde{w}^{(t)}_{j,r}, \xi_i \big\rangle\big)\, \big\|\widetilde{\xi}_{j,r,i}\big\|_2^2\, \mathbb{I}(j = y_i),$$
$$\omega^{(t+1)}_{j,r,i} = \omega^{(t)}_{j,r,i} - \frac{\eta}{n} \cdot \ell'^{(t)}_{j,i} \cdot \sigma'\big(\big\langle \widetilde{w}^{(t)}_{j,r}, \xi_i \big\rangle\big)\, \big\|\widetilde{\xi}_{j,r,i}\big\|_2^2\, \mathbb{I}(j \ne y_i),$$
where $\{\gamma^{(t)}_{j,r,j}\}_{t=1}^{T}$, $\{\zeta^{(t)}_{j,r,i}\}_{t=1}^{T}$ are increasing sequences and $\{\gamma^{(t)}_{j,r,k}\}_{t=1}^{T}$ ($j \ne k$), $\{\omega^{(t)}_{j,r,i}\}_{t=1}^{T}$ are decreasing sequences, because $-\ell'^{(t)}_{j,i} \ge 0$ when $j = y_i$, and $-\ell'^{(t)}_{j,i} \le 0$ when $j \ne y_i$.

By Lemma D.4, we have $pd > n + K$, and hence the set of vectors $\{\mu_k\}_{k=1}^{K} \cup \{\widetilde{\xi}_{j,r,i}\}_{i=1}^{n}$ is linearly independent with probability 1 over the Gaussian distribution, for each $j \in [K]$, $r \in [m]$. Therefore the decomposition is unique.

D Proof of Theorem 3.1

We first formally restate Theorem 3.1.

Theorem D.1 (Formal Restatement of Theorem 3.1). Under Condition 2.2, choose initialization variance $\sigma_0 = \widetilde{\Theta}(m^{-4} n^{-1} \mu^{-1})$ and learning rate $\eta \le \widetilde{O}(1/\mu^2)$.
For $\epsilon > 0$, if $p \ge \frac{C_1 \log d}{m}$ for some sufficiently large constant $C_1$, then with probability at least $1 - O(d^{-1})$ over the randomness in the data, network initialization and pruning, there exists $T = \widetilde{O}\big(K \eta^{-1} \sigma_0^{2-q} \mu^{-q} + K^2 m^4 \mu^{-2} \eta^{-1} \epsilon^{-1}\big)$ such that the following holds:

1. The training loss is below $\epsilon$: $L_S(\widetilde{W}^{(T)}) \le \epsilon$.
2. The weights of the CNN highly correlate with the corresponding class signal: $\max_r \gamma^{(T)}_{j,r,j} \ge \Omega(m^{-1/q})$ for all $j \in [K]$.
3. The weights of the CNN do not have high correlation with the signals from different classes: $\max_{j \ne k,\, r \in [m]} |\gamma^{(T)}_{j,r,k}| \le \widetilde{O}(\sigma_0 \mu)$.
4. None of the weights is highly correlated with the noise: $\max_{j,r,i} \zeta^{(T)}_{j,r,i} = \widetilde{O}(\sigma_0 \sigma_n \sqrt{pd})$ and $\max_{j,r,i} |\omega^{(T)}_{j,r,i}| = \widetilde{O}(\sigma_0 \sigma_n \sqrt{pd})$.

Moreover, the testing loss is upper bounded by $L_D(\widetilde{W}^{(T)}) \le O(K\epsilon) + \exp(-n^2/p)$.
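The scale $\sigma_0 \sigma_n \sqrt{pd}$ appearing in item 4 is the typical size of an inner product between a pruned Gaussian weight and a Gaussian noise vector at initialization (made precise in Lemma D.7). The following Monte-Carlo sketch illustrates this scale; all dimensions and variances (`d`, `p`, `sigma0`, `sigma_n`) are hypothetical choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 10_000, 0.5           # hypothetical dimension and keep-probability
sigma0, sigma_n = 0.01, 0.1  # hypothetical init and noise scales

# Sample inner products <m ⊙ w, xi> between a Bernoulli(p)-pruned
# Gaussian weight and an independent Gaussian noise vector.
samples = []
for _ in range(2000):
    m = rng.random(d) < p                 # pruning mask, kept with prob. p
    w = sigma0 * rng.standard_normal(d)   # weight at initialization
    xi = sigma_n * rng.standard_normal(d) # noise vector
    samples.append(np.dot(m * w, xi))

# The standard deviation should match sigma0 * sigma_n * sqrt(p * d),
# since Var(<m ⊙ w, xi>) = sigma0^2 * ||m ⊙ xi||^2 ≈ sigma0^2 sigma_n^2 p d.
empirical = np.std(samples)
predicted = sigma0 * sigma_n * np.sqrt(p * d)
print(empirical / predicted)  # close to 1
```

The ratio approaching 1 reflects the $\sqrt{p}$ reduction of the weight-noise correlation discussed in Section D.1 below.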
The proof of Theorem 3.1 consists of the analysis of the effect of pruning on the signal and noise over three stages of gradient descent: initialization, the feature growing phase, and the converging phase, together with the establishment of the generalization property. We present these analyses in detail in the following subsections. A special note: the constant $C$ appearing in the proofs of the subsequent lemmas is defined locally instead of globally; that is, the constant $C$ is the same within each lemma but may differ across lemmas.

D.1 Initialization

We analyze the effect of pruning on the weight-signal correlation and the weight-noise correlation at initialization. We first present a few supporting lemmas, and then provide our main result in Lemma D.7, which shows that if the pruning is mild, then it will not hurt the max weight-signal correlation much at initialization.
On the other hand, the max weight-noise correlation is reduced by a factor of $\sqrt{p}$.

Lemma D.2. Assume $n = \Omega(K^2 \log Kd)$. Then, with probability at least $1 - 1/d$, $|\{i \in [n] : y_i = j\}| = \Theta(n/K)$ for all $j \in [K]$.

Proof. By Hoeffding's inequality, with probability at least $1 - \delta/2K$, for a fixed $j \in [K]$ we have
$$\left| \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}(y_i = j) - \frac{1}{K} \right| \le \sqrt{\frac{\log(4K/\delta)}{2n}}.$$
Therefore, as long as $n \ge 2K^2 \log(4K/\delta)$, we have
$$\left| \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}(y_i = j) - \frac{1}{K} \right| \le \frac{1}{2K}.$$
Taking a union bound over $j \in [K]$ and setting $\delta = 1/d$ yields the result.

Lemma D.3. Assume $pm = \Omega(\log d)$ and $m = \operatorname{poly}\log d$. Then, with probability at least $1 - 1/d$, for all $j \in [K]$, $k \in [K]$, we have $\sum_{r=1}^{m} (m_{j,r})_k = \Theta(pm)$, which implies that $|S^{j}_{\mathrm{signal}}| = \Theta(pm)$ for all $j \in [K]$.

Proof. When $pm = \Omega(\log d)$, by the multiplicative Chernoff bound, for a given $j \in [K]$, $k \in [K]$, we have
$$\mathbb{P}\left[ \left| \sum_{r=1}^{m} (m_{j,r})_k - pm \right| \ge 0.5\, pm \right] \le 2 \exp\{-\Omega(pm)\}.$$
Taking a union bound over $j \in [K]$, $k \in [K]$, we have
$$\mathbb{P}\left[ \exists j \in [K], k \in [K] : \left| \sum_{r=1}^{m} (m_{j,r})_k - pm \right| \ge 0.5\, pm \right] \le 2K^2 \exp\{-\Omega(pm)\} \le 1/d.$$

Lemma D.4. Assume $p = \Omega(1/\operatorname{poly}\log d)$. Then with probability at least $1 - 1/d$, for all $j \in [K]$, $r \in [m]$, $\sum_{i=1}^{d} (m_{j,r})_i = \Theta(pd)$.

Proof. By the multiplicative Chernoff bound, for a given $j, r$ we have
$$\mathbb{P}\left[ \left| \sum_{i=1}^{d} (m_{j,r})_i - pd \right| \ge 0.5\, pd \right] \le 2 \exp\{-\Omega(pd)\}.$$
Taking a union bound over $j, r$, we have
$$\mathbb{P}\left[ \exists j \in [K], r \in [m] : \left| \sum_{i=1}^{d} (m_{j,r})_i - pd \right| \ge 0.5\, pd \right] \le 2Km \exp\{-\Omega(pd)\} \le 1/d,$$
where the last inequality follows from our choices of $p, K, m, d$.

Lemma D.5. Suppose $p = \Omega(1/\operatorname{poly}\log d)$ and $m, n = \operatorname{poly}\log d$.
With probability at least $1 - 1/d$, we have
$$\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2 = \Theta(\sigma_n^2 pd), \quad \left| \big\langle \widetilde{\xi}_{j,r,i}, \xi_{i'} \big\rangle \right| \le O\big(\sigma_n^2 \sqrt{pd \log d}\big), \quad \left| \big\langle \mu_k, \widetilde{\xi}_{j,r,i} \big\rangle \right| \le |\langle \mu, \xi_i \rangle| \le O\big(\sigma_n \mu \sqrt{\log d}\big),$$
for all $j \in [K]$, $r \in [m]$, $i, i' \in [n]$ with $i \ne i'$.

Proof. From Lemma D.4, we have with probability at least $1 - 1/d$,
$$\sum_{k=1}^{d} (m_{j,r})_k = \Theta(pd), \quad \forall j \in [K], r \in [m].$$
For a set of independent Gaussian random variables $g_1, \ldots, g_N \sim N(0, \sigma^2)$, by Bernstein's inequality, with probability at least $1 - \delta$ we have
$$\left| \sum_{i=1}^{N} g_i^2 - \sigma^2 N \right| \lesssim \sigma^2 \sqrt{N \log \frac{1}{\delta}}.$$
Thus, by a union bound over $j, r, i$, with probability at least $1 - 1/d$, we have $\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2 = \Theta(\sigma_n^2 pd)$.
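This first concentration claim is easy to check numerically: the squared norm of a Bernoulli($p$)-pruned Gaussian noise vector concentrates around $\sigma_n^2 pd$. A small sketch with hypothetical values of `d`, `p`, and `sigma_n`:

```python
import numpy as np

rng = np.random.default_rng(1)
d, p, sigma_n = 20_000, 0.3, 0.05  # hypothetical values

# Pruned noise vector xi_tilde = m ⊙ xi with a Bernoulli(p) mask m.
m = rng.random(d) < p
xi = sigma_n * rng.standard_normal(d)
sq_norm = np.sum((m * xi) ** 2)

# About p*d coordinates survive, each contributing sigma_n^2 in expectation.
print(sq_norm / (sigma_n**2 * p * d))  # concentrates near 1 for large d
```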
For $i \ne i'$, again by Bernstein's inequality, with probability at least $1 - \delta$ we have
$$\left| \big\langle \widetilde{\xi}_{j,r,i}, \xi_{i'} \big\rangle \right| \le O\left( \sigma_n^2 \sqrt{pd \log \frac{Kmn}{\delta}} \right)$$
for all $j, r, i$. Plugging in $\delta = 1/d$ gives the result. The proof for $|\langle \mu, \xi_i \rangle|$ is similar.

Lemma D.6. Suppose we have $m$ independent Gaussian random variables $g_1, g_2, \ldots, g_m \sim N(0, \sigma^2)$. Then with probability at least $1 - \delta$,
$$\max_i g_i \ge \sigma \sqrt{\log \frac{m}{\log(1/\delta)}}.$$

Proof.
By the standard tail bound for a Gaussian random variable, we have for every $x > 0$,
$$\left( \frac{\sigma}{x} - \frac{\sigma^3}{x^3} \right) \frac{e^{-x^2/2\sigma^2}}{\sqrt{2\pi}} \le \mathbb{P}[g > x] \le \frac{\sigma}{x} \cdot \frac{e^{-x^2/2\sigma^2}}{\sqrt{2\pi}}.$$
We want to pick $x^\star$ such that
$$\mathbb{P}\left[ \max_i g_i \le x^\star \right] = \big( \mathbb{P}[g_i \le x^\star] \big)^m = \big( 1 - \mathbb{P}[g_i \ge x^\star] \big)^m \le e^{-m\, \mathbb{P}[g_i \ge x^\star]} \le \delta,$$
which gives
$$\mathbb{P}[g_i \ge x^\star] = \Theta\left( \frac{\log(1/\delta)}{m} \right) \quad \Rightarrow \quad x^\star = \Theta\left( \sigma \sqrt{\log \frac{m}{\log(1/\delta) \log m}} \right).$$

Lemma D.7 (Formal Restatement of Lemma 3.2). With probability at least $1 - 2/d$, for all $i \in [n]$,
$$\sigma_0 \sigma_n \sqrt{pd} \le \max_r \big\langle \widetilde{w}^{(0)}_{j,r}, \xi_i \big\rangle \le \sqrt{2 \log(Kmd)}\, \sigma_0 \sigma_n \sqrt{pd}.$$
Further, suppose $pm \ge \Omega(\log(Kd))$. Then with probability at least $1 - 2/d$, for all $j \in [K]$,
$$\sigma_0 \|\mu_j\|_2 \le \max_{r \in S^{j}_{\mathrm{signal}}} \big\langle \widetilde{w}^{(0)}_{j,r}, \mu_j \big\rangle \le \sqrt{2 \log(8pmKd)}\, \sigma_0 \|\mu_j\|_2.$$

Proof.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' We first give a proof for the second inequality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' From Lemma D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content='3, we know that |Sj signal| = Θ(pm).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' The upper bound can be obtained by taking a union bound over r ∈ Sj signal, j ∈ [K].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' To prove the lower bound, applying Lemma D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content='6, with probability at least 1−δ/K, we have for a given j ∈ [K] max r∈Sj signal � �w(0) j,r , µj � ≥ σ0 ∥µj∥2 � log pm log K/δ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Now, notice that we can control the constant in pm (by controlling the constant in the lower bound of p) such that pm/ log(Kd) ≥ e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' Thus, taking a union bound over j ∈ [K] and setting δ = 1/d yield the result.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' The proof of the first inequality is similar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GdAyT4oBgHgl3EQffPg-/content/2301.00335v1.pdf'} +page_content=' D.' 
2 Supporting Properties for Entire Training Process

This subsection establishes a few properties (summarized in Proposition D.10) that will be used in the analysis of the feature growing phase and the converging phase of gradient descent presented in the next two subsections. Define $T^\star = \eta^{-1}\,\mathrm{poly}(1/\epsilon, \mu, d^{-1}, \sigma_n^{-2}, \sigma_0^{-1}, n, m, d)$. Denote $\alpha = \Theta(\log^{1/q}(T^\star))$ and
$$\beta = 2\max_{i,j,r,k}\left\{\left|\left\langle \widetilde{w}^{(0)}_{j,r}, \mu_k\right\rangle\right|,\ \left|\left\langle \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle\right|\right\}.$$
We need the following bound to hold for our subsequent analysis:
$$4m^{1/q}\max_{j,r,i}\left\{\left\langle \widetilde{w}^{(0)}_{j,r}, \mu_{y_i}\right\rangle,\ \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd},\ \left\langle \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle,\ 3Cn\alpha\sqrt{\frac{\log d}{pd}}\right\} \le 1. \tag{D.1}$$

Remark D.8. To see why Equation (D.1) can hold under Condition 2.2, we convert everything in terms of $d$. First recall from Condition 2.2 that $m, n = \mathrm{poly}(\log d)$ and $\mu = \Theta(\sigma_n\sqrt{d\log d}) = \Theta(1)$. In both mild pruning and over pruning we require $p \ge \Omega(1/\mathrm{poly}\log d)$. Since $\alpha = \Theta(\log^{1/q}(T^\star))$, if we assume $T^\star \le O(\mathrm{poly}(d))$ for a moment (which we are going to justify in the next paragraph), then $\alpha = O(\log^{1/q} d)$. Then if we set $d$ to be large enough, we have $4m^{1/q}Cn\alpha\,\frac{\mu\sqrt{\log d}}{\sigma_n pd} \le \frac{\mathrm{poly}\log d}{\sqrt{d}} \le 1$. Finally, for the quantity $4m^{1/q}\max_{j,r,i}\{\langle \widetilde{w}^{(0)}_{j,r}, \mu_{y_i}\rangle, \langle \widetilde{w}^{(0)}_{j,r}, \xi_i\rangle\}$, by Lemma 3.2, our assumption $K = O(\log d)$ in Condition 2.2, and our choice of $\sigma_0 = \widetilde{\Theta}(m^{-4}n^{-1}\mu^{-1})$ in Theorem 3.1 (or Theorem D.1), we can easily see that this quantity can also be made smaller than $1$.

Now, to justify that $T^\star \le O(\mathrm{poly}(d))$, we only need to verify that all the quantities $T^\star$ depends on are polynomial in $d$. First of all, based on Condition 2.2, $n, m = \mathrm{poly}(\log d)$, and $\mu = \Theta(\sigma_n\sqrt{d\log d}) = \Theta(1)$ further implies $\sigma_n^{-2} = \Theta(d\log^2 d)$. Since Theorem 3.1 only requires $\sigma_0 = \widetilde{\Theta}(m^{-4}n^{-1}\mu^{-1})$, this implies $\sigma_0^{-1} \le O(\mathrm{poly}\log d)$. Hence $\sigma_0^{-1} n = O(\mathrm{poly}\log d)$. Together with our assumption that $\epsilon, \eta \ge \Omega(1/\mathrm{poly}(d))$ (which implies $1/\epsilon, 1/\eta \le O(\mathrm{poly}(d))$), we have justified that all terms involved in $T^\star$ are at most of order $\mathrm{poly}(d)$. Hence $T^\star = \mathrm{poly}(d)$.
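The scaling claims in Remark D.8 can be checked with a short worked computation. The following is a sketch under the parameter regime of Condition 2.2 ($\mu = \Theta(\sigma_n\sqrt{d\log d})$, $n, m, \alpha = \mathrm{poly}(\log d)$, $p \ge \Omega(1/\mathrm{poly}\log d)$), with all constants absorbed into the asymptotic notation:

```latex
\begin{align*}
\frac{\mu\sqrt{\log d}}{\sigma_n p d}
  &= \Theta\!\left(\frac{\sqrt{d\log d}\cdot\sqrt{\log d}}{p d}\right)
   = \Theta\!\left(\frac{\log d}{p\sqrt{d}}\right),
  % substitute \mu = \Theta(\sigma_n \sqrt{d \log d}) and cancel \sigma_n
\\
4m^{1/q} C n \alpha \cdot \frac{\mu\sqrt{\log d}}{\sigma_n p d}
  &= \frac{\operatorname{poly}\log d}{\sqrt{d}},
  % m^{1/q}, n, \alpha, and 1/p are all \operatorname{poly}\log d
\\
3 C n \alpha \sqrt{\frac{\log d}{p d}}
  &= \frac{\operatorname{poly}\log d}{\sqrt{d}}.
  % \sqrt{\log d/(p d)} = \operatorname{poly}\log d / \sqrt{d}
\end{align*}
```

Both quantities therefore fall below $1$ once $d$ is sufficiently large, which is how the two noise-related entries of the maximum in Equation (D.1) are controlled.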
Remark D.9. Here we remark on our assumptions on $\epsilon$ and $\eta$ in Condition 2.2. For the assumption on $\epsilon$: the cross-entropy loss is (1) not strongly convex and (2) achieves its infimum at infinity, so it cannot be minimized exactly; in practice, the cross-entropy loss is minimized to a constant level, say $0.001$. We make this assumption to avoid the pathological case where $\epsilon$ is exponentially small in $d$ (say $\epsilon = 2^{-d}$), which is unrealistic. Thus, for a realistic setting, we assume $\epsilon \ge \Omega(1/\mathrm{poly}(d))$, i.e., $1/\epsilon \le O(\mathrm{poly}(d))$. For $\eta$, the only restriction we have is $\eta = O(1/\mu^2)$ in Theorem 3.1 and Theorem 4.1. However, in practice one does not use a learning rate that is exponentially small, say $\eta = 2^{-d}$. Thus, as with $\epsilon$, we assume $\eta \ge \Omega(1/\mathrm{poly}(d))$, i.e., $1/\eta \le O(\mathrm{poly}\,d)$. We make the above assumptions to simplify the analysis of the magnitude of $F_j(X)$ for $j \ne y$ given a sample $(X, y)$.

Proposition D.10. Under Condition 2.2, during the training time $t < T^\star$, we have
1. $\gamma^{(t)}_{j,r,j}, \zeta^{(t)}_{j,r,i} \le \alpha$;
2. $\omega^{(t)}_{j,r,i} \ge -\beta - 6Cn\alpha\sqrt{\frac{\log d}{pd}}$;
3. $\gamma^{(t)}_{j,r,k} \ge -\beta - \frac{2Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}$.
Notice that the lower bounds have absolute value smaller than the upper bound.

Proof of Proposition D.10. We use induction to prove Proposition D.10.

Induction Hypothesis: Suppose Proposition D.10 holds for all $t < T \le T^\star$. We next show that it also holds for $t = T$ via the following few lemmas.

Lemma D.11.
Under Condition 2.2, for $t < T$, there exists a constant $C$ such that
$$\left\langle \widetilde{w}^{(t)}_{j,r} - \widetilde{w}^{(0)}_{j,r}, \mu_k\right\rangle = \left(\gamma^{(t)}_{j,r,k} \pm \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}\right)\mathbb{I}\big((m_{j,r})_k = 1\big),$$
$$\left\langle \widetilde{w}^{(t)}_{j,r} - \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle = \zeta^{(t)}_{j,r,i} \pm 3Cn\alpha\sqrt{\frac{\log d}{pd}},$$
$$\left\langle \widetilde{w}^{(t)}_{j,r} - \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle = \omega^{(t)}_{j,r,i} \pm 3Cn\alpha\sqrt{\frac{\log d}{pd}}.$$

Proof. From Lemma D.5, there exists a constant $C$ such that with probability at least $1 - 1/d$,
$$\frac{\left|\left\langle \widetilde{\xi}_{j,r,i}, \xi_{i'}\right\rangle\right|}{\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2} \le C\sqrt{\frac{\log d}{pd}},\qquad
\frac{\left|\left\langle \widetilde{\xi}_{j,r,i}, \mu_k\right\rangle\right|}{\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2} \le \frac{C\mu\sqrt{\log d}}{\sigma_n pd},\qquad
\frac{\left|\langle \mu_k, \xi_i\rangle\right|}{\|\mu_k\|_2^2} \le \frac{C\sigma_n\sqrt{\log d}}{\mu}.$$
Using the signal-noise decomposition and assuming $(m_{j,r})_k = 1$, we have
$$\begin{aligned}
\left|\left\langle \widetilde{w}^{(t)}_{j,r} - \widetilde{w}^{(0)}_{j,r}, \mu_k\right\rangle - \gamma^{(t)}_{j,r,k}\right|
&= \left|\sum_{i=1}^{n} \zeta^{(t)}_{j,r,i}\,\big\|\widetilde{\xi}_{j,r,i}\big\|_2^{-2}\left\langle \widetilde{\xi}_{j,r,i}, \mu_k\right\rangle + \sum_{i=1}^{n} \omega^{(t)}_{j,r,i}\,\big\|\widetilde{\xi}_{j,r,i}\big\|_2^{-2}\left\langle \widetilde{\xi}_{j,r,i}, \mu_k\right\rangle\right|\\
&\le \frac{C\mu\sqrt{\log d}}{\sigma_n pd}\sum_{i=1}^{n} \left|\zeta^{(t)}_{j,r,i}\right| + \frac{C\mu\sqrt{\log d}}{\sigma_n pd}\sum_{i=1}^{n} \left|\omega^{(t)}_{j,r,i}\right|
\le \frac{2C\mu\sqrt{\log d}}{\sigma_n pd}\,n\alpha,
\end{aligned}$$
where the second-to-last inequality is by Lemma D.5 and the last inequality is by the induction hypothesis. To prove the second equality, for $j = y_i$,
$$\begin{aligned}
\left|\left\langle \widetilde{w}^{(t)}_{j,r} - \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle - \zeta^{(t)}_{j,r,i}\right|
&= \left|\sum_{k=1}^{K} \gamma^{(t)}_{j,r,k}\,\frac{\langle \mu_k, \xi_i\rangle}{\|\mu_k\|_2^2} + \sum_{i'\ne i} \zeta^{(t)}_{j,r,i'}\,\frac{\left\langle \widetilde{\xi}_{j,r,i'}, \xi_i\right\rangle}{\big\|\widetilde{\xi}_{j,r,i'}\big\|_2^2} + \sum_{i'=1}^{n} \omega^{(t)}_{j,r,i'}\,\frac{\left\langle \widetilde{\xi}_{j,r,i'}, \xi_i\right\rangle}{\big\|\widetilde{\xi}_{j,r,i'}\big\|_2^2}\right|\\
&\le \frac{C\sigma_n\sqrt{\log d}}{\mu}\sum_{k=1}^{K} \left|\gamma^{(t)}_{j,r,k}\right| + C\sqrt{\frac{\log d}{pd}}\sum_{i'\ne i}\left|\zeta^{(t)}_{j,r,i'}\right| + C\sqrt{\frac{\log d}{pd}}\sum_{i'=1}^{n}\left|\omega^{(t)}_{j,r,i'}\right|\\
&\le \frac{C\sigma_n\sqrt{\log d}}{\mu}\,K\alpha + 2Cn\alpha\sqrt{\frac{\log d}{pd}}
\le 3Cn\alpha\sqrt{\frac{\log d}{pd}},
\end{aligned}$$
where the last inequality is by $n \gg K$ and $\mu = \Theta(\sigma_n\sqrt{d\log d})$. The proof for the case of $j \ne y_i$ is similar.

Lemma D.12 (Off-diagonal Correlation Upper Bound). Under Condition 2.2, for $t < T$ and $j \ne y_i$, we have
$$\left\langle \widetilde{w}^{(t)}_{j,r}, \mu_{y_i}\right\rangle \le \left\langle \widetilde{w}^{(0)}_{j,r}, \mu_{y_i}\right\rangle + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd},\qquad
\left\langle \widetilde{w}^{(t)}_{j,r}, \xi_i\right\rangle \le \left\langle \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle + 3Cn\alpha\sqrt{\frac{\log d}{pd}},\qquad
F_j\big(\widetilde{W}^{(t)}_j, x_i\big) \le 1.$$
Proof. If $j \ne y_i$, then $\gamma^{(t)}_{j,r,k} \le 0$ and we have
$$\left\langle \widetilde{w}^{(t)}_{j,r}, \mu_{y_i}\right\rangle \le \left\langle \widetilde{w}^{(0)}_{j,r}, \mu_{y_i}\right\rangle + \left(\gamma^{(t)}_{j,r,y_i} + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}\right)\mathbb{I}\big((m_{j,r})_{y_i} = 1\big) \le \left\langle \widetilde{w}^{(0)}_{j,r}, \mu_{y_i}\right\rangle + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}.$$
Further, we can obtain
$$\left\langle \widetilde{w}^{(t)}_{j,r}, \xi_i\right\rangle \le \left\langle \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle + \omega^{(t)}_{j,r,i} + 3Cn\alpha\sqrt{\frac{\log d}{pd}} \le \left\langle \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle + 3Cn\alpha\sqrt{\frac{\log d}{pd}}.$$
Then, we have the following bound:
$$F_j\big(\widetilde{W}^{(t)}_j, x_i\big) = \sum_{r=1}^{m} \big[\sigma(\langle \widetilde{w}_{j,r}, \mu_{y_i}\rangle) + \sigma(\langle \widetilde{w}_{j,r}, \xi_i\rangle)\big] \le m\,2^{q+1}\max_{j,r,i}\left\{\left\langle \widetilde{w}^{(0)}_{j,r}, \mu_{y_i}\right\rangle, \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}, \left\langle \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle, 3Cn\alpha\sqrt{\frac{\log d}{pd}}\right\}^{q} \le 1,$$
where the last inequality is by Equation (D.1).

Lemma D.13 (Diagonal Correlation Upper Bound). Under Condition 2.2, for $t < T$ and $j = y_i$, we have
$$\left\langle \widetilde{w}^{(t)}_{j,r}, \mu_j\right\rangle \le \left\langle \widetilde{w}^{(0)}_{j,r}, \mu_j\right\rangle + \gamma^{(t)}_{j,r,j} + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd},\qquad
\left\langle \widetilde{w}^{(t)}_{j,r}, \xi_i\right\rangle \le \left\langle \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle + \zeta^{(t)}_{j,r,i} + 3Cn\alpha\sqrt{\frac{\log d}{pd}}.$$
If $\max\{\gamma^{(t)}_{j,r,j}, \zeta^{(t)}_{j,r,i}\} \le m^{-1/q}$, we further have $F_j\big(\widetilde{W}^{(t)}_j, x_i\big) \le O(1)$.

Proof. The two inequalities are immediate consequences of Lemma D.11. If $\max\{\gamma^{(t)}_{j,r,j}, \zeta^{(t)}_{j,r,i}\} \le m^{-1/q}$, we have
$$F_j\big(\widetilde{W}^{(t)}_j, x_i\big) = \sum_{r=1}^{m} \big[\sigma(\langle \widetilde{w}_{j,r}, \mu_j\rangle) + \sigma(\langle \widetilde{w}_{j,r}, \xi_i\rangle)\big] \le 2\cdot 3^{q}\,m\,\max_{j,r,i}\left\{\gamma^{(t)}_{j,r,j}, \zeta^{(t)}_{j,r,i}, \left|\left\langle \widetilde{w}^{(0)}_{j,r}, \mu_j\right\rangle\right|, \left|\left\langle \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle\right|, \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}, 3Cn\alpha\sqrt{\frac{\log d}{pd}}\right\}^{q} \le O(1).$$

Lemma D.14. Under Condition 2.2, for $t \le T$, we have
1. $\omega^{(t)}_{j,r,i} \ge -\beta - 6Cn\alpha\sqrt{\frac{\log d}{pd}}$;
2. $\gamma^{(t)}_{j,r,k} \ge -\beta - \frac{2Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}$.

Proof. When $j = y_i$, we have $\omega^{(t)}_{j,r,i} = 0$. We only need to consider the case of $j \ne y_i$. When $\omega^{(T-1)}_{j,r,i} \le -0.5\beta - 3Cn\alpha\sqrt{\frac{\log d}{pd}}$, by Lemma D.11 we have
$$\left\langle \widetilde{w}^{(T-1)}_{j,r}, \xi_i\right\rangle \le \left\langle \widetilde{w}^{(0)}_{j,r}, \xi_i\right\rangle + \omega^{(T-1)}_{j,r,i} + 3Cn\alpha\sqrt{\frac{\log d}{pd}} \le 0.$$
Thus,
$$\omega^{(T)}_{j,r,i} = \omega^{(T-1)}_{j,r,i} - \frac{\eta}{n}\,\ell'^{(T-1)}_{j,i}\,\sigma'\!\left(\left\langle \widetilde{w}^{(T-1)}_{j,r}, \xi_i\right\rangle\right)\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2\,\mathbb{I}(j \ne y_i) = \omega^{(T-1)}_{j,r,i} \ge -\beta - 6Cn\alpha\sqrt{\frac{\log d}{pd}}.$$
When $\omega^{(T-1)}_{j,r,i} \ge -0.5\beta - 3Cn\alpha\sqrt{\frac{\log d}{pd}}$, we have
$$\omega^{(T)}_{j,r,i} = \omega^{(T-1)}_{j,r,i} - \frac{\eta}{n}\,\ell'^{(T-1)}_{j,i}\,\sigma'\!\left(\left\langle \widetilde{w}^{(T-1)}_{j,r}, \xi_i\right\rangle\right)\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2\,\mathbb{I}(j \ne y_i) \ge -0.5\beta - 3Cn\alpha\sqrt{\frac{\log d}{pd}} - \frac{\eta}{n}\,\sigma'\!\left(0.5\beta + 3Cn\alpha\sqrt{\frac{\log d}{pd}}\right)\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2 \ge -\beta - 6Cn\alpha\sqrt{\frac{\log d}{pd}},$$
where the last inequality is by setting $\eta \le nq^{-1}\left(0.5\beta + 3Cn\alpha\sqrt{\frac{\log d}{pd}}\right)^{2-q}(C_2\sigma_n^2 d)^{-1}$, and $C_2$ is the constant such that $\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2 \le C_2\sigma_n^2 pd$ for all $j, r, i$ in Lemma D.5.

For $\gamma^{(t)}_{j,r,k}$, the proof is similar. Consider the case $(m_{j,r})_k = 1$. When $\gamma^{(t)}_{j,r,k} \le -0.5\beta - \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}$, by Lemma D.11, we have
$$\left\langle \widetilde{w}^{(t)}_{j,r}, \mu_k\right\rangle \le \left\langle \widetilde{w}^{(0)}_{j,r}, \mu_k\right\rangle + \gamma^{(t)}_{j,r,k} + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd} \le 0.$$
Hence,
$$\gamma^{(T)}_{j,r,k} = \gamma^{(T-1)}_{j,r,k} - \frac{\eta}{n}\sum_{i=1}^{n} \ell'^{(T-1)}_{j,i}\,\sigma'\!\left(\left\langle \widetilde{w}^{(T-1)}_{j,r}, \mu_k\right\rangle\right)\mu^2\,\mathbb{I}(y_i = k) = \gamma^{(T-1)}_{j,r,k} \ge -\beta - \frac{2Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}.$$
When $\gamma^{(t)}_{j,r,k} \ge -0.5\beta - \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}$, we have
$$\gamma^{(T)}_{j,r,k} = \gamma^{(T-1)}_{j,r,k} - \frac{\eta}{n}\sum_{i=1}^{n} \ell'^{(T-1)}_{j,i}\,\sigma'\!\left(\left\langle \widetilde{w}^{(T-1)}_{j,r}, \mu_k\right\rangle\right)\mu^2\,\mathbb{I}(y_i = k) \ge -0.5\beta - \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd} - C_2\,\frac{\eta}{K}\,\sigma'\!\left(0.5\beta + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}\right)\mu^2 \ge -\beta - \frac{2Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd},$$
where the first inequality follows from the fact that there are $\Theta(\frac{n}{K})$ samples with $y_i = k$, and the last inequality follows from picking $\eta \le K\left(0.5\beta + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}\right)^{2-q}\mu^{-2}q^{-1}C_2^{-1}$.

Lemma D.15. Under Condition 2.2, for $t \le T$, we have $\gamma^{(t)}_{j,r,j}, \zeta^{(t)}_{j,r,i} \le \alpha$.
Proof. For $y_i \ne j$ or $r \notin S_j^{\mathrm{signal}}$, we have $\gamma^{(t)}_{j,r,j}, \zeta^{(t)}_{j,r,i} = 0 \le \alpha$. If $y_i = j$, then by Lemma D.12 we have
$$\left|\ell'^{(t)}_{j,i}\right| = 1 - \mathrm{logit}_j(F; X) = \frac{\sum_{i \ne j} e^{F_i(X)}}{\sum_{i=1}^{K} e^{F_i(X)}} \le \frac{Ke}{e^{F_j(X)}}. \tag{D.2}$$
Recall that
$$\gamma^{(t+1)}_{j,r,j} = \gamma^{(t)}_{j,r,j} - \mathbb{I}\big(r \in S_j^{\mathrm{signal}}\big)\,\frac{\eta}{n}\sum_{i=1}^{n} \ell'^{(t)}_{j,i}\cdot\sigma'\!\left(\left\langle \widetilde{w}^{(t)}_{j,r}, \mu_{y_i}\right\rangle\right)\left\|\mu_{y_i}\right\|_2^2\,\mathbb{I}(y_i = j),$$
$$\zeta^{(t+1)}_{j,r,i} = \zeta^{(t)}_{j,r,i} - \frac{\eta}{n}\cdot\ell'^{(t)}_{j,i}\cdot\sigma'\!\left(\left\langle \widetilde{w}^{(t)}_{j,r}, \xi_i\right\rangle\right)\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2\,\mathbb{I}(j = y_i).$$
We first bound $\zeta^{(T)}_{j,r,i}$. Let $T_{j,r,i}$ be the last time $t < T$ such that $\zeta^{(t)}_{j,r,i} \le 0.5\alpha$. Then we have
$$\zeta^{(T)}_{j,r,i} = \zeta^{(T_{j,r,i})}_{j,r,i} \underbrace{{}- \frac{\eta}{n}\,\ell'^{(T_{j,r,i})}_{i}\,\sigma'\!\left(\left\langle \widetilde{w}^{(T_{j,r,i})}_{j,r}, \xi_i\right\rangle\right)\mathbb{I}(y_i = j)\,\big\|\widetilde{\xi}_{j,r,i}\big\|_2^2}_{I_1} - \sum_{T_{j,r,i}}$$