| [ |
| { |
| "chunk_id": "b9b5d541-ecd7-4d1b-8c15-b77e75119e44", |
| "text": "Guillermo Valle Pérez Chico Q. Camargo\nUniversity of Oxford University of Oxford\nguillermo.valle@dtc.ox.ac.uk Louis\nUniversity of Oxford\nard.louis@physics.ox.ac.uk2019\nApr ABSTRACT\n21 Deep neural networks (DNNs) generalize remarkably well without explicit regularization even in the strongly over-parametrized regime where classical learning theory would instead predict that they would severely overfit. While many\nproposals for some kind of implicit regularization have been made to rationalise\nthis success, there is no consensus for the fundamental reason why DNNs do not\nstrongly overfit. In this paper, we provide a new explanation. By applying a\nvery general probability-complexity bound recently derived from algorithmic information theory (AIT), we argue that the parameter-function map of many DNNs[stat.ML] should be exponentially biased towards simple functions. We then provide clear\nevidence for this strong simplicity bias in a model DNN for Boolean functions,\nas well as in much larger fully connected and convolutional networks applied to\nCIFAR10 and MNIST. As the target functions in many real problems are expected\nto be highly structured, this intrinsic simplicity bias helps explain why deep networks generalize well on real world problems. This picture also facilitates a novel\nPAC-Bayes approach where the prior is taken over the DNN input-output function\nspace, rather than the more conventional prior over parameter space. If we assume that the training algorithm samples parameters close to uniformly within the\nzero-error region then the PAC-Bayes theorem can be used to guarantee good expected generalization for target functions producing high-likelihood training sets. 
By exploiting recently discovered connections between DNNs and Gaussian processes to estimate the marginal likelihood, we produce relatively tight generalization PAC-Bayes error bounds which correlate well with the true error on realistic Deep learning is a machine learning paradigm based on very large, expressive and composable\nmodels, which most often require similarly large data sets to train. The name comes from the main\ncomponent in the models: deep neural networks (DNNs) with many layers of representation. These\nmodels have been remarkably successful in domains ranging from image recognition and synthesis,\nto natural language processing, and reinforcement learning (Mnih et al. (2015); LeCun et al. (2015);\nRadford et al. (2016); Schmidhuber (2015)). There has been work on understanding the expressive\npower of certain classes of deep networks (Poggio et al. (2017)), their learning dynamics (Advani &\nSaxe (2017); Liao & Poggio (2017)), and generalization properties (Kawaguchi et al. (2017); Poggio\net al. (2018); Neyshabur et al. (2017b)).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 1, |
| "total_chunks": 75, |
| "char_count": 2756, |
| "word_count": 398, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "af2e8486-ff79-4b75-80c3-e13fff28cfec", |
| "text": "However, a full theoretical understanding of many of these\nproperties is still lacking. DNNs are typically overparametrized, with many more parameters than training examples. Classical\nlearning theory suggests that overparametrized models lead to overfitting, and so poorer generalization performance. By contrast, for deep learning there is good evidence that increasing the number\nof parameters leads to improved generalization (see e.g. Neyshabur et al. (2018)). For a typical\nsupervised learning scenario, classical learning theory provides bounds on the generalization error\nϵ(f) for target function f that typically scale as the complexity of the hypothesis class H. Complexity measures, C(H), include simply the number of functions in H, the VC dimension, and the\nRademacher complexity (Shalev-Shwartz & Ben-David (2014)). Since neural networks are highly\nexpressive, typical measures of C(H) will be extremely large, leading to trivial bounds. Many empirical schemes such as dropout (Srivastava et al. (2014)), weight decay (Krogh & Hertz\n(1992)), early stopping (Morgan & Bourlard (1990)), have been proposed as sources of regularization that effectively lower C(H). However, in an important recent paper (Zhang et al. (2017a)),\nit was explicitly demonstrated that these regularization methods are not necessary to obtain good\ngeneralization. Moreover, by randomly labelling images in the well known CIFAR10 data set\n(Krizhevsky & Hinton (2009)), these authors showed that DNNs are sufficiently expressive to memorize a data set in not much more time than it takes to train on uncorrupted CIFAR10 data. By\nshowing that it is relatively easy to train DNNs to find functions f that do not generalize at all, this\nwork sharpened the question as to why DNNs generalize so well when presented with uncorrupted\ntraining data. This study stimulated much recent work, see e.g. (Kawaguchi et al. (2017); Arora et al.\n(2018); Morcos et al. (2018); Neyshabur et al. 
(2017b); Dziugaite & Roy (2017; 2018); Neyshabur\net al. (2017a; 2018)), but there is no consensus as to why DNNs generalize so well. Because DNNs have so many parameters, minimizing the loss function L to find a minimal training set error is a challenging numerical problem. The most popular methods for performing such\noptimization rely on some version of stochastic gradient descent (SGD).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 2, |
| "total_chunks": 75, |
| "char_count": 2355, |
| "word_count": 360, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2d6d3a74-d400-459c-8b3f-967aff3389fc", |
| "text": "In addition, many authors\nhave also argued that SGD may exploit certain features of the loss-function to find solutions that\ngeneralize particularly well (Soudry et al. (2017); Zhang et al. (2017b; 2018)) However, while SGD\nis typically superior to other standard minimization methods in terms of optimization performance,\nthere is no consensus in the field on how much of the remarkable generalization performance of\nDNNs is linked to SGD (Krueger et al. (2017)). In fact DNNs generalize well when other optimization methods are used (from variants of SGD, like Adam (Kingma & Ba (2014)), to gradient-free\nmethods (Such et al. (2018))). For example, in recent papers (Wu et al. (2017); Zhang et al. (2018);\nKeskar et al. (2016) simple gradient descent (GD) was shown to lead to differences in generalization\nwith SGD of at most a few percent. Of course in practical applications such small improvements in\ngeneralization performance can be important. However, the question we want to address in this paper is the broader one of Why do DNNs generalize at all, given that they are so highly expressive\nand overparametrized?. While SGD is important for optimization, and may aid generalization, it\ndoes not appear to be the fundamental source of generalization in DNNs. Another longstanding family of arguments focuses on the local curvature of a stationary point of\nthe loss function, typically quantified in terms of products of eigenvalues of the local Hessian matrix. Flatter stationary points (often simply called minima) are associated with better generalization\nperformance (Hochreiter & Schmidhuber (1997); Hinton & van Camp (1993)). Part of the intuition\nis that flatter minima are associated with simpler functions (Hochreiter & Schmidhuber (1997); Wu\net al. (2017)), which should generalize better. Recent work (Dinh et al. 
(2017)) has pointed out that\nflat minima can be transformed to sharp minima under suitable re-scaling of parameters, so care\nmust be taken in how flatness is defined. In an important recent paper (Wu et al. (2017)) an attack\ndata set was used to vary the generalization performance of a standard DNN from a few percent\nerror to nearly 100% error. This performance correlates closely with a robust measure of the flatness\nof the minima (see also Zhang et al. (2018) for a similar correlation over a much smaller range,\nbut with evidence that SGD leads to slightly flatter minima than simple GD does). The authors also\nconjectured that this large difference in local flatness would be reflected in large differences between\nthe volume Vgood of the basin of attraction for solutions that generalize well and Vbad for solutions\nthat generalize badly. If these volumes differ sufficiently, then this may help explain why SGD and\nother methods such as GD converge to good solutions; these are simply much easier to find than bad\nones. Although this line of argument provides a tantalizing suggestion for why DNNs generalize\nwell in spite of being heavily overparametrized, it still begs the fundamental question of Why do\nsolutions vary so much in flatness or associated properties such as basin volume?", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 3, |
| "total_chunks": 75, |
| "char_count": 3130, |
| "word_count": 504, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a52bbe22-26cb-489b-b66b-a728c3c6acc7", |
| "text": "In this paper we build on recent applications of algorithmic information theory (AIT) (Dingle et al.\n(2018)) to suggest that the large observed differences in flatness observed by (Wu et al. (2017))\ncan be correlated with measures of descriptional complexity. We then apply a connection between\nGaussian processes and DNNs to empirically demonstrate for several different standard architectures\nthat the probability of obtaining a function f in DNNs upon a random choice of parameters varies\nover many orders of magnitude. This bias allows us to apply a classical result from PAC-Bayes\ntheory to help explain why DNNs generalize so well. 1.1 MAIN CONTRIBUTIONS Our main contributions are:", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 4, |
| "total_chunks": 75, |
| "char_count": 688, |
| "word_count": 107, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6cb1be29-e645-4dd3-81e0-a8e282917b47", |
| "text": "• We argue that the parameter-function map provides a fruitful lens through which to analyze\nthe generalization performance of DNNs.\n• We apply recent arguments from AIT to show that, if the parameter-function map is highly\nbiased, then high probability functions will have low descriptional complexity.\n• We show empirically that the parameter-function map of DNNs is extremely biased towards\nsimple functions, and therefore the prior over functions is expected to be extremely biased\ntoo. We claim that this intrinsic bias towards simpler functions is the fundamental source\nof regularization that allows DNNs to generalize.\n• We approximate the prior over functions using Gaussian processes, and present evidence\nthat Gaussian processes reproduce DNN marginal likelihoods remarkably well even for\nfinite width networks.\n• Using the Gaussian process approximation of the prior over functions, we compute PACBayes expected generalization error bounds for a variety of common DNN architectures\nand datasets. We show that this shift from the more commonly used priors over parameters\nto one over functions allows us to obtain relatively tight bounds which follow the behaviour\nof the real generalization error. 2 THE PARAMETER-FUNCTION MAP Definition 1. (Parameter-function map) For a parametrized supervised learning model, let the input\nspace be X and the output space be Y. The space of functions that the model can express is then\nF ⊆Y|X|. If the model has p real valued parameters, taking values within a set Θ ⊆Rp, the\nparameter-function map, M, is defined as: where fθ is the function implemented by the model with choice of parameter vector θ. This map is of interest when using an algorithm that searches in parameter space, such as SGD, as it\ndetermines how the behaviour of the algorithm in parameter space maps to its behavior in function\nspace. The latter is what determines properties of interest such as generalization. 
3 ALGORITHMIC INFORMATION THEORY AND SIMPLICITY-BIAS IN THE\nPARAMETER-FUNCTION MAP One of the main sources of inspiration for our current work comes from a recent paper (Dingle\net al. (2018)) which derives an upper bound for the probability P(x) that an output x ∈O of an\ninput-output map g : I →O obtains upon random sampling1 of inputs I K(x)+b˜ P(x) ≤2−a (1) 1We typically assume uniform sampling of inputs, but other distributions with simple descriptional complexity would also work where ˜K(x) is an approximation to the (uncomputable) Kolmogorov complexity of x, and a and b\nare scalar parameters which depend on the map g but not on x.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 5, |
| "total_chunks": 75, |
| "char_count": 2577, |
| "word_count": 414, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "35f34f29-487d-4fc5-9567-d778eb960a81", |
| "text": "This bound can be derived for inputs\nx and maps g that satisfy a simplicity criterion K(g)+K(n) ≪K(x)+O(1), where n is a measure\nof the size of the inputs space (e.g. if |I| = 2n). A few other conditions also need to be fulfilled in\norder for equation (1) to hold, including that there is redundancy in the map, e.g. that multiple inputs\nmap to the same output x so that P(x) can vary (for more details see Appendix E)). Statistical lower\nbounds can also be derived for P(x) which show that, upon random sampling of inputs, outputs x\nwill typically be close (on a log scale) to the upper bound. If there is a significant linear variation in K(x), then the bound (1) will vary over many orders of\nmagnitude. While these arguments (Dingle et al. (2018)) do not prove that maps are biased, they\ndo say that if there is bias, it will follow Eq. (1). It was further shown empirically that Eq. (1)\npredicts the behavior of maps ranging from the RNA sequence to secondary structure map to a map\nfor option pricing in financial mathematics very well. It is not hard to see that the DNN parameter-function map M is simple in the sense described above,\nand also fulfills the other necessary conditions for simplicity bias (Appendix E). The success of\nEq. (1) for other maps suggests that the parameter-function map of many different kinds of DNN\nshould also exhibit simplicity bias. 4 SIMPLICITY BIAS IN A DNN IMPLEMENTING BOOLEAN FUNCTIONS In order to explore the properties of the parameter-function map, we consider random neural networks. We put a probability distribution over the space of parameters Θ, and are interested in the\ndistribution over functions induced by this distribution via the parameter-function map of a given\nneural network. To estimate the probability of different functions, we take a large sample of parameters and simply count the number of samples producing individual functions (empirical frequency). 
This procedure is easiest for a discrete space of functions, and a small enough function space so\nthat the probabilities of obtaining functions more than once is not negligible. We achieve this by\nusing a neural network with 7 Boolean inputs, two hidden layers of 40 ReLU neurons each, and a\nsingle Boolean output. This means that the input space is X = {0, 1}7 and the space of functions\nis F ⊆{0, 1}27 (we checked that this neural network can in fact express almost all functions in\n{0, 1}27).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 6, |
| "total_chunks": 75, |
| "char_count": 2417, |
| "word_count": 418, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2aeb5d80-9abb-4d0d-b32b-db51bdc03858", |
| "text": "For the distributions on parameter space we used Gaussian distributions or uniform within\na hypercube, with several variances2 The resulting probabilities P(f) can be seen in Figure 1a, where we plot the (normalized) empirical\nfrequencies versus the rank, which exhibits a range of probabilities spanning as many orders of\nmagnitude as the finite sample size allows. Using different distributions over parameters has a very\nsmall effect on the overall curve. We show in Figure 1a that the rank plot of P(f) can be accurately fit with a normalized Zipf law\nP(r) = (ln(NO)r)−1, where r is the rank, and NO = 227 = 2128. It is remarkable that this form fits\nthe rank plot so well without any adjustable parameters, suggesting that Zipf behaviour holds over\nthe whole rank plot, leading to an estimated range of 39 orders of magnitude in P(f). Also, once\nthe DNN is sufficiently expressive so that all NO functions can be found, this arguments suggests\nthat P(f) will not vary much with an increase in parameters. Independently, Eq. (1) also suggests\nthat to first order P(f) is independent of the number of parameters since it follows from K(f), and\nthe map only enters through a and b, which are constrained (Dingle et al. (2018)). The bound of equation (1) predicts that high probability functions should be of low descriptional\ncomplexity. In Figure 10, we show that we indeed find this simplicity-bias phenomenon, with all\nhigh-probability functions having low Lempel-Ziv (LZ) complexity (defined in Appendix F.1). The\nliterature on complexity measures is vast. Here we simply note that there is nothing fundamental\nabout LZ.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 7, |
| "total_chunks": 75, |
| "char_count": 1626, |
| "word_count": 268, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "30e61afb-fc0b-472f-b19e-240a410e1bdd", |
| "text": "Other approximate complexity measures that capture essential aspects of Kolmogorov\ncomplexity also show similar correlations. In Appendix F.4, we demonstrate that probability also\ncorrelates with the size of the smallest Boolean expression expressing that function, as well as with\ntwo complexity measures related to the sensitivity of the output to changes in the input. In Figure 9,\nwe also compare the different measures. We can see that they all correlate, but also show differences\n2for Figures 1b,11 we used a uniform distribution with variance of 1/√n with n the input size to the layer Figure 1: (a) Probability versus rank of each of the functions (ranked by probability) from a sample of 1010 (blue) or 107 (others) parameters. The labels are different parameter distributions. (b)\nProbability versus Lempel-Ziv complexity. Probabilities are estimated from a sample of 108 parameters. Points with a frequency of 10−8 are removed for clarity because these suffer from finite-size\neffects (see Appendix G). The red line is the simplicity bias bound of Eq.(1) (d) Generalization error\nincreases with the Lempel-Ziv complexity of different target functions, when training the network\nwith advSGD – see Appendix A. in their ability to recognize regularity. Which complexity measures best capture real-world regularity\nremains an open question. There are many reasons to believe that real-world functions are simple or have some structure (Lin\net al. (2017); Schmidhuber (1997)) and will therefore have low descriptional complexity. Putting this\ntogether with the above results means we expect a DNN that exhibits simplicity bias to generalize\nwell for real-world datasets and functions. On the other hand, as can also be seen in Figure 1c, our\nnetwork does not generalize well for complex (random) functions. By simple counting arguments,\nthe number of high complexity functions is exponentially larger than the number of low complexity\nfunctions.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 8, |
| "total_chunks": 75, |
| "char_count": 1952, |
| "word_count": 300, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a28d0bf8-de69-4575-91b9-46ae138faf9d", |
| "text": "Nevertheless, such functions may be less common in real-world applications. Although here we have only shown that high-probability functions have low complexity for a relatively small DNN, the generality of the AIT arguments from Dingle et al. (2018) suggests that an\nexponential probability-complexity bias should hold for larger neural networks as well. To test this\nfor larger networks, we restricted the input to a random sample of 1000 images from CIFAR10, and\nthen sampled networks parameters for a 4 layer CNN, to sample labelings of these inputs. We then\ncomputed the probability of these networks, together with the critical sample ratio (a complexity\nmeasured defined in Appendix F.1) of the network on these inputs. As can be seen in Figure 2a they\ncorrelate remarkably well, suggesting that simplicity bias is also found for more realistic DNNs. We next turn to a complementary approach from learning theory to explore the link between bias\nand generalization. 5 PAC-BAYES GENERALIZATION ERROR BOUNDS Following the failure of worst-case generalization bounds for deep learning, algorithm and datadependent bounds have been recently investigated. The main three approaches have been based\non algorithmic stability (Hardt et al. (2016)), margin theory (Neyshabur et al. (2015); Keskar et al.\n(2016); Neyshabur et al. (2017b;a); Bartlett et al. (2017); Golowich et al. (2018); Arora et al. (2018);\nNeyshabur et al. (2018)), and PAC-Bayes theory (Dziugaite & Roy (2017; 2018); Neyshabur et al.\n(2017a;b)). Here we build on the classic PAC-Bayes theorem by McAllester (1999a) which bounds the expected\ngeneralization error when picking functions according to some distribution Q (which can depend\non the training set), given a (training-set independent) prior P over functions. Langford & Seeger\n(2001) tightened the bound into the form shown in Theorem 1. Theorem 1. 
(PAC-Bayes theorem (Langford & Seeger (2001)) For any distribution P on any concept space and any distribution D on a space of instances we have, for 0 < δ ≤1, that with probability at least 1 −δ over the choice of sample S of m instances, all distributions Q over the\nconcept space satisfy the following: ˆϵ(Q) 1 −ˆϵ(Q) ≤KL(Q||P) + ln 2m ˆϵ(Q) ln + (1 −ˆϵ(Q)) ln δ (2)\nϵ(Q) 1 −ϵ(Q) m −1 where ϵ(Q) = Pc Q(c)ϵ(c), and ˆϵ(Q) = Pc Q(c)ˆϵ(c).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 9, |
| "total_chunks": 75, |
| "char_count": 2314, |
| "word_count": 375, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c006a30e-00c5-4799-9c99-a9d3e15927a3", |
| "text": "Here, ϵ(c) is the generalization error\n(probability of the concept c disagreeing with the target concept, when sampling inputs according\nto D), and ˆϵ(c) is the empirical error (fraction of samples in S where c disagrees with the target\nconcept). In the realizable case (where zero training error is achievable for any training sample of size m),\nwe can consider an algorithm that achieves zero training error and samples functions with a weight\nproportional the prior, namely Q(c) = Q∗(c) = P c∈UP (c)P (c), where U is the set of concepts consistent\nwith the training set. This is just the posterior distribution, when the prior is P(c) and likelihood\nequals 1 if c ∈U and 0 otherwise. It also is the Q that minimizes the general PAC-Bayes bound\n2 (McAllester (1999a)). In this case, the KL divergence in the bound simplifies to the marginal\nlikelihood (Bayesian evidence) of the data3, and the right hand side becomes an invertible function\nof the error. This is shown in Corollary 1, which is just a tighter version of the original bound by\nMcAllester (1998) (Theorem 1) for Bayesian binary classifiers.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 10, |
| "total_chunks": 75, |
| "char_count": 1106, |
| "word_count": 188, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a175f662-6303-453c-953e-100a3ac8e170", |
| "text": "In practice, modern DNNs are often\nin the realizable case, as they are typically trained to reach 100% training accuracy. Corollary 1. (Realizable PAC-Bayes theorem (for Bayesian classifier)) Under the same setting as\nin Theorem 1, with the extra assumption that D is realizable, we have: ln P (U)1 + ln 2mδ\n−ln (1 −ϵ(Q∗)) ≤\nm −1 where Q∗(c) = P c∈UP (c)P (c), U is the set of concepts in H consistent with the sample S, and where\nP(U) = Pc∈U P(c) Here we interpret ϵ(Q) as the expected value of the generalization error of the classifier obtained\nafter running a stochastic algorithm (such as SGD), where the expectation is over runs. In order to\napply the PAC-Bayes corollary(which assumes sampling according to Q∗), we make the following\n(informal) assumption: Stochastic gradient descent samples the zero-error region close to uniformly. Given some distribution over parameters ˜P(θ), the distribution over functions P(c) is determined\nby the parameter-function map as P(c) = ˜P(M−1(c)). If the parameter distribution is not too far\nfrom uniform, then P(c) should be heavily biased as in Figure 1a. In Section 7, we will discuss\nand show further evidence for the validity of this assumption on the training algorithm. One way to\nunderstand the bias observed in Fig 1a is that the volumes of regions of parameter space producing\nfunctions vary exponentially. This is likely to have a very significant effect on which functions SGD\nfinds. Thus, even if the parameter distributions used here do not capture the exact behavior of SGD,\nthe bias will probably still play an important role. Our measured large variation in P(f) should correlate with a large variation in the basin volume V\nthat Wu et al. (2017) used to explain why they obtained similar results using GD and SGD for their\nDNNs trained on CIFAR10. 
Because the region of parameter space with zero-error may be unbounded, we will use, unless stated\notherwise, a Gaussian distribution with a sufficiently large variance4.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 11, |
| "total_chunks": 75, |
| "char_count": 1981, |
| "word_count": 330, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e57f9d7d-60b9-454c-8226-5de61a248ca8", |
| "text": "We discuss further the effect\nof the choice of variance in Appendix C. 3This can be obtained, for instance, by noticing that the KL divergence between Q and P equals the evidence\nlower bound (ELBO) plus the log likelihood. As Q∗is the true posterior, the bound becomes an equality, and\nin our case the log likelihood is zero.\n4Note that in high dimensions a Gaussian distribution is very similar to a uniform distribution over a sphere. Figure 2: (a) Probability (using GP approximation) versus critical sample ratio (CSR) of labelings\nof 1000 random CIFAR10 inputs, produced by 250 random samples of parameters. The network\nis a 4 layer CNN. (b) Comparing the empirical frequency of different labelings for a sample of m\nMNIST images, obtained from randomly sampling parameters from a neural neural network, versus\nthat obtained by sampling from the corresponding GP. The network has 2 fully connected hidden\nlayers of 784 ReLU neurons each. σw = σb = 1.0.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 12, |
| "total_chunks": 75, |
| "char_count": 957, |
| "word_count": 161, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9dad311f-51ce-470a-ba77-0df21831d760", |
| "text": "Sample size is 107, and only points obtained in\nboth samples are displayed. These figures also demonstrate significant (simplicity) bias for P(f). In order to use the PAC-Bayes approach, we need a method to calculate P(U) for large systems, a\nproblem we now turn to. 5.1 GAUSSIAN PROCESS APPROXIMATION TO THE PRIOR OVER FUNCTIONS In recent work (Lee et al. (2017); Matthews et al. (2018); Garriga-Alonso et al. (2018); Novak et al.\n(2018)), it was shown that infinitely-wide neural networks (including convolutional and residual\nnetworks) are equivalent to Gaussian processes. This means that if the parameters are distributed\ni.i.d. (for instance with a Gaussian with diagonal covariance), then the (real-valued) outputs of the\nneural network, corresponding to any finite set of inputs, are jointly distributed with a Gaussian\ndistribution. More precisely, assume the i.i.d. distribution over parameters is ˜P with zero mean,\nthen for a set of n inputs (x1, ..., xn), Pθ∼˜P (fθ(x1) = ˜y1, ..., fθ(xn) = ˜yn) ∝exp −1 K−1˜y , (3) 2˜yT where ˜y = (˜y1, ..., ˜yn). The entries of the covariance matrix K are given by the kernel function\nk as Kij = k(xi, xj). The kernel function depends on the choice of architecture, and properties\nof ˜P, in particular the weight variance σ2w/n (where n is the size of the input to the layer) and the\nbias variance σ2b. The kernel for fully connected ReLU networks has a well known analytical form\nknown as the arccosine kernel (Cho & Saul (2009)), while for convolutional and residual networks\nit can be efficiently computed5. The main quantity in the PAC-Bayes theorem, P(U), is precisely the probability of a given set of output labels for the set of instances in the training set, also known as marginal likelihood, a connection\nexplored in recent work (Smith & Le (2017); Germain et al. (2016)). 
For binary classification, these
labels are binary, and are related to the real-valued outputs of the network via a nonlinear function
such as a step function, which we denote σ. Then, for a training set U = {(x1, y1), ..., (xm, ym)},
P(U) = Pθ∼˜P(σ(fθ(x1)) = y1, ..., σ(fθ(xm)) = ym).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 13, |
| "total_chunks": 75, |
| "char_count": 2117, |
| "word_count": 357, |
| "chunking_strategy": "semantic" |
| }, |
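The correspondence above lends itself to a small numerical sketch. Assuming the order-1 arc-cosine kernel of Cho & Saul (2009) for a one-hidden-layer ReLU network with unit weight variance and no bias term (the paper's kernels additionally carry the σ²w, σ²b dependence and a depth recursion), the marginal likelihood P(U) of a tiny training set can be estimated by thresholding draws from the GP prior, per the definition of P(U) above; function names here are illustrative.

```python
import numpy as np

def arccos_kernel(X):
    """Order-1 arc-cosine kernel of Cho & Saul (2009): the GP covariance
    induced by an infinitely wide one-hidden-layer ReLU network with
    i.i.d. standard Gaussian weights (biases omitted in this sketch)."""
    norms = np.linalg.norm(X, axis=1)
    cos_t = np.clip(X @ X.T / np.outer(norms, norms), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return np.outer(norms, norms) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

def estimate_PU(X, y, n_samples=20000, jitter=1e-8, seed=0):
    """Monte Carlo estimate of the marginal likelihood P(U): the probability
    that a GP prior draw, thresholded at 0, reproduces the binary labels y."""
    rng = np.random.default_rng(seed)
    K = arccos_kernel(X) + jitter * np.eye(len(X))
    F = rng.multivariate_normal(np.zeros(len(X)), K, size=n_samples)
    return np.mean(np.all((F > 0) == y, axis=1))
```

For realistic training-set sizes m, P(U) is far too small to estimate by direct sampling, which is why an analytic approximation to the GP marginal likelihood is needed.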
| { |
| "chunk_id": "a73d64c9-9419-4794-8b73-da6525150857", |
| "text": "We will discuss\nhow to circumvent this. But first, we explore the more fundamental issue of neural networks not 5We use the code from Garriga-Alonso et al. (2018) to compute the kernel for convolutional networks being infinitely-wide in practice6.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 15, |
| "total_chunks": 75, |
| "char_count": 247, |
| "word_count": 39, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9ae46bca-5c31-43bd-afea-c5ca6ded1bca", |
| "text": "To test whether the equation above provides a good approximation\nfor P(U) for common neural network architectures, we sampled y (labelings for a particular set\nof inputs) from a fully connected neural network, and the corresponding Gaussian process, and\ncompared the empirical frequencies of each function. We can obtain good estimates of P(U) in this\ndirect way, for very small sets of inputs (here we use m = 10 random MNIST images). The results\nare plotted in Figure 2, showing that the agreement between the neural network probabilities and the\nGaussian probabilities is extremely good, even this far from the infinite width limit (and for input\nsets of this size). In order to calculate P(U) using the GPs, we use the expectation-propagation (EP) approximation,\nimplemented in GPy (since 2012), which is more accurate than the Laplacian approximation (see\nRasmussen (2004) for a description and comparison of the algorithms). To see how good these\napproximations are, we compared them with the empirical frequencies obtained by directly sampling\nthe neural network. The results are in Figure 5 in the Appendix B. We find that the both the EP and\nLaplacian approximations correlate with the the empirical neural network likelihoods. In larger sets\nof inputs (1000), we also found that the relative difference between the log-likelihoods given by the\ntwo approximations was less than about 10%. 6 EXPERIMENTAL RESULTS FOR PAC BAYES We tested the expected generalization error bounds described in the previous section in a variety\nof networks trained on binarized7 versions of MNIST (LeCun et al. (1998)), fashion-MNIST (Xiao\net al. (2017)), and CIFAR10 (Krizhevsky & Hinton (2009)). Zhang et al. (2017a) found that the\ngeneralization error increased continuously as the labels in CIFAR10 where randomized with an\nincreasing probability. 
In Figure 3, we replicate these results for three datasets, and show that
our bounds correctly predict the increase in generalization error. Furthermore, the bounds show
that, at low corruption, MNIST and fashion-MNIST are similarly hard (although fashion-MNIST
is slightly harder), and CIFAR10 is considerably harder. This mirrors what is obtained from the
true generalization errors. Also note that the bounds for MNIST and fashion-MNIST with little
corruption are significantly below 0.5 (random guessing).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 16, |
| "total_chunks": 75, |
| "char_count": 2349, |
| "word_count": 363, |
| "chunking_strategy": "semantic" |
| }, |
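The bound being tested can be made concrete in a few lines. This sketch assumes the realizable PAC-Bayes bound in Langford-Seeger form, ln(1/(1−ε)) ≤ (ln(1/P(U)) + ln(2m/δ))/(m−1), which matches the setup described here (zero training error, prior over functions); treat the exact constants as read from that form rather than as definitive, since only log P(U) matters at leading order.

```python
import math

def pac_bayes_error_bound(log_PU, m, delta=0.05):
    """Realizable PAC-Bayes bound (Langford-Seeger form): with zero training
    error on m examples,
        ln 1/(1 - eps) <= (ln 1/P(U) + ln(2m/delta)) / (m - 1),
    where log_PU is the natural log of the marginal likelihood P(U).
    Solving for eps gives the expected-error bound returned here."""
    rhs = (-log_PU + math.log(2 * m / delta)) / (m - 1)
    return 1.0 - math.exp(-rhs)

# A marginal likelihood spanning many orders of magnitude translates directly
# into the bound: lower log P(U) (more complex labelings) means a weaker guarantee.
bound_simple = pac_bayes_error_bound(log_PU=-1000.0, m=10000)
bound_complex = pac_bayes_error_bound(log_PU=-6000.0, m=10000)
```

Note that the bound is driven almost entirely by −log P(U)/m, so a strongly biased prior (large P(U) on structured labelings) is exactly what makes the bound non-vacuous.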
| { |
| "chunk_id": "1c12c130-dccf-4e9c-9924-4630316dc428", |
| "text": "For experimental details see Appendix A. In the inset of Figure 3, we show that P(U) decreases over many orders of magnitude with increasing\nlabel corruption, which is a proxy for complexity. Note that the functions which generalize (at low\ncorruption) have much higher probability than those that merely memorize (at high corruption) as\nin (Zhang et al. (2017a)). Thus if functions that generalize are possible, these are much more likely\nto be found than those that memorize. In Table 1, we list the mean generalisation error and the bounds for the three datasets (at 0 label\ncorruption), demonstrating that the PAC-Bayes bound closely follows the same trends. MNIST fashion-MNIST CIFAR\nNetwork Error Bound Error Bound Error Bound\nCNN 0.023 0.134 0.071 0.175 0.320 0.485\nFC 0.031 0.169 0.070 0.188 0.341 0.518 Table 1: Mean generalization errors and PAC-Bayes bounds for the convolutional and fully connected network for 0 label corruption, for a sample of 10000 from different datasets. The networks\nare the same as in Figure 3 (4 layer CNN and 1 layer FC)", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 17, |
| "total_chunks": 75, |
| "char_count": 1059, |
| "word_count": 175, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "20d757ae-3e9c-4e3b-9ce4-deee3166f695", |
| "text": "7 SGD VERSUS BAYESIAN SAMPLING In this section we test the assumption that SGD samples parameters close to uniformly within the\nzero-error region. In the literature, Bayesian sampling of the parameters of a neural network has\nbeen argued to produce generalization performance similar to the same network trained with SGD. 6Note that the Gaussian approximation has been previously used to study finite neural networks as a meanfield approximation (Schoenholz et al. (2017a)).\n7We label an image as 0 if it belongs to one of the first five classes and as 1 otherwise (a) for a 4 hidden layers convolutional network (b) for a 1 hidden layer fully connected network Figure 3: Mean generalization error and corresponding PAC-Bayes bound versus percentage of\nlabel corruption, for three datasets and a training set of size 10000. Training set error is 0 in all\nexperiments. Note that the bounds follow the same trends as the true generalization errors. The\nempirical errors are averaged over 8 initializations. The Gaussian process parameters were σw =\n1.0, σb = 1.0 for the CNN and σw = 10.0, σb = 10.0 for the FC. Insets show the marginal\nlikelihood of the data as computed by the Gaussian process approximation (in natural log scale),\nversus the label corruption. The evidence is based on comparing SGD-trained network with a Gaussian process approximation\n(Lee et al. (2017)), as well as showing that this approximation is similar to Bayesian sampling via\nMCMC methods (Matthews et al. (2018)). We performed experiments showing direct evidence that the probability with which two variants of\nSGD find functions is close to the probability of obtaining the function by uniform sampling of parameters in the zero-error region. Due to computational limitations, we consider the neural network\nfrom Section 4. 
We are interested in the probability of finding individual functions consistent with
the training set, by two methods: (1) training the neural network with variants of SGD⁸ (in particular
advSGD and Adam, described in Appendix A); and (2) Bayesian inference using the Gaussian process
corresponding to the neural network architecture.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 18, |
| "total_chunks": 75, |
| "char_count": 2131, |
| "word_count": 340, |
| "chunking_strategy": "semantic" |
| }, |
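The comparison described here boils down to estimating, for each function, the frequency with which each method finds it, and then correlating the two sets of probabilities on the functions found by both methods (as in Figure 4). A minimal sketch with hypothetical inputs (lists of function representations, e.g. output bitstrings, one per run):

```python
import math
from collections import Counter

def function_probabilities(samples):
    """Empirical probability of each function, from a list of hashable
    function representations (e.g. a Boolean function's output bitstring)."""
    counts = Counter(samples)
    total = len(samples)
    return {f: c / total for f, c in counts.items()}

def log_prob_correlation(p_a, p_b):
    """Pearson correlation between log-probabilities of the functions found
    by both methods (Figure 4 likewise only plots functions in both samples)."""
    common = list(p_a.keys() & p_b.keys())
    xs = [math.log(p_a[f]) for f in common]
    ys = [math.log(p_b[f]) for f in common]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```

Dropping functions seen only once before correlating (as done for advSGD in Figure 4) is a simple guard against finite-sample noise in the rarest functions.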
| { |
| "chunk_id": "7358ecba-95e1-469d-9195-afe0420c9527", |
| "text": "This approximates the behavior of sampling parameters close to uniformly in the zero-error region (i.i.d. Gaussian prior to be precise). We estimated the probability of finding individual functions, averaged over training sets, for these\ntwo methods (see Appendix D for the details), when learning a target Boolean function of LZ\ncomplexity84.0. In Figures 4 and 8, we plot this average probability, for an SGD-like algorithm, and\nfor the approximate Bayesian inference. We find that there is close agreement (specially taking into\naccount that the EP approximation we use appears to overestimate probabilities, see Appendix B),\nalthough with some scatter (the source of which is hard to discern, given that the SGD probabilities\nhave sampling error). These results are promising evidence that SGD may behave similarly to uniform sampling of parameters (within zero-error region).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 19, |
| "total_chunks": 75, |
| "char_count": 880, |
| "word_count": 133, |
| "chunking_strategy": "semantic" |
| }, |
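The Lempel-Ziv complexity used to describe the target function can be computed with the classic 1976 phrase-counting parse. The sketch below gives the plain LZ76 phrase count of a Boolean function's output bitstring; the paper's complexity measure is a calibrated variant built on such a count, so the normalization is omitted here and the function name is ours.

```python
def lz76_phrase_count(s):
    """Count phrases in the Lempel-Ziv (1976) exhaustive parsing of string s.
    Each new phrase is the shortest block that has not yet occurred in the
    text scanned so far (overlap with the current phrase allowed, as in the
    Kaspar & Schuster formulation)."""
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the phrase while it still occurs earlier in the string
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count
```

On the 128-bit output string of a 7-input Boolean function, constant or periodic patterns yield very low phrase counts, while incompressible strings approach the maximum, which is what makes the count usable as a simplicity score.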
| { |
| "chunk_id": "9d854dda-f99f-4b33-8209-87875601d32b", |
| "text": "However, this is still a question that needs much further work. We discuss in Appendix C some potential evidence for SGD sometimes diverging from Bayesian\nparameter sampling. 8 CONCLUSION AND FUTURE WORK In this paper, we present an argument that we believe offers a first-order explanation of generalization\nin highly overparametrized DNNs. First, PAC-Bayes shows how priors which are sufficiently biased\ntowards the true distribution can result in good generalization for highly expressive models, e.g. even\nif there are many more parameters than data points. Second, the huge bias towards simple functions 8These methods were chosen because other methods we tried, including plain SGD, didn't converge to zero\nerror in this task (a) advSGD. ρ = 0.99,p = 3e−31 (b) Adam. ρ = 0.99,p = 8e−42 Figure 4: Average probability of finding a function for a variant of SGD, versus average probability\nof finding a function when using the Gaussian process approximation. This is done for a randomly\nchosen, but fixed, target Boolean function of Lempel-Ziv complexity 84.0. See Appendix D for\ndetails. The Gaussian process parameters are σw = 10.0, and σb = 10.0. For advSGD, we have\nremoved functions which only appeared once in the whole sample, to avoid finite-size effects.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 20, |
| "total_chunks": 75, |
| "char_count": 1267, |
| "word_count": 203, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4852a7a2-f351-4749-a6ef-5e60372813db", |
| "text": "In the\ncaptions, ρ refers to the 2-tailed Pearson correlation coefficient, and p to its corresponding p value. in the parameter-function map strongly suggests that neural networks have a similarly biased prior. The number of parameters in a fully expressive DNN should not significantly affect the bias. Third,\nsince real-world problems tend to be far from random, using these same complexity measures, we\nexpect the prior to be biased towards the right class of solutions for real-world datasets and problems. It should be noted that our approach is not yet able to explain the effects that different tricks used in\npractice have on generalization. However, most of these (important) improvements tend to be of the\norder of a few percent in accuracy. The aim of this paper is to explain the bulk of the generalization,\nwhich classical learning theory would predict to be poor in this highly overparametrized regime. It remains an open question whether our approach can be extended to explain the effect of some of\nthe tricks used in practice. Our approach should also apply to methods other than neural networks\n(Belkin et al. (2018)) as long as they also have simple parameter-function maps. Interesting future work may include testing simplicity bias for other DNN architectures. Extending\nto multi-class classification, or regression would also be desirable. To stimulate work in improving our PAC Bayes bounds, we summarize here the main potential\nsources of their inaccuracy. The probability that the training algorithm (like SGD) finds a particular function in the\nzero-error region can be approximated by the probability that the function obtains upon\ni.i.d. sampling of parameters. Gaussian processes model neural networks with i.i.d.-sampled parameters well even for\nfinite widths. Expectation-propagation gives a good approximation of the Gaussian process marginal\nlikelihood.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 21, |
| "total_chunks": 75, |
| "char_count": 1887, |
| "word_count": 292, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6acc54e5-6caf-4b0b-8eba-6f8977210d48", |
| "text": "PAC-Bayes offers tight bounds given the correct marginal likelihood P(U). We have shown evidence that number 2 is a very good approximation, and that numbers 1 and 3 are\nreasonably good. In addition, the fact that our bounds are able to correctly predict the behavior of the\ntrue error, offers evidence for the set of approximations as a whole, although further work in testing\ntheir validity is needed, specially that of number 1. Nevertheless, we think that the good agreement\nof our bounds constitutes good evidence for the approach we describe in the paper as well as for the\nclaim that bias in the parameter-function map is the main reason for generalization. Further work in\nunderstanding these assumptions can sharpen the results obtained here significantly. Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in neural Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 22, |
| "total_chunks": 75, |
| "char_count": 915, |
| "word_count": 148, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "74bccf6b-bd71-4b4f-8a55-bb5359182a85", |
| "text": "Stronger generalization bounds for\ndeep nets via a compression approach. In Proceedings of the 35th International Conference\non Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 254–263. PMLR, 10–15 Jul 2018. URL http://proceedings.mlr.press/v80/arora18b.\nhtml. Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for\nneural networks. In Advances in Neural Information Processing Systems, pp. 6240–6249, 2017. Eric B Baum and David Haussler. What size net gives valid generalization? In Advances in neural\ninformation processing systems, pp. 81–90, 1989. Mikhail Belkin, Siyuan Ma, and Soumik Mandal.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 23, |
| "total_chunks": 75, |
| "char_count": 668, |
| "word_count": 92, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "34ce99d4-f031-4283-9aba-8629cfdad274", |
| "text": "To understand deep learning we need to understand kernel learning. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th\nInternational Conference on Machine Learning, volume 80 of Proceedings of Machine Learning\nResearch, pp. 541–549, StockholmsmÃd'ssan, Stockholm Sweden, 10–15 Jul 2018. URL\nhttp://proceedings.mlr.press/v80/belkin18a.html. Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K Warmuth. Information processing letters, 24(6):377–380, 1987. Youngmin Cho and Lawrence K Saul. Kernel methods for deep learning. In Advances in neural\ninformation processing systems, pp. 342–350, 2009. Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012. Kamaludin Dingle, Chico Q Camargo, and Ard A Louis. Input–output maps are strongly biased\ntowards simple outputs. Nature communications, 9(1):761, 2018. Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 24, |
| "total_chunks": 75, |
| "char_count": 927, |
| "word_count": 126, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3649b99d-2700-426f-b12b-4171ac4c436d", |
| "text": "Sharp minima can generalize for\ndeep nets. In Proceedings of the 34th International Conference on Machine Learning, volume 70\nof Proceedings of Machine Learning Research, pp. 1019–1028. PMLR, 06–11 Aug 2017. URL\nhttp://proceedings.mlr.press/v70/dinh17b.html. Felix Draxler, Kambis Veschgini, Manfred Salmhofer, and Fred Hamprecht.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 25, |
| "total_chunks": 75, |
| "char_count": 330, |
| "word_count": 42, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4f82daab-f246-4468-b0bc-574006af94ca", |
| "text": "Essentially no barriers\nin neural network energy landscape. In Proceedings of the 35th International Conference on\nMachine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1309–1318. PMLR, 10–15 Jul 2018. URL http://proceedings.mlr.press/v80/draxler18a.\nhtml. Gintare Karolina Dziugaite and Daniel M.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 26, |
| "total_chunks": 75, |
| "char_count": 320, |
| "word_count": 41, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b04ec395-c14d-4249-a20b-e78e1365e51d", |
| "text": "Computing nonvacuous generalization bounds for\ndeep (stochastic) neural networks with many more parameters than training data. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, UAI 2017, Sydney,\nAustralia, August 11-15, 2017, 2017. URL http://auai.org/uai2017/proceedings/\npapers/173.pdf. Gintare Karolina Dziugaite and Daniel M Roy. Data-dependent pac-bayes priors via differential E Estevez-Rams, R Lora Serrano, B Aragón Fernández, and I Brito Reyes.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 27, |
| "total_chunks": 75, |
| "char_count": 492, |
| "word_count": 62, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "65168c0b-3a40-480d-9659-a001366be12f", |
| "text": "On the non-randomness\nof maximum lempel ziv complexity sequences of finite size. Chaos: An Interdisciplinary Journal\nof Nonlinear Science, 23(2):023118, 2013. Generalization ability of boolean functions implemented in feedforward neural\nnetworks. Neurocomputing, 70(1):351–361, 2006. Leonardo Franco and Martin Anthony.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 28, |
| "total_chunks": 75, |
| "char_count": 319, |
| "word_count": 39, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c883bb4e-5680-450d-a149-bc73b10b940f", |
| "text": "On a generalization complexity measure for boolean functions. In Neural Networks, 2004. Proceedings. 2004 IEEE International Joint Conference on,\nvolume 2, pp. 973–978. Boolean functions with low average sensitivity depend on few coordinates. Combinatorica, 18(1):27–35, 1998. Adrià Garriga-Alonso, Laurence Aitchison, and Carl Edward Rasmussen. Deep convolutional\nhttps://arxiv.org/abs/1808.05587. Pascal Germain, Francis Bach, Alexandre Lacoste, and Simon Lacoste-Julien.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 29, |
| "total_chunks": 75, |
| "char_count": 473, |
| "word_count": 56, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "642e6573-d012-41d3-ac68-0017b4d0f9f3", |
| "text": "Pac-bayesian theory\nmeets bayesian inference. In Advances in Neural Information Processing Systems, pp. 1884–\n1892, 2016. Raja Giryes, Guillermo Sapiro, and Alexander M Bronstein. Deep neural networks with random\ngaussian weights: a universal classification strategy? Signal Processing, 64(13):\n3444–3457, 2016. Noah Golowich, Alexander Rakhlin, and Ohad Shamir.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 30, |
| "total_chunks": 75, |
| "char_count": 362, |
| "word_count": 47, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "87b7e628-68de-4814-913d-b3aebe80b22b", |
| "text": "Size-independent sample complexity of\nneural networks. In Proceedings of the 31st Conference On Learning Theory, volume 75 of\nProceedings of Machine Learning Research, pp. 297–299. PMLR, 06–09 Jul 2018. URL http:\n//proceedings.mlr.press/v75/golowich18a.html. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial GPy: A gaussian process framework in python. http://github.com/SheffieldML/\nGPy, since 2012. Sam F Greenbury, Steffen Schaper, Sebastian E Ahnert, and Ard A Louis. Genetic correlations\ngreatly increase mutational robustness and can both reduce and enhance evolvability. PLoS computational biology, 12(3):e1004773, 2016. Moritz Hardt, Ben Recht, and Yoram Singer.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 31, |
| "total_chunks": 75, |
| "char_count": 717, |
| "word_count": 92, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4c220aa3-393f-4402-8689-8ad6c8b8cb80", |
| "text": "Train faster, generalize better: Stability of stochastic\ngradient descent. In Proceedings of The 33rd International Conference on Machine Learning,\nvolume 48 of Proceedings of Machine Learning Research, pp. 1225–1234. PMLR, 20–22 Jun\n2016. URL http://proceedings.mlr.press/v48/hardt16.html. Nick Harvey, Christopher Liaw, and Abbas Mehrabian.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 32, |
| "total_chunks": 75, |
| "char_count": 342, |
| "word_count": 42, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7528c522-0f5b-4e6e-abdf-55c28690a6d6", |
| "text": "Nearly-tight VC-dimension bounds for\npiecewise linear neural networks. In Proceedings of the 2017 Conference on Learning Theory,\nvolume 65 of Proceedings of Machine Learning Research, pp. 1064–1068. PMLR, 07–10 Jul\n2017. URL http://proceedings.mlr.press/v65/harvey17a.html. Hinton and Drew van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational\nLearning Theory, COLT '93, pp. 5–13, New York, NY, USA, 1993. Sepp Hochreiter and Jürgen Schmidhuber. Neural Computation, 9(1):1–42, 1997. Kenji Kawaguchi, Leslie Pack Kaelbling, and Yoshua Bengio. Generalization in deep learning. CoRR, abs/1710.05468, 2017. URL http://arxiv.org/abs/1710.05468. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 33, |
| "total_chunks": 75, |
| "char_count": 930, |
| "word_count": 122, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "97fdfa0d-b681-4447-946f-e7e721332626", |
| "text": "CoRR,\nabs/1609.04836, 2016. URL http://arxiv.org/abs/1609.04836. Diederik P Kingma and Jimmy Lei Ba. Adam: Amethod for stochastic optimization. Representations, 2014. Alex Krizhevsky and Geoffrey Hinton.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 34, |
| "total_chunks": 75, |
| "char_count": 203, |
| "word_count": 24, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d8ec6b47-d72b-453a-8ace-b62001f80887", |
| "text": "Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. Anders Krogh and John A Hertz. A simple weight decay can improve generalization. In Advances\nin neural information processing systems, pp. 950–957, 1992. David Krueger, Nicolas Ballas, Stanislaw Jastrzebski, Devansh Arpit, Maxinder S. Kanwal, Tegan\nMaharaj, Emmanuel Bengio, Asja Fischer, Aaron Courville, Simon Lacoste-Julien, and Yoshua\nBengio.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 35, |
| "total_chunks": 75, |
| "char_count": 435, |
| "word_count": 59, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "80377620-9988-415c-a594-7808467deafc", |
| "text": "A closer look at memorization in deep networks. Proceedings of the 34th International Conference on Machine Learning (ICML'17), 2017. URL https://arxiv.org/abs/\n1706.05394. John Langford and Matthias Seeger. Bounds for averaging classifiers. 2001. Tor Lattimore and Marcus Hutter.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 36, |
| "total_chunks": 75, |
| "char_count": 280, |
| "word_count": 37, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b0e6d7b0-bde9-4661-9a18-34b2e07ff6de", |
| "text": "No free lunch versus Occam's razor in supervised learning. In\nAlgorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, pp. 223–\n235. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to\ndocument recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Nature, 521(7553):436, 2015. Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha\n2017. Abraham Lempel and Jacob Ziv. On the complexity of finite sequences. IEEE Transactions on\ninformation theory, 22(1):75–81, 1976. Qianli Liao and Tomaso Poggio. Theory of deep learning ii: Landscape of the empirical risk in deep Henry W Lin, Max Tegmark, and David Rolnick.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 37, |
| "total_chunks": 75, |
| "char_count": 777, |
| "word_count": 110, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "172d96fe-9c01-4957-9b6e-0390cc082335", |
| "text": "Journal of Statistical Physics, 168(6):1223–1247, 2017. Alexander G de G Matthews, Mark Rowland, Jiri Hron, Richard E Turner, and Zoubin Ghahramani.\n2018. Some pac-bayesian theorems. In Proceedings of the eleventh annual conference on Computational learning theory, pp. 230–234. Pac-bayesian model averaging. In COLT, volume 99, pp. 164–170.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 39, |
| "total_chunks": 75, |
| "char_count": 341, |
| "word_count": 47, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "879e54f0-4c39-4da2-a4b2-73008e3308f4", |
| "text": "Pac-bayesian model averaging. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, COLT '99, pp. 164–170, New York, NY, USA, 1999b. ISBN 1-58113-167-4. doi: 10.1145/307400.307435. URL http://doi.acm.org/\n10.1145/307400.307435. LI Ming and Paul MB Vitányi.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 40, |
| "total_chunks": 75, |
| "char_count": 284, |
| "word_count": 36, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "23a38dde-fb3f-4f7e-9908-1674d672505f", |
| "text": "Kolmogorov complexity and its applications. Algorithms and Complexity, 1:187, 2014. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level\ncontrol through deep reinforcement learning. Nature, 518(7540):529, 2015. Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 41, |
| "total_chunks": 75, |
| "char_count": 409, |
| "word_count": 54, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6c171eca-fcb1-47c1-960c-5f923aac014d", |
| "text": "On the number of linear\nregions of deep neural networks. In Advances in neural information processing systems, pp.\n2924–2932, 2014. Ari S Morcos, David GT Barrett, Neil C Rabinowitz, and Matthew Botvinick. Nelson Morgan and Hervé Bourlard. Generalization and parameter estimation in feedforward nets:\nSome experiments. In Advances in neural information processing systems, pp. 630–637, 1990. Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural\nnetworks. In Conference on Learning Theory, pp. 1376–1401, 2015. Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nathan Srebro. A pacbayesian approach to spectrally-normalized margin bounds for neural networks. arXiv preprint Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pp. 5949–5958,\n2017b. Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 42, |
| "total_chunks": 75, |
| "char_count": 998, |
| "word_count": 133, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "30855c17-1e83-44a1-b228-1230dc9b771e", |
| "text": "Towards understanding the role of over-parametrization in generalization of neural networks. arXiv Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Daniel A Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Bayesian convolutional neural networks with many channels are Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. Why\nand when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing, 14(5):503–519, 2017. Tomaso Poggio, Kenji Kawaguchi, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix,\nJack Hidary, and Hrushikesh Mhaskar. Theory of deep learning iii: the non-overfitting puzzle. Technical report, CBMM memo 073, 2018. Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances in neural information processing systems, pp. 3360–3368, 2016. Alec Radford, Luke Metz, and Soumith Chintala.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 43, |
| "total_chunks": 75, |
| "char_count": 1035, |
| "word_count": 137, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d3e075d4-48a5-48c3-894d-5f5f6c2cc94d", |
| "text": "Unsupervised representation learning with deep\nconvolutional generative adversarial networks. In Proceedings of the International Conference on\nLearning Representations (ICLR), 2016. URL https://arxiv.org/abs/1511.06434. Carl Edward Rasmussen. Gaussian processes in machine learning. In Advanced lectures on machine\nlearning, pp. 63–71. Modeling by shortest data description. Automatica, 14(5):465–471, 1978. Levent Sagun, Utku Evci, V Ugur Guney, Yann Dauphin, and Leon Bottou.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 44, |
| "total_chunks": 75, |
| "char_count": 478, |
| "word_count": 58, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7bc3a446-0f1f-4b66-9600-498e68dab816", |
| "text": "Empirical analysis of Discovering neural nets with low kolmogorov complexity and high generalization capability. Neural Networks, 10(5):857–873, 1997. Deep learning in neural networks: An overview. Neural networks, 61:85–117,\n2015. Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 45, |
| "total_chunks": 75, |
| "char_count": 309, |
| "word_count": 39, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7be1a049-d7ca-4750-a76c-96cfa3e0c096", |
| "text": "Deep information propagation. In Proceedings of the International Conference on Learning Representations\n(ICLR), 2017a. URL https://arxiv.org/abs/1611.01232.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 46, |
| "total_chunks": 75, |
| "char_count": 157, |
| "word_count": 16, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "da98f3e3-4d6b-4655-ab3d-946974b6d218", |
| "text": "Samuel S Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. A correspondence between Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014. A bayesian perspective on generalization and stochastic gradient\ndescent. CoRR, abs/1710.06451, 2017. URL http://arxiv.org/abs/1710.06451. Daniel Soudry, Elad Hoffer, and Nathan Srebro. The implicit bias of gradient descent on separable Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine\nLearning Research, 15(1):1929–1958, 2014. Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O Stanley, and\nJeff Clune.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 47, |
| "total_chunks": 75, |
| "char_count": 793, |
| "word_count": 100, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0d4811a7-b34c-408a-9336-8f261431959a", |
| "text": "Deep neuroevolution: Genetic algorithms are a competitive alternative for training\ndeep neural networks for reinforcement learning. NIPS Deep Reinforcement Learning Workshop,\n2018. URL https://arxiv.org/abs/1712.06567. Shizhao Sun, Wei Chen, Liwei Wang, Xiaoguang Liu, and Tie-Yan Liu. On the depth of deep neural\nnetworks: A theoretical view.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 48, |
| "total_chunks": 75, |
| "char_count": 343, |
| "word_count": 45, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d9d59f59-bacc-4c81-b917-f70fb3d0560b", |
| "text": "In AAAI, pp. 2066–2072, 2016. The nature of statistical learning theory. Springer science & business media,\n2013. David H Wolpert and R Waters. The relationship between pac, the statistical physics framework, the\nbayesian framework, and the vc framework. Lei Wu, Zhanxing Zhu, et al. Towards understanding generalization of deep learning: Perspective Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 49, |
| "total_chunks": 75, |
| "char_count": 478, |
| "word_count": 68, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f288bd67-8fed-41f6-8168-b85c5da824d1", |
| "text": "CoRR, abs/1708.07747, 2017. URL http://arxiv.org/\nabs/1708.07747. Huan Xu and Shie Mannor. Robustness and generalization. Machine learning, 86(3):391–423,\n2012. Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315, 2007. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 50, |
| "total_chunks": 75, |
| "char_count": 384, |
| "word_count": 47, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "434e768a-9912-4788-8b3b-09ead968a8c5", |
| "text": "Understanding\ndeep learning requires rethinking generalization. In Proceedings of the International Conference on Learning Representations (ICLR), 2017a. URL https://arxiv.org/abs/1611.\n03530. Chiyuan Zhang, Qianli Liao, Alexander Rakhlin, Karthik Sridharan, Brando Miranda, Noah Golowich, and Tomaso Poggio. Musings on deep learning: Properties of sgd. 04/2017 2017b. URL https://cbmm.mit.edu/publications/\nmusings-deep-learning-properties-sgd. formerly titled \"Theory of Deep\nLearning III: Generalization Properties of SGD\". Yao Zhang, Andrew M Saxe, Madhu S Advani, and Alpha A Lee. Energy–entropy competition and\nthe effectiveness of stochastic gradient descent in machine learning. Molecular Physics, pp. 1–10,\n2018. A BASIC EXPERIMENTAL DETAILS In the main experiments of the paper we used two classes of architectures. Here we describe them\nin more detail. • Fully connected networks (FCs), with varying number of layers. The size of the hidden\nlayers was the same as the input dimension, and the nonlinearity was ReLU. The last layer\nwas a single Softmax neuron.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 51, |
| "total_chunks": 75, |
| "char_count": 1070, |
| "word_count": 147, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1b24517b-4997-439a-bc3b-9003373172d9", |
| "text": "We used default Keras settings for initialization (Glorot\nuniform).\n• Convolutional neural networks (CNNs), with varying number of layers. The number of\nfilters was 200, and the nonlinearity was ReLU. The last layer was a fully connected single\nSoftmax neuron.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 52, |
| "total_chunks": 75, |
| "char_count": 260, |
| "word_count": 40, |
| "chunking_strategy": "semantic" |
| }, |
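The alternating filter-size and padding scheme described above fixes the spatial size at every layer. As a minimal sketch under Keras conventions with stride 1 (the pairing of kernel size with padding type, and the function names, are illustrative assumptions, not the paper's code):

```python
import math

def conv_output_size(size, kernel, padding, stride=1):
    # Keras conventions: SAME pads so output = ceil(size / stride);
    # VALID uses only full windows: output = ceil((size - kernel + 1) / stride).
    if padding == "SAME":
        return math.ceil(size / stride)
    return math.ceil((size - kernel + 1) / stride)

def cnn_spatial_sizes(input_size, n_layers):
    # Alternate (2, 2)/SAME and (5, 5)/VALID layers with stride 1,
    # as in the architecture description above (pairing is an assumption).
    sizes = [input_size]
    for i in range(n_layers):
        kernel = 2 if i % 2 == 0 else 5
        padding = "SAME" if i % 2 == 0 else "VALID"
        sizes.append(conv_output_size(sizes[-1], kernel, padding))
    return sizes
```

For a 28-pixel (MNIST-sized) input and four layers this yields sizes [28, 28, 24, 24, 20].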
| { |
| "chunk_id": "38343adc-9b19-4c35-8850-d22b20cd8422", |
| "text": "The filter sizes alternated between (2, 2) and (5, 5), and the padding\nbetween SAME and VALID, the strides were 1 (same default settings as in the code for\nGarriga-Alonso et al. (2018)). We used default Keras settings for initialization (Glorot\nuniform). In all experiments in Section 6 we trained with SGD with a learning rate of 0.01, and early stopping\nwhen the accuracy on the whole training set reaches 100%. In the experiments where we learn Boolean functions with the smaller neural network with 7\nBoolean inputs and one Boolean output (results in Figure 1c), we use a variation of SGD similar\nto the method of adversarial training proposed by Ian Goodfellow Goodfellow et al. (2014). We\nchose this second method because SGD often did not find a solution with 0 training error for all\nthe Boolean functions, even with many thousand iterations. By contrast, the adversarial method\nsucceeded in almost all cases, at least for the relatively small neural networks which we focus on\nhere. We call this method adversarial SGD, or advSGD, for short. In SGD, the network is trained using\nthe average loss of a random sample of the training set, called a mini-batch. In advSGD, after every\ntraining step, the classification error for each of the training examples in the mini-batch is computed,\nand a moving average of each of these classification errors is updated. This moving average gives a\nscore for each training example, measuring how \"bad\" the network has recently been at predicting\nthis example. Before getting the next mini-batch, the scores of all the examples are passed through a\nsoftmax to determine the probability that each example is put in the mini-batch. This way, we force\nthe network to focus on the examples it does worst on. For advSGD, we also used a batch size of\n10.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 53, |
| "total_chunks": 75, |
| "char_count": 1792, |
| "word_count": 305, |
| "chunking_strategy": "semantic" |
| }, |
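The advSGD selection scheme described above — a moving average of per-example classification errors, passed through a softmax to weight mini-batch sampling — can be sketched as follows. The decay constant is an assumed value (the chunk does not specify one), and the function names are illustrative:

```python
import math
import random

def update_scores(scores, batch_indices, batch_errors, decay=0.9):
    # Exponential moving average of each example's 0/1 classification error;
    # a higher score means the network has recently been "worse" on that example.
    # decay=0.9 is an assumed value for illustration.
    for i, err in zip(batch_indices, batch_errors):
        scores[i] = decay * scores[i] + (1 - decay) * err

def sample_minibatch(scores, batch_size=10, rng=random):
    # Softmax over the scores gives each example's probability of entering
    # the next mini-batch, focusing training on the hardest examples.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(scores)), weights=probs, k=batch_size)
```

The batch size of 10 matches the value quoted in the text above.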
| { |
| "chunk_id": "04bf8cf4-9f14-4ba4-9bd3-bdbf2d848814", |
| "text": "In all experiments we used binary cross entropy as the loss function. We found that Adam could\nlearn Boolean functions with the smaller neural network as well, but only when choosing the meansquared error loss function. For the algorithms to approximate the marginal likelihood in Section 5.1, we used a Bernoulli likelihood with a probit link function to approximate the true likelihood (given by the Heaviside function) B TESTING THE APPROXIMATIONS TO THE GAUSSIAN PROCESS MARGINAL\nLIKELIHOOD In Figure 5, we show results comparing the empirical frequency of labellings for a sample of 10\nrandom MNIST images, when these frequencies are obtained by sampling parameters of a neural\nnetwork (with a Gaussian distribution with parameters σw = σb = 1.0), versus that calculated\nusing two methods to approximate the marginal likelihood of the Gaussian process corresponding\nto the neural network architecture we use. We compare the Laplacian and expectation-propagation\napproximations.(see Rasmussen (2004) for a description of the algorithms) The network has 2 fully\nconnected hidden layers of 784 ReLU neurons each. C THE CHOICE OF VARIANCE HYPERPARAMETERS One limitation of our approach is that it depends on the choice of the variances of the weights and\nbiases used to define the equivalent Gaussian process. Most of the trends shown in the previous Figure 5: Comparing the empirical frequency of different labellings for a sample of 10 MNIST\nimages obtained from randomly sampling parameters from a neural neural network, versus the approximate marginal likelihood from the corresponding Gaussian process. Orange dots correspond\nto the expectation-propagation approximation, and blue dots to the Laplace approximation. The network has 2 fully connected hidden layers of 784 ReLU neurons each. The weight and bias variances\nare 1.0.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 54, |
| "total_chunks": 75, |
| "char_count": 1834, |
| "word_count": 282, |
| "chunking_strategy": "semantic" |
| }, |
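The Bernoulli-probit approximation mentioned above replaces the hard Heaviside likelihood P(y=1|f) = Θ(f) with the smooth probit Φ(f). A minimal sketch of that likelihood (function names are illustrative):

```python
import math

def probit(z):
    # Phi(z): the standard normal CDF, i.e. the probit link.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bernoulli_probit_likelihood(latents, labels):
    # P(y | f) = prod_i Phi(f_i)^{y_i} * (1 - Phi(f_i))^{1 - y_i}.
    # As |f_i| grows, each factor approaches the Heaviside likelihood
    # that this construction approximates.
    p = 1.0
    for f, y in zip(latents, labels):
        p *= probit(f) if y == 1 else 1.0 - probit(f)
    return p
```

This smooth surrogate is what makes the Laplace and expectation-propagation approximations to the marginal likelihood tractable.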
| { |
| "chunk_id": "165093ff-1074-4e62-aa4c-5bc959d52dc1", |
| "text": "section were robust to this choice, but not all. For instance, the bound for MNIST was higher than\nthat for fashion-MNIST for the fully connected network, if the variance was chosen to be 1.0. In Figures 7 and 6, we show the effect of the variance hyperparameters on the bound. Note that\nfor the fully connected network, the variance of the weights σw seems to have a much bigger role. This is consistent with what is found in Lee et al. (2017). Furthermore, in Lee et al. (2017) they\nfind, for smaller depths, that the neural network Gaussian process behaves best above σw ≈1.0,\nwhich marks the transition between two phases characterized by the asymptotic behavior of the\ncorrelation of activations with depth. This also agrees with the behaviour of the PAC-Bayes bound. For CIFAR10, we find that the bound is best near the phase transition, which is also compatible with\nresults in Lee et al. (2017). For convolutional networks, we found sharper transitions with weight\nvariance, and an larger dependence on bias variance (see Fig. 7). For our experiments, we chose\nvariances values above the phase transition, and which were fixed for each architecture. The best choice of variance would correspond to the Gaussian distribution best approximates the\nbehaviour of SGD. We measured the variance of the weights after training with SGD and early\nstopping (stop when 100% accuracy is reached) from a set of initializations, and obtained values an\norder of magnitude smaller than those used in the experiments above. Using these variances gave\nsignificantly worse bounds, above 50% for all levels of corruption.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 55, |
| "total_chunks": 75, |
| "char_count": 1609, |
| "word_count": 266, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d71a7d69-2d29-4c51-8a24-fe615385dce6", |
| "text": "This measured variance does not necessarily measure the variance of the Gaussian prior that best\nmodels SGD, as it also depends on the shape of the zero-error surface (the likelihood function\non parameter space). However, it might suggest that SGD is biased towards better solutions in\nparamater space, giving a stronger/better bias than that predicted only by the parameter-function\nmap with Gaussian sampling of parameters. One way this could happen is if SGD is more likely\nto find flat (global) \"minima\"9 than what is expected from near-uniform sampling of the region of\nzero-error (probability proportional to volume of minimum). This may be one of the main sources of\nerror in our approach. A better understanding of the relation between SGD and Bayesian parameter\nsampling is needed to progress on this front (Lee et al. (2017); Matthews et al. (2018)). 9Note that the notion of minimum is not well defined given that the region of zero error seems to be mostly\nflat and connected (Sagun et al. (2017); Draxler et al. (2018)) (a) fashion-MNIST (b) fashion-MNIST (e) CIFAR10 (f) CIFAR10 Figure 6: Dependence of PAC-Bayes bound on variance hyperparameters. We plot the value of the\nPAC-Bayes bound versus the standard deviation parameter for the weights and biases, for a sample\nof 10000 instances from different datasets, and a two-layer fully connected network (with the layers\nof the same size as input). The fixed parameter is put to 1.0 in all cases.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 56, |
| "total_chunks": 75, |
| "char_count": 1460, |
| "word_count": 240, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "aa11f184-6cf2-46a3-ab53-d57759f7a495", |
| "text": "(a) fashion-MNIST (b) fashion-MNIST (e) CIFAR10 (f) CIFAR10 Figure 7: Dependence of PAC-Bayes bound on variance hyperparameters. PAC-Bayes bound versus the standard deviation parameter for the weights and biases, for a sample of 10000 instances\nfrom different datasets, and a four-layer convolutional network. The fixed parameter is put to 1.0 in\nall cases. D DETAILS ON THE EXPERIMENTS COMPARING THE PROBABILITY OF\nFINDING A FUNCTION BY SGD AND NEURAL NETWORK GAUSSIAN\nPROCESSES Here we describe in more detail the experiments carried out in Section 7 in the main text.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 57, |
| "total_chunks": 75, |
| "char_count": 570, |
| "word_count": 89, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e0f1fbf4-f14e-4840-a76a-14278a554c23", |
| "text": "We aim\nto compare the probability P(f|S) of finding a particular function f when training with SGD, and\nwhen training with approximate Bayesian inference (ABI), given a training set S. We consider this\nprobability, averaged over of training sets, to look at the properties of the algorithm rather than some\nparticular training set. In particular, consider a sample of N training sets {S1, ..., SN} of size 118. Then we are interested in the quantity (which we refer to as average probability of finding function\nf): ⟨P(f)⟩:= X P(f|Si)\ni=1 For the SGD-like algorithms, we approximate P(f|Si) by the fraction of times we obtain f from a\nset of M random runs of the algorithm (with random initializations too).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 58, |
| "total_chunks": 75, |
| "char_count": 707, |
| "word_count": 119, |
| "chunking_strategy": "semantic" |
| }, |
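On the SGD side, both the per-set estimate P(f|Si) and the average ⟨P(f)⟩ = (1/N) Σ_i P(f|Si) described above reduce to simple counting over run outcomes. A sketch, assuming functions are represented by hashable identifiers (names are illustrative):

```python
from collections import Counter

def p_f_given_s(run_outputs):
    # P(f|S) estimated as the fraction of the M independent training runs
    # (random initializations included) that returned function f.
    m = len(run_outputs)
    return {f: c / m for f, c in Counter(run_outputs).items()}

def average_p_f(per_set_probs):
    # <P(f)> = (1/N) * sum_i P(f|S_i), averaged over the N training sets;
    # a function never found for a given set contributes zero for that set.
    n = len(per_set_probs)
    funcs = set().union(*per_set_probs)
    return {f: sum(p.get(f, 0.0) for p in per_set_probs) / n for f in funcs}
```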
| { |
| "chunk_id": "5eed2efd-16bc-4942-8db4-6cc31f3c211f", |
| "text": "For ABI, we calculate it\nas: ( P (f) if f consistent with Si P (Si) P(f|Si) = ,\n0 otherwise where P(f) is the prior probability of f computed using the EP approximation of the Gaussian\nprocess corresponding to the architecture we use, and P(Si) is the marginal likelihood of Si under\nthis prior probability. In Fig. 8, we show results for the same experiment as in Figure 4 in main text, but with lower\nvariance hyperparameters for the Gaussian process, which results in significantly worse correlation. (a) advSGD. ρ = 0.72,p = 1e−6 (b) Adam. ρ = 0.66,p = 2e−7 Figure 8: We plot ⟨P(f)⟩for a variant of SGD, versus ⟨P(f)⟩computed using the ABI method\ndescribed above (which approximates uniform sampling on parameter space on the zero-error region). The Gaussian process parameters (see Section 5.1 in main text) are σw = 1.0, and σb = 1.0. For advSGD, we have removed functions which only appared once in the whole sample, to avoid\nfinite-size effects.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 59, |
| "total_chunks": 75, |
| "char_count": 953, |
| "word_count": 166, |
| "chunking_strategy": "semantic" |
| }, |
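The ABI expression above is just the prior renormalized over the zero-error region: P(f|S) = P(f)/P(S) for consistent f, with P(S) the total prior mass of functions fitting S. A sketch for a finite function class with an explicit prior (names are illustrative; the paper computes P(f) via the EP approximation to the Gaussian process):

```python
def abi_posterior(prior, is_consistent):
    # P(f|S) = P(f) / P(S) if f fits the training set S exactly, else 0,
    # where P(S) = sum of P(f') over all functions f' consistent with S
    # (the marginal likelihood of S under the prior).
    p_s = sum(p for f, p in prior.items() if is_consistent(f))
    return {f: (p / p_s if is_consistent(f) else 0.0)
            for f, p in prior.items()}
```

By construction the posterior is a proper distribution over the consistent functions.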
| { |
| "chunk_id": "2acbec7a-279d-4711-a431-8078a432d766", |
| "text": "In the captions ρ refers to the 2-tailed Pearson correlation coefficient, and p to its\ncorresponding p value. E SIMPLICITY BIAS AND THE PARAMETER-FUNCTION MAP An important argument in Section 3 in the main text is that the parameter-function map of neural\nnetworks should exhibit the basic simplicity bias phenomenolgy recently described in Dingle et al. in Dingle et al. (2018). In this section we briely describe some key results of reference Dingle et al.\n(2018) relevant to this argument. A computable10 input-output map f : I →O, mapping NI inputs from the set I to NO outputs x\nfrom the set O11 may exhibit simplicity bias if the following restrictions are satisfied (Dingle et al.\n(2018)): 1) Map simplicity The map should have limited complexity, that is its Kolmogorov complexity K(f)\nshould asymptotically satisfy K(f)+K(n) ≪K(x)+O(1), for typical x ∈O where n is a measure\nof the size of the input set (e.g. for binary input sequences, NI = 2n.). 2) Redundancy: There should be many more inputs than outputs (NI ≫NO) so that the probability\nP(x) that the map generates output x upon random selection of inputs ∈I can in principle vary\nsignificantly. 3) Finite size NO ≫1 to avoid potential finite size effects. 4) Nonlinearity: The map f must be a nonlinear function since linear functions do not exhibit bias. 5) Well behaved: The map should not primarily produce pseudorandom outputs (such as the digits of\nπ), because complexity approximators needed for practical applications will mistakenly label these\nas highly complex. For the deep learning learning systems studied in this paper, the inputs of the map f are the parameters that fix the weights for the particular neural network architecture chosen, and the outputs are the\nfunctions that the system produces. Consider, for example, the configuration for Boolean functions\nstudied in the main text. 
While the output functions rapidly grow in complexity with increasing size\nof the input layer, the map itself can be described with a low-complexity procedure, since it consists\nof reading the list of parameters, populating a given neural network architecture and evaluating it for\nall inputs. For reasonable architectures, the information needed to describe the map grows logarithmically with the input dimension n, so for large enough n, the amount of information required to\ndescribe the map will be much less than the information needed to describe a typical function, which\nrequires 22n bits.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 60, |
| "total_chunks": 75, |
| "char_count": 2465, |
| "word_count": 401, |
| "chunking_strategy": "semantic" |
| }, |
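A minimal sketch of this parameter-function map for the Boolean case (the (3, 4, 1) ReLU architecture and unit Gaussian parameter draw below are illustrative assumptions, not the exact setup of the paper): a parameter vector is mapped to the Boolean function it computes by evaluating the network on all 2^n inputs and reading off the truth table.

```python
import itertools
import numpy as np

def boolean_function(params, n):
    """The parameter-function map: evaluate a small ReLU network on every
    2**n binary input and threshold the output, returning the function it
    computes as a truth-table bit string."""
    W1, b1, W2, b2 = params
    bits = []
    for x in itertools.product([0, 1], repeat=n):
        h = np.maximum(0.0, W1 @ np.array(x, dtype=float) + b1)  # hidden ReLU layer
        bits.append('1' if W2 @ h + b2 > 0 else '0')             # thresholded scalar output
    return ''.join(bits)

rng = np.random.default_rng(0)
n, hidden = 3, 4
params = (rng.normal(size=(hidden, n)), rng.normal(size=hidden),
          rng.normal(size=hidden), rng.normal())
f = boolean_function(params, n)
print(f)  # an 8-bit truth table: one output of the parameter-function map
```

Distinct parameter draws frequently land on the same truth table, which is what allows some functions to be vastly more probable than others under random sampling of parameters.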
| { |
| "chunk_id": "d5f876d9-0d96-4749-ab70-08b3fcc06820", |
| "text": "Thus the Kolmogorov complexity K(f) of this map is asymptotically smaller than
the typical complexity of the output, as required by the map simplicity condition 1) above. The redundancy condition 2) depends on the network architecture and discretization. For overparameterised networks, this condition is typically satisfied. In our specific case, where we use floating-point numbers for the parameters (input set I) and Boolean functions (output set O), this condition
is clearly satisfied. Neural networks can represent very large numbers of potential functions (see for example the estimates of VC dimension in Harvey et al. (2017); Baum & Haussler (1989)), so that condition
3) is also generally satisfied. Neural network parameter-function maps are evidently nonlinear, satisfying condition 4).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 61, |
| "total_chunks": 75, |
| "char_count": 794, |
| "word_count": 117, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a107181b-934a-482c-a875-113030a5c457", |
| "text": "Condition 5) is perhaps the least understood condition within simplicity bias. However, the absence of any function with both high probability and high complexity (at least when using
LZ complexity) provides some empirical validation. This condition also agrees with the expectation that neural networks will not predict the outputs of a good pseudorandom number generator. One of the implicit assumptions in the simplicity bias framework is that, although the true Kolmogorov
complexity is always uncomputable, approximations based on well-chosen complexity measures
perform well for most relevant outputs x. Nevertheless, where and when this assumption holds is a
deep problem for which further research is needed.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 62, |
| "total_chunks": 75, |
| "char_count": 706, |
| "word_count": 104, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ffc47f06-7adf-4502-a21d-28d51630efc9", |
| "text": "F OTHER COMPLEXITY MEASURES One of the key steps in the practical application of the simplicity bias framework of Dingle et al. (2018) is the identification of a suitable complexity measure K̃(x) which mimics aspects of
the (uncomputable) Kolmogorov complexity K(x) for the problem being studied. 10Here computable simply means that all inputs lead to outputs; in other words, there is no halting problem.
11This language of finite input and output sets assumes discrete inputs and outputs, either because they are
intrinsically discrete, or because they can be made discrete by a coarse-graining procedure. For the parameter-function maps studied in this paper the set of outputs (the full hypothesis class) is typically naturally discrete,
but the inputs are continuous. However, the input parameters can always be discretised without any loss of
generality. It was found for the maps in Dingle et al. (2018) that several different complexity measures all generated the same
qualitative simplicity bias behaviour: P(x) ≤ 2^−(a K̃(x) + b) (4) with different values of a and b depending on the complexity measure and, of course, on
the map, but independent of the output x. Showing that the same qualitative results obtain for different
complexity measures is a sign of robustness for simplicity bias. Below we list a number of different descriptional complexity measures which we used to extend
the experiments in Section 3 in the main text. F.1 COMPLEXITY MEASURES Lempel-Ziv complexity (LZ complexity for short). The Boolean functions studied in the main text
can be written as binary strings, which makes it possible to use measures of complexity based
on finding regularities in binary strings. One of the best known is Lempel-Ziv complexity, based on the
Lempel-Ziv compression algorithm. It has many nice properties, like asymptotic optimality, and
being asymptotically equal to the Kolmogorov complexity for an ergodic source.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 63, |
| "total_chunks": 75, |
| "char_count": 1925, |
| "word_count": 298, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c2693694-59ed-4c2f-82c9-371be5f58bd7", |
| "text": "We use the variation of Lempel-Ziv complexity from Dingle et al. (2018), which is based on the 1976 Lempel-Ziv
algorithm (Lempel & Ziv (1976)): KLZ(x) = log2(n) if x = 0^n or 1^n,
KLZ(x) = log2(n) [Nw(x1...xn) + Nw(xn...x1)] / 2 otherwise, (5)
where n is the length of the binary string, and Nw(x1...xn) is the number of words in the Lempel-Ziv \"dictionary\" when it compresses output x. The symmetrization makes the measure more fine-grained, and the log2(n) factor, as well as the value for the simplest strings, ensures that they scale as
expected for Kolmogorov complexity. This complexity measure is the primary one used in the main
text. We note that the binary string representation depends on the order in which inputs are listed to construct it, which is not a feature of the function itself. This may affect the LZ complexity, although for
low-complexity input orderings (we use numerical ordering of the binary inputs) it has a negligible
effect, so that KLZ(x) will be very close to the Kolmogorov complexity of the function. A fundamental, though weak, measure of complexity is the entropy. For a given binary
string this is defined as S = −(n0/N) log2(n0/N) − (n1/N) log2(n1/N), where n0 is the number of zeros in the string,
n1 is the number of ones, and N = n0 + n1. This measure is close to 1 when the number of
ones and zeros is similar, and is close to 0 when the string is mostly ones or mostly zeros. Entropy
and KLZ(x) are compared in Fig. 9, and in more detail in supplementary note 7 (and supplementary
information figure 1) of Dingle et al. (2018). They correlate, in the sense that low entropy
S(x) implies low KLZ(x), but it is also possible to have large entropy but low KLZ(x), for example
for a string such as 10101010.... Boolean expression complexity. Boolean functions can be compressed by finding simpler ways to
represent them. 
We used the standard SymPy implementation of the Quine-McCluskey algorithm to
minimize the Boolean function into a small sum-of-products form, and then defined the number of
operations in the resulting Boolean expression as a Boolean complexity measure.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 64, |
| "total_chunks": 75, |
| "char_count": 2095, |
| "word_count": 363, |
| "chunking_strategy": "semantic" |
| }, |
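As a minimal sketch, the KLZ measure of equation (5) and the entropy can be implemented directly; the LZ76 parsing below is a common exhaustive-history variant and may differ in small details from the dictionary count used by the authors.

```python
from math import log2

def lz76_words(s):
    """Number of words in a Lempel-Ziv (1976) style parsing of the string s."""
    i, words = 0, 0
    while i < len(s):
        l = 1
        # grow the current word while it already occurs in the preceding text
        while i + l <= len(s) and s[i:i + l] in s[:i + l - 1]:
            l += 1
        words += 1
        i += l
    return words

def K_LZ(s):
    """Symmetrized LZ complexity, eq. (5): log2(n) for the trivial strings,
    otherwise log2(n) * [Nw(forward) + Nw(reversed)] / 2."""
    n = len(s)
    if s in ('0' * n, '1' * n):
        return log2(n)
    return log2(n) * (lz76_words(s) + lz76_words(s[::-1])) / 2

def entropy(s):
    """Binary entropy S of the bit frequencies of s."""
    N = len(s)
    return -sum(p * log2(p)
                for p in (s.count(c) / N for c in '01') if p > 0)

alternating = '01' * 32
print(entropy(alternating))  # 1.0: maximal entropy
print(K_LZ(alternating))     # 18.0: yet far below a typical random 64-bit string
```

Under this sketch the maximally entropic string 0101... still gets a low KLZ, illustrating the point above that entropy is a much weaker complexity measure than LZ.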
| { |
| "chunk_id": "312b1e32-6213-4c1a-a67c-4451a9c53cb4", |
| "text": "Generalization complexity. Franco et al. have introduced a complexity measure for Boolean
functions, designed to capture how difficult the function is to learn and generalize (Franco & Anthony (2004)); it was used to empirically find that simple functions generalize better in a neural
network (Franco (2006)). The measure consists of a sum of terms, each measuring the fraction,
averaged over all inputs, of neighbours for which the output changes. The first term considers neighbours
at Hamming distance 1, the second at Hamming distance 2, and so on. The first term is also
known (up to a normalization constant) as the average sensitivity (Friedgut (1998)). The terms in the series have also been called \"generalized robustness\" in the evolutionary theory literature (Greenbury
et al. (2016)). Here we use the first two terms, so the measure is:", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 65, |
| "total_chunks": 75, |
| "char_count": 845, |
| "word_count": 135, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9703c69d-b487-4645-afe2-155d4ff70aa2", |
| "text": "C(f) = C1(f) + C2(f),
C1(f) = (1 / (2^n n)) Σ_{x∈X} Σ_{y∈Nei_1(x)} |f(x) − f(y)|,
C2(f) = (1 / (2^n n(n−1))) Σ_{x∈X} Σ_{y∈Nei_2(x)} |f(x) − f(y)|,
where Nei_i(x) is the set of all neighbours of x at Hamming distance i. Critical sample ratio. A measure of the complexity of a function was introduced in Krueger et al.
(2017) to explore the dependence of generalization on complexity. In general, it is defined with
respect to a sample of inputs as the fraction of those samples which are critical samples, a critical sample being
an input such that there is another input within a ball of radius r producing a different output (for
discrete outputs). Here, we define it as the fraction of all inputs that have another input at Hamming
distance 1 producing a different output. F.2 CORRELATION BETWEEN COMPLEXITIES In Fig. 9, we compare the different complexity measures against one another. We also plot the
frequency of each complexity; generally, more functions are found with higher complexity. F.3 PROBABILITY-COMPLEXITY PLOTS In Fig. 10 we show how the probability versus complexity plots look for other complexity measures. The behaviour is similar to that seen for the LZ complexity measure in Fig. 1(b) of the main text. In
Fig. 12 we show probability versus LZ complexity plots for other choices of parameter distributions. Figure 9: Scatter matrix showing the correlation between the different complexity measures used
in this paper. On the diagonal, a histogram (in grey) of frequency versus complexity is depicted. The
functions are from the sample of 10^8 parameters for the (7, 40, 40, 1) network.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 66, |
| "total_chunks": 75, |
| "char_count": 1557, |
| "word_count": 260, |
| "chunking_strategy": "semantic" |
| }, |
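A minimal sketch of the two measures just defined, for a Boolean function given as a truth table over n-bit inputs (assuming n ≥ 2; the dictionary-based representation and helper names are illustrative, and the C1, C2 normalizations follow the denominators in the equations above):

```python
import itertools

def neighbours(x, dist):
    """All inputs at exactly Hamming distance `dist` from the tuple x."""
    for idxs in itertools.combinations(range(len(x)), dist):
        y = list(x)
        for i in idxs:
            y[i] ^= 1
        yield tuple(y)

def gen_complexity(f, n):
    """C(f) = C1(f) + C2(f): sums of |f(x) - f(y)| over Hamming-1 and
    Hamming-2 neighbours, normalized by 2^n n and 2^n n(n-1) respectively."""
    X = list(itertools.product([0, 1], repeat=n))
    C1 = sum(abs(f[x] - f[y]) for x in X for y in neighbours(x, 1)) / (2**n * n)
    C2 = sum(abs(f[x] - f[y]) for x in X for y in neighbours(x, 2)) / (2**n * n * (n - 1))
    return C1 + C2

def critical_sample_ratio(f, n):
    """Fraction of inputs having a Hamming-1 neighbour with a different output."""
    X = list(itertools.product([0, 1], repeat=n))
    return sum(any(f[x] != f[y] for y in neighbours(x, 1)) for x in X) / 2**n

n = 3
parity = {x: sum(x) % 2 for x in itertools.product([0, 1], repeat=n)}
print(gen_complexity(parity, n))         # 1.0: every Hamming-1 flip changes parity
print(critical_sample_ratio(parity, n))  # 1.0: every input is a critical sample
```

The parity function maximizes both measures, while a constant function scores 0.0 on both, matching the intuition that these measures track average sensitivity.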
| { |
| "chunk_id": "30a5f8f4-6dcf-4a90-a8ce-be112fd3ffd8", |
| "text": "(a) Probability versus Boolean complexity (b) Probability versus generalization complexity (c) Probability versus entropy (d) Probability versus critical sample ratio Figure 10: Probability versus different measures of complexity (see main text for Lempel-Ziv),
estimated from a sample of 10^8 parameters, for a network of shape (7, 40, 40, 1). Points with a
frequency of 10^−8 are removed for clarity because these suffer from finite-size effects (see Appendix G). The measures of complexity are described in Appendix F. Figure 11: Histogram of functions in the probability versus Lempel-Ziv complexity plane, weighted
according to their probability. Probabilities are estimated from a sample of 10^8 parameters, for a
network of shape (7, 40, 40, 1). Figure 12: Probability versus LZ complexity for a network of shape (7, 40, 40, 1) and varying sampling distributions. Samples are of size 10^7. (a) Weights are sampled from a Gaussian with variance
1/√n, where n is the input dimension of each layer. (b) Weights are sampled from a Gaussian with
variance 2.5. F.4 EFFECTS OF TARGET FUNCTION COMPLEXITY ON LEARNING FOR DIFFERENT
COMPLEXITY MEASURES Here we show the effect of the complexity of the target function on learning, as well as other complementary results. We compare neural network learning to random guessing, which we call
the \"unbiased learner\". Note that both probably have the same hypothesis class, as we tested that the
neural network used here can fit random functions. The functions in these experiments were chosen by randomly sampling parameters of the neural
network used, and so even the highest-complexity ones are probably not fully random12. In fact,
when training the network on truly random functions, we obtain generalization errors equal to or above
those of the unbiased learner.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 67, |
| "total_chunks": 75, |
| "char_count": 1800, |
| "word_count": 282, |
| "chunking_strategy": "semantic" |
| }, |
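The probability estimates behind these plots come from sampling parameters and counting how often each function appears. A toy version of that procedure, under assumed choices (a (2, 3, 1) ReLU network, unit Gaussian parameters, and 2 × 10^4 samples rather than 10^8):

```python
from collections import Counter
import itertools
import numpy as np

def sample_function(rng, n=2, hidden=3):
    """Draw i.i.d. Gaussian parameters and return the truth table of the
    resulting network over all 2**n Boolean inputs, as a bit string."""
    W1 = rng.normal(size=(hidden, n))
    b1 = rng.normal(size=hidden)
    W2 = rng.normal(size=hidden)
    b2 = rng.normal()
    bits = []
    for x in itertools.product([0, 1], repeat=n):
        h = np.maximum(0.0, W1 @ np.array(x, dtype=float) + b1)  # ReLU layer
        bits.append('1' if W2 @ h + b2 > 0 else '0')
    return ''.join(bits)

rng = np.random.default_rng(1)
N = 20_000
counts = Counter(sample_function(rng) for _ in range(N))
# empirical P(x) for each sampled function, most probable first
for func, c in counts.most_common(4):
    print(func, c / N)
```

Plotting these empirical probabilities against the complexity of each truth table gives a (tiny) probability-complexity plot; as Appendix G discusses, frequencies near 1/N are unreliable estimates of the true probability.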
| { |
| "chunk_id": "4b6ef90f-b5d9-4543-abcb-e1881c060651", |
| "text": "This is expected from the No Free Lunch theorem, which says that no
algorithm can generalize better (for off-training-set error), uniformly over all functions, than any other
algorithm (Wolpert & Waters (1994)). (a) Generalization error of learned functions (b) Complexity of learned functions (c) Number of iterations to perfectly fit training set (d) Net Euclidean distance traveled in parameter space
to fit training set Figure 13: Different learning metrics versus the LZ complexity of the target function, when learning
with a network of shape (7, 40, 40, 1). Dots represent the means, while the shaded envelope corresponds to piecewise linear interpolation of the standard deviation, over 500 random initializations
and training sets. F.5 LEMPEL-ZIV VERSUS ENTROPY To check that the correlation between LZ complexity and generalization is not only due to
a correlation with function entropy (which is just a measure of the fraction of inputs mapping to
12The fact that non-random strings can have maximum LZ complexity is a consequence of LZ complexity
being a less powerful complexity measure than Kolmogorov complexity; see e.g. Estevez-Rams et al. (2013). The fact that neural networks do well for non-random functions, even if they have maximum LZ complexity, suggests that
their simplicity bias captures a notion of complexity stronger than LZ. (a) Generalization error of learned functions (b) Complexity of learned functions (c) Number of iterations to perfectly fit training set (d) Net Euclidean distance traveled in parameter space
to fit training set Figure 14: Different learning metrics versus the generalization complexity of the target function,
when learning with a network of shape (7, 40, 40, 1). Dots represent the means, while the shaded
envelope corresponds to piecewise linear interpolation of the standard deviation, over 500 random
initializations and training sets. (a) Generalization error of learned functions (b) Complexity of learned functions", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 68, |
| "total_chunks": 75, |
| "char_count": 1965, |
| "word_count": 299, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0c70ef2b-2d24-4491-acca-53d55c0481ac", |
| "text": "(c) Number of iterations to perfectly fit training set (d) Net Euclidean distance traveled in parameter space
to fit training set Figure 15: Different learning metrics versus the Boolean complexity of the target function, when
learning with a network of shape (7, 40, 40, 1). Dots represent the means, while the shaded envelope
corresponds to piecewise linear interpolation of the standard deviation, over 500 random initializations and training sets. (a) Generalization error of learned functions (b) Complexity of learned functions (c) Number of iterations to perfectly fit training set (d) Net Euclidean distance traveled in parameter space
to fit training set Figure 16: Different learning metrics versus the entropy of the target function, when learning with a
network of shape (7, 40, 40, 1). Dots represent the means, while the shaded envelope corresponds to
piecewise linear interpolation of the standard deviation, over 500 random initializations and training
sets. 1 or 0; see Section F), we observed that for some target functions with maximum entropy (but
which are simple when measured using LZ complexity), the network still generalizes better than
the unbiased learner, showing that the bias towards simpler functions is better captured by complexity measures more
powerful than entropy13. This is confirmed by the results in Fig. 17, where
we fix the target function entropy to 1.0 and observe that the generalization error still exhibits
considerable variation, as well as a positive correlation with complexity. Figure 17: Generalization error of the learned function versus the complexity of the target function, for
target functions with fixed entropy 1.0, for a network of shape (7, 20, 20, 1). Complexity measures
are (a) LZ and (b) generalization complexity. Here the training set was of size 64, but sampled
with replacement, and the generalization error is over the whole input space. 
Note that despite the
fixed entropy there is still variation in generalization error, which correlates with the complexity of
the function. These figures demonstrate that entropy is a less accurate complexity measure than LZ
or generalization complexity for predicting generalization performance. G FINITE-SIZE EFFECTS FOR SAMPLING PROBABILITY", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 69, |
| "total_chunks": 75, |
| "char_count": 2253, |
| "word_count": 344, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e8584977-1e7a-4431-b627-f141def0960c", |
| "text": "Since for a sample of size N the minimum estimable probability is 1/N, many of the low-probability
functions that arise just once in a sample may in fact have a much lower probability than this estimate suggests. See Figure 18 for an illustration of how this finite-size sampling effect manifests with changing sample
size N. For this reason, these points are typically removed from plots.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 70, |
| "total_chunks": 75, |
| "char_count": 364, |
| "word_count": 60, |
| "chunking_strategy": "semantic" |
| }, |
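This floor effect is easy to reproduce with a toy heavily biased distribution (the geometric weights below are an illustrative assumption, standing in for a probability-complexity distribution): anything observed at all is assigned frequency at least 1/N, however small its true probability.

```python
import random

random.seed(0)
K = 30
# true probabilities p_k proportional to 2**-k: a heavily biased toy map
probs = [2.0 ** -k for k in range(1, K + 1)]
total = sum(probs)
probs = [p / total for p in probs]

def empirical(N):
    """Estimate probabilities from N samples; observed outcomes get >= 1/N."""
    counts = {}
    for k in random.choices(range(1, K + 1), weights=probs, k=N):
        counts[k] = counts.get(k, 0) + 1
    return {k: c / N for k, c in counts.items()}

for N in (10**3, 10**5):
    est = empirical(N)
    # every estimate sits at or above the 1/N floor, so rare outcomes
    # that happen to appear once are overestimated
    print(N, min(est.values()), 1 / N)
```

Outcomes with true probability far below 1/N either vanish from the sample or, if they appear once, are overestimated at 1/N, which is exactly why the lowest-frequency points are dropped from the plots.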
| { |
| "chunk_id": "cfa90111-e3b6-4b65-bd93-3c6ccba63290", |
| "text": "H EFFECT OF NUMBER OF LAYERS ON SIMPLICITY BIAS In Figure 19 we show the effect of the number of layers on the bias (for feedforward neural networks with 40 neurons per layer). The left panels show the probability of individual functions
versus their complexity. The right panels show the histogram of complexities, weighted by the probability with which each function appeared in the sample of parameters. The histograms therefore show
the distribution over complexities when randomly sampling parameters14. We can see that between
the perceptron (no hidden layers) and the 2-hidden-layer network there is an increased number of higher-complexity
functions. This is most likely because of the increasing expressivity of the network. For 2 layers
and above, the expressivity does not significantly change, and instead we observe a shift of the
distribution towards lower complexity. 13LZ is a better approximation to Kolmogorov complexity than entropy (Cover & Thomas (2012)), but of
course LZ can still fail, for example when measuring the complexity of the digits of π.
14Using a Gaussian with variance 1/√n in this case, n being the number of inputs to the neuron. Figure 18: Probability (calculated from frequency) versus Lempel-Ziv complexity for a neural
network of shape (7, 40, 40, 1), and sample sizes N = 10^6, 10^7, 10^8. The lowest-frequency functions
for a given sample size can be seen to suffer from finite-size effects, causing them to have a higher
estimated frequency than their true probability. I BIAS AND THE CURSE OF DIMENSIONALITY We have argued that the main reason deep neural networks are able to generalize is because their
implicit prior over functions is heavily biased. We base this claim on PAC-Bayes theory, which tells
us that enough bias implies generalization. The contrapositive of this claim is that bad generalization
implies small bias. In other words, models which perform poorly on certain tasks relative to deep
learning should show little bias.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 71, |
| "total_chunks": 75, |
| "char_count": 1948, |
| "word_count": 316, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ffa4fb2e-b060-4a43-94d2-e914c85216b1", |
| "text": "Here we describe some examples of this, connecting them to the
curse of dimensionality. In future work, we plan to explore the converse statement, that small bias
implies bad generalization; one way of approaching this would be via lower bounds matching the
PAC-Bayes upper bound. Complex machine learning tasks require models which are expressive enough to be able to learn
the target function. For this reason, before deep learning, the main approach to complex tasks was
to use non-parametric models with unlimited expressivity. These include Gaussian processes
and other kernel methods. However, unlike deep learning, these models were not successful when
applied to tasks where the dimensionality of the input space was very large. We therefore expect
that these models show little bias, as they generalize poorly. Many of these models use kernels which encode some notion of local continuity. For example, the
Gaussian kernel ensures that points within a ball of radius λ are highly correlated. On the other
hand, points separated by a distance greater than λ can be very different. Intuitively, we can divide
the space into regions of length scale λ. If the input domain we are considering has O(1) volume and
dimensionality d (is a subset of R^d), then the volume of each of these regions is of order λ^d, and
the number of these regions is of order 1/λ^d. In the case of binary classification, we can estimate
the effective number of functions which the kernel \"prefers\" by constraining the function to take
label 0 or 1 within each region, but with no further constraint. The number of such functions is 2^(a^d),
where we let a := 1/λ. Each of these functions is equally likely, and together they take the bulk of
the total probability, so that each has probability close to 2^−(a^d), which decreases very quickly with
dimension.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 72, |
| "total_chunks": 75, |
| "char_count": 1839, |
| "word_count": 308, |
| "chunking_strategy": "semantic" |
| }, |
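The scaling argument above can be made concrete with a few lines of arithmetic. Under illustrative assumptions (a = 1/λ = 2 regions per axis, m = 10^4 training examples, confidence δ = 0.05), a PAC-Bayes-style error scale of roughly (−ln P(f) + ln(1/δ))/m with −ln P(f) ≈ a^d ln 2 grows exponentially with d:

```python
from math import log

a = 2          # regions per axis, a = 1/lambda (illustrative assumption)
m = 10_000     # training-set size (illustrative assumption)
delta = 0.05

def bound_scale(d):
    """PAC-Bayes-style error scale for a kernel-'preferred' function in
    d dimensions: (-ln P + ln(1/delta)) / m with -ln P ~ a**d * ln 2."""
    neg_ln_P = (a ** d) * log(2)
    return (neg_ln_P + log(1 / delta)) / m

for d in (2, 5, 10, 15):
    print(d, bound_scale(d))  # already vacuous (> 1) by d = 15
```

Even with 10^4 training examples the bound scale exceeds 1 well before d reaches typical image dimensionalities, which is the sense in which a merely locally continuous prior is "not biased enough".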
| { |
| "chunk_id": "52e7913a-b69f-4463-b7fd-88a8d04f503b", |
| "text": "Kernels like the Gaussian kernel are biased towards functions which are locally continuous. However, for high dimension d, they are not biased enough. In particular, as the probability of the
most likely functions decays doubly exponentially with d, we expect PAC-Bayes-like bounds to grow
exponentially with d, quickly becoming vacuous. This argument is essentially a way of understanding the curse of dimensionality from the perspective of priors over functions. (a) Perceptron (b) Perceptron (c) 1 hidden layer (d) 1 hidden layer (e) 2 hidden layers (f) 2 hidden layers (g) 5 hidden layers (h) 5 hidden layers (i) 8 hidden layers (j) 8 hidden layers Figure 19: Probability versus LZ complexity for networks with different numbers of layers. Samples
are of size 10^6. (a) & (b) A perceptron with 7 input neurons (complexity is capped at 80 in (a) to
aid comparison with the other figures). (c) & (d) A network with 1 hidden layer of 40 neurons. (e)
& (f) A network with 2 hidden layers of 40 neurons each. (g) & (h) A network with 5 hidden layers of 40
neurons each. (i) & (j) A network with 8 hidden layers of 40 neurons each. The topic of generalization in neural networks has been extensively studied both in theory and experiment, and the literature is vast. Theoretical approaches to generalization include classical notions
like VC dimension (Baum & Haussler (1989); Harvey et al. (2017)) and Rademacher complexity (Sun
et al. (2016)), but also more modern concepts such as stability (Hardt et al. (2016)), robustness (Xu &
Mannor (2012)), and compression (Arora et al. (2018)), as well as studies on the relation between generalization and properties of stochastic gradient descent (SGD) algorithms (Zhang et al. (2017b); Soudry
et al. (2017); Advani & Saxe (2017)). Empirical studies have also pushed the boundaries proposed by theory. In particular, recent work
by Zhang et al. 
(2017a) shows that while deep neural networks are expressive
enough to fit randomly labeled data, they can still generalize for data with structure.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 73, |
| "total_chunks": 75, |
| "char_count": 2033, |
| "word_count": 341, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9fdb9216-85fb-4ecd-aa9c-8ad78afea0ee", |
| "text": "The generalization error correlates with the amount of randomization in the labels. A similar result was found
much earlier in experiments with smaller neural networks (Franco (2006)), where the authors defined
a complexity measure for Boolean functions, called generalization complexity (see Appendix F),
which appears to correlate well with the generalization error. Inspired by the results of Zhang et al. (2017a), Arpit et al. (Krueger et al. (2017)) propose
that the data-dependence of generalization for neural networks can be explained because they tend to
prioritize learning simple patterns first. The authors show some experimental evidence supporting
this hypothesis, and suggest that SGD might be the origin of this implicit regularization. This
argument is inspired by the fact that SGD converges to minimum-norm solutions for linear models
(Yao et al. (2007)), but only suggestive empirical results are available for the case of nonlinear models,
so the question remains open (Soudry et al. (2017)). Wu et al. (2017) argue that full-batch gradient descent also generalizes well, suggesting that SGD is not the main cause behind
generalization. It may be that SGD provides some form of implicit regularization, but here we argue
that the exponential bias towards simplicity is so strong that it is likely the main origin of the implicit
regularization in the parameter-function map. The idea of a bias towards simple patterns has a long history, going back to the philosophical
principle of Occam's razor, but it has been formalized much more recently in several ways in
learning theory. For instance, the concepts of minimum description length (MDL) (Rissanen (1978)),
Blumer algorithms (Blumer et al. (1987); Wolpert & Waters (1994)), and universal induction (Ming
& Vitányi (2014)) all rely on a bias towards simple hypotheses. 
Interestingly, these approaches
go hand in hand with non-uniform learnability, an area of learning theory which tries to
predict data-dependent generalization. For example, MDL tends to be analyzed using structural
risk minimization or the related PAC-Bayes approach (Vapnik (2013); Shalev-Shwartz & Ben-David
(2014)). Lattimore & Hutter (2013) have shown that the generalization error grows with the
target function complexity for a perfect Occam algorithm15, which uses Kolmogorov complexity
to choose between hypotheses. Schmidhuber applied variants of universal induction to learn neural
networks (Schmidhuber (1997)).", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 74, |
| "total_chunks": 75, |
| "char_count": 2480, |
| "word_count": 376, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "56ab9ef2-fe8c-4fb7-8c88-7554863e2a06", |
| "text": "The simplicity bias framework of Dingle et al. (2018) arises from
a simpler version of the coding theorem of Solomonoff and Levin (Ming & Vitányi (2014)). More
theoretical work is needed to make these connections rigorous, but it may be that neural networks
intrinsically approximate universal induction, because the parameter-function map results in a prior
which approximates the universal distribution. Other approaches that have been explored for neural networks try to bound generalization via capacity measures such as different types of norms of the weights (Neyshabur et al. (2015); Keskar
et al. (2016); Neyshabur et al. (2017b;a); Bartlett et al. (2017); Golowich et al. (2018); Arora et al.
(2018)) or unit capacity (Neyshabur et al. (2018)). These capture the behaviour of the real test error
(like its improvement with overparametrization (Neyshabur et al. (2018)) or with training epoch
(Arora et al. (2018))). However, these approaches have not yet been able to obtain non-vacuous bounds.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 75, |
| "total_chunks": 75, |
| "char_count": 1010, |
| "word_count": 156, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0de8a5f3-4275-42ea-87cb-3e99a0b65610", |
| "text": "¹⁵ Here what we call a 'perfect Occam algorithm' is an algorithm which returns the simplest hypothesis consistent with the training data, as measured by some complexity measure, such as Kolmogorov complexity. Another popular approach to explaining generalization is based on the idea of flat minima (Keskar et al. (2016); Wu et al. (2017)). In Hochreiter & Schmidhuber (1997), Hochreiter and Schmidhuber argue that flatness could be linked to generalization via the MDL principle. Several experiments also suggest that flatness correlates with generalization. However, it has also been pointed out that flatness alone is not enough to understand generalization, as sharp minima can also generalize (Dinh et al. (2017)). We show in Section 2 of the main text that simple functions have much larger regions of parameter space producing them, so they likely give rise to flat minima, even though the same function might also be produced by other, sharp regions of parameter space. Other papers discussing properties of the parameter-function map in neural networks include Montufar et al. (2014), who suggested that studying the size of the parameter space producing functions of a given complexity (measured by the number of linear regions) would be interesting, but left it for future work. In Poole et al. (2016), Poole et al. briefly examine the sensitivity of the parameter-function map to small perturbations. In spite of these previous works, there is clearly still much scope to study the properties of the parameter-function map for neural networks. Other work applying PAC-Bayes theory to deep neural networks includes Dziugaite & Roy (2017; 2018) and Neyshabur et al. (2017a;b). The work of Dziugaite and Roy is noteworthy in that it also computes non-vacuous bounds. However, their networks differ from those used in practice in that they are either stochastic (Dziugaite & Roy (2017)) or trained using a non-standard variant of SGD (Dziugaite & Roy (2018)). Admittedly, they only do this because the current analysis cannot be rigorously extended to SGD, and further work in that direction could make their bounds applicable to realistic networks. One crucial difference from previous PAC-Bayes work is that we consider the prior over functions, rather than the prior over parameters.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 76, |
| "total_chunks": 75, |
| "char_count": 2308, |
| "word_count": 363, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ec286093-757a-4b9c-90cf-8e4a53724834", |
| "text": "We think this is one reason why our bounds are tighter than other PAC-Bayes bounds. The many-to-one nature of the parameter-function map in models with redundancy means that low KL divergence between prior and posterior in parameter space implies low KL divergence between the corresponding prior and posterior in function space; conversely, one can have low KL divergence in function space together with a large KL divergence in parameter space. By the PAC-Bayes bound of McAllester (1999b), this implies that bounds using distributions in function space can be tighter. As we explain in Section 5, we actually use the less commonly used bound from McAllester (1998), which assumes the posterior is the Bayesian posterior; we argue this is a reasonable approximation. This posterior is known to give the tightest PAC-Bayes bounds for a given prior (McAllester (1999b)). Finally, our work follows the growing line of work exploring random neural networks (Schoenholz et al. (2017a); Giryes et al. (2016); Poole et al. (2016); Schoenholz et al. (2017b)) as a way to understand fundamental properties of neural networks, robust to other choices such as initialization, objective function, and training algorithm.", |
| "paper_id": "1805.08522", |
| "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", |
| "authors": [ |
| "Guillermo Valle-Pérez", |
| "Chico Q. Camargo", |
| "Ard A. Louis" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.08522v5", |
| "chunk_index": 77, |
| "total_chunks": 75, |
| "char_count": 1205, |
| "word_count": 184, |
| "chunking_strategy": "semantic" |
| } |
| ] |