[
{
"chunk_id": "54562b6f-4be8-4862-a8a6-acc7aab6474d",
"text": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam\n\nMohammad Emtiyaz Khan*1, Didrik Nielsen*1, Voot Tangkaratt*1, Wu Lin2, Yarin Gal3, Akash Srivastava4\n\nAbstract\n\nUncertainty computation in deep learning is essential to design robust and reliable systems. Variational inference (VI) is a promising approach for such computation, but requires more effort to implement and execute compared to maximum-likelihood methods. In this paper, we propose new natural-gradient algorithms to reduce such efforts for Gaussian mean-field VI. Our algorithms can be implemented within the Adam optimizer by perturbing the network weights during gradient evaluations, and uncertainty estimates can be cheaply obtained by using the vector that adapts the learning rate. This requires lower memory, computation, and implementation effort than existing VI methods, while obtaining uncertainty estimates of comparable quality. Our empirical results confirm this and further suggest that the weight-perturbation in our algorithm could be useful for exploration in reinforcement learning and stochastic optimization.\n\nusing Bayes' rule. Unfortunately, this is infeasible in large models such as Bayesian neural networks. Traditional methods such as Markov Chain Monte Carlo (MCMC) methods converge slowly and might require a large memory (Balan et al., 2015). In contrast, variational inference (VI) methods can scale to large models by using stochastic-gradient (SG) methods, as recent work has shown (Graves, 2011; Blundell et al., 2015; Ranganath et al., 2014; Salimans et al., 2013). These works employ adaptive learning-rate methods, such as RMSprop (Tieleman & Hinton, 2012), Adam (Kingma & Ba, 2015) and AdaGrad (Duchi et al., 2011), for which easy-to-use implementations are available in existing codebases.\n\nDespite their simplicity, these VI methods require more computation, memory, and implementation effort compared to maximum-likelihood estimation (MLE). One reason for this is that the number of parameters in VI is usually much larger than in MLE, which increases the memory and computation costs. Another reason is that existing codebases are designed and optimized for tasks such as MLE, and their application to VI involves a significant amount of modifications in the code. We ask the following question: is it possible to avoid these issues and make VI as easy as MLE?\n\n1.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 0,
"total_chunks": 91,
"char_count": 2421,
"word_count": 367,
"chunking_strategy": "semantic"
},
{
"chunk_id": "60f68d7f-88c3-4510-9dca-6730807c38e8",
"text": "Introduction\n\nDeep learning methods have had enormous recent success in fields where prediction accuracy is important, e.g., computer vision and speech recognition. However, for these methods to be useful in fields such as robotics and medical diagnos-\n\nIn this paper, we propose to use natural-gradient methods to address these issues for Gaussian mean-field VI. By proposing a natural-momentum method along with a series of approximations, we obtain algorithms that can be implemented with minimal changes to the existing codebases of adaptive learning-rate methods.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 1,
"total_chunks": 91,
"char_count": 564,
"word_count": 82,
"chunking_strategy": "semantic"
},
{
"chunk_id": "657e9eaa-1ef3-41e3-895d-35eb61b11234",
"text": "tics, we need to know the uncertainty of our predictions. For example, physicians might need such uncertainty estimates. Lack of such estimates might result in unreliable decisions which can sometimes have disastrous consequences.\n\nOne of the goals of Bayesian inference is to provide uncertainty estimates by using the posterior distribution obtained\n\nThe main change involves perturbing the network weights during the gradient computation, and uncertainty estimates are obtained by using the vector that adapts the learning rate. This requires lower memory, computation, and implementation effort than existing methods for VI while obtaining uncertainty estimates of comparable quality. Our experimental results confirm this, and suggest that the estimated uncertainty could improve exploration in problems such as\n\n...project, Tokyo, Japan; 2University of British Columbia, Vancouver; ... Correspondence to: Khan <emtiyaz.khan@riken.jp>. Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).\n\nBayesian inference in models such as neural networks has a long history in machine learning (MacKay, 2003; Bishop, 2006). Earlier work proposed a variety of algo-",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 2,
"total_chunks": 91,
"char_count": 1172,
"word_count": 166,
"chunking_strategy": "semantic"
},
{
"chunk_id": "720e275a-5619-4f40-b87a-8674a977da70",
"text": "Adam:\n1: while not converged do\n2: θ ← µ\n3: Randomly sample a data example Di\n4: g ← −∇ log p(Di|θ)\n5: m ← γ1 m + (1 − γ1) g\n6: s ← γ2 s + (1 − γ2) (g ◦ g)\n7: m̂ ← m/(1 − γ1^t), ŝ ← s/(1 − γ2^t)\n8: µ ← µ − α m̂/(√ŝ + δ)\n9: t ← t + 1\n10: end while\n\nVadam:\n1: while not converged do\n2: θ ← µ + σ ◦ ϵ, where ϵ ∼ N(0, I), σ ← 1/√(N s + λ)\n3: Randomly sample a data example Di\n4: g ← −∇ log p(Di|θ)\n5: m ← γ1 m + (1 − γ1) (g + λµ/N)\n6: s ← γ2 s + (1 − γ2) (g ◦ g)\n7: m̂ ← m/(1 − γ1^t), ŝ ← s/(1 − γ2^t)\n8: µ ← µ − α m̂/(√ŝ + λ/N)\n9: t ← t + 1\n10: end while\n\nFigure 1. Comparison of Adam (left) and one of our proposed methods, Vadam (right). Adam performs maximum-likelihood estimation while Vadam performs variational inference, yet the two pseudocodes differ only slightly (differences highlighted in red). A major difference is in line 2 where, in Vadam, weights are perturbed during the gradient evaluations.\n\nrithms such as MCMC methods (Neal, 1995), Laplace's method (Denker & LeCun, 1991), and variational inference (Hinton & Van Camp, 1993; Barber & Bishop, 1998). ... (2016); Balan et al. (2015) use stochastic-gradient Langevin dynamics. Such approaches are viable alternatives to the",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 3,
"total_chunks": 91,
"char_count": 1190,
"word_count": 232,
"chunking_strategy": "semantic"
},
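The Vadam pseudocode transcribed in the chunk above can be sketched in a few lines of NumPy. This is a minimal illustration on a toy Gaussian model, not the authors' implementation; the quadratic loss, the data, and every hyperparameter value below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an assumption, not from the paper): y_i ~ N(theta, I), so
# -log p(D_i|theta) is 0.5*||theta - D_i||^2 up to a constant.
N, D = 100, 2
theta_true = np.array([2.0, -1.0])
data = theta_true + rng.normal(size=(N, D))

# Vadam state and hyperparameters (illustrative values)
alpha, gamma1, gamma2, lam = 0.01, 0.9, 0.999, 1.0
mu, m, s = np.zeros(D), np.zeros(D), np.zeros(D)
t = 1

while t <= 5000:                                         # line 1
    sigma = 1.0 / np.sqrt(N * s + lam)
    theta = mu + sigma * rng.normal(size=D)              # line 2: perturb weights
    Di = data[rng.integers(N)]                           # line 3
    g = theta - Di                                       # line 4: grad of -log p(Di|theta)
    m = gamma1 * m + (1 - gamma1) * (g + lam * mu / N)   # line 5
    s = gamma2 * s + (1 - gamma2) * g * g                # line 6
    m_hat = m / (1 - gamma1 ** t)                        # line 7: bias correction
    s_hat = s / (1 - gamma2 ** t)
    mu = mu - alpha * m_hat / (np.sqrt(s_hat) + lam / N) # line 8: delta -> lam/N
    t += 1                                               # line 9

print(mu)                          # posterior-mean estimate, near theta_true
print(1.0 / np.sqrt(N * s + lam))  # per-weight posterior standard deviation
```

The only structural changes from plain Adam are the perturbed weights in line 2, the prior term in line 5, and λ/N in place of δ in line 8, which is exactly the point of the comparison above.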
{
"chunk_id": "fd7fe5ae-b75a-4ed8-9b29-c56f748c71be",
"text": "The mean-field VI approach we use.\n\nThe mean-field approximation has also been a popular tool from very early on (Saul et al., 1996; Anderson & Peterson, 1987). These previous works lay the foundation of methods now used for Bayesian deep learning (Gal, 2016).\n\nAnother related work by Mandt et al. (2017) views SG descent as VI but requires additional effort to obtain posterior approximations, while in our approach the approximation is automatically obtained within an adaptive method.\n\nRecent approaches (Graves, 2011; Blundell et al., 2015) enable the application of Gaussian mean-field VI methods to large deep-learning problems. They do so by using gradient-based methods. In contrast, we propose to use natural-gradient methods which, as we show, lead to algorithms that are simpler to implement and require lower memory and computations than gradient-based methods. Natural gradients are also better suited for VI because they can improve convergence rates by exploiting the information geometry of posterior approximations (Khan et al., 2016). Some of our algorithms inherit these properties too.\n\nOur weight-perturbed algorithms are also related to global-optimization methods, e.g., Gaussian-homotopy continuation methods (Mobahi & Fisher III, 2015), the smoothed-optimization method (Leordeanu & Hebert, 2008), the graduated optimization method (Hazan et al., 2016), and stochastic search methods (Zhou & Hu, 2014). In particular, our algorithm is related to recent approaches in deep learning for exploration to avoid local minima, e.g., natural evolution strategy (Wierstra et al., 2014), entropy-SGD (Chaudhari et al., 2016), and noisy networks for reinforcement learning (Fortunato et al., 2018; Plappert et al., 2018). An earlier version of our work (Khan et al., 2017) focuses exclusively on this problem, and in this paper we modify it to be implemented within an adaptive algorithm like Adam.\n\nA recent independent work on noisy-Adam by Zhang et al. (2018) is algorithmically very similar to our Vadam method; however, their derivation lacks a strong motivation for the use of momentum. In our derivation, we incorporate a natural-momentum term based on Polyak's heavy-ball method, which provides a theoretical justification for the use of momentum. In addition, we analyze the approximation error introduced in Vadam and discuss ways to reduce it. Zhang et al. (2018) also propose an interesting extension by using K-FAC, which could find better approximations than the mean-field method. The goal of this approach is similar to other approaches that employ structured approximations (Ritter et al., 2018; Louizos & Welling, 2016; Sun et al., 2017). Many other works have explored a variety of approximation methods, e.g., Gal & Ghahramani (2016) use dropout for VI, Hernandez-Lobato & Adams (2015); Hasenclever et al. (2017) use expectation propagation, Li et al.\n\n2. Gaussian Mean-Field Variational Inference\n\nWe consider modeling of a dataset D = {D1, D2, ..., DN} by using a deep neural network (DNN). We assume a probabilistic framework where each data example Di is sampled independently from a probability distribution p(Di|θ) parameterized by a DNN with weights θ ∈ R^D, e.g., the distribution could be an exponential-family distribution whose mean parameter is the output of a DNN (Bishop, 2006).\n\nOne of the most popular approaches to estimate θ given D is maximum-likelihood estimation (MLE), where we maximize the log-likelihood: log p(D|θ). This optimization problem can be efficiently solved by applying SG methods",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 4,
"total_chunks": 91,
"char_count": 3579,
"word_count": 543,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3bee0c6c-82c8-4232-8a8f-462560c5e427",
"text": "such as RMSProp, AdaGrad and Adam. For large problems, these methods are extremely popular, partly due to the simplicity and efficiency of their implementations (see Fig. 1 for Adam's pseudocode).\n\nOne of the goals of Bayesian deep learning is to go beyond MLE and estimate the posterior distribution of θ to obtain an uncertainty estimate of the weights. Unfortunately, the computation of the posterior is challenging in deep models. The posterior is obtained by specifying a prior distribution p(θ) and then using Bayes' rule: p(θ|D) := p(D|θ)p(θ)/p(D). This requires computation of the normalization constant p(D) = ∫ p(D|θ)p(θ) dθ, which is a very difficult task for DNNs. One source of the difficulty is the size of θ and D, which are usually very large in deep learning. Another source is the nonconjugacy of the likelihood p(Di|θ) and the prior p(θ), i.e., the two distributions do not take the same form with respect to θ (Bishop, 2006). As a result, the product p(D|θ)p(θ) does not take a form with which p(D) can be easily computed. Due to these issues, Bayesian inference in deep learning is computationally challenging.\n\nDespite this, a direct application of adaptive learning-rate methods for VI may result in algorithms that use more computation and memory than necessary, and also require more implementation effort. Compared to MLE, the memory and computation costs increase because the number of parameters to be optimized is doubled and we now have two vectors µ and σ to estimate. Using adaptive methods increases this cost further as these methods require storing the scaling vectors that adapt the learning rate for both µ and σ. In addition, using existing codebases requires several modifications as they are designed and optimized for MLE. For example, we need to make changes in the computation graph where the objective function is changed to the variational objective and network weights are replaced by random variables. Together, these small issues make VI more difficult to implement and execute than MLE.\n\nThe algorithms developed in this paper solve some of these issues and can be implemented within Adam with minimal changes to the code. We derive our algorithm by approximating a natural-gradient method and then using a natural-momentum method. We now describe our method in detail.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 5,
"total_chunks": 91,
"char_count": 2312,
"word_count": 374,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4ad40083-0978-4a3d-8e29-b0cf0f19ea9d",
"text": "Variational inference (VI) simplifies the problem by approximating p(θ|D) with a distribution q(θ) whose normalizing constant is relatively easier to compute. Following previous work (Ranganath et al., 2014; Blundell et al., 2015; Graves, 2011), we choose both p(θ) and q(θ) to be Gaussian distributions with diagonal covariances:\n\np(θ) := N(θ|0, I/λ), q(θ) := N(θ|µ, diag(σ²)), (1)\n\nwhere λ ∈ R is a known precision parameter with λ > 0, and µ, σ ∈ R^D are the mean and standard deviation of q. The distribution q(θ) is known as the Gaussian mean-field variational distribution and its parameters µ and σ² can be obtained by maximizing the following variational objective:\n\nL(µ, σ²) := Σ_{i=1}^{N} E_q[log p(Di|θ)] + E_q[log p(θ)/q(θ)]. (2)\n\nA straightforward approach used in the previous work (Ranganath et al., 2014; Blundell et al., 2015; Graves, 2011) is to maximize L by using an SG method, e.g., we can use the following update:\n\nµ_{t+1} = µ_t + ρ_t ∇̂_µ L_t, σ_{t+1} = σ_t + δ_t ∇̂_σ L_t, (3)\n\nwhere t is the iteration number, ∇̂_x L_t denotes an unbiased SG estimate of L at µ_t, σ²_t with respect to x, and ρ_t, δ_t > 0 are learning rates which can be adapted using methods such\n\n3. Approximate Natural-Gradient VI\n\nIn this section, we introduce a natural-gradient method to perform VI and then propose several approximations that enable implementation within Adam.\n\nNatural-gradient VI methods exploit the Riemannian geometry of q(θ) by scaling the gradient with the inverse of its Fisher information matrix (FIM). We build upon the natural-gradient method of Khan & Lin (2017), which simplifies the update by avoiding a direct computation of the FIM. The main idea is to use the expectation parameters of the exponential-family distribution to compute natural gradients in the natural-parameter space. We provide a brief description of their method in Appendix B.\n\nFor Gaussian mean-field VI, the method of Khan & Lin (2017) gives the following update:\n\nNGVI: µ_{t+1} = µ_t + β_t [σ²_{t+1} ◦ ∇̂_µ L_t], (4)\nσ⁻²_{t+1} = σ⁻²_t − 2β_t [∇̂_{σ²} L_t], (5)\n\nwhere β_t > 0 is a scalar learning rate and a ◦ b denotes the element-wise product between vectors a and b. We refer to this update as natural-gradient variational inference (NGVI). A detailed derivation is given in Appendix C.\n\nThe NGVI update differs from (3) in one major aspect: the learning rate β_t in (4) is adapted by the variance σ²_{t+1}.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 6,
"total_chunks": 91,
"char_count": 2342,
"word_count": 404,
"chunking_strategy": "semantic"
},
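The NGVI update (4)-(5) quoted in this chunk can be checked on a conjugate toy model where the gradients of the variational objective L are available in closed form. The model and every constant below are assumptions for illustration; for a unit-noise Gaussian likelihood with a N(0, 1/λ) prior, NGVI should recover the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Conjugate toy model (an assumption): p(D_i|theta) = N(D_i|theta, 1),
# prior N(theta|0, 1/lam). The exact posterior is Gaussian.
N, lam = 50, 1.0
data = 3.0 + rng.normal(size=N)

mu, sigma2 = 0.0, 1.0
beta = 0.1
for _ in range(200):
    # Closed-form gradients of L(mu, sigma2) for this model
    grad_mu = data.sum() - (N + lam) * mu
    grad_sigma2 = -0.5 * (N + lam) + 0.5 / sigma2
    # Eq. (5): the precision is updated first
    sigma2 = 1.0 / (1.0 / sigma2 - 2 * beta * grad_sigma2)
    # Eq. (4): the learning rate is scaled by the *new* variance
    mu = mu + beta * sigma2 * grad_mu

# NGVI converges to the exact posterior N(sum(D)/(N+lam), 1/(N+lam))
print(mu, sigma2)
```

Note how the variance update is a convex combination of the old precision and the target precision N + λ, which is the "adapted learning rate" behavior the text points out.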
{
"chunk_id": "90fcc69e-129b-4b0f-9e51-a5e7c8f40293",
"text": "as RMSprop or AdaGrad. These approaches make use of existing codebases for adaptive learning-rate methods to perform VI, which can handle many network architectures and can scale well to large datasets.\n\nThis plays a crucial role in reducing the NGVI update to an Adam-like update, as we show in the next section. The update requires a constraint σ² > 0 but, as we show in Section 3.2, we can eliminate this constraint using an approximation.\n\n3.1. Variational Online-Newton (VON)\n\nWe start by expressing the NGVI update in terms of the MLE objective, so that we can directly compute gradients on the MLE objective using backpropagation. We start by defining the MLE objective (denoted by f) and minibatch stochastic-gradient estimates (denoted by ĝ):\n\nf(θ) := (1/N) Σ_{i=1}^{N} f_i(θ), ĝ(θ) := (1/M) Σ_{i∈M} ∇_θ f_i(θ), (6)\n\nwhere f_i(θ) := −log p(Di|θ) is the negative log-likelihood of the i'th data example, and the minibatch M contains M examples chosen uniformly at random. Similarly, we can obtain a minibatch stochastic-approximation of the Hessian which we denote by ∇̂²_θθ f(θ).\n\nAs we show in Appendix D, the NGVI update can be written in terms of the stochastic gradients and Hessian of f:\n\nVON: µ_{t+1} = µ_t − β_t (ĝ(θ_t) + λ̃µ_t)/(s_{t+1} + λ̃), (7)\ns_{t+1} = (1 − β_t)s_t + β_t diag[∇̂²_θθ f(θ_t)], (8)\n\nwhere a/b is an element-wise division operation between vectors a and b, and we have approximated the expectation with respect to q using one Monte-Carlo (MC) sample θ_t ∼ N(θ|µ_t, σ²_t) with σ²_t := 1/[N(s_t + λ̃)] and λ̃ := λ/N.\n\npositive, it will remain positive in the subsequent iterations. Using this approximation to update s_t in (8) and denoting the vector of ĥ_j(θ) by ĥ(θ), we get:\n\nVOGN: s_{t+1} = (1 − β_t)s_t + β_t ĥ(θ_t). (10)\n\nUsing this update in VON, we get the \"variational online Gauss-Newton\" (VOGN) algorithm.\n\nThe GGN approximation is proposed by Graves (2011) for mean-field Gaussian VI to derive a fast gradient-based method (see Eq. (17) in his paper¹). This approximation is very useful for our natural-gradient method since it eliminates the constraint on σ², giving VOGN an algorithmic advantage over VON.\n\nHow good is this approximation? For an MLE problem, the approximation error of the GGN in (9) decreases as the model-fit improves during training (Martens, 2014). For VI, we expect the same; however, since θ are sampled from q, the expectation of the error is unlikely to be zero. Therefore, the solutions found by VOGN will typically differ from those found by VON, but their performances are expected to be similar.\n\nAn issue with VOGN is that its implementation is not easy within existing deep-learning codebases. This is because these codebases are optimized to directly compute the sum",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 7,
"total_chunks": 91,
"char_count": 2729,
"word_count": 459,
"chunking_strategy": "semantic"
},
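The VON update (7)-(8) in this chunk can be sketched on an assumed toy problem where f_i(θ) = 0.5·||θ − D_i||², so the per-example diagonal Hessian is exactly a vector of ones and eq. (8) needs no stochastic Hessian approximation. Nothing here is the authors' code; the data, learning rate, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup (illustrative): f_i(theta) = 0.5*||theta - D_i||^2, so the
# per-example diagonal Hessian equals 1 in every coordinate.
N, D, lam = 100, 2, 1.0
theta_true = np.array([2.0, -1.0])
data = theta_true + rng.normal(size=(N, D))
lam_t = lam / N                       # lambda-tilde := lambda / N

mu, s = np.zeros(D), np.ones(D)
beta = 0.05
for _ in range(2000):
    sigma2 = 1.0 / (N * (s + lam_t))  # sigma_t^2 = 1/[N(s_t + lam_t)]
    theta = mu + np.sqrt(sigma2) * rng.normal(size=D)  # one MC sample
    Di = data[rng.integers(N)]
    g = theta - Di                    # minibatch gradient, M = 1
    h = np.ones(D)                    # diag of the minibatch Hessian
    s = (1 - beta) * s + beta * h                      # eq. (8)
    mu = mu - beta * (g + lam_t * mu) / (s + lam_t)    # eq. (7), uses s_{t+1}

print(mu)  # approximate posterior mean, near theta_true
```

The ordering matters: s is updated first because eq. (7) divides by s_{t+1}, not s_t.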
{
"chunk_id": "1d570d23-aee3-4234-8efa-26af1f97f501",
"text": "The update can be easily modified when multiple samples are used. This update can leverage backpropagation to perform the gradient and Hessian computation. Since the scaling vector s_t contains an online estimate of the diagonal of the Hessian, we call this the \"variational online-Newton\" (VON) method. VON is expected to perform as well as NGVI, but does not require the gradients of the variational objective.\n\nThe Hessian can be computed by using methods such as automatic-differentiation or the reparameterization trick. However, since f is a non-convex function, the Hessian can be negative which might make σ² negative, in which case the method will break down. One could use a constrained optimization method to solve this issue, but this might be difficult to implement and execute (we discuss this briefly in Appendix D.1). In the next section, we propose a simple fix to this problem by using an approximation.\n\n3.2. Variational Online Gauss-Newton (VOGN)\n\nTo avoid negative variances in the VON update, we propose to use the Generalized Gauss-Newton (GGN) approximation (Schraudolph, 2002; Martens, 2014; Graves, 2011):\n\n∇²_{θj θj} f(θ) ≈ (1/M) Σ_{i∈M} [∇_{θj} f_i(θ)]² := ĥ_j(θ), (9)\n\nwhere θ_j is the j'th element of θ. This approximation will always be nonnegative; therefore, if the initial σ² at t = 1 is\n\nof the gradients over minibatches, and do not support computation of individual gradients as required in (9). A solution for such computations is discussed by Goodfellow (2015), but this requires additional implementation effort. Instead, we address this issue by using another approximation in the next section.\n\n3.3. Variational RMSprop (Vprop)\n\nTo simplify the implementation of VOGN, we propose to approximate the Hessian by the gradient magnitude (GM) (Bottou et al., 2016):\n\n∇²_{θj θj} f(θ) ≈ [(1/M) Σ_{i∈M} ∇_{θj} f_i(θ)]² = [ĝ_j(θ)]². (11)\n\nCompared to the GGN, which computes the sum of squared gradients, this approximation instead computes the square of the sum. This approximation is also used in RMSprop, which uses the following update given weights θ_t:\n\nRMSprop: θ_{t+1} = θ_t − α_t ĝ(θ_t)/(√s̄_{t+1} + δ), (12)\ns̄_{t+1} = (1 − β_t)s̄_t + β_t [ĝ(θ_t) ◦ ĝ(θ_t)], (13)\n\nwhere s̄_t is the vector that adapts the learning rate and δ is a small positive scalar added to avoid dividing by zero.\n\n¹There is a discrepancy between Eq. (17) and (12) in Graves (2011); however, the text below Eq. (12) mentions the relationship to the FIM, from which it is clear that the GGN approximation is used.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 8,
"total_chunks": 91,
"char_count": 2461,
"word_count": 407,
"chunking_strategy": "semantic"
},
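The difference between the GGN (9) and the GM (11) discussed in this chunk is purely algebraic — mean of squared per-example gradients versus squared mean gradient — and can be verified directly. The gradient matrix below is random illustrative data, not gradients from any real model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-example gradients for an illustrative minibatch (M = 5, D = 3);
# any values work here, since the comparison is distribution-free.
G = rng.normal(size=(5, 3))   # row i holds the gradient of f_i

ggn = (G ** 2).mean(axis=0)   # eq. (9): mean of squared gradients
gm = G.mean(axis=0) ** 2      # eq. (11): square of the mean gradient

# The GM never exceeds the GGN element-wise (Jensen's inequality),
# and the two coincide for a minibatch of size M = 1.
print(np.all(gm <= ggn))                                                # True
print(np.allclose((G[:1] ** 2).mean(axis=0), G[:1].mean(axis=0) ** 2))  # True
```

This is the algebra behind the later remark that VOGN with M = 1 is "as easy as Vprop to implement": at M = 1 the sum has a single term, so square-of-sum and sum-of-squares agree.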
{
"chunk_id": "019ef238-6c24-4826-b65d-c99380e4a6c4",
"text": "The update of s̄_t uses the GM approximation to the Hessian (Bottou et al., 2016). Adam and AdaGrad also use this approximation.\n\nUsing the GM approximation and an additional modification in the VON update, we can make the VON update very similar to RMSprop. Our modification involves taking the square-root over s_{t+1} in (7) and then using the GM approximation for the Hessian. We also use different learning rates α_t and β_t to update µ and s, respectively. The resulting update is very similar to the RMSprop update:\n\nVprop: µ_{t+1} = µ_t − α_t (ĝ(θ_t) + λ̃µ_t)/(√s_{t+1} + λ̃),\ns_{t+1} = (1 − β_t)s_t + β_t [ĝ(θ_t) ◦ ĝ(θ_t)], (14)\n\nwhere θ_t ∼ N(θ|µ_t, σ²_t) with σ²_t := 1/[N(s_t + λ̃)]. We call this update \"variational RMSprop\" or simply \"Vprop\".\n\nThe Vprop update resembles RMSprop but with three differences (highlighted in red). First, the gradient in Vprop is evaluated at the weights θ_t sampled from N(θ|µ_t, σ²_t). This is a weight-perturbation where the variance σ²_t of the perturbation is obtained from the vector s_t that adapts the learning rate. The variance is also the uncertainty estimate. Therefore, VI can be performed simply by using an RMSprop update with a few simple changes.\n\nof size M = 1, we have w = 1 and the GM is an unbiased estimator of the GGN, but when M = N it is purely the magnitude of the gradient and does not contain any second-order information.\n\nTherefore, if our focus is to obtain uncertainty estimates with good accuracy, VOGN with M = 1 might be a good choice since it is as easy as Vprop to implement. However, this might require a small learning-rate and converge slowly. Vprop with M > 1 will converge fast and is much easier to implement than VOGN with M > 1, but might result in slightly worse estimates. Using Vprop with M = 1 may not be as good because of the square-root² over s_t.\n\n4. Variational Adam (Vadam)\n\nWe now propose a natural-momentum method which will enable an Adam-like update.\n\nMomentum methods generally take the following form that uses Polyak's heavy-ball method:\n\nθ_{t+1} = θ_t + ᾱ_t ∇_θ f₁(θ_t) + γ̄_t (θ_t − θ_{t−1}), (16)\n\nwhere f₁ is the function we want to maximize and the",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 9,
"total_chunks": 91,
"char_count": 2151,
"word_count": 377,
"chunking_strategy": "semantic"
},
{
"chunk_id": "17471084-fc83-49af-95c0-50bd7ab5c476",
"text": "last term is the momentum term. We propose a natural-momentum version of this algorithm which employs natural-gradients instead of the gradients. We assume q to be an exponential-family distribution with natural-parameter η. We propose the following natural-momentum method in the natural-parameter space:\n\nη_{t+1} = η_t + ᾱ_t ∇̃_η L_t + γ̄_t (η_t − η_{t−1}), (17)\n\nwhere ∇̃ denotes the natural-gradients in the natural-parameter space, i.e., the gradient scaled by the Fisher information matrix of q(θ).\n\nWe show in Appendix E that, for Gaussian q(θ), we can express the above update as a VON update with momentum³:\n\nµ_{t+1} = µ_t − ᾱ_t [1/(s_{t+1} + λ̃)] ◦ (∇_θ f(θ_t) + λ̃µ_t) + γ̄_t [(s_t + λ̃)/(s_{t+1} + λ̃)] ◦ (µ_t − µ_{t−1}), (18)\ns_{t+1} = (1 − ᾱ_t) s_t + ᾱ_t ∇²_θθ f(θ_t), (19)\n\nwhere θ_t ∼ N(θ|µ_t, σ²_t) with σ²_t = 1/[N(s_t + λ̃)]. This update is similar to (17), but here the learning rates are adapted. An attractive feature of this update is that it is very similar to Adam. Specifically, the Adam update shown in\n\nThe second difference between Vprop and RMSprop is that Vprop has an extra term λ̃µ_t in the update of µ_t which is due to the Gaussian prior. Finally, the third difference is that the constant δ in RMSprop is replaced by λ̃.\n\nAnalysis of the GM approximation\n\nIt is clear that the GM approximation might not be the best approximation of the Hessian. Taking the square of a sum leads to a sum with M² terms which, depending on the correlations between the individual gradients, would either shrink or expand the estimate. The following theorem formalizes this intuition. It states that, given a minibatch of size M, the expectation of the GM approximation is somewhere between the GGN and the square of the full-batch gradient.\n\nTheorem 1. Denote the full-batch gradient with respect to θ_j by g_j(θ) and the corresponding full-batch GGN approximation by h_j(θ). Suppose minibatches M are sampled from the uniform distribution p(M) over all (N choose M) minibatches, and denote a minibatch gradient by ĝ_j(θ; M); then the expected value of the GM approximation is the following:\n\nE_{p(M)}[ĝ_j(θ; M)²] = w h_j(θ) + (1 − w)[g_j(θ)]², (15)\n\nwhere w = (1/M)(N − M)/(N − 1).\n\nA proof is given in Appendix G. This result clearly shows the bias introduced in the GM approximation and also that\n\n²Note that the square-root does not affect a fixed point (see Appendix H) but it might still affect the steps taken by the algorithm.\n³This is not an exact update for (17).",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 10,
"total_chunks": 91,
"char_count": 2386,
"word_count": 418,
"chunking_strategy": "semantic"
},
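The identity (15) in Theorem 1 can be verified numerically by enumerating every minibatch for a small N and M. The per-example gradient values below are arbitrary illustrative numbers, since the identity holds for any of them.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Arbitrary per-example scalar gradients (N = 6, M = 2); eq. (15) is
# distribution-free, so any values serve as a check.
N, M = 6, 2
grads = rng.normal(size=N)

g = grads.mean()          # full-batch gradient g_j
h = (grads ** 2).mean()   # full-batch GGN approximation h_j

# Exact expectation of the GM over all (N choose M) minibatches
lhs = np.mean([grads[list(c)].mean() ** 2 for c in combinations(range(N), M)])

# Theorem 1: E[ghat^2] = w*h + (1 - w)*g^2 with w = (1/M)(N - M)/(N - 1)
w = (N - M) / (M * (N - 1))
rhs = w * h + (1 - w) * g ** 2

print(np.isclose(lhs, rhs))  # True
```

Setting M = 1 gives w = 1 (GM equals GGN in expectation) and M = N gives w = 0 (GM is purely the squared full-batch gradient), matching the surrounding discussion.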
{
"chunk_id": "acc208ac-24f4-4d40-97c1-cb8250f9c2a5",
"text": "the bias increases with the minibatch size. For a minibatch\n\nWe make one approximation in Appendix E to derive it.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 11,
"total_chunks": 91,
"char_count": 113,
"word_count": 20,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d934fc45-dca3-4fb7-b9c8-a9a38a44c182",
"text": "Fig. 1 can be expressed as the following adaptive version of (16), as shown in Wilson et al. (2017)⁴:\n\nθ_{t+1} = θ_t − α̃_t [1/(√ŝ_{t+1} + δ)] ◦ ∇_θ f(θ_t) + γ̃_t [(√ŝ_t + δ)/(√ŝ_{t+1} + δ)] ◦ (θ_t − θ_{t−1}), (20)\nŝ_{t+1} = γ₂ ŝ_t + (1 − γ₂) [ĝ(θ_t)]², (21)\n\nwhere α̃_t, γ̃_t are appropriately defined in terms of Adam's learning rates α and γ₁: α̃_t := α (1 − γ₁)/(1 − γ₁^t) and γ̃_t := γ₁ (1 − γ₁^{t−1})/(1 − γ₁^t).\n\nUsing a similar procedure as the derivation of Vprop, we can express the update as an Adam-like update, which we call \"variational Adam\" or simply \"Vadam\". A pseudocode is given in Fig. 1, where we use the learning rates of the Adam update instead of choosing them according to ᾱ_t and γ̄_t. A derivation is given in Appendix E.4.\n\n5. Variational AdaGrad (VadaGrad)\n\nVprop and Vadam perform variational inference, but they can be modified to perform optimization instead of inference.\n\nVI with a negative log-likelihood F(θ), and when τ = 0, it corresponds to VO. Similar objectives have been proposed in existing works (Blundell et al., 2015; Higgins et al., 2016) where τ is used to improve convergence.\n\nFor twice-differentiable F, we can follow a similar derivation as Section 3, and obtain the following algorithm:\n\nµ_{t+1} = µ_t − α_t (∇̂_θ F(θ) + τλµ_t)/(s_{t+1} + τλ), (23)\ns_{t+1} = (1 − τβ_t)s_t + β_t ∇̂²_θθ F(θ), (24)\n\nwhere θ_t ∼ N(θ|µ_t, σ²_t) with σ²_t := 1/(s_t + τλ). This algorithm is identical to the VON algorithm when τ = 1, but when τ = 0, we perform VO with an algorithm which is a diagonal version of the Variational Adaptive-Newton (VAN) algorithm proposed in Khan et al. (2017). By setting the value of τ between 0 and 1, we can interpolate between VO and VI. When the function is not differentiable, we can still compute the derivative of E_q[F(θ)] by using methods such as REINFORCE (Williams, 1992). When the Hessian is difficult to compute, we can employ a GM approximation and take the square-root as we did in Vprop. For τ = 0, the updates turn out to be similar to AdaGrad, which we call \"variational AdaGrad\" or simply \"VadaGrad\". The exact updates are given in Appendix F.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 12,
"total_chunks": 91,
"char_count": 2097,
"word_count": 381,
"chunking_strategy": "semantic"
},
{
"chunk_id": "55777fb6-d61d-4f26-843d-ba026a1063cd",
"text": "Unlike Vprop\nWe now derive such an algorithm which turns out to be a\nand Vadam, the scaling vector st in VadaGrad is a weighted\nvariational version of AdaGrad.\nsum of the past gradient-magnitudes. Therefore, the entries\nWe follow Staines & Barber (2013) who consider mini- in st never decrease, and the variance estimate of VadaGrad\nmization of black-box functions F(θ) via the variational never expands. This implies that it is highly likely that\noptimization5 (VO) framework. In this framework, instead q(θ) will converge to a Dirac delta and therefore arrive at a\nof directly minimizing F(θ), we minimize its expectation minimum of F. Eq [F(θ)] under a distribution q(θ) := N(θ|µ, σ2) with\nrespect to µ and σ2. The main idea behind VO is that the ex- 6.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 13,
"total_chunks": 91,
"char_count": 756,
"word_count": 129,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ab7c2d35-7a69-4047-b3c3-c720c3c425d9",
"text": "Results\npectation can be used as a surrogate to the original optimization problem since minθ F(θ) ≤Eq [F(θ)]. The equality In this section, our goal is to show that the quality of\nis attained when σ2 →0, i.e., all mass of N(θ|µ, σ2) is at the uncertainty approximations obtained using our algothe mode. The main advantage of VO is that Eq [F(θ)] is rithms are comparable to existing methods, and compudifferentiable even when F itself is non-differentiable. This tation of uncertainty is scalable. We present results on\nway we can use SG optimizers to solve such problems. Bayesian logistic regression for classification, Bayesian\nneural networks for regression, and deep reinforcement\nSimilarly to Vprop, we can derive an algorithm for VO\nlearning. An additional result illustrating avoidance of\nby noting that VO can be seen as a special case of the VI\nlocal-minima using Vadam is in Appendix L. Another result\nproblem (2) where the KL term is absent and F(θ) is the\nshowing benefits of weight-perturbation in Vadam is in Apnegative log-likelihood. With this in mind, we define the\npendix M.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 14,
"total_chunks": 91,
"char_count": 1093,
"word_count": 181,
"chunking_strategy": "semantic"
},
{
"chunk_id": "dfa422d9-9e74-438e-b42d-ec579804e322",
"text": "The code to reproduce our results is available at\nfollowing variational objective with an additional parameter\nhttps://github.com/emtiyaz/vadam.\nτ ∈[0, 1]: Uncertainty Estimation in Logistic Regression LF (µ, σ2) := −Eq [F(θ)] + τEq log . (22)\nq(θ)\nIn this experiment, we compare the posterior approximaThe parameter τ allows us to interpolate between inference tions found with our algorithms to the optimal variational\nand optimization. When τ = 1, the objective corresponds to approximation that minimizes the variational objective. For\nBayesian logistic regression we can compute the optimal 4Wilson et al. (2017) do not use the constant δ, but in Adam a\nsmall constant is added for numerical stability. mean-field Gaussian approximations using the method de-\n5The exact conditions on F under which VO can be applied scribed in Marlin et al. (2011) (refer to as 'MF-Exact'),\nare also discussed by Staines & Barber (2013). and compare it to the following methods: VOGN with Bayesian Deep Learning by Weight-Perturbation in Adam 10-3\n10 VOGN-1 ELBO 0.32 10 MAP 4 Vadam MF-Exact 0.28\nVadam 8 VOGN-1 3 Negative 0.24\n0.16 6 KL-divergence\n5 2 0.14 Weight LogLoss 4\n0.12\n1 30 Symmetric 2\nKL 20\n0 0 10 0 0 5 10 15 20 1 8 16 32 64 0\nWeight 1 MF-Exact VOGN-1 Vadam Minibatch size\n(a) (b) (c) Experiments on Bayesian logistic regression showing (a) posterior approximations on a toy example, (b) performance on\n'USPS-3v5' measuring negative ELBO, log-loss, and the symmetric KL divergence of the posterior approximation to MF-Exact, (c)\nsymmetric KL divergence of Vadam for various minibatch sizes on 'Breast-Cancer' compared to VOGN with a minibatch of size 1. minibatch size M = 1 and a momentum term (referred matches VOGN-1. The results are still different because\nto as 'VOGN-1'), and Vadam with M ≥1 (referred to Vadam does not reduce to VOGN-1, even when M = 1 due\nas 'Vadam'). 
Since our goal is to compare the accuracy to the use of the square-root over st.\nof posterior approximations and not the speed of convergence, we run both the methods for many iterations with a 6.2. Uncertainty Estimation in Neural Network\nsmall learning rate to make sure that they converge. We\nWe show results on the standard UCI benchmark. We repeatuse three datasets: a toy dataset (N = 60, D = 2),\nthe experimental setup used in Gal & Ghahramani (2016).USPS-3vs5 (N = 1781, D = 256) and Breast-Cancer\nFollowing their work, we use a neural network with one(N = 683, D = 10). Details are in Appendix I.\nhidden layer, 50 hidden units, and ReLU activation funcFig. 2(a) visualizes the approximations on a two- tions. We use the 20 splits of the data provided by Gal\ndimensional toy example from Murphy (2012).",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 15,
"total_chunks": 91,
"char_count": 2688,
"word_count": 456,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2762f46e-de48-4f5d-9ea6-51b66cd2f4db",
"text": "We use\nposterior distribution is shown with the contour in the back- Bayesian optimization to select the prior precision λ and\nground. Both, Vadam and VOGN-1 find approximations noise precision of the Gaussian likelihood. Further details\nthat are different from MF-Exact, which is clearly due to of the experiments are given in Appendix J.\ndifferences in the type of Hessian approximations they use. We compare Vadam to MC-Dropout (Gal & Ghahramani,\nFor real datasets, we compare performances using three met- 2016) using the results reported in Gal & Ghahramani\nrics. First, the negative of the variational objective on the (2016). We also compare to an SG method using the repatraining data (the evidence lower-bound or ELBO), log-loss rameterization trick and the Adam optimizer (referred to as\non the test data, and the symmetric KL distance between 'BBVI'). For a fair comparison, the Adam optimizer is run\nMF-Exact and the approximation found by a method. Fig. with the same learning rates as Vadam, although these can\n2(b) shows the results averaged over 20 random splits of be tuned further to get better performance.\nthe USPS-3vs5 dataset. ELBO and log-loss are comparaTable 1 shows the performance in terms of the test RMSEble for all methods, although Vadam does slightly worse\nand the test log-likelihood. The better method out of BBVIon ELBO and VOGN-1 has slightly higher variance for\nand Vadam is shown in boldface found using a paired t-testlog-loss. However, performance on the KL distance clearly\nwith p-value > 0.01. Both methods perform comparably,shows the difference in the quality of posterior approximawhich supports our conclusion, however, MC-Dropout out-tions. VOGN-1 performs quite well since it uses an unbiased\nperforms both the methods. We also find that VOGN showsapproximation of the GNN. 
Vadam does worse due to the\nsimilar results to Vadam and BBVI (we omit the results duebias introduced in the GM approximation with minibatch\nto lack of space).",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 17,
"total_chunks": 91,
"char_count": 1981,
"word_count": 316,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7fc2ef7a-267d-48e8-a67c-dd606dbb1b98",
"text": "The convergence plots for the final runs isM > 1, as indicated by Theorem 1.\ngiven in Appendix J. Fig. 2(c) further shows the effect of M where, for each M,\nFor many tasks, we find that VOGN and Vadam convergewe plot results for 20 random initializations on one split\nmuch faster than BBVI. An example is shown in Figure 3of the Breast-Cancer dataset. As we decrease M, Vadam's\n(see the first 3 figures in the left; details are in Appendix J).performance gets better, as expected. For M = 1, it closely",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 18,
"total_chunks": 91,
"char_count": 502,
"word_count": 92,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3d79c5f8-176a-4f7d-b250-2f4dd7184fc6",
"text": "Bayesian Deep Learning by Weight-Perturbation in Adam Performance comparisons for BNN regression. The better method out of BBVI and Vadam is shown in boldface according to a\npaired t-test with p-value> 0.01. Both methods perform comparably but MC-Dropout outperforms them. Test RMSE Test log-likelihood\nDataset N D MC-Dropout BBVI Vadam MC-Dropout BBVI Vadam\nBoston 506 13 2.97 ± 0.19 3.58 ± 0.21 3.93 ± 0.26 -2.46 ± 0.06 -2.73 ± 0.05 -2.85 ± 0.07\nConcrete 1030 8 5.23 ± 0.12 6.14 ± 0.13 6.85 ± 0.09 -3.04 ± 0.02 -3.24 ± 0.02 -3.39 ± 0.02\nEnergy 768 8 1.66 ± 0.04 2.79 ± 0.06 1.55 ± 0.08 -1.99 ± 0.02 -2.47 ± 0.02 -2.15 ± 0.07\nKin8nm 8192 8 0.10 ± 0.00 0.09 ± 0.00 0.10 ± 0.00 0.95 ± 0.01 0.95 ± 0.01 0.76 ± 0.00\nNaval 11934 16 0.01 ± 0.00 0.00 ± 0.00 0.00 ± 0.00 3.80 ± 0.01 4.46 ± 0.03 4.72 ± 0.22\nPower 9568 4 4.02 ± 0.04 4.31 ± 0.03 4.28 ± 0.03 -2.80 ± 0.01 -2.88 ± 0.01 -2.88 ± 0.01\nWine 1599 11 0.62 ± 0.01 0.65 ± 0.01 0.66 ± 0.01 -0.93 ± 0.01 -1.00 ± 0.01 -1.01 ± 0.01\nYacht 308 6 1.11 ± 0.09 2.05 ± 0.06 1.32 ± 0.10 -1.55 ± 0.03 -2.41 ± 0.02 -1.70 ± 0.03 M = 1, S = 1 M = 1, S = 16 M = 128, S = 16\n2.0 2.0 2.0 5000\nBBVI\nVadam 4000\nVOGN\n1.5 1.5 1.5 3000 rewards log2loss log2loss log2loss 2000\nTest Test Test\n1.0 1.0 1.0 1000 Cumulative 0 Vadam SGD-Plain\nVadaGrad SGD-Explore\n0.5 0.5 0.5\n0 2000 4000 0 2000 4000 0 2000 4000 −1000 0 500 1000 1500 2000 2500 3000\nIteration Iteration Iteration Time step (x1000) The first 3 figures in the left show results on the Australian-Scale dataset using a neural network with a hidden layer of 64 units\nfor different minibatch sizes M and number of MC samples S.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 19,
"total_chunks": 91,
"char_count": 1607,
"word_count": 330,
"chunking_strategy": "semantic"
},
{
"chunk_id": "171ee07e-b2a2-40c2-adfc-8a48aa14a3e9",
"text": "We see that VOGN converges the fastest, and Vadam too performs well\nfor M = 1. The rightmost figure shows results for exploration in deep RL where Vadam and VadaGrad outperform SGD-based methods. We have observed similar trends on other datasets.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 20,
"total_chunks": 91,
"char_count": 246,
"word_count": 41,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1a936732-41b0-4b9b-aff0-13538393f5d8",
"text": "This suggests that the exploration strategy has a high impact\non the early learning performance in the Half-Cheetah task,\n6.3. Exploration in Deep Reinforcement Learning and the effect of good exploration decreases over time as\nthe agent collect more informative training samples. A good exploration strategy is crucial in reinforcement learning (RL) since the data is sequentially collected. We show\nthat weight-perturbation in Vadam improves exploration 7. Due to space constraints, we only provide a brief In this paper, we present new VI algorithms which are as\nsummary of our results, and give details in Appendix K. simple to implement and execute as algorithms for MLE. We consider the deep deterministic policy gradient (DDPG) We obtain them by using a series of approximations and a\nmethod for the Half-Cheetah task using a two-layer neural natural momentum method for a natural-gradient VI method.\nnetworks with 400 and 300 ReLU hidden units (Lillicrap The resulting algorithms can be implemented within Adam\net al., 2015).",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 21,
"total_chunks": 91,
"char_count": 1033,
"word_count": 162,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8fc08f31-2586-4db4-ab18-6585a2a9bbb4",
"text": "We compare Vadam and VadaGrad to two with minimal changes. Our empirical findings confirm that\nSGD methods, one of which does exploration (referred to our proposed algorithms obtain comparable uncertainty estias 'SGD-Explore'), and the other does not (referred to as mates to existing VI methods, but require less computational\n'SGD-plain'). The rightmost plot in Figure 3 shows the and implementation effort6.\ncumulative rewards (higher is better) of each method against An interesting direction we hope to pursue in the future is\ntraining iterations. VadaGrad and Vadam clearly learn faster to generalize our natural-gradient approach to other types\nthan both SGD-Plain and SGD-Explore. We also compare of approximation, e.g., exponetial-family distributions and\nthe performances against Adam variants of SGD-Plain and their mixtures. We would also like to further explore the\nSGD-Explore. Their results, given in the Appendix K, show application to areas such as RL and stochastic optimization.\nthat Vadam and VadaGrad still learn faster, but only in the\nbeginning and Adam based methods can catch up quickly. 6We made many new changes in this camera-ready version of\nthe paper. A list of the changes is given in Appendix A. Bayesian Deep Learning by Weight-Perturbation in Adam Acknowledgements Gal, Y. Uncertainty in Deep Learning. PhD thesis, University of\nCambridge, 2016. We thank the anonymous reviewers for their feedback. We\ngreatly appreciate many insightful discussions with Aaron Gal, Y. and Ghahramani, Z. Dropout as a Bayesian approximation:\nRepresenting model uncertainty in deep learning.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 22,
"total_chunks": 91,
"char_count": 1606,
"word_count": 241,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fc36115c-d4b7-41d6-9b3c-7faecb5b8df5",
"text": "In InternaMishkin (UBC) and Frederik Kunstner (EPFL), and also tional Conference on Machine Learning, pp. 1050–1059, 2016.\nthank them for their help on carrying out experiments and reviewing the manuscript. We would also like to thank Roger Goodfellow, I. Efficient Per-Example Gradient Computations. Grosse and David Duvenaud from the University of Toronto ArXiv e-prints, October 2015.\nfor useful discussions. We would like to thank Zuozhu Liu Graves, A. Practical variational inference for neural networks.\n(SUTD, Singapore) for his help with the experiment on deep In Advances in Neural Information Processing Systems, pp. RL and logistic regression. Finally, we are thankful for the 2348–2356, 2011.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 23,
"total_chunks": 91,
"char_count": 704,
"word_count": 104,
"chunking_strategy": "semantic"
},
{
"chunk_id": "77ead0cd-16c9-43dc-8fb5-ceecd6356869",
"text": "RAIDEN computing system at the RIKEN Center for AI Hasenclever, L., Webb, S., Lienart, T., Vollmer, S., LakshmiProject, which we extensively used for our experiments. narayanan, B., Blundell, C., and Teh, Y. Distributed\nBayesian learning with stochastic natural gradient expectation\npropagation and the posterior server. Journal of Machine LearnReferences ing Research, 18:1–37, 2017. Information geometry and its applications. Springer,\nHazan, E., Levy, K. Y., and Shalev-Shwartz, S. On graduated opti- 2016.\nmization for stochastic non-convex problems. In International\nAnderson, J.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 24,
"total_chunks": 91,
"char_count": 584,
"word_count": 80,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0bef8266-1a50-40f7-905c-fe33959a651f",
"text": "A mean field theory learning Conference on Machine Learning, pp. 1833–1841, 2016.\nalgorithm for neural networks. Complex Systems, 1:995–1019,\nHensman, J., Rattray, M., and Lawrence, N. Fast variational\n1987.\ninference in the conjugate exponential family. In Advances in\nNeural Information Processing Systems, pp. 2888–2896, 2012.Balan, A. K., Rathod, V., Murphy, K.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 25,
"total_chunks": 91,
"char_count": 365,
"word_count": 51,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e4edfdf7-23e7-4245-a26c-d895ff9cd0a6",
"text": "Bayesian\ndark knowledge. In Advances in Neural Information Processing Hernandez-Lobato, J. Probabilistic backpropSystems, pp. 3438–3446, 2015. agation for scalable learning of Bayesian neural networks. In\nInternational Conference on Machine Learning, pp. 1861–1869,\nBarber, D. and Bishop, C. Ensemble learning in Bayesian neu-\n2015. ral networks. Generalization in Neural Networks and Machine\nLearning, 168:215–238, 1998. Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick,\nM., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic\nBishop, C. Pattern Recognition and Machine Learning. 2006.\nvisual concepts with a constrained variational framework. In\nInternational Conference on Learning Representations, 2016.Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. Weight uncertainty in neural networks. In International Confer- Hinton, G. Keeping the neural networks\nence on Machine Learning, pp. 1613–1622, 2015. simple by minimizing the description length of the weights. In Annual Conference on Computational Learning Theory, pp.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 26,
"total_chunks": 91,
"char_count": 1062,
"word_count": 142,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b4338834-2238-4bbd-a2c1-f15b327b7108",
"text": "Bottou, L., Curtis, F. Optimization methods for\n5–13, 1993.\n2016. Conjugate-computation variational inference: converting variational inference in non-conjugate models\nBrockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, to inferences in conjugate models. In International Conference\nJ., Tang, J., and Zaremba, W. OpenAI Gym, 2016. on Artificial Intelligence and Statistics, pp. 878–887, 2017. Chaudhari, P., Choromanska, A., Soatto, S., LeCun, Y., Bal- Khan, M. E., Babanezhad, R., Lin, W., Schmidt, M., and Sugiyama,\ndassi, C., Borgs, C., Chayes, J. T., Sagun, L., and Zecchina, R.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 27,
"total_chunks": 91,
"char_count": 597,
"word_count": 85,
"chunking_strategy": "semantic"
},
{
"chunk_id": "47ebc67c-3a66-42ef-bd2a-9d8754d10663",
"text": "Faster stochastic variational inference using proximalEntropy-sgd: Biasing gradient descent into wide valleys. In gradient methods with general divergence functions. In ProceedInternational Conference on Learning Representations, 2016. ings of the Conference on Uncertainty in Artificial Intelligence,\n2016. Sampling Techniques, 3rd Edition. E., Lin, W., Tangkaratt, V., Liu, Z., and Nielsen, D. Variational Adaptive-Newton Method for Explorative Learning. Transforming neural-net output levels ArXiv e-prints, November 2017.\nto probability distributions. In Advances in Neural Information\nProcessing Systems, pp. 853–859, 1991. Kingma, D. and Ba, J.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 28,
"total_chunks": 91,
"char_count": 650,
"word_count": 82,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1dec827c-f689-4c81-8b91-aa2a0dff1560",
"text": "Adam: A method for stochastic optimization. In International Conference on Learning Representations,\nDuchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods 2015.\nfor online learning and stochastic optimization. Journal of\nMachine Learning Research, 12:2121–2159, 2011. Leordeanu, M. and Hebert, M.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 29,
"total_chunks": 91,
"char_count": 308,
"word_count": 41,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a405aaf9-2614-4761-b0d5-891d2b5508fd",
"text": "Smoothing-based optimization. In\nComputer Vision and Pattern Recognition, pp. 1–8, 2008. Fortunato, M., Azar, M. G., Piot, B., Menick, J., Osband, I., Graves,\nA., Mnih, V., Munos, R., Hassabis, D., Pietquin, O., et al. Li, C., Chen, C., Carlson, D. Preconditioned\nNoisy networks for exploration. In International Conference on stochastic gradient langevin dynamics for deep neural networks. Learning Representations, 2018. In AAAI Conference on Artificial Intelligence, pp. 4–10, 2016. Bayesian Deep Learning by Weight-Perturbation in Adam",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 30,
"total_chunks": 91,
"char_count": 539,
"word_count": 76,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5f072b6a-0f54-4205-9781-3632bdc204a8",
"text": "J., Pritzel, A., Heess, N., Erez, T., Tassa, Saul, L. K., Jaakkola, T., and Jordan, M. Mean field theory\nY., Silver, D., and Wierstra, D. Continuous control with deep for sigmoid belief networks. Journal of Artificial Intelligence\nreinforcement learning. CoRR, abs/1509.02971, 2015. Research, 4:61–76, 1996. Louizos, C. and Welling, M.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 31,
"total_chunks": 91,
"char_count": 335,
"word_count": 50,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9fdf20c0-da0a-476d-b428-f339d78054a2",
"text": "Structured and efficient variational Schraudolph, N. Fast curvature matrix-vector products for\ndeep learning with matrix gaussian posteriors. In International second-order gradient descent. Neural computation, 14(7):1723–\nConference on Machine Learning, pp. 1708–1716, 2016. 1738, 2002. Information theory, inference and learning algo- Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and\nrithms. Cambridge university press, 2003.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 32,
"total_chunks": 91,
"char_count": 441,
"word_count": 56,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1f3d5bdb-e182-4248-8ee0-9fd2c8b86137",
"text": "Deterministic policy gradient algorithms. In\nProceedings of the 31th International Conference on Machine\ndescent as approximate Bayesian inference. Journal of Machine 387–395, 2014. Learning Research, 18:1–35, 2017. Staines, J. and Barber, D.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 33,
"total_chunks": 91,
"char_count": 242,
"word_count": 32,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a5cec078-6a86-444b-bc6a-b79c243ae9e9",
"text": "Optimization by variational bounding. Marlin, B., Khan, M., and Murphy, K. Piecewise bounds for In European Symposium on Artificial Neural Networks, 2013.\nestimating Bernoulli-logistic latent Gaussian models. In International Conference on Machine Learning, 2011. Sun, S., Chen, C., and Carin, L. Learning structured weight uncertainty in Bayesian neural networks.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 34,
"total_chunks": 91,
"char_count": 364,
"word_count": 49,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5bc03508-84be-49a1-b4ce-43023c3cae93",
"text": "In International Conference\nMartens, J. New insights and perspectives on the natural gradient on Artificial Intelligence and Statistics, pp. 1283–1292, 2017. Tieleman, T. and Hinton, G. Lecture 6.5-RMSprop: Divide the graMobahi, H. and Fisher III, J. A theoretical analysis of optimiza- dient by a running average of its recent magnitude. COURSERA:\ntion by Gaussian continuation. In AAAI Conference on Artificial Neural Networks for Machine Learning 4, 2012. Intelligence, pp. 1205–1211, 2015. Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., and\nMurphy, K. Machine Learning: A Probabilistic Perspective. Natural evolution strategies. Journal of MaThe MIT Press, 2012. ISBN 0262018020, 9780262018029. chine Learning Research, 15(1):949–980, 2014. Bayesian learning for neural networks.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 35,
"total_chunks": 91,
"char_count": 800,
"word_count": 111,
"chunking_strategy": "semantic"
},
{
"chunk_id": "91d157f4-fe79-4f5e-b192-fb0698620735",
"text": "PhD thesis,\nWilliams, R. Simple statistical gradient-following algorithms\nUniversity of Toronto, 1995.\nfor connectionist reinforcement learning. Machine learning, 8\n(3-4):229–256, 1992.Opper, M. and Archambeau, C. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. Plappert, M., Houthooft, R., Dhariwal, P., Sidor, S., Chen, R. Y., The marginal value of adaptive gradient methods in machine\nChen, X., Asfour, T., Abbeel, P., and Andrychowicz, M. In Advances in Neural Information Processing Syseter space noise for exploration. In International Conference tems, pp. 4151–4161, 2017.\non Learning Representations, 2018. Zhang, G., Sun, S., Duvenaud, D. Noisy\nRanganath, R., Gerrish, S., and Blei, D. Black box variational natural gradient as variational inference. arXiv preprint\nand Statistics, pp. 814–822, 2014. Gradient-based adaptive stochastic search for\nRaskutti, G. and Mukherjee, S. The information geometry of non-differentiable optimization.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 36,
"total_chunks": 91,
"char_count": 1044,
"word_count": 140,
"chunking_strategy": "semantic"
},
{
"chunk_id": "373cdd52-80a5-4fb4-8c80-2c3325ec6488",
"text": "IEEE Transactions on Automatic Control, 59(7):1818–1832, 2014. mirror descent. IEEE Transactions on Information Theory, 61(3):1451–1457, 2015. Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pp. 1278–1286, 2014. Ritter, H., Botev, A., and Barber, D. A scalable Laplace approximation for neural networks. In International Conference on Learning Representations, 2018. Monte Carlo Statistical Methods (Springer Texts in Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2005. Rückstieß, T., Sehnke, F., Schaul, T., Wierstra, D., Sun, Y., and Schmidhuber, J. Exploring parameter space in reinforcement learning.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 37,
"total_chunks": 91,
"char_count": 739,
"word_count": 97,
"chunking_strategy": "semantic"
},
{
"chunk_id": "75838436-716a-49df-abd1-7b429c6ae05e",
"text": "Paladyn, 1(1):14–24, 2010. doi: 10.2478/s13230-010-0002-4. Salimans, T. and Knowles, D. Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4):837–882, 2013. Bayesian Deep Learning by Weight-Perturbation in Adam Changes in the Camera-Ready Version Compared to the Submitted Version • Taking the reviewers' suggestions into account, we changed the title of our paper. The title of the submitted version was \"Vadam: Fast and Scalable Variational Inference by Perturbing Adam\".",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 38,
"total_chunks": 91,
"char_count": 522,
"word_count": 68,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2c5db8f1-d9b4-47d8-9362-03f1fc1fc7f5",
"text": "• In the submitted version, we motivated our approach based on its ease of implementation. In the new version, we changed the motivation to making VI as easy to implement and execute as MLE. • In the new version, we have added a separate section on related work. • We improved the discussion of our approximation methods and added an error analysis. • The overall conclusions of our paper have also slightly changed in the new version. The new conclusions suggest that there is a trade-off between ease of implementation and the quality of the uncertainty approximation. • As per the reviewers' suggestions, we also made major improvements to our experimental results. – We added test log-likelihoods to the BNN results. We changed the hyperparameter selection from grid search to Bayesian optimization. We removed two methods from the table, namely PBP and VIG, since they use different splits compared to our setting.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 39,
"total_chunks": 91,
"char_count": 902,
"word_count": 147,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1dd153ef-8bf9-428a-bb5c-b709cc023512",
"text": "We improved the performance of BBVI by using better initialization and learning rates. We corrected a scaling problem in our Vadam method; the corrected method gives slightly worse performance than the results presented in the submitted version.\n– We added a logistic regression experiment where we evaluate the quality of the uncertainty estimates.\n– We added the details of the RL experiments, which we forgot to include in the submitted version. We also added a comparison to Adam-based methods in the appendix for the RL experiment.\n– We removed an unclear result about reducing overfitting. • We added an additional result comparing VOGN with Vadam and BBVI on Bayesian neural networks. Review of Natural-Gradient Variational Inference Khan & Lin (2017) propose a natural-gradient method for variational inference. In this section, we briefly review this method.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 40,
"total_chunks": 91,
"char_count": 854,
"word_count": 135,
"chunking_strategy": "semantic"
},
{
"chunk_id": "69b6e0b9-c87b-4f3a-ad3e-7b62b6d4f7b4",
"text": "Denote the variational objective by L(η) for the variational distribution qη(θ), which takes an exponential-family form with natural parameter η. The objective is given as follows:\nL(η) := ∑_{i=1}^N Eq[log p(Di|θ)] + Eq[log(p(θ)/qη(θ))]. (25)\nWe assume that the exponential family is in minimal representation, which ensures that there is a one-to-one mapping between the natural parameter η and the expectation parameter, denoted by m. Therefore, it is possible to express L(η) in terms of m. We denote this new objective by L∗(m) := L(η). We can also reparameterize qη in terms of m and denote it by qm. Natural-gradient methods exploit the Riemannian geometry of q(θ) by scaling the gradient by the inverse of the Fisher information matrix (FIM). The method of Khan & Lin (2017) simplifies the update by avoiding a direct computation of the FIM. This is made possible by a relationship between the natural parameter η and the expectation parameter m of an exponential-family distribution: the natural gradient with respect to η is equal to the gradient with respect to m. This is stated below, where the FIM is denoted by F(η):\nF(η)^{-1} ∇η L(η) = ∇m L∗(m), (26)",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 41,
"total_chunks": 91,
"char_count": 1178,
"word_count": 196,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6bbfc108-483f-4f39-b474-27d4c0eaea6e",
"text": "This relationship has been discussed in the earlier work of Hensman et al. (2012) and can also be found in Amari (2016). The method of Khan & Lin (2017) exploits this result within a mirror-descent framework. They propose to use a mirror-descent update in the expectation-parameter space, which is equivalent to the natural-gradient update in the natural-parameter space. Therefore, the natural-gradient update can be performed by using the gradient with respect to the expectation parameter. We give a formal statement below.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 42,
"total_chunks": 91,
"char_count": 525,
"word_count": 83,
"chunking_strategy": "semantic"
},
{
"chunk_id": "54d69f40-e6e5-40fc-a0cc-0ec7af8b5d56",
"text": "Consider the following mirror-descent step:\nm_{t+1} = arg min_m ⟨m, −∇m L∗(m_t)⟩ + (1/βt) DKL[qm(θ) ∥ q_{m_t}(θ)], (27)\nwhere DKL[· ∥ ·] is the Kullback-Leibler divergence and βt is the learning rate at iteration t. Each step of this mirror-descent update is equivalent to the following natural-gradient descent in the natural-parameter space:\nη_{t+1} = η_t + βt F(η_t)^{-1} ∇η L(η_t). (28)\nA formal proof of this statement can be found in Raskutti & Mukherjee (2015). Using (26), the natural-gradient update above can simply be written as\nη_{t+1} = η_t + βt ∇m L∗(m_t), (29)\nwhich involves computing the gradient with respect to m but taking a step in the natural-parameter space. As we show in the next section, this relationship enables us to derive a simple natural-gradient update because, for a Gaussian distribution, the gradient with respect to m leads to a simple update. Derivation of Natural-Gradient Updates for Gaussian Mean-Field Variational Inference In this section, we derive the natural-gradient update for the Gaussian approximation qη(θ) := N(θ|µ, Σ) with mean µ and covariance matrix Σ. In the end, we make the mean-field approximation Σ = diag(σ²) to get the final update. We start by defining the natural and expectation parameters of a Gaussian:\nη(1) := Σ^{-1}µ, η(2) := −(1/2)Σ^{-1}, (30)\nm(1) := Eq[θ] = µ, M(2) := Eq[θθ⊤] = µµ⊤ + Σ. (31)",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 43,
"total_chunks": 91,
"char_count": 1394,
"word_count": 227,
"chunking_strategy": "semantic"
},
{
"chunk_id": "bbe869f4-a8ce-45d4-8680-f13ae5370f15",
"text": "Now we express the gradient with respect to these expectation parameters in terms of the gradients with respect to µ and Σ using the chain rule (see Appendix B.1 in Khan & Lin (2017) for a derivation):\n∇_{m(1)} L∗ = ∇µ L − 2 [∇Σ L] µ, (32)\n∇_{M(2)} L∗ = ∇Σ L. (33)\nNext, using the definition of the natural parameters, we can rewrite (29) in terms of µ and Σ (here ∇x L_t denotes the gradient of the variational objective with respect to a variable x at x = x_t):\nΣ_{t+1}^{-1} = Σ_t^{-1} − 2βt [∇Σ L_t], (34)\nµ_{t+1} = Σ_{t+1} (Σ_t^{-1} µ_t + βt (∇µ L_t − 2 [∇Σ L_t] µ_t)) (35)\n= Σ_{t+1} (Σ_t^{-1} µ_t + βt ∇µ L_t − 2βt [∇Σ L_t] µ_t) (36)\n= Σ_{t+1} ((Σ_t^{-1} − 2βt [∇Σ L_t]) µ_t + βt ∇µ L_t) (37)\n= Σ_{t+1} (Σ_{t+1}^{-1} µ_t + βt ∇µ L_t) (38)\n= µ_t + βt Σ_{t+1} [∇µ L_t]. (39)\nIn summary, the natural-gradient update is\nΣ_{t+1}^{-1} = Σ_t^{-1} − 2βt [∇Σ L_t], (40)\nµ_{t+1} = µ_t + βt Σ_{t+1} [∇µ L_t]. (41)\nBy considering Gaussian mean-field VI with a diagonal covariance Σ = diag(σ²), we obtain\nσ_{t+1}^{-2} = σ_t^{-2} − 2βt [∇_{σ²} L_t], (42)\nµ_{t+1} = µ_t + βt σ_{t+1}² ◦ [∇µ L_t]. (43)",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 44,
"total_chunks": 91,
"char_count": 949,
"word_count": 187,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e64c29ac-79d5-4f08-8508-f55e10ade48e",
"text": "In update (5), we use stochastic gradients instead of exact gradients. Note that there is an explicit constraint in the above update: the precision σ^{-2} needs to remain positive at every step. The learning rate can be adapted to make sure that the constraint is always satisfied. We discuss this method in Appendix D.1.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 45,
"total_chunks": 91,
"char_count": 374,
"word_count": 62,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3b410aff-708e-4bc3-a67f-7e1b61689508",
"text": "Another option is to make an approximation, such as a Gauss-Newton approximation, to ensure that this constraint is always satisfied. We make this assumption in one of our methods, called the Variational Online Gauss-Newton (VOGN) method. Derivation of the Variational Online-Newton Method (VON) In this section, we derive the variational online-Newton (VON) method proposed in Section 3. We will modify the NGVI update in (41).",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 46,
"total_chunks": 91,
"char_count": 419,
"word_count": 64,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9e97ebf0-60ab-4866-b76d-2ec3573bb3a4",
"text": "The variational lower bound in (25) can be re-expressed as\nL(µ, Σ) := Eq[−N f(θ) + log p(θ) − log q(θ)], (44)\nwhere f(θ) := −(1/N) ∑_{i=1}^N log p(Di|θ). To derive VON, we use Bonnet's and Price's theorems (Opper & Archambeau, 2009; Rezende et al., 2014) to express the gradients of the expectation of f(θ) with respect to µ and Σ in terms of the gradient and Hessian of f(θ), i.e.,\n∇µ Eq[f(θ)] = Eq[∇θ f(θ)] := Eq[g(θ)], (45)\n∇Σ Eq[f(θ)] = (1/2) Eq[∇²θθ f(θ)] := (1/2) Eq[H(θ)], (46)\nwhere g(θ) := ∇θ f(θ) and H(θ) := ∇²θθ f(θ) denote the gradient and Hessian of f(θ), respectively. Using these, we can rewrite the gradients of L required in the NGVI update (41) as\n∇µ L = ∇µ Eq[−N f(θ) + log p(θ) − log q(θ)] (47)\n= −(Eq[N ∇θ f(θ)] + λµ) (48)\n= −(Eq[N g(θ)] + λµ), (49)\n∇Σ L = (1/2) Eq[−N ∇²θθ f(θ)] − (1/2) λI + (1/2) Σ^{-1} (50)\n= (1/2) Eq[−N H(θ)] − (1/2) λI + (1/2) Σ^{-1}. (51)\nBy substituting these into the NGVI update (41) and then approximating the expectation by one Monte Carlo sample θ_t ∼ N(θ|µ_t, Σ_t), we get the following update:\nµ_{t+1} = µ_t − βt Σ_{t+1} [N g(θ_t) + λµ_t], (52)\nΣ_{t+1}^{-1} = (1 − βt) Σ_t^{-1} + βt [N H(θ_t) + λI]. (53)\nBy defining a matrix S_t := (Σ_t^{-1} − λI)/N, we get the following:\nVON (full covariance): µ_{t+1} = µ_t − βt (S_{t+1} + λI/N)^{-1} (g(θ_t) + λµ_t/N), S_{t+1} = (1 − βt) S_t + βt H(θ_t), (54)\nwhere θ_t ∼ N(θ|µ_t, Σ_t) with Σ_t = [N(S_t + λI/N)]^{-1}. We refer to this update as the variational online-Newton (VON) method because it resembles a regularized version of the online Newton method in which the scaling matrix is estimated online using the Hessians. For the mean-field variant, we can use a diagonal Hessian:\nVON: µ_{t+1} = µ_t − βt (g(θ_t) + ˜λµ_t)/(s_{t+1} + ˜λ), s_{t+1} = (1 − βt) s_t + βt diag(H(θ_t)), (55)",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 47,
"total_chunks": 91,
"char_count": 1653,
"word_count": 314,
"chunking_strategy": "semantic"
},
{
"chunk_id": "96d92981-8b2b-478a-b3c7-acb5f52d8d81",
"text": "where a/b denotes the element-wise division between vectors a and b, we have defined ˜λ := λ/N, and θ_t ∼ N(θ|µ_t, diag(σ²_t)) with σ²_t = 1/[N(s_t + ˜λ)]. By replacing g and H by their stochastic estimates, we obtain the VON update shown in (8) of the main text. Hessian Approximation Using the Reparameterization Trick In this section we briefly discuss an alternative Hessian approximation for mean-field VI, besides the generalized Gauss-Newton and gradient-magnitude approximations discussed in the main paper. This approach is based on the reparameterization trick for the expectation of a function over a Gaussian distribution. By using the identity in (45) for the mean-field case, we can derive a Hessian approximation as follows:\nEq[∇²θθ f(θ)] = 2 ∇_{σ²} Eq[f(θ)] (56)\n= 2 ∇_{σ²} E_{N(ϵ|0,I)}[f(µ + σϵ)] (57)\n= 2 E_{N(ϵ|0,I)}[∇_{σ²} f(µ + σϵ)] (58)\n= E_{N(ϵ|0,I)}[∇θ f(θ) ◦ (ϵ/σ)] (59)\n≈ ˆg(θ) ◦ (ϵ/σ), (60)\nwhere ϵ ∼ N(ϵ|0, I) and θ = µ + σϵ. By defining s_t := (σ_t^{-2} − λ)/N, we can write the VON update using the reparameterization-trick Hessian approximation as\nVON (Reparam): µ_{t+1} = µ_t − αt (ˆg(θ_t) + ˜λµ_t)/(s_{t+1} + ˜λ), (61)\ns_{t+1} = (1 − βt) s_t + βt [ˆg(θ_t) ◦ (ϵ_t/σ_t)], (62)\nwhere ϵ_t ∼ N(ϵ|0, I) and θ_t = µ_t + ϵ_t/√(N(s_t + ˜λ)). One major issue with this approximation is that it might have a high variance, and s_t may become negative. To make sure that s_t > 0 for all t, we can use a simple back-tracking method, described below.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 48,
"total_chunks": 91,
"char_count": 1447,
"word_count": 252,
"chunking_strategy": "semantic"
},
{
"chunk_id": "649f27c7-4630-472b-a3e4-b99c659cffc7",
"text": "Denote element d of s by s_d and, to simplify notation, let h_d denote the d-th element of ˆg(θ) ◦ (ϵ/σ). For s to remain positive, we need s_d + βt h_d > 0 for all d. As h_d can become negative, a too-large value of βt will move s out of the feasible set. We thus have to find the largest value we can set βt to such that s is still in the feasible set. Let I denote the set of indices d for which s_d + βt h_d ≤ 0. We can ensure that s stays in the feasible set by setting\nβt = min(β0, δ min_{d∈I} s_d/|h_d|), (63)\nwhere β0 is the maximum learning rate and 0 < δ < 1 is a constant that keeps s strictly within the feasible set (away from the border). However, this back-tracking method may be computationally expensive and is not trivial to implement within the RMSprop and Adam optimizers. Adam as an Adaptive Heavy-Ball Method Consider the following update of Adam (in the pseudocode in the main text, we used γ2 = 1 − β):\nAdam: u_{t+1} = γ1 u_t + (1 − γ1) ˆg(θ_t),\ns_{t+1} = (1 − β) s_t + β [ˆg(θ_t)]²,\nˆu_{t+1} = u_{t+1}/(1 − γ1^t), (64)\nˆs_{t+1} = s_{t+1}/(1 − (1 − β)^t),\nθ_{t+1} = θ_t − α ˆu_{t+1}/(√ˆs_{t+1} + δ).\nThis update can be expressed as the following adaptive version of Polyak's heavy-ball method⁷, as shown in Wilson et al. (2017):\nθ_{t+1} = θ_t − ¯αt [1/(√ˆs_{t+1} + δ)] ˆg(θ_t) + ¯γt [(√ˆs_t + δ)/(√ˆs_{t+1} + δ)] (θ_t − θ_{t−1}), (65)\n⁷Wilson et al. (2017) do not add the constant δ inside √ˆs_t, but in Adam a small constant is added.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 49,
"total_chunks": 91,
"char_count": 1410,
"word_count": 287,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b822f922-0978-419e-a9bf-ca05ddc12f54",
"text": "where ¯αt and ¯γt are defined in terms of α and γ1 as follows:\n¯αt := α (1 − γ1)/(1 − γ1^t), ¯γt := γ1 (1 − γ1^{t−1})/(1 − γ1^t). (66)\nWe will now show that, by using natural gradients in Polyak's heavy-ball method, we get an update that is similar to (65). This allows us to implement our approximated NGVI methods by using Adam. Natural Momentum for Natural-Gradient VI We propose the following update:\nη_{t+1} = η_t + ¯αt ˜∇η L_t + ¯γt (η_t − η_{t−1}). (67)\nWe can show that (67) can be written as the following mirror-descent extension of (27):\nm_{t+1} = arg min_m ⟨m, −∇m L∗(m_t)⟩ + (1/βt) DKL[qm(θ) ∥ q_{m_t}(θ)] − (αt/βt) DKL[qm(θ) ∥ q_{m_{t−1}}(θ)], (68)\nwhere L∗ refers to the variational lower bound defined in Appendix B, and αt and βt are two learning rates defined in terms of ¯αt and ¯γt. The last term here is a natural momentum term, which is very similar to the momentum term in heavy-ball methods.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 50,
"total_chunks": 91,
"char_count": 869,
"word_count": 161,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d502da90-8c55-4cd5-87c0-9787b0764c02",
"text": "For example, (16) can be written as the following optimization problem:\nmin_θ θ⊤∇θ f1(θ_t) + (1/βt) ∥θ − θ_t∥² − (αt/βt) ∥θ − θ_{t−1}∥². (69)\nIn our natural-momentum method, the Euclidean distance is replaced by a KL divergence, which explains the name natural momentum.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 51,
"total_chunks": 91,
"char_count": 256,
"word_count": 42,
"chunking_strategy": "semantic"
},
{
"chunk_id": "51a4aa3c-8dde-4e4e-a779-af2f039b0907",
"text": "Equivalence between (68) and (67) can be established by directly taking the derivative, setting it to zero, and simplifying:\n−∇m L∗(m_t) + (1/βt)(η_{t+1} − η_t) − (αt/βt)(η_{t+1} − η_{t−1}) = 0 (70)\n⇒ η_{t+1} = (1/(1 − αt)) η_t + (βt/(1 − αt)) ∇m L∗(m_t) − (αt/(1 − αt)) η_{t−1} (71)\n= η_t + (βt/(1 − αt)) ∇m L∗(m_t) + (αt/(1 − αt)) (η_t − η_{t−1}), (72)\nwhere we use the fact that the gradient of the KL divergence with respect to m is equal to the difference between the natural parameters of the two distributions (Raskutti & Mukherjee, 2015; Khan & Lin, 2017), and note that the gradient with respect to m is the natural gradient with respect to η. Therefore, defining ¯αt := βt/(1 − αt) and ¯γt := αt/(1 − αt), we establish that this mirror descent is equivalent to the natural-momentum approach we proposed. NGVI with Natural Momentum for Gaussian Approximations We will now derive the update for a Gaussian approximation q(θ) := N(θ|µ, Σ). Recall that the expectation parameters of a Gaussian q(θ) = N(θ|µ, Σ) are m(1) = µ and M(2) = Σ + µµ⊤. Similarly to Appendix C, by using the chain rule, we can express the gradient ∇m L∗ in terms of µ and Σ as\n∇_{m(1)} L∗ = ∇µ L − 2 [∇Σ L] µ, (73)\n∇_{M(2)} L∗ = ∇Σ L. (74)\nUsing the natural parameters of a Gaussian, defined as η(1) = Σ^{-1}µ and η(2) = −(1/2)Σ^{-1}, we can rewrite the update (72) in terms of updates for µ and Σ. First, the update for Σ is obtained by plugging η(2) = −(1/2)Σ^{-1} into (72):\nΣ_{t+1}^{-1} = (1/(1 − αt)) Σ_t^{-1} − (αt/(1 − αt)) Σ_{t−1}^{-1} − (2βt/(1 − αt)) [∇Σ L_t]. (75)",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 52,
"total_chunks": 91,
"char_count": 1466,
"word_count": 280,
"chunking_strategy": "semantic"
},
{
"chunk_id": "51cf3c77-c123-4512-a259-0510703be4d7",
"text": "Now, for µ, we first plug η(1) = Σ^{-1}µ into (72) and then rearrange the update to express some of the terms via Σ_{t+1}:\nΣ_{t+1}^{-1} µ_{t+1} = (1/(1 − αt)) Σ_t^{-1} µ_t − (αt/(1 − αt)) Σ_{t−1}^{-1} µ_{t−1} + (βt/(1 − αt)) (∇µ L_t − 2 [∇Σ L_t] µ_t) (76)\n= (1/(1 − αt)) Σ_t^{-1} µ_t − (αt/(1 − αt)) Σ_{t−1}^{-1} µ_{t−1} + (βt/(1 − αt)) (∇µ L_t − 2 [∇Σ L_t] µ_t) + (αt/(1 − αt)) Σ_{t−1}^{-1} µ_t − (αt/(1 − αt)) Σ_{t−1}^{-1} µ_t (77)\n= ((1/(1 − αt)) Σ_t^{-1} − (αt/(1 − αt)) Σ_{t−1}^{-1} − (2βt/(1 − αt)) [∇Σ L_t]) µ_t + (βt/(1 − αt)) ∇µ L_t + (αt/(1 − αt)) Σ_{t−1}^{-1} (µ_t − µ_{t−1}) (78)\n⇒ µ_{t+1} = µ_t + (βt/(1 − αt)) Σ_{t+1} [∇µ L_t] + (αt/(1 − αt)) Σ_{t+1} Σ_{t−1}^{-1} (µ_t − µ_{t−1}), (79)\nwhere in the final step we substitute the definition of Σ_{t+1}^{-1} from (75). To express these updates similarly to VON, we make an approximation in which we replace the instances of Σ_{t−1} by Σ_t in both (75) and (79). With this approximation, we get the following:\nµ_{t+1} = µ_t + (βt/(1 − αt)) Σ_{t+1} [∇µ L_t] + (αt/(1 − αt)) Σ_{t+1} Σ_t^{-1} (µ_t − µ_{t−1}), (80)\nΣ_{t+1}^{-1} = Σ_t^{-1} − (2βt/(1 − αt)) [∇Σ L_t]. (81)\nWe build upon this update to express it as a VON update with momentum.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 53,
"total_chunks": 91,
"char_count": 923,
"word_count": 200,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ea18afd6-e748-4f06-9076-aa98c5ba20b0",
"text": "Variational Online Newton with Natural Momentum Now we derive VON with natural momentum. To do so, we follow the same procedure used to derive the VON update in Appendix D. That is, we first use Bonnet's and Price's theorems to express the gradients with respect to µ and Σ in terms of the expectations of the gradient and Hessian of f(θ). Then, we substitute the expectation with a sample θ_t ∼ N(θ|µ_t, Σ_t). Finally, we redefine the matrix S_t := (Σ_t^{-1} − λI)/N. With this, we get the following update, which is a momentum version of VON:\nµ_{t+1} = µ_t − (βt/(1 − αt)) (S_{t+1} + ˜λI)^{-1} (g(θ_t) + ˜λµ_t) + (αt/(1 − αt)) (S_{t+1} + ˜λI)^{-1} (S_t + ˜λI) (µ_t − µ_{t−1}), (82)\nS_{t+1} = (1 − βt/(1 − αt)) S_t + (βt/(1 − αt)) H(θ_t), (83)\nwhere θ_t ∼ N(θ|µ_t, Σ_t) with Σ_t = [N(S_t + ˜λI)]^{-1}. To get a momentum version of Vprop, we follow a method similar to that of Section 3.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 54,
"total_chunks": 91,
"char_count": 800,
"word_count": 158,
"chunking_strategy": "semantic"
},
{
"chunk_id": "41bf74f0-e15e-437b-add5-836df2743c37",
"text": "That is, we first employ a mean-field approximation, and then replace the Hessian by the gradient-magnitude approximation:\nµ_{t+1} = µ_t − (βt/(1 − αt)) (g(θ_t) + ˜λµ_t)/(s_{t+1} + ˜λ) + (αt/(1 − αt)) ((s_t + ˜λ)/(s_{t+1} + ˜λ)) (µ_t − µ_{t−1}), (84)\ns_{t+1} = (1 − βt/(1 − αt)) s_t + (βt/(1 − αt)) [g(θ_t)]², (85)\nwhere θ_t ∼ N(θ|µ_t, σ²_t) with σ²_t = 1/[N(s_t + ˜λ)]. Finally, we use an unbiased gradient estimate ˆg(θ), introduce the square root for the scaling vector in the mean update, and define step sizes ¯αt := βt/(1 − αt) and ¯γt := αt/(1 − αt). The result is a Vprop-with-momentum update:\nµ_{t+1} = µ_t − ¯αt (ˆg(θ_t) + ˜λµ_t)/(√s_{t+1} + ˜λ) + ¯γt ((√s_t + ˜λ)/(√s_{t+1} + ˜λ)) (µ_t − µ_{t−1}), (86)\ns_{t+1} = (1 − ¯αt) s_t + ¯αt [ˆg(θ_t)]², (87)",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 55,
"total_chunks": 91,
"char_count": 736,
"word_count": 153,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6d7c5d5d-f24e-47e2-b5e5-34f838eedd00",
"text": "where θ_t ∼ N(θ|µ_t, σ²_t) with σ²_t = 1/[N(s_t + ˜λ)]. This is very similar to the update (65) of Adam expressed in momentum form. By introducing the bias-correction terms for u and s, we can implement this update by using Adam's update shown in Fig. 1. The final update of Vadam is shown below (in the paper, the differences from Adam are highlighted in red):\nθ_t ∼ N(θ|µ_t, 1/[N(s_t + ˜λ)]), (88)\nu_{t+1} = γ1 u_t + (1 − γ1) (ˆg(θ_t) + ˜λµ_t), (89)\ns_{t+1} = (1 − β) s_t + β [ˆg(θ_t)]², (90)\nˆu_{t+1} = u_{t+1}/(1 − γ1^t), (91)\nˆs_{t+1} = s_{t+1}/(1 − (1 − β)^t), (92)\nµ_{t+1} = µ_t − α ˆu_{t+1}/(√ˆs_{t+1} + ˜λ). (93)\nNote that we do not use the same step size ¯αt for s_t and µ_t, but rather choose the step sizes according to the Adam update.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 56,
"total_chunks": 91,
"char_count": 677,
"word_count": 131,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b339473f-bb08-401a-a1c4-f2c01e1e0946",
"text": "In the pseudocode, we define γ2 := 1 − β. By setting τ = 0 in (23), we get the following update:\nµ_{t+1} = µ_t − αt [ˆ∇θ F(θ)/s_{t+1}], (94)\ns_{t+1} = s_t + βt ˆ∇²θθ F(θ), (95)\nwhere θ_t ∼ N(θ|µ_t, σ²_t) with σ²_t := 1/s_t. By replacing the Hessian with a GM approximation and taking the square root as in Vprop, we get the following update, which we call VadaGrad:\nµ_{t+1} = µ_t − αt [ˆ∇θ F(θ)/√s_{t+1}],\ns_{t+1} = s_t + βt [ˆ∇θ F(θ) ◦ ˆ∇θ F(θ)], (96)\nwhere θ_t ∼ N(θ|µ_t, σ²_t) with σ²_t := 1/s_t. Let g_i := ∇θ f_i(θ) denote the gradient for an individual data point, g_M := (1/M) ∑_{i∈M} ∇θ f_i(θ) the average gradient over a minibatch M of size M, and g := (1/N) ∑_{i=1}^N ∇θ f_i(θ) the average full-batch gradient. Let p(i) denote a uniform distribution over the data samples {1, 2, ..., N} and p(M) a uniform distribution over the (N choose M) possible minibatches of size M. Let further G denote the average GGN matrix,\nG = (1/N) ∑_{i=1}^N g_i g_i⊤ = E_{p(i)}[g_i g_i⊤]. (97)\nUsing the following two results,\nCov_{p(i)}[g_i] = E_{p(i)}[g_i g_i⊤] − E_{p(i)}[g_i] E_{p(i)}[g_i]⊤ = G − g g⊤, (98)\nCov_{p(M)}[g_M] = E_{p(M)}[g_M g_M⊤] − E_{p(M)}[g_M] E_{p(M)}[g_M]⊤ = E_{p(M)}[g_M g_M⊤] − g g⊤, (99)\nalong with Theorem 2.2 of Cochran (1977), which states that\nCov_{p(M)}[g_M] = (1/M) (1 − M/N) (N/(N − 1)) Cov_{p(i)}[g_i], (100)\nwe get the following:\nE_{p(M)}[g_M g_M⊤] = w G + (1 − w) g g⊤, (101)\nwhere w = (1/M)(N − M)/(N − 1). Denoting dimension j of the full-batch gradient by g_j(θ), dimension j of the average gradient over a minibatch by ˆg_j(θ; M), and dimension j of the diagonal of the average GGN accordingly, we get the stated result.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 57,
"total_chunks": 91,
"char_count": 1524,
"word_count": 282,
"chunking_strategy": "semantic"
},
{
"chunk_id": "44cd76ab-e50c-4667-acda-561bd6f3f00a",
"text": "Proof That the Fixed Points of Vprop Do Not Change with the Square Root We now show that the fixed points do not change when we take the square root of s_{t+1}. Denote the variational distribution at iteration t by q_t := N(θ|µ_t, σ²_t). Assume no stochasticity, i.e., we compute the full-batch gradients and can exactly compute the expectation with respect to q. A fixed point q∗(θ) := N(θ|µ∗, σ²∗) of the variational objective satisfies the following:\nN Eq∗[∇θ f(θ)] + λµ∗ = 0, N Eq∗[diag(∇²θθ f(θ))] + λ − σ∗^{-2} = 0. (102)\nIf we replace the Hessian by the GM approximation, we get the following fixed point:\nN Eq∗[∇θ f(θ)] + λµ∗ = 0, N Eq∗[(∇θ f(θ))²] + λ − σ∗^{-2} = 0. (103)\nThis fixed point does not depend on whether or not we scale using the square root. However, the iterations do depend on it, and the scaling is expected to affect the convergence as well as the path taken to approach the solution.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 58,
"total_chunks": 91,
"char_count": 903,
"word_count": 157,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d7e9f296-c0c9-49c9-ac84-93ca5f3b7de4",
"text": "Details for the Logistic Regression Experiments We used the toy example given in Murphy (2012) (see Fig. 8.6 in the book). The data is generated from a mixture of two\nGaussians (details are given in the book). We used the generating mechansim desribed in the book to generate N = 60\nexamples.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 59,
"total_chunks": 91,
"char_count": 292,
"word_count": 52,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a73aa368-13f6-49a9-b4cd-68200b41f862",
"text": "For all methods, a prior precision of λ = 0.01 and 1 MC sample is used. The initial settings of all methods are\nα0 = 0.1 and β0 = 0.9. For every iteration t, the learning rates are decayed as αt = , βt = 1 −1 −β0 . (104)\n1 + t0.55 1 + t0.55",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 60,
"total_chunks": 91,
"char_count": 240,
"word_count": 56,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fcc473fd-71b5-41df-aea3-97793b56161e",
"text": "Vadam and VOGN are run for 83,333 epochs using a minibatch size of M = 10 (corresponding to 500,000 iterations). For\nVadam, γ1 is set to βt. VOGN-1 is run for 8000 epochs with a minibatch size of M = 1 (also corresponding to 500,000\niterations). Real-Data Experiments Datasets for logistic regression are available at https://www.csie.ntu.edu.tw/˜cjlin/libsvmtools/datasets/\nbinary.html. For the Breast Cancer dataset, we use the hyper-parameters found by Khan & Lin (2017). For USPS, we\nused the procedure of Khan & Lin (2017) to find the hyperparameter.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 61,
"total_chunks": 91,
"char_count": 555,
"word_count": 87,
"chunking_strategy": "semantic"
},
{
"chunk_id": "02117bc1-fe8a-451f-b412-7ae14d839370",
"text": "All details are given in Table 2. For all datasets we use\n20 random splits. Datasets for logistic regression. NTrain is the number of training data. Dataset N D NTrain Hyperparameters M\nUSPS3vs5 1,781 256 884 λ = 25 64\nBreast-cancer-scale 683 10 341 λ = 1.0 32 Performance comparison of MF-Exact, VOGN-1, Vadam: We used 20 random 50-50 splits of the USPS 3vs5 dataset. For all methods, a prior precision of λ = 25 is used. MF-Exact and Vadam are run for 10000 epochs with a minibatch\nsize of M = 64. The learning rates for both methods are decayed according to (104) with initial settings α0 = 0.01 and\nβ0 = 0.99.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 62,
"total_chunks": 91,
"char_count": 613,
"word_count": 114,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8123ef59-e16d-4ea8-bc94-fa6e67edad0c",
"text": "For Vadam, 1 MC sample is used. VOGN-1, on the other hand, is run for 200 epochs with a minibatch size of\nM = 1, using 1 MC sample and learning rates α = 0.0005 and β = 0.9995. Minibatch experiment comparing VOGN-1 and Vadam: We use the Breast-Cancer dataset with 20 random initializations. For both VOGN-1 and Vadam, a prior precision of λ = 1 is used. For VOGN-1, the learning rates are set to α = 0.0005 and\nβ = 0.9995. It is run for 2000 epochs using a minibatch size of M = 1 and 1 MC sample. For Vadam, the learning rates are\ndecayed according to (104) with initial settings α0 = 0.01 and β0 = 0.99. The method is run with 1 MC sample for various\nminibatch sizes M ∈{1, 8, 16, 32, 64}. Bayesian Deep Learning by Weight-Perturbation in Adam",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 63,
"total_chunks": 91,
"char_count": 745,
"word_count": 146,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9cfe30d1-33ac-4159-be44-8acca8f71554",
"text": "1e1 boston 1e1 concrete 1e1 energy 1e 1 kin8nm 1.1 1.0 1.0 2.0\n0.9 0.9 1.7 0.7 RMSE RMSE RMSE RMSE 0.7 0.8 1.4\nTest0.5 Test0.7 Test0.4 Test1.1\n0.3 0.6 0.1 0.8\n0 10 20 30 40 0 10 20 30 40 0 10 20 30 40 0 10 20 30 40\nEpochs Epochs Epochs Epochs 1e 2 naval powerplant 1e 1 wine 1e1 yacht 1.5 4.8 7.6 1.1\n0.9\n1.0 4.6 7.2 RMSE RMSE RMSE RMSE0.7\nTest 0.5 0.5 Test4.4 Test6.8 Test 0.3\n0.0 4.2 6.4 0.1\n0 10 20 30 40 0 10 20 30 40 0 10 20 30 40 0 10 20 30 40\nEpochs Epochs Epochs Epochs The mean plus-minus one standard error of the Test RMSE (using 100 Monte Carlo samples) on the test sets of UCI experiments. The mean and standard errors are computed over the 20 data splits.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 64,
"total_chunks": 91,
"char_count": 669,
"word_count": 146,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0eab5802-111c-49ba-87ff-3fe8d968fc1c",
"text": "Details for the Bayesian Neural Network Experiment UCI Regression Experiments The 8 datasets together with their sizes N and number of features D are listed in Table 1. For each of the datasets, we\nuse the 20 random train-test splits provided by Gal & Ghahramani (2016)8. Following earlier work, we use 30 iterations\nof Bayesian Optimization (BO) to tune the prior precision λ and the noise precision τ. For each iteration of BO, 5-fold\ncross-validation is used to evaluate the considered hyperparameter setting. This is repeated for each of the 20 train-test splits\nfor each dataset. The final values reported in the table for each dataset are the mean and standard error from these 20 runs. The final runs for the 8 datasets are shown in Figure 4. Following earlier work, we use neural networks with one hidden layer and 50 hidden units with ReLU activation functions. All networks were trained for 40 epochs.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 65,
"total_chunks": 91,
"char_count": 911,
"word_count": 154,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d44c5356-a325-43bd-8dc0-1672794893f4",
"text": "For the 4 smallest datasets, we use a minibatch size of 32, 10 MC samples for\nVadam and 20 MC samples for BBVI. For the 4 larger datasets, we use a minibatch size of 128, 5 MC samples for Vadam\nand 10 MC samples for BBVI. For evaluation, 100 MC samples were used in all cases.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 66,
"total_chunks": 91,
"char_count": 276,
"word_count": 56,
"chunking_strategy": "semantic"
},
{
"chunk_id": "68fadf96-6332-4391-94c9-e3771d897e51",
"text": "For BBVI, we optimize the variational objective using the Adam optimizer. For both BBVI and Vadam we use a learning\nrate of α = 0.01 and set γ1 = 0.99 and γ2 = 0.9 to encourage convergence within 40 epochs. For both BBVI and Vadam,\nthe initial precision of the variational distribution q was set to 10. VOGN Convergence Experiments We apply BBVI, Vadam, and VOGN to train a neural network with a single-hidden layer of 64 units and ReLU activations\non a random train-test split of the Australian-Scale dataset (N = 690, D = 14). For VOGN, we do not use the naturalmomentum term. The prior precision λ is set to 1. We run each method for 5000 iterations. For both Adam and Vadam,\nwe set α = 0.001, γ1 = 0.9, and γ2 = 0.999. For VOGN, we set α = 0.001 and γ1 = 0.9.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 67,
"total_chunks": 91,
"char_count": 763,
"word_count": 148,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4abec91c-fd31-4a82-a889-a744a44a56dd",
"text": "We run experiments for\ndifferent minibatch sizes M and number of MC samples S. The left side of Figure 3 shows results for (M = 1, S = 1),\n(M = 1, S = 16) and (M = 128, S = 16). 8The splits are publicly available from https://github.com/yaringal/DropoutUncertaintyExps",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 68,
"total_chunks": 91,
"char_count": 268,
"word_count": 49,
"chunking_strategy": "semantic"
},
{
"chunk_id": "83c50859-6b4c-49da-8e8d-48065916331e",
"text": "Bayesian Deep Learning by Weight-Perturbation in Adam Details for the Exploration for Deep Reinforcement Learning Experiment Reinforcement learning (RL) aims to solve the sequential decision making problem where at each discrete time step t an\nagent observes a state st and selects an action at using a policy π, i.e., at ∼π(a|st). The agent then receives an immediate\nreward rt = r(st, at) and observes a next state st ∼p(s′|st, at). The goal in RL is to learn the optimal policy π∗which\nmaximizes the expected return E P∞t γt−1rt where γ is the discounted factor and the expectation is taken over a sequence\nof densities π(a|st) and p(s′|st, at). A central component of RL algorithms is the Q-function, Qπ(s, a), which denotes the expected return after executing\nan action a in a state s and following the policy π afterwards. Formally, the Q-function is defined as Qπ(s, a) =\nE P∞t=1 γt−1rt|s1 = s, a1 = a . The Q-function also satisfies a recursive relation also known as the Bellman equation:\nQπ(s, a) = r(s, a) + γEp(s′|s,a)π(a′|s′) [Qπ(s′, a′)]. Using the Q-function and a parameterized policy πθ, the goal of\nreinforcement learning can be simply stated as finding a policy parameter θ which maximizes the expected Q-function9: max Ep(s)πθ(a|s) [Qπ(s, a)] . (105) In practice, the Q-function is unknown and is commonly approximated by a parameterized function bQω(s, a) with parameter\nω learned such that it satisfies the Bellman equation on average: minω Ep(s)β(a|s)[(r(s, a) + γEπθ(a′|s′)[bQ˜ω(s′, a′)] −\nbQω(s, a))2], where β(a|s) is a behavior policy used to collect samples and ˜ω is either a copy or a slowly updated value of ω\nwhose ∇ω bQ˜ω(s′, a′) = 0. By using an approximated Q-function, the goal of RL is to find a policy parameter maximizing\nthe expected value of bQω(s, a):\nh i h i max\nθ Ep(s)πθ(a|s) bQω(s, a) := minθ −Ep(s)πθ(a|s) bQω(s, a) := minθ F(θ). 
(106) In the remainder, we consider the minimization problem minθ F(θ) to be consistent with the variational optimization\nproblem setting in the main text. Stochastic Policy Gradient and Deterministic Policy Gradient The RL objective in (106) is often minimized by gradient descent.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 69,
"total_chunks": 91,
"char_count": 2159,
"word_count": 362,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a0b14377-deb2-4068-820d-145acf135c7a",
"text": "The gradient computation depends on stochasticity of πθ. For a stochastic policy, πθ(a|s), policy gradient or REINFORCE can be computed using the likelihood ratio trick: h i h i\nF(θ) = −Ep(s)πθ(a|s) bQω(s, a) , ∇θF(θ) = −Ep(s)πθ(a|s) ∇θ log πθ(a|s)bQω(s, a) . (107) For a deterministic policy πθ(s), deterministic policy gradient (DPG) (Silver et al., 2014) can be computed using the\nchain-rule: h i h i\nF(θ) = −Ep(s) bQω(s, πθ(s)) , ∇θF(θ) = −Ep(s) ∇θπθ(s)∇a bQω(s, πθ(s)) . (108)",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 70,
"total_chunks": 91,
"char_count": 481,
"word_count": 81,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fed2cc89-abac-48a6-b756-f1ee21c43956",
"text": "As discussed by Silver et al. (2014), the deterministic policy gradient is more advantageous than the stochastic counter part\ndue to its lower variance. However, the issue of a deterministic policy is that it does not perform exploration by itself. In\npractice, exploration is done by injecting a noise to the policy output, i.e., a = πθ(s) + ϵ where ϵ is a noise from some\nrandom process such as Gaussian noise. However, action-space noise may be insufficient in some problems (R¨uckstieß\net al., 2010). Next, we discussed parameter-based exploration approach where exploration is done in the parameter space. Then, we show that such exploration can be achieved by simply applying VadaGrad and Vadam to policy gradient methods. Parameter-based Exploration Policy Gradient Parameter-based exploration policy gradient (R¨uckstieß et al., 2010) relaxes the RL objective in (106) by assuming that the\nparameter θ is sampled from a Gaussian distribution q(θ) := N(θ|µ, σ2) with a diagonal covariance. Formally, it solves\nan optimization problem min EN(θ|µ,σ2)[F(θ)], (109)\nµ,σ2 9In this section, we omit a case where the state distribution p(s) depends on the policy. In practice many policy gradient methods\n(especially actor-critic type methods) also often ignore the dependency between the state distribution and the policy.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 71,
"total_chunks": 91,
"char_count": 1323,
"word_count": 204,
"chunking_strategy": "semantic"
},
{
"chunk_id": "29dac72b-a8f4-4cda-9ed5-4361b0ecddc3",
"text": "Bayesian Deep Learning by Weight-Perturbation in Adam where F(θ) is either the objective function for the stochastic policy in (107) or the deterministic policy in (108). In each\ntime step, the agent samples a policy parameter θ ∼N(θ|µ, σ2) and uses it to determine an action10. This exploration\nstrategy is advantageous since the stochasticity of θ allows the agent to exhibit much more richer explorative behaviors\nwhen compared with exploration by action noise injection. Notice that (109) is exactly the variational optimization problem discussed in the main text. As explained in the main text,\nthis problem can be solved by our methods.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 72,
"total_chunks": 91,
"char_count": 642,
"word_count": 102,
"chunking_strategy": "semantic"
},
{
"chunk_id": "52317d80-7a3e-4fee-bb11-df4c5ee63a0d",
"text": "In the next section, we apply VadaGrad and Vadam to the deep deterministic\npolicy gradient and show that a parameter-exploration strategy induced by our methods allows the agent to achieve a better\nperformance when compared to existing methods. Parameter-based Exploration Deep Deterministic Policy Gradient via VadaGrad and Vadam While parameter-based exploration strategy can be applied to both stochastic and deterministic policies, it is commonly\napplied to a deterministic policy. In our experiment, we adopt a variant of deterministic policy gradient called deep\ndeterministic policy gradient (DDPG) (Lillicrap et al., 2015). In DDPG, the policy πθ(s) and the Q-function bQω(s, a) are\nrepresented by deep neural networks. To improve stability, DDPG introduces target networks, π˜θ(s) and bQ˜ω(s, a), whose\nweight parameters are updated by ˜θ ←(1 −τ)˜θ + τθ and ˜ω ←(1 −τ)˜ω + τω for 0 < τ < 1. The target network are\nused to update the Q-function by solving min Ep(s)β(a|s) r(s, a) + γ bQ˜ω(s′, π˜θ(s′)) −bQω(s, a) . (110) ω h i\nGradient descent on (110) yields an update: ω ←ω+κEp(s)β(a|s) r(s, a) + γ bQ˜ω(s′, π˜θ(s′)) −bQω(s, a) ∇ω bQω(s, a) ,\nwhere κ > 0 is the step-size.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 73,
"total_chunks": 91,
"char_count": 1182,
"word_count": 194,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0297431a-c34b-430b-90ad-500426920a88",
"text": "DDPG also uses a replay buffer which is a first-in-first-out queue that store past collected\nsamples. In each update iteration, DDPG uniformly draws M minibatch training samples from the replay buffer to\napproximate the expectations. To apply VadaGrad to solve (109), in each update iteration we sample θt ∼N(θ|µt, σ2t) and then updates the mean and\nvariance using the VadaGrad update in (96). The deterministic policy gradient ∇θF(θ) can be computed using the chain-rule\nas shown in (108). The computational complexity of DDPG with VadaGrad is almost identical to DDPG with Adagrad,\nexcept that we require gradient computation at sampled weight θt. Algorithm 1 below outlines our parameter-based\nexploration DPG via VadaGrad where the only difference from DDPG is the sampling procedure in line 3. Note that we use\nN = 1 in this case. Also note that the target policy network π˜µ does not performed parameter-exploration and it is updated\nby the mean µ instead of a sampled weight θ.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 74,
"total_chunks": 91,
"char_count": 984,
"word_count": 161,
"chunking_strategy": "semantic"
},
{
"chunk_id": "dccf8fbf-c9be-4158-aff9-be99f873d603",
"text": "In VadaGrad, the precision matrix always increases overtime and it guarantees that the policy eventually becomes deterministic. This is beneficial since it is known that there always exists a deterministic optimal policy for MDP. However, this\nbehavior may not be desirable in practice since the policy may become deterministic too fast which leads to premature\nconvergence. Moreover, for VadaGrad the effective gradient step-size, √st+1α , will be close to zero for a nearly deterministic\nvariational distribution q which leads to no policy improvement. This issue can be avoided by applying Vadam instead\nof VadaGrad. As will be shown in the experiment, Vadam allows the agent to keep explore and avoids the premature\nconvergence issue. Note that parameter-based exploration RL is not a VI problem and we require some modification to the\nVadam update, as shown in line 3 of Algorithm 2 below. We perform experiment using the Half-Cheetah task from the OpenAI gym platform.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 75,
"total_chunks": 91,
"char_count": 974,
"word_count": 154,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5d3a6c97-48c8-45ea-b6c2-31957024505e",
"text": "We compare DDPG with VadaGrad\nand Vadam against four baseline methods. • SGD-Plain: the original DDPG without any noise injection optimized by SGD, • Adam-Plain: the original DDPG without any noise injection optimized by Adam, • SGD-Explore: a naive parameter exploration DDPG based on VO optimized by SGD, and • Adam-Explore: a naive parameter exploration DDPG based on VO optimized by Adam. 10The original work of (R¨uckstieß et al., 2010) considers an episode-based method where the policy parameter is sampled only at the\nstart of an episode. However, we consider DPG which is a step-based method. Therefore, we sample the policy parameter in every time\nstep. Note that we may only sample the policy parameter at the start of the episode as well.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 76,
"total_chunks": 91,
"char_count": 750,
"word_count": 123,
"chunking_strategy": "semantic"
},
{
"chunk_id": "18bb9ca8-939c-4ce6-8b19-80613477e0f6",
"text": "Bayesian Deep Learning by Weight-Perturbation in Adam Method α γ2 γ1 α(σ) γ(σ)2 γ(σ)1\nVadaGrad 10−2 0.99 - - - -\nVO/VI\nVadam 10−4 0.999 0.9 - - -\nSGD-Plain 10−4 - - - - -\nPlain\nAdam-Plain 10−4 0.999 0.9 - - -\nSGD-Explore 10−4 - - 10−2 - -\nExplore\nAdam-Explore 10−4 0.999 0.9 10−2 0.999 0.9 Hyper-parameter setting for the deep reinforcement learning experiment. We refer to Algorithm 1 and 2 for the meaning of each\nhyper-parameter.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 77,
"total_chunks": 91,
"char_count": 432,
"word_count": 80,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d0549dc4-5e8f-4e49-b95c-fe38c92521cd",
"text": "In SGD-Explore and Adam-Explore, we separately optimizes the mean and variance of the Gaussian distribution: µt+1 = µt −α∇µEN(θ|µt,σ2t )[F(θ)], σ2t+1 = σ2t −α(σ)∇σ2EN(θ|µt,σ2t )[F(θ)], (111)\nwhere α > 0 is the mean step-size and α(σ) > 0 is the variance step-size. The gradients of EN(θ|µt,σ2t )[F(θ)] are computed\nby chain-rule and automatic-differentiation. For Adam-Explore, two Adam optimizers with different scaling vectors are\nused to independently update the mean and variance. All methods use the DDPG network architectures as described by Lillicrap et al. (2015); two-layer neural networks with 400\nand 300 ReLU hidden units. The output of the policy network is scaled by a hyperbolic tangent to bound the actions. The\nminibatch size is M = 64.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 78,
"total_chunks": 91,
"char_count": 753,
"word_count": 118,
"chunking_strategy": "semantic"
},
{
"chunk_id": "db37f3f4-1912-4fa8-8053-78af98e0622c",
"text": "All methods optimize the Q-network by Adam with step-size κ = 10−3. The target Q-network\nand target policy network use a moving average step-size τ = 10−3. The expectation over the variational distribution of all\nmethods are approximated using one MC sample. For optimizing the policy network, we use the step-sizes given in Table 3. For Adam we use δ = 10−8. We also use the same value of λ = 10−8 for Vadam. The initial precision for SGD-Explore,\nAdam-Explore, and VadaGrad is σ−2t=1 = 10000. For Vadam we use st=1 = 0 for the initial second-order moment estimate and add a constant value of c = 10000 to the\nprecision matrix σ−2t = st + λ + c for sampling (see line 3 of Algorithm 2). We do this for two reasons.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 79,
"total_chunks": 91,
"char_count": 715,
"word_count": 133,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a2517a3c-ab44-4e90-82ea-501948da17e2",
"text": "First, we set\nst=1 = 0 so that the initial conditions and hyper-parameters of Vadam and Adam-Plain are exactly the same. Second, we\nadd a constant c to prevent an ill-conditioned precision matrix during sampling. Without this constant, the initial precision\nvalue of σ−2t=1 = λ = 10−8 is highly ill-conditioned and sampled weights do not contain any information. This is highly\nproblematic in RL since we do not have training samples at first and the agent needs to collect training samples from scratch. Training samples collected initially using σ−2t=1 = 10−8 is highly uninformative (e.g., all actions are either the maximum or\nminimum action values) and the agent cannot correctly estimate uncertainty. We emphasize that the constant is only used for\nsampling and not used for gradient update. We expect that this numerical trick is not required for a large enough value of λ,\nbut setting an appropriate value of λ is not trivial in deep learning and is not in the scope of this paper. As such, we leave\nfinding a more appropriate approach to deal with this issue as a future work.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 80,
"total_chunks": 91,
"char_count": 1085,
"word_count": 185,
"chunking_strategy": "semantic"
},
{
"chunk_id": "40f90dde-0832-456b-94c3-698861819440",
"text": "We perform experiment using the Half-Cheetah task from the OpenAI gym platform (Brockman et al., 2016). We measure\nthe performance of each method by computing cumulative rewards along 20 test episodes without exploration. The early\nlearning performance of Vadam, Adam-Plain, and Adam-Explore in Figure 5 shows that Vadam learns faster than the other\nmethods. We conjecture that exploration through a variational distribution allows Vadam agent to collect more information\ntraining samples when compare to Adam-Plain. While Adam-Explore also performs exploration through a variational\ndistribution, its performance is quite unstable with high fluctuations. This fluctuation in Vadam-Explore is likely because the\nmean and variance of the variational distribution are optimized independently. In contrast, Vadam uses natural-gradient to\noptimizes the two quantities in a strongly correlated manner, yielding a more stable performance.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 81,
"total_chunks": 91,
"char_count": 932,
"word_count": 130,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6164c5b7-35d4-4087-84ae-942e98d5c881",
"text": "Figure 6 shows the learning performance for a longer period of training for all methods. We can see that VadaGrad learns\nfaster than SGD and Adam-based methods initially, but it suffers from a premature convergence and are outperformed\nby Adam-based methods. In contrast, Vadam does not suffer from the premature convergence. We can also observe that\nwhile Adam-based method learn slower than Vadam initially, they eventually catch up with Vadam and obtain a comparable\nperformance at the 3 million time-steps. We conjecture that this is because exploration strategy is very important in the early\nstage of learning where the agent does not have sufficient amount of informative training samples. As learning progress and\nthere is a sufficient amount of informative samples, exploration would not help much. Nonetheless, we can still see that\nVadam and Adam-Explore give slightly better performance than Adam-Plain, showing that parameter-based exploration is Bayesian Deep Learning by Weight-Perturbation in Adam",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 82,
"total_chunks": 91,
"char_count": 1013,
"word_count": 153,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a2c4d2ec-97e7-4770-a4ee-a9a7ed710929",
"text": "still beneficial for DDPG. Toy Example on Local-Minima Avoidance using Vadam Fig. 7 shows an illustration of variational optimization on a two-dimensional objective function. The objective function\nh(x, y) = exp{−(x sin(20y) + y sin(20x))2 −(x cos(10y) −y sin(10x))2} is taken from Fig. 5.2 in Robert & Casella\n(2005). Variational optimization is performed by gradually turning off the KL-term for Vadam, thus annealing Vadam\ntowards VadaGrad. This is referred to as \"Vadam to VadaGrad\". We show results for 4 multiple runs of each method started\nwith a different initial values.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 83,
"total_chunks": 91,
"char_count": 579,
"word_count": 90,
"chunking_strategy": "semantic"
},
{
"chunk_id": "161b8a11-7c57-4c22-967d-a92a9328f2b9",
"text": "The figure shows that variational optimization can navigate the landscape to reach the\nflat (and global) minimum better than gradient descent. Experiment on Improving \"Marginal Value of Adaptive-Gradient Methods\" Recently, Wilson et al. (2017) showed some examples where adaptive gradient methods, namely Adam and AdaGrad, generalize\nworse than SGD. We repeated their experiments to see whether weight-perturbation in VadaGrad improves the generalization\nperformance of AdaGrad. We first consider an experiment on character-level language modeling on the War\nand Peace novel dataset (shown in Fig. 2b of Wilson et al. (2017)). Figure 8 shows the test error of SGD, AdaGrad, Adam and\nVadaGrad. We can see that VadaGrad generalizes well and achieves the same performance as SGD, unlike AdaGrad and\nAdam. We also repeated the CIFAR-10 experiment discussed in the paper, and found that the improvement from VadaGrad\nwas minor. We believe that regularization techniques such as batch normalization, batch flip, and dropout play an important\nrole for the CIFAR-10 dataset, which is why we did not see an improvement from VadaGrad. Further investigation will\nbe done in future work.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 84,
"total_chunks": 91,
"char_count": 1187,
"word_count": 182,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ed309da6-d1cf-45d9-be46-de91eb599adc",
"text": "We use the following hyper-parameter setting in this experiment. For all methods, we divide the step-size α by 10 once\nevery K epochs, as described by Wilson et al. (2017). For VadaGrad, AdaGrad, and Adam, we fix the value of the scaling-vector\nstep-size β and do not decay it. For AdaGrad and Adam, the initial value of the scaling vector is 0. For VadaGrad, we use\nthe initial value s_1 = 10^{-4} to avoid numerical issues when sampling from a Gaussian with zero precision. (Recall that VadaGrad\ndoes not have a prior λ.) We perform a grid search to find the best values of the initial α, K, and β for all methods except Adam,\nwhich uses the default values β = 0.999 and γ = 0.9. The best hyper-parameter values, which give the minimum test loss,\nare given in Table 4.\nMethod   | K  | α      | β\nVadaGrad | 40 | 0.0075 | 0.5\nSGD      | 80 | 0.5    | -\nAdaGrad  | 80 | 0.025  | 0.5\nAdam     | 40 | 0.0012 | 0.999\nTable 4: Hyper-parameter settings for the \"Marginal Value of Adaptive-Gradient Methods\" experiment.\nAlgorithm 1 Parameter-based exploration DDPG via VadaGrad\n1: Initialize: Variational distribution N(θ|µ_1, s_1^{-1}) with random initial mean and initial precision s_1 = 10000.\n2: for Time step t = 1, ..., ∞ do\n3: Sample policy parameter θ_t ∼ N(θ|µ_t, s_t^{-1}).\n4: Observe s_t, execute a_t = π_{θ_t}(s_t), observe r_t, and transition to s'_t. Then add (s_t, a_t, r_t, s'_t) to a replay buffer D.\n5: Draw M minibatch samples {(s_i, a_i, r_i, s'_i)}_{i=1}^M from D.\n6: Update the Q-network weight ω by stochastic gradient descent or Adam: ω_{t+1} = ω_t + κ Σ_{i=1}^M [r_i + γ Q̂_{ω̃_t}(s'_i, π_{µ̃_t}(s'_i)) − Q̂_{ω_t}(s_i, a_i)] ∇_ω Q̂_{ω_t}(s_i, a_i)/M.\n7: Compute the deterministic policy gradient using the sampled policy parameter:",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 85,
"total_chunks": 91,
"char_count": 1636,
"word_count": 289,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0fde7c57-689a-4cf6-801b-242c42e83d4c",
"text": "∇̂_θF(θ_t) = −Σ_{i=1}^M ∇_θπ_{θ_t}(s_i) ∇_a Q_{ω_{t+1}}(s_i, π_{θ_t}(s_i))/M.\n8: Update the mean µ and variance σ^2 by VadaGrad:\nµ_{t+1} = µ_t − α ∇̂_θF(θ_t)/√s_{t+1}, s_{t+1} = s_t + (1 − γ_2)[∇̂_θF(θ_t)]^2.\n9: Update target network parameters ω̃_{t+1} and µ̃_{t+1} by moving average: ω̃_{t+1} = (1 − τ)ω̃_t + τω_{t+1}, µ̃_{t+1} = (1 − τ)µ̃_t + τµ_{t+1}.\nAlgorithm 2 Parameter-based exploration DDPG via Vadam\n1: Initialize: Initial mean µ_1, 1st-order moment m_1 = 0, 2nd-order moment s_1 = 0, prior λ = 10^{-8}, constant c = 10000.\n2: for Time step t = 1, ..., ∞ do\n3: Sample policy parameter θ_t ∼ N(θ|µ_t, (s_t + λ + c)^{-1}).\n4: Observe s_t, execute a_t = π_{θ_t}(s_t), observe r_t, and transition to s'_t. Then add (s_t, a_t, r_t, s'_t) to a replay buffer D.\n5: Draw M minibatch samples {(s_i, a_i, r_i, s'_i)}_{i=1}^M from D.\n6: Update the Q-network weight ω by stochastic gradient descent or Adam: ω_{t+1} = ω_t + κ Σ_{i=1}^M [r_i + γ Q̂_{ω̃_t}(s'_i, π_{µ̃_t}(s'_i)) − Q̂_{ω_t}(s_i, a_i)] ∇_ω Q̂_{ω_t}(s_i, a_i)/M.\n7: Compute the deterministic policy gradient using the sampled policy parameter:",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 86,
"total_chunks": 91,
"char_count": 957,
"word_count": 172,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7be72b75-787c-49d0-90d9-b9f1e739012b",
"text": "∇̂_θF(θ_t) = −Σ_{i=1}^M ∇_θπ_{θ_t}(s_i) ∇_a Q_{ω_{t+1}}(s_i, π_{θ_t}(s_i))/M.\n8: Update and bias-correct the 1st-order moment m and the 2nd-order moment s by Vadam: m_{t+1} = γ_1 m_t + (1 − γ_1)(∇̂_θF(θ_t) + λµ_t), s_{t+1} = γ_2 s_t + (1 − γ_2)[∇̂_θF(θ_t)]^2,\nm̂_{t+1} = m_{t+1}/(1 − γ_1^t), ŝ_{t+1} = s_{t+1}/(1 − γ_2^t).\n9: Update the mean µ using the moment estimates by Vadam: µ_{t+1} = µ_t − α m̂_{t+1}/(√ŝ_{t+1} + λ).\n10: Update target network parameters ω̃_{t+1} and µ̃_{t+1} by moving average: ω̃_{t+1} = (1 − τ)ω̃_t + τω_{t+1}, µ̃_{t+1} = (1 − τ)µ̃_t + τµ_{t+1}.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 87,
"total_chunks": 91,
"char_count": 476,
"word_count": 86,
"chunking_strategy": "semantic"
},
{
"chunk_id": "09ed59bb-be49-48d8-9bb5-c870d731ba54",
"text": "[Figure: curves for Vadam, Adam-Plain, Adam-Explore; x-axis: Time step (x1000), 0–3000.] The early learning performance of Vadam, Adam-Plain and Adam-Explore on the half-cheetah task in the reinforcement\nlearning experiment. Vadam shows faster learning in this early stage of learning. The mean and standard error are computed over 5 trials.\n[Figure: cumulative reward vs. time step (x1000), 0–3000; curves for Vadam, VadaGrad, SGD-Plain, SGD-Explore, Adam-Plain, Adam-Explore.]",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 88,
"total_chunks": 91,
"char_count": 524,
"word_count": 78,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f26c419f-2dee-4478-a0f4-0286e1eca828",
"text": "The performance of all evaluated methods on the half-cheetah task in the reinforcement learning experiment. Vadam and\nthe Adam-based methods perform well overall and give comparable final performance. VadaGrad also learns well but shows signs of premature\nconvergence. The SGD-based methods do not learn well throughout. The mean and standard error are computed over 5 trials.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 89,
"total_chunks": 91,
"char_count": 366,
"word_count": 54,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4e079c7f-c2da-4dde-8827-4ea301883fd6",
"text": "Illustration of variational optimization on a complex 2D objective function. Variational optimization is performed from four\ndifferent initial positions. The four runs are shown as solid lines in different colors.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 90,
"total_chunks": 91,
"char_count": 267,
"word_count": 37,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cbcf00ee-b1f9-4f55-b222-f791d18c1f0c",
"text": "Gradient descent (shown as dotted, black lines) is also\ninitialized at the same locations. 'Vadam to VadaGrad' shows the ability to navigate the landscape to reach the flat (and global) minimum,\nwhile gradient descent gets stuck in various locations.\n[Figure: test loss vs. epoch (0–200); final test losses: Adam: 1.2834, AdaGrad: 1.2783, SGD: 1.2575, VadaGrad: 1.2551.] Results for character-level language modeling using the War and Peace dataset for the \"Marginal Value of Adaptive-Gradient\nMethods\" experiment. We repeat the results shown in Fig. 2 (b) of Wilson et al. (2017) and show that VadaGrad does not suffer from the\nissues pointed out in that paper, and that it performs comparably to SGD.",
"paper_id": "1806.04854",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"authors": [
"Mohammad Emtiyaz Khan",
"Didrik Nielsen",
"Voot Tangkaratt",
"Wu Lin",
"Yarin Gal",
"Akash Srivastava"
],
"published_date": "2018-06-13",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1806.04854v3",
"chunk_index": 91,
"total_chunks": 91,
"char_count": 715,
"word_count": 116,
"chunking_strategy": "semantic"
}
]