[
{
"chunk_id": "3ff80408-2a78-4a94-b31b-6ae784580020",
"text": "A0C: Alpha Zero in Continuous Action Space\nThomas M. Moerland∗†, Joost Broekens∗, Aske Plaat† and Catholijn M. Jonker\n∗Dep. of Computer Science, Delft University of Technology, The Netherlands\n†Dep. of Computer Science, Leiden University, The Netherlands\n24 May 2018\nAbstract\nA core novelty of Alpha Zero is the interleaving of tree search and deep learning, which has proven very successful in board games like Chess, Shogi and Go. These games have a discrete action space. However, many real-world reinforcement learning domains have continuous action spaces, for example in robotic control, navigation and self-driving cars. This paper presents the necessary theoretical extensions of Alpha Zero to deal with continuous action space. We also provide some preliminary experiments on the Pendulum swing-up task, empirically showing the feasibility of our approach. Thereby, this work provides a first step towards the application of iterated search and learning in domains with a continuous action space.\nAlpha Zero has achieved state-of-the-art, super-human performance in Chess, Shogi (Silver et al., 2017a) and the game of Go (Silver et al., 2016, 2017b). The key innovation of Alpha Zero compared to traditional reinforcement learning approaches is the use of a small, nested tree search as a policy evaluation.1 Whereas traditional reinforcement learning treats each environment step or trace as an individual training target, Alpha Zero aggregates the information of multiple traces in a tree, and eventually aggregates these tree statistics into targets to train a neural network. The neural network is then used as a prior to improve new tree searches. This closes the loop between search and function approximation (Figure 1). In Section 6 we further discuss why this works so well. While Alpha Zero has been very successful in two-player games with discrete action spaces, many real-world reinforcement learning domains have a continuous action space.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 0,
"total_chunks": 26,
"char_count": 1910,
"word_count": 290,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fd2f9ac6-836e-4cfa-ae61-083cfc9925a0",
"text": "We will now list the core contributions of this paper. Compared to\nthe Alpha Zero paradigm for discrete action spaces, we require: A Monte Carlo Tree Search (MCTS) method that works in continuous action space.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 1,
"total_chunks": 26,
"char_count": 209,
"word_count": 35,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2f227132-5776-4407-a029-c4cf4b1fbff9",
"text": "We build here on earlier results on progressive widening (Section 3.1). Incorporation of a continuous prior to steer a new MCTS iteration. While Alpha Zero uses the discrete density as a prior in a (P)UCT formula (Rosin, 2011; Kocsis and Szepesvári, 2006), we need to leverage a continuous density (which is unbounded) to direct the next MCTS iteration (Section 3.2). Alpha Zero transforms the MCTS visitation counts to a discrete probability distribution. We need to estimate a continuous density from a set of support points, and specify an appropriate training loss in continuous policy space (Section 4).\n[Footnote 1: Additionally, the tree search provides an efficient exploration method, which is a key challenge in reinforcement learning (Moerland et al., 2017).]\n[Figure 1: Iterated tree search and function approximation.]\nThe remainder of this paper is organized as follows. Section 2 presents essential preliminaries on reinforcement learning and MCTS. Section 3 discusses the required MCTS modifications for a continuous action space with a continuous prior (Fig. 1, upper part of the loop). In Section 4 we cover the generation of training targets from the tree search and specify an appropriate neural network loss (Fig. 1, lower part of the loop). Sections 5, 6 and 7 present experiments, discussion and conclusions.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 2,
"total_chunks": 26,
"char_count": 1314,
"word_count": 206,
"chunking_strategy": "semantic"
},
{
"chunk_id": "709a2e6c-43d0-4145-baaa-08068809154b",
"text": "Markov Decision Process   We adopt a finite-horizon Markov Decision Process (MDP) (Sutton and Barto, 2018) given by the tuple {S, A, f, R, γ, T}, where S ⊆ R^ns is a state set, A ⊆ R^na a continuous action set, f : S × A → P(S) denotes a transition function, R : S × A → R a (bounded) reward function, γ ∈ (0, 1] a discount parameter and T the time horizon. At every time-step t we observe a state st ∈ S and pick an action at ∈ A, after which the environment returns a reward rt = R(st, at) and next state st+1 ∼ f(st, at). We act in the MDP according to a stochastic policy π : S → P(A). Define the (policy-dependent) state value Vπ(st) = Eπ[ Σ_{k=0}^{T} γ^k · rt+k ] and state-action value Qπ(st, at) = Eπ[ Σ_{k=0}^{T} γ^k · rt+k | at ], respectively. Our goal is to find a policy π that maximizes this cumulative, discounted sum of rewards.\nMonte Carlo Tree Search   We present a brief introduction of the well-known MCTS algorithm (Coulom, 2006; Browne et al., 2012). In particular, we discuss a variant of the PUCT algorithm (Rosin, 2011), as also used in Alpha Zero (Silver et al., 2017a,b). Every action node in the tree stores statistics {n(s, a), W(s, a), Q(s, a)}, where n(s, a) is the visitation count, W(s, a) the cumulative return over all roll-outs through (s, a), and Q(s, a) = W(s, a)/n(s, a) is the mean action value estimate. PUCT alternates four phases:",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 3,
"total_chunks": 26,
"char_count": 1338,
"word_count": 244,
"chunking_strategy": "semantic"
},
{
"chunk_id": "22e4e761-9ba8-4388-a087-ba8a9a28efc9",
"text": "1. Select   In the first stage, we descend the tree from the root node according to: πtree(a|s) = argmax_a [ Q(s, a) + cpuct · πφ(a|s) · √n(s) / (n(s, a) + 1) ]   (1) where n(s) = Σ_a n(s, a) is the total number of visits to state s in the tree, cpuct ∈ R+ is a constant that scales the amount of exploration/optimism, and πφ(a|s) is the probability assigned to action a by the network.2 The tree policy is followed until we either reach a terminal state or select an action we have not tried before.\n[Footnote 2: This equation differs from the standard UCT-like formulas in two ways. The πφ(a|s) term scales the confidence interval based on prior knowledge, as stored in the policy network. The +1 term in the denominator ensures that the policy prior already affects the decision when there are unvisited actions. Otherwise, every untried action would be tried at least once, since without the +1 term Eq. 1 becomes ∞ for untried actions. This is undesirable for large action spaces and small trees, where we directly want to prune the actions that we already know are inferior from prior experience.]\n2. Expand   We next expand the tree with a new leaf state sL,3 obtained from simulating the environment with the new action from the last state in the current tree.\n3. Roll-out   We then require an estimate of the value V(sL) of the new leaf node, for which MCTS uses the sum of reward of a (random) roll-out R(sL). In Alpha Zero, this gets replaced by the prediction of a value network R(sL) := Vφ(sL).\n4.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 4,
"total_chunks": 26,
"char_count": 1484,
"word_count": 272,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8a2bbc45-61a0-445c-b91c-b81c2fc25471",
"text": "Back-up   Finally, we recursively back-up the results in the tree nodes. Denote the current forward trace in the tree as {s0, a0, s1, ..., sL−1, aL−1, sL}. Then, for each state-action edge (si, ai), L > i ≥ 0, we recursively estimate the state-action value as R(si, ai) = r(si, ai) + γ · R(si+1, ai+1),   (2) where R(sL, aL) := R(sL). We then increment W(si, ai) with the new estimate R(si, ai), increment the visitation count n(si, ai) with 1, and set the mean estimate to Q(si, ai) = W(si, ai)/n(si, ai). We repeatedly apply this back-up one step higher in the tree until we reach the root node s0. This procedure is repeated until the overall MCTS trace budget Ntrace is reached.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 5,
"total_chunks": 26,
"char_count": 671,
"word_count": 120,
"chunking_strategy": "semantic"
},
{
"chunk_id": "45db2b15-6404-4f93-958b-e7ebef08c7b4",
"text": "MCTS returns a set of root actions A0 = {a0,0, a0,1, .., a0,m} with associated counts N0 = {n(s0, a0,0), n(s0, a0,1), .., n(s0, a0,m)}. Here m denotes the number of child actions, which for Alpha Zero is always fixed to the cardinality of the discrete action space m = |A|. We select the real action at to play in the environment by sampling from the probability distribution obtained from normalizing the action counts at the root s0 (= st): at ∼ ˆπ(a|s0), where ˆπ(a|s0) = n(s0, a) / n(s0)   (3) and n(s0) = Σ_{b∈A0} n(s0, b). Note that n(s0) ≥ Ntrace, since we store the subtree that belongs to the picked action at for the MCTS at the next timestep.\nNeural Networks   We introduce two neural networks - similar to Alpha Zero - to estimate a parametrized policy πφ(a|s) and the state value Vφ(s). Both networks share the initial layers. The joint set of parameters of both networks is denoted by φ. The neural networks are trained on targets generated by the MCTS procedure. These training targets, extracted from the tree search, are denoted by ˆπ(a|s) and ˆV(s).\n3 Tree Search in Continuous Action Space\nAs noted in the introduction, we require two modifications to the MCTS procedure: 1) a method to deal with continuous action spaces, and 2) a way to include a continuous policy network into the MCTS search.\n3.1 Progressive Widening\nDuring MCTS with a discrete action space we evaluate the PUCT formula for all actions.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 6,
"total_chunks": 26,
"char_count": 1416,
"word_count": 247,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0b41498f-8707-4886-9fdc-e5910b39c6f8",
"text": "However, in continuous action space we cannot enumerate all actions, i.e., there are actually infinitely many actions in a continuous set. A practical solution to this problem is progressive widening (Coulom, 2007; Chaslot et al., 2008), where we make the number of child actions of state s in the tree m(s) a function of the total number of visits to that state n(s). This implies that actions with good returns, which will get more visits, will also gradually get more child actions for consideration. In particular, Couëtoux et al. (2011) uses m(s) = cpw · n(s)^κ   (4) for constants cpw ∈ R+ and κ ∈ (0, 1), making m(s) a polynomial (root) function of n(s). The idea of progressive widening was introduced by Coulom (2007), who made m(s) a logarithmic function of n(s). Although originally conceived for discrete domains, this technique turns out to be an effective solution for continuous action space as well (Couëtoux et al., 2011).\n[Footnote 3: We use superscripts, as in s^t, to index real environment states and actions, subscripts, as in s_d, to index states and actions at depth d in the search tree, and double subscripts a_{d,j} to index a specific child action j at depth d. For example, a_{0,0} is the first child action at the root s_0. At every timestep t, the tree root s0 := st, i.e. the current environment state becomes the tree root.]\n3.2 Continuous policy network prior\nFor now assume we manage to train a policy network πφ(a|s) from the results of the MCTS procedure. Alpha Zero can enumerate the probability for all available discrete actions, and uses this probability as a prior scaling on the upper confidence bound term in the UCT formula (Eq. 1). For the continuous policy space, we could use a similar equation, where we use πφ(a|s) of the considered a as predicted by the network.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 7,
"total_chunks": 26,
"char_count": 1768,
"word_count": 305,
"chunking_strategy": "semantic"
},
{
"chunk_id": "24950b33-8206-41c7-aaa8-efb8d987516b",
"text": "However, the continuous πφ(a|s) is unbounded.4 This gives us the risk of rescaling/stretching the confidence intervals too much. Another option - which we consider in this work - is to use the policy network to sample new child actions in the tree search (when adding a new action based on progressive widening). Thereby, the policy net steers the actions that we will consider in the tree search. This has a similar effect as Eq. 1 has for Alpha Zero, as it effectively prunes away child actions in subtrees of which we already know that they perform poorly.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 8,
"total_chunks": 26,
"char_count": 562,
"word_count": 97,
"chunking_strategy": "semantic"
},
{
"chunk_id": "381cc25c-e750-4f39-b433-a25d00adf4a2",
"text": "4 Neural network training in continuous action space We next want to use the MCTS output to improve our neural networks. Compared to Alpha Zero, the\ncontinuous action space forces us to come up with a different policy network specification, policy\ntarget calculation and training loss.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 9,
"total_chunks": 26,
"char_count": 285,
"word_count": 46,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cbbbed93-99c1-4894-a23f-81da7d5b4875",
"text": "These aspects are covered in Section 4.1. Afterwards, we briefly\ndetail the value network training procedure, including a slight variant of the value target estimation\n(Section 4.2). Policy Network Distribution We require a neural network that outputs a continuous density. However, continuous action spaces usually have some input bounds. For example, when we learn\nthe torques or voltages on a robot manipulator, then a too extreme torque/voltage may break the\nmotor altogether.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 10,
"total_chunks": 26,
"char_count": 480,
"word_count": 72,
"chunking_strategy": "semantic"
},
{
"chunk_id": "90651b58-94eb-4efe-b8c3-325615b322ba",
"text": "Therefore, continuous action spaces are generally symmetrically bounded to some [−cb, cb] interval, for scalar cb ∈ R+. To ensure that our density predicts in this range, we use a transformation of a factorized Beta distribution πφ(a|s) = g(u), with elements ui ∼ Beta(αi(φ), βi(φ)) and deterministic transformation g(·). Details are provided in Appendix A. Note that the remainder of this section holds for any πφ(a|s) network output distribution from which we know how to sample and evaluate (log) densities.\nTraining Target   We want to transform the result of the MCTS with progressive widening to a continuous target density ˆπ (to train our neural network with). Recall that MCTS returns the sets A0 and N0 of root actions and root counts, respectively. We cannot normalize these counts like Alpha Zero does (Eq. 3) for the discrete case. The only assumption, similar to Alpha Zero, that we make here is that the density at a root action a0,i is proportional to the visitation counts, i.e.,5 ˆπ(ai|s) = n(s, ai)^τ / Z(s, τ)   (5) where τ ∈ R+ specifies some temperature parameter, and Z(s, τ) is a normalization term (that is assumed to not depend on ai, as the density at the support points is only proportional to the counts). Note that this does not define a proper density, as we never specified a density in between the support points. However, we can ignore this issue, as we will only consider the loss at the support points.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 11,
"total_chunks": 26,
"char_count": 1433,
"word_count": 243,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a4cb44cb-d7dd-470f-b56a-45edbd584962",
"text": "[Footnote 4: For a discrete probability distribution, π(a) ≤ 1 ∀a. However, although the probability density function (pdf) of continuous random variables integrates to 1, i.e. ∫ π(a|s) da = 1, this does not bound the value of the pdf π(a) at a particular point a, i.e. π(a) ∈ [0, ∞).]\n[Footnote 5: The remainder of this section always concerns the root state s0 and root actions a0,i. Therefore, we omit the depth subscript (of 0) for readability.]\nLoss   In short, our main idea is to leave the normalization and generalization of the policy over the action space to the network loss. If we specify a network output distribution that enforces ∫ πφ(a|s) da = 1, i.e., making it a proper continuous density, then we may specify a loss with respect to a target density ˆπ(a|s), even when the target density is only known on a relative scale. More extreme counts (relative densities) will produce stronger gradients, and the restrictions of the network output density will ensure that we cannot pull the density up or down over the entire support (as it needs to integrate to 1). This way, we make our network output density mimic the counts on a relative scale.\nWe will first give a general derivation, acting as if ˆπ(a|s) is a proper density, and swap in the empirical density at the end. We minimize a policy loss Lpolicy(φ) based on the Kullback-Leibler divergence between the network output πφ(a|s) and the empirical density ˆπ(a|s) (Eq. 5): Lpolicy(φ) = DKL( πφ(a|s) ∥ ˆπ(a|s) ) = E_{a∼πφ(a|s)}[ log πφ(a|s) − log ˆπ(a|s) ]   (6)",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 12,
"total_chunks": 26,
"char_count": 1492,
"word_count": 255,
"chunking_strategy": "semantic"
},
{
"chunk_id": "894525bb-c082-4fca-b8f4-e3af7d1866f5",
"text": "We may use the REINFORCE6 trick to get an unbiased gradient estimate of the above loss: ∇φ Lpolicy(φ) = ∇φ E_{a∼πφ(a|s)}[ log πφ(a|s) − τ log n(s, a) + log Z(s, τ) ] = E_{a∼πφ(a|s)}[ ( log πφ(a|s) − τ log n(s, a) + log Z(s, τ) ) ∇φ log πφ(a|s) ]",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 13,
"total_chunks": 26,
"char_count": 232,
"word_count": 47,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6c74e2af-2420-4925-91b3-1d70a3f35fcf",
"text": "We now drop Z(s, τ) since it does not depend on φ (or choose an appropriate state-dependent baseline, as is common with REINFORCE estimators). Moreover, we replace the expectation over a ∼ πφ(a|s) with the empirical support points ai ∼ Ds, where Ds denotes the subset of the database containing state s. Our final gradient estimator becomes ∇φ Lpolicy(φ) = E_{s∼D, ai∼Ds}[ ( log πφ(ai|s) − τ log n(s, ai) ) ∇φ log πφ(ai|s) ]   (7)\nEntropy regularization   Continuous policies have a risk of collapse (Haarnoja et al., 2018). If all sampled actions are close to each other, then the distribution may narrow too much, losing any exploration. In the worst case, the distribution may completely collapse, which will produce NaNs and break the training process. As we empirically observed this problem, we augment the training objective with an entropy maximization term. This prevents the policy from collapsing, and additionally ensures a minimum level of exploration.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 14,
"total_chunks": 26,
"char_count": 949,
"word_count": 149,
"chunking_strategy": "semantic"
},
{
"chunk_id": "676fa463-4bd2-4eff-9894-b5c9d7705ae4",
"text": "We define the entropy loss as LH(φ) = H(πφ(a|s)) = − ∫ πφ(a|s) log πφ(a|s) da.   (8) Details on the computation of the entropy for the case where πφ(a|s) is a transformed Beta distribution are provided in Appendix A.1. The full policy loss thereby becomes",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 15,
"total_chunks": 26,
"char_count": 250,
"word_count": 43,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a94872c5-bdf7-45a7-83b7-c1070151f20f",
"text": "Lπ(φ) = Lpolicy(φ) − λ · LH(φ),   (9) where λ is a hyperparameter that scales the contribution of the entropy term to the overall loss.\nValue network training is almost identical to the Alpha Zero specification. The only thing we modify is the estimation of ˆV(s), the training target for the value. Alpha Zero uses the eventual return of the full episode as the training target for every state in the trace. This is an unbiased, but high-variance signal (in reinforcement learning terminology (Sutton and Barto, 2018), it uses a full Monte Carlo target). Instead, we use the MCTS procedure as a value estimator, leveraging the action value estimates Q(s0, a) at the root s0. We could weigh these according to the visitation counts at the root. However, we usually build relatively small trees,7 for which a non-negligible fraction of the traces are exploratory. Therefore, we propose an off-policy estimate of the value at the root: ˆV(s0) = max_a Q(s0, a)   (10) The value loss LV(φ) is a standard mean-squared error loss: LV(φ) = E_{s∼D}[ ( Vφ(s) − ˆV(s) )^2 ].   (11)\n[Footnote 6: The REINFORCE trick (Williams, 1992), also known as the likelihood ratio estimator, is an identity regarding the derivative of an expectation, when the expectation depends on the parameter towards which we differentiate: ∇φ E_{a∼pφ(a)}[f(a)] = E_{a∼pφ(a)}[ f(a) ∇φ log pφ(a) ], for some function f(·) of a.]\nFigure 2 shows the results of our algorithm on the Pendulum-v0 task from the OpenAI Gym (Brockman et al., 2016).",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 16,
"total_chunks": 26,
"char_count": 1460,
"word_count": 246,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8801de48-001f-4397-861f-ab512cb62d7e",
"text": "The curves show learning performance for different computational budgets\nper MCTS at each timestep. Note that the x-axis displays true environment steps, which includes the\nMCTS simulations. For example, if we use 10 traces per MCTS, then every real environment step\ncounts as 10 on this scale.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 17,
"total_chunks": 26,
"char_count": 294,
"word_count": 47,
"chunking_strategy": "semantic"
},
{
"chunk_id": "72eb0aa8-3995-4af3-86c0-8da2c941db2f",
"text": "First, we observe that our continuous Alpha Zero version does indeed learn on the Pendulum task. Interestingly, we observe different learning performance for different tree sizes, where the 'sweet spot' appears to be at an intermediate tree size (of 10). For larger trees, we complete fewer episodes (a single episode takes longer) and therefore train our neural network less frequently. Therefore, although each individual trace gets more budget, it takes longer before the tree search starts to profit from improved network estimates (generalization). We train our neural network after every completed episode.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 18,
"total_chunks": 26,
"char_count": 611,
"word_count": 92,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c38f0574-17ac-487c-b26c-7af288eb8632",
"text": "However, the runs with smaller tree sizes complete many more episodes compared to the runs with a larger tree size. Moreover, the data generated from larger tree searches could be deemed 'more trustworthy', as we spend more computational effort in generating them. We try to compensate for this effect by making the number\n[Footnote 7: AlphaGo Zero uses 1600 traces per timestep.]",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 19,
"total_chunks": 26,
"char_count": 367,
"word_count": 60,
"chunking_strategy": "semantic"
},
{
"chunk_id": "551debeb-74cb-4703-9158-a7edb1fc3ee1",
"text": "[Footnote 7, continued: We evaluate on smaller domains, and have less computational resources.]\n[Figure 2: Learning curves for Pendulum domain. Compared to the OpenAI Gym implementation we rescale every reward by a factor 1/1000 (which leaves the task and optimal solution unchanged). Results averaged over 10 repetitions.]\nof training epochs over the database after each episode proportional to the size of the nested tree search. Specifically, after each episode we train for",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 20,
"total_chunks": 26,
"char_count": 450,
"word_count": 69,
"chunking_strategy": "semantic"
},
{
"chunk_id": "776bb8a5-f059-4cb9-9d79-14d3d12c7fef",
"text": "nepochs = ⌈ Ntraces / ce ⌉   (12) for constant ce ∈ R+ and ⌈·⌉ denoting the ceiling function. In our experiments we set ce = 20. This may explain why the run with Ntraces = 25 performs suboptimally compared to the others, as the non-linearity in Eq. 12 (due to the ceiling function) may accidentally turn out bad for this number of tree traces. Moreover, note that the learning curve of training with a tree size of 1 is shorter than the other curves. This happens because we gave each run an equal amount of wall-clock time. The run with tree size 1 finishes many more episodes, and because ce > 1 it still trains more frequently than the other runs, which makes it eventually perform fewer total steps in the domain.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 21,
"total_chunks": 26,
"char_count": 706,
"word_count": 129,
"chunking_strategy": "semantic"
},
{
"chunk_id": "409dc6e9-895e-4827-a1c8-eb240425761c",
"text": "Implementation details   We use a three layer neural network with 128 units in each hidden layer and ELU activation functions. For the MCTS we set cpuct = 0.05, cpw = 1 and κ = 0.5, and for the policy loss λ = 0.1 and τ = 0.1. We train the networks in Tensorflow (Abadi et al., 2016), using the RMSProp optimizer on mini-batches of size 32 with a learning rate of 0.0001. Episodes last at maximum 300 steps.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 22,
"total_chunks": 26,
"char_count": 401,
"word_count": 77,
"chunking_strategy": "semantic"
},
{
"chunk_id": "be4f2d00-fa87-4fc5-9f67-cc88a4ea6344",
"text": "The results in Fig. 2 reveal an interesting trade-off in the iterated tree search and function approximation paradigm. We hypothesize that the strength of tree search lies in the locality of information. Each edge stores its own statistics, and this makes it easy to locally separate the effect of actions. Moreover, the forward search gives a more stable value estimate, smoothing out local errors in the value network. In contrast, the strength of the neural network is generalization. Frequently, we re-encounter the (almost) same state in a different subtree during a next episode. Supervised learning is a natural way to generalize the already learned knowledge from a previous episode. One of the key observations of the present paper is that we actually need both.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 23,
"total_chunks": 26,
"char_count": 773,
"word_count": 125,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ded3e91d-a410-4eab-a340-420c524a6786",
"text": "If we only perform\ntree search, then we eventually fail at solving the domain because all information is kept locally. In\ncontrast, if we only build trees of size 1, then we are continuously generalizing without ever locally\nseparating decisions and improving our training targets. Our results suggest that there is actually a\nsweet spot halfway, where we build trees of moderate size, after which we perform a few epochs of\ntraining. Future work will test the A0C algorithm in more complicated, continuous action space tasks (Brockman et al., 2016; Todorov et al., 2012). Moreover, our algorithm could profit from recent improvements in the MCTS algorithm (Moerland et al., 2018) and other network architectures (Szegedy et al.,\n2015), as also leveraged in Alpha Zero. This paper introduced Alpha Zero for Continuous action space (A0C). Our method learns a continuous\npolicy network - based on transformed Beta distributions - by minimizing a KL-divergence between\nthe network distribution and an unnormalized density at the support points from the MCTS search. Moreover, the policy network also directs new MCTS searches by proposing new candidate child\nactions in the search tree.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 24,
"total_chunks": 26,
"char_count": 1183,
"word_count": 187,
"chunking_strategy": "semantic"
},
{
"chunk_id": "50d92f9d-3549-4beb-8d73-7c588a7e4715",
"text": "Preliminary results on the Pendulum task show that our approach does indeed learn. Future work will further explore the empirical performance of A0C. In short, A0C may be a first step in transferring the success of iterated search and learning, as observed in two-player board games with discrete action spaces (Silver et al., 2017a,b), to single-player, continuous action space domains, as encountered in robotics, navigation and self-driving cars.",
"paper_id": "1805.09613",
"title": "A0C: Alpha Zero in Continuous Action Space",
"authors": [
"Thomas M. Moerland",
"Joost Broekens",
"Aske Plaat",
"Catholijn M. Jonker"
],
"published_date": "2018-05-24",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1805.09613v1",
"chunk_index": 25,
"total_chunks": 26,
"char_count": 455,
"word_count": 69,
"chunking_strategy": "semantic"
}
]