[
{
"chunk_id": "5d751b93-1d2d-4e98-a3a8-8c0079a812a8",
"text": "Metatrace Actor-Critic: Online Step-size Tuning\nby Meta-gradient Descent\nfor Reinforcement Learning Control Kenny Young, Baoxiang Wang, and Matthew E. Borealis AI, Edmonton, Alberta, Canada\n{kenny.young, brandon.wang, matthew.taylor}@BorealisAI.com2019",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 0,
"total_chunks": 42,
"char_count": 252,
"word_count": 27,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ff071d16-87ca-46fa-b16b-7fdad221c1ef",
"text": "Reinforcement learning (RL) has had many successes in both\n\"deep\" and \"shallow\" settings. In both cases, significant hyperparameterMay\ntuning is often required to achieve good performance. Furthermore, when\nnonlinear function approximation is used, non-stationarity in the state24 representation can lead to learning instability. A variety of techniques\nexist to combat this — most notably large experience replay buffers or\nthe use of multiple parallel actors. These techniques come at the cost\nof moving away from the online RL problem as it is traditionally formulated (i.e., a single agent learning online without maintaining a large\ndatabase of training examples). Meta-learning can potentially help with[cs.LG] both these issues by tuning hyperparameters online and allowing the algorithm to more robustly adjust to non-stationarity in a problem. This\npaper applies meta-gradient descent to derive a set of step-size tuning\nalgorithms specifically for online RL control with eligibility traces. Our\nnovel technique, Metatrace, makes use of an eligibility trace analogous to\nmethods like TD(λ). We explore tuning both a single scalar step-size and\na separate step-size for each learned parameter. We evaluate Metatrace\nfirst for control with linear function approximation in the classic mountain car problem and then in a noisy, non-stationary version. Finally,\nwe apply Metatrace for control with nonlinear function approximation\nin 5 games in the Arcade Learning Environment where we explore how\nit impacts learning speed and robustness to initial step-size choice. Results show that the meta-step-size parameter of Metatrace is easy to set,\nMetatrace can speed learning, and Metatrace can allow an RL algorithm In the supervised learning (SL) setting, there are a variety of optimization methods that build on stochastic gradient descent (SGD) for tuning neural network\n(NN) parameters (e.g., RMSProp [17] and ADAM [7]). 
These methods generally\naim to accelerate learning by monitoring gradients and modifying updates such\nthat the effective loss surface has more favorable properties. 2 Kenny Young, Baoxiang Wang, and Matthew E.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 1,
"total_chunks": 42,
"char_count": 2138,
"word_count": 316,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c42bfd76-1940-452d-8b24-8c14325a4ab7",
"text": "Most such methods are derived for SGD on a fixed objective (i.e., average loss\nover a training set). This does not translate directly to the online reinforcement\nlearning (RL) problem, where targets incorporate future estimates, and subsequent observations are correlated. Eligibility traces complicated this further, as\nindividual updates no longer correspond to a gradient descent step toward any\ntarget on their own. Eligibility traces break up the target into a series of updates\nsuch that only the sum of updates over time moves toward it.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 2,
"total_chunks": 42,
"char_count": 544,
"word_count": 85,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4236a9b6-6adb-4db0-a1ab-3062fa0ebbe6",
"text": "To apply standard SGD techniques in the RL setting, a common strategy\nis to make the RL problem as close to the SL problem as possible. Techniques\nthat help achieve this include: multiple actors [10], large experience replay buffers\n[11], and separate online and target networks [18]. These all help smooth gradient\nnoise and mitigate non-stationarity such that SL techniques work well.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 3,
"total_chunks": 42,
"char_count": 386,
"word_count": 62,
"chunking_strategy": "semantic"
},
{
"chunk_id": "91811906-cf55-4704-b0c6-1af7fd8c1719",
"text": "They are\nnot, however, applicable to the more standard RL setting where a single agent\nlearns online without maintaining a large database of training examples. This paper applies meta-gradient descent, propagating gradients through the\noptimization algorithm itself, to derive step-size tuning algorithms specifically\nfor the RL control problem. We derive algorithms for this purpose based on the\nIDBD approach [15]. We refer to the resulting methods as Metatrace algorithms. Using this novel approach to meta-gradient descent for RL control we define\nalgorithms for tuning a scalar step-size, as well as a vector of step-sizes (one\nelement for each parameter), and finally a mixed version which aims to leverage\nthe benefits of both. Aside from these algorithms, our main contributions include\napplying meta-gradient descent to actor-critic with eligibility traces (AC(λ)),\nand exploring the performance of meta-gradient descent for RL with a nonstationary state representation, including with nonlinear function approximation\n(NLFA). In particular, we evaluate Metatrace with linear function approximation\n(LFA) for control in the classic mountain car problem and a noisy, non-stationary\nvariant. We also evaluate Metatrace for training a deep NN online in the 5\noriginal training set games in the Arcade Learning Environment (ALE) [9], with\neligibility traces and without using either multiple actors or experience replay. Our work is closely related to IDBD [15] and its extension autostep [8], metagradient decent procedures for step-size tuning in the supervised learning case. Even more closely related, are SID and NOSID [2], analogous meta-gradient\ndescent procedures for SARSA(λ). Our approach differs primarily by explicitly\naccounting for time-varying weights in the optimization objective for the stepsize. 
In addition, we extend the approach to AC(λ) and to vector-valued stepsizes as well as a \"mixed\" version which utilizes a combination of scalar and\nvector step-sizes. Also related are TIDBD and it's extension AutoTIDBD [5,6],\nto our knowledge the only prior work to investigate learning of vector step-sizes\nfor RL. The authors focuses on TD(λ) for prediction, and explore both vector and\nscalar step-sizes. They demonstrate that for a broad range of parameter settings,\nboth scalar and vector AutoTIDBD outperform ordinary TD(λ), while vector",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 4,
"total_chunks": 42,
"char_count": 2363,
"word_count": 349,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e631fc08-eeac-429c-b582-86d19a6c26ef",
"text": "AutoTIDBD outperforms a variety of scalar step-size adaptation methods and\nTD(λ) with an optimal fixed step-size. Aside from focusing on control rather\nthan prediction, our methods differs from TIDBD primarily in the objective\noptimized by the step-size tuning. They use one step TD error; we use a multistep\nobjective closer to that used in SID. Another notable algorithm, crossprop [19],\napplies meta-gradient descent for directly learning good features from input1, as\nopposed to associated step-sizes. The authors demonstrate that using crossprop\nin place of backprop can result in feature representations which are more robust\nto non-stationarity in the task. Our NN experiments draw inspiration from [4],\nto our knowledge the only prior work to apply online RL with eligibility traces2\nto train a modern deep NN. We consider the RL problem where a learning agent interacts with an environment while striving to maximize a reward signal. The problem is generally\nformalized as a Markov Decision Process described by a 5-tuple: ⟨S, A, p, r, γ⟩. At\neach time-step the agent observes the state St ∈S and selects an action At ∈A. Based on St and At, the next state St+1 is generated, according to a probability p(St+1|St, At). The agent additionally observes a reward Rt+1, generated\nby r : S × A →R. Algorithms for reinforcement learning broadly fall into two\ncategories, prediction and control. In prediction the agent follows a fixed policy\nπ : S × A →[0, 1] and seeks to estimate from experience the expectation value\nof the return Gt = P γk−tRk+1, with discount factor γ ∈[0, 1]. In control, the\nk=t\ngoal is to learn, through interaction with the initially unknown environment, a\npolicy π that maximizes the expected return Gt, with discount factor γ ∈[0, 1]. In this work we will derive step-size tuning algorithms for the control case.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 5,
"total_chunks": 42,
"char_count": 1843,
"word_count": 305,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5f54ac45-69c2-4903-a594-29bcff3025fc",
"text": "Action-value methods like Q-learning are often used for RL control. However,\nfor a variety of reasons, actor-critic (AC) methods are becoming increasingly\nmore popular in Deep RL — we will focus on AC. AC methods separately\nlearn a state value function for the current policy and a policy which attempts\nto maximize that value function. In particular, we will derive Metatrace for\nactor critic with eligibility traces, AC(λ) [3,13]. While eligibility traces are often\nassociated with prediction methods like TD(λ) they are also applicable to AC. To specify the objective of TD(λ), and by extension AC(λ), we must first define the lambda return Gλw,t. Here we will define Gλw,t associated with a particular\nset of weights w recursively: Gλw,t = Rt+1 + γ (1 −λ)Vw(St+1) + λGλw,t+1 Gλw,t bootstraps future evaluations to a degree controlled by λ. If λ < 1, then\nGλw,t is a biased estimate of the return, Gt. If λ = 1, then Gλw,t reduces to Gt. 1 Crossprop is used in place of backprop to train a single hidden layer.\n2 In their case, they use SARSA(λ). 4 Kenny Young, Baoxiang Wang, and Matthew E.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 6,
"total_chunks": 42,
"char_count": 1094,
"word_count": 189,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cb39ca63-e59f-4125-92d8-42399686f20c",
"text": "Here we define Gλw,t for a fixed weight w; in Section 4 we will extend this to a time\nvarying wt. Defining TD-error, δt = Rt + γVw(St+1) −Vw(St), we can expand\nGλw,t as the current state value estimate plus the sum of future discounted δt\nvalues:\nGλw,t = Vw(St) + X(γλ)k−tδk\nk=t\nThis form is useful in the derivation of TD(λ) as well as AC(λ). TD(λ) can be\nunderstood as minimizing the mean squared error Gλw,t −Vw(St) between the\nvalue function Vw (a function of the current state St parameterized by weights\nw) and the lambda return Gλw,t. In deriving TD(λ), the target Gλw,t is taken\nas constant despite its dependence on w. For this reason, TD(λ) is often called\na \"semi-gradient\" method.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 7,
"total_chunks": 42,
"char_count": 692,
"word_count": 124,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a1d3f8da-3acf-456f-b9f1-0d580e87b18e",
"text": "Intuitively, we want to modify our current estimate\nto match our future estimates and not the other way around. For AC(λ), we\nwill combine this mean squared error objective with a policy improvement term,\nsuch that the combined objective represents a trade-offbetween the quality of\nour value estimates and the performance of our policy: ∞ ∞ ! 2 1\nJλ(w) = X Gλw,t −Vw(St) − X log(πw (At|St)) Gλw,t −Vw(St) (1)\nt=0 t=0 As in TD(λ), we apply the notion of a semi-gradient to optimizing equation 1. In this case along with Gλw,t, the appearance of Vw(St) in the right sum is taken\nto be constant.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 8,
"total_chunks": 42,
"char_count": 593,
"word_count": 106,
"chunking_strategy": "semantic"
},
{
"chunk_id": "16f7e29f-9e0e-4b09-b353-f3e1fbd92879",
"text": "Intuitively, we wish to improve our actor under the evaluation\nof our critic, not modify our critic to make our actor's performance look better. With this caveat in mind, by the policy gradient theorem [16], the expectation\nof the gradient of the right term in equation 1 is approximately equal to the\n(negated) gradient of the expected return. This approximation is accurate to\nthe extent that our advantage estimate Gλw,t −Vw(St) is accurate. Descending\nthe gradient of the right half of Jλ(w) is then ascending the gradient of an\nestimate of expected return. Taking the semi-gradient of equation 1 yields: ∂ ∂Vw(St) 1 ∂log(πw (At|St))\nt −Vw(St) ∂wJλ(w) = − X ∂w + 2 ∂w Gλ\nt=0\n∞ ∞\n∂Vw(St) 1 ∂log(πw (At|St))\n= − X + X(γλ)k−tδk\n∂w 2 ∂w\nt=0 k=t\n∞ t\n∂Vw(St) 1 ∂log(πw (At|St))\n= − X δt X (γλ)t−k +\n∂w 2 ∂w\nt=0 k=0\nNow define for compactness Uw(St)˙=Vw(St) + 12 log(πw (At|St)) and define the\neligibility trace at time t as zt = tP (γλ)t−k ∂Uw(Sk)∂w , such that:\nk=0 ∂wJλ(w) = − X δtzt (2)\nt=0 Offline AC(λ) can be understood as performing a gradient descent step along\nequation 2. Online AC(λ), analogous to online TD(λ) can be seen as an approximation to this offline version (exact in the limit of quasi-static weights) that\nupdates weights after every time-step. Advantages of the online version include\nmaking immediate use of new information, and being applicable to continual\nlearning3. Online AC(λ) is defined by the following set of equations: ∂Uwt(St)\nzt = γλzt−1 +\n∂wt\nwt+1 = wt + αztδt We will present three variations of Metatrace for control using AC(λ), scalar\n(single α for all model weights), vector (one α per model weight), and finally a\n\"mixed\" version that attempts to leverage the benefits of both. 
Additionally, we will discuss two practical improvements over the basic algorithm: normalization, which helps to mitigate parameter sensitivity across problems and avoid divergence, and entropy regularization, which is commonly employed in actor-critic to avoid premature convergence [10].\n4.1 Scalar Metatrace for AC(λ)",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 9,
"total_chunks": 42,
"char_count": 2036,
"word_count": 346,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ae85351c-fbf1-4011-a5dd-20199ba0b8a5",
"text": "Following [15], we define our step-size as α = eβ. For tuning α, it no longer\nmakes sense to define our objective with respect to a fixed weight vector w, as in\nequation 1. We want to optimize α to allow our weights to efficiently track the\nnon-stationary AC(λ) objective. To this end we define the following objective\nincorporating time-dependent weights:\nJλβ (w0..w∞)\n∞ ∞ ! 2 1\n= X Gλt −Vwt(St) − X log(πwt (At|St)) Gλt −Vwt(St) (3)\nt=0 t=0\nHere, Gλt with no subscript w is defined as Gλt = Vwt(St) + P∞k=t(γλ)k−tδk,\nwhere δk = Rk + γVwk(Sk+1) −Vwk(Sk). We will follow a derivation similar to\nSection 4.3.1 of [2], but the derivation there does not explicitly account for the\ntime dependence of wt. Instead, in equation 4.18 they differentiate Jλ(wt) with\nrespect to α as follows4: ∂ ∂ ∂wt\n∂αJλ(wt) = ∂wt Jλ(wt) ∂α\n∂wt\n= − X δt zt\nt=0 3 Where an agent interacts with an environment indefinitely, with no distinct episodes.\n4 Bra-ket notation (⟨·|·⟩) indicates dot product. 6 Kenny Young, Baoxiang Wang, and Matthew E.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 10,
"total_chunks": 42,
"char_count": 1019,
"word_count": 183,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6109c54f-06fa-41cf-8bf8-50d443223b00",
"text": "The first line applies the chain rule with Jλ(wt) treated as a function of a single\nwt vector. The second line is unclear, in that it takes ∂wt∂α inside the sum in\nequation 2. The time index of the sum is not a priori the same as that of wt. We\nsuggest this ambiguity stems from propagating gradients through an objective\ndefined by a single wt value, while the significance of α is that it varies the\nweights over time. For this reason, we hold that it makes more sense to minimize\nequation 3 to tune α. We will see in what follows that this approach yields an\nalgorithm very similar to TD(λ) for tuning the associated step-sizes. Consider each wt a function of β, differentiating equation 3, with the same\nsemi-gradient treatment used in AC(λ) yields: ∂ ∂Vwt(St) 1 log(πwt (At|St))\nJλ(w0..w∞) = − X + Gλt −Vwt(St)\n∂β ∂β 2 ∂β\nt=0\n∞ ∞\n∂Uwt(St) ∂wt = − X X(γλ)k−tδk\n∂wt ∂β t=0 k=t\n∞ t\n∂Uwt(St) ∂wt = − X δt X (γλ)t−k\n∂wt ∂β t=0 k=0 Now, define a new eligibility trace.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 11,
"total_chunks": 42,
"char_count": 967,
"word_count": 186,
"chunking_strategy": "semantic"
},
{
"chunk_id": "604cf57c-4d83-4028-a6fe-b6aeee1c6fb1",
"text": "∂Uwk(Sk) ∂wk\n(4) zβ,t ˙= X (γλ)t−k\n∂wk ∂β\nk=0 Jλ(w0..w∞) = − X δtzβ,t (5)\nt=0 To compute h(t)˙= ∂wt∂β , we use a method analogous to that in [2]. As z itself\nis a sum of first order derivatives with respect to w, we will ignore the effects\nof higher order derivatives and approximate ∂zt∂w = 0. Since log(πwt (At|St))\nnecessarily involves some non-linearity, this is only a first order approximation\neven in the LFA case. Furthermore, in the control case, the weights affect action\nselection. Action selection in turn affects expected weight updates. Hence, there\nare additional higher order effects of modifying β on the expected weight updates,\nand like [2] we do not account for these effects. We leave open the question of\nhow to account for this interaction in the online setting, and to what extent it\nwould make a difference. We compute h(t) as follows: h(t + 1) = [wt + αδtzt]\n∂δt\n≈h(t) + αδtzt + αzt\n∂δt ∂wt\n= h(t) + αzt δt +\n∂wt ∂β\n∂Vwt(St+1) −∂Vwt(St) h(t) (6) = h(t) + αzt δt + γ\n∂wt ∂wt Note that here we use the full gradient of δt as opposed to the semi-gradient\nused in computing equation 5. This can be understood by noting that in equation\n5 we effectively treat α similarly to an ordinary parameter of AC(λ), following a\nsemi-gradient for similar reasons5. h(t) on the other hand is meant to track, as\nclosely as possible, the actual impact of modifying α on w, hence we use the full\ngradient and not the semi-gradient. All together, Scalar Metatrace for AC(λ) is\ndescribed by: ∂Uwt(St)\nzβ ←γλzβ + h\n∂wt\nβ ←β + µzβδt\n∂Vwt(St+1) −∂Vwt(St) h h ←h + eβz δt + γ\n∂wt ∂wt",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 12,
"total_chunks": 42,
"char_count": 1584,
"word_count": 296,
"chunking_strategy": "semantic"
},
{
"chunk_id": "116a9842-c829-49f0-b9e8-856f88c967ad",
"text": "The update to zβ is an online computation of equation 4. The update to β is\nexactly analogous to the AC(λ) weight update but with equation 5 in place of\nequation 2. The update to h computes equation 6 online. We will augment this\nbasic algorithm with two practical improvements, entropy regularization and\nnormalization, that we will discuss in the following subsections. Entropy Regularization In practice it is often helpful to add an entropy\nbonus to the objective function to discourage premature convergence [10]. Here\nwe cover how to modify the step-size tuning algorithm to cover this case. We\nobserved that accounting for the entropy bonus in the meta-optimization (as\nopposed to only the underlying objective) improved performance in the ALE\ndomain. Adding an entropy bonus with weight ψ to the actor critic objective of\nequation 2 gives us: ∞ ∞ ! 2 1\nJλ(w) = X Gλt −Vw(st) − X log(πw (At|St)) Gλt −Vw(St) (7)\nt=0 t=0\n−ψ X Hw(St)\nt=0 5 As explained in Section 3. 8 Kenny Young, Baoxiang Wang, and Matthew E.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 13,
"total_chunks": 42,
"char_count": 1016,
"word_count": 176,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0c9cad7a-a2fc-4894-86c6-fdf28fabb58e",
"text": "with Hwt(St) = −P πwt (a|St) log(πwt (a|St)). The associated parameter upa∈A\ndate algorithm becomes:\n∂Uwt(st)\nz ←γλz +\n∂wt\n∂Hwt(st)\nw ←w + α zδt + ψ\n∂wt To modify the meta update algorithms for this case, modify equation 7 with\ntime dependent weights:\nJλβ (w0..w∞)\n∞ ∞ ! 2 1\n= X Gλw,t −Vwt(st) − X log(πwt (At|St)) Gλw,t −Vwt(st)\nt=0 t=0\n−ψ X Hwt(St)\nt=0\n(8) Now taking the derivative of equation 8 with respect to β: ∂ ∞ ∂ Gλt −Vwt(st) (At|St)) ! Jλ(w0..w∞) = X −∂log(πwt Gλt −Vwt(St)\n∂β ∂β ∂β\nt=0\n! ∂Hwt(St) ∂Uwt(St) ∂Hwt(St)\n= − X Gλt −Vwt(St) + ψ\n∂β ∂β\nt=0\n∞ ∞ ! ∂Uwt(St) ∂Hwt(St) = − X X(γλ)k−tδk + ψ\n∂β ∂β\nt=0 k=t\n∞ ∞\n∂Uwt(St) ∂wt = − X X(γλ)k−tδk\n∂wt ∂β t=0 k=t\n! ∂Hwt(St) ∂wt\n+ ψ\n∂wt ∂β ∞ t\n∂Uwk(Sk) ∂wk = − X δt X (γλ)t−k\n∂wk ∂β t=0 k=0\n! ∂Hwt(St) ∂wt\n+ ψ\n∂wt ∂β ∂Hwt(St) ∂wt = − X zβ,tδt + ψ\n∂wt ∂β t=0\nWe modify h(t)˙= ∂wt∂β slightly to account for the entropy regularization. Similar to our handling of the eligibility trace in deriving equation 6, we treat all second order derivatives of Hwt(St) as zero:\n∂ ∂Hwt(st)\nh(t + 1) = w + α ztδt + ψ\n∂β ∂wt\n∂Hwt(st) ∂δt ∂ ∂Hwt(st)\n≈h(t) + α ztδt + ψ + zt + ψ\n∂wt ∂β ∂β ∂wt\n∂Hwt(st) ∂δt ∂wt\n= h(t) + α ztδt + ψ + zt\n∂wt ∂wt ∂β\n∂ ∂Hwt(st) ∂wt\n+ ψ\n∂wt ∂wt ∂β\n∂Hwt(st) ∂δt ∂wt\n≈h(t) + α ztδt + ψ + zt\n∂wt ∂wt ∂β\n∂δt ∂Hwt(st)\n= h(t) + α zt δt + h(t) + ψ\n∂wt ∂wt All together, Scalar Metatrace for AC(λ) with entropy regularization is described\nby: ∂Uwt(St)\nzβ ←γλzβ + h\n∂wt\n∂Hwt(St) ∂wt\nβ ←β + µ zβδt + ψ\n∂wt ∂β\n∂Vwt(St+1) −∂Vwt(St) h + ψ ∂Hwt(st) h ←h + eβ z δt + γ\n∂wt ∂wt ∂wt This entropy regularized extension to the basic algorithm is used in lines 7, 10,\nand 14 of algorithm 1. Algorithm 1 also incorporates a normalization technique,\nanalogous to that used in [8], that we will now discuss. Normalization The algorithms discussed so far can be unstable, and sensitive\nto the parameter µ.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 14,
"total_chunks": 42,
"char_count": 1845,
"word_count": 391,
"chunking_strategy": "semantic"
},
{
"chunk_id": "40fc533d-1fcf-4dcf-ac20-bd5636bca78a",
"text": "Reasons for this and recommended improvements are discussed in [8]. We will attempt to map these improvements to our case to improve\nthe stability of our tuning algorithms. The first issue is that the quantity µzβδt added to β on each time-step is\nproportional to µ times a product of δ values. Depending on the variance of the\nreturns for a particular problem, very different values of µ may be required to\nnormalize this update. The improvement suggested in [8] is straight-forward to\nmap to our case. They divide the β update by a running maximum, and we will\ndo the same. This modification to the beta update is done using the factor v on\nline 9 of algorithm 1. v computes a running maximum of the value ∆β. ∆β is\ndefined as the value multiplied by µ to give the update to β in the unnormalized\nalgorithm. The second issue is that updating the step-size by naive gradient descent\ncan rapidly push it into large unstable values (e.g., α larger than one over the 10 Kenny Young, Baoxiang Wang, and Matthew E.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 15,
"total_chunks": 42,
"char_count": 1010,
"word_count": 183,
"chunking_strategy": "semantic"
},
{
"chunk_id": "64e2b7de-2156-42fd-8fee-c703288e6dd9",
"text": "squared norm of the feature vector in linear function approximation, which leads\nto updates which over shoot the target). To correct this they define effective stepsize as fraction of distance moved toward the target for a particular update. α\nis clipped such that effective step-size is at most one (implying the update will\nnot overshoot the target). With eligibility traces the notion of effective step-size is more subtle. Consider\nthe policy evaluation case and note that our target for a given value function\nis Gλt and our error is then Gλt −Vwt(st) , the update towards this target is\nbroken into a sum of TD-errors. Nonetheless, for a given fixed step-size our\noverall update ∆Vwt(St) in the linear case (or its first order approximation in\nthe nonlinear case) to a given value is: ∂Vwt(St) 2\n∆Vwt(St) = α Gλt −Vwt(st) (9)\n∂wt Dividing by the error, our fractional update, or \"effective step-size\", is then: However, due to our use of eligibility traces, we will not be able to update each\nstate's value function with an independent step-size but must update towards\nthe partial target for multiple states at each timestep with a shared step-size. A\nreasonable way to proceed then would be to simply choose our shared (scalar or\nvector) α to ensure that the maximum effective-step size for any state contributing to the current trace is still less than one to avoid overshooting. We maintain\na running maximum of effective step-sizes over states in the trace, similar to the\nrunning maximum of β updates used in [8], and multiplicatively decaying our\nαs on each time-step by the amount this exceeds one. This procedure is shown\non lines 11, 12 and 13 of algorithm 1. Recall that the effective step-size bounding procedure we just derived was for\nthe case of policy evaluation. For the AC(λ), this is less clear as it is not obvious\nwhat target we should avoid overshooting with the policy parameters. 
We simply replace ∂V_{wt}(St)/∂wt with ∂V_{wt}(St)/∂wt + (1/2) ∂log(π_{wt}(At|St))/∂wt in the computation of u.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 16,
"total_chunks": 42,
"char_count": 2009,
"word_count": 341,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6e77e2a2-2a9b-4477-b5d0-474a1ca83087",
"text": "This is a conservative heuristic which normalizes α as if the combined policy\nimprovement, policy evaluation objective were a pure policy evaluation objective. We consider this reasonable since we know of no straightforward way to place\nan upper bound on the useful step-size for policy improvement but it nonetheless\nmakes sense to make sure it does not grow arbitrarily large. The constraint on\nthe αs for policy evaluation will always be tighter than constraining based on\nthe value derivatives alone.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 17,
"total_chunks": 42,
"char_count": 504,
"word_count": 80,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4a61f482-d433-4428-88cd-c7ef43ee9169",
"text": "Other options for normalizing the step-size may be worth considering as well. We will depart from [8] somewhat by choosing µ itself as the tracking parameter for v rather than (1/τ) αi ⟨∂Vwt(St+1)/∂wt⟩², where τ is a hyperparameter. This is because there is no obvious reason to use αi ⟨∂Vwt(St+1)/∂wt⟩² specifically here, and it is not clear whether the appropriate analogy for the RL case is to use αi ⟨∂Vwt(St+1)/∂wt⟩², αi zt², or something else.\nAlgorithm 1 Normalized Scalar Metatrace for Actor-Critic\n1: h ← 0, β ← β0, v ← 0\n2: for each episode do\n3: zβ ← 0, u ← 0\n4: while episode not complete do\n5: receive Vwt(St), πwt(At|St), δt, z\n6: Uwt ← Vwt + (1/2) log(πwt(At|St))\n7: zβ ← γλ zβ + ⟨∂Uwt/∂wt⟩ h\n8: ∆β ← zβ δt + ψ ⟨∂Hwt(St)/∂wt⟩ h\n9: v ← max(|∆β|, v + µ(|∆β| − v))\n10: β ← β + µ ∆β/(v if v > 0 else 1)\n11: u ← max(e^β ⟨(∂Uwt/∂wt)²⟩, u + (1 − γλ)(e^β ⟨(∂Uwt/∂wt)²⟩ − u))\n12: M ← max(u, 1)\n13: β ← β − log(M)\n14: h ← h + e^β (z(δt + ⟨∂δt/∂wt⟩ h) + ψ ⟨∂Hwt(St)/∂wt⟩)\n15: output α = e^β\n16: end while\n17: end for\nWe use µ for simplicity, thus enforcing that the tracking rate of the maximum used to normalize the step-size updates be proportional to the magnitude of the step-size updates. For the tracking parameter of u we choose (1 − γλ), roughly taking the maximum over all the states that are currently contributing to the trace, which is exactly what we want u to track.\n4.2 Vector Metatrace for AC(λ)\nNow define αi = e^βi to be the ith element of a vector of step-sizes, one for each weight, such that the update for each weight element in AC(λ) will use the associated αi. Having a separate αi for each weight enables the algorithm to individually adjust how quickly it tracks each feature. This becomes particularly important when the state representation is non-stationary, as is the case when NN based models are used. In this case, features may be changing at different rates, and some may be much more useful than others; we would like our algorithm to be able to assign a high step-size to fast-changing useful features while annealing the step-size of features that are either mostly stationary or not useful, to avoid tracking noise [5,15]. Take each wt to be a function of βi for all i and, following [15], use the approximation ∂wi,t/∂βj = 0 for all i ≠ j; differentiate equation 1, again with the same semi-gradient treatment used in AC(λ), to yield:",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 18,
"total_chunks": 42,
"char_count": 2375,
"word_count": 458,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b924d644-4138-4929-ac57-00862b0155d8",
"text": "∂Jλβ(w0..w∞)/∂βi = − Σ_{t=0..∞} (Gλt − Vwt(St)) ∂Uwt(St)/∂βi\n≈ − Σ_{t=0..∞} Σ_{k=t..∞} (γλ)^(k−t) δk (∂Uwt(St)/∂wi,t)(∂wi,t/∂βi)\n= − Σ_{t=0..∞} δt Σ_{k=0..t} (γλ)^(t−k) (∂Uwk(Sk)/∂wi,k)(∂wi,k/∂βi)\nOnce again define an eligibility trace, now a vector, with elements:\nzβ,i,t ≐ Σ_{k=0..t} (γλ)^(t−k) (∂Uwk(Sk)/∂wi,k)(∂wi,k/∂βi)\nsuch that:\n∂Jλ(w0..w∞)/∂βi ≈ − Σ_{t=0..∞} δt zβ,i,t\nTo compute hi(t) ≐ ∂wi,t/∂βi we use the obvious generalization of the scalar case; again we use the approximation ∂wi,t/∂βj = 0 for all i ≠ j:\nhi(t+1) ≈ hi(t) + αi zi,t (δt + (∂δt/∂wt)(∂wt/∂βi))\n≈ hi(t) + αi zi,t (δt + (∂δt/∂wi,t)(∂wi,t/∂βi))\n= hi(t) + αi zi,t (δt + (γ ∂Vwt(St+1)/∂wi,t − ∂Vwt(St)/∂wi,t) hi(t))\nThe first line approximates ∂zi,t/∂βi = 0 as in the scalar case. The second line approximates ∂wi,t/∂βj = 0 for all i ≠ j as discussed above. All together, Vector Metatrace for AC(λ) is described by:\nzβ ← γλ zβ + (∂Uwt(St)/∂wt) ⊙ h\nβ ← β + µ zβ δt\nh ← h + e^β ⊙ z ⊙ (δt + (γ ∂Vwt(St+1)/∂wt − ∂Vwt(St)/∂wt) ⊙ h)\nwhere ⊙ denotes element-wise multiplication. As in the scalar case we augment this basic algorithm with entropy regularization and normalization. The extension of entropy regularization to the vector case is a straightforward modification of the derivation presented in Section 4.1. To extend the normalization technique, note that v is now replaced by a vector on line 9 of Algorithm 2, which maintains a separate running maximum for the updates to each element of β. To extend the notion of effective step-size",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 19,
"total_chunks": 42,
"char_count": 1423,
"word_count": 267,
"chunking_strategy": "semantic"
},
{
"chunk_id": "216a1456-180e-4d27-8a30-66d12089530e",
"text": "Algorithm 2 Normalized Vector Metatrace for Actor-Critic\n1: h ← 0, β ← β0, v ← 0\n2: for each episode do\n3: zβ ← 0, u ← 0\n4: while episode not complete do\n5: receive Vwt(St), πwt(At|St), δt, z\n6: Uwt ← Vwt + (1/2) log(πwt(At|St))\n7: zβ ← γλ zβ + (∂Uwt/∂wt) ⊙ h\n8: ∆β ← zβ δt + ψ (∂Hwt(St)/∂wt) ⊙ h\n9: v ← max(|∆β|, v + µ(|∆β| − v))\n10: β ← β + µ ∆β/v (element-wise, using 1 in place of any element of v that is 0)\n11: u ← max(⟨e^β (∂Uwt/∂wt)²⟩, u + (1 − γλ)(⟨e^β (∂Uwt/∂wt)²⟩ − u))\n12: M ← max(u, 1)\n13: β ← β − log(M)\n14: h ← h + e^β ⊙ (z ⊙ (δt + (∂δt/∂wt) ⊙ h) + ψ ∂Hwt(St)/∂wt)\n15: output α = e^β\n16: end while\n17: end for\nto the vector case, follow the same logic used to derive equation 9 to yield an update ∆Vwt(St) of the form:\n∆Vwt(St) = ⟨α (∂Vwt(St)/∂wt)²⟩ (Gλt − Vwt(St)) (10)\nwhere α is now vector valued. Dividing by the error, our fractional update, or \"effective step-size\", is then:\n⟨α (∂Vwt(St)/∂wt)²⟩ (11)\nAs in the scalar case, we maintain a running maximum of effective step-sizes over states in the trace and multiplicatively decay α on each time step by the amount this exceeds one. This procedure is shown on lines 11, 12 and 13 of Algorithm 2. The full algorithm for the vector case, augmented with entropy regularization and normalization, is presented in Algorithm 2.\n4.3 Mixed Metatrace\nWe also explore a mixed algorithm where a vector correction to the step-size β is learned for each weight and added to a global value β̂ which is learned collectively for all weights. The derivation of this case is a straightforward extension of the scalar and vector cases. The full algorithm, which also includes entropy regularization and normalization, is detailed in Algorithm 3.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 20,
"total_chunks": 42,
"char_count": 1726,
"word_count": 338,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0285be29-2e67-4892-9175-e6b43e008ccb",
"text": "Algorithm 3 Normalized Mixed Metatrace for Actor-Critic\n1: h ← 0, ĥ ← 0, β̂ ← β0, β ← 0, v̂ ← 0, v ← 0\n2: for each episode do\n3: zβ ← 0, ẑβ ← 0, u ← 0\n4: while episode not complete do\n5: receive Vwt(St), πwt(At|St), δt, z\n6: Uwt ← Vwt + (1/2) log(πwt(At|St))\n7: zβ ← γλ zβ + (∂Uwt/∂wt) ⊙ h\n8: ẑβ ← γλ ẑβ + ⟨(∂Uwt/∂wt) ĥ⟩\n9: ∆β ← zβ δt + ψ (∂Hwt(St)/∂wt) ⊙ h\n10: ∆β̂ ← ẑβ δt + ψ ⟨(∂Hwt(St)/∂wt) ĥ⟩\n11: v ← max(|∆β|, v + µ(|∆β| − v))\n12: v̂ ← max(|∆β̂|, v̂ + µ(|∆β̂| − v̂))\n13: β ← β + µ ∆β/v (element-wise, using 1 in place of any element of v that is 0)\n14: β̂ ← β̂ + µ ∆β̂/(v̂ if v̂ > 0 else 1)\n15: u ← max(⟨exp(β̂ + β)(∂Uwt/∂wt)²⟩, u + (1 − γλ)(⟨exp(β̂ + β)(∂Uwt/∂wt)²⟩ − u))\n16: M ← max(u, 1)\n17: β̂ ← β̂ − log(M)\n18: h ← h + exp(β̂ + β) ⊙ (z ⊙ (δt + (∂δt/∂wt) ⊙ h) + ψ ∂Hwt(St)/∂wt)\n19: ĥ ← ĥ + exp(β̂ + β) ⊙ (z (δt + ⟨(∂δt/∂wt) ĥ⟩) + ψ ∂Hwt(St)/∂wt)\n20: output α = exp(β̂ + β)\n21: end while\n22: end for\nFig. 1: Return vs. training episodes on mountain car for a variety of α values with no step-size tuning. Each curve shows the average of 10 repeats and is smoothed by taking the average of the last 20 episodes.\n5 Experiments and Results\nWe begin by testing Scalar Metatrace on the classic mountain car domain using the implementation available in OpenAI gym [1]. A reward of −1 is given for each time-step until the goal is reached or a maximum of 200 steps are taken, after which the episode terminates. We use tile-coding for state representation: 16 10x10 tilings create a feature vector of size 1600. The learning algorithm is AC(λ) with γ fixed to 0.99 and λ fixed to 0.8 in all experiments. For mountain car we do not use any entropy regularization, and all weights were initialized to 0.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 21,
"total_chunks": 42,
"char_count": 1711,
"word_count": 366,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6bd5cf77-5564-47f4-a7e9-c3767a9087ed",
"text": "Figure 1 shows the results on this problem for a range of α values without step-size tuning. α values were chosen as powers of 2 which range from excessively low to too high to allow learning. Figure 2 shows the results of Normalized Scalar Metatrace for a range of µ values. For a fairly broad range of µ values, learning curves of different α values are much more similar compared to without tuning. Figure 3 shows the performance of Unnormalized Scalar Metatrace over a range of µ and initial α values. As in Figure 2 for Normalized Scalar Metatrace, for a fairly broad range of µ values, learning curves over different initial α values are much more similar compared to without tuning. Two qualitative differences of the normalized algorithm over the unnormalized version are: (1) the useful µ values are much larger, which for reasons outlined in [15] likely reflects less problem dependence, and (2) the highest initial α value tested, 2−5, which does not improve at all without tuning, becomes generally more stable. In both the normalized and unnormalized case, higher µ values than those shown cause increasing instability. We performed this same experiment for SID and NOSID [2], modified for AC(λ) rather than SARSA(λ). These results are presented in Figures 4 and 5. We found that the unnormalized variants of the two algorithms behaved very similarly on this problem, while the normalized variants showed differences. This is likely due to differences in the normalization procedures.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 22,
"total_chunks": 42,
"char_count": 1547,
"word_count": 255,
"chunking_strategy": "semantic"
},
{
"chunk_id": "42387b03-0e58-4298-bb32-f04d7990d9b2",
"text": "(a) µ = 2−6 (b) µ = 2−7 (c) µ = 2−8 (d) µ = 2−9 (e) µ = 2−10 (f) µ = 2−11 Fig. 2: Return vs. training episodes on mountain car for a variety of initial α\nvalues with Normalized Scalar Metatrace tuning with a variety of µ values. Each\ncurve shows the average of 10 repeats and is smoothed by taking the average of\nthe last 20 episodes.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 23,
"total_chunks": 42,
"char_count": 334,
"word_count": 71,
"chunking_strategy": "semantic"
},
{
"chunk_id": "556d9f8b-d2dc-4c27-b777-06c58d51ac8b",
"text": "For a broad range of µ values, scalar Metatrace is able to accelerate learning, particularly for suboptimal values of the initial α.\nSID's normalization procedure is based on that found in [8]. The one used by NOSID is based on that found in [14]. The normalization of NOSID seems to make the method more robust to initial α values that are too high, as well as maintaining performance across a wider range of µ values.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 24,
"total_chunks": 42,
"char_count": 399,
"word_count": 73,
"chunking_strategy": "semantic"
},
{
"chunk_id": "803be823-0f71-4f06-b7aa-6835eac51fc3",
"text": "(a) µ = 2−14 (b) µ = 2−15 (c) µ = 2−16 (d) µ = 2−17 (e) µ = 2−18 (f) µ = 2−19\nFig. 3: Return on mountain-car for a variety of initial α values with unnormalized scalar Metatrace with a variety of µ values. Each curve shows the average of 10 repeats and is smoothed by taking the average of the last 20 episodes.\nOn the other hand, it does not shift the useful range of µ values up as much (NOSID becomes unstable around µ = 2−8), again potentially indicating more variation in optimal µ value across problems. Additionally, the normalization procedure of NOSID makes use of a hard maximum rather than a running maximum in normalizing the β update, which is not well suited for non-stationary state representations where the appropriate normalization may significantly change over time. While this problem was insufficient to demonstrate a difference between",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 25,
"total_chunks": 42,
"char_count": 913,
"word_count": 160,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cd70ee08-b0dc-48cf-a673-7f94b3793136",
"text": "(a) µ = 2−14 (b) µ = 2−15 (c) µ = 2−16 (d) µ = 2−17 (e) µ = 2−18 (f) µ = 2−19 Fig. 4: Return on mountain-car for a variety of initial α values with SID with a\nvariety of µ values. Each curve shows the average of 10 repeats and is smoothed\nby taking the average of the last 20 episodes.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 26,
"total_chunks": 42,
"char_count": 285,
"word_count": 64,
"chunking_strategy": "semantic"
},
{
"chunk_id": "add441c2-5599-4e0b-b693-e6a2b8962ec1",
"text": "On this problem the behavior of SID is very similar to that of Scalar Metatrace.\nSID and Metatrace, we conjecture that in less sparse settings where each weight is updated more frequently, the importance of incorporating the correct time index would be more apparent. Our ALE experiments certainly fit this description; however, we do not directly compare with the update rule of SID in the ALE domain. For now, we note that [2] focuses on the scalar α case and does not extend to vector-valued α. Comparing these approaches more thoroughly, both theoretically and empirically, is left for future work.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 27,
"total_chunks": 42,
"char_count": 601,
"word_count": 101,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4683e164-5114-4e4a-872d-fd7c95f79a74",
"text": "(a) µ = 2−10 (b) µ = 2−11 (c) µ = 2−12 (d) µ = 2−13 (e) µ = 2−14 (f) µ = 2−15 Fig. 5: Return on mountain-car for a variety of initial α values with NOSID\nwith a variety of µ values. Each curve shows the average of 10 repeats and is\nsmoothed by taking the average of the last 20 episodes.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 28,
"total_chunks": 42,
"char_count": 287,
"word_count": 64,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fc0a422d-07f2-4b08-b3bc-d8e8e3e9313e",
"text": "NOSID behaves quite differently from Normalized Scalar Metatrace, likely owing to the difference in normalization technique applied. Vector Metatrace is much less effective on this problem (results are omitted). This can be understood as follows: the tile-coding representation is sparse, thus at any given time-step very few features are active. We get very few training examples for each αi value when vector step-sizes are used. Using a scalar step-size generalizes over all weights to learn the best overall step-size and is far more useful. On the other hand, one would expect vector step-size tuning to be",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 29,
"total_chunks": 42,
"char_count": 656,
"word_count": 104,
"chunking_strategy": "semantic"
},
{
"chunk_id": "80aa0a89-62aa-4941-952c-8a95514944f0",
"text": "more useful with dense representations or when there is good reason to believe different features require different learning rates. An example of the latter is when certain features are non-stationary or very noisy, as we will demonstrate next.\n5.2 Drifting Mountain Car\nHere we extend the mountain car domain, adding noise and non-stationarity to the state representation. This is intended to provide a proxy for the issues inherent in representation learning (e.g., using a NN function approximator), where the features are constantly changing and some are more useful than others. Motivated by the noisy, non-stationary experiment from [15], we create a version of mountain car that has similar properties. We use the same 1600 tiling features as in the mountain car case, but at each time-step each feature has a chance to randomly flip from indicating activation with 1 to −1 and vice-versa. We use a uniform flipping probability per time step across all features that we refer to as the drift rate. Additionally, we add 32 noisy features which are 1 or 0 with probability 0.5 for every time-step. In expectation, 16 will be active on a given time-step. With 16 tilings, 16 informative features will also be active, thus in expectation half of the active features will be informative. Due to non-stationarity, an arbitrarily small α is not asymptotically optimal, being unable to track the changing features. Due to noise, a scalar α will be suboptimal. Nonzero αi values for noisy features lead to weight updates when that feature is active in a particular state; this random fluctuation adds noise to the learning process. This can be avoided by annealing the associated αi values to zero.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 30,
"total_chunks": 42,
"char_count": 1684,
"word_count": 276,
"chunking_strategy": "semantic"
},
{
"chunk_id": "057870e9-3234-422c-9296-ce9cb24c5b22",
"text": "Figure 6 shows the best µ value tested for several drift rate values for scalar,\nvector, and mixed Metatrace methods along with a baseline with no tuning.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 31,
"total_chunks": 42,
"char_count": 154,
"word_count": 27,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5063ad4c-e022-44c7-9e67-c333c622fb48",
"text": "All methods learn quickly initially, before many features have flipped from their initial value. Once many features have flipped we see the impact of the various tuning methods. Mixed Metatrace performs the best in the long run, so there is indeed some benefit to individually tuning αs on this problem. Scalar Metatrace is able to accelerate early learning, but at the higher drift values it eventually drops off as it is unable to isolate the informative features from the noise. Vector Metatrace tends to under-perform scalar near the beginning but eventually surpasses it, once it is able to isolate the informative features from the noise.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 32,
"total_chunks": 42,
"char_count": 645,
"word_count": 104,
"chunking_strategy": "semantic"
},
{
"chunk_id": "64d76363-db98-4524-9efb-f88d02167247",
"text": "Figures 7a, 7c, and 7e show all µ values tested for each method for one drift rate value, to indicate the sensitivity of each method to µ. The general pattern is similar for a broad range of µ values. Figures 7b, 7d, and 7f show how the average β values for different weights evolve over time for the various methods. Both vector and mixed Metatrace show much stronger separation between the noisy and informative features for the value function than for the policy. One possible explanation for this is that small errors in the value function have far more impact on optimizing the objective Jλβ in mountain car than small imperfections in the policy. Learning a good policy requires a fine-grained ability to distinguish the value of similar states, as individual actions will have a relatively minor impact on the car. On the other hand, errors in the value function in one state have a large negative impact on both the value learning of other states and\n(a) drift rate = 4 × 10−6 (b) drift rate = 6 × 10−6 (c) drift rate = 8 × 10−6 (d) drift rate = 1 × 10−5\nFig. 6: Return vs. training episodes on drifting mountain car for a fixed initial α value of 2−10 with the best µ value for each tuning method, based on average return over the final 100 episodes. Each curve shows the average of 20 repeats, smoothed over 40 episodes.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 33,
"total_chunks": 42,
"char_count": 1316,
"word_count": 240,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3823d773-25a7-4ad7-8b5a-f5571670450a",
"text": "While all tuning methods improve on the baseline to some degree, mixed Metatrace is generally best.\nthe ability to learn a good policy. We expect this outcome would vary across problems.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 34,
"total_chunks": 42,
"char_count": 186,
"word_count": 31,
"chunking_strategy": "semantic"
},
{
"chunk_id": "93bcd100-89cf-4daf-837c-bcd11a1ccb58",
"text": "5.3 Arcade Learning Environment\nHere we describe our experiments with the 5 original training set games of the ALE (asterix, beam rider, freeway, seaquest, space invaders). We use the nondeterministic ALE version with repeat action probability = 0.25, as endorsed by [9]. We use a convolutional architecture similar to that used in the original DQN paper [12]. Input was an 84x84x4 stack of downsampled, gray-scale versions of the last 4 observed frames, normalized such that each input value is between 0 and 1. We use frame skipping such that only every 4th frame is observed. We use 2 convolutional layers, with 16 8x8 filters of stride 4 and 32 4x4 filters of stride 2, followed by a dense layer of 256 units. Following [4], activations were dSiLU in the fully connected layer and SiLU in the convolutional layers. Output consists of a linear",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 35,
"total_chunks": 42,
"char_count": 866,
"word_count": 144,
"chunking_strategy": "semantic"
},
{
"chunk_id": "62304783-6bd7-4b59-8f8e-3dac4032ebb4",
"text": "(a) Scalar Metatrace µ sweep. (b) Scalar Metatrace β evolution. (c) Vector Metatrace µ sweep. (d) Vector Metatrace β evolution. (e) Mixed Metatrace µ sweep. (f) Mixed Metatrace β evolution.\nFig. 7: (a, c, e) Return vs. training episodes on drifting mountain car for a fixed initial α value of 2−10 with various µ values and drift fixed to 6 × 10−6. (b, d, f) Evolution of average β values for various weights on drifting mountain car for different tuning methods for initial α = 2−10, µ = 2−10, drift = 6 × 10−6. Each curve shows the average of 20 repeats, smoothed over 40 episodes.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 36,
"total_chunks": 42,
"char_count": 583,
"word_count": 103,
"chunking_strategy": "semantic"
},
{
"chunk_id": "73b4589b-5e6d-43fd-8bea-2be327ccf8c2",
"text": "state value function and softmax policy. We fix γ = 0.99 and λ = 0.8 and use entropy regularization with ψ = 0.01. For Metatrace we also fix µ = 0.001, a value that (based on our mountain car experiments) seems to be reasonable. We run all experiments up to 12.5 million observed frames (equivalent to 50 million emulator frames when frame skipping is accounted for).\n(a) No tuning. (b) Scalar Metatrace. (c) Mixed Metatrace. (d) Vector Metatrace.\nFig. 8: Return vs. learning steps for different step-size tuning methods across initial α values for seaquest. In all cases normalization was enabled and µ fixed to 0.001.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 37,
"total_chunks": 42,
"char_count": 621,
"word_count": 107,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3f1a3641-1b67-4052-ae47-fe0ef4f4f5bb",
"text": "We first performed a broad sweep of α values with only one repeat each for each of the 3 tuning methods, and an untuned baseline, on seaquest to get a sense of how the different methods perform in this domain. While it is difficult to draw conclusions from single runs, the general trend we observed is that the scalar tuning method helped more initial α values break out of the first performance plateau and made the performance of different initial α values closer across the board. Scalar tuning, however, did not seem to improve learning for the strongest initial αs. Vector and mixed tuning, on the other hand, seemed to improve performance of near-optimal αs as well as reducing the discrepancy between them, but did not seem to help with enabling weaker initial α values to break out of the first performance plateau. Note also that, likely owing in part to the normalization, with each tuning method the highest α tested is able to improve to some degree, while without tuning it essentially does not improve at all. After this initial sweep we performed more thorough experiments on the 5 original training set games using mixed Metatrace with the 3 best α values found in our initial sweep. Additionally, we test a baseline with no tuning with the same α values. We ran 5 random seeds with each setting. Figure 9 shows the results of these experiments. Metatrace significantly decreases the sensitivity to the initial choice of α. In many cases, Metatrace also improved final performance while accelerating learning.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 38,
"total_chunks": 42,
"char_count": 1561,
"word_count": 266,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3bd35337-756b-401e-8374-98d77f20c7c1",
"text": "In space invaders, all 3 α values tested outperformed the best α with no tuning, and in seaquest 2 of the 3 did. In asterix, the final performance of all initial αs with Metatrace was similar to that of the best untuned α value, but with faster initial learning. In beam rider, however, we see that using no tuning results in faster learning, especially for a well tuned initial α. We hypothesize that this result can be explained by the high α sensitivity and slow initial learning speed in beam rider. There is little signal for the α tuning to learn from early on, so the α values just drift due to noise and the long-term impact is never seen. In future work it would be interesting to look at what can be done to make Metatrace robust to this issue. In freeway, no progress occurred either with or without tuning.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 39,
"total_chunks": 42,
"char_count": 813,
"word_count": 149,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f19ee426-be02-4aee-8d06-f358b7ed149f",
"text": "We introduce Metatrace, a novel set of algorithms based on meta-gradient descent, which perform step-size tuning for AC(λ). We demonstrate that Scalar Metatrace improves robustness to initial step-size choice in a standard RL domain, while Mixed Metatrace facilitates learning in an RL problem with non-stationary state representation. The latter result extends results of [15] and [8] from the SL case. Reasoning that such non-stationarity in the state representation is an inherent feature of NN function approximation, we also test the method for training a neural network online for several games in the ALE. Here we find that in three of the four games where the baseline was able to learn, Metatrace allows a range of initial step-sizes to learn faster and achieve similar or better performance compared to the best fixed choice of α. In future work we would like to investigate what can be done to make Metatrace robust to the negative example we observed in the ALE. One thing that may help here is a more thorough analysis of the nonlinear case, to see what can be done to better account for the higher order effects of the step-size updates on the weights and eligibility traces without compromising the computational efficiency necessary to run the algorithm online. We are also interested in applying a similar meta-gradient descent procedure to other RL hyperparameters, for example the bootstrapping parameter λ or the entropy regularization parameter ψ. More broadly, we would like to abstract the ideas behind online meta-gradient descent to the point where one could apply it automatically to the hyperparameters of an arbitrary online RL algorithm. The authors would like to acknowledge Pablo Hernandez-Leal, Alex Kearney and Tian Tian for useful conversation and feedback.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 40,
"total_chunks": 42,
"char_count": 1803,
"word_count": 288,
"chunking_strategy": "semantic"
},
{
"chunk_id": "98e94cf5-1c81-44bd-8122-d604cb1b749a",
"text": "(a) No tuning on space invaders. (b) Metatrace on space invaders. (c) No tuning on seaquest. (d) Metatrace on seaquest. (e) No tuning on asterix. (f) Metatrace on asterix. (g) No tuning on beam rider. (h) Metatrace on beam rider. (i) No tuning on freeway. (j) Metatrace on freeway.\nFig. 9: Return vs. learning steps for ALE games. Each curve is the average of 5 repeats and is smoothed by taking a running average over the most recent 40 episodes. The meta-step-size parameter µ was fixed to 0.001 for each run. For ease of comparison, the plots for Metatrace also include the best tested constant α value in terms of average return over the last 100 training episodes.",
"paper_id": "1805.04514",
"title": "Metatrace Actor-Critic: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control",
"authors": [
"Kenny Young",
"Baoxiang Wang",
"Matthew E. Taylor"
],
"published_date": "2018-05-10",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.04514v2",
"chunk_index": 41,
"total_chunks": 42,
"char_count": 720,
"word_count": 127,
"chunking_strategy": "semantic"
}
]