| [ |
| { |
| "chunk_id": "8ffd7df4-2f3f-45ae-9733-cc4bd821c953", |
| "text": "Policy Optimization with Second-Order Advantage Information Jiajin Li∗ Baoxiang Wang∗\njjli@se.cuhk.edu.hk bxwang@cse.cuhk.edu.hk\nThe Chinese University of Hong Kong The Chinese University of Hong Kong Policy optimization on high-dimensional continuous control tasks exhibits its difficulty caused by the large\nvariance of the policy gradient estimators. We present the action subspace dependent gradient (ASDG) estimator which incorporates the Rao-Blackwell theorem (RB) and Control Variates (CV) into a unified framework to\nreduce the variance. To invoke RB, our proposed algorithm (POSA) learns the underlying factorization structure among the action space based on the second-order advantage information. POSA captures the quadratic2019\ninformation explicitly and efficiently by utilizing the wide & deep architecture. Empirical studies show that\nour proposed approach demonstrates the performance improvements on high-dimensional synthetic settings and\nOpenAI Gym's MuJoCo continuous control tasks.May\n29 1 Introduction Deep reinforcement learning (RL) algorithms have been widely applied in various challenging problems, including\nvideo games [15], board games [21], robotics [9], dynamic routing [24, 10], and continuous control tasks [20,\n12]. An important approach among these methods is policy gradient (PG). Since its inception [23], PG has been\ncontinuously improved by the Control Variates (CV) [16] theory. Examples are REINFORCE [23], Advantage[cs.LG] actor-critic (A2C) [14], Q-prop [7], and action-dependent baselines [13, 6, 22]. However, when dealing with highdimensional action spaces, CV has limited effects regarding the sample efficiency. Rao-Blackwell theorem (RB)\n[1], though not heavily adopted in policy gradient, is commonly used with CV to address high-dimensional spaces\n[17]. Motivated by the success of RB in high-dimensional spaces [17], we incorporate both RB and CV into a unified\nframework. 
We present the action subspace dependent gradient (ASDG) estimator.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 0, |
| "total_chunks": 28, |
| "char_count": 1993, |
| "word_count": 272, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fad327f2-6685-4634-8a2a-71e9a6e2b53f", |
| "text": "ASDG first breaks the original\nhigh dimensional action space into several low dimensional action subspaces and replace the expectation (i.e.,\npolicy gradient) with its conditional expectation over subspaces (RB step) to reduce the sample space. A baseline\nfunction associated with each of the corresponding action subspaces is used to further reduce the variance (CV\nstep). While ASDG is benefited from both RB and CV's ability to reduce the variance, we show that ASDG is\nunbiased under relatively weak assumptions over the advantage function.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 1, |
| "total_chunks": 28, |
| "char_count": 544, |
| "word_count": 83, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "259b3cad-dd0f-4786-a842-8055ec3d7d07", |
| "text": "The major difficulty to invoke RB is to find a satisfying action domain partition. Novel trials such as [25] utilize\nRB under the conditional independence assumption which assumes that the policy distribution is fully factorized\nthe policy distribution flexibility and [25] is conducting the optimization in a restricted domain. In our works,\nwe show that Hessian of the advantage with respect to the action is theoretically connected with the action space\nstructure. Specifically, the block-diagonal structure of Hessian is corresponding to the partition of the action space. We exploit such second-order information with the evolutionary clustering algorithm [2] to learn the underlying\nfactorization structure in the action space. Instead of the vanilla multilayer perceptron, we utilize the wide & deep\narchitecture [3] to capture such information explicitly and efficiently. With the second-order advantage information, ASDG finds the partition that approximates the underlying structure of the action space. We evaluate our method on a variety of reinforcement learning tasks, including a high-dimensional synthetic\nenvironment and several OpenAI Gym's MuJoCo continuous control environments. We build ASDG and POSA\non top of proximal policy optimization (PPO), and demonstrate that ASDG consistently obtains the ideal balance:\nwhile improving the sample efficiency introduced by RB [25], it keeps the accuracy of the feasible solution [13]. In environments where the model assumptions are satisfied or minimally violated empirically, while not trivially ∗These authors contribute equally to this work. satisfied by [25], POSA outperforms previous studies with the overall cumulated rewards it achieves. In the\ncontinuous control tasks, POSA is either competitive or superior, depending on whether the action space exhibits\nits structure under the environment settings. We present the canonical reinforcement learning (RL) formalism in this section. 
Consider policy learning in the\ndiscrete-time Markov decision process (MDP) defined by the tuple (S, A, T , r, ρ0, γ) where S ∈Rn is the n\ndimensional state space, A ∈Rm is the m dimensional action space, T : S × A × S →R+ is the environment\ntransition probability function, r : S × A →R is the reward function, ρ0 is the initial state distribution and\nγ ∈(0, 1] is the unnormalized discount factor.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 2, |
| "total_chunks": 28, |
| "char_count": 2355, |
| "word_count": 358, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "37bf6642-2605-45fc-8131-5958237fbeb4", |
| "text": "RL learns a stochastic policy πθ : S × A →R+, which is\nparameterized by θ, to maximize the expected cumulative reward J(θ) = Es∼ρπ,a∼π[X γtr(st, at)].\nt=0 In the above equation, ρπ(s) = P∞t=1 γt−1P(st = s) is the discounted state visitation distribution. Define the\nvalue function\nV π(st) = Eπ[ X γt′−tr(st′, at′)|st, π]\nt′≥t to be the expected return of policy π at state st. Define the state-action function", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 3, |
| "total_chunks": 28, |
| "char_count": 409, |
| "word_count": 72, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "05e60f5d-2a1d-48b5-9d70-acb2f29e01c9", |
| "text": "Qπ(st, at) = Eπ[ X γt′−tr(st′, at′)|st, at, π]\nt′≥t to be the expected return by policy π after taking the action at at the state st. We use ˆQπ(st, at) and ˆV π(st) to\ndenote the empirical function approximator of Qπ(st, at) and V π(st), respectively. Define the advantage function\nto be the gap between the value function and the action-value, as Aπ(st, at) = Qπ(st, at)−V π(st). To simplify the\nnotation, we focus on the time-independent formulation J(θ) = Eπ,ρπ[r(s, a)].", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 4, |
| "total_chunks": 28, |
| "char_count": 475, |
| "word_count": 83, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b6dbcd66-2c0f-4677-8eac-2424d6f445ce", |
| "text": "According to the policy gradient\ntheorem [23], the gradient of the expected cumulative reward can be estimated as ∇θJ(θ) = Eπ[∇θ log π(a|s)Qπ(s, a)]. 2.2 Variance Reduction Methods In practice, the vanilla policy gradient estimator is commonly estimated using Monte Carlo samples. A significant\nobstacle to the estimator is the sample efficiency. We review three prevailing variance reduction techniques in\nMonte Carlo estimation methods, including Control Variates, Rao-Blackwellization, and Reparameterization Trick. Control Variates - Consider the case we estimate the expectation Ep(x)[h(x)] with Monte Carlo samples {xi}Bi=1\nfrom the underlying distribution p(x). Usually, the original Monte Carlo estimator has high variance, and the main\nidea of Control Variates is to find the proper baseline function g(x) to partially cancel out the variance. A baseline\nfunction g(x) with its known expectation over the distribution p(x) is used to construct a new estimator ˆh(x) = h(x) −η(g(x) −Ep[g(x)]), where η is a constant determined by the empirical Monte Carlo samples. The Control Variates method is unbiased\nbut with a smaller variance V ar(ˆh(x)) ≤V ar(h(x)) at the optimal value η∗= Cov(h,g)V ar(g) . Rao-Blackwellization - Though most of the recent policy gradient studies reduce the variance by Control Variates, the Rao-Blackwell theorem [1] decreases the variance significantly more than CV do, especially in highdimensional spaces [17]. The motivation behind RB is to replace the expectation with its conditional expectation\nover a subset of random variables. In this way, RB transforms the original high-dimensional integration computation problem into estimating the conditional expectation on several low-dimensional subspaces separately.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 5, |
| "total_chunks": 28, |
| "char_count": 1753, |
| "word_count": 254, |
| "chunking_strategy": "semantic" |
| }, |
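The Control Variates recipe above can be checked numerically. A minimal sketch, with a toy target h(x) = x² and baseline g(x) = x chosen purely for illustration (neither appears in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo target: estimate E[h(x)] for h(x) = x**2 with x ~ N(1, 1).
# True value: E[x^2] = Var(x) + E[x]^2 = 2.
x = rng.normal(loc=1.0, scale=1.0, size=100_000)
h = x ** 2

# Baseline g(x) = x with known expectation E_p[g] = 1 (the CV requirement).
g = x
eg = 1.0

# Optimal coefficient eta* = Cov(h, g) / Var(g), estimated from the samples.
eta = np.cov(h, g)[0, 1] / np.var(g)

# CV estimator: h_hat = h - eta * (g - E_p[g]); unbiased, lower variance.
h_cv = h - eta * (g - eg)

assert abs(h_cv.mean() - 2.0) < 0.05   # still estimates E[h] = 2
assert h_cv.var() < h.var()            # variance is reduced
```

Here the variance drops roughly from 6 to 2, matching the closed-form calculation for this toy pair (h, g).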
| { |
| "chunk_id": "7fd55ebe-6213-4592-bac5-7d18be5e2419", |
| "text": "Consider a simple setting with two random variable sets A and B and the objective is to compute the expectation\nE[h(A, B)]. Denote that the conditional expectation ˆB as ˆB = E[h(A, B)|A]. The variance inequality V ar(ˆB) ≤V ar(h(A, B)) holds as shown in the Rao-blackwell theorem. In practical, when A and B are in high dimensional spaces, the\nconditioning is very useful and it reduces the variance significantly. The case of multiple random variables is\nhosted in a similar way. Reparameterization Trick - One of the recent advances in variance reduction is the reparameterization trick. It provides an estimator with lower empirical variance compared with the score function based estimators, as\ndemonstrated in [8, 18]. Using the same notation as is in the Control Variates section, we assume that the random\nvariable x is reparameterized by x = f(θ, ξ), ξ ∼q(ξ), where q(ξ) is the base distribution (e.g., the standard\nnormal distribution or the uniform distribution). Under this assumption, the gradient of the expectation Ep(x)[h(x)]\ncan be written as two identical forms i.e., the score function based form and reparameterization trick based form Ep[∇θ log p(x)h(x)] = Eq[∇θf(θ, ξ)∇xh(x)]. (1)", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 6, |
| "total_chunks": 28, |
| "char_count": 1202, |
| "word_count": 193, |
| "chunking_strategy": "semantic" |
| }, |
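The Rao-Blackwell step in the two-variable setting above can also be verified directly. A sketch with an assumed toy function h(A, B) = A + B² where the conditional expectation E[h | A] is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent variables; objective: estimate E[h(A, B)] for h = A + B**2.
A = rng.normal(size=n)
B = rng.normal(size=n)
h = A + B ** 2

# Rao-Blackwellized estimator B_hat = E[h | A] = A + E[B**2] = A + 1:
# the inner expectation is computed analytically, so only A stays random.
h_rb = A + 1.0

# Both target E[h] = 1, but Var(E[h | A]) <= Var(h) by Rao-Blackwell
# (here 1 versus 3 analytically).
assert abs(h.mean() - 1.0) < 0.05
assert abs(h_rb.mean() - 1.0) < 0.05
assert h_rb.var() < h.var()
```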
| { |
| "chunk_id": "a7a9f8fd-18f2-4d6a-990d-6bda12c34775", |
| "text": "The reparameterization trick based estimator (the right-hand side term) has relatively lower variance. Intuitively,\nthe reparameterization trick provides more informative gradients by exposing the dependency of the random variable x on the parameter θ. 2.3 Policy Gradient Methods Previous attempts to reduce the variance mainly focus on the Control Variates method in the policy gradient framework (i.e., REINFORCE, A2C, Q-prop). A proper choice of the baseline function is vital to reduce the variance. The vanilla policy gradient estimator, REINFORCE [23], subtracts the constant baseline from the action-value\nfunction,\n∇θJ(θ)RF = Eπ[∇θ log π(a|s)(Qπ(s, a) −b)]. The estimator in REINFORCE is unbiased. The key point to conclude the unbiasedness is that the constant baseline\nfunction has a zero expectation with the score function. Motivated by this, the baseline function is set to be the\nvalue function V π(s) in the advantage actor-critic (A2C) method [14], as the value function can also be regarded\nas a constant under the policy distribution π(a|s) with respect to the action a. Thus the A2C gradient estimator is ∇θJ(θ)A2C = Eπ[∇θ log π(a|s)(Qπ(s, a) −V π(s))]\n= Eπ[∇θ log π(a|s)Aπ(s, a)]. To further reduce the gradient estimate variance to acquire a zero-asymptotic variance estimator, [13] and [6] propose a general action dependent baseline function b(s, a) based on the identity (1). Note that the stochastic policy\ndistribution πθ(a|s) is reparametrized as a = f(θ, s, ξ), ξ ∼q(ξ), we rewrite Eq. (1) to get a zero-expectation\nbaseline function as below E[∇θ log π(a|s)b(s, a) −∇θf(θ, s, ξ)∇ab(s, a)] = 0. (2)", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 7, |
| "total_chunks": 28, |
| "char_count": 1627, |
| "word_count": 255, |
| "chunking_strategy": "semantic" |
| }, |
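The zero-expectation property of a constant baseline against the score function, which underlies REINFORCE and A2C above, can be sketched for a one-dimensional Gaussian policy. The quadratic stand-in for Qπ and the empirical-mean baseline are assumptions of this demo, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Gaussian policy pi(a) = N(mu, 1); the score w.r.t. mu is (a - mu).
mu = 0.5
a = rng.normal(loc=mu, scale=1.0, size=n)
score = a - mu

# A constant baseline b has zero expectation against the score function,
# so subtracting it leaves the gradient estimator unbiased.
Q = (a - 2.0) ** 2          # stand-in action-value, for illustration only
b = Q.mean()                # near-constant baseline (analogous to V(s) in A2C)

g_plain = score * Q         # REINFORCE-style term without baseline
g_base = score * (Q - b)    # same term with the baseline subtracted

assert abs(g_plain.mean() - g_base.mean()) < 0.05  # same expectation
assert g_base.var() < g_plain.var()                # lower variance
```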
| { |
| "chunk_id": "987c0224-6c28-42f2-8afa-7aaf60d4f7ed", |
| "text": "Incorporating with the zero-expectation baseline (2), the general action dependent baseline (GADB) estimator is\nformulated as ∇θJ(θ)GADB =Eπ[∇θ log π(a|s)(Qπ(s, a) −b(s, a)) + ∇θf(θ, s, ξ)∇ab(s, a)]. (3) 3.1 Construct the ASDG Estimator We present our action subspace dependent gradient (ASDG) estimator by applying RB on top of the GADB estimator.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 8, |
| "total_chunks": 28, |
| "char_count": 348, |
| "word_count": 52, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2aae3682-84bd-4efb-ba13-65d1013a8c1e", |
| "text": "Starting with Eq. (3), we rewrite the baseline function in the form of b(s, a) = V π(s)+c(s, a). The GADB\nestimator in Eq. (3) is then formulated as ∇θJ(θ)GADB = Eπ[∇θ log π(a|s)(Aπ(s, a) −c(s, a)) + ∇θf(θ, s, ξ)∇ac(s, a)]. Assumption 1 (Advantage Quadratic Approximation) Assume that the advantage function Aπ(s, a) can be locally second-order Taylor expanded with respect to a at some point a∗, that is, Aπ(a, s) ≈Aπ(a∗, s) + ∇aAπ(a, s)|Ta=a∗(a −a∗)\n+ 2(a −a∗)T ∇aaAπ(a, s)|a=a∗(a −a∗). (4) The baseline function c(s, a) is chosen from the same family.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 9, |
| "total_chunks": 28, |
| "char_count": 554, |
| "word_count": 96, |
| "chunking_strategy": "semantic" |
| }, |
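Assumption 1 says the advantage is locally determined by its gradient and Hessian at a∗. A sketch with an assumed 3-dimensional quadratic advantage: the Hessian recovered by finite differences matches the generating matrix, and its zero cross-entries foreshadow the block structure used in Assumption 2:

```python
import numpy as np

# Toy quadratic advantage A(a) = g^T a + 1/2 a^T H a (a* = 0); H and g are
# assumptions of this sketch.  Dimensions {0, 1} interact, dimension {2}
# is independent of them.
H = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
g = np.array([1.0, -1.0, 0.5])

def advantage(a):
    return g @ a + 0.5 * a @ H @ a

def hessian(f, a, eps=1e-4):
    # Second-order finite differences; exact (up to rounding) for quadratics.
    m = len(a)
    out = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            e_i, e_j = np.eye(m)[i] * eps, np.eye(m)[j] * eps
            out[i, j] = (f(a + e_i + e_j) - f(a + e_i)
                         - f(a + e_j) + f(a)) / eps**2
    return out

H_hat = hessian(advantage, np.zeros(3))
assert np.allclose(H_hat, H, atol=1e-4)
assert abs(H_hat[0, 2]) < 1e-4 and abs(H_hat[1, 2]) < 1e-4  # zero cross-blocks
```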
| { |
| "chunk_id": "2301ab39-17e5-4476-86cc-0002e51109fb", |
| "text": "Assumption 2 (Block Diagonal Assumption) Assume that the row-switching transform of Hessian ∇aaAπ(a, s)|a=a∗\nis a block diagonal matrix diag(M1, . . . , Mk), where PKk=1 dim(Mk) = m. Based on Assumption (1) and (2), the advantage function Aπ(s, a) can be divided into K independent components Aπ(s, a) = X Aπk(s, a(k)),\nk=1 where a(k) denotes the projection of the action a to the k-th action subspace corresponding to Mk. The baseline\nfunction c(s, a) is divided in the same way. Theorem 3 (ASDG Estimator) If the advantage function Aπ(s, a) and the baseline function c(s, a) satisfy Assumption (1) and (2), the ASDG estimator ∇θJ(θ)ASDG is ∇θJ(θ)ASDG = X Eπ(a(k)|s)[∇θ log π(a(k)|s)(Aπ(s, a(k)) −c(s, (a(k), ˜a(−k)))) −∇θfk(θ, s, ξ)∇a(k)ck(s, a(k))],\nk=1\nwhere ∇θf(θ, s, ξ) ∈RNθ×m is divided into K parts as ∇θf = [∇θf1, ..., ∇θfK] and Nθ is the dimension of θ. Proof 3.1 Using the fact that\nEπ(a|s)[.] = Eπ(a(k)|s)Eπ(a(−k)|a(k),s)[.],\nwhere a(−k) represents the elements within a that are complementary to a(k). 
With the assumptions we have\n∇J(θ)ASDG =Eπ(a(k)|s)Eπ(a(−k)|a(k),s)[(∇θ log π(a(k)|s) + ∇θ log π(a(−k)|a(k), s))\n(Aπk(s, a(k)) + X Aπi (s, a(i)) −ck(s, a(k)) − X ci(s, a(i)))\ni̸=k i̸=k + X ∇θfk(s, a(k))∇a(k)ck(s, a(k))]\nk=1\n=Eπ(a(k)|s)Eπ(a(−k)|a(k),s)[∇θ log π(a(k)|s)(Aπk −ck) −∇θfk∇a(k)ck]\n+ Eπ(a(k)|s)Eπ(a(−k)|a(k),s)[∇θ log π(a(k)|s)(X Aπi − X ci)] (5)\ni̸=k i̸=k\n+ Eπ(a(k)|s)Eπ(a(−k)|a(k),s)[∇θ log π(a(−k)|a(k), s)(Aπk −ck)] (6)\n+ Eπ(a(k)|s)Eπ(a(−k)|a(k),s)[∇θ log π(a(−k)|a(k), s)((X Aπi − X ci)) − X ∇θfi∇a(i)ci]\ni̸=k i̸=k i̸=k (♣)\n= Eπ(a(k)|s)[∇θ log π(a(k)|s)(Aπk −ck) −∇θfk∇a(k)ck]\n+ Eπ(a(−k)|a(k),s)[∇θ log π(a(−k)|a(k), s)((X Aπi − X ci)) − X ∇θfi∇a(i)ci]\ni̸=k i̸=k i̸=k (♥)\n= X Eπ(a(k)|s)[∇θ log π(a(k)|s)(Aπk −ck) −∇θfk∇a(k)ck]\nk=1 = X Eπ(a(k)|s)[∇θ log π(a(k)|s)(Aπk + X Aπi −ck − X ci) −∇θfk∇a(k)ck]\nk=1 i̸=k i̸=k = X Eπ(a(k)|s)[∇θ log π(a(k)|s)(Aπ(s, a) −c(s, a(k), ˜a(−k))) −∇θfk∇a(k)ck], (7)\nk=1 where (♣) holds as term (5) and term (6) equal to zero (using the property that the expectation of the score function\nis zero) and (♥) is expanded by induction. ■", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 10, |
| "total_chunks": 28, |
| "char_count": 2091, |
| "word_count": 331, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fc93125c-a2f9-454c-9ed3-e36a9be5ad84", |
| "text": "Our assumptions are relatively weak compared with previous studies on variance reduction for policy optimization. Different from the fully factorization policy distribution assumed in [25], our method relaxes the assumption to the\nconstraints on the advantage function Aπ(s, a) with respect to the action space instead. Similar to that, we just use\nthis assumption to obtain the structured factorization action subspaces to invoke the Rao-Blackwellization and our\nestimator does not introduce additional bias. Connection with other works - If we assume the Hessian matrix of the advantage function has no block diagonal\nstructure under any row switching transformation (i.e., K = 1), ASDG in Theorem. 3 is the one inducted in [13]\nand [6]. If we otherwise assume that Hessian is diagonal (i.e., K = m), the baseline function c(s, a(k), ˜a(−k))\nequals to Pi̸=k ci(s, a(i)), which means that each action dimension is independent with its baseline function. Thus, the estimator in [25] is obtained. Selection of the baseline functions c(s, a) - Two approaches exist to find the baseline function, including minimizing the variance of the PG estimator or minimizing the square error between the advantage function and the\nbaseline function [13, 6]. Minimizing the variance is hard to implement in general, as it involves the gradient of\nthe score function with respect to the baseline function parameter. In our work, we use a neural network advantage\napproximation as our baseline function by minimizing the square error. Under the assumption that the variance of\nreparametrization term ∇θfk(θ, s, ξ)∇a(k)ck(s, a(k)) is closed to zero, the two methods yield the same result.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 11, |
| "total_chunks": 28, |
| "char_count": 1671, |
| "word_count": 263, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "52ae7456-3ee7-417f-ad08-d81e013b9ffb", |
| "text": "3.2 Action Domain Partition with Second-Order Advantage Information When implementing the ASDG estimator, Temporal Difference (TD) learning methods such as Generalized Advantage Estimation (GAE) [4, 19] allow us to obtain the estimation ˆA(s, a) based on the value function V w(s)\nvia\nˆA(st, at) = X (λγ)t′−tδt′, (8)\nt′≥t where\nδt = E[rt + γV w(st+1) −V w(st)] (9)", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 12, |
| "total_chunks": 28, |
| "char_count": 364, |
| "word_count": 59, |
| "chunking_strategy": "semantic" |
| }, |
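Equations (8)-(9) above admit the usual backward-scan implementation. A minimal sketch (the reward and value numbers are invented for the check), verified against the direct double sum:

```python
import numpy as np

# GAE sketch (Eqs. 8-9): A_hat_t = sum_{t'>=t} (lambda*gamma)^{t'-t} delta_{t'}
# with delta_t = r_t + gamma*V(s_{t+1}) - V(s_t), computed by a backward scan.
def gae(rewards, values, gamma=0.99, lam=0.95):
    T = len(rewards)
    deltas = [rewards[t] + gamma * values[t + 1] - values[t] for t in range(T)]
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

# Toy check against the direct sum in Eq. (8).
r = [1.0, 0.5, -0.2]
v = [0.3, 0.1, 0.0, 0.2]        # values[T] bootstraps the final state
adv = gae(r, v, gamma=0.9, lam=0.8)
deltas = [r[t] + 0.9 * v[t + 1] - v[t] for t in range(3)]
direct = sum((0.9 * 0.8) ** k * deltas[k] for k in range(3))
assert abs(adv[0] - direct) < 1e-12
```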
| { |
| "chunk_id": "c1586df4-8e92-42f7-97e9-1f9efe9b57da", |
| "text": "and λ is the discount factor of the λ-return in GAE. GAE further reduces the variance and avoids the action gap at\nthe cost of a small bias. Obviously, we cannot obtain the second-order information ∇aaA(s, a) with the advantage estimation in GAE\nidentity (8). Hence, apart from the value network V w(s), we train a separate advantage network to learn the\nadvantage information.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 13, |
| "total_chunks": 28, |
| "char_count": 377, |
| "word_count": 64, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "28180b6e-ed4f-4e93-b9eb-5200196e0a9b", |
| "text": "The neural network approximation Aµ(s, a) is used to smoothly interpolate the realization\nvalues ˆA(s, a), by minimizing the square error min ||ˆA(s, a) −Aµ(s, a)||2. (10) As shown in assumption (2), we use the block diagonal matrix to approximate the Hessian matrix and subsequently\nobtain the structure information in the action space. In the above advantage approximation setting, the Hessian\ncomputation is done by first approximating the advantage realization value and then differentiating the advantage\napproximation to obtain an approximate Hessian. However, for any finite number of data points there exists an\ninfinite number of functions, with arbitrarily satisfied Hessian and gradients, which can perfectly approximate the\nadvantage realization values [11]. Optimizing such a square error objective leads to unstable training and is prone\nto yield poor results. To alleviate this issue, we propose a novel wide & deep architecture [3] based advantage\nnet. In this way, we divide the advantage approximator into two parts, including the quadratic term and the deep\ncomponent, as\nAµ(s, a) = β1 · Awide + β2 · Adeep, where β1 and β2 are the importance weights. Subsequently, we make use of Factorization Machine (FM) model as\nour wide component\nAwide(s, a) = w0(s) + w1(s)T a + w2(s)w2(s)T ⊙aaT ,\nwhere w0(s) ∈R, w1(s) ∈Rm and w2(s) ∈Rm×m′ are the coefficients associated with the action. Also, m′ is\nthe dimension of latent feature space in the FM model.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 14, |
| "total_chunks": 28, |
| "char_count": 1465, |
| "word_count": 235, |
| "chunking_strategy": "semantic" |
| }, |
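The wide component's closed-form second-order structure can be sketched numerically. Here w0, w1, w2 are fixed stand-ins for the state-conditioned network outputs, and the contraction follows the ⊙ definition given in the next chunk; under that form the Hessian of A_wide in a is 2·w2 w2ᵀ, available from a forward pass with no backpropagation, which is the property POSA exploits (up to the constant factor):

```python
import numpy as np

rng = np.random.default_rng(0)

# FM-style wide component: A_wide(a) = w0 + w1^T a + sum_ij (w2 w2^T)_ij (a a^T)_ij.
# w0, w1, w2 are assumed constants standing in for the network outputs.
m, m_latent = 5, 2
w0 = 0.3
w1 = rng.normal(size=m)
w2 = rng.normal(size=(m, m_latent))

def a_wide(a):
    return w0 + w1 @ a + np.sum((w2 @ w2.T) * np.outer(a, a))

# Check one Hessian entry by a second-order finite difference: it matches
# the closed form 2 * (w2 w2^T)_00.
a = rng.normal(size=m)
e0, eps = np.eye(m)[0], 1e-4
hess_00 = (a_wide(a + 2 * eps * e0) - 2 * a_wide(a + eps * e0)
           + a_wide(a)) / eps**2
assert np.isclose(hess_00, 2 * (w2 @ w2.T)[0, 0], atol=1e-4)
```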
| { |
| "chunk_id": "07230ff4-0a1a-4448-9e12-68e123421175", |
| "text": "Note that the Hadamard product A ⊙B = Pi,j AijBij. To increase the signal-to-noise ratio of the second-order information, we make use of wide components Hessian w2(s)w2(s)T as our Hessian approximator in POSA.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 15, |
| "total_chunks": 28, |
| "char_count": 209, |
| "word_count": 33, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "345e50d9-a5f3-47cc-a749-6ef58cc29f46", |
| "text": "The benefits are two-fold. On the one hand, we can compute\nthe Hessian via the forward propagation with low computational costs. On the other hand, the deep component\ninvolves large noise and uncertainties and we obtain stable and robust Hessian by excluding the deep component\nfrom calculating Hessian. The Hessian matrix contains both positive and negative values. However, we concern only the pairwise dependency\nbetween the action dimensions, which can be directly represented by the absolute value of Hessian.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 16, |
| "total_chunks": 28, |
| "char_count": 514, |
| "word_count": 79, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5fb6a9c5-bc61-418f-be71-1e49ba744e59", |
| "text": "For instance,\nconsidering a quadratic function f(x) = a + bT x + xT Cx, x ∈Rm, it can be written as f(x) = a + Pi bixi +\n∂2f(x)\nPi,j Cijxixj. The elements in the Hessian matrix satisfy ∂xi∂xj = Cij. When Cij is close to zero, xi and xj\nare close to be independent.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 17, |
| "total_chunks": 28, |
| "char_count": 264, |
| "word_count": 56, |
| "chunking_strategy": "semantic" |
| }, |
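Reading the absolute Hessian as a pairwise-dependency matrix suggests how the partition can be extracted. A simplified stand-in for the paper's evolutionary clustering: connected components of the thresholded |H|, which recover the same blocks when the block-diagonal structure is clean (the matrix below is an assumed example):

```python
import numpy as np

# Toy Hessian with two coupled blocks: {0, 1} and {2, 3}.
H = np.array([[2.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, -0.7],
              [0.0, 0.0, -0.7, 1.5]])

def partition(H, tol=1e-8):
    # Treat |H| > tol as an adjacency matrix over action dimensions and
    # return its connected components via depth-first search.
    m = H.shape[0]
    adj = np.abs(H) > tol
    seen, blocks = set(), []
    for i in range(m):
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in range(m) if adj[u, v] and v not in comp)
        seen |= comp
        blocks.append(sorted(comp))
    return blocks

assert partition(H) == [[0, 1], [2, 3]]
```

In the noisy, drifting setting of the paper, the evolutionary clustering of [2] plays this role while smoothing the partition across iterations.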
| { |
| "chunk_id": "3baf4d7d-301a-45fd-a0fb-a3f35c7eb9c5", |
| "text": "Thus we can decompose the function f(x) accordingly optimize the components\nseparately. We modify the evolutionary clustering algorithm in [2] by using the absolute approximating Hessian |w2(s)w2(s)T |\nas the affinity matrix in the clustering task. In other words, each row in the absolute Hessian is regarded as a feature\nvector of that action dimension when running the clustering algorithm. With the evolutionary clustering algorithm,\nour policy optimization with second-order advantage information algorithm (POSA) is described in Alg.(1). Algorithm 1: Policy Optimization with Second-Order Advantage Information (POSA)\nInput: number of iterations N, number of value iterations Mw, batch size B, number of subspaces K, initial\npolicy parameter θ, initial value and advantage parameters w and µ;\nOutput: Policy optimal parameter θ\nfor each iteration n in [N] do\nCollect a batch of trajectory data {s(i)t , a(i)t , r(i)t }Bi=1 ;\nfor Mθ iterations do\nUpdate θ by one SGD step using PPO with ASDG in Theorem (3);\nend\nfor Mw iterations do\nUpdate w and µ by minimizing ||V w(st) −Rt||22 and ||ˆA(st, at) −Aµ(st, at)||22 in one SGD step ;\nend\nEstimate ˆA(st, at) using V w(st) by GAE (8);\nCalculate the action subspace partition a(k) based on the absolute Hessian |w2(s)w2(s)T | by the evolutionary\nclustering algorithm;\nend 4 Experiments and Results We demonstrate the sample efficiency and the accuracy of ASDG and Alg.(1) in terms of both performance and\nvariance. ASDG is compared with several of the state-of-the-art gradient estimators. • Action dependent factorized baselines (ADFB) [25] assumes fully factorized policy distributions, and\nuses A(s, (¯a(k), a(−k))) as the k-th dimensional baseline. The subspace a(k) is restricted to contain only one\ndimension, which is the special case of ASDG with K = m. • Generalized advantage dependent baselines (GADB) [13, 6] uses a general baseline function c(s, a)\nwhich depends on the action. 
It does not utilize Rao-Blackwellization and is our special case when K = 1. 4.1 Implementation Details Our algorithm is built on top of PPO where the advantage realization value is estimated by GAE.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 18, |
| "total_chunks": 28, |
| "char_count": 2140, |
| "word_count": 341, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5422f7e5-f91f-41c4-a82e-27f7f661c681", |
| "text": "Our code is\navailable at https://github.com/wangbx66/Action-Subspace-Dependent. We use a policy network for PPO and a\nvalue network for GAE that have the same architecture as is in [14, 20]. We utilize a third network which estimates\nthe advantage Aµ(s, a) smoothly by solving Eq. (10) to be our baseline function c(s, a). The network computes\nthe advantage and the Hessian matrix approximator w2(s)w2(s)T by a forward propagation. It uses the wide &\ndeep architecture. For the wide component, the state is mapped to w1(s) and w2(s) through two-layer MLPs, both\nwith size 128 and tanh(·) activation. The deep component Adeep is a three-layer MLPs with size 128 and tanh(·) Our other parameters are consistent with those in [20] except that we reduce the learning rate by ten\ntimes (i.e., 3 · 10−4) for more stable comparisons.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 19, |
| "total_chunks": 28, |
| "char_count": 826, |
| "word_count": 136, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "20112fbb-ee8a-48e6-995a-48787c09d682", |
| "text": "4.2 Synthetic High-Dimensional Action Spaces We design a synthetic environment with a wide range of action space dimensions and explicit action subspace\nstructures to test the performance of Alg.(1) and compare that with previous studies. The environment is a onestep MDP where the reward r(s, a) = PKk=1 aT(k)Mka(k) + ϵ does not depend on the state s (e.g., ϵ is a random\nnoise). In the environment, the action is partitioned into K independent subspaces with a stationary Hessian of the\nadvantage function. Each of the subspace can be regarded as an individual agent. The environment setting satisfies\nboth Assumption (1) and (2).", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 20, |
| "total_chunks": 28, |
| "char_count": 632, |
| "word_count": 103, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "18e39a35-ec43-438a-bebb-757a8ea8dc7e", |
| "text": "[Figure 1: Learning curves for synthetic high-dimensional continuous control tasks, varying from 4 to 40 dimensions: (a) Dim=4, K=2; (b) Dim=10, K=2; (c) Dim=20, K=4; (d) Dim=40, K=4. At high dimensions, our ASDG estimator provides an ideal balance between the accuracy (i.e., GADB) and the efficiency (i.e., ADFB).] Fig. 1 shows the results on the synthetic environment for ASDG with different dimensions m and numbers of subspaces K. The legend ASDG_K stands for our ASDG estimator with a K-block assumption. For environments with relatively low dimensions such as (a) and (b), all of the algorithms converge to the same point because of the simplicity of the settings.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 21, |
| "total_chunks": 28, |
| "char_count": 1117, |
| "word_count": 184, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "86d6d5f4-3804-4298-ba25-ecdddcfc712d", |
| "text": "Both ASDG and ADFB (which incorporates RB) significantly outperform GADB in terms of sample efficiency, while ADFB is marginally better than ASDG. For high-dimensional settings such as (c) and (d), both ASDG and GADB converge to the same point with high accuracy. Meanwhile, ASDG converges significantly faster because of its efficiency.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 22, |
| "total_chunks": 28, |
| "char_count": 345, |
| "word_count": 53, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "dc5ea2dc-5a56-48f6-8f15-b0720ef3b20b", |
| "text": "ADFB, though more efficient, fails to achieve competitive accuracy. We observe an ideal balance between accuracy and efficiency. On the one hand, ASDG trades marginal accuracy for efficiency when efficiency is the bottleneck of training, as in (a) and (b). On the other hand, ASDG trades marginal efficiency for accuracy when accuracy is relatively hard to achieve, as in (c) and (d). ASDG's tradeoff thus combines the merits of both extreme cases. We also demonstrate in (a) that the performance is robust to the assumed K value when accuracy is not the major difficulty. As shown in (a), the performance of ASDG is decided only by its sample efficiency, which increases monotonically with K. However, in complicated environments, an improper selection of K may result in a loss of accuracy. Hence, in general, ASDG performs best when K is set to the right value rather than its maximum. 4.3 OpenAI Gym's MuJoCo Environments We present the results of the proposed POSA algorithm with the ASDG estimator on common benchmark tasks.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 23, |
| "total_chunks": 28, |
| "char_count": 1094, |
| "word_count": 183, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "da33738c-aaa1-4a57-a418-8a7dc3c5e599", |
| "text": "These tasks and experiment settings have been widely studied in the deep reinforcement learning community [5, 7, 25, 13]. We test POSA on several environments with high action dimensions, namely Walker2d, Hopper, HalfCheetah, and Ant, shown in Fig. 2 and Fig. 3. In general, ASDG outperforms ADFB and GADB consistently, and it performs extraordinarily well on HalfCheetah. Empirically, we find the block diagonal assumption (2) for the advantage function is only minimally violated there, and that may be one of the reasons behind its good performance.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 24, |
| "total_chunks": 28, |
| "char_count": 538, |
| "word_count": 83, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6d3c330b-1ce0-4633-b43a-d8e2746547ab", |
| "text": "[Figure 2: Comparison between two baselines (ADFB, GADB) and our ASDG estimator on various OpenAI Gym MuJoCo continuous control tasks, including Hopper-V1 (Dim=3), HalfCheetah-V1 (Dim=6), and Ant-V1 (Dim=8). Our ASDG estimator performs consistently the best across all these tasks.]", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 25, |
| "total_chunks": 28, |
| "char_count": 524, |
| "word_count": 74, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5fededd6-94e9-41bd-b974-32709adfd8d1", |
| "text": "[Figure 3: The choices of the action subspace number K in the Walker2d-V1 environment (ASDG_2 through ASDG_5, ADFB, GADB).] To investigate the choice of K, we test all possible K values in Walker2d. The optimal K value is expected to lie between the extreme cases K = 1 and K = m.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 26, |
| "total_chunks": 28, |
| "char_count": 340, |
| "word_count": 63, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b2976ef3-4206-4b38-b947-9e365de5e48c", |
| "text": "Empirically, we find it effective to conduct a grid search. We consider an automated approach to finding the optimal K value an interesting direction for future work. We propose the action subspace dependent gradient (ASDG) estimator, which combines the Rao-Blackwell theorem and Control Variates theory into a unified framework to cope with high-dimensional action spaces. We present policy optimization with second-order advantage information (POSA), which captures the second-order information of the advantage function via the wide & deep architecture and exploits the information to find the dependency structure for ASDG. ASDG reduces the variance of the original policy gradient estimator while keeping its unbiasedness under weaker assumptions than previous studies [25]. POSA with the ASDG estimator performs well on a variety of environments, including high-dimensional synthetic environments and OpenAI Gym's MuJoCo continuous control tasks. It ideally balances the two extreme cases and demonstrates the merits of both methods.", |
| "paper_id": "1805.03586", |
| "title": "Policy Optimization with Second-Order Advantage Information", |
| "authors": [ |
| "Jiajin Li", |
| "Baoxiang Wang" |
| ], |
| "published_date": "2018-05-09", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.03586v2", |
| "chunk_index": 27, |
| "total_chunks": 28, |
| "char_count": 1034, |
| "word_count": 147, |
| "chunking_strategy": "semantic" |
| } |
| ] |