| [ |
| { |
| "chunk_id": "19f88105-b6e9-4571-88fb-cc9a0b5e5bff", |
| "text": "When Simple Exploration is Sample Efficient: Identifying\nSufficient Conditions for Random Exploration to Yield PAC\nRL Algorithms Yao Liu yaoliu@stanford.edu\nStanford University Emma Brunskill ebrun@cs.stanford.edu\nStanford University\nAbstract\nApr Efficient exploration is one of the key challenges for reinforcement learning (RL) algorithms. Most traditional sample efficiency bounds require strategic exploration. Recently\nmany deep RL algorithms with simple heuristic exploration strategies that have few formal17\nguarantees, achieve surprising success in many domains. These results pose an important question about understanding these exploration strategies such as e-greedy, as well\nas understanding what characterize the difficulty of exploration in MDPs. In this work\nwe propose problem specific sample complexity bounds of Q learning with random walk\nexploration that rely on several structural properties. We also link our theoretical results\nto some empirical benchmark domains, to illustrate if our bound gives polynomial sample[cs.LG]\ncomplexity in these domains and how that is related with the empirical performance. Keywords: Reinforcement learning, Markov decision process, sample complexity of exploration An important challenge for reinforcement learning is to balance exploration and exploitation. There have been many strategic exploration algorithms (Auer and Ortner, 2007; Strehl\net al., 2012; Dann and Brunskill, 2015), yet many of the recent successes in deep reinforcement learning rely on algorithms with simple exploration mechanisms. While some of these\napproaches also require many samples, this still highlights an important question: when is\nfollowed by greedy exploitation can enable a strong efficiency criteria, Probably Approximately Correct (PAC): that on all but a number of sample that scales as a polynomial\nfunction of the domain, the algorithm will take near-optimal actions. 
Random exploration\nfollowed by greedy exploitation is related to popular e-greedy methods: it can be viewed as a particular thresholded decay schedule for e-greedy, in which e is initially set to 1 and then dropped to 0 after a fixed number of steps. This simplification enables us to focus on when random exploration can still be efficient, and there are many domains where having a fixed budget for exploration is reasonable and where our analysis will directly apply. Most prior work on formal analysis of the explore-then-exploit approach (Langford and Zhang, 2008; Kearns and Singh, 2002) focused on strategic exploration during the exploration phase.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 0, |
| "total_chunks": 45, |
| "char_count": 2585, |
| "word_count": 368, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4223ce32-9f1a-4f52-8626-d32564c6e699", |
| "text": "In contrast, to our knowledge our work is the first to consider under what c⃝2018 Yao Liu and Emma Brunskill. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 1, |
| "total_chunks": 45, |
| "char_count": 179, |
| "word_count": 25, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d8f3febe-f247-4121-ae56-836ee86227ad", |
| "text": "conditions random action selection during the exploration phase might still be sufficient to\nenable provably sample efficient reinforcement learning. Some restrictions on the decision process are needed: there exist challenging Markov\ndecision processes where relying on random exploration will require an exponential bound (in\nthe MDP parameters) on the sample complexity, in contrast to the polynomial dependence\nrequired for the algorithm to be PAC. In some such domains, like the combination lock\nsetting(Li, 2012; Whitehead, 2014)), any greedy actions will (for a very long time) cause\nthe agent to undo productive exploration towards finding the optimal policy, and therefore ϵ-\ngreedy (for any ϵ) will be no better and likely worse than random exploration, and therefore\nwill also not have PAC performance. Rather than focusing on new algorithmic contributions, in this paper we seek to explore\nsufficient conditions on the domains that ensure that random exploration then exploitation\nmethods will quickly lead to high performance, as formalized by satisfying the PAC criteria. Our work is related to recent work (Jiang et al., 2016) which considered structural properties\nof Markov decision processes that bound the loss when performing shallow planning: in\ncontrast to their work, our work focused on the structural properties of MDPs that enable\nsimple exploration to quickly enable good performance during learning. As our main contribution, we introduce new structural properties of MDPs, and prove\nthat when these parameters scales with a polynomial function of the domain parameters,\nthen a random explore then exploit approach is PAC. Our key properties are φ(s), a states\nstationary occupancy distribution under random walk, and eigenvalues of a graph Laplacian. 
Though making an assumption about the occupancy distribution under a random walk might seem to be presuming the conclusion, we note that this assumption only applies to the asymptotic, stationary distribution, while our result yields finite-sample bounds. Our result relies on some key results about the convergence of a lazy random walk on a directed graph from Chung (2005). We also show that if a domain exhibits a property we term locally symmetric actions, then it immediately satisfies the desired stationarity criterion. Informally, this means that for any two states there is a symmetric bijection between the actions leading from each to the other.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 2, |
| "total_chunks": 45, |
| "char_count": 2404, |
| "word_count": 368, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fc9946d6-62a4-4567-bca1-b7cce2a92291", |
| "text": "A number of common simulation domains or slight variants of, including grid worlds, 4\nrooms, and Taxi, satisfy this criteria. Following from this property, our work also yields\nsome insights into why certain popular Atari domains have been observed to be feasible\nwith simple e-greedy exploration. Some conditions that are known to enable efficient exploration under more strategic exploration algorithms, such as finite diameter domains, are\nnot sufficient for a random exploration then exploit algorithm to be PAC, and we frame a\nclassic domain, chain, as such an example. Our results also illustrate the difficulty of other\nsimilar \"trapdoor\" domains, including Montezumas Revenge which has been notoriously\nchallenging for many deep RL agents. We also discuss several other properties that have\nbeen proposed to help characterize the learning complexity of MDPs and their relation to\nour proposed criteria. To summarize, our results help to characterize the properties of an environment that\nmake exploration hard or easy, a critical problem in RL. We hope these properties might\nhelp guide practitioners in their algorithm selection, and also advance our understanding\nabout whether and when strategic exploration is needed. When Simple Exploration is Sample Efficient The optimality of the greedy policy in various settings has been previously studied for\nsignificantly more restricted settings. Bastani et al. (2017) prove that a greedy policy can\nachieve the optimal asymptotic regret for a two-armed contextual bandit, which could be\nviewed as a special case of episodic reinforcement learning, as long as the contexts are i.i.d.\nand the distribution of contexts are diverse enough. That implies a case in contextual\nbandit where the greedy strategy is enough to solve the exploration problem. Karush and\nDear (1967) shows that under MDP structures, a greedy strategy is optimal, eliminating\nthe need to plan ahead. 
Our work focuses on the random walk side of explore-greedy and yields a polynomial sample complexity bound under milder assumptions. Similarly, if the Q-functions are initialized extremely optimistically, to O(Vmax / Π_{i=1}^{T}(1 − αi)), where αi is the learning rate and T is the number of samples needed to learn a near-optimal Q function, then greedy-only Q-learning is PAC (Even-Dar and Mansour, 2002). However, such a high optimism value (far higher than any achievable value) results in extremely aggressive exploration, further amplifying the problem of theoretically-motivated optimistic approaches in practice.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 3, |
| "total_chunks": 45, |
| "char_count": 2535, |
| "word_count": 385, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c38fdbda-39ce-4355-a4bc-c2ae09c1d095", |
| "text": "Maillard et al. (2014) propose a notion of hardness for MDPs named as environmental\nnorm. It measures how varied the value function is at the possible next states and on\nthe distribution over next states. They show how this property provides a tighter regret\nbound for UCRL algorithm.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 4, |
| "total_chunks": 45, |
| "char_count": 284, |
| "word_count": 48, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "48ae03cd-3822-4616-89a2-93cc669dd2b2", |
| "text": "In the settings we consider random walk exploration is not\ndriven by any reward/value observation, but purely depends on transition dynamics. Thus\nin this work we mainly consider transition-only parameters. In addition, in contrast to their\nwork, we are focused on how structural properties of the MDP enable explore-greedy to be\nefficient, rather than improving the analysis of strategic exploration algorithms. Our proposed properties, stationary distribution and Laplacian eigenvalues, are related\nto a couple of other domain properties that have been previously considered. The first is\ndiameter. Finite diameter is assumed for several strategic exploration algorithms such as\noptimism under uncertainty approaches (Jaksch et al., 2010) and PAC analysis (Brunskill\nand Li, 2013). However, in the context of simple random exploration, a diameter that is\npolynomial with the MDP parameters is necessary but not sufficient. This is illustrated\nlater in our chain example in which the diameter is finite, because there does exist a policy that could traverse between the start and end state in time linear in the state space,\nbut under random walk the number of samples needed to be likely to reach a later state\nscales exponentially with later states. Our bound use stationary distribution to measure\nthe asymptotic occupancy instead of direct reachability, which is measured by diameter. The second is proto-value functions Mahadevan and Maggioni (2007), which use spectral\nproperties of MDP to design a representation-based policy learning algorithm. Mixing time\nfor MDPs is also a property that is closely related with stationary distribution and our\nbounds. Previous work about mixing time in MDPs (Kearns and Singh, 2002; Brafman\nand Tennenholtz, 2002) aims at designing strategic exploration algorithm and bounding\nthe complexity of it by mixing time. Mixing time for MDPs (Kearns and Singh, 2002) is a\nproperty that is closely related with our bound. 
In contrast, our analysis depends only on how the simple exploration method works, and we bound this variant of mixing time by other basic parameters as well as the stationary distribution and eigenvalues.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 5, |
| "total_chunks": 45, |
| "char_count": 2316, |
| "word_count": 356, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "def3a145-6889-434d-9e6b-b8a900c4a667", |
| "text": "Our work is\nalso related to classic results about cover time in Markov chains. Some bounds (Levin and\nPeres, 2017; Ding et al., 2011) on ϵ-mixing time and relaxation time can also induce a bound\non cover time by stationary distribution and Laplacian eigenvalues, but they all focus on\nreversible chains, which our Theorem 3 does not need. An MDP is a tuple M = {S, A, P, R, γ}, where S is the state space, A is the action space,\nP : S × A × S 7→[0, 1] is the probabilistic transition function, and R : S × A 7→[0, Rmax]\nis the reward function. We use S and A to denote the size of S and A. The value V π(s)\ndefines a discounted expected reward of running policy π beginning with state s. Sample\ncomplexity (Kakade et al., 2003), a way to quantify the performance of a reinforcementlearning algorithm, is defined as the total number of steps where algorithm execute a suboptimal policy i.e. An algorithm is PAC-MDP if its sample complexity\nis bounded by a polynomial function about S, A, 1ϵ, 1δ, and 1−γ1 with high probability. Previous work (Even-Dar and Mansour, 2003) that studies the polynomial convergence\ntime of Q learning by viewing exploration strategy as a black box. They characterize the\nefficiency of exploration by covering length and bound the convergence time by it. Definition 1 The covering length, denoted by L, is the number of time steps we need to\nvisit all state-action pairs at least once with probability at least 1/2, starting from any (s, a). Theorem 2 (Theorem 4 from Even-Dar and Mansour (2003)) Let QT be the value function\nafter T step Q learning update, with learning rate αt(s, a) = 1/(#(s, a))ω. L is the covering\nlength of the exploration policy. 
Then with probability at least 1 − δ, ∥QT − Q∗∥∞ ≤ ϵ if T ≥ T0 = Θ̃((L^(1+3ω) V_max^2 / ((1 − γ)ϵ)^2)^(1/ω) + (L/(1 − γ))^(1/(1−ω))). This theorem implies that, if the covering length L of the exploration policy is polynomial in all parameters, we can learn a near-optimal Q function in polynomial time, and then achieve a near-optimal policy by acting greedily with respect to this Q function.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 6, |
| "total_chunks": 45, |
| "char_count": 2053, |
| "word_count": 373, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c84bf16c-99b7-4978-8347-fc3ea069b00a", |
| "text": "Thus the\ncovering length would be a good measure for us to evaluate the exploration quality of a\npolicy, and it allows us to focus on exploration. In this work we consider Q learning combined with random walk exploration policy. We are interested in the minimum number of\nsteps we need before switching to near-optimal greedy exploitation to guarantee a sufficient\nexploration. We intend to get a problem-specific bound by structural parameters of an\nMDP, to characterize when the exploration problem of an MDP is simple. Covering Length Bound In this section, we will bound the covering length by the stationary distribution over states\nfor random walk and Laplacian eigenvalues. The stationary distribution characterizes the\nasymptotic occupancy of states, and reflects asymptotically how good exploration will be. The smallest non-trivial eigenvalue of the Laplacian, is bounded by a geometric property When Simple Exploration is Sample Efficient named the Cheeger constant that intuitively measures the bottleneck of stationary random\nwalk flow. These two parameters are both related to the asymptotic behavior of random\nwalk. One natural question is that if we are given that asymptotically random walk can\nexplore well, can we achieve polynomial sample complexity bound for finite sample exploration, and we show that through the following theorem. Given the random walk policy πRW , we have a transition matrix under this policy, PRWπ ,\nand we can view it as a transition matrix for a directed weighted graph, denoted as G(P RWπ ). If P RWπ (u, v) > 0 we say there is an edge from u to v with weight PRWπ (u, v) in G. For the\nrest of this section, we use G and P to refer to this graph and its transition matrix.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 7, |
| "total_chunks": 45, |
| "char_count": 1719, |
| "word_count": 288, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e55112d5-63a1-4960-abcb-60c209bf872f", |
| "text": "It\nis known that for the transition matrix P, there is a unique left eigenvector φ such that\nφ(s) > 0 for any s and φP = φ, ∥φ∥1 = 1. This eigenvector φ is also the stationary state\ndistribution under the random walk policy. We follow the definition of graph Laplacian for\na directed graph G proposed by Chung (2005): + Φ−1/2P ∗Φ1/2 L = I −Φ1/2PΦ−1/2 ,\nwhere Φ is a diagonal matrix with entries Φ(s, s) = φ(s). Usually the graph Laplacian is\nonly defined on undirected graph, and the intuition in (Chung, 2005) is that take the average\nof transition matrix P and its transpose to define an undirected graph, then normalized\nthe transition matrix, to introduce the Laplacian for weighted directed graph. The smallest\neigenvalue of Laplacian L is zero. Let λ be the smallest non-zero eigenvalue. In the following\ntheorem, we will bound the covering time of random walk policy by the eigenvalues of L\nand the stationary distribution φ. Theorem 3 The covering length of a irreducible MDP under random walk policy is at most 2 1\n8A ln(4SA) 2 ln 2/ min φ(s) / ln( + 1 X s 2 −λ) φ(s), where φ is the stationary distribution vector of random walk and λ is the smallest non-zero\neigenvalue of the Laplacian of the directed graph induced by random walk over MDP. The\n∗Φ1/2Laplacian is defined by Chung (2005): L = I−Φ1/2PΦ−1/2+Φ−1/2P , where Φ is a diagonal 2\nmatrix with entries Φ(s, s) = φ(s) and P is the transition matrix P(s, s′) = Pa AT(s′|s,1 a). It is known that in reversible Markov chains mixing time can be bounded by 1 1 ϵ mins φ(s) 1−λ∗\n(Levin and Peres, 2017), where λ∗is the largest absolute value of eigenvalue of P, except 1. Note that this λ is the second largest eigenvalue of P instead of the Laplacian, which is a\nnormalized version of I −P, thus the relationshio between λ∗and the second smallest eigenvalue of Laplacian, which is used in our paper, can be bounded. 
This mixing time bound gives us a cover time bound of the same order of magnitude as our Theorem 3, in terms of S, λ, and min_s φ(s).", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 8, |
| "total_chunks": 45, |
| "char_count": 2018, |
| "word_count": 374, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "638612b5-bb05-4f5e-8d87-252f388d931d", |
| "text": "Ding et al. (2011) also shows a similar result. Theorem 3\nremove the reversible assumption by considering the lazy random walk in directed graph\nand linking it to the cover time. This bound immediately implies a PAC RL bound if 1 and 1 is polynomial. 1 This λ mins φ(s)\nshows that the Laplacian eigenvalue λ and the stationary distribution are important factors 1. if φmin = 0 then this will be infinite, but this only occurs if the MDP is reducible. In that case, only the\nstrongly connected component we are in is really matters for our exploration.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 9, |
| "total_chunks": 45, |
| "char_count": 551, |
| "word_count": 99, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fe189b48-4c87-4e16-a891-1db25a62fafe", |
| "text": "It is still not clear for what kind of MDPs these terms are polynomial. We\nwill show two bounds for 1/λ and 1/φmin, which may provide more intuitive insight. Eigenvalue λ: In graph theory, the second smallest eigenvalue of the Laplacian could\nbe bounded by the Cheeger constant (also known as conductance). This will give us a\nmore intuitive and geometric view of what λ actually means for an MDP and when it is\nsmall. We define a flow over the graph induced by the stationary distribution of random\nwalk as: F(u, v) = φ(u)P(u, v). Then we write: F(∂U) = Pu∈U,v /∈U F(u, v), and F(U) =\nThe Cheeger constantPu∈U φ(u). The Cheeger constant is: h = infU min{F(U),F(U)}.F(∂U)\nmeasures the relatively smallest bottleneck in the flow induced by stationary distribution. The Cheeger bound of λ says that h ≥λ ≥h2 , which means 1/λ is polynomial if and only 2\nif 1/h is polynomial. Stationary distribution: We know that for an (weighted) undirected graph, the stad(s)\ntionary distribution on state s is O( Ps d(s)) where d(s) is the degree of s. Then we define\na property of MDPs: Definition 4 An MDP has locally symmetric actions if for any s, s′, there is a bijections f\nbetween action sets {a|P(s′|s, a) > 0} and {a′|P(s|s′, a′) > 0} s.t. P(s′|s, a) = P(s|s′, f(a)). If a MDP has locally symmetric actions, we can construct an undirected graph such that\nthe random walk on the MDP is equivalent with a random walk on this graph. The weight\nbetween two state in this graph is defined as: w(u, v) = Pa∈A P(v|u, a) = Pa∈A P(u|v, a). One can verify that the random walk over the MDP has the same transition probability\nwith random walk on this graph.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 10, |
| "total_chunks": 45, |
| "char_count": 1641, |
| "word_count": 299, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3d18b61f-b31f-4e1f-a3f4-8757b212460f", |
| "text": "Thus they also have the same stationary distribution,\nwhich is polynomial of S, A, computed from the undirected graph. Figure 1: Left: two room domain. Right: the stationary distribution heat map As a complementary, in appendix we list properties that give PAC RL bounds in certain\ncases where exploration should be easy intuitively, but are not covered by the bound in this\nsection: When the actions behave similarly, or when all states are densely connected. Theoretical Bounds and Links to Empirical Results Our investigation was inspired by the recent empirical successes of deep reinforcement learning which relied on simple exploration mechanisms, and we hope that our theoretical analysis\nwill both predict the hardness of domains that have been specifically constructed to require\nstrategic exploration, as well add further insight into the hardness of other domains. When Simple Exploration is Sample Efficient section, we illustrate how our approach can explain some of the ease of exploration in some\npopular domains, as well as the hardness of exploration in others.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 11, |
| "total_chunks": 45, |
| "char_count": 1078, |
| "word_count": 168, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "43c56528-0d57-4a68-8c6b-276746110a52", |
| "text": "Grid World: Grid world is a group of navigation domains where we need to control an\nagent to walk in a grid world, collect reward, avoid walls and holes. Most grid worlds with\ndeterministic or other typical action settings have locally symmetric actions. Under this\ncondition, random walk over the grid world is equivalent to a random walk on an undirected\ngraph. Thus 1/φmin = O(SA) and it is a polynomial function of MDP parameters.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 12, |
| "total_chunks": 45, |
| "char_count": 434, |
| "word_count": 75, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "66513fbe-0e6b-47a0-b313-5780a55afb2c", |
| "text": "Taxi (Dietterich, 2000): Taxi is a 5x5 gridworld. A passenger starts at one of the 4\nlocations marked in a grid world, and its destination is randomly chosen from one of the\n4 locations. The taxi starts randomly on any square, and the goal is to pickup or dropoff\nthe passenger. This domain, as well as the two room example we discussed previously,\nare widely used testing domains in the hierarchical RL literature, since options/modular\npolicy are expected to achieve more efficient exploration than primitive actions.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 13, |
| "total_chunks": 45, |
| "char_count": 519, |
| "word_count": 86, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d4992a98-7da7-41eb-a26c-9a90018e808b", |
| "text": "It is also\nequivalent with undirected graphs following from the property of locally symmetric actions,\nif picking up/dropping offare not invertible actions. In that case, our bounds implies that\nrandom walk could learn the optimal value function of these domains efficiently. Pong: Pong is one of the Atari games that is relatively easy for DQN with e-greedy\n(Mnih et al., 2013). In this domain, one plays pong with a computer player by moving the\npadder in y axis, hitting the ball back. Interestingly, we can approximately view Pong as\nsatisfying the property of locally symmetric actions by considering a state abstraction. In\nPong, the angle of reflection is a bijection function of the hitting position on the paddle,\nnot of angle of incidence, which implies that we could achieve any possible reflection angle\nin the possible angle domain by proper action. Consider a game state abstraction that\nconsists only of the last ball incidence angle θ to the agent's paddle. That means, we\nview all frames after the ball leaves paddle until another hitting as the same state. This\nmakes several notable simplifications, ignoring: the ball's velocity, boundary2. Since we\nare playing in a boundless field, it is reasonable to view balls with different y coordinates\nof hitting position as the same state. For simplicity we also assume the agent's opponent\nexecutes a deterministic policy that only depends on incidence angle, so that the mapping\nfrom incidence angle to reflection angle is a bijection, denoted as f. Under these settings, we can show Pong has the locally symmetric actions. For any state\nθ1, if we execute an action a1 so that the reflection angle is θ 1,′ then the next state, which is\nthe angle after the computer opponent takes an action would be θ2 = f(θ 1).′ For this state,\nthere exist an action a2 such that the reflection angle is f−1(θ1). 
Since the mapping from action to reflection angle and f are both bijections, the mapping between a1 and a2 is also a bijection.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 14, |
| "total_chunks": 45, |
| "char_count": 1987, |
| "word_count": 335, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ea88db4f-50ce-4285-8012-feb97025421b", |
| "text": "Thus we could say random walk in a proper abstracted state space of Pong is\nequivalent with random walk on an undirected graph, and then yields polynomial sample\ncomplexity. That may intuitively explain the success of e-greedy in this domain. Chain MDP: The chain MDP has been previously introduced to motivated the need\nfor strategic exploration(Li, 2012; Whitehead, 2014). The MDP has n+1 states, the start\nstate is the leftmost state s0, and at each state si there are 2 deterministic actions, one is\ngoing right to si+1 (except the right end states sn which has a self loop action) and the other\nis going back to s0. Q learning with e-greedy or random walk does poorly in this example. Actually the boundary case could be treated by mirror reflection transformation. We would view the\nwhole game as a mirror version of playing in the extended space, after hitting the boundary.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 15, |
| "total_chunks": 45, |
| "char_count": 881, |
| "word_count": 151, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "789f21ff-649d-4189-8541-1d0076122e04", |
| "text": "(a) Chain MDP (b) Grid World (c) Taxi (Dietterich,\n2000) Figure 2: Domains with different order of stationary distribution It takes Θ(2n) samples in expectation to visit the right end state for one time, resulting in\nan exponential sample complexity. That matches what we can learn from our bound: The\nstationary distribution of random walk on state si is Θ( 2i1 ), and 1/φmin is Θ(2S).", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 16, |
| "total_chunks": 45, |
| "char_count": 386, |
| "word_count": 66, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a8dfd6a4-7f08-4806-bbee-7a55d4370105", |
| "text": "Montezuma's Revenge: Montezuma's Revenge is a relatively hard game among different Atari 2600 games for DQN with e-greedy exploration(Mnih et al., 2013). This game\nrequires the player to navigate the explorer through several rooms. The explorer may die on\nthe way of traps are triggered. We note that Montezuma's Revenge has a mechanism which\nbrings one back to the start point after death.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 17, |
| "total_chunks": 45, |
| "char_count": 390, |
| "word_count": 63, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "435d6afd-c087-453a-b584-c06320669899", |
| "text": "At a high level, that \"trapdoor\" structure is\ncaptured by the chain MDP example, and will result in an exponentially small stationary\ndistribution of the end point. Game domains, even at a high level, may have more than\none chain, but φmin could still be exponential in the maximum chain length. Note that\nsome games like Pong or Enduro also have the restart mechanism, but that restart point is\ndistributed more uniformly over the whole state space. This breaks the chain property and\nwill not result in an exponentially small stationary distribution.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 18, |
| "total_chunks": 45, |
| "char_count": 552, |
| "word_count": 92, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8fab08a6-2226-4dd7-9620-e8c87ecccdfd", |
| "text": "In this paper we present several structural properties of MDPs that give upper bound on the\nsample complexity of Q learning with random exploration followed by exploitation. We also\nlink these properties to some conceptual testing domains as well as empirical benchmark\ndomains, towards understanding the recent empirical success. We hope the knowledge\nof these properties might help guide practitioners in selecting exploration strategy, and\nunderstanding whether and when strategic exploration is necessary. This work was supported in part by Siemens and a NSF CAREER grant. Peter Auer and Ronald Ortner.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 19, |
| "total_chunks": 45, |
| "char_count": 606, |
| "word_count": 91, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "07024429-4c61-4cae-8dfe-7104a0f1929e", |
| "text": "Logarithmic online regret bounds for undiscounted reinforcement learning. In Advances in Neural Information Processing Systems, pages 49–56, When Simple Exploration is Sample Efficient Ronen I Brafman and Moshe Tennenholtz. R-max-a general polynomial time algorithm\nfor near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):\n213–231, 2002. Emma Brunskill and Lihong Li.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 20, |
| "total_chunks": 45, |
| "char_count": 397, |
| "word_count": 51, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "43b61e24-0e9a-4ab6-9c8d-399ee8f6afc6", |
| "text": "Sample complexity of multi-task reinforcement learning. Laplacians and the cheeger inequality for directed graphs. Annals of Combinatorics, 9(1):1–19, 2005. The diameter and laplacian eigenvalues of directed graphs. the electronic\njournal of combinatorics, 13(1):N4, 2006. Christoph Dann and Emma Brunskill.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 21, |
| "total_chunks": 45, |
| "char_count": 307, |
| "word_count": 39, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "59c8da3a-b69d-4a0e-b80a-4ef3b3305334", |
| "text": "Sample complexity of episodic fixed-horizon reinforcement learning. In Advances in Neural Information Processing Systems, pages 2818–\n2826, 2015. Hierarchical reinforcement learning with the maxq value function\ndecomposition. Res.(JAIR), 13:227–303, 2000. Jian Ding, James R Lee, and Yuval Peres. Cover times, blanket times, and majorizing measures. In Proceedings of the forty-third annual ACM symposium on Theory of computing,\npages 61–70. Eyal Even-Dar and Yishay Mansour. Convergence of optimistic and incremental q-learning. In Advances in neural information processing systems, pages 1499–1506, 2002. Eyal Even-Dar and Yishay Mansour. Learning rates for q-learning. Journal of Machine\nLearning Research, 5(Dec):1–25, 2003. Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 22, |
| "total_chunks": 45, |
| "char_count": 829, |
| "word_count": 109, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "051733b8-25ff-416b-8118-621ddf2d4ec6", |
| "text": "Journal of Machine Learning Research, 11(Apr):1563–1600, 2010. Nan Jiang, Satinder Singh, and Ambuj Tewari.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 23, |
| "total_chunks": 45, |
| "char_count": 107, |
| "word_count": 14, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8c9078a0-d4cb-44a1-98d5-98b33d73dbfe", |
| "text": "On structural properties of mdps that\nbound loss due to shallow planning. In Proceedings of the Twenty-Fifth International\nJoint Conference on Artificial Intelligence, pages 1640–1647. Sham Machandranath Kakade et al. On the sample complexity of reinforcement learning. PhD thesis, University of London, 2003. William Karush and RE Dear. Optimal strategy for item presentation in a learning process. Management Science, 13(11):773–785, 1967. Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial\ntime. Machine Learning, 49(2-3):209–232, 2002. John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with\nside information. In Advances in neural information processing systems, pages 817–824,\n2008. David A Levin and Yuval Peres.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 24, |
| "total_chunks": 45, |
| "char_count": 783, |
| "word_count": 106, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5b02e31f-7195-45f3-a9cd-c79a308ffd41", |
| "text": "Markov chains and mixing times, volume 107. American\nMathematical Soc., 2017. A unifying framework for computational reinforcement learning theory. Rutgers\nThe State University of New Jersey-New Brunswick, 2009. Sample complexity bounds of exploration. Reinforcement Learning, pages 175–\n204, 2012. Sridhar Mahadevan and Mauro Maggioni.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 25, |
| "total_chunks": 45, |
| "char_count": 336, |
| "word_count": 44, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "22a19dd5-f188-4cb8-917e-f7c72f0774d3", |
| "text": "Proto-value functions: A laplacian framework\nfor learning representation and control in markov decision processes. Journal of Machine\nLearning Research, 8(Oct):2169–2231, 2007. Odalric-Ambrym Maillard, Timothy A Mann, and Shie Mannor. How hard is my mdp?\" the\ndistribution-norm to the rescue\". In Advances in Neural Information Processing Systems,\npages 1835–1843, 2014. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou,\nDaan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. Alexander L Strehl, Lihong Li, and Michael L Littman.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 26, |
| "total_chunks": 45, |
| "char_count": 592, |
| "word_count": 79, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4acc4377-f9cd-4749-a657-a79dbf38d19e", |
| "text": "Incremental model-based learners Complexity and cooperation in q-learning. In Maching Learning:\nProceedings of the Eighth International Workshop, pages 363–367, 2014. When Simple Exploration is Sample Efficient For completeness and clarity we include some definitions and lemmas which is helpful in\nour proof, and included in the main body.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 27, |
| "total_chunks": 45, |
| "char_count": 340, |
| "word_count": 48, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4e40c112-f1e8-4515-8258-ce305c9fb626", |
| "text": "Definition 1 The covering length, denoted by L, is the number of time steps we need to\nvisit all state-action pairs at least once with probability at least 1/2, starting from any pair. Theorem 2 (Theorem 4 from Even-Dar and Mansour (2003)) Let QT be the value function\nafter T step Q learning update, with learning rate αt(s, a) = 1/(#(s, a))ω. Then with\nprobability at least 1 −δ, we have ∥QT −Q∗∥∞≤ϵ, given that 1 1 \n1−ω L1+3ωV max2 ln SAVmaxδ(1−γ)ϵ ω L Vmax\nT ≥T0 = Θ + ln (1 −γ)2ϵ2 1 −γ ϵ , where L is the covering length of the exploration policy we use in Q learning Diameter (Auer and Ortner, 2007) is a widely used parameter to measure the reachability\nof the MDP. Intuitively it means the longest expected time to reach one state from the other. Definition 5 (Diameter) D = max min E inf t ∈N : st = s′ |s0 = s, π\ns,s′ π", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 28, |
| "total_chunks": 45, |
| "char_count": 847, |
| "word_count": 173, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "17b92193-d365-4e19-93a3-8606b78e6496", |
| "text": "The following lemma allows us to only focus on how to cover all states in the later\nanalysis. Lemma 6 If we visit a state more than A ln(4SA) times, a random walk policy will sample\nevery action at least once with probability at least 1 − 1 . 4S For completeness, we also include a lemma about relation of Q value accuracy and its\ngreedy policy performance, which is widely used in Q learning literature. Lemma 7 Let π be the greedy policy of an action value function Q. If ∥Q∗−Q∥∞≤ϵ,\nthen ∥V π∗−V π∥∞≤ 1−γ2ϵ . In this section, we include the full proofs of the three main theorems and lemmas in the\nmain body of paper. For the completeness and convenience of reading, we also include the\nlemmas and proofs that are stated in the main body.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 29, |
| "total_chunks": 45, |
| "char_count": 740, |
| "word_count": 140, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "22deaae3-40b8-4dac-98d3-1373d1a50050", |
| "text": "B.1 Laplacian Eigenvalues and Stationary Distribution To prove theorem 3, we introduce a useful lemma from Chung (2006) which bounds the\nconvergence of lazy random walk (random walk with additional 0.5 probability that will\nstay in the same state) over a directed graph G. Then we will relate the lazy random walk\ntransition matrix with the one-way commute time of random walk over G. Lemma 8 Suppose a strongly connected weighted directed graph has transition matrix P,\nand a lazy random walk transition P = (I+P) . For any state u, v and k > 0, the normalized 2\nmatrix M = Φ1/2PΦ−1/2 satisfies: Mk(u, v) − pφ(u)φ(v) ≤(1 −λ/2)k/2 This is part of the result in the Theorem 1 from Chung (2006). q φ(v)\nCorollary 9 Pk(u, v) ≥φ(v) − φ(u)(1 −λ/2)k/2 Proof We have that\nMk(u, v) ≥ pφ(u)φ(v) −(1 −λ/2)k/2 Since Mk = Φ1/2PkΦ−1/2. s 1 φ(v)\nPk(u, v) = Mk(u, v)pφ(v) ≥φ(v) − φ(u)(1 −λ/2)k/2 p φ(u) Now we could bound Pk(u, v) by the graph Laplacian properties. The next lemma shows\nthat Pk(u, v) is a lower bound of the probability of reaching v from u under random walk\nover G. Lemma 10 Suppose a strongly connected weighted directed graph has transition matrix P,\nand a lazy random walk transition P = (I+P) . The the probability of reaching v from u 2\nwithin k steps by original random walk will be at least Pk(u, v).", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 30, |
| "total_chunks": 45, |
| "char_count": 1310, |
| "word_count": 241, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "22f922b6-abef-42c3-bd6f-fc351563e455", |
| "text": "Proof For simplicity of discussion, we firstly assume that Pi,i = 0 for any i, which means\nthere is no self loop in the original random walk. At the end of proof, we will show that\nhow this proof still works for the case with self loop. Define F(u, v; k) as the probability of reaching v from u within k steps by original\nrandom walk. Let l = (s0 = u, s1, ..., st = v) be a path from u to v with length 0 < t ≤k,\nand for all i < t, si ̸= v. We call this kind of path first-visit path. Then we could compute\nF(u, v; k) by sum the probability over all first-visit path. Let Luv be the set of all first-visit\npaths from u to v with length 0 < t ≤k. t−1\nF(u, v; k) = X Pr(l|r.w.) = X Y P(si, si+1),\nl∈Luv l∈Luv i=0 where the sum is over all distinct first-visit path with length less than k. When Simple Exploration is Sample Efficient Note that Pk(u, v) is the probability of reaching u from v at kth step by lazy random\nwalk. Let L be the set of paths with length of k in the lazy random walk graph, whose\ntransition weight matrix is P but not P. k−1\nPk(u, v) = X Pr(bl|lazy r.w.) = X Y P(bsi, bsi+1)\nbl∈L bl∈L i=0\nbl in L may not be a first-visit path, since there are lazy steps as well as extra steps after\nfirst visit. Now we will divide bl into three disjoint part and extract the first-visit part in bl. Firstly we find the first visit of v in bl, and let bluv be all the steps in bl from u to the first visit\nto v without all lazy steps.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 31, |
| "total_chunks": 45, |
| "char_count": 1442, |
| "word_count": 305, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "20a7d72c-8b34-40a5-b76e-400ee70188ab", |
| "text": "Since the lazy steps are self loop, bluv is still a valid path. Let\nthe length of bluv be t ≤k, and the number of all lazy steps in path bl be i(bl). Then the rest\nsteps in bl is a path from v to v with length of k −t −i. Let this path be blvv. Note that for\nall bl, bluv's are first-visit paths with length no greater than k, and they cover all first-visit\npaths with length no greater than k. blvv's are a valid paths from v to v with length k −t−i,\nand they cover all paths from v to v with length k −t −i. Now the problem is there might be more than one path bl with the same bluv. We need\nto prove that Pk(u, v) does not count it more than one, which means for these bl with the\nsame bluv,\nX Pr(bl|lazy random walk) ≤Pr(bluv|random walk)\nTo prove it, let L(bluv) be the set of all bl with the same bluv: k−t\nX Pr(bl|lazy r.w.) = X X Pr(bl|lazy r.w.)\nbl∈L(bluv) i=0 bl s.t. i(bl) = i\nk−t\n= X X 2k Pr(bluv|r.w.)Pr(blvv|r.w.)\ni=0 bl s.t. i(bl) = i\nk−t\n= X Pr(bluv|r.w.) X 2k Pr(blvv|r.w.)\ni=0 bl s.t. i(bl) = i\nk−t\n1 k\n= X 2k i Pr(bluv|r.w.) X Pr(blvv|r.w.)\ni=0 blvv, |blvv|=k−t−i\nk−t\n1 k\n= X 2k i Pr(bluv|r.w.)P k−t−i(v, v)\ni=0\nk−t\n1 k\n≤ X 2k i Pr(bluv|r.w.)\ni=0\n≤ Pr(blvv|r.w.)\nBy dividing of bl according to the value of i, we have the first steps. The second step follows\nfrom dividing the path bl into three parts: bluv,blvv, and the self-loop part. 1 P(s,s′)\nstep (s, s′) in bl, P(s, s′) = 2 for lazy self-loop steps and P(s, s′) = 2 for the other steps. The third step follows from that Pr(bluv|r.w.) is a constant since bluv is fixed. For a fixed i\nand fixed blvv, there is ki different bl, since there is ki possible combinations of lazy steps. By taking the some over these lazy steps combinations with a fixed blvv, we have the fourth\nstep. The fifth step follows from the fact that if we take sum of probability over all possible\nk −t −i steps path from v to v, then that is the probability of visiting v from v at k −t −i\nsteps. 
Since it is a valid probability, it is no greater than 1 and yields the sixth step.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 32, |
| "total_chunks": 45, |
| "char_count": 2027, |
| "word_count": 414, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3e74daee-b3cf-496f-a88f-706538facc21", |
| "text": "By\nsubstituting the result above into the expression of F(u, v; k), we have that: Pk(u, v) = X Pr(bl|lazy r.w.) = X X Pr(bl|lazy r.w.) ≤ X Pr(bluv|r.w.)\nbl∈L bluv∈Luv bl∈L(bluv) bluv∈Luv The last line is exactly F(u, v; k), completing the proof. Now consider the case that there exist self loops in the original transition matrix P. In that case, we can split the self loops in P from the self loops in I. For example, if\nthere is a path in lazy random walk bl = (bs0, . . . , bsi = s, bsi+1 = s, bsk). In the original path\nPr(bsi+1 = s|bsi = s) = (P(s, s) + 1)/2. We can split this path into two exactly same path:\nbl1 and bl2. In bl1, Pr(bsi+1 = s|bsi = s) = P(s, s)/2, and this transition step is part of the\nsub-path bluv. In bl2, Pr(bsi+1 = s|bsi = s) = 1/2, and this transition step is part of the lazy\nsteps. This decomposition does not change the probability under lazy random walk since\nPr(bl|lazy r.w.) = Pr(bl1|lazy r.w.) + Pr(bl2|lazy r.w.). Thus the analysis for no self loop case\nworks for bl1 and bl2, and we finish the proof for all transition matrix P cases. Combining this result with corollary 9, we immediately have the following result: Corollary 11 For any two states u, v, the probability of reaching v from u within k steps\nq φ(v)\nis at least φ(v) − φ(u)(1 −λ/2)k/2. By setting the time steps k large enough, we could lower bound the one-way commute\nprobability by the stationary distribution:", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 33, |
| "total_chunks": 45, |
| "char_count": 1417, |
| "word_count": 270, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "66816a05-7364-4c7c-bc92-171c381c0c8f", |
| "text": "Corollary 12 For any two state u, v, the probability of reaching v from u within k steps\n2 ln(2/φmin)\nis at least φ(v)/2, for any k ≥k0 = 2 + 1, where φmin is minx∈S φ(x).\nln( 2−λ )\n$ 2 ln 2/√ φ(u)φ(v) %\nProof By substitute k with 2 in corollary 11, we have that the probability\nln( 2−λ)\n2 ln 2/√ φ(u)φ(v)\nis bounded by φ(v)/2. Since pφ(u)φ(v) > φmin, k ≥k0 ≥ 2 .\nln( 2−λ)\nNow we need a high probability bound for the one way commute time k between two states. Corollary 13 For any two state u, v, we can visit v from u at least A ln(4SA) time with\nprobability 1 − 1 , within 8A ln(4SA)k0 steps. 4S φ(v) 2 ln(2/φmin)\nProof We know that for k0 = 2 + 1 steps, we can visit v with probability at least\nln( 2−λ)\nφ(v)/2. This is a Bernoulli trial with success probability at least φ(v)/2. When Simple Exploration is Sample Efficient different u, they are all Bernoulli trials with a same lower bound of success probability and\n4(A ln(4SA)+ln 4S)\nk. By lemma 56 in Li (2009), if we do trials, we will have A ln (4SA) φ(v)\nsuccesses with probability at least 1 −1/4S. We can do such number of trials by no more\n8A ln(4SA)k0than time steps. φ(v) Theorem 3 (Restated) The covering length of a irreducible MDP under random walk policy\nis at most\n! 2 ln (2/ mins φ(s)) 1 8A ln(4SA) + 1 X ln( 2 φ(s), 2−λ) s where φ is the stationary distribution vector of random walk and λ is the smallest non-zero\neigenvalue of the Laplacian of graph induced by random walk. Proof Firstly, by combining corollary 13 and lemma 6, we have that with probability\n8A ln(4SA)k01 −1/2S, we can visit every action in state v within , starting from any state. φ(v)\nApplying this for every state v, we have that with probability at least 1/2, we can cover\nevery state action pair within 8A ln(4SA)k0 Ps φ(s)1 steps. This bound immediately implies a sufficient condition of a PAC RL bound as the next\ncorollary states. 
Corollary 14 For any irreducible MDP M, let L be the Laplacian of the graph induced by\nrandom walk over M, λ be the smallest non-zero eigenvalue of L, and φ(s) be the stationary\ndistribution over states by random walk. If:\n1. 1 is a polynomial function of the MDP parameters, and λ\n2. 1 is a polynomial function of the MDP parameters, mins φ(s)\nthen Q learning with random walk exploration is a PAC RL algorithm.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 34, |
| "total_chunks": 45, |
| "char_count": 2294, |
| "word_count": 442, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "48e56068-b066-49f6-8412-7ab2574847d1", |
| "text": "Proof Since 1 −1 ≤ln(x), we have that x 1 1 1 2\n= ≤ =\nln( 2−λ)2 ln( 1−λ/2)1 1 −(1 −λ/2) λ Since 1 and 1 is polynomial with MDP parameters, we have that L, as well as T λ mins φ(s)\nin theorem 2 are also polynomial. Thus we achieve near optimal policy after polynomial\nnumber of mistakes if we switch to greedy policy of the learned Q function after T steps. Other Structural Properties that Bound Covering Length In the proceeding sections, we have looked at problem specific bounds for exploration that\ndepends on stationary distribution and Laplacian eigenvalue. Yet, there are MDPs that are\neasy to explore but not covered by this bound. While covering all these cases is beyond the\nobjective of this work, we cover two classes of MDPs where exploration is intuitively easy. One natural class of MDPs that exploration is easy for random walk are those where different\nactions at the same state have similar distribution over the next states. In that case, random\nwalk could easily cover all the next states and may result in a very similar behavior with the\nbest exploration policy. We capture this class of MDPs by the property action variation,\nwhich was introduced by Jiang et al. (2016) to bound the loss of shallow planning. Definition 15 (Action Variation) 3", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 35, |
| "total_chunks": 45, |
| "char_count": 1266, |
| "word_count": 225, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8dd09493-18cc-4eca-86aa-2e05ec6565e4", |
| "text": "X P(·|s, a′) δP = max max P(·|s, a) −1\ns a A\na′ 1 We need to introduce some useful lemmas before we prove the main theorem in this\nsection. Firstly we will define commonality between two probability distribution and a\nelementary fact of commonality, then include a key lemma from Jiang et al. (2016) for\ncompleteness. Definition 16 Given two vectors p, q of the same dimension, define comm(p,q) as the\ncommonality vector of p and q, with entries comm(s;p,q) = min{p(s), q(s)}. Proposition 17\n∥comm(p, q)∥1 = 1 −∥p −q∥1/2 Proposition 18 (lemma 1 in Jiang et al. (2016)) For any stochastic vector p, q and transition matrix P1, P2 ∥comm(pT P1, qT P2)∥1 ≥∥comm(comm(p, q)T P1, comm(p, q)T P2)∥1 We also need the next helping lemma which is widely used in MDP approximation\nanalysis: Proposition 19 (lemma 2 in Jiang et al. (2016)) Given stochastic vectors p, q, and a real\nvector v with the same dimension, |pT v −qT v| ≤∥p −q∥1 maxs,s′ |v(s) −v(s′)|/2 Lemma 20 Let p and q be two stochastic vectors over S, π be any policy and πRW be the\nrandom walk policy. ∥comm(pT P π, qT P πRW )∥1 ≥(1 −δP /2)∥comm(p, q)∥1 ∥comm(pT P π, qT P πRW )∥1 ≥ ∥comm(comm(p, q)T P π, comm(p, q)T P πRW )∥1 (1)\n= ∥comm(p, q)∥1∥comm(zT P π, zT P πRW )∥1 (2)\n= ∥comm(p, q)∥1(1 −∥zT (P π −P πRW )∥1/2) (3)\n≥ ∥comm(p, q)∥1(1 −δP /2) (4) It is slightly different with the action variation defined by Jiang et al. (2016). Their definition of action\nvariation consider the maximum l1 distance between two actions' transition vectors. When Simple Exploration is Sample Efficient", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 36, |
| "total_chunks": 45, |
| "char_count": 1545, |
| "word_count": 283, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4ec57e18-216c-4716-993f-f90367a39bae", |
| "text": "The first step use proposition 18. z is a normalized vector of comm(p, q). So the second\nstep follows from scaling. The third step follows proposition 17.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 37, |
| "total_chunks": 45, |
| "char_count": 154, |
| "word_count": 27, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "aed87c89-3092-4f6a-a9d7-3e7cb281a564", |
| "text": "Note that l1 norm each\nrow of P π −P πRW is bounded by δ. The last step follows from the fact that l1 norm is a\nconvex function. The following theorem bounds the covering length in the case that either the actions\nhave almost identical transition or the diameter is small, which implies that the necessary\nplanning horizon is short. Theorem 21 For an MDP with finite diameter D, if δP ≤ 5D,2 then the covering length\nL = O (DSA ln(SA)). Thus the Q learning with random walk exploration could learn the\nnear optimal Q function within polynomial steps. Proof Now consider a target MDP with respect to a particular state s, where the transition\nis as same as the original MDP, but state s is the absorbing state and has the only unit\nreward. By Markov inequality and definition of diameter, the optimal policy can visit s with\nin cD steps with probability at least (c −1)/c in the original MDP. Since the target MDP\nhas the same transition with the original MDP except the state s, the expectation visiting\ntime of s would not change. So the undiscounted value of optimal policy in target MDP\nwould be at least (c −1)dD/c for (c + d)D steps. Now let us compute the undiscounted T\nsteps value for random walk policy. Let p be the distribution vector of start state, r be the\nreward distribution vector. (Note that the reward we defined for target MDP only depends\non state.) T T T\nV π∗−V πRW = X pT (P π∗)kr − X pT (P πRW )kr = X (pT (P π∗)k −pT (P πRW )k)r (5) By using lemma 20 k times, we have that\ncomm(pT (P π∗)k, pT (P πRW )k) ≥(1 −δ/2)kcomm(p, p) = (1 −δ/2)k (6) Use proposition 17 to turn commonality into l1 error:\n∥pT (P π∗)k −pT (P πRW )k∥1 ≤2 −2(1 −δ/2)k (7) Substitute this into the value error above:", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 38, |
| "total_chunks": 45, |
| "char_count": 1710, |
| "word_count": 328, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e7ed6c26-7b91-46bc-b780-b0c8d86b9c51", |
| "text": "|V^π* − V^πRW| ≤ Σ_{k=0}^{T} |(p^T (P^π*)^k − p^T (P^πRW)^k) r| (8)\n≤ Σ_{k=0}^{T} ∥p^T (P^π*)^k − p^T (P^πRW)^k∥_1 · max_{s,s'} |r(s) − r(s')|/2 (9)\n≤ Σ_{k=0}^{T} (1 − (1 − δ_P/2)^k) R_max (10)\n\nSo the value of πRW can be bounded by\n\nV^π* − T + (1 − (1 − δ_P/2)^T)/(1 − (1 − δ_P/2)) ≥ (c−1)dD/c − (c+d)D + (2/δ_P)(1 − (1 − δ_P/2)^T) (11)\n\nIf Tδ_P/2 ≤ 1, the binomial expansion of (1 − δ_P/2)^T has alternating terms of decreasing magnitude, so truncating after the quadratic term gives\n\n(1 − δ_P/2)^T ≤ 1 − T(δ_P/2) + (T(T−1)/2)(δ_P/2)^2\n\nSubstituting this back into (11), and using T = (c+d)D,\n\n(c−1)dD/c − (c+d)D + (2/δ_P)(1 − (1 − δ_P/2)^T) ≥ (c−1)dD/c − T + T − T^2 δ_P/4 = (c−1)dD/c − T^2 δ_P/4\n\nLet c = 2 and d = 3; then with δ_P ≤ 1/(5D^2) and T = 5D, the value above is at least 3D/2 − 5/4 ≥ D/4 (for D ≥ 1). We have that V^πRW ≥ D/4. Remember that we also need Tδ_P/2 ≤ 1; since we assume δ_P ≤ 1/(5D^2), that holds for T = 5D. On the other hand, the probability of visiting s by random walk within T steps is\n\np_v = Σ_{k=0}^{T} Pr(visit s at step k) ≥ Σ_{k=0}^{T} Pr(visit s at step k)(T − k)/T = V^πRW/T ≥ 1/20 (12)\n\nso in every T = 5D step episode we have a constant probability of visiting state s. Recall that at each state we draw actions uniformly.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 39, |
| "total_chunks": 45, |
| "char_count": 1148, |
| "word_count": 311, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "94fcf6ff-d8a4-4d87-9e15-b677be36892c", |
| "text": "According to 20, we need to visit a state more than A ln(4SA) times so that, with probability at least 1 − 1/(4S), we sample every action at least once. By Lemma 56 in Li (2009), we can achieve this within O(A ln(4SA) + ln(4S)) episodes with probability at least 1 − 1/(4S). Applying this for every state s and combining the failure probabilities, we have that with probability at least 1/2 we visit every state-action pair within O(DSA ln(SA)) steps. That completes the proof.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 40, |
| "total_chunks": 45, |
| "char_count": 451, |
| "word_count": 83, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "63005726-6388-4ec8-b8ca-a407fa83cec7", |
| "text": "Note that this bound being polynomial does not imply that the stationary distribution is polynomial: there are MDPs in which the actions are almost identical, yet certain states can only be reached with exponentially small probability. Likewise, the bound in 3 being polynomial does not imply the polynomial bound here. There are RL applications in which the action variation is small. Note that action variation only measures the difference in transition dynamics; the rewards can still vary a lot in this case. In hierarchical RL domains, it is common that more than one option leads to the same goal, with different costs/rewards. For example, if we want to control a robot arm to pick up a cup, there are many ways to pick it up that all end with the cup in the hand.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 41, |
| "total_chunks": 45, |
| "char_count": 803, |
| "word_count": 143, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "754f938d-d3e6-408f-8270-d7452df2e1af", |
| "text": "Rewards can be very different here but the outcome space is the same.\n\nC.2 Sub Transition Matrix Norm\n\nLet us view an MDP from a graph perspective, where actions are edges between states. If the graph is dense, then we can quickly visit any state, and intuitively we do not need to look ahead many steps to achieve a good exploration strategy. In that case the MDP is intrinsically easy to explore, and we want a problem-specific bound for random walk exploration in this case. Let P be the transition matrix under the random walk πRW, and P_{−v,−v} be the sub-matrix of P with the row and column corresponding to the state v removed.\n\nLemma 22 For any state v, the one-way covering time from any state to v by policy π is bounded by: max_u E[inf{t ∈ N : s_t = v} | s_0 = u, π] = ∥(I − P_{−v,−v}^T)^{−1}∥_1\n\nProof Let e_u be the one-hot start-state vector with its only nonzero entry at u; this is an (S−1)-dimensional vector since we removed the state v. Let X be the random variable giving the time at which we first visit v; then Y = X − 1 is the last time we stay in S ∖ {v}. The probability of not visiting v within k steps is ∥e_u^T P_{−v,−v}^k∥_1, which means: Pr(Y ≥ k) = ∥e_u^T P_{−v,−v}^k∥_1. Thus we can compute the expectation of X by:\n\nE(X) = Σ_{k=1}^∞ Pr(X ≥ k) = Σ_{k=0}^∞ Pr(Y ≥ k) (13)", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 42, |
| "total_chunks": 45, |
| "char_count": 1289, |
| "word_count": 261, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "338166e1-a4b2-421b-9939-ee51e64ab327", |
| "text": "= Σ_{k=0}^∞ ∥e_u^T P_{−v,−v}^k∥_1 = ∥e_u^T Σ_{k=0}^∞ P_{−v,−v}^k∥_1 (14)\n= ∥e_u^T (I − P_{−v,−v})^{−1}∥_1 (15)\n\nThe second line holds since the elements of e_u^T P_{−v,−v}^k are non-negative for all k. Note that max_u ∥e_u^T (I − P_{−v,−v})^{−1}∥_1 is exactly the l1 norm of the matrix (I − P_{−v,−v}^T)^{−1}.\n\nThus, to bound the covering length under π by this, we only need to bound ∥(I − P_{−v,−v}^T)^{−1}∥_1. By proving an equivalence factor between matrix norms via Holder's inequality, we have the following result.\n\nLemma 23 If inf_p ∥P_{−v,−v}^T∥_p < 1, then\n\n∥(I − P_{−v,−v}^T)^{−1}∥_1 ≤ inf_{p∈N} S^{1−1/p} / (1 − ∥P_{−v,−v}^T∥_p)\n\nProof For any n-by-n matrix A and p ≥ 1:\n\n∥A∥_1 = max_x ∥Ax∥_1/∥x∥_1 ≤ max_x n^{1−1/p}∥Ax∥_p/∥x∥_1 ≤ max_x n^{1−1/p}∥Ax∥_p/∥x∥_p = n^{1−1/p}∥A∥_p\n\nThe first inequality follows from Holder's inequality, and the second simply from ∥x∥_1 ≥ ∥x∥_p for any p ≥ 1. For any matrix-induced lp norm,\n\n∥(I − P_{−v,−v}^T)^{−1}∥_p ≤ Σ_{k=0}^∞ ∥P_{−v,−v}^T∥_p^k = 1/(1 − ∥P_{−v,−v}^T∥_p) (16)\n\nNow combining these together, we have that:", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 43, |
| "total_chunks": 45, |
| "char_count": 959, |
| "word_count": 185, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d12bfe38-cb04-4050-8537-b055fa43e966", |
| "text": "∥(I − P_{−v,−v}^T)^{−1}∥_1 ≤ inf_{p≥1} (S−1)^{1−1/p} ∥(I − P_{−v,−v}^T)^{−1}∥_p ≤ inf_{p≥1} S^{1−1/p} / (1 − ∥P_{−v,−v}^T∥_p) (17)\n\nNote that the bound is finite only if the sub transition matrix of the policy π satisfies inf_p ∥P_{−v,−v}^T∥_p < 1. By repeating this enough times, as bounded in Lemma 6, we obtain an upper bound on the number of steps needed to cover all actions in state v. Applying this to every state, we get the upper bound on the covering length for random walk, as in the following theorem.\n\nTheorem 24 Let P be the transition matrix under the random walk policy πRW, and P_{−v,−v} be the sub-matrix of P with the row and column corresponding to v removed. If inf_p ∥P_{−v,−v}^T∥_p < 1 for every state v, then the covering length of this MDP under random walk is finite and bounded by:\n\n4A ln(4SA) Σ_{v∈S} inf_{p≥1} S^{1−1/p} / (1 − ∥P_{−v,−v}^T∥_p)\n\nRemark: The assumption inf_p ∥P_{−v,−v}^T∥_p < 1 is more likely to hold when the transition matrix P is denser. The following corollary gives some intuition about this. If we only consider the case p = 1, it reduces to a trivial bound:\n\nCorollary 25 If the minimum one-step transition probability between two different states under πRW is p_min > 0, then the covering length is bounded by 4SA ln(4SA)/p_min.\n\nProof This corollary follows immediately from the case p = 1 in the theorem above, together with the fact that 1 − ∥P_{−v,−v}^T∥_1 = p_min.", |
| "paper_id": "1805.09045", |
| "title": "When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms", |
| "authors": [ |
| "Yao Liu", |
| "Emma Brunskill" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09045v4", |
| "chunk_index": 44, |
| "total_chunks": 45, |
| "char_count": 1322, |
| "word_count": 247, |
| "chunking_strategy": "semantic" |
| } |
| ] |