[
{
"chunk_id": "e1aaaffc-0527-44c4-ba32-11143a8f1687",
"text": "Depth and nonlinearity induce implicit exploration for RL Justas Dauparas1,2*, Ryota Tomioka2, Katja Hofmann2",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 0,
"total_chunks": 12,
"char_count": 109,
"word_count": 14,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2998a067-bec0-45e2-8e64-2cf287ab75e1",
"text": "Abstract2018\nThe question of how to explore, i.e., take actions with uncertain outcomes to learn about possible future rewards,\nis a key question in reinforcement learning (RL). Here, we show a surprising result: We show that Q-learning with\nnonlinear Q-function and no explicit exploration (i.e., a purely greedy policy) can learn several standard benchmarkMay tasks, including mountain car, equally well as, or better than, the most commonly-used ϵ-greedy exploration. We carefully examine this result and show that both the depth of the Q-network and the type of nonlinearity are\nimportant to induce such deterministic exploration.29 Reinforcement learning (RL) is a systematic approach to learning in sequential decision problems, where a learners'\nfuture task performance depends on its past actions. In such settings, learners have to explore, meaning they have[cs.LG] to take actions with uncertain outcomes, to facilitate learning about the consequences of such actions. The question of how to best explore is a key open question in RL. Here, we specifically address this question from\nan empirical perspective, and investigate how to explore in a way that leads to sample efficient learning in deep RL,\ni.e., reinforcement learning with value function approximators that are parameterized as powerful neural networks. We present a surprising finding: in this setting, good approximate value functions can be learned without any\nexplicit exploration. In fact, we find that in several cases learning without explicit exploration is equally or more\nsample efficient than the most-commonly used ϵ-greedy exploration scheme on several standard benchmark tasks. We present additional results that suggest a likely role of model structure (network depth and nonlinearity) in\ninducing such implicit exploration. We believe that our insights have strong practical implications and open up a\nnovel line of research towards understanding exploration in deep RL.",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 1,
"total_chunks": 12,
"char_count": 1959,
"word_count": 292,
"chunking_strategy": "semantic"
},
{
"chunk_id": "435cdf62-ecde-4932-86e5-4fcfb0044645",
"text": "For any given policy π the Q-value, also called state-action value, can be written as Qπ(a,s) := E[r(s,a) +\nP∞t=1 γtr(st,at)], i.e., the expected discounted (with discount factor γ) cumulative reward from taking action a in\nstate s and following policy π thereafter. An optimal policy achieves optimal Q-values Q∗:= maxπQ(s,a). Learning approach (DDQN) Q-learning-based approaches estimate Q∗using an iterative approach that bootstraps estimates of Q(s,a) from those of subsequent states s′, using the recursion Q(s,a) = r(s,a)+γmaxa′Q(s′,a′). In approaches based on deep Q-learning (Mnih et al., 2015), Q-value estimates are parameterized by a deep neural\nnetwork, and trained using stochastic gradient descent using interaction data obtained through interaction with an\nenvironment using a behavior policy. In Double DQN (DDQN (Van Hasselt et al., 2016)), gradient updates minimize\nthe squared loss ∥Q(s,a;θt) −r(s,a) −Q(s′,ar gmaxa′Q(s′,a′;θt);θ′t)∥2, where the parameters of the Q function\nare denoted by θ and we explicitly distinguish between model parameters θ and target parameters θ′. Stochastic\nupdates are computed on mini-batches sampled from a replay buffer, a record of past experience. *Part of this work was done while Justas was a Research Intern at Microsoft Research Cambridge.",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 2,
"total_chunks": 12,
"char_count": 1296,
"word_count": 190,
"chunking_strategy": "semantic"
},
{
"chunk_id": "da174a64-64c0-4d44-b80b-3eb404d67b22",
"text": "(a) ϵ = 0 (b) ϵ decayed in 25k steps (c) ϵ decayed in 100k steps Figure 1: Comparison of no explicit exploration (ϵ = 0) to linearly decaying ϵ on the mountaincar-V0 task (5 random\nseeds). (a) cartpole-v0, ϵ = 0 (b) cartpole-v0, ϵ dec. in 10k steps (c) acrobat-v1, ϵ = 0 (d) acrobat-v1, ϵ dec. in 10k steps Figure 2: cartpole-v0 and acrobat-v1 tasks (10 random seeds). Exploration We contrast a greedy behavior policy with the standard ϵ-greedy approach (Sutton and Barto, 1998). A greedy policy selects actions a∗θ = ar gmaxaQ(a,s;θ). In ϵ-greedy, actions are sampled uniformly at random with\nexploration rate ϵ, while the greedy action is selected with probability 1−ϵ. Following common practice (Mnih et al.,\n2015), we decay the exploration rate over time.",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 3,
"total_chunks": 12,
"char_count": 759,
"word_count": 130,
"chunking_strategy": "semantic"
},
{
"chunk_id": "506e054f-812e-4e0e-8c98-beb8089a0259",
"text": "Tasks We use the following OpenAI-gym (Brockman et al., 2016) tasks: mountaincar-v0, cartpole-v0, and acrobatv1. These are common RL benchmarks (Duan et al., 2016). Hyper-parameters For the experiments on mountaincar-v0, we used a replay buffer of size 200k, batch size\n256, discount factor γ = 0.99; we used a Q-network with two ReLU hidden layers of 128 units each, unless stated\notherwise; for optimization, we used Adam (Kingma and Ba, 2015) with α = 5·10−4; the target network was updated\nevery 1000 steps. For cartpole-v0 and acrobat-v1 we used replay buffer size 50k; all other parameters were the same. In Figure 1, we plot the reward statistics (mean, median, 2%- and 98%-pecentiles) obtained by running DDQN on\nthe mountaincar-v0 task with (a) no explicit exploration (ϵ = 0) (b) linear decay of the exploration rate ϵ from 1 to 0\nin 25k steps and (c) linear decay in 100k steps. 5 independent random seeds were used to obtain the statistics. The\nplots show that the agent without explicit exploration (ϵ = 0) can solve the task equally well, or even slightly better\nthan, standard exploration strategies. We confirmed similar results on cartpole-v0 and acrobat-v1 (Fig. 2).",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 4,
"total_chunks": 12,
"char_count": 1184,
"word_count": 196,
"chunking_strategy": "semantic"
},
{
"chunk_id": "64c822ed-244c-49ed-9419-fef7dd4ffd81",
"text": "How can an agent explore without randomness? Note that all the above environments are deterministic except\nfor the initial states. If it is not the environment or the stochasticity in the behavior policy, it must be some property\nof the Q-network that is inducing the exploration. To understand what is inducing the exploration for the mountaincar-v0 task, we carried out further experiments\nwith the following Q-network architectures (see Fig. 4): 1.25 1.00 0.75 0.50 0.25 0.00 0.25 0.50 1.25 1.00 0.75 0.50 0.25 0.00 0.25 0.50",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 5,
"total_chunks": 12,
"char_count": 528,
"word_count": 86,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fa3c37f8-7d16-4027-b352-a1465df4572e",
"text": "(a) Random nonlinear Q-function as a controller. (b) Random linear Q-function as a controller. Figure 3: Vector fields in the phase space of the mountaincar-v0 task with and without the random Q-function as a\ncontroller. Blue: with the controller. Black: uncontrolled system. Linear (no hidden layer); 2. 1 hidden layer with 128 ReLU units; 3. 2 hidden layers with 128 ReLU units in each layer (original setting); 4. 2 hidden layers with 128 tanh units in each layer. All results were obtained with ϵ = 0.",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 6,
"total_chunks": 12,
"char_count": 505,
"word_count": 86,
"chunking_strategy": "semantic"
},
{
"chunk_id": "524edf36-2874-41d3-93b3-0c54918fcdf3",
"text": "Within each column, we plot the reward statistics (as above), and phase space\ndiagrams showing the 1000 state transitions leading up to 10k steps, 20k steps, 40k steps, and 160k steps. The\ntrajectories are superimposed on top of the histograms of the state visit frequencies colored from black (zero) to\nwhite (more than 100). The red vertical lines indicate the goal states. In the first column of Fig. 4, we can see that without any nonlinearity, the agent was not able to reach the goal\nstate even once and consequently did not learn the task at all, although we believe that a linear agent is sufficient to\nsolve this task (Mania et al., 2018). By contrast, we can see in the second column that the agent is able to solve the task with a single ReLU hidden\nlayer of size 128. We have also experimented with two fully-connected layers without nonlinearity or just one\nfully-connected layer initialized with large weight initialization scale, but none of them were as successful as the\nnetworks with ReLU nonlinearities. The original setup of two hidden layers (third column) seem to be slightly better\nthan one hidden layer. The last column shows the same result for two hidden layers with the tanh nonlinearity. The\nreward curve appears slightly noisier than for the ReLU activations, but this may be due to the high variance.",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 7,
"total_chunks": 12,
"char_count": 1330,
"word_count": 228,
"chunking_strategy": "semantic"
},
{
"chunk_id": "25315f64-5a5e-45ea-8c0b-36cad05e4063",
"text": "4 Discussion and Conclusion Can deterministic exploration be an alternative to random exploration? Deterministic exploration is attractive\nbecause it would avoid the unnatural dithering behavior often observed with ϵ-greedy and other stochastic exploration strategies. From a control theory perspective, an easy way to induce exploration is to destabilize the\nunderlying system. For example, a small inverse damper term (i.e., an acceleration proportional to the speed) would\nbe sufficient for the mountain car task, because success does not depend on the speed at which the goal state is\nreached. However, this is not the case for other benchmark tasks (e.g., acrobat-v1) and it would be a bad idea for\nreal-world systems. Another way to induce deterministic exploration would be to induce chaotic dynamics.",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 8,
"total_chunks": 12,
"char_count": 808,
"word_count": 122,
"chunking_strategy": "semantic"
},
{
"chunk_id": "743e4223-20b2-4b60-9e33-0779dfa2357d",
"text": "For\nexample, for acrobat-v1, it is enough for the controller to compensate for gravity. However, in both cases it is unlikely\nthat a randomly drawn initial Q-function behaves like an inverse damper term or a gravity compensator. In this\npaper we did not design an optimal deterministic exploration behavior but we demonstrated that such an behavior\ncan be induced by the network architecture. What is the role of the nonlinearity? We plot the vector fields of the moutaincar-v0 task with and without a\nrandomly initialized Q-function with two hidden layers as a controller in Fig. 3(a).",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 9,
"total_chunks": 12,
"char_count": 586,
"word_count": 96,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5379006b-06b8-4687-95fe-593b59e70cef",
"text": "All weight matrices were initialized using Glorot initialization (Glorot and Bengio, 2010) and all bias terms were initialized to zero. We\nalso plot 10 trajectories from random initial states. The same plot with a linear Q-function is shown in Fig. 3(b). Comparing the two plots we notice that the nonlinear Q-function can modify dynamics in multiple regions of\nthe phase space (e.g., area around position= −0.5 and 0.5), whereas the linear Q-function tends to focus on one\nregion. However, as we can see in the trajectories in Fig. 3(a), random Q-network initialization itself is not enough to\ninduce dynamics that can reach the goal state. Therefore we hypothesize that it is the combination of optimistic\ninitialization (Sutton and Barto, 1998) and the flexibility of the Q-function that is inducing exploration. That is,\nwhen the area around the initial state is explored and observed to be fruitless, a flexible nonlinear agent can still\nmaintain optimism in the states and actions that it has not seen, whereas a linear agent may incorrectly extrapolate\nthat there is no reward in those states. Limitations We note several limitations. First, stochasticity may be induced by initial states, as they are randomly\nsampled from [−0.6,−0.4] in the moutaincar-v0 task.",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 10,
"total_chunks": 12,
"char_count": 1269,
"word_count": 203,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6dce2baf-2cf5-400d-9668-459b7ba9b89a",
"text": "However, since the goal state cannot be reached from any of\nthe states in this interval, this is unlikely to play a major role. Second, we have not fully studied how the optimistic initialization interacts with the nonlinearity of the Qfunction. For mountaincar-v0, it would be interesting to experiment with an inverse reward that is positive at the\ngoal and zero otherwise. However, optimistic initialization applies to acrobat-v1 and moutaincar-v0 (all rewards are\nnegative) but not to cartpole-v0 (positive rewards), a task where DDQN with ϵ = 0 can learn as well as the ϵ-greedy\napproach. In this note we have shown that competitive performance on standard RL benchmarks can be achieved without\nexplicit exploration when deep neural networks are used as function approximators in Q-learning. Our analysis\nsuggests that both network depth and nonlinearity play a role by inducing optimism without overgeneralization. While we have mainly focused on the aspect of optimism induced by a deterministic policy, another important aspect\nis understanding the role of uncertainty. We believe that combining uncertainty quantification (e.g., bootstrapped\nDQN Osband et al., 2016) with deterministic exploration could be an interesting alternative to standard stochastic\nexploration. No hidden layer 1 hidden layer 2 hidden layers 2 hidden layers\n(linear) (ReLU) (ReLU) (tanh) Figure 4: Reward (5 random seeds) and trajectories for different Q-network architectures.",
"paper_id": "1805.11711",
"title": "Depth and nonlinearity induce implicit exploration for RL",
"authors": [
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
],
"published_date": "2018-05-29",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11711v1",
"chunk_index": 11,
"total_chunks": 12,
"char_count": 1461,
"word_count": 220,
"chunking_strategy": "semantic"
}
]