| [ |
| { |
| "chunk_id": "84e0d536-691b-4bbd-9344-a7d64269c24f", |
| "text": "Dyna Planning using a Feature Based Generative Model\nRyan Faulkner (ryan.faulkner@mail.mcgill.ca), Doina Precup (dprecup@cs.mcgill.ca)\nSchool of Computer Science, McGill University\nMontreal, QC, H3A 2A7", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 0, |
| "total_chunks": 31, |
| "char_count": 265, |
| "word_count": 32, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8dcea87c-2265-4abf-80f0-e82499c46ff7", |
| "text": "Dyna-style reinforcement learning is a powerful approach for problems where\nnot much real data is available. The main idea is to supplement real trajectories,\nor sequences of sampled states over time, with simulated ones sampled from a\nlearned model of the environment. However, in large state spaces, the problem\nof learning a good generative model of the environment has been open so far. We propose to use deep belief networks to learn an environment model for use\nin Dyna. We present our approach and validate it empirically on problems where the state observations consist of images. Our results demonstrate that using deep\nbelief networks, which are full generative models, significantly outperforms the\nuse of linear expectation models, proposed in Sutton et al. (2008). Recent reinforcement learning (RL) research is devoted to increasingly large problems, in which an\nagent is forced to learn quickly, with a small amount of data. One approach that attempts to make\nefficient use of the existing data is the Dyna architecture proposed by Sutton (1990). In this case, data\nobtained by an agent from its interaction with the environment is used to learn both a value function\n(which guides the action selection) and a generative model of the environment. The model can then\nbe used to \"fantasize\" additional data, which the agent can use to improve its value estimates. If\nthe model is of good quality, the sampled data will be very useful as a supplement for real data.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 1, |
| "total_chunks": 31, |
| "char_count": 1486, |
| "word_count": 243, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "205a82d2-540b-42cf-8a11-ff186c8d242e", |
| "text": "This approach has been quite successful as a planning and learning tool in discrete domains, but\nin large state spaces a good solution has so far proven elusive. In this paper, we propose to use deep belief networks (Hinton et al., 2006) to learn an environment\nmodel for a Dyna architecture. Our goal is to take a step towards using such architectures to tackle\nreinforcement learning in environments with very large state or observation spaces, consisting for\nexample of images, sounds or rich sensor readings. Deep belief networks have proven to be very\neffective at learning how to represent such complex data. However, only a few applications have\nbeen implemented using deep belief nets for the analysis of temporal data (e.g. Taylor et al., 2007;\nMemisevic & Hinton, 2009).", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 2, |
| "total_chunks": 31, |
| "char_count": 726, |
| "word_count": 118, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "cfde87c1-4d51-46a2-ba70-0fedb7f325a3", |
| "text": "Our proposed approach is to learn a deep representation for the states\nof the environment first. Then, a temporal layer connecting two deep belief nets is learned; this will\nmodel the conditional distribution of the next state, given the current state (see §3). In the current\nimplementation, models are learned separately for each action, as the space of actions is discrete and\nsmall in number in our experiments. Once the action models are learned, they can be used to generate\nsamples of states, by setting the input as the sensor reading of the current state, and performing\ninference to generate a next state. These samples can be used both for extra learning steps, as well\nas in order to plan which action should be chosen. We mainly investigate the learning aspect in this paper. We note that free-energy models have been used before in reinforcement learning to represent\nvalue functions (Sallans & Hinton, 2004) or policies (Otsuka, 2010). However, to our knowledge,\nthis is the first time they are used as a generative model in the Dyna framework - a setting where the\nfull power of such a model can be exploited best.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 3, |
| "total_chunks": 31, |
| "char_count": 1122, |
| "word_count": 192, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "29fda29c-44f8-42ee-a4c0-519da6b2f6d8", |
| "text": "In reinforcement learning, an agent interacts with its environment at discrete time steps t = 0, 1, .... At each time step, the agent receives information about the current state s_t, chooses an action a_t,\nand the environment transitions to a new state s_{t+1}, emitting a reward r_{t+1}. In general, learning\ntakes place on-line; the agent attempts to estimate a policy π, mapping states to actions, which\nachieves high cumulative reward. As an intermediate step, many reinforcement learning methods\nwill estimate a value function, i.e. an expectation of the cumulative reward. Value functions can be\nassociated either with states or with state-action pairs. The state value function for a fixed policy π\nis defined as:\nV^π(s) = E_π[r_{t+1} + γ r_{t+2} + ... | s_t = s]\nwhere γ ∈ (0, 1) is a discount factor, used to de-emphasize rewards that will be received in the\ndistant future. If the environment is Markovian, it is guaranteed to have a unique optimal state value\nfunction:\nV*(s) = max_π V^π(s)\nwhich can be achieved by at least one deterministic policy. State-action value functions can be\ndefined similarly (see Sutton & Barto, 1998, for a comprehensive description). If the agent follows a fixed policy, temporal difference (TD) learning (Sutton, 1988) provides a way\nto use experience to estimate the value function. For example, in TD(0), the following update is\nperformed at every time step t:\nV(s_t) ← V(s_t) + α[r_{t+1} + γ V(s_{t+1}) − V(s_t)]\nwhere α ∈ (0, 1) is a learning rate parameter. In domains with many states, V can be represented\nby a function approximator with parameter vector θ. In this case, the TD(0) update rule becomes:\nθ_{t+1} = θ_t + α[r_{t+1} + γ V_t(s_{t+1}) − V_t(s_t)] ∇_{θ_t} V_t(s_t)\nwhere θ_t denotes the estimate of the parameter vector at time t, and V_t is a shorthand for V_{θ_t}. The\nexpression of the gradient, ∇_{θ_t} V_t(s_t), depends on the type of function approximator used (e.g.\nlinear, neural network, etc.). If the goal is to learn an optimal control policy, the Q-learning algorithm can be used to estimate\nthe state-action value function, with update rules similar to the ones above (see Sutton & Barto,\n1998, for details). When function approximation must be used, a separate approximator is usually\nmaintained for each action, when the space of actions is discrete and when the number of actions is\nnot so large that this approach is prohibitive.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 4, |
| "total_chunks": 31, |
| "char_count": 2347, |
| "word_count": 401, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8a081675-fac3-466b-8a49-c8473f9fe5c3", |
| "text": "These conditions are met for the experiments that\nfollow. In some applications, for example robotics or active vision, it is difficult to obtain a sufficient amount\nof real data by interacting with the environment directly, because generating experience may be risky\nor time consuming. The Dyna architecture is a framework for planning and learning introduced by\nSutton (1990) and intended for this situation. The architecture is presented in Figure 1. The main\nidea is that the agent will use real experience to build a \"simulation\" (i.e. generative) model of the\nenvironment, which can then be used to sample new transitions. For every step of real experience,\nthe agent will also sample transitions and rewards from its learned model; these are used just like the\nreal experience, to perform TD or Q-learning updates. Dyna has proven successful in large discrete\nproblems, where experience is scarce; however, for large problems, where function approximation\nmust be used, finding a good way to model the environment has been difficult. Recently, Sutton et\nal. (2008) proposed an approach that can be used for linear function approximation, in which only\n\"expectation\" models are used, instead of fully generative models. In this case, the model consists\nof learning the expected value of the next feature vector, φ(s_{t+1}), given the current feature vector\nφ(s_t). They show that planning steps can be performed correctly with such a model, and that the\nmodel can be used in combination with linear function approximation. However, this approach does\nnot work with neural networks or other non-linear function approximators.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 5, |
| "total_chunks": 31, |
| "char_count": 1629, |
| "word_count": 257, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "62ec58c4-d492-43b7-9b44-83bcfcc8cd68", |
| "text": "Our goal in this paper is to leverage the representational power of deep belief networks in\norder to be able to learn generative models of\nenvironments used in reinforcement learning\ntasks. We use the approach taken by Hinton\net al. (2006) to greedily train a deep network\nlayer-by-layer, by training a Restricted Boltzmann Machine (RBM) (Smolensky, 1986) at\neach layer. Such models can then be used\nin a Dyna-style architecture, in order to learn\ngood value functions and policies; we note that,\nwhile we explore mainly Dyna in this paper,\nother uses of the models are also possible (e.g.,\nin order to estimate the risk of certain policies, based on the variance of the samples obtained using\nthe model). [Figure 1: Dyna Architecture] Figure 2 illustrates the deep belief model we use. It is composed of two parts: an autoencoder\nnetwork (duplicated on the left and right side) and an associative memory (RBM) at the top level. The autoencoder network is used to generate a good, compact representation of the state, which\nsummarizes well the observed data; this is similar to the use of deep networks for unsupervised\nlearning, summarized above. The RBM at the top is the temporal layer of the model; its role is to\nmodel the temporal dependencies between states (assuming that, at the level of the hidden variables,\nthe environment is well represented as a Markov Process). The topology closely resembles the\ndeep network discussed by Hinton et al. (2006). It differs from other forms of deep networks\nbased on energy models that are used for prediction over time (Taylor et al., 2007; Sutskever et\nal., 2008) because it focuses first on reducing the dimensionality of the data, and only uses this\nreduced latent variable representation in the temporal dependency model. Using a lower dimensional\nrepresentation makes inference and learning the temporal dynamics of the environment much faster,\ndue to the representational power of the higher level units (Hinton et al., 2006; Bengio,\n2009). This is important in our case, because fast inference is essential in problems in which an\nagent must take actions in a timely manner.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 6, |
| "total_chunks": 31, |
| "char_count": 2138, |
| "word_count": 353, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4aa19d86-caca-4b15-a14a-6d2959e81bb4", |
| "text": "The learning algorithm has two phases: training the model, and learning the value function and policies using states generated by the model to supplement real experience. We now describe each of these components.\n3.1 Training, Prediction & Dyna\nTraining the network involves learning each layer independently, as described in the previous section. The data consists of a time sequence of environment states, s_1, s_2, .... A separate model is learned for each action. If the time series is generated according to an exploration policy, the samples (s_t, a, s_{t+1}) corresponding to a particular action a will be used to train the model for a. The autoencoders are learned first, using greedy layer-wise training (Hinton et al., 2006). Note that this step does not require a time sequence, as the autoencoder is aimed at representing states in a more compact way. Once the autoencoders are learned, the parameters of the temporal layer must be trained.\n[Figure 2: The generative network used for Dyna learning. W represents the weights at each layer of the deep network. The visible units are represented by the bottom layer of variables in the autoencoder network, with hidden-layer representations up to the top level of the autoencoder.]", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 7, |
| "total_chunks": 31, |
| "char_count": 1381, |
| "word_count": 227, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ec05f845-ab07-471a-aa69-3a4087ae1771", |
| "text": "Each pair\nof states s_t and s_{t+1} is used as data at the visible units. This evidence is propagated up in the\nautoencoders, to obtain corresponding latent variable representations h_t and h_{t+1}, which are then used as evidence to train the temporal layer of the model, using the standard RBM procedure described\nby Hinton et al. (2006). When the full model has been trained, it can be used to generate samples of future states, given a\ncurrent state observation. The current state s_t represents the data. Its high-level representation, h_t,\nis generated as described above. Gibbs sampling is performed over the temporal layer, holding the\nvisible units of the RBM corresponding to the representation h_t fixed, and allowing the remaining\ntemporal layer units to settle to their equilibrium distribution. This effectively yields an unbiased\nsample from P(h_{t+1}|h_t). Once the Gibbs sampling is finished and h_{t+1} has been determined, the\nlow-level representation (corresponding to the observation of the next state, s_{t+1}) is determined by\nperforming a generative downward pass over the autoencoder model. Using these predictions, it is possible to generate K-step trajectories (Sutton et al., 2008) of simulated\nexperience for every real transition. These simulated trajectories are rooted at observed states in\norder to ensure that the relative probabilities with which states are observed in the data are preserved,\nto the extent possible. Note that these probabilities are crucial in reinforcement learning, and they\nvary as the agent's policy changes. Ideally, the models should be trained continually, as the agent's\npolicy changes, or importance sampling should be incorporated in the inference procedure to make\nsure that the state visitation distribution is tracked correctly. However, for simplicity, we separate\nthe model training phase and the value function training, without any loss in the quality of the results\n(as we will see in the next section).", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 8, |
| "total_chunks": 31, |
| "char_count": 1932, |
| "word_count": 300, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f31c4b90-0d50-45a3-892e-6c23e80cf452", |
| "text": "Note that the number of simulated transitions K is usually a very important parameter in Dyna-style\narchitectures. If the model is imprecise, the trajectories drift away from the real distribution as K\nbecomes large, and performance degrades. Having a good-quality model is crucial for this approach\nto work.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 9, |
| "total_chunks": 31, |
| "char_count": 308, |
| "word_count": 48, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5552cfcd-18b9-4f77-9e2f-b208bb3d3673", |
| "text": "4 Experiments & Results\nIn order to evaluate our approach, we use navigation tasks, but with a twist: instead of observing\nthe exact identity of the state, the agent observes a high-dimensional vector representing the state. Observations may be noisy, or they may be uniquely associated with the state. However, even if the\nenvironment is fully observable, function approximation is needed for the agent to correctly estimate\nreturns, because of the dimensionality of the observation vector. We chose to associate states with images, because this facilitates the visual interpretation of results,\nand also because image processing with deep belief networks has been explored extensively. The\nfirst domain we used is a small Markov chain built on top of the MNIST data set [4]. The second\nset of experiments uses larger problems, built on top of the NORB [5] data set, and the goal is to\nlearn optimal control (rather than just estimating the value of a fixed policy). The key feature of both\ndomains is that states have a high-dimensional low-level representation, so they may be represented\nmuch better using high-level features. We now present the details of the two domains and discuss\nour results.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 10, |
| "total_chunks": 31, |
| "char_count": 1201, |
| "word_count": 195, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8edc575c-34eb-4522-801d-293e69752137", |
| "text": "4.1 MNIST Environment\nThe MNIST database, created by Yann LeCun and Corinna Cortes (1998), consists of handwritten decimal digits. The full data set consists of 60000 training samples and 10000 test samples; each sample consists of 28 × 28 (784) monochrome pixels. We defined 10 states, corresponding to the MNIST classes. The purpose of the MNIST experiment is only to evaluate the effectiveness of our model in the context of Dyna, thus actions are omitted for this toy domain. We use a simple Markov chain over these states, depicted in Fig. 3. The transition probabilities are P(s_{t+1} = i+1 | s_t = i) = 0.8 and P(s_{t+1} = i | s_t = i) = 0.2 for 0 ≤ i < 9. When i = 9, then P(s_{t+1} = 0 | s_t = 9) = 0.8 and the probability of staying in state \"9\" is 0.2. Only the state corresponding to the class label \"9\" has a reward of +1; all other rewards are 0.\n[Figure 3: The general transition diagram of the Markov chain defining the MNIST domain.]", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 11, |
| "total_chunks": 31, |
| "char_count": 971, |
| "word_count": 184, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e87cfe12-e02c-4425-a1a6-5693db7299fa", |
| "text": "The observation corresponding to a state is sampled uniformly randomly from the corresponding set of digits. Note that this is actually a partially observable environment, although the\nobservations for a particular state are very closely related. A similar domain, but on a smaller scale,\nwas used in Otsuka (2010). 4.2 Model Learning for the MNIST Domain & Value Function Learning", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 12, |
| "total_chunks": 31, |
| "char_count": 381, |
| "word_count": 60, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7186fcb8-b7c9-49ea-a355-fded9933ddfa", |
| "text": "We generated 1000 transitions from the environment. To train the autoencoder portion of the network, four levels were used, with 500, 500, 200, and 100 hidden units respectively (from bottom to\ntop) and CD10 - contrastive divergence learning with 10 Gibbs steps (Carreira-Perpinan & Hinton,\n2005) - trained over 2000, 1000, 3000, and 5000 sweeps of the data respectively (see Hinton et al.,\n2006 for details). The data was also broken into 20 evenly sized mini-batches. After training the\ndeep models greedily they were fine tuned using backpropagation with a learning rate of 0.01 over\n20000 sweeps. The temporal layer contained 1000 hidden units and was trained over 3000 sweeps\nwith CD20 and the data broken into 10 sequential and evenly sized mini-batches.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 13, |
| "total_chunks": 31, |
| "char_count": 760, |
| "word_count": 123, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ec6faef3-9f9a-4401-a1bb-86b61c137b9f", |
| "text": "Over all training\nwe used a learning rate of 0.005 with a momentum of 0.9. Reconstruction error over the data using\njust the autoencoder ranged from 4-6% on test data after fine-tuning. Training times for the autoencoder, fine-tuning, and temporal layer were 4, 5, and 0.5 hours, respectively. All training was performed\non an 8-core CPU. We use this domain as a proof-of-concept for our architecture.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 14, |
| "total_chunks": 31, |
| "char_count": 396, |
| "word_count": 67, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7ec85db8-cb06-4192-a0a2-953821c961a3", |
| "text": "For this purpose, we wanted to evaluate separately errors introduced by the model and errors introduced by the value function estimation. Hence, we use two ways to represent the value function. The first method (TAB) assumes that we\nhave access to the state labels and therefore a table of values can be learned, with a separate entry\nfor each state. In order to obtain class labels from the representation provided by the deep network,\nwe train a three-layer neural network (with 500 and 150 hidden units), in a supervised manner, using\nbackpropagation [4]. The classifier was trained on 45000 MNIST training samples for 2500 sweeps\nwith learning rate 0.1 and momentum set at 0.5. Over a test set of 850 cases, there were on average\n25 errors, a rate of 2.94%, in close agreement with current literature [4].", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 15, |
| "total_chunks": 31, |
| "char_count": 810, |
| "word_count": 140, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d6b619c5-6952-4fa5-8f40-8b30160ec105", |
| "text": "The labels obtained from\nthe classifier are then used as indices in the value function table. The second method (FA) involves learning a linear function approximator over the features of the\nstate representation (in this case, the MNIST pixel values). Under this scheme, the value of a state is\nestimated by averaging the value returned by the function approximator over all of the training data\nsamples for each labeled state. We used learning rates of α = 0.01 and 10^6 updates for FA, and α = 0.02 with 4 × 10^5 steps for\nTAB. The number of updates to perform was determined empirically by observing when value\nestimates and the function approximation parameters no longer changed significantly.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 16, |
| "total_chunks": 31, |
| "char_count": 696, |
| "word_count": 117, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "45973189-0cbe-4289-9b1f-4c6de86425bf", |
| "text": "We used the\nDyna algorithm with K = 5 (Sutton et al. 2008) where sampling of future states was performed\nwith 5000 Gibbs steps (2 hours to generate all samples). If the final sample of the training data is\nencountered before the values have converged (within a specified threshold) then another pass over\nthe data is performed. The values computed under each approach are compared to the true values of the states, which can\nbe computed exactly from the model. Fig. 4 shows the value function error as training progresses\n(left) as well as the total variation of the deep model compared to the real one (middle panel). The\nvalue error was computed by averaging the absolute differences of value predictions and the true\nvalues over the number of states. The total variation estimates are computed from the transition\nkernel sampled from model predictions and the true model. As expected, more training leads to\nbetter estimates, both for the model and for the value function. An important aspect we wanted to test is the ability of the model to project farther into the future. Therefore, we generated 20-step trajectories over the model, and estimated the total variation\nbetween the true k-step model of the environment, and the k-step model as estimated by the simulation. The result is presented in Fig. 4 (right). The plot shows a remarkable ability of the simulated\nmodel to track the true model, even on longer trajectories. These results are very encouraging, as\nthey demonstrate that the models are quite stable. With this, we now turn to a bigger problem.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 17, |
| "total_chunks": 31, |
| "char_count": 1565, |
| "word_count": 264, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3e2daf6c-9837-424e-af49-ea375c38b83c", |
| "text": "[Figure 4: The error in the value function for the TAB and FA methods, and the total variation between the transition matrix sampled from the model and the transition matrix of the environment, both plotted against the number of training epochs. At right is the total variation of the simulator distribution over several steps.]", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 18, |
| "total_chunks": 31, |
| "char_count": 786, |
| "word_count": 145, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fa14c815-1dc1-40f6-9705-a9aa30b9671d", |
| "text": "The NORB dataset [5] consists of images of toy models from five object classes: humans, cars,\nplanes, trucks, and four-legged animals. The toy models are all uniformly coloured white, which\nmeans that information about shape is the most relevant indicator of an object's class. For each class, there are nine distinct models, photographed at a 96 × 96 resolution on a blank background from nine different elevations, 18 azimuths, and under six different levels of illumination. As the first intended use of NORB was for 3D object recognition, the dataset is composed of stereo\npairs over each of the sample configurations, leading to a grand total of 24300 image pairs. There are\nmore complex NORB datasets (the set described is referred to as the \"small\" set) which include jittered and cluttered backgrounds, translation of the object from the center, and also distractor objects\nin the periphery of the image. Using these image sets is left for future work. We use a modified version of the \"small\" dataset. Only one of the stereo pairs is used, and over\nthe five classes we chose a single instance, illumination level, and elevation. This leads to five\ndistinct objects of different classes at 18 azimuths, for a total of 90 samples or states to be used in\nour RL domain.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 19, |
| "total_chunks": 31, |
| "char_count": 1271, |
| "word_count": 214, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ee27238c-6ad3-4f73-b2d4-73362f64e34a", |
| "text": "To speed up the training of the model, we reduced the resolution of the images in a fashion similar to Hinton et al. [9]. The images were first reduced in size to 64 × 64 pixels by removing the borders of the samples, which contained only background. The resulting images were then downsampled to a 32 × 32 resolution.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 20, |
| "total_chunks": 31, |
| "char_count": 319, |
| "word_count": 60, |
| "chunking_strategy": "semantic" |
| }, |
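The crop-then-downsample preprocessing described above can be sketched as follows. This is a minimal illustration, not the authors' code: the paper does not specify the downsampling filter, so 2×2 block averaging is assumed, and `preprocess_norb` is a hypothetical name.

```python
import numpy as np

def preprocess_norb(image_96):
    """Crop a 96x96 NORB image to 64x64 by trimming the background
    border, then downsample to 32x32.
    Assumptions: a centered 16-pixel border crop and 2x2 block
    averaging (the paper does not name the exact filter)."""
    assert image_96.shape == (96, 96)
    # Remove a 16-pixel background border on each side -> 64 x 64.
    cropped = image_96[16:80, 16:80]
    # 2x2 block averaging -> 32 x 32.
    return cropped.reshape(32, 2, 32, 2).mean(axis=(1, 3))
```

The resulting 32 × 32 images are flattened into the 1024 visible units used by the first-level RBM described later.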
| { |
| "chunk_id": "b7c813b9-f262-496a-8dab-94271206243c", |
| "text": "Each of the 90 samples is associated with a distinct state of the environment. These states are arranged in a 2D map, which the agent is intended to navigate. The agent has four actions available: up, down, left, and right. For each action, the agent has a 90% chance of moving in the direction indicated by the action, and a 10% chance of staying in the same state. If the agent takes an action that would move it off of the map, then it remains in the same state deterministically. The NORB map is shown in Figure 5.\n[Figure 5: The NORB environment. The agent moves by taking actions: Up, Down, Left, or Right.]\nWe intended this environment to simulate a camera moving through an environment in which objects are placed, and the camera views them at different angles, depending on its position relative to the object. There is a reward value of +1 associated with the top row of car images in the NORB map (top right row) and a reward of 0 for all other states. Episodes consist of the agent starting in the top left corner of the NORB map and terminate upon the agent reaching any of the non-zero-reward states.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 21, |
| "total_chunks": 31, |
| "char_count": 1115, |
| "word_count": 206, |
| "chunking_strategy": "semantic" |
| }, |
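The MDP just described can be sketched as a small gridworld. This is an illustrative sketch only: the 5 × 18 layout (classes × azimuths) and the choice of reward cells are assumptions, since the paper states only that there are 90 states with +1 reward on the top row of car images; `NorbGridEnv` is a hypothetical name.

```python
import random

class NorbGridEnv:
    """Sketch of the NORB navigation MDP: 90 states on an assumed
    5 x 18 grid, four actions, 90% move / 10% stay dynamics, and
    episodes that terminate on the (assumed) reward cells."""
    ROWS, COLS = 5, 18
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def __init__(self, reward_cells, seed=None):
        self.reward_cells = set(reward_cells)  # terminal states, reward +1
        self.rng = random.Random(seed)
        self.state = (0, 0)                    # episodes start top-left

    def step(self, action):
        r, c = self.state
        if self.rng.random() < 0.9:            # 90% chance the move succeeds
            dr, dc = self.MOVES[action]
            nr, nc = r + dr, c + dc
            # Moving off the map leaves the state unchanged, deterministically.
            if 0 <= nr < self.ROWS and 0 <= nc < self.COLS:
                r, c = nr, nc
        self.state = (r, c)
        reward = 1.0 if self.state in self.reward_cells else 0.0
        done = self.state in self.reward_cells
        return self.state, reward, done
```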
| { |
| "chunk_id": "e1ec82e4-dcda-4bf8-ba83-7c053c2e3039", |
| "text": "In order to simulate rewards, a logistic regression model over the predefined state rewards is learned with a learning rate of 0.0001, over 10^5 training iterations. The NORB classes are used as input, and their associated rewards serve as the output.\n4.4.1 The Model for the NORB Environment", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 22, |
| "total_chunks": 31, |
| "char_count": 291, |
| "word_count": 48, |
| "chunking_strategy": "semantic" |
| }, |
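The reward model above amounts to a standard logistic regression fit. A minimal sketch, assuming plain gradient ascent on the log-likelihood with one-hot class inputs (the paper does not specify the optimizer; `train_reward_model` is a hypothetical name, and the iteration count is reduced here for illustration):

```python
import numpy as np

def train_reward_model(X, y, lr=1e-4, iters=10**5):
    """Logistic regression mapping NORB class inputs X to binary
    rewards y by gradient ascent on the log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted reward probability
        w += lr * X.T @ (y - p)              # log-likelihood gradient
    return w
```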
| { |
| "chunk_id": "27c19bea-384a-41d5-ab77-1941db6b6650", |
| "text": "Training the deep model was done using the 90 states as data, which, at 32 × 32 pixels, required 1024 visible units at the bottom of the network. Since the NORB data consists of real-valued pixel intensities, the first-level RBM was trained using a Gaussian distribution over the visible units with binary hidden units, similar to the approach taken in [7]. For the Gaussian-binary RBM we used 4000 binary hidden units, a learning rate of 0.00001, over 20000 epochs, with CD5, and nine minibatches (10 samples per batch). The remainder of the deep model was trained with binary units, as in the MNIST case. The binary deep model consists of four levels, with 2000, 1000, 200, and 100 hidden units, learning rates of 0.001, 0.001, 0.001, and 0.004 respectively, and training sweeps of 40000, 20000, 20000, and 200000 respectively. A single batch and CD5 were used over each level. The learning over the deep model was deterministic, meaning that unit values were not sampled during learning. Instead, the probabilities were used as the activities over each layer to generate a single representation for each state. This was acceptable since only one sample is used for each state.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 23, |
| "total_chunks": 31, |
| "char_count": 1171, |
| "word_count": 198, |
| "chunking_strategy": "semantic" |
| }, |
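The CD5 training of the first-level Gaussian-binary RBM can be sketched as one contrastive-divergence gradient step. This is a simplified single-example sketch under the usual unit-variance assumption for the Gaussian visibles, not the authors' implementation; `cd_k_update` is a hypothetical name.

```python
import numpy as np

def cd_k_update(v0, W, b_vis, c_hid, k=5, lr=1e-5, rng=None):
    """One CD-k step for a Gaussian-binary RBM (unit-variance visibles).
    Shapes: v0 (n_vis,), W (n_vis, n_hid), b_vis (n_vis,), c_hid (n_hid,)."""
    rng = np.random.default_rng() if rng is None else rng
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    def sample_h(v):
        p = sigmoid(v @ W + c_hid)
        return p, (rng.random(p.shape) < p).astype(float)

    p_h0, h = sample_h(v0)
    v = v0
    for _ in range(k):                       # k steps of block Gibbs sampling
        # Gaussian visibles: mean is the top-down input, unit variance.
        v = W @ h + b_vis + rng.standard_normal(b_vis.shape)
        _, h = sample_h(v)
    p_hk, _ = sample_h(v)
    # Positive phase minus negative phase.
    W += lr * (np.outer(v0, p_h0) - np.outer(v, p_hk))
    b_vis += lr * (v0 - v)
    c_hid += lr * (p_h0 - p_hk)
    return W, b_vis, c_hid
```

The learning rate default of 1e-5 mirrors the 0.00001 quoted above; in practice the update would be averaged over a minibatch.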
| { |
| "chunk_id": "092e5a12-caae-46e3-a423-5e0b5274146e", |
| "text": "Training the entire auto-encoder portion of the NORB model took roughly 12 hours for the Gaussian-binary RBM and one day for the binary deep model. The data for the temporal layer was generated by choosing actions uniformly at random, for a total of 7200 steps. This data was then partitioned into pairs generated from each action. The result was roughly 20 sampled transitions for each state and action, sufficient to model the transitions of the environment. Note that training samples in this case consist of a pair of top-level representations of NORB states. In the case of our model, a training sample for the temporal layer consists of 200 visible binary values, where the first 100 values represent the state at some time, st, and the next 100 values represent the state at the next time step, st+1, sampled from the true MDP defined by the NORB map. This data was used to train a model for each action, where the temporal layer was trained stochastically with 500 binary hidden units, a learning rate of 0.005, over 2000 epochs, using CD5, and two batches each containing 900 samples. Training time here was about 30 minutes. While training the temporal layer, the total variation of the temporal model is sampled after every 150 epochs of learning. To determine the transition function of the model, 20 next-step predictions are sampled for each NORB state. The Euclidean distance between each model sample and all of the NORB classes is computed, and the class yielding the smallest distance is chosen as the class of the sample. The total variation between this transition function estimate and the true transition function of the environment is then computed. Despite the error introduced by the mapping scheme described above, clear improvement is observed in the transition function of the model in Figure 6.\n4.5 Learning Control for the NORB Domain\nTo begin learning the value function over the NORB states, the agent is initialized in the top left corner of the map, then allowed to choose among the available actions.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 24, |
| "total_chunks": 31, |
| "char_count": 2245, |
| "word_count": 380, |
| "chunking_strategy": "semantic" |
| }, |
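The evaluation just described, building an empirical transition matrix from sampled next-step predictions and comparing it to the true one by total variation, can be sketched directly. The function names are illustrative; the per-state distances are averaged here, which is one reasonable reading of the single "total variation" number the paper plots.

```python
import numpy as np

def empirical_transitions(samples, n_states):
    """Estimate P(s'|s) from sampled (s, s') index pairs, as with the
    20 next-step predictions per state described above."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in samples:
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

def total_variation(P_model, P_env):
    """Total variation per state, TV(s) = 0.5 * sum_s' |P_model - P_env|,
    averaged over states."""
    return (0.5 * np.abs(P_model - P_env).sum(axis=1)).mean()
```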
| { |
| "chunk_id": "fbbd4086-13d0-41df-ab4e-8be11b71cb4a", |
| "text": "[Figure 6: The total variation of the NORB models against the true transition function, plotted against training epochs of the Dyna model.] The environment is episodic and therefore", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 25, |
| "total_chunks": 31, |
| "char_count": 210, |
| "word_count": 33, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c30858ba-4afa-4850-97c4-715e72fc52e6", |
| "text": "the agent returns to the initial state after visiting a state with reward. (The plots in Figure 6 show the results for the \"UP\" and \"LEFT\" actions respectively; the remaining actions have similar curves.) The experiments proceed with a learning rate of α = 0.001 and a discount factor γ = 0.9. The learning rate over the simulated samples decays to 95% of its value after each episode, but falls no lower than 0.0001. Epsilon-greedy exploration is used with an initial value of ϵ = 0.9 that decays to 90% of its value after each episode until it reaches 0.05, at which value it remains for all future episodes. The Dyna implementation was first compared with a baseline TD(0)/SARSA implementation over the environment, which simply learns by sampling from the environment.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 26, |
| "total_chunks": 31, |
| "char_count": 758, |
| "word_count": 132, |
| "chunking_strategy": "semantic" |
| }, |
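The per-episode decay schedules stated above can be written down explicitly. A small sketch (the function name and list-returning interface are illustrative):

```python
def decayed_schedules(n_episodes, alpha0=0.001, alpha_min=0.0001,
                      eps0=0.9, eps_min=0.05):
    """Per-episode schedules matching the text: the simulated-sample
    learning rate decays to 95% of its value each episode (floor 0.0001),
    and epsilon decays to 90% each episode (floor 0.05)."""
    alphas, epsilons = [], []
    alpha, eps = alpha0, eps0
    for _ in range(n_episodes):
        alphas.append(alpha)
        epsilons.append(eps)
        alpha = max(alpha * 0.95, alpha_min)
        eps = max(eps * 0.90, eps_min)
    return alphas, epsilons
```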
| { |
| "chunk_id": "eb632b2e-499b-49d3-b092-5f6bdfd23de6", |
| "text": "For every step taken in the actual environment, 50 simulated steps are used to further train the function approximation model using the rule described in §2, with the function approximation parameters initialized to 0. The simulated states of the model are generated as detailed in §3.1. The Dyna samples contributed to learning with fewer episodes and less data overall (Fig. 7). For the results depicted, Dyna returns were averaged over 10 independent runs on five separate models, trained independently. The SARSA returns come from the average over 50 independent runs. Finally, the generative Dyna returns were shifted by 7200 samples to account for the extra data used to train the models (however these samples", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 27, |
| "total_chunks": 31, |
| "char_count": 713, |
| "word_count": 114, |
| "chunking_strategy": "semantic" |
| }, |
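The planning loop described above, one real transition followed by k simulated transitions, each applying the same TD(0) update to a linear value function, can be sketched as follows. This is a generic sketch rather than the authors' code: `env_step` and `model_step` are assumed callables returning `(phi, reward, phi_next)` feature-vector transitions, standing in for the environment and the learned generative model.

```python
import numpy as np

def dyna_td0(env_step, model_step, n_features, n_real_steps,
             k=50, alpha=0.001, gamma=0.9):
    """Dyna-style TD(0) with linear value function V(s) = theta . phi(s).
    After every real transition, k simulated transitions from the model
    also update theta (initialized to 0, as in the text)."""
    theta = np.zeros(n_features)

    def td_update(phi, r, phi_next):
        nonlocal theta
        delta = r + gamma * (theta @ phi_next) - theta @ phi
        theta = theta + alpha * delta * phi

    for _ in range(n_real_steps):
        td_update(*env_step())              # one step of real experience
        for _ in range(k):                  # k "fantasized" planning steps
            td_update(*model_step())
    return theta
```

With a perfect model the simulated updates simply multiply the effective amount of experience by k + 1, which is the mechanism behind the data-efficiency gains in Fig. 7.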
| { |
| "chunk_id": "62583ab4-66c3-4c2b-ad6d-fa292aea9802", |
| "text": "[Figure 7: [Top] The averaged returns over the NORB environment: discounted returns against steps from the actual environment (×10^4), comparing TD(0)/SARSA, generative Dyna with 50-step trajectories, generative Dyna with 10-step trajectories, and linear Dyna with 10-step trajectories. [Bottom] Sample walks over a linear (left) and generative (middle) Dyna models.]", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 28, |
| "total_chunks": 31, |
| "char_count": 552, |
| "word_count": 103, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f65aaea5-1211-4268-8596-9f7895f0cf65", |
| "text": "The walks begin in the bottom left cell and proceed left to right along the columns and up the rows. At right is shown the policy learned using generative Dyna (bright spots indicate the direction taken in each grid square). may have been used for Dyna learning as well, but the generative model learns faster regardless). Also shown (Fig. 7) is the policy inferred from one of the Dyna models. Generative Dyna with K = 50 at 7500 steps over the environment took about 14 hours total. Next, we compared our results to those of Sutton et al. (2008) using a linear Dyna model. This was accomplished by training a linear model using the documented implementation [15]. The linear models were trained over the same data as the generative models, with a learning rate of 0.001 for 5 sweeps over all of the samples, at which point the model weights had converged. As before, five independent models were trained, with returns averaged over 10 independent runs on each model and then averaged over the models. Dyna was executed over the linear models using the same parameters as those used for Dyna with the generative model. The plots in Fig. 7 compare the returns for each model with different trajectory lengths. It should be noted that the trajectory samples obtained from the linear models were of poor quality after a few steps (Fig. 7); this prevented us from executing trajectories longer than around 10 steps, limiting the amount of accurate \"fantasizing\" that could be done with the linear model, and consequently its effectiveness as a simulator.\n5 Conclusions and Future Work", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 29, |
| "total_chunks": 31, |
| "char_count": 1583, |
| "word_count": 271, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e91ccc9f-d7b1-4eec-a4e8-61ac30bf6879", |
| "text": "Deep models provide a means to learn good high-level state representations for domains consisting of high-dimensional data. Using these high-level feature representations further aids in learning a good temporal model of the environment, along with timely prediction of future states. The purpose of this paper is to demonstrate the benefits of implementing Dyna with non-linear generative models in comparison to the linear expectation models used in the state of the art. We showed this using DBNs; however, we expect that other non-linear models could be used to the same end, and this is a suitable direction for future research. The results of our experiments in §4 demonstrate two important properties of our approach: that the generative model of the environment can be trained with high accuracy, and that the model, when used within the Dyna architecture, can significantly speed up learning of the value function using linear function approximation. We have also demonstrated that Dyna with generative models significantly outperforms Dyna implemented with linear models in a domain consisting of high-dimensional images with some variance in the state transition function. Many problems in reinforcement learning derive from real-world domains with partial observability and non-stationarity in the environment dynamics. Thus, further investigation into how well this method handles changing time dependencies and more complex domains is a very important direction for future work.", |
| "paper_id": "1805.10129", |
| "title": "Dyna Planning using a Feature Based Generative Model", |
| "authors": [ |
| "Ryan Faulkner", |
| "Doina Precup" |
| ], |
| "published_date": "2018-05-23", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.10129v1", |
| "chunk_index": 30, |
| "total_chunks": 31, |
| "char_count": 1505, |
| "word_count": 224, |
| "chunking_strategy": "semantic" |
| } |
| ] |