[
{
"chunk_id": "8589bfcb-6699-40dd-bf96-177a9099099a",
"text": "Combining Model-Free Q-Ensembles and\nModel-Based Approaches for Informed Exploration Sreecharan Sankaranarayanan∗ Raghuram Mandyam Annasamy∗\nLanguage Technologies Institute Language Technologies Institute\nCarnegie Mellon University Carnegie Mellon University\nPittsburgh, PA 15213 Pittsburgh, PA 15213\nsreechas@cs.cmu.edu rannasam@andrew.cmu.edu2018\nKatia Sycara Carolyn Penstein Rosé\nRobotics Institute Language Technologies InstituteJun Carnegie Mellon University Carnegie Mellon University\nPittsburgh, PA 15213 Pittsburgh, PA 15213\nkatia@cs.cmu.edu cprose@cs.cmu.edu12 Q-Ensembles are a model-free approach where input images are fed into differ-[cs.LG] ent Q-networks and exploration is driven by the assumption that uncertainty is\nproportional to the variance of the output Q-values obtained. They have been\nshown to perform relatively well compared to other exploration strategies. Further,\nmodel-based approaches, such as encoder-decoder models have been used successfully for next frame prediction given previous frames. This paper proposes to\nintegrate the model-free Q-ensembles and model-based approaches with the hope\nof compounding the benefits of both and achieving superior exploration as a result. Results show that a model-based trajectory memory approach when combined\nwith Q-ensembles produces superior performance when compared to only using\nQ-ensembles. 1 Introduction and Related Work Quantifying predictive uncertainty is a problem that has started to receive a lot of attention as Deep\n[5, 6] but have continued to produce overconfident estimates. These overconfident estimates can\nbe detrimental or even harmful for practical applications [7]. Therefore, quantifying predictive\nuncertainties aside from just the accuracy of networks is an important problem. 
The contribution of our work is to combine an encoder-decoder model-based architecture and trajectory memory with the model-free Q-ensemble approach for the purpose of uncertainty estimation, which in the context of reinforcement learning drives exploration. 1.1 Neural Network Ensembles for Uncertainty Prediction Current approaches to quantifying uncertainty have mostly been Bayesian: a prior distribution is specified over the parameters of the neural network, and the posterior distribution computed from the training data is then used to calculate the uncertainty [8]. Since this form of Bayesian inference is computationally intractable, approaches have ranged from the Laplace approximation [9] and Markov Chain Monte Carlo methods [10] to Variational Bayesian Inference",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 0,
"total_chunks": 16,
"char_count": 2574,
"word_count": 339,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8806fb84-a915-45bf-bce3-751089a0b6fc",
"text": "Work in progress. ∗Both authors contributed equally to this work. methods [11, 12, 13]. These methods however suffer from issues due to bounds of computational\npower and over-reliance on the correctness of the prior probability distribution over the parameters. Having priors of convenience can in fact lead to unreasonable uncertainty estimates [14]. In order to\novercome these challenges and produce a more robust uncertainty estimate, Lakshminarayanan et\nal. [15] proposed using an ensemble of Neural Networks trained under a defined scoring rule. This\napproach when compared to Bayesian approaches is much simpler, has parallelization advantages and\nachieves state-of-the-art or better performance. State-of-the-art performance before this was achieved\nby MC-dropout which can also in essence be considered an ensemble approach where the predictions\nare averaged over an ensemble of Neural Networks with parameter sharing [16]. 1.2 Model-Free Q-Ensembles This ensemble approach for uncertainty estimation in Neural Networks motivated its use for uncertainty estimation for exploration in the case of Deep Reinforcement Learning in the form of\nQ-Ensembles [17]. Specifically, an ensemble voting algorithm is proposed where the agent takes\naction based on a majority vote over the Q-ensemble. The exploration strategy described uses\nthe estimate of the confidence interval to then optimistically explore in the direction of the largest\nconfidence interval (highest uncertainty). This approach was demonstrated to improve significantly\nover an Atari benchmark. The Q-Ensemble approach is an example of a model-free reinforcement\nlearning approach where we do not need to infer the environment in the learning process. Model-free\napproaches are however generally high in sample complexity. Sampling from a learned model of the\nenvironment can help us mitigate this problem.",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 1,
"total_chunks": 16,
"char_count": 1873,
"word_count": 271,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e38b10e3-4e47-4425-94be-3ea1b327190a",
"text": "1.3 Model-Based Encoder-Decoder Approach to Next-Frame Prediction Model-based approaches in the Atari environment (Arcade Learning Environment [18]) have been\nsuccessfully used in the past for the problem of next-frame prediction. The game environments are\nhigh in complexity with the frames themselves being high-dimensional, involving tens of objects that\nare being controlled directly or indirectly by agent actions and cases of objects leaving and entering\nthe frame. Oh et al. [19] described an encoder-decoder model that produces visually-realistic next\nframes which can then be used as control for action-conditionals in the game. They further described\nan informed exploration approach called trajectory memory that follows the ϵ-greedy strategy but\nleads to the frame that was least visited in the last n-steps instead of random exploration. 1.4 Need for Combining Model-Free and Model-Based Approaches In order to address the issues related to modeling of the latent space, we propose taking advantage of\na combination of the model-free and model-based approaches described so far. Typically, model-free\napproaches are very effective at learning complex policies but convergence might required millions\nof trials and could lead to globally sub-optimal local minima [20]. On the other hand, model-based\napproaches have the theoretical benefit of being able to generalize to new tasks better and reduce the\nnumber of trials required significantly [21, 22] but require an environment which is either engineered\nor learned well to achieve this generalization. Another advantage of the model-free approach is\nthat it can model arbitrarily complex unknown dynamics better but is substantially less sample\nefficient as indicated earlier. Prior attempts at combining the two approaches while retaining the\nrelative advantages have been met with some success [23, 24, 25]. 
We therefore combine an encoder-decoder model, together with the informed trajectory-memory exploration strategy proposed by Oh et al. [19], with the Q-ensemble, and report the results. Section 2 describes the two methods we implemented for combining model-based methods and model-free Q-ensembles for exploration. Section 3 details our experimental setup. Section 4 presents the results we obtained and discusses the methods we attempted in light of our experiments and related work.",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 2,
"total_chunks": 16,
"char_count": 2414,
"word_count": 359,
"chunking_strategy": "semantic"
},
{
"chunk_id": "75680bb1-3e7b-4b69-b7e7-d71ec3566526",
"text": "Figure 1: Combination of Auto-encoder and Q-Ensemble (a) Ground Truth (b) Predicted (c) Ground Truth (d) Predicted Figure 2: Side-by-side Comparison of Ground Truth and Predicted Next Frames by Model-based\nEncoder-Decoder Approach at 744000 and 746000 Iterations Respectively. Figure 3: Training and validation loss for frame prediction using an auto-encoder Figure 4: DDQN and DDQN Ensemble using ϵ-greedy and UCB approaches Figure 1 visually represents the combination of the model-based and model-free Q-ensemble approaches to guide exploration. The model-based encoder-decoder is first used to predict next frames\nover all possible actions. Given each of those actions, now the Q-values associated with them are\npredicted using the Q-ensemble and the variance is used to drive exploration. This is somewhat\nsimilar to the model predictive control (MPC) framework [26] (but instead of planning over the\naction-tree, we simply repeat each action multiple times). Further details about the methods are\nprovided in the following sections.",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 4,
"total_chunks": 16,
"char_count": 1038,
"word_count": 152,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ce08d91d-a5b1-4022-9ee4-8e4b6ec5c292",
"text": "2.1 Method 1 - Feeding Auto-Encoder Images Directly To Q-Ensemble In this method, we use the auto-encoder model to generate next-step frames for all of the actions\nand feed in these predicted frames to a Q-ensemble. We use the variance in the predicted Q-values\n(for each predicted frame) to estimate how likely this state has been visited during Q-learning (visit\nfrequency). So to drive exploration, we need to simply pick those actions for which there is high\nvariance. Given a well trained auto-encoder model, we can unroll many steps of predicted frames and\nselect paths with high-variance. Figure 1 shows the architecture first of the encoder-decoder model which is composed of encoding\nlayers that extract spatio-temporal features from the input frames, action-conditional transformation\nlayers that then transform the encoded features into a next-frame prediction by using action variables\nas an additional input, finally followed by decoding layers that map the high level features back onto\nthe pixel space. 2.1.1 Encoding, Action-Conditional Transformation, Decoding Similar to [19, 25], the convolutional layers use a feedforward encoding that takes a concatenated set\nof previous images and extracts spatio-temporal features from them. The convolutional layers are\nessentially a functional transformation from the pixel space to a high level feature vector space by\npassing through multiple convolutional layers followed by a fully-connected layer at the end, each of\nwhich is followed by a non-linearity. The encoded feature vector is therefore - henct = Conv(xt−m+1:t) where xt−m+1:t ∈R(mxc)xhxw denotes m frames of hxw pixels with c color channels at time t. The encoded feature vector is now transformed using multiplicative interactions with the control\nvariables. 
Ideally, the transformation would look as follows: h^dec_{t,i} = Σ_{j,l} W_{ijl} h^enc_{t,j} a_{t,l} + b_i, where h^dec_t is the action-transformed feature, a_t is the action vector at time t, W is a 3-way tensor weight, and b is the bias term.",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 5,
"total_chunks": 16,
"char_count": 2006,
"word_count": 311,
"chunking_strategy": "semantic"
},
{
"chunk_id": "132d7837-4aff-4dd7-9b2f-ccb1e1575be5",
"text": "However, computing the 3-way tensor is not scalable but allows the\narchitecture to model different transformations for different actions as has been demonstrated in prior\nwork [27, 28, 29]. Therefore, an approximation of the 3-way tensor is used instead - hdect = W dec((W enchenct ) ∗(W aat)) + b De-convolutions using CNNs perform the inverse operation of convolutions, transforming the 1 x 1\nspatial regions onto d x d using de-convolutional kernels. In the architecture we implemented, the\nDe-convolutions was performed as follows - xt+1 = Deconv(Reshape(hdec))\nwhere Reshape is a fully connected layer and Deconv consists of multiple deconvolution layers. 2.1.2 Modifying Q-Ensemble For each state-action pair, the encoder-decoder model produces an image. We now need to determine,\nwhich of these we are most uncertain about, in order to explore in that direction.",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 6,
"total_chunks": 16,
"char_count": 869,
"word_count": 135,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d0b6038f-38e4-42bf-9f92-ac0b808b922c",
"text": "For this purpose,\nthe images are passed to several Q-networks. Each Q-network provides a distribution over actions\nand therefore outputs corresponding to the number of actions (9 in Pacman). The total number of\noutputs therefore will be number of networks (which in our case was 5) times the number of actions. A variance metric is then calculated over all of these outputs to explore in the direction of highest\nvariance.",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 7,
"total_chunks": 16,
"char_count": 422,
"word_count": 70,
"chunking_strategy": "semantic"
},
{
"chunk_id": "dac4e87d-f191-4eda-bfd3-68889cc9eb1b",
"text": "N K K\n1 2 Uncertainty(st) = X X Qi(st, aj) −1 X Qi(st, aj)\nN K\nj=1 i=1 i=1\nAction(st) = arg max{Uncertainty(st+1) where st+1 = T(st, a) ∀a} where N is the number of actions, K is the number of Q-networks (ensemble), T(st, a) is the transition function (frame prediction model) and Action(st) denotes the action with highest uncertainty. Another way of estimating the variance can be, (converts Q-estimates to Value estimates of st) K K\nUncertainty(st) = X maxN Qi(st, aj) −1 X maxN Qi(st, aj) 2\nj=1 K j=1\ni=1 i=1 These methods of exploration can be combined with other common exploration strategies like\nϵ-greedy where instead of selecting random actions, we can select the action with highest uncertainty. 2.2 Method 2 - Trajectory Memory and Q-Ensembles The trajectory memory method as described in [19] uses an ϵ-greedy exploration policy. The trajectory\nmemory is used to measure the similarity between the predicted frame and the most recent d frames\nis calculated to give the estimated visit frequency. We use the same settings as described in [19];\nd = 20, δ = 50 and σ = 100. nD(st) = X k(st, si); where k(x, y) = exp −1 X min(max((xj −yj)2 −δ, 0), 1)\ni=1 j",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 8,
"total_chunks": 16,
"char_count": 1165,
"word_count": 209,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8e182c36-ef1e-4dd5-b500-cd6586e963c9",
"text": "We can combine this exploration method with the variance in Q-ensembles to yield a more informed\nexploration strategy. This is similar to the previous method except that the uncertainty is calculated\non the current frame (st) instead of the predicted frames (s′t+1). The predicted frames are used only\nto estimate their visit frequency. The algorithm used to select actions using the combined strategies is\ndescribed in Algorithm 1. Algorithm 1 Combined Exploration Strategy\n1: procedure ACTION_SELECTOR(current state: st)\n2: Init ϵ = 1.0\n3: for each action a\n4: Generate predicted next state using auto-encoder: s′t+1 = T(st, a)\n5: Estimate visit frequency: nD(s′t+1)\n6: UCB like estimate: µa + λσa\n1 Pki (Qi(st, a) −µa 2 where µa = K1 Pki Qi(st, a) and σa =q K\n7: Set score(a) = µa + λσa −ϵ nD(s′t+1)\n8: action = arg maxa score(a)\n9: Decay, ϵ = ϵ/decay factor\n10: end procedure All our experiments are run on the Ms. Pacman Atari environment [18] where exploration is\nchallenging and it is important to achieve better scores. Below are the experimental settings and\nhyper-parameters used.",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 9,
"total_chunks": 16,
"char_count": 1090,
"word_count": 184,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2ca62ca9-84a1-440a-aa90-f0d2ae8371ac",
"text": "• Network Architecture: For the Q-ensembles, we train 5 different Q-networks and each one\nuses a standard DQN architecture: Conv(32, 8, 4), Conv(64, 4, 2), Conv(64, 3, 1) and\nLinear(256) with ReLU activations throughout. The auto-encoder frame prediction model is\nthe same as the one used in [19]. • Optimization: For training the Q-Ensemble, we use Adam with a learning rate = 0.0001,\nweight decay = 0 and gradient norm clipped at 10 for every layer. • Training details: Batch size = 32, Training Frequency = 4 (train every 4 frames), Discount\nFactor (gamma) = 0.99, Size of Replay Memory = 10000, Target network sync frequency =\n1000 (fully replaced with the weights from training network) • Frame Preprocessing: We use the same frame processing technique as used by [30] (frame\nskipping, max over 4 frames, RGB to gray-scale).",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 10,
"total_chunks": 16,
"char_count": 829,
"word_count": 139,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0c516c91-b638-40e6-b3eb-ee871fd22455",
"text": "• Exploration: For ϵ-greedy based strategies, Initial Epsilon = 1.0, Final Epsilon = 0.01,\nExploration Timesteps = 1000000. UCB like strategy uses λ = {1.0 0.1, 0.01, 0.001} 4 Results and Discussion 4.1 Separately Training Auto-Encoder and Q-Ensemble We discuss about the results of training the auto-encoder and Q-ensemble models separately as\nreported in [19] and [17] respectively. Model-Based Approach Results from the next-frame prediction done by Oh et al.'s model-based\napproach [19] are shown in the figures. Figure 2 shows ground truth next frames and predicted next\nframes produced by the encoder-decoder model side-by-side. Figure 3 further shows the training and\nvalidation losses over 800000 iterations. Model-Free Q-Ensemble Figure 4 shows the results of training different Q-Ensemble methods and\nthe standard Double DQN implementation. Due to the inherent slowness in training ensembles all of\nthe models were not trained for the full 8000 epochs but they have been trained sufficiently to say that\nthe ensemble with ϵ-greedy outperforms the UCB and standard DDQN approaches which replicates\nthe results reported by Chen et al. [17]. These Q-ensemble models with different exploration\nstrategies will serve as our baseline to beat. (a) Average Rewards (Every 100 episodes) (Training)",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 11,
"total_chunks": 16,
"char_count": 1298,
"word_count": 196,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d1fd511c-3e36-47cf-b80d-93e9777785b7",
"text": "Figure 5: Method 1 - Feeding auto-encoder images directly 4.2 Method 1 - Combining Encoder-Decoder Model and Q-Ensemble Fig 5 shows the reward and reward averaged per 100 instances for the combination of the encoderdecoder and q-ensemble models using different seeds. Fig 5b shows the loss for this method (Method\n1).",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 12,
"total_chunks": 16,
"char_count": 317,
"word_count": 51,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8a282354-4dc4-447b-8802-24f20bbd389b",
"text": "As can be seen, in comparison to just using the Q-ensemble, the combination hovers between the\n400 and 800 mark for rewards without any improvement in training behavior even after 2.5 million\ntimesteps. Also, training this model is really slow because of the multiple steps involved in action\nselection (generating next frames, feed to Q-ensemble, calculate path (if any) uncertainties). We\nrestrict the path to be just the immediate action but repeat the action multiple times (4) to obtain a\nreasonable difference in the next states predicted by the auto-encoder. In our analysis we found that the frames predicted by the auto-encoder are highly noisy themselves\nand this can lead to poor uncertainty estimates from the Q-ensemble (the Q-ensemble almost always\nhas never seen such frames). This tends to bias the exploration in the wrong direction where actions\nwith noisy predicted frames are always preferred instead of unexplored actions. Our results for this\nmethod are quite similar to the results obtained by [19](Section 4.2) when they tried to replace the\nemulator with the frames prediction by the action-conditional model during testing. (a) Average Rewards (Every 100 episodes) (Training) Figure 6: Method 2 - Combining trajectory memory and q-ensemble estimates 4.3 Method 2 - Trajectory Memory and Q-Ensembles",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 13,
"total_chunks": 16,
"char_count": 1324,
"word_count": 207,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4ffbd40b-c4f5-4144-946b-abb7c2280b80",
"text": "Figure 6a and 6b show the reward and loss plots obtained during training. As seen from the figures,\nthe combination of trajectory memory and Q-ensemble variance (ucb-like exploration) often yields\nbetter rewards and reaches higher rewards very quickly (< 500K frames) compared to either of the\nbaselines (Double DQN, Q-Ensembles). However, the behavior is highly dependent on the starting\nseeds and can lead to a lot of variance in performance (as seen in 6a). Training is again slow (not\nas slow as Method 1 though) and can take upto 12-15 hours to reach 1M timesteps. The results are\nencouraging and show that combining estimated visit frequency and variance in Q-estimates to drive\nexploration is much better than ϵ-greedy or plain UCB-like exploration. 5 Conclusion and Future Work",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 14,
"total_chunks": 16,
"char_count": 785,
"word_count": 127,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b90d5959-38d6-42ab-a0e7-f41dd3f8e5f5",
"text": "Intelligent exploration strategies other than dithering ones like ϵ-greedy are important for Q-learning\nin large state-spaces. We find that combining trajectory memory based visit estimates with variance\nestimates from a Q-ensemble improves exploration and helps the agent reach better rewards much\nfaster than other methods. In the future, we hope to repeat these experiments on other Atari games like QBert or Seaquest\nwhere exploration is harder.",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 15,
"total_chunks": 16,
"char_count": 449,
"word_count": 66,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5f0545ff-158d-4fcf-af57-836269882942",
"text": "As observed, the frames predicted by the auto-encoder are highly noisy themselves and this can lead to poor uncertainty estimates from the Q-ensemble. Using generatordiscriminator methods that improve the quality of predicted future frames could be employed in\nthe place of the encoder-decoder models. In order to address the stability issues generally faced by\nGANs, architectures like Wasserstein GANs will also be an important direction of research. With\nmore realistic frames, it is easier to obtain unbiased uncertainty estimates from the Q-ensemble to\ndrive exploration. Also, as exploration strategies become complex, training becomes really slow and\nit will become important to use computational tricks like separating exploration from Q-network\ntraining, parallel multiple environments etc. to reduce training time.",
"paper_id": "1806.04552",
"title": "Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration",
"authors": [
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
],
"published_date": "2018-06-12",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.04552v1",
"chunk_index": 16,
"total_chunks": 16,
"char_count": 824,
"word_count": 118,
"chunking_strategy": "semantic"
}
]