[
{
"chunk_id": "6d67e35f-3457-484b-9b6c-7757151ea91d",
"text": "Deep Reinforcement Learning for General Video Game AI\nRuben Rodriguez Torrado* (New York University, New York, NY; rrt264@nyu.edu)\nPhilip Bontrager* (New York University, New York, NY; philipjb@nyu.edu)\nJulian Togelius (New York University, New York, NY; julian.togelius@nyu.edu)\nJialin Liu (Southern University of Science and Technology, Shenzhen, China; liujl@sustc.edu.cn)\nDiego Perez-Liebana (Queen Mary University of London, London, UK; diego.perez@qmul.ac.uk)",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 0,
"total_chunks": 37,
"char_count": 449,
"word_count": 57,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e690b4c2-ac09-4bd7-aa36-dfe7fd0e3938",
"text": "Abstract—The General Video Game AI (GVGAI) competition and its associated software framework provides a way of benchmarking AI algorithms on a large number of games written in a domain-specific description language. While the competition has seen plenty of interest, it has so far focused on online planning, providing a forward model that allows the use of algorithms such as Monte Carlo Tree Search.\nIn this paper, we describe how we interface GVGAI to the OpenAI Gym environment, a widely used way of connecting agents to reinforcement learning problems. Using this interface, we characterize how widely used implementations of several deep reinforcement learning algorithms fare on a number of GVGAI games. We further analyze the results to provide a first indication of the relative difficulty of these games relative to each other, and relative to those in the Arcade Learning Environment under similar conditions.\nI. INTRODUCTION\nThe realization that video games are perfect testbeds for artificial intelligence methods has in recent years spread to the whole AI community, in particular since Chess and Go have been effectively conquered, and there is an almost daily flurry of new papers applying AI methods to video games. In particular, the Arcade Learning Environment (ALE), which builds on an emulator for the Atari 2600 games console, has been used in numerous published papers since DeepMind's landmark paper showing that Q-learning combined with deep convolutional networks could learn to play many of the ALE games at superhuman level [2].\nAs an AI benchmark, ALE is limited in the sense that there is only a finite set of games. This is a limitation it has in common with any framework based on existing published games. However, for being able to test the general video game playing ability of an agent, it is necessary to test on games on which the agent was not optimized.\nThe General Video Game AI (GVGAI) competitions and framework were created with the express purpose of providing a versatile general AI benchmark [3], [4], [5], [6]. The planning tracks of the competition, where agents are given a forward model allowing them to plan but no training time between games, have been very popular and seen a number of strong agents based on tree search or evolutionary planning submitted. A learning track of the competition has run once, but not seen many strong agents, possibly because of infrastructure issues. For the purposes of testing machine learning agents (as opposed to planning agents), GVGAI has therefore been inferior to ALE and similar frameworks.\nIn this paper, we attempt to rectify this by presenting a new infrastructure for connecting GVGAI to machine learning agents. We connect the framework via the OpenAI Gym interface, which allows the interfacing of a large number of existing reinforcement learning algorithm implementations. We plan to use this structure for the learning track of the GVGAI competition in the future. In order to facilitate the development and testing of new algorithms, we also provide benchmark results of three important deep reinforcement learning algorithms over eight dissimilar GVGAI games.\nII. BACKGROUND\nA. General Video Game AI\nThe General Video Game AI (GVGAI) framework is a Java-based benchmark for General Video Game Playing (GVGP) in 2-dimensional arcade-like games [5]. This framework offers a common interface for bots (or agents, or controllers) and humans to play any of the more than 160 single- and two-player games from the benchmark. These games are defined in the Video Game Description Language (VGDL), which was initially proposed by Ebner et al. [3] at the Dagstuhl Seminar",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 1,
"total_chunks": 37,
"char_count": 3670,
"word_count": 588,
"chunking_strategy": "semantic"
},
{
"chunk_id": "486fc3b7-65fe-4b04-836a-4bdb74b08085",
"text": "on Artificial and Computational Intelligence in Games.\nFor this, we need to be able to easily create new games, either manually or automatically, and add new games to the framework. Being able to create new games easily also allows the creation of games made to test particular AI capacities.\nVGDL [7] is a game description language that defines 2-dimensional games by means of two files, which describe the game and the level respectively. The former is structured in four different sections, detailing the game sprites present in the game (and their behaviors and parameters), the interactions between them, the termination conditions of the game, and the mapping from sprites to characters used in the level description file. The latter describes a grid and the sprite locations at the beginning of the game. These files are typically not provided to the AI agents, who must learn to play the game via simulations or repetitions. More about VGDL and sample files can be found on the GVGAI GitHub project1.\nThe agents implement two methods to interact with the game: a constructor, where the controller may initialize any structures needed to play, and an act method, which is called every game frame and must return an action to execute at that game cycle. As games are played in real-time, the agents must reply within a time budget (in the competition settings, 1\nlearned object models improved exploration and performance in other games.\nMore recently, Kunanusont et al. [14] interfaced the GVGAI framework with DL4J2 in order to develop agents that would learn how to play several games via screen capture. 7 games of increasing complexity and screen size were employed in this study, including both deterministic and stochastic games. Kunanusont et al. [14] implemented a Deep Q-Network for an agent that was able to increase its winning rate and score over several consecutive episodes.\nThe first (and to date, only) edition of the single-player learning competition, held at the IEEE 2017 Conference on Computational Intelligence in Games (CIG2017), received few and simple agents.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 2,
"total_chunks": 37,
"char_count": 2091,
"word_count": 341,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ee8f2df7-84f7-4f46-aba7-7fd659130093",
"text": "Most of them are greedy methods or based on Q-Learning and State-Action-Reward-State-Action (SARSA), using features extracted from the game state. For more information about these, including the final results of the competition, the reader is referred to [6].\nsecond for the constructor and 40ms in the act method) in order not to suffer any penalty. Both methods provide the agent with some information about the current state of the game, such as its status (if it is finished or still running), the player state (health points, position, orientation, resources collected) and anonymized information about other sprites in the game (so their types and behaviours are not disclosed). Additionally, controllers also receive a forward model (in the planning setting) and a screen-shot of the current game state (in the learning setting).\nThe GVGAI framework has been used in a yearly competition, started in 2014, and organized around several tracks. Between the single- [4] and the two-player [8] GVGAI planning competitions, more than 200 controllers have been submitted by different participants, in which agents have to play in sets of 10 unknown games to decide a winner. These tracks\nB. Deep Reinforcement Learning\nA Reinforcement Learning (RL) agent learns through trial-and-error interactions with a dynamic environment [15] and balances the reward trade-off between long-term and short-term planning. RL methods have been widely studied in many disciplines, such as operational research, simulation-based optimization, evolutionary computation and multi-agent systems, including games. The cooperation between RL methods and Deep Learning (DL) has led to successful applications in games. More about the work on Deep Reinforcement Learning until 2015 can be found in the review by J.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 3,
"total_chunks": 37,
"char_count": 1784,
"word_count": 271,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5016a683-b5de-4e09-a8ce-6ae9ab4a369b",
"text": "Schmidhuber [16]. For instance, Deep Q-Networks have been combined with RL to play several Atari 2600 games with video as input [17], [2]. Vezhnevets et al. [18] proposed the STRategic Attentive Writer-exploiter (STRAWe) for learning macro-actions and achieved significant improvements on some Atari 2600 games. AlphaGo, which combines tree search with deep neural networks to play the game of Go and enhances itself by self-play, is ranked as 9 dan professional [19] and was the first program to beat a human world champion of Go. Its advanced version, AlphaGo Zero [20], is able to learn only by self-play (without data from matches played by human players) and outperforms AlphaGo.\nDuring the last few years, several authors have improved on the results and stability obtained by the original Deep Q-Networks paper. Wang et al. [21] introduced a new architecture for the networks known as the dueling network; this architecture uses two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. In 2016, Mnih et al. successfully applied neural networks to actor-critic RL [22]. The network is trained to predict both a policy function and a value function for a state, the actor\nare complemented with newer ones for single-player agent learning [9], [6], level generation [10] and rule generation [11]. Beyond the competitions, many researchers have used this framework for different types of work on agent AI, procedural content generation, automatic game design and deep reinforcement learning, among others [6].\nIn terms of learning, several approaches were made before the single-player learning track of the GVGAI competition was launched. The first approach was proposed by Samothrakis et al. [12], who implemented Separable Natural Evolution Strategies (S-NES) to evolve a state value function in order to learn how to maximize victory rate and score in 10 games of the framework. Samothrakis et al. [12] compared a linear function approximator and a neural network, and two different policies, using features from the game state.\nLater, Braylan and Miikkulainen [13] used logistic regression to learn a forward model on 30 games of the framework. The objective was to learn the state (or, rather, a simplification consisting of the most relevant features of the full game state) that would follow a previous one when an action was supplied, and then apply this model in different games, assuming that some core mechanics would be shared among the different games of the benchmark. Their results showed that these",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 4,
"total_chunks": 37,
"char_count": 2700,
"word_count": 412,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ae11ac92-8ecf-415d-9283-67465c65d159",
"text": "1 https://github.com/EssexUniversityMCTS/gvgai/wiki/VGDL-Language\n2 Deep Learning for Java: https://deeplearning4j.org/\nthe game advancing time, game state serialization time and communication time between the client and the agent are not included. The real execution of the learning phase can last several hours.\nAsynchronous Advantage Actor-Critic (A3C) is inherently parallelizable and allows for a big speedup in computation time. The interaction between the policy output and the value estimates has been shown to be relatively stable and accurate for neural networks. This new approach increases\nB.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 5,
"total_chunks": 37,
"char_count": 598,
"word_count": 79,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cae4aec5-6586-4362-a817-87d5b9055f6a",
"text": "GVGAI Games\nthe score obtained from the original DQN paper, reducing the computational time by half even without using a GPU.\nRL is a hot topic in the artificial intelligence research community. Recent advances that combine DL with RL (Deep Reinforcement Learning) have shown that model-free optimization, or policy gradients, can be used in complex environments.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 6,
"total_chunks": 37,
"char_count": 366,
"word_count": 56,
"chunking_strategy": "semantic"
},
{
"chunk_id": "34dc048e-b16f-4b15-9d61-26e32001b201",
"text": "However, in order to continue testing new ideas and increasing the quality of results, the research community needs good benchmark platforms to compare results. This is the main goal of the OpenAI Gym platform [23]. The OpenAI Gym platform provides a wide variety of benchmarks, such as the Arcade Learning Environment (ALE) [24], which is a collection of Atari 2600 video games.\nFigure 1: Screenshot of game Superman. In this game, inno",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 7,
"total_chunks": 37,
"char_count": 427,
"word_count": 69,
"chunking_strategy": "semantic"
},
{
"chunk_id": "dc6c31bc-1434-4041-8ed1-dd47f2b71430",
"text": "cent civilians are standing on clouds while malicious actors spawn around the edge of the screen and attempt to shoot the clouds out from underneath them. If all the clouds are gone the civilian will fall, and only Superman can save them by catching them for 1 point. Superman can also jail the villains. If Superman catches all the villains, the player wins and earns an additional 1000 points.\nOpenAI Gym has more environments for testing RL in different types of settings. For example, MuJoCo is used to test humanoid-like movement in 2D and 3D.\nThe GVGAI environment currently has over 160 games and counting.\nIII.\nWhile one of the main benefits of GVGAI is the ease with which new games can be created for a specific problem, we also feel it is necessary to place the current GVGAI games in the context of other existing environments. This serves two",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 8,
"total_chunks": 37,
"char_count": 856,
"word_count": 151,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3759b3a7-1e64-4c47-bc16-6f8ca4f0cc07",
"text": "purposes: it further demonstrates the strengths and weaknesses of the current generation of reinforcement learning algorithms, and it allows results achieved on GVGAI to be compared to other existing environments.\nTo showcase the environment and the challenges that already exist, we sample a number of games to benchmark against popular reinforcement learning algorithms. Our criteria for sampling games were informal but based on several considerations. Since many of the games in the GVGAI framework have been benchmarked with planning agents, we can roughly rank the games based on how difficult they are for planning. We tried to get an even distribution across the range, going from games that are easy for planning agents, like\nA. GVGAI-OpenAI embedding\nThe learning competition is based on the GVGAI framework, but no forward model is provided to the agents, thus no simulations of a game are accessible.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 9,
"total_chunks": 37,
"char_count": 918,
"word_count": 145,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9bf67cab-3f45-4788-8cb6-d97bd3201fb1",
"text": "However, an agent still has access to an observation of the current game state, a StateObservation object, provided as a Json object in a String or as a screen-shot of the current game screen (without the screen border) in png format. At every game tick, the server sends a new game state observation to the agent, and the agent either returns an action to play within 40ms or requests to abort the current game. When a game is finished or aborted, the agent can select the next level to play among the existing levels (usually 5 levels).\nAliens, to very difficult, like Superman. The game difficulties are based on the analysis by Bontrager et al. [25]. Other things we considered were having a few games that also exist in Atari for some comparison, and including games that we believed would provide interesting challenges to reinforcement learning agents. Some games in VGDL contain stochastic components as well, mostly in the form of NPC movement. GVGAI has five levels for each game; we used the first level of each game for all the training.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 10,
"total_chunks": 37,
"char_count": 1037,
"word_count": 179,
"chunking_strategy": "semantic"
},
{
"chunk_id": "16017f8d-ea5f-4ca4-8629-d0b12bc195c6",
"text": "We settled on Aliens, Seaquest, Missile Command, Boulder Dash, Frogs, Zelda, Wait For Breakfast, and Superman. The first five are modeled after their similarly named Atari counterparts. Zelda consists of finding a target while killing or avoiding enemies. Frogs is modeled after Frogger, which is also similar to the Atari Freeway game.\nThis setting makes it possible to embed the GVGAI framework as an OpenAI Gym so that reinforcement learning algorithms can be applied to learn to play the GVGAI games. Thanks to VGDL, it is easy to design and add new games and levels to the GVGAI framework. The main framework is described in the manual by Liu",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 11,
"total_chunks": 37,
"char_count": 659,
"word_count": 111,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0f198c1a-ed88-454a-a73f-913d7b676d0e",
"text": "et al. [9], as well as the default rules in the framework. Only 5 minutes is allowed to each of the agents for learning. It is notable that only the decision time (no more than 40ms per game tick) used by the agent is included, while\nWait For Breakfast (Figure 2) is a strange game where the player must go to a breakfast table where food is being served and sit there for a short amount of time. This is not usually what people think of as a game, but it provides an interesting challenge for bots. Finally, Superman (Figure 1) is a complicated game that involves saving people in a dangerous environment, with no reward until the person is safe. A full version of our implementation can be found in the GVGAI GYM repository3.\nTable I: This table represents the architecture of the network used to play each game. For convolutional layers, depth refers to the number of convolutional filters, and for the fully connected layers it refers to the output size.\nLayer Type | Depth | Kernel | Stride\nConvolution 1 | 32 | 8 | 4\nConvolution 2 | 64 | 4 | 2\nConvolution 3 | 64 | 3 | 1\nFully Connected | 256 | |\nFully Connected | Action Space | |\nthe network starts learning after only 1000 initial decisions, and the target Q-network gets updated every 500 steps.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 12,
"total_chunks": 37,
"char_count": 1214,
"word_count": 216,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b867d24a-85e9-4249-b392-5261b61f6850",
"text": "We test both the original DQN and a modified DQN. OpenAI Baselines has a DQN implementation that is based on the original DQN, but it also offers prioritized experience replay and dueling networks as options that can be turned on, since they work together with the original implementation [26]. We tested the original for comparisons, and also ran DQN with the two additional modifications to get results from a more state-of-the-art DQN. We used the baseline defaults for the network, with a couple of exceptions pertaining to training time. The defaults have been tuned for ALE and should carry over.\nFigure 2: Screenshot of game Wait For Breakfast. In this game, all tables are empty when a game starts. At a randomly selected game tick, a waiter (in black) serves a breakfast to the table with only one chair. The player (in green) wins the game only if it sits on the chair at the table after the breakfast is served and eats it. The player loses the game if it leaves the chair once breakfast has been served without eating it.\nC.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 13,
"total_chunks": 37,
"char_count": 1024,
"word_count": 178,
"chunking_strategy": "semantic"
},
{
"chunk_id": "23085ac5-ded1-4503-99a2-6630a07260a2",
"text": "Benchmarks\nTo have standardized results, we decided to choose a few popular reinforcement learning algorithms that are provided by the OpenAI Gym baselines library. The baselines are open implementations of these algorithms and are closely based on the original papers [26]. The hope is that, by using publicly vetted and accessible code, our results will be comparable to other work and reproducible.\nTo test A3C, OpenAI provides A2C, a synchronous version that they found to be more efficient and to perform just as well on Atari [26]. This was also tested with the baseline defaults, with the same changes made for DQN. Each baseline was tested on every game for one million calls, resulting in a total of 24 million calls.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 14,
"total_chunks": 37,
"char_count": 732,
"word_count": 123,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6723882a-6562-4f1e-9bea-79ae8b63192a",
"text": "RESULTS AND DISCUSSION\nHere we present the results of training the baselines on each game. The results show the performance of the provided baselines for a sample of the games in the GVGAI framework. This provides insight into how the baselines compare to other AI techniques and into how the GVGAI environment compares to other environments. Finally, this section is structured in three parts.\nFrom OpenAI's baseline library we selected three algorithms: Deep Q-Networks (DQN), Prioritized Dueling DQN, and Advantage Actor-Critic (A2C). These were chosen in part because they have been well documented in similar environments such as ALE. DQN and A3C, which A2C is based on, are the baselines against which many new RL developments are scored. For this reason, we felt it made sense to use",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 15,
"total_chunks": 37,
"char_count": 796,
"word_count": 131,
"chunking_strategy": "semantic"
},
{
"chunk_id": "160f6a95-3b68-4369-9eaa-ab143efee0b8",
"text": "these to benchmark the GVGAI games.\nFor all three baselines, we used the same network first described by Mnih et al. for playing Atari [17]. This consists of 3 convolutional layers and two fully connected layers, as seen in Table I. GVGAI provides screen-shots for each game state, which the convolutional network learns to interpret. Each algorithm is trained on one million frames of a particular game. From initial testing, it appeared that one million calls were enough to give an indication of the difficulty of a game for our agents while also being realistic in terms of computational resources. It is also a step in the right direction for the learning track of GVGAI, where there are very tight\nFirst, the results of training the learning algorithms on the games are provided with some additional qualitative remarks. Second, the GVGAI environment is compared to the Atari environment. Third, the reinforcement agents are compared to planning agents that have been used within the framework.\nResults of learning algorithms\nFigure 3 shows the training curves for DQN (red), Dueling Prioritized DQN (blue) and A2C (green). The graphs show the total rewards for playing up to that point in time. Rewards are completely defined by the game description, so they can't be compared between different games.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 16,
"total_chunks": 37,
"char_count": 1306,
"word_count": 214,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cbe448b5-cca0-4726-843d-c1a98c66f3cb",
"text": "This is done by reporting the sum of the incremental rewards for the episode at a given time step. Since this data is noisy due to episode restarts, the 20 results are averaged to smooth the graph and better show a trend. A2C allows running in parallel; we were able to run 12 networks in parallel at once.\ntime constraints. To accommodate the smaller number of training iterations, we changed a few training parameters. Buffer size, the size of the replay memory, was set to 50,000,\n3 https://github.com/rubenrtorrado/GVGAI_GYM",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 17,
"total_chunks": 37,
"char_count": 523,
"word_count": 86,
"chunking_strategy": "semantic"
},
{
"chunk_id": "df54f919-28e8-416a-a3ae-25c404812fa5",
"text": "To keep the comparisons fair, A2C is still only allowed one million GVGAI calls, and therefore each of the 12 networks is given one-twelfth of a million calls. This results in the training graph seen in Figure 4. To compare this with the linear algorithms, each time step of A2C is associated with 12 time-steps of the DQN algorithms in Figure 3. The value for each time step of A2C is the average of all 12 rewards.\nDue to the fact that we are running experiments on different machines with different GPU and CPU configurations, we align the results on iterations instead of time. It is important to note that, since A2C runs its fixed number of GVGAI calls in parallel, it runs at about 5x the speed of DQN on a machine with two NVIDIA Tesla K80 GPUs.\nconclusion as quickly. Missile Command shows similar performance for the three algorithms. Although Prioritized Dueling DQN finds a higher value in earlier stages, the three algorithms get trapped in a local optimum. In the game Missile Command, four fire-balls target three bases. To get all 8 points, the player has to defend all three. One of the bases gets attacked by two fire-balls, which makes it hard to defend. Having time to save the third base requires very accurate play; the agents did not seem able to maintain a perfect score, because a few missteps led to 5 points. The reward plane is very non-linear for this game.\nSuperman takes this difficulty to the next level. The game is very dynamic, with many NPCs modifying the environment in a stochastic manner.\nFigure 4 shows the training curve in parallel for A2C",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 18,
"total_chunks": 37,
"char_count": 1588,
"word_count": 285,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a5432275-256b-44ad-9fae-777a1f37e406",
"text": "This means that any actions that the\non Boulder Dash. The individual agents are chaotic which agent takes will have a big impact on the environment in the\nhelps A2C break out of local minima. This also points to the future. On top of this, the way to get the most points is to\nimportance of the exploration algorithm in learning to play capture the antagonists and take them to jail. In Boulder Dash, as long as one of the 12 workers awarded for capture, only for delivery to jail.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 19,
"total_chunks": 37,
"char_count": 481,
"word_count": 90,
"chunking_strategy": "semantic"
},
{
"chunk_id": "091bfa9f-0dbf-44a1-84d0-fa550e82be06",
"text": "This introduces a\nfound an improvement they would all gain. delayed reward which is a barrier to discovery. Knowing this,\nThe agents were able to learn on most of the games that the results from the training on this game make sense. A2C performed the best for most of the games agents were occasionally able to stumble on a good pattern\ntested. Though it's important to remember a relatively small but they could not reproduce the success in the stochastic\ncomputational budget was allowed for these algorithms and environment.\nthe others might eventually catch up. 8 games is also a small DQN and Prioritized Dueling DQN struggled to play Boulsample for comparing which algorithm is the best.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 20,
"total_chunks": 37,
"char_count": 693,
"word_count": 117,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f305dbb1-1542-4859-8080-a7011d277b61",
"text": "In Boulder Dash, when the player collects a diamond\nto benefit from sampling more initial conditions and starts with for points, a rock falls toward them. This means there is\na higher score. negative feedback if an agent collects a diamond and doesn't\nDQN and Prioritized Dueling DQN were both given the move. Not collecting any diamonds and surviving appears to\nsame initial seed so they had the same initial exploration be an obvious local optimum that the agents have a hard time\npattern. For this reason, both algorithms tended to start out escaping. On the other hand, A2C was able to discover how to\nwith similar performance and then diverge as time goes on. collect diamonds and survive, with a clear trend of continuing\nPrioritized Dueling DQN seems to slightly outperform vanilla to improving. DQN, but on overall they are very similar. A2C could not be Seaquest is a good example of a game that is not too hard\ncompared in this way as it intentionally is running different but has a lot of random elements. The agent can get a high\nexplorations in parallel and then learn from all of them at the score if it can survive the randomly positioned fish, catch\nsame time. This can explain why A2C tends to start out better the randomly moving diver, and take it to the surface. This\nright from the beginning, especially in Aliens.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 21,
"total_chunks": 37,
"char_count": 1335,
"word_count": 235,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4523b916-b8c3-4345-9ddb-ef49cc0cd0ee",
"text": "It is benefiting requires the agent to learn to chase the diver which none of\nfrom 12 different initial conditions in this case. the agents appear to be doing. The high noise in the results is\nAvailable rewards have a big impact on the success of RL most likely from the agents failing to learn the general rules\nand that is not different in the GVGAI environment. The games behind the stochasticity. Additionally, the player needs to go\nwhere the agents performed worst were the games that had the to the surface every 25 game ticks or it loses the game, which\nleast feedback. For this work, we left the games in their current may be something hard to learn for the agents.\nform, but it is very easy for researchers to edit the VGDL file Finally, Zelda is a fairly good game for reinforcement\nand modify the reward structure to create various experiments. learning. Though, the game is not too similar to its namesake. The games sampled here vary a lot in terms of the rewards The player must find a key and use it to unlock the exit while\nthey offer. Frogs and Wait For Breakfast only provide a single fighting enemies. Each event provides feedback which allows\npoint for winning. This is evident in their training graphs. the agents to learn the game well. For Frogs, none of the agents appear to have found a winning\nsolution in the calls allotted.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 22,
"total_chunks": 37,
"char_count": 1352,
"word_count": 244,
"chunking_strategy": "semantic"
},
{
"chunk_id": "658c9829-b48c-4240-8230-8e502bcf212b",
"text": "This resulted in a situation where\nB. Comparison with ALE\nRL could not play the game. Wait For Breakfast has a simpler\nwin condition in a very static environment. The agent had to Reinforcement learning research has been making a lot\nflounder around a lot until it bumped into the correct location of progress on game playing in the last few years and the\nfor a few consecutive iterations. The environment is very static benchmark environments need to keep up. ALE is a popular\nso once a solution is found it just has to memorize it. It consists of a reasonably large set of real\nhas the exploration advantage and can find the solution sooner games and all the games have been designed for humans. Yet,\nbut it keeps exploring and does not converge to the single the game set is static and cannot provide new challenges as",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 23,
"total_chunks": 37,
"char_count": 821,
"word_count": 148,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f87be264-aa9b-48c1-9f06-6c404061f0cb",
"text": "80 Aliens 8 Missile Commands 12 Boulder Dash SeaQuest\n70 6 10 1000 reward reward reward 8 reward 4 800\n60 6\n2 600 4 episode 400 episode50 episode 0 episode\n200 Mean40 Mean 2 Mean 0 Mean\n0.0 0.2Number0.4 of0.6Steps0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 1e6 Number of Steps 1e6 Number of Steps 1e6 Number of Steps 1e6\nFrogs Wait For Breakfast Superman Zelda\n1.0\n800 6\n0.04\n0.8 5reward 0.02 reward reward600 reward 4\n0.6\n0.00 400 3\n0.4 2episode 0.02 episode episode episode\n0.2 200 1\n0.04 0Mean Mean Mean Mean\n0.0 0 1\n0.0 0.2Number0.4 of 0.6Steps0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 1e6 Number of Steps 1e6 Number of Steps 1e6 Number of Steps 1e6 Figure 3: Training reward for DQN (red), Prioritized Dueling DQN (blue), and A2C (green). The reward is reported on the\ny-axis and is different for each game. As an example, Frogs only returns a score of 1 for winning and 0 otherwise. Each\nalgorithm is trained on one million game frames. communicating through a local port to Python. While still very\nBoulder Dash fast, training will run a few times slower than Atari.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 24,
"total_chunks": 37,
"char_count": 1147,
"word_count": 220,
"chunking_strategy": "semantic"
},
{
"chunk_id": "15ea3ec9-954d-43e3-845a-c98a5344fefa",
"text": "Currently,\n17.5 there is ongoing development to optimize the communication\n15.0 between the two languages. While both environments share some games, the perfor- 12.5\nmance on these games cannot be compared directly. GVGAI\n10.0 reward has games that are inspired by Atari but they are not perfect\n7.5 replicas and the author of the VGDL file can decide how close\nto match the original and how to handle score. Yet, looking\n5.0 Episode at similar games in both environments seems to show that\n2.5 GVGAI can have many of the characteristics of Atari: such as\n0.0 fairly good performance on Aliens and poor performance on\nSeaquest.\n0 1 2 3 4 5 6 7 8\nNumber of Steps 1e4 The ALE has done a lot for providing a standard benchmark\nfor new algorithms to be tested against. GVGAI is more fluid\nFigure 4: Training reward for all 12 workers of A2C learning and changing but it allows researchers to constantly challenge\non Boulder Dash the perceived success of new RL agents. The challenges for\ncomputers can advance with them all the way to general video\ngame playing. On top of that, we provide the results here to\nresearchers experiment with the strengths and weaknesses of propose that doing well on GVGAI is at least comparable\ndifferent algorithms. doing well on ALE and we show that there are games on\nGVGAI currently has over twice the number of games as GVGAI that still are not beaten. ALE and with active research more are added every year. The VGDL language also makes it possible for researchers to C. Comparison with planning algorithms\ndesign new games. Truly stochastic games can be designed and In order to compare the performance of our learning\nmultiple levels can be included to test how well an algorithm algorithms with the state-of-art, we have used the results\ncan generalize. The VGDL engine also provides a forward obtained in [25]. 
This paper explores clustering GVGAI games\nmodel that can be incorporated in the future to allow hybrid to better understand the capabilities of each algorithm and\nalgorithms to learn and plan. subsequently use several agents to test the performance of each\nWhile these games allow targeted testing of AIs, they tend representative game. The tested agents may be classified in\nto not be designed with humans in mind and can be hard to Genetic Algorithms (GA), Monte Carlo Tree Search (MCTS),\nplay. Readers are also not as familiar with the games as they Iterative With and Random Sample (RS).",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 25,
"total_chunks": 37,
"char_count": 2440,
"word_count": 420,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5ad565f5-435d-4723-b2ca-8138b6c7b070",
"text": "To compare results,\nare in Atari and therefore might lack some of the intuition. we took the agent with the high score for each category in a\nAnother drawback is speed. The engine is written in Java and target environment. In Table II we compare the performance of the as well.\nreinforcement-learned neural network agents with high- Boulder Dash is perhaps the most complex game in the\nperforming planning agents. This is very much a case of set.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 26,
"total_chunks": 37,
"char_count": 446,
"word_count": 78,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b86e1915-177f-4a3a-bb67-9ad49afe1554",
"text": "The game requires both quick reactions for the twitchcomparing apples and oranges: the learning-based agents have based gameplay of avoiding falling boulders and long-term\nbeen trained for hours for the individual game it is being tested planning of in which order to dig dirt and collect diamonds\non whereas the planning-based agents have had no training so as not to get trapped among boulders. Here we have the\ntime whatsoever and are supposed to be ready to play any interesting situations the one planning algorithm (MCTS) and\ngame at any point, and the planning-based agents have access one learning algorithm (A2C) plays the game reasonably well,\nto a forward model which the learning agent does not. In other whereas the other algorithms (both planning and learning)\nwords, each type of agent has a major advantage over the other, perform much worse. For the planning algorithms, the likely\nand it is a priori very hard to say which advantage will prove explanation is that GA has too short planning horizon and IW\nto be the most important. This is why this comparison is so does not handle the stochastic nature of the enemies.\ninteresting. For Zelda, which combines fighting random-moving enBeginning with Aliens, we see that all agents learn to play emies and finding paths to keys and doors (medium-term\nthis game well.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 27,
"total_chunks": 37,
"char_count": 1331,
"word_count": 222,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3a84f69d-25f9-4abe-969a-4c595b0d4e67",
"text": "This is not overly surprising, as all Non- planning), all agents performed comparably. The tree search\nplayer Characters (NPC) and projectiles in this game behave algorithms outperformed the GA, and also seem to outperform\ndeterministically (enemy projectiles are fired stochastically, the learning agents, but not by a great margin.\nbut always takes some time to reach the player) and the game\ncan be played well with very little planning; the main tasks are V. CONCLUSION\navoiding incoming projectiles and firing at the right time to\nhit the enemy. The former task can be solved with a reactive In this paper, we have created a new reinforcement learning\npolicy, and the latter with a minimum of planning and probably challenge out of the General Video Game AI Framework by\nalso reactively. connecting it to OpenAI Gym environment. We have used\nWait for Breakfast was solved perfectly by all agents except this setup to produce the first results of state-of-art deep RL\nthe standard MCTS agent, which solved it occasionally.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 28,
"total_chunks": 37,
"char_count": 1026,
"word_count": 168,
"chunking_strategy": "semantic"
},
{
"chunk_id": "88fcc15d-fe41-4a63-ada8-c720af7eddab",
"text": "This algorithms on GVGAI games. Specifically, we tested DQN,\ngame is easily solved if you plan far enough ahead, but it is Prioritized Dueling DQN and Advance Actor-Critic (A2C) on\nalso very easy to find a fixed strategy for winning. It punishes eighth representative GVGAI games.\n\"jittery\" agents that explore without planning. Our results show that the performance of learning algorithm\nFrogs is only won by the planning agents (GA and IW differs drastically between games. In several games, all the\nalways win it, MCTS sometimes wins it) whereas it is never tested RL algorithms can learn good stable policies, possibly\nwon by the learning algorithm. The simple explanation for due to features such as memory replay and parallel actorthis is that there are no intermediate rewards in Frogs; the only learners for DQN and A2C respectively. A2C reaches a higher\nreward is for reaching the goal. There is, therefore, no gradient score than DQN and PDDQN for 6 of the 8 environments\nto ascend for the reinforcement learning algorithms.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 29,
"total_chunks": 37,
"char_count": 1034,
"word_count": 170,
"chunking_strategy": "semantic"
},
{
"chunk_id": "96ec5255-b647-439b-aa29-f967b60b288a",
"text": "For the tested without memory replay. Also, when trained on the\nplanning algorithms, on the other hand, it is just a matter GVGAI domain using 12 CPU cores, A2C trains five times\nof planning far enough ahead. (Some planning algorithms faster than DQN trained on a Tesla Nvidia GPU.\ndo better than others, for example, Iterative Width looks for But there are also many cases where some or all of\nintermediate states where facts about the world have changed.) the learning algorithms fail. In particular, DQNs and A2C\nThe reason why learning algorithms perform well on Freeway, perform badly on games with a binary score (win or lose,\nthe Atari 2600 clone of Frogger, is that it has plenty of no intermediate rewards) such as Frogs. Also, we observed\nintermediate rewards - the player gets a score for advancing a high dependency of the initial conditions which suggests\neach lane. that running multiple times is necessary for accurately benchTwo of the planning agents and all three learning agents marking DQN algorithms. Finally, some complex games (e.g.\nperform well on Missile Command; there seems to be no Seaquest) show problems of stabilization when we are training\nmeaningful performance difference between the best planning with default parameters of OpenAI baselines. This reflects\nalgorithms (IW) and the learning agents. It seems possible that a modification of replay memory or the schedule of the\nto play this game by simply moving close to the nearest learning rate parameters are necessary to improve convergence\napproaching missiles and attacking it. What is not clear is in several environments.\nwhy MCTS is performing so badly.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 30,
"total_chunks": 37,
"char_count": 1645,
"word_count": 268,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7a4166ea-87b8-4af0-b12c-147290bbe056",
"text": "We also compared learning agents (which have time for\nSeaquest is a relatively complex game requiring both shoot- learning but not a forward model) with planning agents (which\ning enemies, rescuing divers and managing oxygen supply. get no learning time, but do get a forward model). The results\nAll agents play this game reasonably well, but somewhat indicate that in general, the planning agents have a slight\nsurprisingly, the learning agents perform best overall and A2C advantage, though there are large variations between games.\nis the clear winner. The presence of intermediate rewards The planning agents seem better equipped to deal with making\nshould work in the learning agents' favor; apparently, the decisions with a long time dependency and no intermediate\nlearning agents easily learn the non-trivial sequence of tasks rewards, but the learning agents performed better on e.g. Games Random Agent Planning Agents Learning Agents\nGenetic Algorithm Monte Carlo Tree Search Iterative Width DQN Prioritized Dueling DQN A2C\nAliens 52 80.4 72.6 80.2 75 74 77\nWait For Breakfast 0 1 0.4 1 1 1 1\nFrogs -2 1 -0.4 1 0 0 0\nMissile Command -2.2 2.6 -3 6.8 5 8 5\nSeaquest 17.2 435 638.2 224.6 600 800 1200\nBoulder Dash 1.4 3.4 16.4 8.8 2.5 5 15.5\nZelda -5.2 3.4 6.8 7.6 4.2 4.2 6\nSuperman 4 157 6699 130.2 500 0 800 Table II: Learning score comparison of learning algorithms (DQN, Prioritized Dueling DQN and A2C) with random and\nplanning algorithms (Genetic Algorithms, MCTS and Iterative Width). The results of planning and random are taken from [25]\nand correspond to the best performing instance of each algorithm. Seaquest (a complex game) and Missile Command (a simple [10] A. Togelius, \"General\ngame). video game level generation,\" in Proceedings of the 2016 on Genetic\nand Evolutionary Computation Conference. ACM, 2016, pp. 253–259. As researchers experiment with more the existing games, [11] A. Pérez-Liébana, and J.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 31,
"total_chunks": 37,
"char_count": 1928,
"word_count": 318,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d21ce287-3c85-4929-bb00-f8fa61dd3558",
"text": "Togelius, \"General\ndesign specific games for experiments, and participate in the Video Game Rule Generation,\" in 2017 IEEE Conference on Computacompetition, we expect to gain new insights into the nature tional Intelligence and Games (CIG). Fasli, \"Neuof various learning algorithms. There is an opportunity for roevolution for general video game playing,\" in 2015 IEEE Conference\nnew games to be created by humans and AIs in an arms race on Computational Intelligence and Games (CIG). IEEE, 2015, pp.\nagainst improvements from game-playing agents. We believe 200–207.\n[13] A.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 32,
"total_chunks": 37,
"char_count": 576,
"word_count": 87,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c083b2d4-d02b-4034-a220-cab0a395e094",
"text": "Miikkulainen, \"Object-model transfer in the general\nthis platform can be instrumental to scientifically evaluating video game domain,\" in Twelfth Artificial Intelligence and Interactive\nhow different algorithms can learn and evolve to understand Digital Entertainment Conference, 2016.\nmany changing environments. [14] K. Pérez-Liébana, \"General Video\nGame AI: Learning from Screen Capture,\" in 2017 IEEE Conference on\nACKNOWLEDGEMENT Evolutionary Computation (CEC). Barto, Reinforcement learning: An introduction. This work was supported by the Ministry of Science and MIT press Cambridge, 1998, vol. 1, no. 1. Technology of China (2017YFC0804003). [16] J. Schmidhuber, \"Deep learning in neural networks: An overview,\"\nNeural networks, vol. 61, pp. 85–117, 2015.\n[1] M. Bowling, \"The arcade [18] A.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 33,
"total_chunks": 37,
"char_count": 799,
"word_count": 110,
"chunking_strategy": "semantic"
},
{
"chunk_id": "641d9ec7-04c4-4200-922b-930249785c79",
"text": "Agapiou\nlearning environment: An evaluation platform for general agents.\" J. et al., \"Strategic attentive writer for learning macro-actions,\" in Advances\nArtif. Res.(JAIR), vol. 47, pp. 253–279, 2013. in neural information processing systems, 2016, pp. 3486–3494.\n[2] V. Ostrovski Den Driessche, J. Panneershelvam,\net al., \"Human-level control through deep reinforcement learning,\" M. Lanctot et al., \"Mastering the game of Go with deep neural networks\nNature, vol. 518, no. 7540, p. 529, 2015. and tree search,\" Nature, vol. 529, no. 7587, pp. 484–489, 2016.\n[3] M. Thompson, and [20] D. Togelius, \"Towards a video game description language,\" in Dagstuhl A.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 34,
"total_chunks": 37,
"char_count": 658,
"word_count": 98,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2ef8fb51-f2e7-4c2e-8d95-9bda15ec03ce",
"text": "Bolton et al., \"Mastering\nFollow-Ups, vol. 6. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, the game of Go without human knowledge,\" Nature, vol. 550, no. 7676,\n2013. pp. 354–359, 2017.\n[4] D. Thompson, \"The 2014 general \"Dueling network architectures for deep reinforcement learning,\" in\nvideo game playing competition,\" IEEE Transactions on Computational International Conference on Machine Learning, 2016, pp. 1995–2003. Intelligence and AI in Games, vol. 8, no. 3, pp. 229–243, 2016. [22] V. Kavukcuoglu, \"Asynchronous methods for deep reinT. Schaul, \"General Video Game AI: Competition, Challenges and forcement learning,\" in International Conference on Machine Learning,\nOpportunities,\" in Thirtieth AAAI Conference on Artificial Intelligence, 2016, pp. 1928–1937.\n2016, pp. 4335–4337. [23] G. Togelius, and man, J.",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 35,
"total_chunks": 37,
"char_count": 825,
"word_count": 113,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cb2dc162-3cf1-4c44-b85f-f492cc6da4de",
"text": "Zaremba, \"Openai gym,\" arXiv preprint\nevaluating agents, games and content generation algorithms,\" arXiv [24] J. Bowling., \"The arcade learning\n[7] T. Schaul, \"A video game description language for model-based or Res.\ninteractive learning,\" in Computational Intelligence in Games (CIG), [25] P. Togelius, \"Matching games\n2013 IEEE Conference on. IEEE, 2013, pp. 1–8. and algorithms for general video game playing,\" in Twelfth Artificial\n[8] R. Winands, Intelligence and Interactive Digital Entertainment Conference, 2016, pp. Perez-Liebana, 122–128.\n\"The 2016 two-player GVGAI competition,\" IEEE Transactions on [26] P. Radford,\nComputational Intelligence and AI in Games, 2017. Wu, \"Openai baselines,\" https://github.\n[9] J. Perez-Liebana, and S. Lucas, \"The single-player GVGAI com/openai/baselines, 2017.\nlearning framework - technical manual,\" 2017. [Online]. Available: http:\n//www.liujialin.tech/publications/GVGAISingleLearning_manual.pdf",
"paper_id": "1806.02448",
"title": "Deep Reinforcement Learning for General Video Game AI",
"authors": [
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02448v1",
"chunk_index": 36,
"total_chunks": 37,
"char_count": 945,
"word_count": 118,
"chunking_strategy": "semantic"
}
]