| [ |
| { |
| "chunk_id": "e044ad5b-00be-4556-ae44-dd593b04e2e9", |
| "text": "Intelligent Trainer for Model-Based Deep\nReinforcement Learning Yuanlong Li, Member, IEEE, Linsen Dong, Student Member, IEEE, Xin Zhou, Member, IEEE,\nYonggang Wen, Senior Member, IEEE, and Kyle Guan Member, IEEE", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 0, |
| "total_chunks": 45, |
| "char_count": 211, |
| "word_count": 30, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "454240cd-7a8e-43bd-9f1b-c86d186d6f6a", |
| "text": "Abstract—Model-based reinforcement learning (MBRL) has allows for progressively learning the best control policy in\nbeen proposed as a promising alternative solution to tackle complex systems. Previously RL has been adopted to solve\nthe high sampling cost challenge in the canonical reinforcement problems like robot arm control, maze solving and game\nlearning (RL), by leveraging a learned model to generate syntheplaying, reducing the human intervention in system modeling. sized data for policy training purpose. The MBRL framework,\nnevertheless, is inherently limited by the convoluted process of Recently, RL, in combination with the emerging deep learning2019 jointly learning control policy and configuring hyper-parameters techniques (so called the deep reinforcement learning - DRL)\n(e.g., global/local models, real and synthesized data, etc). The [2], has become a popular choice for large complex system\ntraining process could be tedious and prohibitively costly. This trend started with the huge success of AlphaGO.\nresearch, we propose an \"reinforcement on reinforcement\" (RoR)Jun At the same time, researchers have also made breakthrough architecture to decompose the convoluted tasks into two layers of\n5 reinforcement learning. The inner layer is the canonical model- in complex system control in continuous domains, via DRL\nbased RL training process environment (TPE), which learns the algorithms, for example, Deep Deterministic Policy Gradients\ncontrol policy for the underlying system and exposes interfaces (DDPG) [3] and Trust Region Policy Optimization (TRPO)\nto access states, actions and rewards. The outer layer presents [4].", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 1, |
| "total_chunks": 45, |
| "char_count": 1651, |
| "word_count": 236, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1d035f77-7c52-4bb8-90ad-b922be5a46b8", |
| "text": "As a result, the transformative nature of RL and DRL\nan RL agent, called as AI trainer, to learn an optimal hyperhave been driving its regained popularity in both academia parameter configuration for the inner TPE. This decomposition\napproach provides a desirable flexibility to implement different and industry.[cs.LG] trainer designs, called as \"train the trainer\". In our research, However, many practical industry applications face great\nwe propose and optimize two alternative trainer designs: 1) a challenges in adopting RL solutions, especially when it is\nuni-head trainer and 2) a multi-head trainer. Our proposed RoR costly to acquire data for policy training purpose.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 2, |
| "total_chunks": 45, |
| "char_count": 677, |
| "word_count": 102, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "cdb991af-e6c3-45df-a85f-a2aa1a8253ca", |
| "text": "On the deframework is evaluated for five tasks in the OpenAI gym (i.e.,\nmand side, the performance of RL algorithms in general hinges Pendulum, Mountain Car, Reacher, Half Cheetah and Swimmer). Compared to three other baseline algorithms, our proposed upon a huge amount of operational data to train the control\nTrain-the-Trainer algorithm has a competitive performance in policy. On the supply side, acquiring a huge amount of training\nauto-tuning capability, with upto 56% expected sampling cost data from operational systems might be prohibitively costly,\nsaving without knowing the best parameter setting in advance. in resource usage and/or time consumption. For instance, in a\nThe proposed trainer framework can be easily extended to other\nrobotic control problem [5], the DRL agent can learn to score cases in which the hyper-parameter tuning is costly.\na goal with high probability, only after about three million\nIndex Terms—Reinforcement learning, AutoML, Intelligent samples observed. As result, training with a real robot to do\ntrainer, Ensemble algorithm.\nthe task in this case may take millions seconds, rendering the\nsystem unacceptable in most application scenarios. INTRODUCTION To tackle this challenge with training data, researchers have\ntremendous momentum in research and industry applications. the real-world systems are used to train a system dynamic\nRL, in comparison to supervised and unsupervised learning, model, which is in turn used to generate synthesized data\naddresses how intelligent agents should take actions in an for policy training. The generated data, together with the\nenvironment, aiming to maximize a chosen cumulative reward real world data, are used to train the target controller and\nfunction. For example, an RL agent controlling a robot arm search for sensible actions to maximize the accumulative\nto grab an object will observe the current state of the arm, reward. 
Generally, producing synthesized data in a cyber\nissue an action to the arm and after the action being taken, environment is relatively inexpensive, as such the MBRL has\ncollect the reward, signifying whether the object has been the advantage of low data sampling cost. This comparative\ngrabbed or not, and new state information to train its pol- advantage contributes to the fact that the MBRL has been a\nicy. This interaction between the agent and the environment popular approach in robot arm training [7] and online tree\nsearch based planning [8]–[10].", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 3, |
| "total_chunks": 45, |
| "char_count": 2471, |
| "word_count": 382, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "955ea444-c992-45f2-afc7-485931e13ab6", |
| "text": "Manuscript received... This work was supported in part by EIRP02 Grant\nfrom Singapore EMA, GDCR01 Grant from Singapore IMDA. In real applications, the adoption of the MBRL framework\nYuanlong Li, Xin Zhou, Yonggang Wen and Linsen Dong are with School is limited by the manual configuration of some crucial paof Computer Science and Engineering, Nanyang Technological University, rameters. As illustrated in Figure 1, the data acquired from\nNanyang Avenue, Singapore 639798. Email: {liyuanl, ygwen}@ntu.edu.sg,\nLINSEN001@e.ntu.edu.sg the physical environment would be used for two purposes,\nKyle Guan is with Bell Labs, Nokia. Email: kyle.guan@nokia.com. namely:", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 4, |
| "total_chunks": 45, |
| "char_count": 660, |
| "word_count": 94, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "295cd1e5-f649-4a89-b67f-95748472d4f4", |
| "text": "Initialization TPE Actions\na2 Action EnvironmentReal a0a0 a0, a1, a2\nReward State, Do local or global Modelling Real Sampling in Sampling in sampling in Real Cyber Environment State, Intelligent real/cyber ? - Environment Reward hyparameter a0, a1 Environment Trainer Target Cyber State, Reward ControllerAction Environment a1\nAction Modelling Mixing potion of Update Update\nTarget Action the- hyparameterreal/cyber data?a2 ControllerTarget ModelCyber Training Process Environment (TPE) TPE Reward\nController Reward EnvironmentCyber State,\nIs no. of Fig. 2. Illustration of \"Reinforcement on Reinforcement\" (RoR) framework.\ntotal real data samples\n> N No The inner box encapsulates a standard MBRL into a training processing\nYes\nenvironment (TPE) as the inner layer. In the outer box we introduce an\nStop intelligent trainer as the outer layer, controlling the optimization of the MBRL\ntraining in the TPE.\n(a) (b) Illustration of MBRL algorithm using the model as a data source: (a)\nThe data flow of MBRL, where the cyber environment is used to generate we first encapsulate the canonical model-based RL training\nsynthetic training data for the target controller. (b) Typical training flow of process into a standard RL environment (Training Process\nMBRL, in which we indicate the settings that are usually manually set. Environment, TPE in Fig. 2) with exposed state, action and\nreward interfaces, as the inner RL layer. In the outer layer, we\nintroduce an intelligent trainer, as an RL agent, to interact with • System model generation. The model is trained to mimic\nthe inner layer, and control the sampling and training process the real system in that, given the current state and action\nof the target controller in TPE. Such layered architecture to take, it predicts the next system state. 
The learned\nis embedded with an inherent decomposition between the model can be trained/used in a global or local [9] manner.\ntraining process and the controlling of the training process The global manner means that the model is trained or\nin the inner layer, greatly liberating its applicability in more utilized to generate data samples from the whole state\ngeneralized RL scenarios. In comparison with the existing space and can favor global exploration. The local manner\napproaches that directly modify the training algorithm of the is to train or utilize the model to generate data samples\ntarget controller [10], our design can work with different in certain constrained subspace, and thus can reinforce\nMBRL controllers and with different trainer designs. We call local exploitation.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 5, |
| "total_chunks": 45, |
| "char_count": 2586, |
| "word_count": 403, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b908079a-ff03-41a3-b51e-5511125893d1", |
| "text": "In Fig. 1, parameters a0 and a1 control\nthe latter as \"train the trainer\" design. whether to go global or local in the training and sampling\nOur research intends to optimize the \"train the trainer\" procedure of the system model.\ndesign for better learning efficiency in the outer layer of • Control policy training. The collected data from physical\nthe RoR architecture, and validate our design over widely- environment are also used in the training of the target\naccepted benchmark cases in the openAI gym. First, we controller, together with the cyber data generated from the\npropose two alternative trainer designs: learned model. In this case, the portion of the cyber data\nto use requires proper configuration to achieve the desired • Uni-head trainer. This approach is to implement a single\noutcome. As to be shown in the experimental results of trainer, cast into a DQN controller, to learn in an online\nthis paper, the proper setting can vary from case to case: manner to optimize the sampling and training in the inner\nfor certain cases using more cyber data can be helpful; layer.\nwhile for other cases it may lead to serious performance • Multi-head trainer. This approach is to implement an\ndegeneration.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 6, |
| "total_chunks": 45, |
| "char_count": 1216, |
| "word_count": 204, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "949a29c4-2f46-4057-b887-167863f344f8", |
| "text": "In Fig. 1 parameter a2 controls this setting. ensemble trainer, comprising of multiple trainers that\ntake independent actions in their training processes, andWe refer these configuration parameters introduced by the\nare ranked across themselves to provide a quantitativemodel as model-related hyper-parameter setting. In previous\ncomparison of their respective actions.research, these parameters are manually tried in training stage,\noften resulting in additional time and/or resource cost1. It fol- We implement both trainer designs in Tensorflow for five\nlows that an autoML solution for MBRL is highly demanded. benchmark cases (i.e., Pendulum, Mountain Car, Reacher, Half\nIn this research, we propose an autoML solution for the Cheetah, and Swimmer) and evaluate their performance in\nMBRL framework, aiming to tackle the hyper-parameter set- learning the best control policy under external constraints.\nting challenge. Our proposed solution adopts a \"reinforcement Our evaluation is compared against three baseline algorithms,\non reinforcement\" (RoR) design architecture to learn the opti- including a model-free RL algorithm, a MBRL algorithm with\nmal model-related parameters and training/sampling settings randomly hyper-parameter settings and a MBRL algorithm\nin an online manner. Specifically, as illustrated in Fig. 2, with fixed hyper-parameter settings. Our numerical investigations show that our proposed framework outperforms the\n1A naive approach to potentially solve this problem is to re-train the aforementioned baseline algorithms in overall performance\ncontroller with different parameter settings with the collected data samples in across different test cases supposing the best parameter setthe first trial. Such solution will not incur additional sampling cost. However,\nthe \"supervised\" learning approach may not work well for the RL case, as the tings are unknown. 
Specifically, our proposed framework can\ntraining performance of such a policy is largely determined by the data used achieve the following results:\nin training. If the data used in training are sampled by an under-performed\npolicy, they may lack of important samples that can lead to better performance, • For the same learned policy quality, our proposed RoR\nmaking the re-training useless. framework can achieve an expected sampling cost saving upto 56%, over the average cost of the three baseline to control the sampling and training settings of the MBRL,\nalgorithms. and the reward information which can encourage an agent to\n• Given the same sampling budget, our proposed RoR minimize the sampling cost in the real environment to train\nframework can achieve a policy quality on par with the target controller to a target performance. To achieve these\nthe best policy available, without the prior requirement goals, the TPE is designed with two major functions and three\nof knowing the best parameter setting across all the RL interfaces as following.\nbenchmark cases. These evaluations suggest that our proposed RoR framework A. Basic Functions of TPE\ncan be readily applied to emerging industrial applications, with\nThe TPE has two major functions that can be executed to\ncost concerns.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 7, |
| "total_chunks": 45, |
| "char_count": 3184, |
| "word_count": 473, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "555025f5-d04b-457b-82c3-7ccbb1493f19", |
| "text": "For example, in data center room cooling concomplete the entire training process of a general MBRL:\ntrol application, we can use the trainer framework to properly\n• Initialization: execute initialization tasks for the MBRLutilize the computational fluid dynamics (CFD) model with\ntraining process. These tasks include initializing the realthe real monitoring data to train an air cooling unit controller\ntraining environment, the cyber emulator, and the target[11]. At the same time, it can shed new lights on modelcontroller.based RL research by leveraging the RoR framework for\n• Step(state, action): execute one step of training of theautoML empowerment. Specifically, it can serve as a general\nMBRL algorithm. This process includes sampling fromframework that can work with different RL training algorithms\nthe real and cyber environment, training the target con-and could also be a potential solution for other learning tasks in\ntroller, and training the dynamic model of cyber emulator.which online adaptive parameter setting is demanded. We have\nNote that in each step, we keep the number of real datareleased the open-source code of our proposed RoR framework\nsamples to sample fixed (Kr) while optimize the amountat [12], for the research community to further develop new\nKc of cyber data used in the training. We found thatapplications and algorithms.\nsuch design is more stable in implementation as it can The remainder of this paper is organized as follows. Section\nprovide a tractable evaluation of the policy by measuringII provides a detailed description of the proposed trainer\nthe received reward from the real environment. With suchframework, including its key components, uni-head trainer\nsetting, the TPE exposes action interfaces to determinedesign, and ensemble trainer design. Section IV presents the\nhow many cyber data to use and how to do the sampling,numerical evaluation results of the proposed framework.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 8, |
| "total_chunks": 45, |
| "char_count": 1933, |
| "word_count": 293, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4e8fecb7-3023-4eaf-985f-f8a24dca2a09", |
| "text": "Secand reward information to encourage a trainer agent totion V briefly reviews the related works. Section VI concludes\ntrain the target controller to a better performance.the whole paper. With the TPE, MBRL training process can be executed by\ncalling repeatedly calling the Step function after the InitializaII. ROR: REINFORCEMENT ON REINFORCEMENT\ntion. The detailed training algorithm used to train the target\nARCHITECTURE\ncontroller will be embedded in the Step function. The overall architecture of the proposed intelligent trainer\nframework is shown in Fig. 2. The inner layer, i.e., the\nB.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 9, |
| "total_chunks": 45, |
| "char_count": 595, |
| "word_count": 90, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "78d97240-fe17-4151-ac01-093c77fd4fb8", |
| "text": "RL Elements of TPE\nTraining Process Environment (TPE) is a standard modelbased DRL system utilizing the model as a data source to For the interaction between TPE and the intelligent trainer,\ntrain the target controller. The training data are provided by we define three interfaces State, Action, and Reward of TPE as\nthe physical environment, which represents the real-world follows. To distinguish the RL components in different layers,\nsystem, and the cyber environment, which is an emulator of the in the following, superscript ξ is used to indicate variables in\nphysical system. The emulator can be either knowledge-based the target controller layer, while Ξ is used to indicate variables\nor learning-based (e.g., a neural network prediction model). in the intelligent trainer layer. The outer layer, i.e., the intelligent trainer, is also an RL agent • State: The state is a vector that is exposed to an outthat controls and optimizes the sampling and training process side agent who can use the state to access the training\nof the target controller in the real and cyber environment via progress. Ideally one can put as much information as\nfeedbacks and action outputs. Thus, the proposed framework possible into the state design to measure the training\ncan be considered as a \"reinforcement on reinforcement\" progress.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 10, |
| "total_chunks": 45, |
| "char_count": 1325, |
| "word_count": 212, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ea1fb672-e004-4855-917a-cf52dd39cac4", |
| "text": "However, we found that using a constant (zero)\narchitecture. Such modularized design can easily work for to represent the TPE state can still work as such simple\ndifferent kinds of target controller training algorithms (such setting allows the trainer to learn a good action quickly.\nas DDPG, TRPO) and the extra layer of intelligent trainer can We also test other more informative state representation\nbe any optimizer that can output the control action when given designs, such as using the last average sampling reward or\na TPE observation. the normalized sampling count. They can achieve better\nIn the following we first present the inner layer of the pro- performance in certain cases. A comparative study of\nposed trainer framework to introduce how we encapsulate the these different designs are provided in Section IV.\nstandard training process of MBRL as an RL environment. We • Action: the action interface comprises three controllable\ndesign the TPE with two goals. First, it should be formulated parameters that are exposed to an outside agent who can\nas a standard RL environment, such that any agent can interact utilize these actions to control the training progress, as\nwith it. Second, the TPE shall expose the action interface mentioned in Fig. 1.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 11, |
| "total_chunks": 45, |
| "char_count": 1264, |
| "word_count": 206, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5f418f72-6a8b-4307-953e-fce2de110010", |
| "text": "We represent these parameters as probability values, all defined in the range of [0, 1]. Such the number Kc of cyber data to sample in each step\nnormalized action range can simplify the design of the by\ntrainer agent. Details of the three control parameters will Kr · (1 −a2) Kc = . (3)\nbe given subsequently. a2\n– Action a0 decides whether one should train the The rationale of such design is to bound the action\nmodel into a local or global model, which is achieved in the range [0, 1], which can ease the design of\nby controlling the starting state of a new episode an agent that interplays with the TPE. The sampled\nwhen the target controller samples from the real data from the real environment are stored in a\nenvironment. A quality function Φ is defined to select real memory buffer, while the cyber data sampled\nthe starting points, controlled by a0: from the cyber environment are stored in the cyber\nmemory buffer.\nΦ(s) = a0 · Qξ(s, π(s)) + (1 −a0) · u[0,1], (1) For the training part, a2 is also used to set the\nwhere Qξ is the value produced by critic network of probability of taking a mini-batch from the real\ndata memory buffer in training the target controller. the target controller, π is the current policy, and u[0,1]\nNaturally, 1 −a2 represents the probability to take is a random number drawn from [0, 1]. With this\na mini-batch from the cyber data buffer. With fixed quality function, we keep sampling random starting\nbatch size, if we train with Tr batches of real data, points in the physical environment until a highthen Tc batches of cyber data are used in this step: quality starting point is found, as shown in Algorithm\n1. In one way, when a0 approaches to one, initial Tr · (1 −a2)\nTc = . (4)\nstates with a higher Q value are likely to be selected, a2\nwhich will generate more data with high Q value. 
Note that we use only one action to control both the\nWhen these data are used to train the model, the\nsampling and training process to accommodate some\nmodel will be more accurate in a high Q value\nDRL algorithms, such as TRPO, where the sampling\nsubspace, benefiting the local exploitation in it. In\nand training process cannot be decoupled.\nthe other way, when a0 approaches zero, the quality\n• Reward: The reward interface is used to measure the will be a random number and the starting point will\nperformance of the target controller. Note that the only be a random state to favor global exploration.\nreliable information we can get from the training process – Action a1 decides whether one should utilize the\nis the reward data we collected when sampling from the model in a local or global manner, which is achieved\nreal environment. These reward data can be manipulated by controlling the starting state of a new episode\ninto various reward definitions for the trainer; one design when the target agent samples from the cyber enviof the reward rΞ is ronment. The starting state of an episode also matters\nin the cyber environment. For example, we can select rΞ = sign(¯rξt+1 −¯rξt ), (5)\na starting state s from the real data buffer B.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 12, |
| "total_chunks": 45, |
| "char_count": 3073, |
| "word_count": 554, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "48f1e0e3-ab05-41b3-bcb5-13d687ebd84b", |
| "text": "In\nthis case, the subsequent sampling process will be where ¯rξt+1 and ¯rξt are the respective average sampling\na local search process similar to the imagination reward of the target controller at step t + 1 and t from\nprocess used in [9] and is more likely to generate real environment. This means, as long as the reward\nsamples that are of high prediction accuracy as is increasing, the current training action is considered\nthe model has explored nearby samples in the real acceptable.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 13, |
| "total_chunks": 45, |
| "char_count": 488, |
| "word_count": 84, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5ed16015-e073-4ec1-a01f-0aedf795d936", |
| "text": "Although such a simple design allows the\nenvironment. Alternatively, we can use a data point trainer to learn the settings quickly, it may not be effective\nsrand randomly selected from the state space to favor in all practical cases, especially in the case where the\nexploration. It thus can control the trade-off between cyber data does not degrade the performance but prolongs\nexploitation and exploration during the sampling the convergence. A more effective order-based reward\nprocess. In our design, a1, with 0 ≤a1 ≤1, design is used in the ensemble trainer in Section III-B.\nrepresents the probability of choosing starting state Note that we can only utilize the reward information\ns0 from the real data buffer, as received when sampling from the real environment to\nmeasure the performance of the target controller. To avoid\n( s ∈B, if u[0,1] ≤a1 additional sampling cost, we have to rely on the original s0 = (2)\nsrand, otherwise, sampling process form the real environment, which is\nwhy we set the number of real data sampled in each step\nwhere u[0,1] is a uniformly distributed random num- to be fixed, as otherwise we may not receive a stable\nber drawn from [0, 1]. evaluation of the target controller.\n– Action a2 decides how many cyber data are sampled\nand used in training. For the sampling part, a2 is\nC. Problem Formulation set to the ratio of the number of real data sampled\nto the total data sampled (real and cyber) in this Based on the defined TPE environment, the problem to solve\ntraining step. Recall that in each step we sample a in this paper is formulated as follows. Given a target controller\nfixed number Kr of real data samples. a2 controls to train by a MBRL algorithm with a given maximum number Initialization only one target controller is involved in training and all trainer\nactions are tested in a single streamline of training. 
This \"uniTPE head\" trainer needs to learn quickly with limited training time\nEntrance steps and samples. Several trainer learning algorithms, like\nTrainer DQN and REINFORCE, can be used to tackle this problem. Sampling in Sampling in generates In the following, we use a DQN controller to demonstrate the\nreal cyber new trainer design. A comparison of different trainer designs is\nEnvironment Environment control TPE given in Section IV-C.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 14, |
| "total_chunks": 45, |
| "char_count": 2303, |
| "word_count": 388, |
| "chunking_strategy": "semantic" |
| }, |
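| The starting-state rule in Eq. (2) from the chunk above can be sketched in a few lines of Python; the function and variable names are ours, not from the paper:

```python
import random

def choose_start_state(real_buffer, sample_random_state, a1):
    """Eq. (2): with probability a1 reuse a starting state from the real
    data buffer B (exploitation); otherwise draw s_rand from the state
    space (exploration)."""
    if random.random() <= a1 and real_buffer:
        return random.choice(real_buffer)   # s0 drawn from B
    return sample_random_state()            # s0 = s_rand

# a1 = 1.0 always exploits the buffer; a1 = 0.0 (almost) always explores
buffer = [(0.1, 0.2), (0.3, 0.4)]
s0 = choose_start_state(buffer, lambda: (0.0, 0.0), a1=1.0)
```

Setting a1 closer to 1 biases sampling toward previously visited real states, which is exactly the exploitation/exploration trade-off the text describes.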
| { |
| "chunk_id": "303cca56-4fa7-4de1-8b37-5597bc36e4bb", |
| "text": "We implement a specialized DQN trainer that carries out discretized control actions with a relatively small-scale Q network. At each time step, the trainer evaluates all the actions with the Q network and selects the action with the highest Q value. The training of the DQN controller follows the standard epsilon-greedy exploration [13] strategy. To enhance the training stability, the DQN controller is equipped with a memory, like the replay buffer in DDPG [3]. As such, the trainer can extract good actions from the noisy data received from TPE. During the experiment, we notice that samples from merely one single\nFig. 3. Work flow of the uni-head intelligent trainer.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 15, |
| "total_chunks": 45, |
| "char_count": 801, |
| "word_count": 133, |
| "chunking_strategy": "semantic" |
| }, |
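| A minimal sketch of the action selection just described, for a DQN trainer over a small discrete action set; the dictionary-based Q function is a stand-in for the paper's Q network:

```python
import random

def select_trainer_action(q_values, actions, epsilon):
    """Epsilon-greedy selection: with probability epsilon pick a random
    action; otherwise evaluate every discretized action with the Q
    function and take the one with the highest Q value."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values[a])

q = {(0.2, 0.2, 0.2): 0.1, (1.0, 1.0, 1.0): 0.7}
best = select_trainer_action(q, list(q), epsilon=0.0)  # greedy choice
```

Because the discretized action set is tiny, evaluating every action exhaustively (rather than maximizing over a continuous space) is cheap, which is the point of the small-scale Q network.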
| { |
| "chunk_id": "1c87ad24-68c1-4a3f-91a0-01970b1726b1", |
| "text": "In the initialization, we create the corresponding TPE. After that, the training iterates until the total number of real data samples reaches the budget limit N.\nof samples to collect from the physical environment, we encapsulate it into the TPE defined above, and aim to train a trainer in an online manner to maximize the accumulated reward received from this TPE:\nmax_{πΞ} Σ_{t=1}^{tmax} rΞ(t), (6)\nwhere πΞ is the control policy of the trainer and tmax is the maximum number of trainer steps when the real data sample budget is consumed.\nNote that the problem strictly demands online learning, as re-training from the beginning will incur additional sampling cost. In the following, we will propose different control policy designs and trainer learning methods to accomplish this online learning task.\nIII. TRAINING THE TRAINER\nIn this section we present the outer layer of the RoR architecture, the trainer designs, to tackle the above formulated problem. We first propose the basic intelligent trainer, which utilizes a single DQN controller to do the online learning. Then we propose an enhanced trainer design with multiple trainers to better evaluate the trainer actions, which can even work in some tough situations.\naction could flood the buffer. The homogeneity in actions could prolong or even halt the training of DQN. To solve this problem, for a given action we limit the total number of the samples to M/|A|, where M and |A| are the size of the buffer and the size of the action set, respectively. If the number of samples for a given action exceeds this limit, a new sample will replace a randomly selected old one.\nThe pseudo code of the uni-head intelligent trainer is shown in Algorithm 2, with the detailed implementation of the sampling reset procedure in the real/cyber environment shown in Algorithm 1.\nAlgorithm 1 Sampling Reset Procedure\n1: if the current sampling environment is the real environment then\n2: Initialize data set D = ∅, quality set G = ∅.\n3: for i = 1 : M1 do\n4: Generate one initial state s0 and compute its quality Φ(s0).\n5: Append s0 to D and append Φ(s0) to G.\n6: if i > M2 and Φ(s0) ≥ max(G) then\n7: Break.\n8: end if\n9: end for\n10: Return the last state of D.\n11: else\n12: if u[0,1] < a1 then\n13: Randomly select a state s from the real data memory.\n14: Set the cyber environment to state s.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 16, |
| "total_chunks": 45, |
| "char_count": 2325, |
| "word_count": 410, |
| "chunking_strategy": "semantic" |
| }, |
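| The real-environment branch of the sampling reset procedure (Algorithm 1, lines 1 to 10) can be sketched as follows; `sample_state` and `quality` stand in for the paper's initial-state generator and quality function Φ:

```python
def reset_real_env(sample_state, quality, m1=50, m2=5):
    """Draw up to m1 candidate starting states; once more than m2 have
    been tried, stop early as soon as the newest candidate's quality is
    the best seen so far, and return the last state drawn."""
    states, qualities = [], []
    for i in range(1, m1 + 1):
        s0 = sample_state()
        states.append(s0)
        qualities.append(quality(s0))
        if i > m2 and qualities[-1] >= max(qualities):
            break
    return states[-1]

candidates = iter([1, 2, 3, 4, 5, 10])
s0 = reset_real_env(lambda: next(candidates), quality=lambda s: s)
# the 6th draw (value 10) is both past m2 and the best so far, so s0 == 10
```

The early-stop condition only fires when the newest candidate matches the running maximum, so the procedure trades a bounded number of reset trials (at most m1) for a higher-quality starting state.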
| { |
| "chunk_id": "f4e1976c-436c-4392-b6ec-1000424af445", |
| "text": "15: Return s.\n16: else\n17: Randomly initialize the cyber environment.\n18: Return the current state of the cyber environment.\n19: end if\n20: end if\nA. Intelligent Trainer\nWe design an RL intelligent trainer to optimize the control actions a0, a1, and a2 of the TPE in an online and on-policy manner. The interaction workflow of the trainer with the TPE is shown in Fig. 3. At each trainer step the trainer generates an action setting, with which the TPE advances for one time step. Such a process is equal to the MBRL training process with online parameter adaptation. Note that with such design,", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 17, |
| "total_chunks": 45, |
| "char_count": 583, |
| "word_count": 102, |
| "chunking_strategy": "semantic" |
| }, |
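| The per-step interaction described above (trainer emits an action setting, the TPE advances one MBRL training step, the trainer updates online) can be sketched with toy stand-ins for both sides; all class and method names here are illustrative only:

```python
class ToyTPE:
    """Stand-in TPE: every step returns the same state and a unit reward."""
    def reset(self):
        return 0.0
    def step(self, action):
        return 0.0, 1.0            # (next TPE state, trainer reward)

class FixedTrainer:
    """Stand-in trainer that always proposes the same (a0, a1, a2)."""
    def act(self, state):
        return (1.0, 0.6, 0.6)
    def update(self, state, action, reward):
        pass                       # a learning trainer would update here

def run_trainer(trainer, tpe, t_max):
    state, total = tpe.reset(), 0.0
    for _ in range(t_max):
        action = trainer.act(state)
        state, reward = tpe.step(action)   # one MBRL training step
        trainer.update(state, action, reward)
        total += reward
    return total
```

One outer-loop iteration here corresponds to one full inner-layer training step, which is what makes the scheme "reinforcement on reinforcement".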
| { |
| "chunk_id": "33132625-fd26-41f2-a589-69cd13a1a690", |
| "text": "Algorithm 2 Intelligent Trainer Enhanced Model-Based DRL Training Algorithm\n1: Initialization: initialize the trainer agent (with a DQN network), the training process environment, and the target controller. Initialize the real data memory and cyber data memory as empty sets. Sample a small data set of size No to initialize the cyber emulator and initialize the real environment.\n2: Set the number of total samples generated from the real environment n = 0. Set the maximum number of samples allowed to use as N.\n3: //Training Process:\n4: while n < N do\n5: Generate action a from the trainer agent.\n6: //One step in TPE:\n7: Train the target controller if there is enough data in its memory buffer.\n8: Sample Kr data points from the real environment according to the sampling reset Algorithm 1, and append the data to the real data memory.\n9: Sample Kc data points from the cyber environment according to the sampling reset Algorithm 1, and append the data to the cyber data memory.\n10: Train the dynamic model.\n11: Update n.\n12: Collect the state, action and reward data of TPE.\n13: Update the trainer agent.\n14: end while\nFig. 4. Work flow of the ensemble trainer, with the major changes made to the uni-head trainer highlighted in bold font. In the initialization, we create three different trainers and their corresponding TPEs. After that, the training iterates until the total number of real data samples reaches the budget limit N. In each training step of the ensemble trainer, the original TPE step is revised to include the reference sampling mechanism, and after the TPE step, we execute the designed ensemble process including memory sharing, order-based reward calculation and weight transfer.\nB. Ensemble Trainer\nIn this subsection, we present a more robust trainer design that learns by comparison to overcome the learning deficiency of the uni-head trainer in certain cases. The uni-head trainer, described previously, for some cases cannot adequately assess the quality of the action, as all actions are tested in a single streamline. In other words, the actions could be correlated and their quality could become indistinguishable.\neach of them can work well in different cases. Note that it is not a trivial task to have an effective ensemble trainer and at the same time not incur additional real data cost, as the samples from different trainers can have different quality, which may degenerate the ensemble's overall performance. In the following, we propose solutions to solve this issue.\n1) Real-Data Memory Sharing: We introduce a memory-sharing scheme to solve the issue of insufficient real data samples for each trainer, as we split the whole real data sample budget evenly among all three trainers in the ensemble.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 18, |
| "total_chunks": 45, |
| "char_count": 3209, |
| "word_count": 527, |
| "chunking_strategy": "semantic" |
| }, |
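| Algorithm 2's outer loop reduces to the following skeleton; the callables are placeholders for the published pseudo-code steps, not the paper's actual implementation:

```python
def train_uni_head(propose_action, tpe, budget_n, k_r):
    """Skeleton of Algorithm 2: one trainer action per TPE step and a
    fixed Kr real samples per step, until the real-sample budget N is
    spent; returns the samples used and the trainer's transition log."""
    n, transitions = 0, []
    while n < budget_n:
        a = propose_action()          # line 5: trainer proposes (a0, a1, a2)
        tpe.train_target_controller() # line 7
        tpe.sample_real(k_r)          # line 8: always Kr real samples
        tpe.sample_cyber(a)           # line 9: cyber samples follow a2
        tpe.train_dynamic_model()     # line 10
        n += k_r                      # line 11: budget accounting
        transitions.append((a, n))    # lines 12-13: data to update trainer
    return n, transitions
```

Keeping Kr fixed per step is what the earlier chunk justified: it makes the reward signal from real sampling a stable evaluation of the target controller.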
| { |
| "chunk_id": "7efe43a9-0e28-4428-b7bb-cfd8fb27df45", |
| "text": "Also, for actions that generate non-negative reward but could lead to slow convergence or a locally optimal policy, the reward function design (6) is unable to accurately assess their quality.\nThe even splitting is necessary for evaluating the target controllers of each trainer. It then follows that each trainer only has one-third of the real data samples in training compared", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 19, |
| "total_chunks": 45, |
| "char_count": 383, |
| "word_count": 61, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "57607b38-6899-48d8-939d-3bd2ec7f32ea", |
| "text": "To address these issues, we propose an ensemble trainer which uses a multi-head training process, similar to the boosted DQN [14]. The design rationale is to diversify the actions on different trainers without posing additional sampling cost, and then evaluate the actions by ranking their performance.\nThe proposed ensemble trainer consists of three different trainers, with the work flow shown in Fig. 4. For trainer 0, its actions are provided by the intelligent trainer; for trainer 1, its actions are provided by a random trainer; trainer 2 uses only real data, which means setting the three actions to 1, 0, and 0 respectively. The settings in trainers 0 and 1 enable the exploitation and exploration of the action space.\nwith the original uni-head trainer. This will cause significant performance degeneration, as will be shown in Section IV. To address this issue, we devise a memory sharing process before the training of the target controller, as shown in Fig. 4. The memory sharing scheme is a pseudo sampling process executed after each trainer has done its sampling process from the real environment and saved these data into its own real memory buffer. Then each trainer will collect the new real data samples from the other trainers. As a result, at each step, each trainer receives Kr new data samples, the same amount of data as in the uni-head training. Note that with memory sharing, the real data from an underperforming target agent could degrade, or even fail, the ensemble performance.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 20, |
| "total_chunks": 45, |
| "char_count": 1500, |
| "word_count": 247, |
| "chunking_strategy": "semantic" |
| }, |
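| The memory-sharing step described above amounts to each trainer appending every trainer's fresh real samples to its own buffer; a minimal sketch with names of our choosing:

```python
def share_real_memory(buffers, fresh_per_trainer):
    """After each trainer collects its Kr/3 fresh real samples, every
    buffer also absorbs the fresh samples of the other trainers, so each
    trainer still gains the full Kr samples per step at no extra
    real-sampling cost."""
    for buf in buffers:
        for fresh in fresh_per_trainer:
            buf.extend(fresh)
    return buffers

bufs = share_real_memory([[], [], []], [["s1"], ["s2"], ["s3"]])
# every trainer now holds all three fresh samples
```

The sharing is purely a buffer operation, which is why the text calls it a pseudo sampling process: no new interaction with the real environment takes place.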
| { |
| "chunk_id": "5031b08a-8b07-4b60-9d00-4de669da9b1f", |
| "text": "To solve this problem, we introduce next a reference sampling scheme.\nTrainer 2 is a normal DRL training process without using the cyber data generated by the dynamic model. The reason we choose to ensemble these three distinct trainers is that they can provide sufficient coverage of different trainer actions, and each of them can work well in different cases.\n2) Reference Sampling: A reference sampling scheme is proposed to maintain the quality of the real data samples introduced by the memory-sharing mechanism. The idea of reference sampling is to select the best trainer, and then to use its target controller for the other trainers to sample real data with a probability pref. In our algorithm, at the first of every three steps, pref is forced to 0. As such, this first step, without reference sampling taking place, serves as an evaluation step for the trainer. In the next two steps, pref is determined by the min function in the following equation:\npref = 0, if mod(tΞ, 3) == 0; pref = min{(φ − φmin)/(φmax − φmin), 1}, otherwise, (7)\nwhere tΞ is the current step number of the trainers, and φ is the skewness ratio, which measures the degree of the outperformance of the best trainer; φmax and φmin are the estimated upper and lower bounds, respectively. The details of φ are given in the weight transfer procedure below, which transfers the weight parameters of the target controller trained by the best trainer to the target controller trained by the DQN trainer.\nWe also utilize the accumulated trainer reward to detect whether the best trainer is significantly better than the other trainers. We calculate a performance skewness ratio to measure the degree of the outperformance of the best trainer:\nφ = (Rb − Rm) / (Rb − Rw), (9)\nwhere Rb, Rm and Rw are the best, median and worst Ri of the three trainers, respectively. The skewness ratio is used to determine pref as shown above.\nAlgorithm 3 shows the operational flow of the ensemble trainer. In summary, the ensemble trainer evaluates the quality of the actions by sorting the rewards received by the target controllers. With such", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 21, |
| "total_chunks": 45, |
| "char_count": 1998, |
| "word_count": 340, |
| "chunking_strategy": "semantic" |
| }, |
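| Equations (7) and (9) translate directly into code; the default bounds φmin = 0 and φmax = 1 and the tie-handling branch are our assumptions, since the chunk only says the bounds are estimated:

```python
def skewness_ratio(rewards):
    """Eq. (9): phi = (Rb - Rm) / (Rb - Rw) for the best, median and
    worst accumulated trainer rewards."""
    r_best, r_med, r_worst = sorted(rewards, reverse=True)
    if r_best == r_worst:
        return 0.0                      # all trainers tie (our choice)
    return (r_best - r_med) / (r_best - r_worst)

def reference_probability(t, phi, phi_min=0.0, phi_max=1.0):
    """Eq. (7): every third trainer step is a pure evaluation step with
    pref = 0; otherwise pref is the normalized skewness, capped at 1."""
    if t % 3 == 0:
        return 0.0
    return min((phi - phi_min) / (phi_max - phi_min), 1.0)
```

A φ near 1 means the best trainer's reward sits far above the median, so the other trainers sample mostly with the best trainer's controller; a φ near 0 keeps the trainers independent.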
| { |
| "chunk_id": "459898d4-6f53-4f4e-b6d6-33c99431c02d", |
| "text": "design, the better the performance of the best trainer, the higher pref will be used.\n3) Order-based Trainer Reward Calculation: The rewards of the trainers in the ensemble trainer are designed by ordering the performance of the different trainers. After the training process of the target controllers of all trainers, for each trainer we calculate the average sampling reward of its corresponding target controller ¯rξi as the raw reward of this trainer. Note that ¯rξi is different from the sign reward used in (6). We sort the tuple (¯rξ0, ¯rξ1, ¯rξ2) in an ascending order. We then define the index of ¯rξi in the sorted tuple as the reward ˆrΞi of trainer i. The rationale is that if the action of a trainer is good for training, it should help the trainer to achieve better performance (measured by the average sampling reward).\nNote that with the above reward design, the trainers will generate three data samples at the trainer level in each step, and all these data will be used to update the intelligent trainer. Due to the reference sampling mechanism, the order information may not correctly measure the performance of the trainers. To solve this issue, we will throw away these samples when pref is not zero.\n4) Weight Transfer: After the collection of the trainer reward data, we add a particular weight transfer mechanism to solve the issue that some target agent may fail due to unfavorable trainer actions.\nThe ensemble trainer can maintain the training quality by the memory sharing scheme, without incurring additional sampling cost. It can maintain the sample quality by reference sampling. It can recover an underperformed trainer from poor actions. Though saving on the sampling cost, the ensemble trainer requires three times the training time. The increased training time can be partially reduced by the early stop of some underperformed trainers when necessary.\nIV. NUMERICAL EVALUATIONS\nIn this section, we evaluate the proposed intelligent trainer and ensemble trainer for five different tasks (or cases) of OpenAI gym: Pendulum (V0), Mountain Car (Continuous V0), Reacher (V1), Half Cheetah ([15]), and Swimmer (V1).\nA. Experiment Configuration\nFor the five test cases, different target controllers with promising published results are used: DDPG for Pendulum and Mountain Car; TRPO for Reacher, Half Cheetah, and Swimmer. The well-tuned parameters of the open-sourced codes [16] [17] are used for the hyper-parameter settings of the target controller (including Kr and Tr, as defined in Section II). Simple neural networks with the guidelines provided in [25] are used for the cyber models. As our experiments have shown, it is very useful to normalize both input and output for the dynamic model.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 22, |
| "total_chunks": 45, |
| "char_count": 2643, |
| "word_count": 422, |
| "chunking_strategy": "semantic" |
| }, |
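| The order-based reward from item 3) only uses the ranking of the three raw average sampling rewards, which a few lines make concrete (assuming no exact ties among the raw rewards):

```python
def order_based_rewards(raw_rewards):
    """Sort the raw rewards ascending and give each trainer the index of
    its own value in that order: the worst trainer gets 0, the best gets
    len(raw_rewards) - 1. Only the ordering matters, not the magnitudes."""
    ordering = sorted(raw_rewards)
    return [ordering.index(r) for r in raw_rewards]

ranks = order_based_rewards([-7846.0, -7456.0, -7724.0])
# ascending order is -7846 < -7724 < -7456, so ranks == [0, 2, 1]
```

Because the reward is a rank, it stays informative even when cyber data merely slows convergence without changing the sign of the raw reward, which is the failure mode of the sign reward in (6).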
| { |
| "chunk_id": "25aaac68-74d2-448c-8ba0-357c411d9d1d", |
| "text": "The rationale is that after collecting the reward information for a certain large number of steps, we can judge which trainer is currently the best one with high confidence. In this case, we can transfer the best target agent to the other trainers, such that those trainers who fall behind can restart from a good position. In particular, after the trainer reward data are collected, we examine the number of steps nc that have been taken since the last weight transfer. If nc is larger than a threshold C, we compute an accumulative reward for each trainer in the last nc steps as:\nRi(tΞ) = Σ_{j ∈ {nc−1, ..., 0}} ˆrΞi(tΞ − j), (8)\nwhere tΞ is the index of the current trainer step. The trainer with the maximum Ri will be set as the best trainer. We then examine if the DQN trainer is the best; if not, we will transfer the weight parameters of the target controller trained by the best trainer to the target controller trained by the DQN trainer.\nIn this paper, we use the normalization method provided by [16], in which the mean and standard deviation of the data are updated during the training process. For the hyperparameters M1 and M2 used in the reset procedure in Algorithm 1, we set M1 = 50 and M2 = 5 respectively, which indicates that we have maximum and minimum trial numbers of 50 and 5, respectively.\nB. Comparison of Uni-Head Intelligent Trainer with Baseline Algorithms\nMultiple variants of the uni-head intelligent trainer are compared with baseline algorithms. There are three baseline algorithms and four intelligent trainers. Their designs are summarized in Table I. The three baseline algorithms are:\n• The NoCyber trainer is a standard DRL training process without using cyber data.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 23, |
| "total_chunks": 45, |
| "char_count": 1569, |
| "word_count": 264, |
| "chunking_strategy": "semantic" |
| }, |
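| Eq. (8) and the weight-transfer trigger can be sketched as below; copying the actual network weights is omitted, and the helper names are ours:

```python
def accumulated_reward(reward_history, n_c):
    """Eq. (8): sum a trainer's last n_c order-based rewards."""
    return sum(reward_history[-n_c:])

def best_trainer_for_transfer(histories, n_c, threshold_c):
    """Once n_c steps have passed since the last transfer (n_c > C),
    return the index of the trainer with the largest accumulated reward;
    otherwise return None (no transfer yet)."""
    if n_c <= threshold_c:
        return None
    totals = [accumulated_reward(h, n_c) for h in histories]
    return max(range(len(totals)), key=totals.__getitem__)

best = best_trainer_for_transfer([[0, 1, 2], [2, 2, 2], [1, 0, 0]], 3, 2)
# accumulated rewards are 3, 6 and 1, so trainer 1 is selected
```

Accumulating over nc steps before comparing is what gives the "high confidence" the text asks for: a single noisy step cannot trigger a transfer.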
| { |
| "chunk_id": "ff6e686f-cd14-4011-a849-ce1e3b7bc8e9", |
| "text": "TABLE I\nCONFIGURATIONS OF DIFFERENT ALGORITHMS.\nNoCyber (baseline): trainer type None; action (1, 0, 0); data source Real; memory size -; TPE state -.\nFixed (baseline): trainer type None; action (0.6, 0.6, 0.6); data source Real & Cyber; memory size -; TPE state -.\nRandom (baseline): trainer type None; action ai ∈ {0.2, 1.0}; data source Real & Cyber; memory size -; TPE state -.\nDQN: trainer type DQN; action ai ∈ {0.2, 1.0}; data source Real & Cyber; memory size 32; TPE state Constant.\nDQN-5 actions: trainer type DQN; action ai ∈ {0.2, 0.4, 0.6, 0.8, 1.0}; data source Real & Cyber; memory size 32; TPE state Constant.\nDQN-larger memory: trainer type DQN; action ai ∈ {0.2, 1.0}; data source Real & Cyber; memory size 2000; TPE state Constant.\nREINFORCE: trainer type REINFORCE; action ai ∈ {0.2, 1.0}; data source Real & Cyber; memory size -; TPE state Constant.\nDQN-TPE V1: trainer type DQN; action ai ∈ {0.2, 1.0}; data source Real & Cyber; memory size 32; TPE state Last sampling reward.\nDQN-TPE V2: trainer type DQN; action ai ∈ {0.2, 1.0}; data source Real & Cyber; memory size 32; TPE state Real sample count.\nAlgorithm 3 Ensemble Trainer Algorithm\n1: Initialization: initialize the three trainer agents and the corresponding training process environments, along with the target controllers. Run the initialization process for each trainer. Initialize the best player to be the NoDyna trainer and the probability to use the best player to sample as pref.\n2: Set the number of total samples generated from the real environment n = 0. Set the maximum number of samples allowed to use as N.\n3: //Training Process:\n4: while n < N do\n5: for trainer i ∈ {0, 1, 2} do\n6: Generate action a from the trainer agent.\n7: //One step in TPE:\n8: Execute the memory sharing procedure.\n9: Train the target controller if there is enough data in its memory buffer.\n10: Sample Kr/3 data points from the real environment with reference sampling probability pref, and append the data to the real data memory.\n11: Sample data from the cyber environment according to the trainer action, and append the data to the cyber data memory.\n12: Share the real data memory across all trainers.\n13: Train the dynamic model of the current trainer.\n14: Update n.\n15: Collect the state, action and raw reward data of TPE.\n16: end for\n17: Compute the reward for each trainer from the raw reward data and calculate the accumulative reward Ri for trainers i = 0, 1, 2.\n18: Store the TPE data of all three trainers into the DQN memory to train the intelligent trainer.\n19: Update the trainer agents.\n20: Execute Algorithm 4 to do the performance skewness analysis and weight transfer, and update pref.\n21: end while\nAlgorithm 4 Performance Skewness Analysis Procedure\n1: if nc > C then\n2: Compute the accumulative reward of trainer i as Ri for i = 0, 1, 2.\n3: Update the best trainer index as arg maxi(Ri).\n4: Compute the skewness ratio φ for the best player.\n5: Update the best player reference probability pref according to (7).\n6: if the DQN trainer is not the best trainer then\n7: Do weight transfer from the best trainer to the DQN trainer.\n8: end if\n9: Reset nc = 0.\n10: end if\nTABLE II\nNUMBER OF TOTAL TPE STEPS FOR DIFFERENT TASKS.\nPendulum: 1000; Mountain Car: 30000; Reacher: 1000; Half Cheetah: 400; Swimmer: 200.\nvary in practice), but to figure out if the proposed trainer can select the better action among the predefined action value set.\nWe notice that, for some tasks, the total number of steps of the TPE is only 200, as shown in Table II. To simplify the learning process, we discretize each dimension of the trainer action.\nThe four intelligent trainers are:\n• DQN trainer. The trainer action chooses from the two values of 0.2 and 1.0, like the Random trainer. That is, ai ∈ {0.2, 1} for i = 0, 1, 2. The DQN controller is trained with a memory buffer of size 32. At each time step, four randomly selected batches of batch size eight are used to update the controller.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 24, |
| "total_chunks": 45, |
| "char_count": 3402, |
| "word_count": 613, |
| "chunking_strategy": "semantic" |
| }, |
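| Since each trainer action dimension is discretized (ai ∈ {0.2, 1.0} for the DQN trainer, five values for DQN-5 actions), the joint action set the Q network has to score is just a Cartesian product:

```python
from itertools import product

def trainer_action_set(values=(0.2, 1.0), dims=3):
    """All joint settings of (a0, a1, a2) over the per-dimension values;
    the DQN trainer only ever has to rank this small finite set."""
    return list(product(values, repeat=dims))

actions = trainer_action_set()                           # 2^3 = 8 actions
actions5 = trainer_action_set((0.2, 0.4, 0.6, 0.8, 1.0)) # 5^3 = 125 actions
```

The jump from 8 to 125 joint actions is the cost of the finer discretization tested by the DQN-5 actions variant.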
| { |
| "chunk_id": "ccb27d89-fed0-49ab-b072-ce41ba15108d", |
| "text": "For exploration purposes, the epsilon-greedy method is used, with the first 10% of the trainer steps used for epsilon-greedy exploration and the final epsilon set to 0.1. Note that the setting 0.6 used in the Fixed trainer is the expected mean of the actions from the intelligent trainer if the trainer predicts uniformly random actions.\n• The Fixed trainer follows the standard MBRL, with all actions set to 0.6 throughout the training process.\n• The Random trainer outputs action 0.2 or 1.0 with equal probability. The same action values will be used by the DQN trainer. These values are picked such that an extensive amount of cyber data can be used in the training; for example, when a2 is set to 0.2, the amount of cyber data sampled is five times the real data sampled. The value 0.2 is chosen without any tuning, i.e., it is not tuned to make the DQN trainer work better. Our focus is not to find out the best settings of these parameters (as it will vary in practice), but to figure out if the proposed trainer can select the better action among the predefined action value set.\n• DQN-5 actions. To test the effect of more action values in the action discretization, we introduce a second trainer by selecting five values from {0.2, 0.4, 0.6, 0.8, 1}.\n• DQN-larger memory. To test the impact of a larger trainer memory, we introduce a third intelligent trainer with a memory size of 2000. In this case more trainer samples are stored and relatively older historical data are used in training the DQN controller.\nThe fourth intelligent trainer is the same as the DQN trainer except that the DQN controller is replaced by a REINFORCE controller. The REINFORCE algorithm requires data of multiple episodes to train; we manually set five steps (manually tuned) of TPE as an episode.\nThe configurations for these algorithms are summarized in Table I.\nTABLE III\nACCUMULATIVE REWARDS OF DIFFERENT TRAINER VARIANTS WHEN USING DIFFERENT TRAINER AND TPE DESIGNS.\nDQN: Pendulum -43323; Mountain Car 1434.59; Reacher -7846; Half Cheetah 696492; Swimmer 4918.\nDQN-5 actions: Pendulum -43204; Mountain Car 1493.88; Reacher -7724; Half Cheetah 985847; Swimmer 2473.\nDQN-larger memory: Pendulum -41329; Mountain Car 1615.98; Reacher -7831; Half Cheetah 1354488; Swimmer 2142.\nDQN-TPE V1: Pendulum -41869; Mountain Car 1849.96; Reacher -7456; Half Cheetah 868597; Swimmer 1522.\nDQN-TPE V2: Pendulum -46533; Mountain Car 1826.19; Reacher -7478; Half Cheetah 1172288; Swimmer 2233.\nThe test results of the three baseline trainers and four intelligent trainers are shown in Fig. 5. We obtain the test results by periodically evaluating the target controller in an isolated test environment. This ensures that data collection from the\nsevere performance degradation can occur. When cyber data are used, the target controller can be trapped by a local optimum that is difficult to recover from.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 25, |
| "total_chunks": 45, |
| "char_count": 2490, |
| "word_count": 411, |
| "chunking_strategy": "semantic" |
| }, |
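| The exploration schedule described for the DQN trainer (first 10% of trainer steps for epsilon-greedy exploration, final epsilon 0.1) could look like the following; the linear anneal and the starting value of 1.0 are our assumptions, as the chunk does not state them:

```python
def epsilon_at(step, total_steps, eps_start=1.0, eps_final=0.1):
    """Anneal epsilon over the first 10% of trainer steps, then hold it
    at the final value for the rest of training."""
    warmup = max(1, total_steps // 10)
    if step >= warmup:
        return eps_final
    return eps_start + (eps_final - eps_start) * step / warmup
```

With only a few hundred to a few thousand TPE steps per task (Table II), front-loading exploration into the first 10% of steps leaves most of the run for exploiting the learned Q values.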
| { |
| "chunk_id": "e44e4397-ab62-4f11-b268-0349b07f681d", |
| "text": "test environment will not interfere with the training process. In other words, none of the data collected from the test environment is used in the training. We observe that:\n• The tasks of Pendulum, Mountain Car, and Reacher can benefit from the cyber data used in training. For the tasks of Half Cheetah and Swimmer, the NoCyber trainer performs significantly better than the trainers using cyber data. This indicates that using the cyber data may not always be beneficial.\nWe resolve this issue by using the ensemble trainer.\nTo analyze the behavior of the trainer, we show in Fig. 5(f) the actions taken by the DQN trainer for the tasks of Mountain Car, Reacher, and Swimmer during the training process. We observe that for Mountain Car, the mean value of a0 fluctuates around 0.5. This agrees with our observation that for the Mountain Car, the random baseline algorithm performs the best. For Reacher and Swimmer, the trainer quickly learns to", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 26, |
| "total_chunks": 45, |
| "char_count": 924, |
| "word_count": 154, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "52b7e083-2886-4770-b79c-15994e4df609", |
| "text": "use more of the real data, with the mean value of action a2 eventually reaching larger than 0.6. This again indicates the viability of the trainer. Note that for Swimmer, even though the mean value of action a2 is larger than 0.6, the performance of the target controller is still very poor (Fig. 5) due to the training process' sensitivity to cyber data. This again verifies the necessity of an ensemble trainer that can quickly recover from degraded performance during training.\nThus, the use of the cyber model should be considered carefully.\n• In most tasks, the intelligent trainer performs better than the Fixed trainer. For example, DQN-5 actions performs better than the Fixed trainer for the tasks of Mountain Car, Reacher, and Half Cheetah, and performs similarly for the tasks of Pendulum and Swimmer. This indicates the viability of the intelligent trainer.\n• For the tasks of Pendulum and Mountain Car, the Random trainer performs the best. This can be attributed to the fact that adding more noise would encourage exploration.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 27, |
| "total_chunks": 45, |
| "char_count": 1025, |
| "word_count": 169, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e29ebf3b-3a73-42b0-a144-4ef458c70d54", |
| "text": "For example, to achieve better performance, the Mountain Car task requires more exploration to avoid local optima that could lead the target agent to unfavorable searching directions. We also observe that the performance of DQN-5 actions is more stable than that of DQN, due to the increased dimension of the action space that improves the training diversity. We argue that even though the DQN trainer is no better than the Random trainer in these tasks, the DQN trainer is still learning something. The reason is that we are trying to learn a fixed good action through the DQN trainer, which means that the DQN trainer will not be able to provide the randomness which proves to be good in these tasks.\nC. Sensitivity Analysis on Various Trainer and TPE Designs\nWe compare the performances of different trainer and TPE designs to study the performance sensitivity against the implementation variations. In addition to the previously mentioned DQN, DQN-5 actions, and DQN-larger memory, we also test DQN trainers with two different TPE state designs, as also listed in Table I. DQN-TPE V1 adopts the last average sampling reward of the target controller as the state of TPE; DQN-TPE V2 adopts the ratio (a value in the range of [0,1]) of the real samples used to the predefined maximum number of real samples as the state of TPE. Table III presents the accumulative rewards for the five test cases: Pendulum, Mountain", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 28, |
| "total_chunks": 45, |
| "char_count": 1382, |
| "word_count": 231, |
| "chunking_strategy": "semantic" |
| }, |
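The two TPE state designs described above can be sketched directly from their definitions. This is a minimal illustration assuming a per-episode reward history; the exact representation and any scaling are assumptions, not the authors' code.

```python
def tpe_state_v1(episode_rewards):
    """DQN-TPE V1: the state is the last average sampling reward of
    the target controller.  `episode_rewards` is a list of
    per-episode reward lists (an assumed data layout)."""
    last = episode_rewards[-1]
    return sum(last) / len(last)

def tpe_state_v2(real_samples_used, max_real_samples):
    """DQN-TPE V2: the state is the ratio (in [0, 1]) of real samples
    used so far to the predefined maximum number of real samples."""
    return min(real_samples_used / max_real_samples, 1.0)
```

V1 ties the trainer's state to how well the target controller is currently doing, while V2 ties it to how far along the real-sample budget is.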
| { |
| "chunk_id": "b5142530-7026-498e-b815-21acdaf0e3b6", |
| "text": "Also we can observer that for the Car, Reacher, Half Cheetah, and Swimmer. Half Cheetah task, the DQN trainer is much better than\n• For Mountain Car, Reacher and Half Cheetah, DQN-5 the Random trainer. This suggests that the DQN trainer\nactions, DQN-larger memory, DQN-TPE V1 and DQN- can indeed learn in an online manner. TPE V2 consistently outperform DQN.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 29, |
| "total_chunks": 45, |
| "char_count": 358, |
| "word_count": 60, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0334e13c-45bb-425b-83d5-7a8d92b4d2ac", |
| "text": "This indicates • We further examine the effect of using cyber data when\nthat for some applications, the intelligent trainer that uses it seems not working. For the Half Cheetah, we examine\nmore action selections, larger memory, or more informa- the results of multiple independent runs and cyber data\ntive state representation can achieve better performance. causes instability in performance, resulting in higher\nThe results hint that a smart design of trainer or TPE can variance and low mean reward in ten independent tests.\ncompensate the situation of lack of training data. For Swimmer, the poor performance with cyber data is\n• For Swimmer, we observe that none of the tested variants due to a special feature that the first two dimensions\nof DQN or TPE can achieve satisfying performance.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 30, |
| "total_chunks": 45, |
| "char_count": 795, |
| "word_count": 130, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a1d75dde-a85a-4d56-aff3-07debc923d62", |
| "text": "This are linearly correlated in its state definition. The trained\nis due to the fact that even a very small amount of cyber cyber model in this case is unable to correctly identify\ndata can cause the target controller to be trapped in a this feature and predict the state transition. Our results\nlocal minimum that cannot be recovered. show that even incorporating 10% cyber data in training, −750 50 −20\nNoCyber NoCyber NoCyber\n25 Reward −1000 Fixed Reward Fixed Reward Fixed\nRandom 0 Random −30 Random\n−1250 DQN DQN DQN\n−25\nDQN-5 actions DQN-5 actions DQN-5 actions\n−1500 DQN-larger memory −50 DQN-larger memory −40 DQN-larger memory\nREINFORCE REINFORCE REINFORCE\n−1750 −75\n0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 0.0 0.5 1.0 1.5 2.0 2.5\nNumber of real data samples used 1e4 Number of real data samples used 1e4 Number of real data samples used 1e6 (a) Pendulum V0 (b) Mountain Car Continuous V0 (c) Reacher V1 1.0\n10000 250 Mountain Car\nNoCyber Reacher\n7500 Fixed 0.8 Swimmer 200\nRandom\n5000 DQN\n2500 150 DQN-5 actions 0.6 DQN-larger memory value\n0 NoCyber 100 REINFORCE Reward Reward Action 0.4 −2500 Fixed\nRandom\n−5000 DQN 50\n−7500 DQN-5 actions 0.2\nDQN-larger memory 0\n−10000 REINFORCE\n0.0 0.5 1.0 1.5 2.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0\nNumber of real data samples used 1e6 Number of real data samples used 1e6 TPE steps (normalized into range [0, 1]) (d) Half Cheetah (e) Swimmer V1 (f) Trainer action a2 Accumulative rewards of different uni-head trainer designs for different tasks in (a)-(e). The curves show the average accumulative reward while the\nshaded region shows the standard deviation of the reward in ten independent runs.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 31, |
| "total_chunks": 45, |
| "char_count": 1675, |
| "word_count": 292, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "12fc5414-ae81-4d97-967c-13ae69dd3288", |
| "text": "The proposed uni-head trainer shows its adaptability (better than the Fixed\ntrainer) but may fail in certain cases like Swimmer. (f) shows the mean action a2 taken by DQN trainer on tasks of Mountain Car, Reacher, and Swimmer. Ensemble 125\nNoCyber\n−250\nRandom 100 −10\nDQN\n−500 75\nReward −1000−750 Reward 5025 Reward −20\n0 −30\n−1250\nEnsemble Ensemble\n−25\nNoCyber NoCyber\n−1500\n−50 Random −40 Random\nDQN DQN\n−1750\n−75\n0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 0.0 0.5 1.0 1.5 2.0 2.5\nNumber of real data samples used 1e4 Number of real data samples used 1e4 Number of real data samples used 1e6 (a) Pendulum V0 (b) Mountain Car Continuous V0 (c) Reacher V1 7500 1.0\nEnsemble\n5000 NoCyber\nRandom 0.8\n200 DQN 2500 0 150 value 0.6\nReward −2500 Reward 100 Action 0.4\n−5000\nEnsemble 50\n−7500 NoCyber 0.2 Mountain Car\nRandom 0 Reacher\n−10000 DQN Swimmer\n0.0\n0.0 0.5 1.0 1.5 2.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0\nNumber of real data samples used 1e6 Number of real data samples used 1e6 TPE steps (normalized into range [0, 1]) (d) Half Cheetah (e) Swimmer V1 (f) Trainer action a2 Accumulative rewards of ensemble trainer for different tasks in (a)-(e). The proposed ensemble design shows close-to-optimal or even better\nperformance on all cases. (f) shows the mean action a2 taken by the DQN trainer in the ensemble trainer for Mountain Car, Reacher, and Swimmer. 
125 DQN in ensemble\nRANDOM in ensemble\n100 −10 NoCyber in ensemble 25 Reward Reward 100 Reward\n0 −30 −25\nDQN in ensemble DQN in ensemble\n−50 RANDOM in ensemble −40 RANDOM in ensemble 0\nNoCyber in ensemble NoCyber in ensemble\n−750.0 0.5 1.0 1.5 2.0 2.5 3.0 0.0 0.5 1.0 1.5 2.0 2.5 −50 0.0 0.2 0.4 0.6 0.8 1.0\nNumber of real data samples used 1e4 Number of real data samples used 1e6 Number of real data samples used 1e6 (a) Mountain Car Continuous V0 (b) Reacher V1 (c) Swimmer V1 Accumulative reward of different individual trainers of the ensemble trainer on (a) Mountain Car, (b) Reacher, and (c) Swimmer. Trainers' performance\nare tending to fuse except certain extremely under-performed trainers. TABLE IV\nEnsemble\n250 Ensemble without memory sharing SAMPLING SAVING TO ACHIEVE CERTAIN PREDEFINED PERFORMANCE OF\nEnsemble without reference sampling THE ENSEMBLE TRAINER.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 32, |
| "total_chunks": 45, |
| "char_count": 2254, |
| "word_count": 395, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1f130af7-b2fd-4b96-85aa-c1ff3db3a1a5", |
| "text": "THE BASELINE COST IS THE EXPECTED COST OF\n200 THE THREE ALGORITHMS NOCYBER, RANDOM TRAINER AND DQN\nTRAINER.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 33, |
| "total_chunks": 45, |
| "char_count": 107, |
| "word_count": 18, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fdaf8627-0594-48a8-85a8-0610b8305136", |
| "text": "Pendulum Mountain Car Reacher Half Cheetah Swimmer Reward 100\nTarget reward -500 75 -10 2500 100\nSamples saving 26% 36% 2% 38% 56% 0.0 0.2 0.4 0.6 0.8 1.0 as the DQN or Random trainer. For the tasks of Swimmer\nNumber of real data samples used 1e6 and Half Cheetah, the ensemble trainer performs as well as\nthe NoCyber trainer, even though the learning process makes\nFig. 8. Accumulative rewards of ensemble trainer and its two variants: without it learn slower in the Half Cheetah task. With the proposed\nmemory sharing and without reference sampling for Swimmer. The proposed\nensemble design shows significant better performance. ensemble trainer, we are more likely to achieve sampling\ncost saving in practice as we it is hard to predict which\nkind of algorithm variant will deliver the best performance in\nadvance. We compute the expected saving in Table IV with theD.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 34, |
| "total_chunks": 45, |
| "char_count": 871, |
| "word_count": 147, |
| "chunking_strategy": "semantic" |
| }, |
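The "samples saving" in Table IV compares the ensemble's sampling cost with the expected (average) cost of the uni-head baselines. A small sketch of that computation follows; the function name and the illustrative numbers in the usage are mine, not from the paper.

```python
def expected_saving(ensemble_cost, unihead_costs, max_samples):
    """Sampling saving of the ensemble relative to the average cost
    of the uni-head baselines (NoCyber, Random, DQN).  A baseline
    that never reaches the target reward (None) is charged the
    maximum number of samples tried, so the reported saving is a
    lower bound on the true saving."""
    capped = [min(c, max_samples) if c is not None else max_samples
              for c in unihead_costs]
    baseline = sum(capped) / len(capped)
    return 1.0 - ensemble_cost / baseline
```

For instance, with hypothetical costs of 8000 and 10000 samples for two baselines, a third baseline that fails within a 12000-sample budget, and an ensemble cost of 6000 samples, the saving is 40%.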
| { |
| "chunk_id": "efb4ba75-6e8b-48e8-b513-5d68c1aac199", |
| "text": "Mitigating Action Correlation with Multi-head Ensemble\nensemble trainer when assuming the baseline sampling cost isTrainer\nthe average cost of the three uni-head trainers NoCyber, DQN\nAs discussed in Section III-B, the purpose of constructing trainer and Random trainer. Note that for tasks Mountain Car,\nan ensemble trainer is to overcome the action correlation Half Cheetah and Swimmer, the uni-head trainer may fail to\nproblem in uni-head trainer. In this subsection, we provide achieve the predefined performance target, in this case we set\nevidence of the virtue of the ensemble design by comparing the cost as the maximum number of samples we tried in the\nits performance with uni-head trainers. The ensemble trainer experiment. That means the expected saving is actually larger\ncomprises a DQN trainer (with TPE state design V2), a than the number shown in Table IV. Random Trainer, and a NoCyber trainer. Following the design In Fig. 6 (f), we observe that the action a2 taken by the DQN\nin Section III-B, these three trainers jointly sample and train varies significantly from the uni-head case. For Swimmer case,\nthree independent target controllers. The target controller of the action a2 gradually converges to one which allows better\nthe best trainer will be used in the test. For the step threshold performance. For Reacher case, we observe a phase transition\nC in weight transfer, it should be set to a TPE step count that in the middle, during which it changes from preferring fewer\na just sufficient number of trajectories (at least one episode) cyber data to more cyber data. This proves that when and\nhas been sampled.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 35, |
| "total_chunks": 45, |
| "char_count": 1637, |
| "word_count": 268, |
| "chunking_strategy": "semantic" |
| }, |
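The ensemble's best-trainer selection and periodic weight transfer described above can be sketched as follows. This is a minimal illustration under stated assumptions: trainers are ranked by a recent average sampling reward, and `copy_weights` is a hypothetical callback standing in for copying network parameters.

```python
def select_best(trainer_rewards):
    """Pick the trainer whose target controller currently has the
    highest recent average sampling reward; its controller is the
    one used at test time.  `trainer_rewards` maps name -> reward."""
    return max(trainer_rewards, key=trainer_rewards.get)

def maybe_transfer(step, C, trainer_rewards, copy_weights):
    """Every C TPE steps (C chosen so at least one full episode has
    been sampled between evaluations), copy the best controller's
    weights to the other controllers -- a sketch of the
    weight-transfer scheme, not the authors' exact code."""
    if step == 0 or step % C != 0:
        return None
    best = select_best(trainer_rewards)
    for name in trainer_rewards:
        if name != best:
            copy_weights(src=best, dst=name)
    return best
```

The paper's choice of C = 3 for most tasks (and C = 100 for Mountain Car) would correspond to how often `maybe_transfer` fires in this sketch.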
| { |
| "chunk_id": "fb29267e-e5c3-4f86-9c37-564510e9b15f", |
| "text": "For such reason we set C = 3 for all how many cyber data should be utilized may be related to the\ntasks except Mountain Car. For Mountain Car task, as in each training progress. For the Mountain Car task, we observe that\nTPE step, only one real sample is generated which is far from it quickly converges to favor more cyber data which is helpful\nenough to evaluate the performance.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 36, |
| "total_chunks": 45, |
| "char_count": 381, |
| "word_count": 71, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f724f2af-56d2-425a-abef-f07adc1a3440", |
| "text": "We set to C = 100 for in this task. This proves that the proposed ensemble trainer\nthis task. The upper and lower bounds φmax and φmin are can assess the control actions better than the uni-head trainer.\nestimated in the experiments, we found that φmax = 0.7 and In Fig. 7, we show the interactions of trainers in the\nφmin = 0.5 work well for all cases. ensemble by presenting individual results of the constituent\nThe results, as presented in Fig. 6, show that the ensemble trainers: DQN in ensemble, RANDOM in ensemble, and\ntrainer achieves overall good performance even in the cases NoCyber in ensemble, for the tasks of Mountain Car, Reacher,\nthe uni-head trainer fails. For the tasks of Pendulum, Mountain and Swimmer (In the following of this paragraph, we omit the\nCar and Reacher, the ensemble trainer performs almost as well term of \"in ensemble\" for the sake of brevity). cases, we can observe that within the ensemble, the original the target controller is either predetermined or can only be\ngood trainer (uni-head) still performs very good. For example, adjusted manually, resulting in both sampling inefficiency and\nfor the Mountain Car task, the Random trainer performs almost additional algorithm tuning cost. In [27] the authors proposed\nas good as the uni-head Random trainer. For task Swimmer, the a model-assisted bootstrapped DDPG algorithm, which uses a\nDQN trainer can now perform as good as the NoCyber trainer, variance ratio computed from the multiple heads of the critic\nwhich proves that the weight transfer process is working as network to decide whether the cyber data can be used or not.\nexpected. The method relies on the bootstrapped DQN design, which is\nTo further examine the effect of memory sharing and not suitable to other cases.\nreference sampling, in Fig. 8 we compare the performance Instead of treating the cyber model as a data source for\nof three different ensemble designs, for the task of Swimmer. 
training, some approaches use cyber model to conduct preAll of them comprise the same three trainers: DQN, Random, trial tree searches in applications, for which selecting the\nand NoCyber, but differ in the incorporated schemes: ensem- right action is highly critical [8] [9]. The cyber model can\nble trainer (with memory sharing and reference sampling); prevent selecting unfavorable actions and thus accelerates the\nensemble trainer without memory sharing (with reference learning of the optimal policy. In [10], the authors introduced a\nsampling); ensemble trainer without reference sampling (with planning agent and a manager who decides whether to sample\nmemory sharing). All these variants are with weight transfer. from the cyber engine or to take actions to minimize the\nThe results show that, without memory sharing, the ensemble training cost. Both approaches focuses on the tree search in\nperformance degrades.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 37, |
| "total_chunks": 45, |
| "char_count": 2866, |
| "word_count": 466, |
| "chunking_strategy": "semantic" |
| }, |
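Memory sharing and reference sampling lend themselves to a short sketch. Below, `SharedMemory` pools all trainers' real samples, and `best_share` gives the currently best trainer a larger share of real sampling, clipped to the [φmin, φmax] = [0.5, 0.7] bounds reported to work well. The margin-based weighting rule itself is an assumption for illustration, not the paper's formula.

```python
class SharedMemory:
    """Memory sharing: every trainer appends its real samples to one
    pool that all three target controllers can train from."""
    def __init__(self):
        self.pool = []

    def add(self, trainer, samples):
        self.pool.extend((trainer, s) for s in samples)

def best_share(avg_rewards, phi_min=0.5, phi_max=0.7):
    """Reference sampling sketch: fraction of real sampling assigned
    to the best trainer, growing with its margin over the runner-up
    and clipped to [phi_min, phi_max] (assumed rule)."""
    vals = sorted(avg_rewards.values(), reverse=True)
    best, second, worst = vals[0], vals[1], vals[-1]
    if best == worst:
        return phi_min  # no clear leader yet: use the lower bound
    margin = (best - second) / (best - worst)
    return max(phi_min, min(phi_max, 0.5 + 0.5 * margin))
```

Skewing real sampling toward the best controller keeps the shared pool from being dominated by transitions collected by underperforming controllers, which is the failure mode observed without reference sampling.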
| { |
| "chunk_id": "2621eea9-9f88-4f47-8dc0-809355159da2", |
| "text": "This is because each of the three action selection which is different to our design that we aim to\nintelligent trainers uses only one-third of the original data select the proper data source in sampling. Some recent works\nsamples (which is why the curve stops at 1/3 of the others investigate integrating model-based and model-free approaches\nin the x-axis). Without reference sampling, the ensemble in RL. In [28] the authors combined model-based and modelperforms very similar to the DQN trainer (Fig. 5). This is free approaches for Building Optimization and Control (BOC),\nbecause without reference sampling, most of the real data where a simulator is used to train the agent, while a real-world\nsamples are from underperformed target controllers of DQN test-bed is used to evaluate the agent's performance. In [29]\nand Random trainers.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 38, |
| "total_chunks": 45, |
| "char_count": 840, |
| "word_count": 134, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "39297f41-ed5b-41bf-ae9a-7872a622264b", |
| "text": "The data from underperformed target the model-based DRL is used to train a controller agent. The\ncontrollers deteriorates the learning process of the NoCyber agent is then used to provide weight initialization for a modeltrainer. The results indicate that memory sharing and reference free DRL approach, so as to reduce the training cost. Different\nsampling are essential for ensemble trainer. to this approach, we focus on directly sample from the model\nto reduce sampling cost in the real environment. RELATED WORKS\nTo build intelligent agents that can learn to accomplish B. AutoML\nvarious control tasks, researchers have been actively studying The method proposed in this paper is a typical AutoML\nreinforcement learning for decades, such as [18]–[22]. With re- [30] solution.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 39, |
| "total_chunks": 45, |
| "char_count": 780, |
| "word_count": 121, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "000997ed-76ea-49ae-828c-b899775699c1", |
| "text": "AutoML aims to develop an algorithm that\ncent advancement of deep learning, DRL [2] has demonstrated can automatically train a high performance machine learning\nits strength in various applications. For example, in [23] a model without human intervention, such as hyper-parameter\nDRL agent is proposed to solve financial trading tasks; in [24] tuning, model selection etc. AutoML has been proposed to\na neural RL agent is trained to mimic the human motor skill solve various specific training tasks such as model compreslearning; in [25] an off-policy RL method is proposed to solve sion for mobile device [31], transfer learning [32], general\nnonlinear and nonzero-sum games. Our research is particularly neural network training [33].\nfocused on model-based RL which can be utilized to reduce Note that most AutoML solutions are proposed to solve\nthe sampling cost of RL, and we propose an AutoML method. supervised learning cases, in which the dataset is usually preIn the following, we briefly review the recent development of acquired. In our case, as the data will be collected by the target\nmodel-based RL and the AutoML studies. controller to train, it actually demands an AutoML solution\nmore than a general supervised learning case. CONCLUSION Despite the significant performance improvement, the high\nsampling cost necessitated by RL has become a significant In this paper we propose an intelligent trainer for online\nissue in practice. To address this issue, MBRL is introduced to model training and sampling settings learning for MBRL\nlearn the system dynamics model, so as to reduce the data col- algorithm. The proposed approach treats the training process of\nlection and sampling cost. In [7] the authors provided a MBRL MBRL as the target system to optimize, and use a trainer that\nfor a robot controller that samples from both real physical monitors and optimizes the sampling and training process in\nenvironment and learned cyber emulator. 
In [26] the authors MBRL.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 40, |
| "total_chunks": 45, |
| "char_count": 1983, |
| "word_count": 316, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1af0de53-a6ac-4c74-ad8f-06ea368e0d48", |
| "text": "The proposed trainer solution can be used in practical\nadapted a model, trained previously for other tasks, to train the applications to reduce the sampling cost while achieve closecontroller for a new but similar task. This approach combines to-optimal performance.\nprior knowledge and the online adaptation of dynamic model, For the future work, the proposed trainer framework can\nthus achieves better performance. In these approaches, the be further improved by adding more control actions to ease\nnumber of samples taken from the cyber environment to train algorithm tuning cost. An even more advanced design is to use", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 41, |
| "total_chunks": 45, |
| "char_count": 622, |
| "word_count": 97, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a82af702-70a8-4119-b69e-3cf4a173c3ec", |
| "text": "one trainer to train different DRL controllers for multiple tasks,\n[19] F. Liu, Reinforcement learning and approximate\nwhich can learn the common knowledge shared by different dynamic programming for feedback control. John Wiley & Sons, 2013,\nDRL algorithms for these tasks. vol. 17.\n[20] D. Wei, \"Policy iteration adaptive dynamic programming\nalgorithm for discrete-time nonlinear systems,\" IEEE Transactions on\nREFERENCES Neural Networks and Learning Systems, vol. 25, no. 3, pp. 621–634,\n2014. [1] R. Barto, Reinforcement learning: An introduction. MIT press, 2018. [21] B. Huang, \"Off-policy reinforcement learning\n[2] V. Wier- for h infty control design,\" IEEE transactions on cybernetics, vol. 45,\nstra, and M. Riedmiller, \"Playing atari with deep reinforcement learn- no. 1, pp. 65–76, 2015.\n[3] T. Tassa, robust controller design for continuous-time uncertain nonlinear systems\nD. Wierstra, \"Continuous control with deep reinforcement subject to input constraints,\" IEEE transactions on cybernetics, vol. 45,\n[4] J. Moritz, \"Trust [23] Y. Dai, \"Deep direct reinregion policy optimization,\" in International Conference on Machine forcement learning for financial signal representation and trading,\" IEEE\nLearning, 2015, pp. 1889–1897. transactions on neural networks and learning systems, vol. 28, no. 3,\n[5] M. Stone, \"Deep reinforcement learning in param- pp. 653–664, 2017.\neterized action space,\" in Proceedings of the International Conference [24] Y. Yu, \"Biomimetic hybrid feedback feedforward neuralon Learning Representations (ICLR), May 2016. network learning control,\" IEEE transactions on neural networks and\n[6] R.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 42, |
| "total_chunks": 45, |
| "char_count": 1633, |
| "word_count": 230, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "83789a80-49af-4fc6-9a21-a0a32603a6bd", |
| "text": "Sutton, \"Dyna, an integrated architecture for learning, planning, learning systems, vol. 28, no. 6, pp. 1481–1487, 2017.\nand reacting,\" ACM SIGART Bulletin, vol. 2, no. 4, pp. 160–163, 1991. [25] R. Wei, \"Off-policy integral reinforcement\n[7] M. Fox, \"Learning to control a learning method to solve nonlinear continuous-time multiplayer nonzerolow-cost manipulator using data-efficient reinforcement learning,\" 2011. sum games,\" IEEE transactions on neural networks and learning sys-\n[8] X. Wang, \"Deep learning for tems, vol. 28, no. 3, pp. 704–713, 2017.\nreal-time atari game play using offline monte-carlo tree search planning,\" [26] S. Abbeel, \"Learning contact-rich manipin Advances in neural information processing systems, 2014, pp. 3338– ulation skills with guided policy search,\" in Robotics and Automation\n3346. (ICRA), 2015 IEEE International Conference on. IEEE, 2015, pp.\n[9] T. Li et al., \"Imagination- [27] G. Boedecker, \"Uncertainty-driven imagination for conaugmented agents for deep reinforcement learning,\" arXiv preprint tinuous deep reinforcement learning,\" in Conference on Robot Learning,\n[10] R.", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 43, |
| "total_chunks": 45, |
| "char_count": 1119, |
| "word_count": 156, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4aec460d-a5dd-4bb3-8579-1fd2675d3a79", |
| "text": "Kosmatopoulos, \"ModelD. Battaglia, \"Learning model- based and model-free plug-and-play building energy efficient control,\"\n[11] Y. Tao, \"Transforming cooling optimization\n[29] A. Levine, \"Neural network\nfor green data center via deep reinforcement learning,\" arXiv preprint\ndynamics for model-based deep reinforcement learning with model-free\n[12] https://bitbucket.org/RLinRL/intelligenttrainerpublic, accessed: 2018-\n[30] I. Sebag et al., \"A brief review[13] R. Barto, Reinforcement learning: An introduction.\nof the chalearn automl challenge: any-time any-dataset learning without MIT press Cambridge, 1998, vol. 1, no. 1.\n2016, pp. 21–30. via bootstrapped DQN,\" in Advances in neural information processing\nsystems, 2016, pp. 4026–4034. [31] Y. Han, \"Amc: Automl for\n[15] https://github.com/berkeleydeeprlcourse/homework/tree/master/hw4, ac- model compression and acceleration on mobile devices,\" in Proceedings\ncessed: 2018-05-06. of the European Conference on Computer Vision (ECCV), 2018, pp.\n[16] https://github.com/pat-coady/trpo, accessed: 2018-05-06. 784–800.\n[17] P. Gesmundo, \"Transfer learning with\nJ. Wu, \"Openai baselines,\" https://github. neural automl,\" in Advances in Neural Information Processing Systems,\ncom/openai/baselines, 2017. 2018, pp. 8356–8365.\n[18] A. Anderson, \"Neuronlike adaptive [33] Y.-H. Seo, \"Nemo: Neuro-evolution\nelements that can solve difficult learning control problems,\" IEEE with multiobjective optimization of deep neural network for speed and", |
| "paper_id": "1805.09496", |
| "title": "Intelligent Trainer for Model-Based Reinforcement Learning", |
| "authors": [ |
| "Yuanlong Li", |
| "Linsen Dong", |
| "Xin Zhou", |
| "Yonggang Wen", |
| "Kyle Guan" |
| ], |
| "published_date": "2018-05-24", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.09496v6", |
| "chunk_index": 44, |
| "total_chunks": 45, |
| "char_count": 1489, |
| "word_count": 182, |
| "chunking_strategy": "semantic" |
| } |
| ] |