[
{
"chunk_id": "caba98b6-2336-4871-8026-5265a3247efd",
"text": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning\nXianfu Chen, Honggang Zhang, Celimuge Wu, Shiwen Mao, Yusheng Ji, and Mehdi Bennis\nMay 2018",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 0,
"total_chunks": 79,
"char_count": 202,
"word_count": 28,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0a5b2c50-40cc-4fe2-96c8-c81f0fe079f2",
"text": "To improve the quality of computation experience for mobile devices, mobile-edge computing (MEC) is a promising paradigm by providing computing capabilities in close proximity within a sliced radio access network (RAN), which supports both traditional communication and MEC services. Nevertheless, the design of computation offloading policies for a virtual MEC system remains challenging. Specifically, whether to execute a computation task at the mobile device or to offload it for MEC server execution should adapt to the time-varying network dynamics. In this paper, we consider MEC for a",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 1,
"total_chunks": 79,
"char_count": 599,
"word_count": 87,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9c91d3c0-09d6-42d8-a35f-51f23bfce405",
"text": "representative mobile user in an ultra-dense sliced RAN, where multiple base stations (BSs) are available to be selected for computation offloading. The problem of finding an optimal computation offloading policy is modelled as a Markov decision process, where our objective is to maximize the long-term utility performance, whereby an offloading decision is made based on the task queue state, the energy queue state as well as the channel qualities between the MU and the BSs.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 2,
"total_chunks": 79,
"char_count": 469,
"word_count": 73,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cb9e9a32-d611-47ae-bf14-cfd5c2b462f2",
"text": "To break the curse of high dimensionality in state space, we propose a double deep Q-network (DQN) based algorithm to learn the optimal policy without a priori knowledge of network dynamics. Motivated by the additive structure of the utility function, a Q-function decomposition technique is combined with the double DQN, which leads to a novel learning algorithm for solving stochastic computation offloading. X. Chen is with the VTT Technical Research Centre of Finland, Finland (e-mail: xianfu.chen@vtt.fi). H. Zhang is with the College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China (e-mail: honggangzhang@zju.edu.cn). C. Wu is with the Graduate School of Informatics and Engineering, University of Electro-Communications, Tokyo, Japan (e-mail: clmg@is.uec.ac.jp). S. Mao is with the Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA (e-mail: smao@ieee.org). Y. Ji is with the Information Systems Architecture Research Division, National Institute of Informatics, Tokyo, Japan (e-mail: kei@nii.ac.jp).",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 3,
"total_chunks": 79,
"char_count": 981,
"word_count": 130,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f8d0989f-dd80-4f08-897f-63bf27817c24",
"text": "M. Bennis is with the Centre for Wireless Communications, University of Oulu, Finland (e-mail: bennis@ee.oulu.fi). Numerical experiments show that our proposed learning algorithms achieve a significant improvement in computation offloading performance compared with the baseline policies. Index Terms: Network slicing, radio access networks, network virtualization, mobile-edge computing, Markov decision process, deep reinforcement learning, Q-function decomposition.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 4,
"total_chunks": 79,
"char_count": 475,
"word_count": 54,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ffb1fadb-0d18-4c87-83bd-9e1dc7851303",
"text": "With the proliferation of smart mobile devices, a multitude of mobile applications are emerging and gaining popularity, such as location-based virtual/augmented reality and online gaming [1]. However, mobile devices are in general resource-constrained, for example, the battery capacity and the local CPU computation power are limited. When executed at the mobile devices, the performance and Quality-of-Experience (QoE) of computation-intensive applications are significantly affected by the devices' limited computation capabilities. The conflict between computation-intensive applications and resource-constrained mobile devices creates a bottleneck",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 5,
"total_chunks": 79,
"char_count": 631,
"word_count": 78,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8a2a53c7-aa69-4fe9-a56a-4669f0f5b888",
"text": "for having a satisfactory Quality-of-Service (QoS) and QoE, and is hence driving a revolution in computing infrastructure [2]. In contrast to cloud computing, mobile-edge computing (MEC) is envisioned as a promising paradigm, which provides computing capabilities within the radio access networks (RANs) in close proximity to mobile users (MUs) [3]. By offloading computation tasks to the resource-rich MEC servers, not only can the computation QoS and QoE be greatly improved, but the capabilities of mobile devices can also be augmented for running a variety of resource-demanding applications. Recently, many efforts have been devoted to the design of computation offloading policies. In [4], Wang et al. developed an alternating direction method of multipliers-based algorithm to solve the problem of revenue maximization by optimizing computation offloading decision, resource allocation and content caching strategy. In [5], Hu et al. proposed a two-phase based method for joint power and time allocation when considering cooperative computation offloading in a wireless power transfer-assisted MEC system. In [6], Wang et al. leveraged a",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 6,
"total_chunks": 79,
"char_count": 1113,
"word_count": 164,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6df2f841-54ad-4b71-b1df-c4ae7243f79f",
"text": "Lagrangian duality method to minimize the total energy consumption in a computation latency constrained wireless powered multiuser MEC system. For a MEC system, computation offloading requires wireless data transmission; hence, how to allocate wireless radio resources between the traditional communication service and the MEC service over a common RAN raises a series of technical challenges. Network slicing is a key enabler for RAN sharing, with which the traditional single ownership of network infrastructure and spectrum resources can be decoupled from the wireless services [7]. Consequently, the same",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 7,
"total_chunks": 79,
"char_count": 610,
"word_count": 88,
"chunking_strategy": "semantic"
},
{
"chunk_id": "030dba53-dcb4-467f-9225-00b3db2001ab",
"text": "physical network infrastructure is able to host multiple wireless service providers (WSPs) [8]. In the literature, there exist several efforts investigating joint communication and computation resource management in such virtualized networks, which support both the traditional communication service and the MEC service [10], [11]. In this work, we focus on designing optimal stochastic computation offloading policies in a sliced RAN, where a centralized network controller (CNC) is responsible for control-plane decisions on wireless radio resource orchestration over the traditional communication and MEC services. The computation offloading policy designs in previous works [4], [5], [10]–[13] are mostly based on one-shot optimization and fail to characterize long-term computation offloading performance. In a virtual MEC system, the design of computation offloading policies should account for the environmental dynamics, such as the time-varying channel quality and the task arrival and energy status at a mobile device. In [14], Liu et al. formulated the problem of delay-optimal computation task offloading under a Markov decision process (MDP) framework and developed an efficient one-dimensional search algorithm to find the optimal solution. However, the challenge",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 8,
"total_chunks": 79,
"char_count": 1273,
"word_count": 176,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6f5886ab-cfd8-420c-8c92-cb6712d25a5d",
"text": "lies in the dependence on statistical information of channel quality variations and computation task arrivals. In [15], Mao et al. investigated a dynamic computation offloading policy for a MEC system with wireless energy harvesting-enabled mobile devices using a Lyapunov optimization technique. The same technique was adopted to study the power-delay tradeoff in the scenario of computation task offloading by Liu et al. [16] and Jiang et al. [17]. However, Lyapunov optimization can only construct an approximately optimal solution. Xu et al. developed in [18] a reinforcement learning based algorithm to learn the optimal computation offloading policy, which at the same time does not need a priori knowledge of network statistics.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 9,
"total_chunks": 79,
"char_count": 704,
"word_count": 107,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2628a880-547f-42d7-a696-4a2e30b34475",
"text": "When the MEC meets an ultra-dense sliced RAN, multiple base stations (BSs) with different data transmission qualities are available for offloading a computation task. The resulting explosion in state space makes the conventional reinforcement learning algorithms [18]–[20]",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 10,
"total_chunks": 79,
"char_count": 258,
"word_count": 36,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c7a6cfa7-8da2-48d3-bfc8-1543200a55be",
"text": "Moreover, in this paper, wireless charging [21] is integrated into a MEC system, which on one hand achieves sustained computation performance but, on the other hand, makes the design of a stochastic computation offloading policy even more challenging.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 11,
"total_chunks": 79,
"char_count": 251,
"word_count": 38,
"chunking_strategy": "semantic"
},
{
"chunk_id": "34daa92c-f217-4c5d-b8ca-608653beb805",
"text": "The main contributions in this work are four-fold. Firstly, we formulate the stochastic computation offloading problem in a sliced RAN as a MDP, in which the time-varying communication qualities and computation resources are taken into account. Secondly, to deal with the curse of state space explosion, we",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 12,
"total_chunks": 79,
"char_count": 306,
"word_count": 47,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1f672622-789f-439a-8449-d4fa0bcb2fa7",
"text": "resort to a deep neural network based function approximator [22] and derive a double deep Q-network (DQN) [23] based reinforcement learning (DARLING) algorithm to learn the optimal computation offloading policy without any a priori knowledge of network dynamics. By further exploring the additive structure of the utility function, we attain a novel online deep state-action-reward-state-action based reinforcement learning algorithm (Deep-SARL) for the problem of stochastic computation offloading. To the best knowledge of the authors, this is the first work",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 13,
"total_chunks": 79,
"char_count": 555,
"word_count": 78,
"chunking_strategy": "semantic"
},
{
"chunk_id": "025bda62-cc42-4c82-8209-f67881315073",
"text": "to combine a Q-function decomposition technique with the double DQN. Numerical experiments based on TensorFlow are conducted to verify the theoretical studies in this paper. The evaluation shows that both of our proposed online learning algorithms outperform three baseline schemes. In particular, the Deep-SARL algorithm achieves the best computation offloading performance.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 14,
"total_chunks": 79,
"char_count": 347,
"word_count": 47,
"chunking_strategy": "semantic"
},
{
"chunk_id": "db3ec768-ef53-4818-8571-755c94e64f44",
"text": "The rest of the paper is organized as follows. In the next section, we describe the system model and the assumptions made throughout this paper. In Section III, we formulate the problem of designing an optimal stochastic computation offloading policy as a MDP. We detail the derived",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 15,
"total_chunks": 79,
"char_count": 282,
"word_count": 47,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2a8a1a43-a064-4cf8-a336-11b23e4fa741",
"text": "online learning algorithms for stochastic computation offloading in a virtual MEC system in Section IV. To validate the proposed studies, we provide numerical experiments under various settings in Section V. Finally, we draw the conclusions in Section VI. In Table I, we summarize the major notations of this paper.\nII. SYSTEM DESCRIPTIONS AND ASSUMPTIONS\nAs illustrated in Fig. 1, we shall consider in this paper an ultra-dense service area covered by a virtualized RAN with a set B = {1, · · · , B} of BSs.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 16,
"total_chunks": 79,
"char_count": 492,
"word_count": 85,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8c3dd2ba-586c-42b6-b968-f5a065d564de",
"text": "Both traditional communication services and MEC services are supported over the common physical network infrastructure. A MEC server is implemented at the network edge, providing rich computing resources for the MUs. By strategically offloading the generated computation tasks via the BSs to the MEC server for execution, the MUs can expect a significantly improved computation experience. We assume that the wireless radio resources are divided into traditional communication and MEC slices to",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 17,
"total_chunks": 79,
"char_count": 478,
"word_count": 68,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b825f8a6-3df5-4874-ae01-feccdeaa2817",
"text": "guarantee inter-slice isolation. All control-plane operations happening in such a hybrid network are managed by the CNC. The focus of this work is to optimize computation performance from the perspective of the MUs, while the design of joint traditional communication and MEC",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 18,
"total_chunks": 79,
"char_count": 273,
"word_count": 43,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f6cf2607-5f78-4df9-961b-28112684b672",
"text": "resource allocation is left for our next-step investigation. In a dense networking area, our analysis\nTABLE I: MAJOR NOTATIONS USED IN THE PAPER.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 19,
"total_chunks": 79,
"char_count": 136,
"word_count": 21,
"chunking_strategy": "semantic"
},
{
"chunk_id": "84858d02-3797-4045-9187-6a343d5168ec",
"text": "W bandwidth of the spectrum allocated for MEC B/B number/set of BSs X/X number/set of network states Y/Y number/set of joint control actions δ duration of one decision epoch\ngb, gjb channel gain state between the MU and BS b\nq(t), qj(t) task queue state of the MU\nq(max)(t) maximum task queue length µ input data size of a task ν required CPU cycles for a task\nf(CPU)(max) maximum CPU-cycle frequency\np(max)(tr) maximum transmit power\na(t), aj(t) computation task arrival\nq(e), qj(e) energy queue state of the MU\nq(max)(e) maximum energy queue length\na(e), aj(e) energy unit arrivals\nχ, χj network state of the MU\nΦ control policies of the MU c, cj task offloading decision by the MU e, ej energy unit allocation by the MU s, sj MU-BS association state u utility function of the MU V state-value function Q, Qk state-action value functions\nθj, θjk DQN parameters\nθj−, θjk,− target DQN parameters\nMj, N j pools of historical experience\nMj,f Ne j mini-batches for DQN training",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 20,
"total_chunks": 79,
"char_count": 974,
"word_count": 169,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e0a389c8-94d0-4557-b353-71c62da2352c",
"text": "hereinafter concentrates on a representative MU. The time horizon is discretized into decision epochs, each of which is of equal duration δ (in seconds) and is indexed by an integer j ∈ N+. Let W (in Hz) denote the frequency bandwidth allocated to the MEC slice, which is shared among the MUs simultaneously accessing the MEC service. This work assumes that the mobile device of the MU is wireless charging enabled and the\nFig. 1: Illustration of mobile-edge computing (MEC) in a virtualized radio access network, where the devices of mobile users are wireless charging enabled, the radio resource is sliced between conventional communication services (the links in black",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 21,
"total_chunks": 79,
"char_count": 819,
"word_count": 127,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c12487e5-f610-4b01-8eb7-a26697a8557d",
"text": "color) and MEC services (the links in blue color), and a centralized network controller (CNC) is responsible for all control-plane decisions over the network. The received energy can be stored in an energy queue. The computation tasks generated by the MU across the time horizon form an independent and identically distributed sequence of Bernoulli random variables with a common parameter λ(t) ∈ [0, 1]. We denote aj(t) ∈ {0, 1} as the task arrival indicator, that is, aj(t) = 1 if a computation task is generated from the MU during a decision epoch j and otherwise aj(t) = 0. Then, Pr{aj(t) = 1} = 1 − Pr{aj(t) = 0} = λ(t), where Pr{·} denotes the probability of the occurrence of an event. We represent a computation task by (µ, ν) with µ and ν being, respectively, the input data size (in bits) and the total number of CPU cycles",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 22,
"total_chunks": 79,
"char_count": 831,
"word_count": 152,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c22f5b78-03a5-4673-a9ea-e66fc491e578",
"text": "required to accomplish the task. A computation task generated at a current decision epoch can be executed starting from the next epoch. The generated but not processed computation tasks can",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 23,
"total_chunks": 79,
"char_count": 189,
"word_count": 30,
"chunking_strategy": "semantic"
},
{
"chunk_id": "975df592-5059-4a3d-8f86-ff87aeefa1b5",
"text": "be queued at the mobile device of the MU. Based on a first-in first-out principle, a computation task from the task queue can be scheduled for execution either locally on the mobile device or remotely at the MEC server.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 24,
"total_chunks": 79,
"char_count": 219,
"word_count": 39,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d574c6e5-ee4e-425b-a2ca-2ad19adc6a18",
"text": "More specifically, at the beginning of each decision epoch j, the MU makes a joint control action (cj, ej), where cj ∈ {0} ∪ B is the computation offloading decision and ej ∈ N+ is the number of allocated energy units1. We have cj > 0 if the MU chooses to offload the scheduled computation task to the MEC server via BS cj ∈ B, and cj = 0 if the MU decides to execute the computation task locally on its own mobile device. When ej = 0, the queued tasks will not be executed. When a computation task is scheduled for processing locally at the mobile device of the MU",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 25,
"total_chunks": 79,
"char_count": 560,
"word_count": 107,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2003fb0a-6988-43d3-8895-2c44ae95e851",
"text": "1An energy unit corresponds to an amount of energy, say, 2 · 10−3 Joules as in numerical experiments. during a decision epoch j, i.e., cj = 0, the allocated CPU-cycle frequency with ej > 0 energy units can be calculated as\nf j = sqrt(ej / (τ · ν)), (1)",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 26,
"total_chunks": 79,
"char_count": 243,
"word_count": 50,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a37cf712-c21d-4ea0-b9ea-95376ad184c2",
"text": "where τ is the effective switched capacitance that depends on the chip architecture of the mobile device [25]. Moreover, the CPU-cycle frequency is constrained by f j ≤ f(CPU)(max). Then the time needed for local computation task execution is given by\ndj(mobile) = ν / f j, (2)",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 27,
"total_chunks": 79,
"char_count": 270,
"word_count": 46,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fcd890b8-2f9c-4000-9aa9-f4e0384d3ccf",
"text": "which decreases as the number of allocated energy units increases. We denote gjb as the channel gain state between the MU and a BS b ∈ B during each decision epoch j, which independently picks a value from a finite state space Gb. The channel gain state transitions across the time horizon are modelled as a finite-state discrete-time Markov chain. At the beginning of a decision epoch j, if the MU lets the MEC server execute the scheduled computation task on behalf of the mobile device, the input data of the task needs to be offloaded",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 28,
"total_chunks": 79,
"char_count": 514,
"word_count": 91,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a6c13974-5bd5-4770-95c6-b42ef0bc1506",
"text": "to the chosen BS cj ∈ B. The MU-BS association has to be established first. If the chosen BS is different from the previously associated one, a handover between the two BSs hence happens. Denote sj ∈ B as the MU-BS association state at a decision epoch j2, which evolves as\nsj = b · 1{{cj−1=b,b∈B}∨{{cj−1=0}∧{sj−1=b}}}, (3)\nwhere the symbols ∨ and ∧ mean "logic OR" and "logic AND", respectively, and 1{Ω} is the indicator function that equals 1 if the condition Ω is satisfied and otherwise 0. We assume that the energy consumption during the handover procedure is negligible at the mobile device.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 29,
"total_chunks": 79,
"char_count": 544,
"word_count": 89,
"chunking_strategy": "semantic"
},
{
"chunk_id": "bd96bb03-138d-4977-bb5d-cf2c2988b591",
"text": "considered dense networking scenario, the achievable data rate can be written as\ngjb · pj(tr)\nrj = W · log2 1 + , (4) where I is the received average power of interference plus additive background Gaussian noise pj(tr) = , (5)\ndj(tr) 2We assume that if the MU processes a computation task locally or no task is executed at a decision epoch j −1, then the\nMU-BS association does not change, namely, sj = sj−1. In this case, no handover will be triggered.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 30,
"total_chunks": 79,
"char_count": 453,
"word_count": 84,
"chunking_strategy": "semantic"
},
{
"chunk_id": "78502ca6-3cc8-4fc2-a20a-b8b0101734bc",
"text": "is the transmit power with being the time of transmitting task input data. The transmit power is constrained by the maximum\ntransmit power of the mobile device p(max)(tr) [26], i.e., pj(tr) ≤p(max)(tr) . (7) In (6) above, we assume that the energy is evenly assigned to the input data bits of the In other words, the transmission rate keeps unchanged during the input data\ntransmission. Lemma 1 ensures that dj(tr) is the minimum transmission time given the allocated\nenergy units ej > 0. Lemma 1: Given the computation offloading decision cj ∈B and the allocated energy units ej > 0 at a decision epoch j, the optimal transmission policy achieving the minimum transmission time is a policy with which the rate of transmitting task input data remains a constant. Proof: With the decisions of computation offloading cj ∈B and energy unit allocation ej > 0 at an epoch j, the input data transmission is independent of the network dynamics. there exists a new transmission policy with which the MU changes its transmission rate from rj,(1) to rj,(2) ̸= rj,(1) at a certain point during the task input data transmission. We denote the\ntime durations corresponding to transmission rates rj,(1) and rj,(2) as dj,(1)(tr) and dj,(2)(tr) , respectively.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 31,
"total_chunks": 79,
"char_count": 1244,
"word_count": 207,
"chunking_strategy": "semantic"
},
{
"chunk_id": "00dac509-aeb0-4e91-900a-27953343b8c3",
"text": "Taking into account the maximum transmit power of the mobile device (7), it is easy to verify that the following two constraints on total energy consumption and total transmitted data bits can be satisfied:\nn o I rj,(1) I rj,(2)\n· 2 W −1 ·dj,(1)(tr) + · 2 W −1 ·dj,(2)(tr) = min ej, dj,(1)(tr) + dj,(2)(tr) ·p(max)(tr) , (8) gj gj cj cj µ. (9) rj,(1) · dj,(1)(tr) + rj,(2) · dj,(2)(tr) = On the other hand, the average transmission rate within a duration of dj,(1)(tr) +dj,(2)(tr) can be written\n n o \n+ dj,(2)(tr) · p(max)(tr) gj min ej, dj,(1)(tr)\n¯rj = W · log21 + cj · . (10)\nI dj,(1) dj,(2) (tr) + (tr) Consider using ¯rj as the only transmission rate during the whole period of task input data transmission, we construct the following deduction ¯rj · dj,(1)(tr) + dj,(2)(tr)\n \nrj,(1) rj,(2)\n2 W −1 · dj,(1)(tr) + 2  W −1 · dj,(2)(tr) \n= W · log21 + · dj,(1)(tr) + dj,(2)(tr)\ndj,(1) dj,(2) (tr) + (tr)\ndj,(1)(tr) dj,(2)(tr)\n≥ rj,(1) · + rj,(2) · · dj,(1)(tr) + dj,(2)(tr)\ndj,(1) dj,(2) dj,(1) dj,(2) (tr) + (tr) (tr) + (tr) µ, (11) = rj,(1) · dj,(1)(tr) + rj,(2) · dj,(2)(tr) = where the inequality originates from the concavity of a logarithmic function. can find that with a constant transmission rate, more data can be transmitted within the same Hence, applying a constant rate for the transmission of µ task input data bits achieves the minimum transmission time, which can be solved according to (4) and (6). This completes the proof. □",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 32,
"total_chunks": 79,
"char_count": 1457,
"word_count": 270,
"chunking_strategy": "semantic"
},
{
"chunk_id": "065dd083-3daa-4125-9d03-1c68a9d54b95",
"text": "Lemma 2: Given the computation offloading decision cj ∈B at a decision epoch j, the input\ndata transmission time dj(tr) is a monotonically decreasing function of the allocated energy units\nej > 0. Proof: By replacing rj in (4) with (6), we get\ngj cj 1 µ 1 log2 1 + · ej · = · . (12)\nI dj(tr) W dj(tr) Alternatively, we take 1 as the solution of (12), which is an intersection of two lines, namely,\ndj(tr)\nℓ1(x) = log2 1 + Icj · ej · x and ℓ2(x) = Wµ · x, for x > 0. From the non-negativity and the\nmonotonicity of a logarithmic function and a linear function, it is easy to see that 1 is a\ndj(tr)\nmonotonically increasing function of ej. Thus, dj(tr) is a monotonically decreasing function of\nej. □ In addition, we assume in this paper that the battery capacity at the mobile device of the MU\nis limited and the received energy units across the time horizon take integer values. Let qj(e)\nbe the energy queue length of the MU at the beginning of a decision epoch j, which evolves according to\nn o\nqj+1(e) = min qj(e) −ej + aj(e), q(max)(e) , (13) where q(max)(e) ∈N+ denotes the battery capacity limit and aj(e) ∈N+ is the number of energy\nunits received by the end of decision epoch j. In this section, we shall first formulate the problem of stochastic computation offloading within the MDP framework and then discuss the optimal solutions. Stochastic Computation Task Offloading The experienced delay is the key performance indicator for evaluating the quality of a task computing experience. The delay of a computation task is defined as the period of time from",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 33,
"total_chunks": 79,
"char_count": 1565,
"word_count": 288,
"chunking_strategy": "semantic"
},
{
"chunk_id": "45d9daa9-b45f-43f6-ac1f-c6fb027c08bf",
"text": "when the task arrives to the computation task queue to when the task is successfully removed Thus the experienced delay includes the computation task execution delay and the task queuing delay. We assume that there is a delay of ζ seconds for control signalling during the occurrence of one handover. With a joint control action (cj, ej) at a decision epoch j, the handover delay can be then given as",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 34,
"total_chunks": 79,
"char_count": 400,
"word_count": 70,
"chunking_strategy": "semantic"
},
{
"chunk_id": "71a69be4-ce1f-4a82-be75-9bdc5789269c",
"text": "hj = ζ · 1{{cj∈B}∧{cj̸=sj}}. (14) According to (2), (6) and (14), we obtain the task execution delay as3\n  dj(mobile), if ej > 0 and cj = 0;   \ndj = hj + dj(tr) + d(server), if ej > 0 and cj ∈B; (15)      0, if ej = 0, where d(server) is time consumed for task execution at the MEC server.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 35,
"total_chunks": 79,
"char_count": 299,
"word_count": 70,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c3f19881-6df2-43b4-962e-5416a797ef4f",
"text": "Due to the sufficient available computation resource at the MEC server, we assume that d(server) is a sufficiently small Notice that if: 1) the MU fails to process a computation task at the mobile device within one decision epoch; or 2) a computation task is scheduled for MEC server execution but the computation result cannot be sent back via the chosen BS within the decision epoch, the task 3In this work, we assume that the BSs are connected to the MEC server via fibre links. Hence, the round-trip delay between the",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 36,
"total_chunks": 79,
"char_count": 521,
"word_count": 91,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5e05d613-a2ab-439b-b7dd-cc9c80654489",
"text": "BSs and the MEC server is negligible. Further, we neglect the time overhead for the selected BS to send back the computation result due to the fact that the size of a computation outcome is much smaller than the input data of a computation task [27]. execution fails and the task will remain in the queue until being successfully executed. dynamics of the computation task queue at the MU can be hence expressed as\nn o\nqj+1(t) = min qj(t) −1{0<dj≤δ} + aj(t), q(max)(t) , (16) where qj(t) is the number of computation tasks in the queue at the beginning of each decision\nepoch j and q(max)(t) ∈N+ limits the maximum number of computation tasks that can be queued\nat the mobile device. There will be computation task drops once the task queue is full. We let\nn o\nηj = max qj(t) −1{0<dj≤δ} + aj(t) −q(max)(t) , 0 , (17)",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 37,
"total_chunks": 79,
"char_count": 816,
"word_count": 151,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c626e1ce-361a-434e-9a08-588a798fa9d0",
"text": "define a computation task drop. If a computation task remains in the queue for a decision epoch, a delay of δ seconds will be incurred to the task. We treat the queuing delay during a decision epoch j equivalently as the length of a task queue, that is, ρj = qj(t) −1{dj>0}. (18) As previously discussed, if dj > δ, the execution of a computation task fails. MU receives a penalty, which is defined by Moreover, a payment is required for the access to MEC service when the MU decides to offload a computation task for MEC server execution. The payment is assumed to be proportional to the time consumed for transmitting and processing the task input data. That is, the payment can",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 38,
"total_chunks": 79,
"char_count": 680,
"word_count": 123,
"chunking_strategy": "semantic"
},
{
"chunk_id": "aa1b27d4-39ce-4637-b914-5c10fc34a44a",
"text": "φj = π · min dj, δ −hj · 1{cj∈B}, (20) where π ∈R+ is the price paid for the MEC service per unit of time. The network state of the MU during each decision epoch j can be characterized by χj =\nn o n o\nqj(t), qj(e), sj, gj ∈X def= 0, 1, · · · , q(max)(t) × 0, 1, · · · , q(max)(e) × B × {×b∈BGb}, where gj =\ngjb : b ∈B . At the beginning of epoch j, the MU decides a joint task offloading and energy\ndef 4allocation decision (cj, ej) ∈Y = {{0} ∪B} × 0, 1, · · · , Q(e) according to the stationary",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 39,
"total_chunks": 79,
"char_count": 495,
"word_count": 116,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c37a37cf-0729-4ec0-805b-9709eddc07c9",
"text": "4To keep what follows uniform, we do not exclude the infeasible joint actions. control policy defined by Definition 1. In line with the discussions, we define an immediate",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 40,
"total_chunks": 79,
"char_count": 171,
"word_count": 28,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c0c6ae2e-0428-48cd-9721-5b64b7852d80",
"text": "utility at epoch j to quantify the task computation experience for the MU, u χj, (cj, ej) = ω1 · u(1) min dj, δ + ω2 · u(2)(ηj) + ω3 · u(3)(ρj) + ω4 · u(4)(ϕj) + ω5 · u(5)(φj), (21) where the positive monotonically deceasing functions u(1)(·), u(2)(·), u(3)(·), u(4)(·) and u(5)(·) measure the satisfactions of the task execution delay, the computation task drops, the task queuing delay, the penalty of failing to execute a computation task and the payment of accessing the MEC",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 41,
"total_chunks": 79,
"char_count": 478,
"word_count": 85,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b287239c-c188-49eb-8bad-15c8019836a6",
"text": "service, and ω1, ω2, ω3, ω4, ω5 ∈R+ are the weights that combine different types of function with different units into a universal utility function. With slight abuse of notations, we rewrite u(1)(·), u(2)(·), u(3)(·), u(4)(·) and u(5)(·) as u(1)(χj, (cj, ej)), u(2)(χj, (cj, ej)), u(3)(χj, (cj, ej)), u(4)(χj, (cj, ej)) and u(5)(χj, (cj, ej)).",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 42,
"total_chunks": 79,
"char_count": 344,
"word_count": 55,
"chunking_strategy": "semantic"
},
{
"chunk_id": "bacde773-352b-493a-9779-87f74f8ba62d",
"text": "Definition 1 (Joint Task Offloading and Energy Allocation Control Policy): A stationary joint task offloading and energy allocation control policy Φ is defined as a mapping: Φ : X →Y. More specifically, the MU determines a joint control action Φ(χj) = Φ(c)(χj), Φ(e)(χj) =\n(cj, ej) ∈Y according to Φ after observing network state χj ∈X at the beginning of each decision epoch j, where Φ = Φ(c), Φ(e) with Φ(c) and Φ(e) being, respectively, the stationary task offloading and energy allocation policies. Given a stationary control policy Φ, the {χj : j ∈N+} is a controlled Markov chain with the following state transition probability\nn o n o\nPr χj+1|χj, Φ χj = Pr qj+1(t) |qj(t), Φ χj · Pr qj+1(e) |qj(e), Φ χj\nY (22)\n· Pr sj+1|sj, Φ χj · Pr gj+1b |gjb .\nb∈B\nTaking expectation with respect to the per-epoch utilities {u(χj, Φ(χj)) : j ∈N+} over the\nsequence of network states {χj : j ∈N+}, the expected long-term utility of the MU conditioned on an initial network state χ1 can be expressed as\n\" #\nV (χ, Φ) = EΦ (1 −γ) · (γ)j−1 · u χj, Φ χj |χ1 = χ , (23) where χ = q(t), q(e), s, g ∈X , g = (gb : b ∈B), γ ∈[0, 1) is the discount factor, and (γ)j−1 denotes the discount factor to the (j −1)-th power. V (χ, Φ) is also named as the state-value function for the MU in the state χ under policy Φ. Problem 1: The MU aims to design an optimal stationary control policy Φ∗= Φ∗(c), Φ∗(e) that\nmaximizes the expected long-term utility performance, V (χ, Φ), for any given initial network state χ, which can be formally formulated as in the following Φ∗= arg max V (χ, Φ), ∀χ ∈X . (24)",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 43,
"total_chunks": 79,
"char_count": 1578,
"word_count": 304,
"chunking_strategy": "semantic"
},
{
"chunk_id": "11e29253-54f8-4bed-b29e-41f141d597ba",
"text": "V (χ) = V (χ, Φ∗) is defined as the optimal state-value function, ∀χ ∈X . Remark 1: The formulated problem of stochastic computation offloading optimization as in Problem 1 is in general a single-agent infinite-horizon MDP with the discounted utility criterion. Nevertheless, (23) can also be used to approximate the expected infinite-horizon undiscounted utility [28]\n\" #\nXJ 1\nU(χ, Φ) = EΦ lim · u χj, Φ χj |χ1 = χ . (25)\nJ→∞ J\nj=1 Learning Optimal Solution to Problem 1 The stationary control policy achieving the optimal state-value function can be obtained by solving the following Bellman's optimality equation [20]: ∀χ ∈X ,\n( )\nV (χ) = max (1 −γ) · u(χ, (c, e)) + γ · Pr{χ′|χ, (c, e)} · V (χ′) , (26)\n(c,e) where u(χ, (c, e)) is the achieved utility when a joint control action (c, e) ∈Y is performed\nunder network state χ and χ′ = q′(t), q′(e), s′, g′ ∈X is the subsequent network state with\ng′ = (g′b : b ∈B). Remark 2: The traditional solutions to (26) are based on the value iteration or the policy iteration [20], which need complete knowledge of the computation task arrival, the received energy unit and the channel state transition statistics. One attractiveness of the off-policy Q-learning is that it assumes no a priori knowledge of the network state transition statistics [20].",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 44,
"total_chunks": 79,
"char_count": 1295,
"word_count": 231,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cceb8072-4539-4c93-b505-88ba6f15818a",
"text": "We define the right-hand side of (26) by\nQ(χ, (c, e)) = (1 −γ) · u(χ, (c, e)) + γ · Pr{χ′|χ, (c, e)} · V (χ′), (27) The optimal state-value function V (χ) can be hence directly obtained from V (χ) = max Q(χ, (c, e)). (28)\n(c,e) By substituting (28) into (27), we get\nQ(χ, (c, e)) = (1 −γ) · u(χ, (c, e)) + γ · Pr{χ′|χ, (c, e)} · max Q(χ′, (c′, e′)), (29)\n(c′,e′) where we denote (c′, e′) ∈Y as a joint control action performed under the network state χ′. practice, the computation task arrival as well as the number of energy units that can be received by the end of a decision epoch are unavailable beforehand.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 45,
"total_chunks": 79,
"char_count": 611,
"word_count": 123,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f05efdaf-ec39-464a-85e6-2126bbd00403",
"text": "Using standard Q-learning, the MU tries to learn Q(χ, (c, e)) in a recursive way based on the observation of network state χ = χj at a current decision epoch j, the performed joint action (c, e) = (cj, ej), the achieved utility u(χ, (c, e)) and the resulting network state χ′ = χj+1 at the next epoch j + 1. Qj+1(χ, (c, e)) = Qj(χ, (c, e)) + αj (1 −γ) · u(χ, (c, e)) + γ · max Qj(χ′, (c′, e′)) −Qj(χ, (c, e)) , (30)\n(c′,e′) where αj ∈[0, 1) is a time-varying learning rate. It has been proven that if 1) the network\nstate transition probability under the optimal stationary control policy is stationary, 2) j=1 αj\nis infinite and j=1(αj)2 is finite, and 3) all state-action pairs are visited infinitely often, the Q-learning process converges and eventually finds the optimal control policy [19]. condition can be satisfied if the probability of choosing any action in any network state is non-zero (i.e., exploration). Meanwhile, the MU has to exploit the most recent knowledge of Q-function in order to perform well (i.e., exploitation). A classical way to balance the trade-off between exploration and exploitation is the ǫ-greedy strategy [20].",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 46,
"total_chunks": 79,
"char_count": 1148,
"word_count": 202,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f25c5838-276a-46ef-a65f-f42f21b94ec3",
"text": "Remark 3: From (30), we can find that the standard Q-learning rule suffers from poor Due to the tabular nature in representing Q-function values, Q-learning is not readily applicable to high-dimensional scenarios with extremely huge network state and/or action spaces, where the learning process is extremely slow.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 47,
"total_chunks": 79,
"char_count": 314,
"word_count": 47,
"chunking_strategy": "semantic"
},
{
"chunk_id": "474683ec-7efd-4a68-89c3-8f133403d4cf",
"text": "In our considered system model, the sizes of\nthe network state space X and the action space Y can be calculated as X = 1 + q(max)(t) ·\n1 + q(max)(e) · B · b∈B |Gb| and Y = (1 + B) · 1 + q(max)(e) , respectively, where |G| means the\ncardinality of the set G. It can be observed that X grows exponentially as the number B of BSs Suppose there is a MEC system with 6 BSs and for each BS, the channel gain is\nquantized into 6 states (as assumed in our experiment setups). If we set q(max)(t) = q(max)(e) = 4, the\nMU has to update in total X · Y = 2.44944 · 108 Q-function values during the learning process, Perform (cj, ej)\nPolicy\nObserve Âj\nDQN\nu(Âj, (cj, ej))\nMiniObserve Âj+1 Batch ⁞\n⁞ Network\nUser mj\nParameter Loss and mj-M+2 Replay Memory Mobile mj-M+1 Updating Gradient Double deep Q-network (DQN) based reinforcement learning (DARLING) for stochastic computation offloading in a mobile-edge computing system. which is impossible for the Q-learning process to converge within limited number of decision",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 48,
"total_chunks": 79,
"char_count": 1006,
"word_count": 185,
"chunking_strategy": "semantic"
},
{
"chunk_id": "373fead4-aee1-4c7c-91b8-407dde52e223",
"text": "The next section thereby focuses on developing practically feasible and computationally efficient algorithms to approach the optimal control policy. APPROACHING THE OPTIMAL POLICY In this section, we proceed to approach the optimal control policy by developing practically feasible algorithms based on recent advances in deep reinforcement learning and a linear Qfunction decomposition technique. Deep Reinforcement Learning Algorithm Inspired by the success of modelling an optimal state-action Q-function with a deep neural network [22], we adopt a double DQN to address the massive network state space X [23]. Specifically, the Q-function expressed as in (27) is approximated by Q(χ, (c, e)) ≈Q(χ, (c, e); θ), where (χ, (c, e)) ∈X × Y and θ denotes a vector of parameters associated with the DQN. The DQN-based reinforcement learning (DARLING) for stochastic computation offloading in our considered MEC system is illustrated in Fig. 2, during which instead of finding the optimal Q-function, the DQN parameters can be learned iteratively. The mobile device is assumed to be equipped with a replay memory of a finite size M to store the experience mj = (χj, (cj, ej), u(χj, (cj, ej)), χj+1) that is happened at the transition",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 49,
"total_chunks": 79,
"char_count": 1228,
"word_count": 192,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3f437fd9-f78a-4bfc-9654-79cd7087457f",
"text": "of two consecutive decision epoches j and j + 1 during the learning process of DARLING, where χj, χj+1 ∈X and (cj, ej) ∈Y. The experience pool can be represented as Mj = The MU maintains a DQN and a target DQN, namely, Q(χ, (c, e); θj) and\nQ χ, (c, e); θj− , with parameters θj at a current decision epoch j and θj−at some previous epoch before decision epoch j, ∀(χ, (c, e)) ∈X × Y. According to the experience replay technique\nf[30], the MU then randomly samples a mini-batch Mj ⊆Mj from the pool Mj of historical experiences at each decision epoch j to online train the DQN.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 50,
"total_chunks": 79,
"char_count": 577,
"word_count": 111,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ab7bbdb1-5c16-4933-8e7c-e8b3b9bfe99f",
"text": "That is, the parameters θj are updated in the direction of minimizing the loss function, which is defined by L(DARLING) θj =\n  !\nγ · Q χ′, arg max Q χ′, (c′, e′); θj ; θj− − E(χ,(c,e),u(χ,(c,e)),χ′)∈fMj(1 −γ) · u(χ, (c, e)) +\n(c′,e′)\n2\nQ χ, (c, e); θj  , (31) The loss function L(DARLING)(θj) is a mean-squared measure of the Bellman\nequation error at a decision epoch j (i.e., the last term of (30)) by replacing Qj(χ, (c, e)) and its corresponding target (1 −γ) · u(χ, (c, e)) + γ · max(c′,e′) Qj(χ′, (c′, e′)) with Q(χ, (c, e); θj)\nand (1 −γ) · u(χ, (c, e)) + γ · Q χ′, arg max(c′,e′) Q(χ′, (c′, e′); θj); θj− [23], respectively. By\ndifferentiating the loss function L(DARLING)(θj) with respect to the DQN parameters θj, we obtain the gradient as ∇θjL(DARLING) θj =\n !\nE(χ,(c,e),u(χ,(c,e)),χ′)∈fMj(1 −γ) · u(χ, (c, e)) + γ · Q χ′, arg max Q χ′, (c′, e′); θj ; θj− −\n(c′,e′)\n  Q χ, (c, e); θj · ∇θjQ χ, (c, e); θj . (32) Algorithm 1 summarizes the implementation of the online DARLING algorithm by the MU for stochastic computation offloading in our considered MEC system. Linear Q-Function Decomposition based Deep Reinforcement Learning 1) Linear Q-Function Decomposition: It can be found that the utility function in (21) is of an additive structure, which motivates us to linearly decompose the state-action Q-function, Algorithm 1 Online DARLING Algorithm for Stochastic Computation Task Offloading in A",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 51,
"total_chunks": 79,
"char_count": 1428,
"word_count": 257,
"chunking_strategy": "semantic"
},
{
"chunk_id": "76d8781e-1c05-45d2-9a8d-ac80af3a7cf3",
"text": "MEC System\n1: initialize the replay memory Mj with a finite size of M ∈N+, the mini-batch Mjf with a\nsize of M˜ < M for experience replay, a DQN and a target DQN with two sets θj and θj− of random parameters, for j = 1. 3: At the beginning of decision epoch j, the MU observes the network state χj ∈X , which is taken as an input to the DQN with parameters θj, and then selects a joint control action (cj, ej) ∈Y randomly with probability ǫ or (cj, ej) = arg max(c,e)∈Y Q(χj, (c, e); θj) with 4: After performing the selected joint control action (cj, ej), the MU realizes an immediate utility u(χj, (cj, ej)) and observes the new network state χj+1 ∈X at the next decision 5: The MU updates the replay memory Mj at the mobile device with the most recent transition mj = (χj, (cj, ej), u(χj, (cj, ej)), χj+1).\n6: With a randomly sampled mini-batch of transitions Mjf ⊆Mj from the replay memory, the MU updates the DQN parameters θj with the gradient given by (32).\n7: The MU regularly reset the target DQN parameters with θj+1− = θj, and otherwise θj+1− =\nθj−. 8: The decision epoch index is updated by j ←j + 1. 9: until A predefined stopping condition is satisfied. namely, Q(χ, (c, e)), ∀(χ, (c, e)) ∈X × Y, based on the pattern K = {1, · · · , K} of classifying the satisfactions regarding the task execution delay, the computation task drops, the task queuing delay, the penalty of failing to process a computation task and the payment of using the",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 52,
"total_chunks": 79,
"char_count": 1453,
"word_count": 277,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3be2b00d-f37f-4b8d-b383-a7d3cfcee087",
"text": "For example, we can divide the utility into four satisfaction categories, namely, u(χ, (c, e)) = u1(χ, (c, e)) + u2(χ, (c, e)) + u3(χ, (c, e)) + u4(χ, (c, e)) with u1(χ, (c, e)) = ω1 · u(1)(χ, (c, e)) + ω3 · u(3)(χ, (c, e)), u2(χ, (c, e)) = ω2 · u(2)(χ, (c, e)), u3(χ, (c, e)) = ω4 · u(4)(χ, (c, e)) and u4(χ, (c, e)) = ω5 · u(5)(χ, (c, e)); then K = {1, 2, 3, 4} forms a classification. In our considered stochastic computation offloading scenario, it's easy to see that K ≤ 5. Mathematically, Q(χ, (c, e)) is decomposed into [29]\nQ(χ, (c, e)) = Σk∈K Qk(χ, (c, e)), (33)\nwhere the MU deploys a \"virtual\" agent k ∈ K to learn the optimal per-agent state-action",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 53,
"total_chunks": 79,
"char_count": 652,
"word_count": 130,
"chunking_strategy": "semantic"
},
{
"chunk_id": "29ca109d-4016-4a6e-bcfd-a2d51fcbb8b1",
"text": "Q-function Qk(χ, (c, e)) that satisfies\nQk(χ, (c, e)) = (1 − γ) · uk(χ, (c, e)) + γ · Σχ′∈X Pr{χ′|χ, (c, e)} · Qk(χ′, Φ∗(χ′)), (34)\nwith uk(χ, (c, e)) being the immediate utility related to a satisfaction category k. Note that the optimal joint control action in (34) of an agent k across the time horizon should reflect the optimal control policy implemented by the MU. In other words, the MU makes an optimal joint control action decision Φ∗(χ) under a network state χ as\nΦ∗(χ) = arg max(c,e)∈Y Σk∈K Qk(χ, (c, e)), (35)\nto maximize the aggregated Q-function values from all the agents. We will see in Theorem 1 that the linear Q-function decomposition technique achieves the optimal solution to Problem 1. Theorem 1: The linear Q-function decomposition approach as in (33) asserts the optimal computation offloading performance.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 54,
"total_chunks": 79,
"char_count": 814,
"word_count": 140,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f5279a3f-3116-4ceb-8cf7-3cd0c8cf02b3",
"text": "Proof: For the state-action Q-function of a joint action (c, e) ∈ Y in a network state χ ∈ X as in (27), we have\nQ(χ, (c, e)) = EΦ∗[(1 − γ) · Σ∞j=1 (γ)j−1 · u(χj, (cj, ej)) | χ1 = χ, (c1, e1) = (c, e)]\n= EΦ∗[(1 − γ) · Σ∞j=1 (γ)j−1 · Σk∈K uk(χj, (cj, ej)) | χ1 = χ, (c1, e1) = (c, e)]\n= Σk∈K EΦ∗[(1 − γ) · Σ∞j=1 (γ)j−1 · uk(χj, (cj, ej)) | χ1 = χ, (c1, e1) = (c, e)]\n= Σk∈K Qk(χ, (c, e)), (36)\nwhich completes the proof. □ Remark 4: An apparent advantage of the linear Q-function decomposition technique is that it potentially simplifies the problem solving. Back to the example above, agent 2 learns the optimal expected long-term satisfaction measuring the computation task drops across the time horizon. It's obvious that a computation task drop ηj at a decision epoch j depends only on the task queue state qj(t) (rather than the full network state χj ∈ X) and the performed joint control action (cj, ej) by the MU. Recall that the joint control action of each agent should be in accordance with the optimal control policy of the MU; the Q-learning rule, which involves off-policy updates, is hence not applicable to finding the optimal per-agent state-action Q-functions. The state-action-reward-state-action (SARSA) algorithm [20], [31], which applies on-policy updates, offers a promising alternative, namely, a SARSA-based reinforcement learning (SARL).",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 55,
"total_chunks": 79,
"char_count": 1347,
"word_count": 250,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6a30632d-1013-44f6-8efa-0157ea9d55eb",
"text": "Having the observations of the network state χ = χj, the performed joint control action (c, e) = (cj, ej) by the MU, the realized per-agent utilities (uk(χ, (c, e)) : k ∈ K) at a current decision epoch j, the resulting network state χ′ = χj+1, and the joint control action (c′, e′) = (cj+1, ej+1) selected by the MU at the next epoch j + 1, each agent k ∈ K updates the state-action Q-function on the fly,\nQj+1k(χ, (c, e)) = Qjk(χ, (c, e)) + αj · ((1 − γ) · uk(χ, (c, e)) + γ · Qjk(χ′, (c′, e′)) − Qjk(χ, (c, e))), (37)\nwhere, different from off-policy Q-learning, (c′, e′) is an actually performed joint control action in the subsequent network state, rather than the hypothetical one that maximizes the per-agent state-action Q-function. Theorem 2 ensures that the SARL algorithm converges. Theorem 2: The sequence {Qjk(χ, (c, e)) : χ ∈ X, (c, e) ∈ Y and k ∈ K : j ∈ N+} by SARL converges to the optimal per-agent state-action Q-functions Qk(χ, (c, e)), ∀χ ∈ X, ∀(c, e) ∈ Y and ∀k ∈ K if and only if the state-action pairs (χ, (c, e)) ∈ X × Y are visited infinitely often and the learning rate αj satisfies: Σ∞j=1 αj = ∞ and Σ∞j=1 (αj)2 < ∞. Proof: Since the per-agent state-action Q-functions are learned simultaneously, we consider the monolithic updates during the learning process of the SARL algorithm; namely, the updating rule in (37) can then be encapsulated as\nΣk∈K Qj+1k(χ, (c, e)) = (1 − αj) · Σk∈K Qjk(χ, (c, e)) + αj · Σk∈K ((1 − γ) · uk(χ, (c, e)) + γ · Qjk(χ′, (c′, e′))), (38)\nwhere (χ, (c, e)), (χ′, (c′, e′)) ∈ X × Y.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 56,
"total_chunks": 79,
"char_count": 1526,
"word_count": 297,
"chunking_strategy": "semantic"
},
{
"chunk_id": "35c34ef5-8392-4ed4-addb-11f8cd8edf91",
"text": "We rewrite (38) as\nΣk∈K Qj+1k(χ, (c, e)) − Σk∈K Qk(χ, (c, e)) = (1 − αj) · (Σk∈K Qjk(χ, (c, e)) − Σk∈K Qk(χ, (c, e))) + αj · Υj(χ, (c, e)), (39)\nwhere\nΥj(χ, (c, e)) = (1 − γ) · Σk∈K uk(χ, (c, e)) + γ · max(c′′,e′′) Σk∈K Qjk(χ′, (c′′, e′′)) − Σk∈K Qk(χ, (c, e)) + γ · (Σk∈K Qjk(χ′, (c′, e′)) − max(c′′,e′′) Σk∈K Qjk(χ′, (c′′, e′′))). (40)\nDenote Oj = σ({(χz, (cz, ez), (uk(χz, (cz, ez)) : k ∈ K)) : z ≤ j}, {Qjk(χ, (c, e)) : ∀(χ, (c, e)) ∈ X × Y, ∀k ∈ K}) as the learning history for the first j decision epochs. The per-agent state-action Q-functions are Oj-measurable, thus both (Σk∈K Qj+1k(χ, (c, e)) − Σk∈K Qk(χ, (c, e))) and Υj(χ, (c, e)) are Oj-measurable.\n||E[Υj(χ, (c, e)) | Oj]||∞\n≤ ||E[(1 − γ) · Σk∈K uk(χ, (c, e)) + γ · max(c′′,e′′) Σk∈K Qjk(χ′, (c′′, e′′)) − Σk∈K Qk(χ, (c, e)) | Oj]||∞ + ||E[γ · (Σk∈K Qjk(χ′, (c′, e′)) − max(c′′,e′′) Σk∈K Qjk(χ′, (c′′, e′′))) | Oj]||∞\n(a)≤ γ · ||Σk∈K Qjk(χ, (c, e)) − Σk∈K Qk(χ, (c, e))||∞ + ||E[γ · (Σk∈K Qjk(χ′, (c′, e′)) − max(c′′,e′′) Σk∈K Qjk(χ′, (c′′, e′′))) | Oj]||∞, (41)\nwhere (a) is due to the convergence property of a standard Q-learning [19]. We are left with verifying that ||E[γ · (Σk∈K Qjk(χ′, (c′, e′)) − max(c′′,e′′) Σk∈K Qjk(χ′, (c′′, e′′))) | Oj]||∞ converges to zero with probability 1, which in our considered scenario follows from: i) an ǫ-greedy strategy is applied in SARL for choosing joint control actions; ii) the per-agent state-action Q-functions are upper bounded; and iii) both the network state and the joint control action spaces are finite. All conditions in [32, Lemma 1] are satisfied. Therefore, the convergence of the SARL learning process is ensured. □ Remark 5: Implementing the derived SARL algorithm, the size of Q-function faced by each agent remains the same as the standard Q-learning algorithm. 
From Remark 4, the linear Q-function decomposition technique provides the possibility of simplifying the solving of the stochastic computation offloading problem through introducing multiple \"virtual\" agents. Each agent learns the respective expected long-term satisfaction by exploiting the key network state",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 57,
"total_chunks": 79,
"char_count": 2081,
"word_count": 402,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c28bd690-753d-4fb7-81a3-926f6b2db94e",
"text": "Fig. 3. Deep SARSA reinforcement learning (Deep-SARL) based stochastic computation offloading in a mobile-edge computing system.\ninformation and is hence enabled to use a simpler DQN to approximate the optimal state-action Q-function. 2) Deep SARSA Reinforcement Learning (Deep-SARL): Applying the linear Q-function decomposition technique, the DQN Q(χ, (c, e); θ), which approximates the optimal state-action Q-function Q(χ, (c, e)), ∀(χ, (c, e)) ∈ X × Y, can be reexpressed as\nQ(χ, (c, e); θ) = Σk∈K Qk(χ, (c, e); θk), (42)\nwhere θ = (θk : k ∈ K) is a collection of parameters associated with the DQNs of all agents and Qk(χ, (c, e); θk) (for k ∈ K) is the per-agent DQN. Accordingly, we derive a novel deep SARSA reinforcement learning (Deep-SARL) based stochastic computation offloading algorithm, as depicted in Fig. 3, where, different from the DARLING algorithm, the parameters θ are learned locally at the agents in an online iterative way. Implementing our proposed Deep-SARL algorithm, at each decision epoch j, the MU updates the experience pool Nj with the most recent N experiences nj−N+1, · · · , nj, with each experience nj = (χj, (cj, ej), (uk(χj, (cj, ej)) : k ∈ K), χj+1, (cj+1, ej+1)). To train the DQN parameters, the MU first randomly samples a mini-batch Ñj ⊆ Nj and then updates θj = (θjk : k ∈ K) for all agents at each decision epoch j to minimize the accumulative loss",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 58,
"total_chunks": 79,
"char_count": 1578,
"word_count": 276,
"chunking_strategy": "semantic"
},
{
"chunk_id": "11b778a9-55c3-49bb-a6cd-526606fdf529",
"text": "function, which is given by\nL(Deep−SARL)(θj) = E(χ,(c,e),(uk(χ,(c,e)):k∈K),χ′,(c′,e′))∈Ñj [Σk∈K ((1 − γ) · uk(χ, (c, e)) + γ · Qk(χ′, (c′, e′); θjk,−) − Qk(χ, (c, e); θjk))2], (43)\nwhere θj− = (θjk,− : k ∈ K) are the parameters of the target DQNs of all agents at some previous decision epoch before epoch j. The gradient for each agent k ∈ K can be calculated as\n∇θjk L(Deep−SARL)(θj) = E(χ,(c,e),(uk(χ,(c,e)):k∈K),χ′,(c′,e′))∈Ñj [((1 − γ) · uk(χ, (c, e)) + γ · Qk(χ′, (c′, e′); θjk,−) − Qk(χ, (c, e); θjk)) · ∇θjk Qk(χ, (c, e); θjk)]. (44)\nWe summarize the online implementation of our proposed Deep-SARL algorithm for solving the stochastic computation offloading in a MEC system as in Algorithm 2.\nNUMERICAL EXPERIMENTS\nIn this section, we proceed to evaluate the stochastic computation offloading performance achieved from our derived online learning algorithms, namely, DARLING and Deep-SARL.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 59,
"total_chunks": 79,
"char_count": 880,
"word_count": 149,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0534fba7-0b94-4736-a1da-e4104be4a56e",
"text": "Throughout the experiments, we suppose there are B = 6 BSs in the system connecting the MU with the MEC server. The channel gain states between the MU and the BSs are from a common finite set {−11.23, −9.37, −7.8, −6.3, −4.68, −2.08} (dB), the transitions of which happen across the discrete decision epochs following respective randomly generated matrices. Each energy unit corresponds to 2 · 10−3 Joule, and the number of energy units received from the wireless environment follows a Poisson arrival process with average arrival rate λ(e) (in\nunits per epoch).",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 60,
"total_chunks": 79,
"char_count": 562,
"word_count": 93,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f115ce6f-089d-4124-acd1-c00b084182ed",
"text": "We set K = 5 for the Deep-SARL algorithm, while u(1)(χj, (cj, ej)),\nAlgorithm 2 Online Deep-SARL Algorithm for Stochastic Computation Task Offloading in a MEC System\n1: Initialize, for j = 1, the replay memory Nj with a finite size of N ∈ N+, the mini-batch Ñj with a size of Ñ < N for experience replay, DQNs and target DQNs with two sets θj = (θjk : k ∈ K) and θj− = (θjk,− : k ∈ K) of random parameters, the initial network state χj ∈ X, and a joint control action (cj, ej) ∈ Y selected randomly with probability ǫ or (cj, ej) = arg max(c,e)∈Y Σk∈K Qk(χj, (c, e); θjk) with probability 1 − ǫ.\n2: repeat\n3: After performing the selected joint control action (cj, ej), the agents realize immediate utilities (uk(χj, (cj, ej)) : k ∈ K).\n4: The MU observes the new network state χj+1 ∈ X at the next epoch j + 1, which is taken as an input to the DQNs of all agents with parameters θj, and selects a joint control action (cj+1, ej+1) ∈ Y randomly with probability ǫ or (cj+1, ej+1) = arg max(c,e)∈Y Σk∈K Qk(χj+1, (c, e); θjk) with probability 1 − ǫ.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 61,
"total_chunks": 79,
"char_count": 1015,
"word_count": 196,
"chunking_strategy": "semantic"
},
{
"chunk_id": "eec30579-d75d-4121-9c28-b1ad3925ea9e",
"text": "5: The replay memory Nj at the mobile device is updated with the most recent transition nj = (χj, (cj, ej), (uk(χj, (cj, ej)) : k ∈ K), χj+1, (cj+1, ej+1)).\n6: With a randomly sampled mini-batch of transitions Ñj ⊆ Nj, all agents update the DQN parameters θj using the gradient as in (44).\n7: The target DQNs are regularly reset by θj+1− = θj, and otherwise θj+1− = θj−.\n8: The decision epoch index is updated by j ← j + 1.\n9: until a predefined stopping condition is satisfied.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 62,
"total_chunks": 79,
"char_count": 480,
"word_count": 93,
"chunking_strategy": "semantic"
},
{
"chunk_id": "09da71af-c5ee-4545-b1c5-f9d24cf865b6",
"text": "u(2)(χj, (cj, ej)), u(3)(χj, (cj, ej)), u(4)(χj, (cj, ej)) and u(5)(χj, (cj, ej)) in (21) are chosen to be the exponential functions:\nu1(χj, (cj, ej)) = ω1 · u(1)(χj, (cj, ej)) = ω1 · exp(−min(dj, δ)), (45)\nu2(χj, (cj, ej)) = ω2 · u(2)(χj, (cj, ej)) = ω2 · exp(−ηj), (46)\nu3(χj, (cj, ej)) = ω3 · u(3)(χj, (cj, ej)) = ω3 · exp(−ρj), (47)\nu4(χj, (cj, ej)) = ω4 · u(4)(χj, (cj, ej)) = ω4 · exp(−ϕj), (48)\nu5(χj, (cj, ej)) = ω5 · u(5)(χj, (cj, ej)) = ω5 · exp(−φj). (49)\nTABLE II. PARAMETER VALUES IN EXPERIMENTS.\nParameter: Value\nReplay memory capacities M, N: 5000, 5000\nMini-batch sizes M̃, Ñ: 200, 200\nDecision epoch duration δ: 5 · 10−3 second\nChannel bandwidth W: 0.6 MHz\nNoise power I: 1.5 · 10−8 Watt\nInput data size µ: 104 bits\nCPU cycles ν: 7.375 · 106\nMaximum CPU-cycle frequency f(CPU)(max): 2 × 109 Hz\nMaximum transmit power p(max)(tr): 2 Watt\nHandover delay ζ: 2 × 10−3 second\nMEC service price ξ: 1\nWeights ω1, ω2, ω3, ω4, ω5: 3, 9, 5, 2, 1\nMaximum task queue length q(max)(t): 4 tasks\nMaximum energy queue length q(max)(e): 4 units\nDiscount factor γ: 0.9\nExploration probability ǫ: 0.01\nBased on the works [33] and [34], we use a single hidden layer consisting of 200 neurons5 for the design of a DQN in the DARLING algorithm and select tanh as the activation function [36] and Adam as the optimizer. The same number of 200 neurons are employed to design the single-layer DQNs of all agents in Deep-SARL and hence for each DQN of an agent, there are in total 40",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 63,
"total_chunks": 79,
"char_count": 1433,
"word_count": 295,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e99b67f0-e3cb-4bfd-a159-929ac2d384a9",
"text": "Both the DARLING and the Deep-SARL algorithms are implemented in TensorFlow. Other parameter values used in the experiments are listed in Table II. For the purpose of performance comparisons, we simulate three baselines as well, namely: 1) Mobile Execution – The MU processes all scheduled computation tasks at the mobile device",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 64,
"total_chunks": 79,
"char_count": 323,
"word_count": 50,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0f08d115-8773-4b11-94ea-d4464e0a2e83",
"text": "5 The tradeoff between a time-demanding training process and an improvement in performance with a deeper and/or wider neural network is still an open problem [35].\nwith maximum possible energy, that is, at each decision epoch j, cj = 0 and\nej = min{qj(e), ⌊(f(CPU)(max))3 · τ⌋}, if qj(e) > 0; ej = 0, if qj(e) = 0, (50)\nwhere the allocation of energy units takes into account the maximum CPU-cycle frequency f(CPU)(max) and ⌊·⌋ denotes the floor function.\n2) Server Execution – According to Lemma 2, with maximum possible energy units in the energy queue that satisfy the maximum transmit power constraint, the MU always selects a BS that achieves the minimum task execution delay to offload the input data of a scheduled computation task for MEC server execution. 3) Greedy Execution – At each decision epoch, the MU decides to execute a computation task at its own mobile device or offload it to the MEC server for processing with the aim of minimizing the immediate task execution delay.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 65,
"total_chunks": 79,
"char_count": 987,
"word_count": 175,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5eb4346e-dda5-43d0-be7e-53bc93fb74d6",
"text": "We carry out experiments under various settings to validate the proposed studies in this paper. 1) Experiment 1 – Convergence performance: Our goal in this experiment is to validate the convergence property of our proposed algorithms, namely, DARLING and Deep-SARL, for stochastic computation offloading in the considered MEC system. We set the task arrival probability and the average energy arrival rate to be λ(t) = 0.5 and λ(e) = 0.8 units per epoch,\nrespectively. Without loss of generality, we plot the simulated variations in Q(χ, (c, e); θj) (where χ = (2, 2, 2, (−6.3, −6.3, −4.68, −7.8, −6.3, −6.3)), (c, e) = (2, 4) and j ∈N+) for the DARLING algorithm as well as the loss function defined by (43) for the Deep-SARL algorithm versus the decision epochs in Fig. 4, which reveals that the convergence behaviours of both DARLING and Deep-SARL are similar.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 66,
"total_chunks": 79,
"char_count": 867,
"word_count": 146,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0e927732-bb20-466a-adaf-eed9691534fd",
"text": "The two algorithms converge at a reasonable speed, for which around 1.5 × 104 decision epochs are needed. 2) Experiment 2 – Performance under various λ(t): In this experiment, we try to demonstrate the average computation offloading performance per epoch in terms of the average utility, the average task execution delay, the average task drops, the average task queuing delay, the average MEC service payment and the average task failure penalty under different computation task arrival probability settings. We choose the average energy unit arrival rate to be λ(e) = 1.6. The results are exhibited in Fig. 5.\nFig. 4. Illustration for the convergence property of DARLING and Deep-SARL.\nFig. 5 (a) illustrates the average utility performance when the MU implements DARLING and Deep-SARL. Figs. 5 (b)–(f) illustrate",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 67,
"total_chunks": 79,
"char_count": 901,
"word_count": 148,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8af72f59-0544-427a-9311-155e318b4c08",
"text": "the average task execution delay, the average task drops, the average task queuing delay, the average MEC service payment and the average task failure penalty. Each plot compares the performance of the DARLING and the Deep-SARL to the three baseline computation offloading schemes.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 68,
"total_chunks": 79,
"char_count": 281,
"word_count": 43,
"chunking_strategy": "semantic"
},
{
"chunk_id": "189f6378-0635-4b3f-84c5-9ea597bb0f94",
"text": "From Fig. 5, it can be observed that both the proposed schemes achieve a significant gain in average utility. Similar observations can be made from the curves in other plots, though the average MEC service payment per epoch from Deep-SARL is a bit higher than that from DARLING.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 69,
"total_chunks": 79,
"char_count": 278,
"word_count": 48,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9f6455f9-704d-4a12-82eb-96a48e69c6af",
"text": "This can be explained by the fact that, as the task arrival probability increases under the network settings of this experiment, more tasks are scheduled for execution at the MEC server by the Deep-SARL algorithm. As the computation task arrival",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 70,
"total_chunks": 79,
"char_count": 249,
"word_count": 40,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d4550d13-c095-40d8-abbb-628190c6328a",
"text": "probability increases, the average utility performances decrease due to the increases in average task execution delay, average task drops, average task queuing delay, average MEC service payment and average task failure penalty. Since there are not enough energy units in the energy",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 71,
"total_chunks": 79,
"char_count": 282,
"word_count": 42,
"chunking_strategy": "semantic"
},
{
"chunk_id": "99d8e0ee-fb9f-4012-86ee-4c4408f9a2bb",
"text": "queue during one decision epoch on average, in order to avoid task drops and the task failure penalty, only a portion of the queued energy units are allocated for processing a scheduled computation task, hence leading to more queued tasks, i.e., an increased average task queuing delay. The average utility performance from Deep-SARL outperforms that from DARLING. This indicates that by",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 72,
"total_chunks": 79,
"char_count": 372,
"word_count": 58,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0a084be1-e9c3-4a95-b206-d1805570cd6f",
"text": "combining a deep neural network and the Q-function decomposition technique, the original stochastic computation offloading problem becomes simplified, hence performance improvement can be expected from approximating the state-action Q-functions of all agents with the same\nFig. 5. Average computation offloading performance versus task arrival probabilities: (a) average utility per epoch; (b) average execution delay per epoch; (c) average task drops per epoch; (d) average task queuing delay per epoch; (e) average MEC service payment per epoch; (f) average task execution failure penalty per epoch. Each plot compares Mobile Execution, Server Execution, Greedy Execution, DARLING and Deep-SARL.\n3) Experiment 3 – Performance with changing λ(e): We do the third experiment to simulate the average computation offloading performance per epoch achieved from the derived DARLING and Deep-SARL algorithms and the other three baselines versus the average energy unit arrival rates.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 73,
"total_chunks": 79,
"char_count": 1944,
"word_count": 304,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8acc95aa-30b5-4dd0-aba0-aae2b61f6c18",
"text": "The computation task arrival probability in this experiment is set to be λ(t) = 0.6. The average utility, average task execution delay, average task drops, average task queuing delay, average MEC service payment and average task failure penalty across the entire learning period",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 74,
"total_chunks": 79,
"char_count": 274,
"word_count": 42,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8b9a929f-62b0-473e-9845-2e0b891f598f",
"text": "are depicted in Fig. 6. We can clearly see from Fig. 6 that as the available energy units increase, the overall computation offloading performance improves. However, as the energy unit arrival rate increases, the average task execution delay, the average MEC service payment6 and the average task failure penalty first increase but then decrease. The increasing number of queued energy units provides more opportunities to execute a computation task during each decision epoch, and at the same time, increases the possibility of failing to execute a task. When the average number of energy units in the energy queue increases to a sufficiently large value, enough energy units",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 75,
"total_chunks": 79,
"char_count": 676,
"word_count": 107,
"chunking_strategy": "semantic"
},
{
"chunk_id": "015221dc-e4ea-433c-b963-a8418987bab9",
"text": "can be allocated to each scheduled computation task, due to which the task execution delay, the MEC service payment as well as the possibility of task computation failures decrease. In this paper, we put our emphasis to investigate the design of a stochastic computation offloading policy for a representative MU in an ultra dense sliced RAN by taking into account the dynamics generated from the time-varying channel qualities between the MU and the BSs, energy units received from the wireless environment as well as computation task arrivals. problem of stochastic computation offloading is formulated as a MDP, for which we propose two double DQN-based online strategic computation offloading algorithms, namely, DARLING and Both learning algorithms survive the curse of high dimensionality in state space and need no a priori information of dynamics statistics.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 76,
"total_chunks": 79,
"char_count": 866,
"word_count": 133,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0c342529-faee-45b3-a296-948bfc379481",
"text": "We find from numerical experiments that compared to three baselines, our derived algorithms can achieve much better long-term utility performance, which indicates an optimal tradeoff among the computation task execution delay, the task drops, the task queuing delay, the MEC service payment and the task failure 6It's easy to see that the mobile execution scheme does not use MEC service, hence no MEC service payment needs to be 10-3 20 5\nMobile Execution Mobile Execution\nServer Execution 4.5 Server Execution\n18 Greedy Execution Greedy Execution\nDARLING 4 DARLING Deep-SARL Delays Deep-SARL\n3.5 16 Utilities\n3 Execution 14 2.5 Average\n2 12 Average\n1.5 10 1\n0.5 0.7 0.9 1.1 1.3 1.5 1.7 1.9 2.1 0.5 0.7 0.9 1.1 1.3 1.5 1.7 1.9 2.1\nEnergy Arrival Rates (units/epoch) Energy Arrival Rates (units/epoch) (a) Average utility per epoch. (b) Average execution delay per epoch. 0.7 4\nMobile Execution Mobile Execution\n0.6 Server Execution 3.5 Server Execution\nGreedy Execution Greedy Execution DARLING Delays 3 DARLING 0.5\nDeep-SARL Deep-SARL Drops 2.5\nTask 0.4 Queuing 2\nAverage 0.30.2 Task 1.51\n0.1 Average 0.5 0 0\n0.5 0.7 0.9 1.1 1.3 1.5 1.7 1.9 2.1 0.5 0.7 0.9 1.1 1.3 1.5 1.7 1.9 2.1\nEnergy Arrival Rates (units/epoch) Energy Arrival Rates (units/epoch) (c) Average task drops per epoch. (d) Average task queuing delay per epoch. 3 Mobile Execution 0.5 Server Execution\nGreedy Execution\n2.5\n0.4 DARLING\nDeep-SARL Payments 2 Penalties\n0.3 MEC 1.5 Mobile Execution Failure\nServer Execution 0.2\n1 Greedy Execution Average DARLING Average 0.1 0.5 Deep-SARL 0 0\n0.5 0.7 0.9 1.1 1.3 1.5 1.7 1.9 2.1 0.5 0.7 0.9 1.1 1.3 1.5 1.7 1.9 2.1\nEnergy Arrival Rates (units/epoch) Energy Arrival Rates (units/epoch) (e) Average MEC service payment per epoch. (f) Average task execution failure penalty per epoch. Average computation offloading performance versus average energy unit arrival rates.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 77,
"total_chunks": 79,
"char_count": 1879,
"word_count": 305,
"chunking_strategy": "semantic"
},
{
"chunk_id": "95cf6bde-087a-42e8-8b7f-85496b7aa5eb",
"text": "Moreover, the Deep-SARL algorithm outperforms the DARLING algorithm by taking the advantage of the additive utility function structure.",
"paper_id": "1805.06146",
"title": "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning",
"authors": [
"Xianfu Chen",
"Honggang Zhang",
"Celimuge Wu",
"Shiwen Mao",
"Yusheng Ji",
"Mehdi Bennis"
],
"published_date": "2018-05-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.06146v1",
"chunk_index": 78,
"total_chunks": 79,
"char_count": 135,
"word_count": 18,
"chunking_strategy": "semantic"
}
]