kumoh national institute of technology
Networked Systems Lab.

Md. Sajjad Hossain, Cosmas Ifeanyi Nwakanma, Fabliha Bushra Islam, Dong-Seong Kim, and Jae-Min Lee, "Dynamic Offloading Policy for Multi-user MEC System using Deep Reinforcement Learning", 2020 Korean Institute of Communication and Sciences (KICS) Summer Conference, August 12-14, 2020, Yong Pyong Resort, Pyeongchang, Gangwon Province, Korea, (A)(N8)
By : Sajjad
Date : 2020-06-07
Views : 690

In this work, a decentralized computation offloading scheme based on deep reinforcement learning is examined for a stable Mobile Edge Computing (MEC) framework. MEC is a promising answer to the resource limitations that computation-intensive tasks impose on mobile devices, as it empowers a device to offload workloads to the nearest edge server. For multiple mobile users in an MEC system, the design of a computation offloading strategy aims to minimize computation cost under delay constraints. Earlier studies applied Markov decision processes (MDP), Reinforcement Learning (RL), and Deep Reinforcement Learning (DRL) to this problem, but these approaches have limitations. This paper therefore proposes using Deep Deterministic Policy Gradient (DDPG), in which no global information is required to learn computation offloading policies. The proposed decentralized DDPG method outperforms other deep networks in terms of computational cost, power consumption, and buffering delay.
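To make the decentralized idea concrete, the sketch below shows a DDPG-style actor that maps a user's purely local state to a continuous offloading action, so no global channel or queue information is exchanged. This is a minimal illustration, not the paper's implementation: the state features (task size, queue backlog, channel gain), network sizes, and action layout are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class OffloadActor:
    """Tiny MLP actor: local state -> (offload ratio, transmit power).

    Hypothetical sketch of the deterministic policy each mobile user
    would hold in a decentralized DDPG setup; weights here are random
    and untrained, only the forward pass is illustrated.
    """
    def __init__(self, state_dim=3, hidden=16, action_dim=2):
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, action_dim))
        self.b2 = np.zeros(action_dim)

    def act(self, state):
        h = np.tanh(state @ self.W1 + self.b1)
        # Sigmoid squashes both actions into (0, 1); the second output
        # could then be scaled to the device's maximum transmit power.
        return 1.0 / (1.0 + np.exp(-(h @ self.W2 + self.b2)))

# Each user acts on local observations only (assumed normalized here:
# task size, queue backlog, channel gain) -- no global information.
state = np.array([0.8, 0.3, 0.5])
offload_ratio, tx_power = OffloadActor().act(state)
print(offload_ratio, tx_power)
```

In full DDPG training, a critic network would score these actions and the actor weights would be updated along the deterministic policy gradient; only the inference step is sketched here.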

A linear programming technique is used to make scheduling decisions. Various centralized and distributed algorithms are then proposed, which improve overall system performance.
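As a toy illustration of an LP-based scheduling decision, the sketch below maximizes a weighted utility over two tasks' scheduling fractions subject to shared resource budgets, solving the small LP by enumerating constraint-intersection vertices. All coefficients (per-task utilities, server-time and bandwidth budgets) are made up for the example and do not come from the paper.

```python
from itertools import combinations
import numpy as np

# Maximize 3*x1 + 2*x2 (utility of scheduling fractions x1, x2)
# subject to:
#   x1 + x2     <= 1.0   (shared server-time budget, assumed)
#   2*x1 + x2   <= 1.5   (bandwidth budget, assumed)
#   0 <= x1 <= 1, 0 <= x2 <= 1
c = np.array([3.0, 2.0])
A = np.array([[1, 1], [2, 1], [1, 0], [0, 1], [-1, 0], [0, -1]], float)
b = np.array([1.0, 1.5, 1.0, 1.0, 0.0, 0.0])

# For a 2-variable LP the optimum lies at a vertex: intersect every
# pair of constraint boundaries, keep feasible points, take the best.
best, best_x = -np.inf, None
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue  # parallel boundaries, no vertex
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9) and c @ x > best:
        best, best_x = c @ x, x

print(best_x, best)
```

A production scheduler would hand the same formulation to a real LP solver rather than enumerating vertices; the enumeration just keeps the example dependency-free.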