Kumoh National Institute of Technology
Networked Systems Lab.

Alifia Putri Anantha, Muhammad Rusyadi Ramli, Jae-Min Lee, and Dong-Seong Kim, "Deep Reinforcement Learning with Edge Computing in Faulty-Node Detection for Industrial Internet of Things", IEEE Wireless Communication Letters, 2020. (R)
By : Alifia Putri Anantha
Date : 2020-06-29

Dear Prof. Dong-Seong Kim:

The review of the referenced manuscript, WCL2020-0812, is now complete. I regret to inform you that based on the enclosed reviews and my own reading of your manuscript, I am unable to recommend its publication in IEEE Wireless Communications Letters.

Your paper may not be resubmitted for review. The reasons for this are as follows: Based on my reading of the paper, the contribution of the paper is quite limited, which was confirmed by all three reviewers' comments. As pointed out by the reviewers, the authors merely simulated the performance of a well-known RL algorithm in solving a fault detection problem. This falls far short of the quality expected of papers published in IEEE WCL.

Additional comments: Besides the limited contribution, the paper is poorly written and thus hard to follow. The simulation parameters are missing and no benchmark schemes were compared to prove the superiority of the proposed scheme. Please find below other problems raised by the reviewers.

The reviewers' comments are found at the end of this email.

Thank you for submitting your work to the IEEE Wireless Communications Letters. I hope the outcome of this specific submission will not discourage you from the submission of future manuscripts.


Dr. He Chen
Editor, IEEE Wireless Communications Letters
he.chen@ie.cuhk.edu.hk, hechen.sdu@gmail.com

Reviewer: 1

Comments to the Author
1. Much of this paper is devoted to what is already known to the community. Sections I and II are background; Section III is an introduction to DRL; Section IV is simulation results. This paper describes very little about the contributions of the authors. In fact, the only message the authors convey to readers is: we tried to detect faulty nodes using DRL and this is our performance. Why not specify the disadvantages of prior solutions to this problem and how your DRL scheme tackles it? Were there any difficulties you encountered when designing the DRL scheme (or was it too straightforward to be mentioned)?
2. No benchmarks. How can readers evaluate the performance of DRL if there are no benchmarks? From Fig. 4, the miss-detection rate is as high as 38% when 10% of the nodes fail. In my view this is poor performance.

Reviewer: 2

Comments to the Author
General comments: This paper investigates how to use deep reinforcement learning in edge computing systems for faulty-node detection. The topic is interesting, but the contribution is limited.

Comment 1: The motivation for using DRL is unclear. In the introduction, the authors claimed that "DRL is intended to optimize data within a specific context." Such a statement is confusing and hard to follow. What are the advantages of using DRL compared with existing approaches, such as SVM?

Comment 2: In section III, the authors introduce the basic idea of DRL, which is well-known. The only contribution is the simulation results in Section IV.

Comment 3: The detection scenario does not appear practical. Please justify the considered scenario by citing some references.

Reviewer: 3

Comments to the Author
The work of this paper is not enough to be published in IEEE WCL. I suggest a rejection due to the following reasons.
1) The paper is poorly written and organized. The main body of the paper (Section III) is devoted to describing general background on deep neural networks, reinforcement learning, and MDPs, rather than the proposed scheme. The technical contribution of the paper itself is discussed in only two or three paragraphs, at a very high level. Some of the English expressions are outdated.
2) The problem is not defined clearly. What are the actions, states, and rewards in the reinforcement learning problem? These are not defined or discussed explicitly. From the paper, I cannot tell what kind of deep reinforcement learning algorithm is exploited to solve the problem.
3) It is unclear what kind of simulation model is used: How many sensor nodes are in the system? How are their sensed signals represented?