Title: Deep Reinforcement Learning for Traffic Signal Optimization in Edge-based Traffic Control System
Journal: Future Generation Computer Systems
Dear Professor Kim,
Thank you for submitting your manuscript to Future Generation Computer Systems. I regret to inform you that reviewers have advised against publishing your manuscript, and we must therefore reject it.
Please refer to the comments listed at the end of this letter for details of why I reached this decision.
We appreciate your submitting your manuscript to this journal and thank you for giving us the opportunity to consider your work.
Future Generation Computer Systems
Comments from the editors and reviewers:
This paper presents an edge computing-based TLCS architecture that optimizes traffic signals using deep reinforcement learning, aiming to reduce congestion under three traffic conditions, i.e., traffic congestion, emergency vehicles, and vehicle collisions. The topic itself is very practical and interesting. However, the paper is not well written and organized. My main comments are as follows:
1. The main weakness of this paper is the performance evaluation. Too few results are given. It is really hard to draw the conclusion that "The proposed system can maintain the traffic to avoid the congestion" from Figures 8 and 9 alone. Additional quantitative metrics, such as accuracy, delay, and waiting time, should be reported, not just the dotted figures.
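To illustrate the kind of quantitative reporting requested above, a metric such as average vehicle waiting time could be computed as in the following sketch; the function name and the sample values are illustrative only, not figures from the manuscript:

```python
def average_waiting_time(waits):
    """Mean per-vehicle waiting time (seconds) at an intersection."""
    return sum(waits) / len(waits) if waits else 0.0

# Hypothetical per-vehicle waits (seconds) collected over one episode.
print(average_waiting_time([12.0, 30.0, 18.0]))  # -> 20.0
```

Reporting such scores per traffic condition (congestion, emergency vehicle, collision) would make the claims in Figures 8 and 9 verifiable.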
2. Comparisons of the proposed deep reinforcement learning method with other solutions are missing. There is a great body of literature on this problem, using either DRL methods or traditional methods. The authors are required to compare the proposed DRL approach for optimizing traffic signals with more recent work, since it is difficult for readers to see the advantages of the proposed algorithm without comparison against other state-of-the-art algorithms.
3. The main contributions and novelties of this paper should be clearly listed in the introduction section.
4. Some equations, e.g., Eqs. (1)-(3), can be written in transposed (row-vector) form to save space.
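For example, a displayed column vector can be compacted into an inline transposed row vector; the symbols below are placeholders, since the manuscript's Eqs. (1)-(3) are not reproduced in this letter:

```latex
% Displayed column vector (space-consuming):
s = \begin{bmatrix} q_1 \\ q_2 \\ \vdots \\ q_n \end{bmatrix}
% Equivalent inline transposed form (space-saving):
s = [\, q_1, q_2, \dots, q_n \,]^{\mathsf{T}}
```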
5. Variables should be italic.
6. How does the computational complexity of the proposed DRL approach compare with that of other methods?
7. Too much space is devoted to introducing the traffic control system, while too little is devoted to the newly proposed approach.
8. This paper reads like a simple combination of DRL and a traffic control system. More focus should be given to the theoretical analysis and algorithm analysis.
This paper proposes a deep reinforcement learning (DRL) algorithm to optimize traffic-light signals under three traffic conditions (traffic congestion, emergency vehicle, and vehicle collision).
However, the main contribution, in my opinion, is that the authors apply a deep reinforcement learning (DRL) algorithm to optimize traffic-light signals under three traffic conditions and analyze the resulting traffic, which is not innovative enough. The lack of strong insights makes it virtually impossible to provide a novel perspective on traffic data analysis, and the lack of discussion on generalization limits the engineering usability of the platform. The investigation of related work is insufficient as well. Overall, the contributions of this paper are weak.
I would like to recommend that the paper undergo the following revisions, in particular:
1) Describe the edge and cloud architecture in more depth, not only as a basic logical view but with more technical details.
According to the authors: .....Furthermore, the DRL algorithm is implemented in the edge computing-based TLCS. The communication latency and computation process can be reduced by leveraging edge computing in the TLCS.
Please note that these assumptions have to be explicitly proved/validated.
2) According to the authors: ..... The results demonstrate that the proposed algorithm can reduce vehicle waiting time in intersections under the three mentioned conditions....
Please clarify that these results come from a simulation approach. The authors should emphasize the conditions necessary for transferring simulation results to real environments.
3) Section 5.2 is based on the application of a simulation approach referenced by .
3.1) Please complete this reference, because it is not easy to find in international databases.
3.2) What, then, is the value of the simulation approach in this paper? Is it only the use of the tool?
4) In order to transfer your results to a real environment, what are the requirements for both the environment and the edge/cloud architecture? With this kind of analysis, the authors will be able to emphasize the potential impact of their approach.
5) As indicated for the reference above, please also check the other references for completeness.
There are two critical issues about this work:
One of the main driving factors for this work, namely integrating edge computing to "reduce the transmission and computation time", is not strongly supported by any of the cited literature. The authors need to provide a stronger problem statement.
The transportation simulation model is not validated, and the parameters are defined without clear justification. For example, why is each episode 1.5 hours? Does this represent realistic traffic conditions?
The authors proposed an edge computing-based TLCS architecture for leveraging traffic signal optimization by using deep reinforcement learning. The study is interesting and well presented.
I suggest to better address the following issues:
- In Section 5.1.3, you should describe why and how you chose the parameter values, in order to better motivate the choices of the discount factor, the update rate, and the learning rate.
- Since the use of learning techniques is a very active research area, you should add a related-work section citing and describing recent applications of reinforcement learning in different contexts before focusing on traffic control systems. As an example, you could cite the following recent works:
o "Intelligence at the Edge of Complex Networks: The Case of Cognitive Transmission Power Control", IEEE Wireless Communications, vol. 26, no. 3, pp. 97-103, 2019.
o "Lightweight Reinforcement Learning for Energy Efficient Communications in Wireless Sensor Networks", IEEE Access, vol. 7, pp. 29355-29364, 2019.
o "Workshop Networks Integration Using Mobile Intelligence in Smart Factories", IEEE Communications Magazine, vol. 56, no. 2, pp. 68-75, 2018.
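The parameter choices questioned above (discount factor, update rate, learning rate) could be motivated and documented explicitly, for instance as a small configuration sketch like the one below. All names and values here are hypothetical illustrations for a generic DQN-style agent, not the manuscript's actual settings:

```python
from dataclasses import dataclass

@dataclass
class DQNConfig:
    # Illustrative placeholders, not the paper's reported values.
    gamma: float = 0.95  # discount factor: weight of future waiting-time rewards
    tau: float = 0.01    # soft target-network update rate
    lr: float = 1e-3     # learning rate of the Q-network optimizer

def soft_update(target_w, online_w, tau):
    """Blend online weights into the target network: w_t <- tau*w_o + (1-tau)*w_t."""
    return [tau * o + (1.0 - tau) * t for o, t in zip(online_w, target_w)]

cfg = DQNConfig()
new_target = soft_update([0.0, 1.0], [1.0, 1.0], cfg.tau)
```

Stating each value alongside the reason it was chosen (e.g., sensitivity experiments or values common in the traffic-signal DRL literature) would address the reviewer's concern directly.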
Elsevier B.V., Radarweg 29, 1043 NX Amsterdam, The Netherlands, Reg. No. 33156677.