Deep Reinforcement Learning for Addressing Disruptions in Traffic Light Control

Faizan Rasheed, Kok-Lim Alvin Yau, Rafidah Md Noor, Yung-Wey Chong

Research output: Contribution to journal › Article › peer-review


This paper investigates the use of a multi-agent deep Q-network (MADQN) to address the curse of dimensionality that arises in the traditional multi-agent reinforcement learning (MARL) approach. The proposed MADQN is applied to traffic light controllers at multiple intersections with busy traffic and traffic disruptions, particularly rainfall. MADQN is based on the deep Q-network (DQN), which integrates traditional reinforcement learning (RL) with the newly emerging deep learning (DL) approach. MADQN enables traffic light controllers to learn, exchange knowledge with neighboring agents, and select optimal joint actions in a collaborative manner. A case study based on a real traffic network is conducted as part of a sustainable urban city project in Sunway City, Kuala Lumpur, Malaysia. An investigation is also performed on a grid traffic network (GTN) to verify that the proposed scheme is effective in a traditional traffic network. Our proposed scheme is evaluated using two simulation tools, namely Matlab and Simulation of Urban Mobility (SUMO). In the simulations, our proposed scheme reduces the cumulative delay of vehicles by up to 30%.
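As background to the abstract, the following is a minimal, illustrative sketch of the temporal-difference update that underlies Q-learning and, by extension, DQN (which replaces the table below with a deep network; the paper's MADQN further adds inter-agent knowledge exchange, which is not shown here). The state and action names, and the learning-rate and discount values, are assumptions for illustration, not taken from the paper.

```python
# Illustrative tabular Q-learning update underlying DQN (not the
# paper's implementation; phase names and constants are assumed).

ALPHA = 0.1   # learning rate (assumed value)
GAMMA = 0.9   # discount factor (assumed value)

def q_update(q_table, state, action, reward, next_state, actions):
    """Apply one temporal-difference step toward the Bellman target
    r + gamma * max_a' Q(s', a'), and return the updated Q-value."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in actions)
    target = reward + GAMMA * best_next
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (target - old)
    return q_table[(state, action)]

# Hypothetical traffic-light phases as the action set of one controller.
phases = ["green_NS", "green_EW"]
q = {}
q_update(q, "s0", "green_NS", 1.0, "s1", phases)
```

In DQN the lookup table is replaced by a neural network trained on the same target, which is what lets the approach scale past the dimensionality limits of tabular MARL noted in the abstract.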
Original language: English
Pages (from-to): 2225-2247
Number of pages: 23
Journal: Computers, Materials & Continua
Issue number: 2
Publication status: Published - 7 Dec 2021


