Reinforcement Learning-Enabled Cross-Layer Optimization for Low-Power and Lossy Networks under Heterogeneous Traffic Patterns.
Arslan Musaddiq, Zulqar Nain, Yazdan Ahmad Qadri, Rashid Ali, Sung Won Kim. Published in: Sensors (Basel, Switzerland) (2020)
The next generation of Internet of Things (IoT) networks is expected to handle massive sensor deployments with radically heterogeneous traffic applications, which leads to network congestion and calls for new mechanisms to improve network efficiency. Existing protocols are based on simple heuristic mechanisms, and the probability of collision remains one of the significant challenges of future IoT networks. The medium access control layer of IEEE 802.15.4 uses a distributed coordination function to determine the efficiency of accessing wireless channels in IoT networks. Similarly, the network layer uses a ranking mechanism to route packets. The objective of this study was to intelligently utilize the cooperation of multiple communication layers in an IoT network. Recently, Q-learning (QL), a machine learning algorithm, has emerged as a way to solve learning problems on energy- and computation-constrained sensor devices. We therefore present a QL-based intelligent collision probability inference algorithm that optimizes the performance of sensor nodes by using channel collision probability and network-layer ranking as states, together with an accumulated reward function. Simulation results showed that the proposed scheme achieved a higher packet reception ratio, produced significantly lower control overhead, and consumed less energy than current state-of-the-art mechanisms.
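The abstract does not spell out the state, action, or reward definitions, but the core idea (a node learning from MAC-layer collision probability and routing-layer rank via an accumulated reward) maps onto a standard tabular Q-learning update. The sketch below is a minimal illustration under assumed state discretization, an assumed action set, and assumed hyperparameters; it is not the authors' published implementation.

```python
# Minimal sketch of a tabular Q-learning update for the cross-layer idea
# described in the abstract. The state/action spaces, reward shaping, and
# hyperparameter values below are illustrative assumptions.
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate (assumed)
GAMMA = 0.9    # discount factor (assumed)
EPSILON = 0.1  # exploration rate (assumed)

# Hypothetical action set: which candidate parent / channel-access setting to use.
ACTIONS = ["parent_low_rank", "parent_low_collision", "parent_balanced"]

Q = defaultdict(float)  # Q[(state, action)] -> value

def discretize_state(collision_prob, parent_rank):
    """State combines MAC-layer collision probability (binned) and the
    RPL-style rank of the candidate parent (binned), following the
    abstract's cross-layer formulation; bin edges are assumptions."""
    collision_bin = min(int(collision_prob * 10), 9)   # 10 bins over [0, 1)
    rank_bin = min(parent_rank // 256, 9)              # coarse rank bins
    return (collision_bin, rank_bin)

def select_action(state):
    """Epsilon-greedy action selection over the hypothetical action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update; the accumulated reward mentioned in the
    abstract would be supplied as `reward` (e.g., positive for a delivered
    packet, penalties for collisions or retransmissions -- assumed shaping)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In such a scheme, each node would call discretize_state with its locally observed collision probability and candidate-parent rank, pick an action, and apply the update after observing whether the transmission succeeded; the table fits in the memory of a constrained sensor device because both state dimensions are coarsely binned.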