A Deep Reinforcement Learning Approach to Queue Management and Revenue Maximization in Multi-Tier 5G Wireless Networks
Keywords: Deep reinforcement learning, Reinforcement learning, Network slice, 5G, Slice admission, Resource allocation

Abstract
It is envisioned that 5G systems will increasingly leverage the network slicing concept to meet the demands of diverse services, each tailored to specific user requirements. In this context, slice admission algorithms are required that admit slices to the system and optimize a given objective while ensuring the efficient allocation of resources. Reinforcement learning has been used successfully to implement optimal slice admission policies. However, as the 5G wireless network grows larger and more intricate, the state and action spaces become large, which negatively impacts the efficiency and convergence of reinforcement learning slice admission algorithms. To address this, deep reinforcement learning, a combination of reinforcement learning and deep learning, has been adopted. In this paper, a Deep Q-Learning slice admission algorithm is designed; to this end, a utility function was developed. Results show that using this utility as a maximization objective enabled the designed algorithm to (i) optimize the infrastructure provider's revenue while (ii) providing queue management in terms of queue length and queue delay.
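The abstract's core idea, a utility that rewards revenue while penalizing queue length and queue delay, driving an epsilon-greedy admit/reject decision over learned Q-values, can be illustrated with a minimal sketch. The weights, function names, and the two-action encoding below are illustrative assumptions, not taken from the paper itself:

```python
import random

def utility(revenue, queue_length, queue_delay,
            w_rev=1.0, w_len=0.5, w_delay=0.5):
    """Hypothetical per-step reward: revenue gain minus queue-management
    penalties. The weights are illustrative placeholders."""
    return w_rev * revenue - w_len * queue_length - w_delay * queue_delay

def admission_action(q_values, epsilon=0.1):
    """Epsilon-greedy choice over actions {0: reject slice, 1: admit slice},
    given Q-value estimates (e.g. the output of a Deep Q-Network)."""
    if random.random() < epsilon:
        return random.choice([0, 1])  # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```

In a Deep Q-Learning setting, `q_values` would come from a neural network evaluated on the current system state (e.g. queue occupancy and pending slice requests), and `utility` would serve as the reward signal during training.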