Authors: Kougioumtzidis, G. V., Poulkov, V. K., Lazaridis, P. I., Zaharis, Z. D.
Title: Deep Reinforcement Learning-Based Resource Allocation for QoE Enhancement in Wireless VR Communications
Keywords: 5G new radio (NR), deep reinforcement learning (DRL), quality of experience (QoE), resource allocation, wireless virtual reality (VR) communications

Abstract: Wireless virtual reality (VR) communication applications have emerged as a transformative technology, offering innovative solutions in various areas of everyday life. However, the successful deployment of these applications faces challenges in ensuring high quality of experience (QoE), especially in environments with limited network resources. This research paper presents a novel approach to address the challenge of enhancing QoE by incorporating deep reinforcement learning (DRL) techniques in the resource allocation process. The proposed model takes into account the quality of service (QoS) parameters of the 5G new radio (NR) network to optimize its operation, ensuring a seamless and immersive VR experience. Specifically, the resource allocation strategy adopts a policy that maximizes the transmission-related QoE value based on the evolving characteristics of the communication channel and user interactions. To evaluate the effectiveness of the proposed approach, extensive simulations and comparative analyses against traditional resource allocation methods are performed. The results demonstrate significant improvements in the transmission-related QoE values and highlight the superiority of the DRL-based resource allocation approach in dynamic and unpredictable wireless environments.
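
To make the abstract's description more concrete, the following is a minimal sketch in Python/PyTorch of the general idea only, not the authors' model: a DQN agent picks a discrete resource-block share for a VR user and is trained on a hypothetical transmission-related QoE reward. The state features, toy channel dynamics, reward shape, and all numeric settings are assumptions made purely for illustration.

# Minimal sketch (not the paper's implementation): DQN-style QoE-driven
# resource-block share selection with a hypothetical QoE reward.
import random
import numpy as np
import torch
import torch.nn as nn

N_ACTIONS = 4          # hypothetical discrete RB-share levels
STATE_DIM = 3          # e.g. normalised SNR, queue length, last QoE (assumed)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, x):
        return self.net(x)

def qoe_reward(snr, rb_share):
    # Hypothetical QoE proxy: logarithmic utility of the achieved rate,
    # penalised when the share cannot sustain the VR stream (assumption).
    rate = rb_share * np.log2(1.0 + snr)
    return float(np.log(1.0 + rate)) - (1.0 if rate < 0.5 else 0.0)

qnet, target = QNet(), QNet()
target.load_state_dict(qnet.state_dict())
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer, gamma, eps = [], 0.95, 0.1

state = np.random.rand(STATE_DIM).astype(np.float32)
for step in range(2000):
    # Epsilon-greedy selection over RB-share levels.
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(qnet(torch.from_numpy(state)).argmax())
    snr = 1.0 + 9.0 * float(state[0])                          # toy channel model
    reward = qoe_reward(snr, (action + 1) / N_ACTIONS)
    next_state = np.random.rand(STATE_DIM).astype(np.float32)  # toy dynamics
    buffer.append((state, action, reward, next_state))
    state = next_state

    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s, a, r, s2 = map(np.array, zip(*batch))
        s = torch.from_numpy(s.astype(np.float32))
        s2 = torch.from_numpy(s2.astype(np.float32))
        a = torch.from_numpy(a).long()
        r = torch.from_numpy(r.astype(np.float32))
        q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            q_target = r + gamma * target(s2).max(1).values
        loss = nn.functional.mse_loss(q, q_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    if step % 200 == 0:
        target.load_state_dict(qnet.state_dict())

In the paper's setting, the toy reward above would be replaced by the transmission-related QoE value estimated from the 5G NR QoS parameters, and the random next state by the evolving channel and user-interaction observations.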

Issue

IEEE Access, vol. 13, pp. 25045-25058, 2025, United States, https://doi.org/10.1109/ACCESS.2025.3538546

Copyright: Institute of Electrical and Electronics Engineers Inc.

Citations:
1. Nguyen G.M., Asiedu D.K.P., Yun J.-H., Fast adaptation of multi-cell NOMA resource allocation via federated meta-reinforcement learning, 2025, Computer Networks, issue 0, vol. 272, DOI 10.1016/j.comnet.2025.111701, issn 13891286 - 2025 - in publications indexed in Scopus and/or Web of Science
2. Chia R., Pang W.L., King Phang S., Hwang Goh H., Yoong Chan K., Machine Learning-Driven Analysis of User Bandwidth Allocation and Performance in 5G Network, 2025, IEEE Access, issue 0, DOI 10.1109/ACCESS.2025.3615398, eissn 21693536 - 2025 - in publications indexed in Scopus and/or Web of Science

Type: journal article, publication in a journal with an impact factor, publication in a refereed journal, indexed in Scopus and Web of Science