Joint Communication and Computation for Emerging Applications in Next-Generation Wireless Networks
Victor Frost
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri
Emerging applications in next-generation wireless networks, such as augmented and virtual reality (AR/VR) and autonomous vehicles, demand significant computational and communication resources at the network edge. This PhD research focuses on developing joint communication–computation solutions while incorporating various network-, application-, and user-imposed constraints. In the first thrust, we examine the problem of energy-constrained computation offloading to edge servers in a multi-user, multi-channel wireless network. To develop a decentralized offloading policy for each user, we model the problem as a partially observable Markov decision process (POMDP). Leveraging bandit learning methods, we introduce a decentralized task offloading solution in which edge users offload their computation tasks to nearby edge servers over selected communication channels.
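To illustrate the bandit-learning viewpoint, the sketch below casts each user's decision as a multi-armed bandit over (edge server, channel) pairs, using the classical UCB1 index. This is a simplified stand-in, not the thesis's POMDP formulation: the reward model, state space, and energy constraints are abstracted into a single scalar reward, and all names here are hypothetical.

```python
import math

class OffloadingBandit:
    """Per-user UCB1 bandit over (edge server, channel) pairs.

    Illustrative sketch only: each arm is a candidate (server, channel)
    combination, and the reward stands in for a normalized task-completion
    utility (e.g., inverse latency minus an energy penalty).
    """

    def __init__(self, servers, channels):
        self.arms = [(s, c) for s in servers for c in channels]
        self.counts = {a: 0 for a in self.arms}   # times each arm was played
        self.values = {a: 0.0 for a in self.arms}  # empirical mean reward
        self.t = 0

    def select_arm(self):
        self.t += 1
        # Play each arm once before applying the UCB index.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        # UCB1: empirical mean plus an exploration bonus that shrinks
        # as an arm is sampled more often.
        return max(self.arms, key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        # Incremental update of the arm's empirical mean reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Because each user runs its own bandit instance and observes only its own rewards, the resulting policy is decentralized in the same spirit as the offloading solution described above.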
The second thrust focuses on user-driven requirements for resource-intensive applications, specifically the Quality of Experience (QoE) in 2D and 3D video streaming. Given the unique characteristics of millimeter-wave (mmWave) networks, we develop a beam alignment and buffer-predictive multi-user scheduling algorithm for 2D video streaming applications. This algorithm balances the trade-off between beam alignment overhead and playback buffer levels for optimal resource allocation across multiple users. We then extend our investigation to develop a joint rate adaptation and computation distribution framework for 3D video streaming in mmWave-based VR systems. Numerical results using real-world mmWave traces and 3D video datasets demonstrate significant improvements in video quality, rebuffering time, and quality variations perceived by users.
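The trade-off between beam-alignment overhead and playback buffer levels can be sketched with a simple greedy rule: in each scheduling slot, predict each user's end-of-slot buffer, charging alignment time against users whose beams are stale, and serve the user most at risk of rebuffering. This is a hypothetical simplification of the buffer-predictive idea; the field names and the greedy criterion are illustrative assumptions, not the thesis algorithm.

```python
def schedule_slot(users, align_cost_s, slot_s):
    """Pick the user to serve in the next slot.

    Each user is a dict (hypothetical schema):
      'buffer_s'    - seconds of video currently buffered
      'rate_bps'    - achievable link rate once the beam is aligned
      'bitrate_bps' - bitrate of the video being streamed
      'aligned'     - whether the beam is still aligned (no overhead)
    """
    def predicted_buffer(u):
        # Beam-alignment overhead eats into the useful transmission time.
        tx_time = slot_s - (0.0 if u['aligned'] else align_cost_s)
        # Seconds of video downloaded during the remaining time.
        gained = max(tx_time, 0.0) * u['rate_bps'] / u['bitrate_bps']
        # Playback drains the buffer for the full slot duration.
        return u['buffer_s'] + gained - slot_s

    # Serve the user with the lowest predicted end-of-slot buffer.
    return min(users, key=predicted_buffer)
```

Note how the alignment cost appears only for stale beams: a user with a low buffer but a recently aligned beam can be cheaper to rescue than one that first needs realignment, which is exactly the tension the scheduling algorithm balances.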
Finally, we develop novel edge computing solutions for multi-layer immersive video processing systems. By exploring and exploiting the elastic nature of computation tasks in these systems, we propose a multi-agent reinforcement learning (MARL) framework that incorporates two learning-based methods: the centralized phasic policy gradient (CPPG) and the independent phasic policy gradient (IPPG). During training, IPPG leverages shared information and model parameters to learn edge offloading policies; during execution, however, each user acts independently based only on its local state information. This decentralized execution reduces the communication and computation overhead of centralized decision-making and improves scalability. We leverage real-world 4G, 5G, and WiGig network traces, along with 3D video datasets, to investigate the performance trade-offs of CPPG and IPPG when applied to elastic task computing.
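The decentralized-execution property of IPPG can be illustrated with a minimal sketch: all users share one set of policy parameters (learned jointly during training), but at run time each user evaluates that policy on its own local state only, with no inter-user communication. The linear policy, dimensions, and action encoding below are illustrative assumptions, not the actual IPPG architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared policy parameters, assumed already learned during
# centralized training: a single linear layer mapping a local state vector
# to logits over offloading actions (0 = compute locally, 1..K = offload
# to edge server k).
STATE_DIM, N_ACTIONS = 4, 3
W = rng.normal(size=(STATE_DIM, N_ACTIONS))

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def act(local_state):
    """Decentralized execution: each user evaluates the shared policy on
    its own local state; no other user's state is needed."""
    probs = softmax(local_state @ W)
    return int(np.argmax(probs))  # greedy action for illustration

# Every user runs the same parameters W but observes a different
# local state, so decisions are made independently in parallel.
actions = [act(rng.normal(size=STATE_DIM)) for _ in range(5)]
```

Since only the parameters `W` are shared (fixed after training) and no per-step state exchange occurs, the communication overhead at execution time is independent of the number of users, which is the scalability benefit noted above.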