Optimized VR Video Offloading via Deep Reinforcement Learning
Keywords:
eXtended Reality, Offloading, Multi-access Edge Computing, Deep Reinforcement Learning, Energy efficiency, Quality of Service

Abstract
In recent years, eXtended Reality (XR) applications
have been widely adopted across diverse sectors such as
tourism, healthcare, education, and manufacturing.
Such applications are now available on mobile devices,
wearables, tablets, and similar platforms.
Mobile devices often have limited battery capacity and
computing power, which restricts the range of supported
applications and degrades the user experience. A viable
solution to these challenges is to offload computation
to cloud servers. The fundamental limitation of cloud
computing, however, is the considerable distance between
the processing server and the end user, which can lead
to unacceptable latency for many mobile XR
applications. To address
these limitations, Multi-access Edge Computing (MEC)
has been proposed to deliver mobile computing, network
control, and storage services at the network edge
(e.g., base stations and access points), enabling the
deployment of computation-intensive and latency-sensitive
applications on resource-constrained mobile
devices. This study presents a Deep Reinforcement
Learning-based offloading strategy for XR applications
(DRLXR). The problem is formulated as the optimization
of a utility function that jointly considers energy
consumption and execution latency at the devices,
modeled as a Markov Decision Process (MDP) for
decision-making. A Deep Reinforcement Learning (DRL)
approach is then used to train the agent and determine
near-optimal offloading decisions for mobile XR
devices. The proposed DRLXR scheme is evaluated
in a simulated environment and compared against other
state-of-the-art offloading techniques. The simulation
results show that the proposed approach outperforms
the alternatives in both overall execution delay
and energy consumption.