Hybrid Policy Learning for Energy-Latency Tradeoff in MEC-Assisted VR Video Service

2 Apr 2021 · Chong Zheng, Shengheng Liu, Yongming Huang, Luxi Yang

Virtual reality (VR) promises to fundamentally transform a broad spectrum of industry sectors and the way humans interact with virtual content. Despite unprecedented progress, however, current networking and computing infrastructures are inadequate to unlock VR's full potential. In this paper, we consider delivering a wireless multi-tile VR video service over a mobile edge computing (MEC) network. The primary goal is to minimize the system latency and energy consumption and to strike a tradeoff between them. To this end, we first cast the time-varying view popularity as a model-free Markov chain to effectively capture its dynamic characteristics. After jointly assessing the caching and computing capacities of both the MEC server and the VR playback device, we implement a hybrid policy that coordinates dynamic cache replacement with deterministic offloading, so as to fully utilize the system resources. The underlying multi-objective problem is reformulated as a partially observable Markov decision process, and a deep deterministic policy gradient (DDPG) algorithm is proposed to iteratively learn its solution, where a long short-term memory (LSTM) neural network is embedded to continuously predict the dynamics of the unobservable popularity. Simulation results demonstrate the superiority of the proposed scheme over baseline methods in trading off energy efficiency against latency reduction.
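
The abstract describes an actor that embeds an LSTM to track the unobservable popularity dynamics and outputs a hybrid action covering cache replacement and offloading. The sketch below illustrates how such a DDPG-style actor could be structured; the layer sizes, class and variable names, and the action encoding (per-tile caching scores and offloading probabilities) are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of an LSTM-embedded DDPG actor for the hybrid caching/offloading
# policy described in the abstract. All names and dimensions are hypothetical.
import torch
import torch.nn as nn


class LSTMActor(nn.Module):
    def __init__(self, obs_dim: int, num_tiles: int, hidden_dim: int = 128):
        super().__init__()
        # LSTM summarizes the history of observations into a belief about the
        # unobservable, time-varying view popularity.
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        # Continuous head: per-tile cache-replacement scores in [0, 1].
        self.cache_head = nn.Sequential(nn.Linear(hidden_dim, num_tiles), nn.Sigmoid())
        # Offloading head: per-tile probability of offloading rendering to the MEC server.
        self.offload_head = nn.Sequential(nn.Linear(hidden_dim, num_tiles), nn.Sigmoid())

    def forward(self, obs_seq: torch.Tensor, hidden=None):
        # obs_seq: (batch, time, obs_dim) sequence of observed tile requests/feedback.
        out, hidden = self.lstm(obs_seq, hidden)
        belief = out[:, -1]                       # latest belief state
        cache_action = self.cache_head(belief)    # soft caching decisions
        offload_action = self.offload_head(belief)
        return cache_action, offload_action, hidden


if __name__ == "__main__":
    actor = LSTMActor(obs_dim=16, num_tiles=8)
    history = torch.randn(1, 10, 16)              # 10 observation steps
    cache, offload, _ = actor(history)
    print(cache.shape, offload.shape)             # torch.Size([1, 8]) torch.Size([1, 8])
```

In a full DDPG loop, a critic network would score these hybrid actions against a weighted latency/energy reward, and the continuous outputs would be thresholded or rounded to obtain the deterministic offloading decisions.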
