Variational Autoencoder Assisted Neural Network Likelihood RSRP Prediction Model (2207.00166v1)

Published 27 Jun 2022 in cs.NI, cs.LG, and eess.SP

Abstract: Measuring customer experience on mobile data is of utmost importance for global mobile operators. The reference signal received power (RSRP) is one of the important indicators for current mobile network management, evaluation and monitoring. Radio data gathered through minimization of drive tests (MDT), a 3GPP standard technique, is commonly used for radio network analysis. Collecting MDT data in different geographical areas is inefficient and constrained by terrain conditions and user presence, and hence is not an adequate technique for dynamic radio environments. In this paper, we study a generative model for RSRP prediction, exploiting MDT data and a digital twin (DT), and propose a data-driven, two-tier neural network (NN) model. In the first tier, environmental information related to user equipment (UE), base stations (BS) and network key performance indicators (KPI) is extracted through a variational autoencoder (VAE). The second tier is designed as a likelihood model, in which the environmental features and real MDT data features are combined to form an integrated training process. On validation with real-world data, our proposed model demonstrates an accuracy improvement of about 20% or more over the empirical model and about 10% over a fully connected prediction network.
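The abstract describes a two-tier architecture: a VAE that encodes environmental and digital-twin context (UE, BS and KPI information), and a likelihood model that predicts RSRP from the learned latent features together with real MDT features, trained jointly. The PyTorch sketch below is only an illustration of that structure under stated assumptions: the layer widths, feature dimensions, Gaussian likelihood head and loss weighting are my own placeholders, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnvVAE(nn.Module):
    """Tier 1 (assumed layout): VAE over environmental / digital-twin features."""
    def __init__(self, in_dim=16, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: sample a latent environmental feature.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar, z

class RSRPLikelihoodHead(nn.Module):
    """Tier 2 (assumed form): Gaussian likelihood over RSRP, conditioned on
    the VAE latent plus raw MDT features."""
    def __init__(self, latent_dim=8, mdt_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + mdt_dim, 64), nn.ReLU())
        self.mean = nn.Linear(64, 1)
        self.logvar = nn.Linear(64, 1)

    def forward(self, z, mdt):
        h = self.net(torch.cat([z, mdt], dim=-1))
        return self.mean(h), self.logvar(h)

def joint_loss(env, mdt, rsrp, vae, head, beta=1.0):
    """Integrated training objective: VAE reconstruction + KL term, plus the
    negative log-likelihood of the observed RSRP (weighting is an assumption)."""
    recon, mu, logvar, z = vae(env)
    recon_loss = F.mse_loss(recon, env)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    pred_mu, pred_logvar = head(z, mdt)
    nll = F.gaussian_nll_loss(pred_mu, rsrp, pred_logvar.exp())
    return recon_loss + beta * kl + nll
```

Training would minimise joint_loss over mini-batches of MDT samples, so the environmental encoder and the RSRP likelihood head are optimised together rather than in separate stages; the specific feature splits and the beta weighting here are hypothetical.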

Authors
  1. Peizheng Li
  2. Xiaoyang Wang
  3. Robert Piechocki
  4. Shipra Kapoor
  5. Angela Doufexi
  6. Arjun Parekh
Citations (2)
