Adversarial ML Attack on Self Organizing Cellular Networks

26 Sep 2019 · Salah-ud-din Farooq, Muhammad Usama, Junaid Qadir, Muhammad Ali Imran

Deep Neural Networks (DNNs) have been widely adopted in self-organizing networks (SON) for automating different networking tasks. Recently, it has been shown that DNNs lack robustness against adversarial examples, where an adversary can fool a DNN model into an incorrect classification by introducing a small, imperceptible perturbation to the original example. SON is expected to use DNNs for multiple fundamental cellular tasks, yet many of the DNN-based solutions for SON tasks proposed in the literature have not been tested against adversarial examples. In this paper, we have tested and explained the robustness of SON against adversarial examples and investigated the performance of an important SON use case in the face of adversarial attacks. We have also generated explanations of incorrect classifications by utilizing an explainable artificial intelligence (AI) technique.
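As an illustration of the kind of perturbation the abstract describes, below is a minimal sketch of the fast gradient sign method (FGSM), a standard way to craft small adversarial perturbations against a DNN classifier. This is an assumed, generic example in PyTorch, not the attack pipeline used in the paper; the model, loss, and epsilon value are placeholders.

```python
# Hypothetical FGSM sketch: perturb an input x so a classifier misclassifies it.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Return an adversarial copy of x shifted by epsilon along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Single step in the direction that increases the classification loss the most.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

A small epsilon keeps the perturbation nearly imperceptible while still flipping the model's prediction on many inputs, which is the robustness gap the paper probes for SON use cases.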


Categories

Cryptography and Security · Networking and Internet Architecture
