Adversarial EXEmples: Functionality-preserving Optimization of Adversarial Windows Malware

Windows malware classifiers that rely on static analysis have been shown to be vulnerable to adversarial EXEmples, i.e., malware samples carefully manipulated to evade detection. However, such attacks are typically optimized via query-inefficient algorithms that iteratively apply random manipulations to the input malware, and they require computationally expensive validation steps to check that the malicious functionality is preserved after each manipulation. To overcome these limitations, we propose RAMEn, a general framework for creating adversarial EXEmples via functionality-preserving manipulations. RAMEn optimizes the parameters of such manipulations via gradient-based (white-box) and gradient-free (black-box) attacks, and it implements many state-of-the-art attacks for crafting adversarial Windows malware. It also includes GAMMA, a family of black-box attacks that optimize the injection of benign content to facilitate evasion. Our experiments show that both gradient-based and gradient-free attacks can bypass malware detectors based on deep learning, non-differentiable models trained on hand-crafted features, and even some renowned commercial products.
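
To make the idea concrete, here is a minimal, self-contained sketch of a GAMMA-style black-box padding attack. It assumes a `score` callback that returns the target detector's maliciousness score for raw PE bytes; the function name `gamma_like_padding_attack`, the chunk list, and the simple (1+1) hill-climbing search are illustrative assumptions, not the authors' implementation. GAMMA itself uses a genetic optimizer and also supports injecting benign content into new PE sections, but the objective (detector score plus a soft penalty on injected size) and the functionality-preserving injection follow the same idea.

```python
import random

def gamma_like_padding_attack(malware: bytes, benign_chunks: list,
                              score, iterations: int = 200,
                              threshold: float = 0.5,
                              size_penalty: float = 1e-6,
                              seed: int = 0) -> bytes:
    """Gradient-free search over which benign chunks to append to a PE file.

    Bytes appended after the end of a PE file (overlay/padding) are never
    mapped or executed, so the malware's functionality is preserved by
    construction, with no need for costly sandbox validation. `score` is
    the black-box detector: a callable mapping raw PE bytes to a
    maliciousness score in [0, 1]. A simple (1+1) hill climber stands in
    for GAMMA's genetic optimizer.
    """
    rng = random.Random(seed)
    mask = [False] * len(benign_chunks)  # which benign chunks get injected

    def evaluate(m):
        candidate = malware + b"".join(
            c for c, keep in zip(benign_chunks, m) if keep)
        s = score(candidate)
        # GAMMA-style objective: detector score plus a soft penalty on the
        # number of injected bytes, to keep the adversarial EXEmple small.
        return candidate, s, s + size_penalty * (len(candidate) - len(malware))

    best, best_score, best_obj = evaluate(mask)  # one query per candidate
    for _ in range(iterations):
        if best_score < threshold:
            break                          # detector evaded, stop querying
        i = rng.randrange(len(mask))
        mask[i] = not mask[i]              # mutate: toggle one chunk
        cand, cand_score, cand_obj = evaluate(mask)
        if cand_obj < best_obj:
            best, best_score, best_obj = cand, cand_score, cand_obj
        else:
            mask[i] = not mask[i]          # revert a non-improving mutation
    return best
```

In practice, `benign_chunks` would hold section contents extracted from goodware (e.g., `.data` or `.rdata` payloads) and `score` would wrap queries to the target detector; the size penalty trades off evasion success against the size of the resulting EXEmple.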
