Are Malware Detection Classifiers Adversarially Vulnerable to Actor-Critic based Evasion Attacks?

Authors

  • Hemant Rathore Department of CS & IS, Goa Campus, BITS Pilani, India
  • Sujay C Sharma Department of CS & IS, Goa Campus, BITS Pilani, India
  • Sanjay K. Sahay Department of CS & IS, Goa Campus, BITS Pilani, India
  • Mohit Sewak Security & Compliance Research, Microsoft R & D, India

DOI:

https://doi.org/10.4108/eai.31-5-2022.174087

Keywords:

Adversarial Robustness, Android, Deep Neural Network, Malware Analysis and Detection, Machine Learning

Abstract

Android devices like smartphones and tablets have become immensely popular and are an integral part of our daily lives. However, this popularity has also attracted malware developers, and Android malware has grown aggressively in the last few years. Research shows that machine learning, ensemble, and deep learning models can successfully detect Android malware. However, the robustness of these models against well-crafted adversarial samples has not been well investigated. Therefore, we first step into the adversary's shoes and propose the ACE attack, which adds limited perturbations to malicious applications so that they are forcefully misclassified as benign and remain undetected by different malware detection models. The ACE agent is designed on an actor-critic architecture that uses reinforcement learning to add perturbations (at most ten) while maintaining the structural and functional integrity of the adversarial malicious applications. The proposed attack is validated against twenty-two malware detection models built from two feature sets and eleven classification algorithms. With a maximum of ten perturbations, the ACE attack achieved an average fooling rate of 46.63% across the eleven permission-based malware detection models and 95.31% across the eleven intent-based models. The attack forced a massive number of misclassifications, leading to average accuracy drops of 18.07% and 36.62% in the permission- and intent-based malware detection models, respectively. We then design a defense mechanism based on adversarial retraining, which uses adversarial malware samples with correct class labels to retrain the models. The defense improves average accuracy by 24.88% and 76.51% for the eleven permission-based and eleven intent-based malware detection models, respectively. In conclusion, we find that malware detection models based on machine learning, ensemble, and deep learning perform poorly against adversarial samples.
Malware detection models should therefore be investigated for vulnerabilities, and those vulnerabilities mitigated, to enhance their overall forensic knowledge and adversarial robustness.

Published

31-05-2022

How to Cite

Rathore H, Sharma SC, Sahay SK, Sewak M. Are Malware Detection Classifiers Adversarially Vulnerable to Actor-Critic based Evasion Attacks?. EAI Endorsed Scal Inf Syst [Internet]. 2022 May 31 [cited 2022 Dec. 3];10(1):e6. Available from: https://publications.eai.eu/index.php/sis/article/view/1314