Model Protection Scheme Against Distillation Attack in Internet of Vehicles
DOI: https://doi.org/10.4108/eetel.v8i3.3318
Keywords: Internet of vehicles, Privacy protection, Distillation immunity, Model reinforcement, Differential privacy
Abstract
Deep learning models deployed in Internet of Vehicles scenarios can be stolen by malicious roadside units, base stations, and other attackers through knowledge distillation and similar techniques, threatening both model security and user data privacy. To address this, this paper proposes a reinforcement scheme that prevents distillation. The scheme exploits model-reinforcement ideas such as model self-learning and an attention mechanism to maximize the difference between the pre-trained model and a normal model without sacrificing performance, and it combines local differential privacy to reduce the effectiveness of model inversion attacks. Experimental results on several datasets show that the method is effective against both standard and data-free knowledge distillation and provides better model protection than passive defenses.
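The abstract does not spell out the training objective, but the anti-distillation idea can be sketched as a loss that keeps the protected model accurate while pushing its softened outputs away from a pre-trained "normal" reference model, in the spirit of the "nasty teacher" line of work. The function name, temperature tau, and weight alpha below are illustrative assumptions, not the paper's definitions:

    import torch
    import torch.nn.functional as F

    def reinforcement_loss(protected_logits, normal_logits, labels,
                           tau=4.0, alpha=0.05):
        # Hypothetical sketch: cross-entropy keeps the protected model
        # accurate, while the subtracted KL term pushes its softened
        # output distribution away from the normal reference model, so a
        # student distilling from it inherits misleading soft labels.
        ce = F.cross_entropy(protected_logits, labels)
        # Softened distributions at temperature tau, as in standard
        # knowledge distillation.
        p = F.log_softmax(protected_logits / tau, dim=1)
        q = F.softmax(normal_logits.detach() / tau, dim=1)
        kl = F.kl_div(p, q, reduction="batchmean") * tau * tau
        # Minimizing (ce - alpha * kl) maximizes divergence from the
        # normal model without sacrificing task accuracy.
        return ce - alpha * kl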
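The local differential privacy component can likewise be illustrated by a generic Laplace-mechanism sketch that perturbs confidence scores on the vehicle side before they are released, limiting what a model-inversion attacker can recover. The epsilon value, sensitivity, and renormalization step are assumptions for illustration, not the paper's mechanism:

    import numpy as np

    def laplace_perturb(confidences, epsilon=1.0, sensitivity=1.0):
        # Hypothetical local-DP output perturbation: add Laplace noise
        # with scale sensitivity/epsilon to each confidence score.
        noise = np.random.laplace(loc=0.0,
                                  scale=sensitivity / epsilon,
                                  size=confidences.shape)
        # Clip negatives and renormalize so the result is still a
        # probability vector.
        noisy = np.clip(confidences + noise, 0.0, None)
        return noisy / noisy.sum()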
License
Copyright (c) 2023 EAI Endorsed Transactions on e-Learning.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Funding data
- National Natural Science Foundation of China, Grant number 62162009
- National Natural Science Foundation of China, Grant number 62101478