Artificial Intelligence in Intellectual Property Protection: Application of Deep Learning Model
DOI: https://doi.org/10.4108/eetiot.5388
Keywords: Intellectual Property, IP Protection, Deep Neural Network, DNN, Taxonomy, Machine Learning
Abstract
Creating and training a deep learning model is far more expensive than simply obtaining a copy of the trained model. A trained model is therefore regarded as the intellectual property (IP) of its creator, yet any high-performance model is at risk of being illegally copied, redistributed, and abused by malicious users. To counter these threats, a number of deep neural network (DNN) IP protection techniques have been proposed in recent years. This study examines the existing work on DNN IP protection. First, it proposes a taxonomy of DNN IP protection techniques organised along six aspects: scenario, method, size, category, function, and target models. It then discusses the challenges these methods face and their ability to resist malicious attacks at different levels and to provide proactive protection. Potential attacks on DNN IP protection techniques are also analysed from several perspectives, including model modification, evasion attacks, and active attacks.
In addition, the paper considers how these techniques can be assessed systematically, and explores future research directions for DNN IP protection in light of the challenges such techniques will confront in practice.
Result Statement: Creating and training a high-performance deep neural network (DNN) model is considerably more expensive than copying an already-trained one, so a trained model is regarded as the intellectual property (IP) of its creator. Infringement of DNN model IP has become a grave concern in recent years. This article summarises current work on DNN IP protection, focusing on the limitations and challenges these methods confront, and on each method's capacity for protection and resistance against attacks at different stages.
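Several of the surveyed techniques (e.g. the backdoor-based watermarking of Adi et al., reference [7] below) verify ownership in a black-box setting: the owner embeds a secret set of trigger inputs with pre-assigned labels during training, and a suspect model that reproduces those labels with unusually high agreement is flagged as a copy. The following is a minimal, illustrative sketch of only the verification step; all names (`watermark_agreement`, `is_pirated`, the toy models) are hypothetical and not taken from any of the cited works.

```python
# Sketch of trigger-set ("backdoor") watermark verification.
# The owner keeps a secret list of trigger inputs and the labels the
# watermarked model was trained to emit on them. Ownership of a suspect
# model is claimed when its agreement on the triggers exceeds a threshold.

def watermark_agreement(predict, triggers, expected_labels):
    """Fraction of secret triggers on which the model emits the owner's label."""
    hits = sum(1 for x, y in zip(triggers, expected_labels) if predict(x) == y)
    return hits / len(triggers)

def is_pirated(predict, triggers, expected_labels, threshold=0.9):
    """Flag the model as a suspected copy when agreement is above threshold."""
    return watermark_agreement(predict, triggers, expected_labels) >= threshold

# Toy demonstration: a "stolen" model memorised the trigger labels,
# while an independent model only matches them by chance.
triggers = list(range(10))
secret_labels = [t % 3 for t in triggers]

stolen_model = lambda x: x % 3       # reproduces the embedded watermark
independent_model = lambda x: 0      # unrelated behaviour

print(is_pirated(stolen_model, triggers, secret_labels))       # True
print(is_pirated(independent_model, triggers, secret_labels))  # False
```

The threshold trades off false accusations against missed detections; the robust schemes surveyed here additionally harden the trigger set against fine-tuning, pruning, and evasion, which this sketch does not attempt.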
References
Nagai Y, Uchida Y, Sakazawa S, Satoh S. Watermarking for deep neural networks. Int. J. Multimedia Inf. Retrieval. 2018; 7 (1): 3–16. DOI: https://doi.org/10.1007/s13735-018-0147-1
Rouhani BD, Chen H, Koushanfar F. DeepSigns: An end-to-end watermarking framework for ownership protection of deep neural networks. In: Proc. 24th Int. Conf. Architectural Support Program Lang. Operating Syst; 2019. p. 485-497.
Chen H, Rouhani BD, Fan X, Kilinc OC, Koushanfar F. Performance comparison of contemporary DNN watermarking techniques. 2018.
Namba R, Sakuma J. Robust watermarking of neural network with exponential weighting. In: Proc. ACM Asia Conf. Comput. Commun. Secur; 2019. p. 228–240. DOI: https://doi.org/10.1145/3321705.3329808
Zhang J, et al. Protecting intellectual property of deep neural networks with watermarking. In: Proc. Asia Conf. Comput. Commun Secur; 2018. p. 159-172. DOI: https://doi.org/10.1145/3196494.3196550
Chen H, Rouhani BD, Fu C, Zhao J, Koushanfar F. DeepMarks: A secure fingerprinting framework for digital rights management of deep learning models. In: Proc. Int. Conf. Multimedia Retrieval; 2019. p. 105–113. DOI: https://doi.org/10.1145/3323873.3325042
Adi Y, Baum C, Cissé M, Pinkas B, Keshet J. Turning your weakness into a strength: Watermarking deep neural networks by backdooring. In: 27th USENIX Secur Symp. 2018. p. 1615–1631.
Guo J, Potkonjak M. Evolutionary trigger set generation for DNN black-box watermarking. 2019.
Guo J, Potkonjak M. Watermarking deep neural networks for embedded systems. In: Proc. Int. Conf. Comput. Aided Des; 2018. p.1–8. DOI: https://doi.org/10.1145/3240765.3240862
Merrer EL, Perez P, Tredan G. Adversarial frontier stitching for remote neural network watermarking. Neural Comput. Appl. 2020; vol. 32: 9233-9244. DOI: https://doi.org/10.1007/s00521-019-04434-z
Zhao J, Hu Q, Liu G, Ma X, Chen F, Hassan MM. AFA: Adversarial fingerprinting authentication for deep neural networks. Comput. Commun. 2020; vol. 150: 488-497. DOI: https://doi.org/10.1016/j.comcom.2019.12.016
Meurisch C, Muhlhauser M. Data protection in AI services: A survey. ACM Comput. Surv. 2021; vol. 54: 40:1-40:38. DOI: https://doi.org/10.1145/3440754
Zhu R, Zhang X, Shi M, Tang Z. Secure neural network watermarking protocol against forging attack. EURASIP J. Image Video Process. 2020; 1-12. DOI: https://doi.org/10.1186/s13640-020-00527-1
Szyller S, Atli B G, Marchal S, Asokan N. DAWN: Dynamic adversarial watermarking of neural networks. In: Proc. ACM Multimedia Conf; 2021. p. 4417-4425. DOI: https://doi.org/10.1145/3474085.3475591
Lukas N, Zhang Y, Kerschbaum F. Deep neural network fingerprinting by conferrable adversarial examples. In: Proc. 9th Int. Conf. Learn. Representations; 2021. p.1-18.
Tang R, Du M, Hu X. Deep serial number: Computational watermarking for DNN intellectual property protection. 2020.
Chen M, Wu M. Protect your deep neural networks from piracy. In: Proc. IEEE Int. Workshop Inf. Forensics; 2018. p. 1-7. DOI: https://doi.org/10.1109/WIFS.2018.8630791
Fan L, Ng K, Chan C S. Rethinking deep neural networks ownership verification: Embedding passports to defeat ambiguity attacks. In: Proc. Annu. Conf. Neural Inf. Process. Syst; 2019. p. 4716-4725.
Zhang J, Chen D, Liao J, Zhang W, Hua G, Yu N. Passport-aware normalization for deep model protection. In: Proc. Annu. Conf. Neural Inf. Process. Syst; 2020. p. 1-10.
Xue M, Wu Z, He C, Wang J, Liu W. Active DNN IP protection: A novel user fingerprint management and DNN authorization control technique. In: Proc. IEEE 19th Int. Conf. Trust, Secur, Privacy Comput Commun; 2020. p. 975-982. DOI: https://doi.org/10.1109/TrustCom50675.2020.00130
Sun S, Xue M, Wang J, Liu W. Protecting the intellectual properties of deep neural networks with an additional class and steganographic images. 2021.
Xue M, Sun S, He C, Zhang Y, Wang J, Liu W. ActiveGuard: Active intellectual property protection for Deep Neural Networks via adversarial examples-based user fingerprinting. In: Proc. Int. Workshop Pract. Deep Learn. Wild (Workshop at AAAI); 2022. p. 1-7.
Chakraborty A, Mondal A, Srivastava A. Hardware-assisted intellectual property protection of deep learning models. In: Proc. 57th ACM/IEEE Des. Autom. Conf; 2020. p. 1-6. DOI: https://doi.org/10.1109/DAC18072.2020.9218651
Szentannai K, Afandi AI, Horvath A. MimosaNet: An unrobust neural network preventing model stealing. 2019.
Guan X, Feng H, Zhang W, Zhou H, Zhang J, Yu N. Reversible watermarking in deep convolutional neural networks for integrity authentication. In: Proc. 28th ACM Int. Conf. Multimedia; 2020. p. 2273-2280. DOI: https://doi.org/10.1145/3394171.3413729
Song C, Ristenpart T, Shmatikov V. Machine learning models that remember too much. In: Proc. ACM SIGSAC Conf. Comput. Commun Secur; 2017. p. 587-601. DOI: https://doi.org/10.1145/3133956.3134077
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
This is an open-access article distributed under the terms of the Creative Commons Attribution CC BY 3.0 license, which permits unlimited use, distribution, and reproduction in any medium so long as the original work is properly cited.