Artificial Intelligence in Intellectual Property Protection: Application of Deep Learning Model




Intellectual Property, IP Protection, Deep Neural Network, DNN, Taxonomy, Machine Learning


Creating and training a deep learning model is far more costly than obtaining an already-trained one. A trained model is therefore regarded as the intellectual property (IP) of its creator. However, such high-performance models are at constant risk of being illegally copied, redistributed, and abused by malicious users. To counter these threats, a number of deep neural network (DNN) IP protection techniques have been developed in recent years. The present study examines existing work on DNN IP protection. First, it proposes a taxonomy of DNN IP protection techniques covering six aspects: scenario, method, size, category, function, and target models. It then discusses the challenges these methods face and their ability to resist malicious attacks at different levels while providing proactive protection. It also analyses potential threats to DNN IP protection techniques from several perspectives, such as model modification, evasion, and active attacks.

In addition, this paper offers a systematic assessment and explores future research directions for DNN IP protection in light of the challenges such techniques will confront in practice.

Result Statement: Creating and training a high-performance deep neural network (DNN) model is far costlier than acquiring an already-trained one, so a trained DNN model is regarded as the intellectual property (IP) of its creator. Infringement of DNN model IP has become a grave concern in recent years. This article summarizes current DNN IP protection work, focusing on the limitations and challenges these methods confront, as well as their capacity to protect models and to resist attacks at various stages.








How to Cite

P. Pattnayak, T. Das, A. Mohanty, and S. Patnaik, “Artificial Intelligence in Intellectual Property Protection: Application of Deep Learning Model”, EAI Endorsed Trans IoT, vol. 10, Mar. 2024.