Research on Intrusion Detection Technology of Computing Nodes in Digital Power Grid based on Artificial Intelligence

This paper investigates an intrusion detection network for digital power grid networks, which consists of an edge server and two computational nodes that work collaboratively to detect any potential intrusion in the network. The primary objective of this study is to enhance the effectiveness of intrusion detection in the network. To achieve this objective, we first define the outage probability of the intrusion detection system under consideration, which provides a measure of the probability that the system fails to detect an intrusion when it occurs. We then derive a closed-form expression for the outage probability to enable further analysis of the system behavior. Since system resources, such as transmit power, are limited, we further design a transmit power allocation strategy to improve the system performance. This strategy seeks to optimize the allocation of transmit power across the different nodes of the intrusion detection network to maximize the likelihood of detecting intrusions while minimizing resource usage. Finally, to evaluate the performance of the proposed system, we conduct simulations and provide results that demonstrate the accuracy of the closed-form expression and the effectiveness of the transmit power allocation strategy. These simulation results serve as evidence of the efficacy of the proposed approach in detecting intrusions in a resource-constrained network, especially digital power grid networks.


Introduction
In recent years, artificial intelligence (AI) has made remarkable progress in fields such as image and speech recognition, natural language processing, machine translation, autonomous driving, and intelligent gaming [1][2][3]. Meanwhile, with the development of new technologies such as deep learning and reinforcement learning, the scope and practicality of AI have been further expanded, which has not only driven the upgrading and transformation of traditional industries but also brought many new business opportunities and social benefits [4,5].
The widespread application of the Internet of Things (IoT) has made wireless network security issues increasingly important [6,7]. In wireless networks, the channel is the fundamental medium for data transmission, and its stability and security are crucial for ensuring communication quality [8,9]. However, due to various factors, the channel is often susceptible to interference and intrusion attacks. Channel intrusion detection, an important part of wireless network security, aims to discover and identify malicious behavior in the network in order to protect wireless networks from various threats and attacks. Channel intrusion detection refers to the process of detecting unauthorized access or activity within the communication channels between computing systems, such as a wireless or wired network [10,11]. In many cases, these channels represent critical pathways for transmitting sensitive data, making them attractive targets for attackers seeking to compromise the confidentiality, integrity, and availability of an organization's information. As intrusion techniques continue to evolve, traditional intrusion detection methods are no longer able to meet practical security needs. Therefore, researchers have started exploring new intrusion detection methods based on artificial intelligence, including machine learning, deep learning, and other emerging technologies. In this field, the authors in [12] proposed a novel approach for detecting insider threats in computer systems, where graph-based intelligence techniques were used to analyze the interactions between users and computer resources. In addition, a knowledge-driven approach was proposed for discovering software vulnerabilities and predicting co-exploitation behaviors, in which a hybrid approach combining data-driven techniques with expert knowledge was used to identify potential vulnerabilities and their co-exploitation behaviors [13]. Furthermore, an integrated framework was proposed for predicting the time-to-exploit of vulnerabilities in computer systems, where a dynamic imbalanced learning approach was devised to exploit the evolving nature of the system and the imbalanced distribution of vulnerabilities [14]. These methods can detect and prevent intrusion attacks by analyzing network traffic, identifying abnormal behavior, and performing pattern recognition, among other techniques.
This paper focuses on the design and evaluation of an intrusion detection system for digital power grid networks. The system consists of an edge server and two computational nodes that work collaboratively to detect potential intrusions in the network. The primary objective of the study is to enhance the effectiveness of intrusion detection in the network while considering resource constraints, such as limited transmit power. To achieve this objective, the paper first defines the outage probability of the intrusion detection system, which provides a measure of the probability that the system fails to detect an intrusion when it occurs. The paper then derives a closed-form expression for the outage probability, enabling further analysis of the system behavior. To improve the system's performance, the paper designs a transmit power allocation strategy to optimize the allocation of transmit power across the different nodes of the intrusion detection network. This strategy aims to maximize the likelihood of detecting intrusions while minimizing resource usage. Finally, the paper conducts simulations to evaluate the proposed system's performance and provides evidence of the accuracy of the closed-form expression and the efficacy of the transmit power allocation strategy in detecting intrusions in a resource-constrained network. The simulation results demonstrate the accuracy of the proposed approach and highlight its effectiveness, particularly for digital power grid networks.
The rest of this paper is organized as follows. Sec. 2 describes the system model of intrusion detection for digital power grid networks. Sec. 3 defines the outage probability in the considered network, discusses the system optimization problem, and designs the system resource allocation strategy. Sec. 4 provides simulation results to verify the correctness of the closed-form expression and the effectiveness of our proposed transmit power allocation strategy. Sec. 5 concludes the paper.

System model
Figure 1. System model of intrusion detection with two computational nodes and an edge server for digital power grid networks.
Fig. 1 shows the system model of intrusion detection with two computational nodes and an edge server for digital power grid networks, where there are two computational nodes {N_1, N_2} and one edge server. Specifically, we assume that each computational node is equipped with one antenna for communicating with the edge server through a wireless link. Without loss of generality, the edge server conducts the intrusion detection of the computational nodes based on the signal-to-noise ratio (SNR). In this network, the instantaneous SNR received by the edge server from the computational node N_1 is given by [15]

SNR_1 = P_1 |h_1|^2 / σ^2,   (1)

where h_1 ∼ CN(0, β_1) denotes the channel parameter of the wireless link from the computational node N_1 to the edge server. Moreover, P_1 is the transmit power at the computational node N_1, and σ^2 denotes the variance of the additive white Gaussian noise (AWGN) at the edge server.
In addition, the instantaneous SNR received by the edge server from the computational node N_2 is written as

SNR_2 = P_2 |h_2|^2 / σ^2,   (2)

where h_2 ∼ CN(0, β_2) is the channel gain of the wireless link from the computational node N_2 to the edge server, and P_2 denotes the transmit power at the computational node N_2. In particular, P_1 and P_2 should satisfy the following constraint [16,17],

P_1 + P_2 ≤ P,   (3)

where P denotes the total transmit power of the computational nodes.
Without loss of generality, we assume that the edge server can successfully complete the intrusion detection if and only if the following two constraints are simultaneously satisfied,

SNR_1 ≥ γ_1,   (4)
SNR_2 ≥ γ_2,   (5)

where γ_1 and γ_2 denote the SNR thresholds for the signals received by the edge server from the computational nodes N_1 and N_2, respectively. When SNR_1 ≥ γ_1 and SNR_2 ≥ γ_2 both hold, the edge server successfully detects the intrusion.
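As a concrete illustration of this model (a minimal sketch, not the authors' implementation; the parameter values and σ^2 = 1 are illustrative assumptions), the following Python snippet draws the channel gains and checks the two SNR constraints. Since h ∼ CN(0, β), the channel power |h|^2 is exponentially distributed with mean β.

```python
import numpy as np

rng = np.random.default_rng(0)

def instantaneous_snr(p_tx, beta, sigma2=1.0, rng=rng):
    # h ~ CN(0, beta), so |h|^2 is exponentially distributed with mean beta.
    h2 = rng.exponential(beta)
    return p_tx * h2 / sigma2

def detection_success(p1, p2, beta1, beta2, gamma1, gamma2, sigma2=1.0):
    # Detection succeeds only if BOTH SNR constraints hold simultaneously.
    snr1 = instantaneous_snr(p1, beta1, sigma2)
    snr2 = instantaneous_snr(p2, beta2, sigma2)
    return snr1 >= gamma1 and snr2 >= gamma2
```

With zero thresholds the check trivially passes, which is a quick sanity test of the two-constraint logic.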

Problem formulation and optimization
In this section, we elaborate the system optimization problem of the considered digital power grid networks. Specifically, we first define the system outage probability, which is equal to the probability of failing the intrusion detection, and then derive the corresponding closed-form expression. Furthermore, we aim to improve the system performance by minimizing the system outage probability through optimizing the transmit powers of the computational nodes N_1 and N_2.

From (4) and (5), the outage probability is defined as the probability that SNR_1 and SNR_2 fail to simultaneously reach the associated SNR thresholds γ_1 and γ_2,

P_out = 1 − Pr(SNR_1 ≥ γ_1, SNR_2 ≥ γ_2),   (6)

where SNR_1 < γ_1 and SNR_2 < γ_2 mean that an outage occurs at N_1 and N_2, respectively. Since the two wireless links fade independently, we can further derive from (6) that [18,19]

P_out = 1 − Pr(|h_1|^2 ≥ γ_1 σ^2 / P_1) · Pr(|h_2|^2 ≥ γ_2 σ^2 / P_2).   (7)

Since h_1 ∼ CN(0, β_1), the channel gain |h_1|^2 follows an exponential distribution with probability density function (PDF)

f_{|h_1|^2}(x) = (1/β_1) exp(−x/β_1), x ≥ 0.   (8)

Similarly, according to h_2 ∼ CN(0, β_2), we can also obtain f_{|h_2|^2}(y) as

f_{|h_2|^2}(y) = (1/β_2) exp(−y/β_2), y ≥ 0.   (9)

From (7), (8) and (9), we can further derive P_out as

P_out = 1 − exp(−(γ_1 σ^2)/(β_1 P_1) − (γ_2 σ^2)/(β_2 P_2)).   (10)

In this way, we obtain the closed-form expression of the system outage probability in the considered network. Furthermore, we can improve the system performance by minimizing the outage probability through optimizing the transmit powers of the computational nodes N_1 and N_2, given by

min_{P_1, P_2} P_out
s.t. C_1: 0 < P_1 ≤ P,
     C_2: 0 < P_2 ≤ P,
     C_3: P_1 + P_2 ≤ P,

where C_3 restates the total power constraint (3). Specifically, constraint C_1 indicates that the transmit power of the computational node N_1 should not exceed the total system transmit power. In the following, we describe transmit power allocation schemes to solve this optimization problem.
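The closed-form outage probability derived above can be checked numerically. The Python sketch below (parameter values are illustrative assumptions, with σ^2 = 1) compares the closed form against a Monte Carlo estimate over independent exponential channel gains.

```python
import numpy as np

def outage_closed_form(p1, p2, beta1, beta2, gamma1, gamma2, sigma2=1.0):
    # P_out = 1 - exp(-gamma1*sigma2/(beta1*p1) - gamma2*sigma2/(beta2*p2))
    return 1.0 - np.exp(-gamma1 * sigma2 / (beta1 * p1)
                        - gamma2 * sigma2 / (beta2 * p2))

def outage_monte_carlo(p1, p2, beta1, beta2, gamma1, gamma2,
                       sigma2=1.0, trials=200_000, seed=0):
    # Empirical outage frequency: an outage occurs whenever either
    # SNR constraint is violated.
    rng = np.random.default_rng(seed)
    snr1 = p1 * rng.exponential(beta1, trials) / sigma2
    snr2 = p2 * rng.exponential(beta2, trials) / sigma2
    return np.mean(~((snr1 >= gamma1) & (snr2 >= gamma2)))

# Illustrative check with a uniform split of 15W total power.
ana = outage_closed_form(7.5, 7.5, 0.1, 0.2, 0.1, 0.2)
sim = outage_monte_carlo(7.5, 7.5, 0.1, 0.2, 0.1, 0.2)
```

The two estimates should agree to within Monte Carlo noise, mirroring the analytical-versus-simulated comparison reported in the paper's figures.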
In this section, we propose two transmit power allocation schemes for the digital power grid networks to improve the system performance by minimizing the system outage probability. The details of the two proposed allocation schemes are elaborated in the following.
Considering the fairness of each computational node, we can uniformly allocate the transmit power to the computational nodes, which can be described as

P_1 = P_2 = P/2.

Besides the uniform power allocation above, we can also use the efficient dichotomy method to perform the power allocation between the two nodes. Specifically, the dichotomy method, also known as the bisection method, is a numerical algorithm used to find the root of a continuous function. The method involves repeatedly bisecting an interval and then selecting the subinterval in which a root must lie, based on the signs of the function at the endpoints of the interval. The process is repeated until a root is located with the desired level of accuracy. The dichotomy method is a simple and robust algorithm that can be applied to a wide range of functions. It does not require knowledge of the derivative of the function, making it useful for functions that are difficult or expensive to differentiate. However, the method is relatively slow compared with some other root-finding algorithms, especially when the function has multiple roots or a steep slope near the root. Beyond finding roots, the dichotomy method can also be used to find the maximum or minimum of a unimodal differentiable function on a closed interval: the bisection is applied to the sign of the derivative rather than the sign of the function itself, since the derivative changes sign at the extremum.

Simulation
In this section, we present some experiments for the digital power grid networks to demonstrate the effectiveness of our proposed resource allocation schemes. If not specified, the total transmit power is set to 15W.
Fig. 2 and Table 1 depict the impact of the total transmit power on the system outage probability of digital power grid networks, where β_1 = 0.1, β_2 = 0.2, γ_1 = 0.1, γ_2 = 0.2, and the total transmit power varies from 5W to 25W. For comparison, we provide the result of the "Brute force" scheme, which finds the optimal solution by iterating over all feasible solutions. As observed from this figure, the outage probability decreases as the total transmit power increases. This is because a larger transmit power provides a larger SNR. Moreover, the analytical results of our proposed scheme match well with the simulated ones, which demonstrates the effectiveness of the analytical expression. Furthermore, our proposed scheme achieves the same performance as the "Brute force" scheme, which verifies the validity of our proposed scheme.

From Figs. 3-4 and Tables 2-3, we can observe that the outage probability decreases with increasing values of β_1 and β_2, since the channel quality is improved when β_1 and β_2 are larger. In addition, the analytical results of our proposed scheme fit well with the simulated ones, indicating the correctness of our analytical expression. We can also see that the performance of our proposed scheme equals that of the "Brute force" scheme, which proves the effectiveness of our proposed scheme.
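The monotonic trend in Fig. 2 can be reproduced directly from the closed-form outage probability. The short sketch below (using the stated parameters with uniform power allocation, and assuming σ^2 = 1 since the noise variance is not specified) sweeps the total transmit power from 5W to 25W.

```python
import numpy as np

def outage_uniform(p_total, beta1, beta2, gamma1, gamma2, sigma2=1.0):
    # Closed-form outage probability under uniform allocation P1 = P2 = P/2.
    p1 = p2 = p_total / 2.0
    return 1.0 - np.exp(-gamma1 * sigma2 / (beta1 * p1)
                        - gamma2 * sigma2 / (beta2 * p2))

powers = np.arange(5.0, 26.0, 5.0)   # total power from 5W to 25W, step 5W
curve = [outage_uniform(p, 0.1, 0.2, 0.1, 0.2) for p in powers]
```

The resulting curve decreases strictly with the total transmit power, consistent with the observation that a larger transmit power yields a larger SNR.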

Figs. 3-4 and Tables 2-3 show the impact of the rate parameters of the exponential channel-gain distributions on the outage probability of digital power grid networks.
Figs. 5-6 and Tables 4-5 present the impact of the SNR thresholds on the outage probability of digital power grid networks, where β_1 = 0.1, β_2 = 0.2, and P = 15W. Moreover, we set γ_1 = 0.1 in Fig. 5 and γ_2 = 0.2 in Fig. 6. In particular, Fig. 5 and Fig. 6 correspond to the computational nodes N_1 and N_2, respectively. From Figs. 5-6 and Tables 4-5, we find that the outage probability increases with larger γ_1 and γ_2, as a larger SNR threshold makes the intrusion detection more difficult. Besides, the analytical results of our proposed scheme match well with the simulated ones, which verifies the correctness of our analytical expression. We can also see that our proposed scheme achieves the same performance as the "Brute force" scheme, which further demonstrates its effectiveness.

Conclusion
In this paper, we investigated an intrusion detection system with an edge server and two computational nodes for digital power grid networks, in which the edge server and computational nodes cooperatively detect intrusions. To improve the effectiveness of the intrusion detection, we first defined the outage probability of the considered system and then derived its closed-form expression. As the total transmit power is limited, we further optimized the transmit power allocation strategy to improve the system performance. Finally, simulation results were provided to demonstrate the correctness of the closed-form expression and the effectiveness of our proposed transmit power allocation strategy.

Acknowledgements
This work was supported by the National Key R&D Program of China (2020YFB0906003).

Copyright
Copyright is licensed to EAI.

Figure 2. Outage probability of the considered system versus the total transmit power.

Figure 6. Outage probability of the considered system versus γ_2.

Table 1. Data for Fig. 2.


Table 2. Data for Fig. 3.