
Research on intelligent substation monitoring by image recognition method

  • Weijie Tang and Honggang Chen
Published/Copyright: November 13, 2020

Abstract

This study analyzed an improved three-frame difference algorithm for identifying active targets in the intelligent substation. The improved algorithm introduced the Gaussian mixture background algorithm on the basis of the traditional three-frame difference method. The Gaussian mixture background algorithm, the traditional three-frame difference method, and the improved three-frame difference method were tested in an actual substation. The results showed that the improved method eliminated the non-target background more thoroughly when recognizing moving targets in the image and achieved the highest precision and recall ratios for the active targets in the tested videos. In summary, the improved three-frame difference method can identify active targets in monitoring video more accurately and effectively, providing effective support for the unmanned monitoring of intelligent substations.

1 Introduction

To ensure the stable operation of the power grid, the number and scale of substations keep increasing [1]. As the central node of the distribution network, the substation requires safety monitoring as an important condition for maintaining grid security. The safety monitoring of an intelligent substation covers not only the power equipment but also the site conditions and the working process of the equipment, especially in unattended intelligent substations [2]. The equipment in a substation is of high value; therefore, in addition to monitoring the safe operation of the equipment, it is necessary to prevent damage from illegal intrusion and theft. Unattended intelligent substations are often located in areas with a good natural environment; besides lawbreakers, small animals may also invade the area where important equipment is located, causing damage and affecting substation security. Safety monitoring therefore needs to cover not only the operation of the power equipment but also biological activity in the substation to ensure that no harmful actions occur [3]. The living creatures in the substation cannot be monitored with the sensors of the power equipment alone: by the time the sensors detect that equipment has been damaged, it is too late to give an early warning about intruders or small animals. Therefore, an intelligent substation usually captures images with cameras, uses image recognition technology to identify and monitor the active objects in the video, and then gives a warning. Liu et al. [4] proposed a moving target detection algorithm for rural substations based on multi-domain (time-space-frequency domain) fusion. Simulation results showed that the algorithm had stable performance and strong robustness and kept the target information complete. Yang et al. [5] used visual attention and cloud computing to recognize moving objects in video images; experimental results showed that the method could effectively identify active objects. Gao et al. [6] proposed an algorithm based on an improved ViBe to identify active people in video; simulation results showed that the method could identify moving objects more accurately. The main goal of this study was to improve the accuracy of unmanned monitoring in the intelligent substation and thereby improve its security. To achieve this goal, this paper introduced the Gaussian mixture background algorithm and the frame difference method and combined them into an improved three-frame difference method. Unlike the traditional three-frame difference method, which differences the current frame with the previous and next frames, the improved method differences the current frame with the background model and the next frame. To verify its effectiveness, the improved three-frame difference method was tested on the monitoring video of a substation and compared with the Gaussian mixture background algorithm and the traditional three-frame difference method. The final experimental results showed that the improved method extracted the target contour more effectively when recognizing moving targets in the video, with no target cavities or "ghost shadow".
The highlight of this paper is that the traditional three-frame difference method was improved: the Gaussian background model of the video image was obtained with the Gaussian mixture background algorithm, and the difference was performed between the current frame, the background model, and the next frame to suppress the "ghost shadow" and cavities in the recognized image. The contribution of this article is to provide an effective reference for the accurate identification of active targets in the unmanned monitoring of intelligent substations.

2 Substation monitoring system

The workflow of the video monitoring system for the substation is shown in Figure 1. First, video data are collected by the cameras installed in the substation and saved temporarily, and each newly captured segment is used for the next round of detection. The moving objects in the video are detected with the active target recognition algorithm [7], and the moving regions are segmented. The objects in the segmented moving regions are then recognized to determine whether they are normal working personnel. If an abnormal moving object is found, the corresponding frame is encoded as a picture and transmitted to the database, and an early warning signal is sent out.

Figure 1: The workflow of the monitoring system.
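The loop in Figure 1 can be summarized with a short sketch. The following Python/OpenCV fragment is only a hypothetical illustration of the workflow described above (capture, motion detection, alerting), not code from this paper: detect_moving_regions() uses OpenCV's stock background subtractor as a stand-in for the detection algorithms discussed in Section 3, and the recognition/alarm stage is reduced to saving the frame.

```python
import cv2

def detect_moving_regions(frame, subtractor):
    """Return bounding boxes of moving regions found by a background subtractor."""
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)                  # suppress isolated noise pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]

def monitor(video_path):
    """Simplified loop of Figure 1: capture, detect moving regions, store an alert frame."""
    cap = cv2.VideoCapture(video_path)              # camera installed in the substation
    subtractor = cv2.createBackgroundSubtractorMOG2()
    frame_idx = 0
    while True:
        ok, frame = cap.read()                      # collect and temporarily hold video data
        if not ok:
            break
        regions = detect_moving_regions(frame, subtractor)
        if regions:
            # The recognition stage (staff vs. abnormal object) is omitted here;
            # an abnormal object would be encoded as a picture and reported.
            cv2.imwrite(f"alert_{frame_idx}.jpg", frame)
        frame_idx += 1
    cap.release()
```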

For the substation video monitoring system, the detection and recognition of moving regions in the monitoring video are the most important tasks, especially the detection of the moving regions. The shooting range of the cameras in an unattended substation is fixed, and so is the equipment in the substation. The main monitoring object of the video monitoring system is the illegal intrusion of living creatures. Therefore, the moving objects and the static background in the video need to be separated accurately to identify more reliably whether a living creature is a member of the regular staff [8]. The more accurately the motion detection algorithm segments the active region, the easier the subsequent recognition becomes. Therefore, this study focuses on the detection algorithm for moving objects in video.

3 Image recognition monitoring method

At present, there are various detection methods for moving objects in video, such as the optical flow method [9] and machine learning-based methods. These methods can effectively identify moving objects in video, but their recognition performance differs across application scenarios. In this study, according to the characteristics of the substation scene, the background difference method and the frame difference method were selected.

3.1 Background difference method

The key point of the background subtraction method is to construct an appropriate background model. In this study, the background model is constructed by Gaussian mixture distribution, and the steps are as follows.

  1. First, the first few frames of the video are taken as samples to construct the initial Gaussian mixture background model [10]. In the Gaussian mixture background model, the probability density function of a pixel conforming to the mixture distribution is:

(1)
$$
\begin{cases}
p(x_t)=\displaystyle\sum_{i=1}^{k}\omega_{i,t}\,\eta(x_t,\mu_{i,t},\tau_{i,t})\\[6pt]
\eta(x_t,\mu_{i,t},\tau_{i,t})=\dfrac{1}{(2\pi)^{3/2}\,\lvert\tau_{i,t}\rvert^{1/2}}\exp\!\left(-\dfrac{1}{2}(x_t-\mu_{i,t})^{T}\tau_{i,t}^{-1}(x_t-\mu_{i,t})\right)\\[6pt]
\tau_{i,t}=\delta_{i,t}^{2}I,
\end{cases}
$$

where $p(x_t)$ stands for the image pixel probability density at time t, $x_t$ is the image pixel sample at time t, k is the number of superimposed Gaussian models, $\omega_{i,t}$ is the weight of the i-th Gaussian distribution of the image pixel at time t, $\eta(x_t,\mu_{i,t},\tau_{i,t})$ is the i-th Gaussian distribution of the image pixel at time t, $\mu_{i,t}$ is the mean of the corresponding Gaussian distribution, $\tau_{i,t}$ is the corresponding covariance matrix, $\delta_{i,t}$ is the corresponding variance, and I is the three-dimensional identity matrix for RGB pixels [11].

  2. The pixel values of the newest frame are compared, according to Equation (2), with each of the k Gaussian distribution models established in the previous step. If a pixel is consistent with a model, it is background; otherwise, it belongs to the moving target.

(2)
$$
\lvert x_t-\mu_{i,t}\rvert \le 2.5\,\sigma_{i,t-1}.
$$
  3. The model weights are updated and the matched model parameters are refreshed. The update formulas are as follows:

(3)
$$
\begin{cases}
\omega_{i,t}=(1-\alpha)\,\omega_{i,t-1}+\alpha M_{i,t}\\
\rho=\alpha\,\eta(x_t\mid\mu_i,\sigma_i)\\
\mu_t=(1-\rho)\,\mu_{t-1}+\rho\,x_t\\
\sigma_t^{2}=(1-\rho)\,\sigma_{t-1}^{2}+\rho\,(x_t-\mu_t)^{T}(x_t-\mu_t),
\end{cases}
$$

where $\alpha$ is the learning rate [12], $M_{i,t}$ equals one when the pixel matches the i-th model in the difference step and zero otherwise, $\eta(x_t\mid\mu_i,\sigma_i)$ is the i-th Gaussian distribution of the image pixel at time t, $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the corresponding Gaussian distribution, respectively, and $\rho$ is the update parameter.

The above process is repeated: each new frame is compared with the background model to find the pixels that do not match it, those pixels are taken as pixels of the moving target, and the background model is continuously updated to maintain the difference accuracy.
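As a concrete illustration, the following is a minimal Python/NumPy sketch (not the authors' MATLAB implementation) of one per-pixel step of the procedure above for a single-channel pixel: the 2.5σ match test of Equation (2) and the weight, mean, and variance updates of Equation (3). Component creation, replacement, and weight re-normalization, which a full Gaussian mixture background model also needs, are omitted here.

```python
import numpy as np

def gaussian_density(x, mu, var):
    """One-dimensional Gaussian density eta(x | mu, sigma) used in Equation (3)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def update_pixel_model(x, weights, means, variances, alpha=0.02):
    """One update step of the mixture model for a scalar pixel value x.

    weights, means, variances are length-k arrays (modified in place).
    Returns True if the pixel matches at least one component, i.e. is background.
    """
    matched = np.abs(x - means) <= 2.5 * np.sqrt(variances)      # Equation (2)
    # Equation (3): weight update, with M = 1 for matched components, 0 otherwise
    weights[:] = (1.0 - alpha) * weights + alpha * matched
    for i in np.flatnonzero(matched):
        rho = alpha * gaussian_density(x, means[i], variances[i])
        means[i] = (1.0 - rho) * means[i] + rho * x
        variances[i] = (1.0 - rho) * variances[i] + rho * (x - means[i]) ** 2
    return bool(matched.any())
```

For a whole image this update runs once per pixel; in practice, OpenCV's built-in cv2.createBackgroundSubtractorMOG2 implements a comparable per-pixel mixture model and can be used instead.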

3.2 Frame difference method

3.2.1 Three-frame difference method

The frame difference method obtains the brightness difference between successive frames and determines the pixels of the moving target from this difference [13]. The background in the image is mostly static compared with the moving object and usually changes little between two successive frames. When an object moves quickly through the image, however, its pixel values change noticeably, so the moving region can be found by comparison. When only two successive images are compared, the extracted object is prone to "ghost shadow", and the overlapping part of targets is difficult to detect. Therefore, this study applied the three-frame difference method to make up for these shortcomings.

The processing of video by the three-frame difference method is shown in Figure 2. Grayscale differences are computed between the current frame and the previous frame and between the current frame and the next frame; the two difference images are then binarized and combined with a logical "AND" to obtain the moving object in the video.

Figure 2: The recognition process of moving target by the three-frame difference method in video images.
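As an illustration of Figure 2, the following is a minimal Python/OpenCV sketch of the three-frame difference on grayscale frames. Binarizing each difference image at its own mean gray value follows the experimental setting described later in Section 4.3; everything else is an assumption of this sketch rather than the authors' implementation.

```python
import cv2

def three_frame_difference(prev_gray, curr_gray, next_gray):
    """Classic three-frame difference (Figure 2): difference the current frame
    with its two neighbours, binarize each result, combine with logical AND."""
    d1 = cv2.absdiff(curr_gray, prev_gray)
    d2 = cv2.absdiff(next_gray, curr_gray)
    # threshold each difference image at its own mean gray value (cf. Section 4.3)
    _, b1 = cv2.threshold(d1, float(d1.mean()), 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, float(d2.mean()), 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(b1, b2)
```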

3.2.2 Improved three-frame difference method

Compared with the traditional frame difference method, the three-frame difference method does not only compare two adjacent frames; it differences the current frame with both the previous and next frames, binarizes the results, and superimposes them. Although the three-frame difference method can eliminate the "ghost shadow", a slowly moving target hardly changes in the short time between two frames, so the overlapping part still produces "cavities". To eliminate this phenomenon, this study improved the three-frame difference method: the current frame is no longer differenced with the previous and next frames, but with the background model and the next frame. There are many methods to establish a background model. Because of the large volume of substation monitoring video and the need to identify suspicious personnel in time, a relatively simple mean background modeling [14] was adopted in this study. The flow chart is shown in Figure 3.

  1. First, the video is read frame by frame and converted to grayscale.

  2. k = 1 is set, and the kth frame is selected for moving target recognition. First, the average background is modeled from the first k frames:

(4)
$$
\mathrm{avg}_k=\frac{f_1+f_2+f_3+\cdots+f_k}{k},
$$

where $f_k$ is the grayscale image of the kth frame and $\mathrm{avg}_k$ is the average background model at the kth frame.

  3. Difference images are computed from the kth frame grayscale image, the background model, and the (k+1)-th frame grayscale image, as follows:

(5)
$$
\begin{cases}
M_{k,1}=\lvert f_k-\mathrm{avg}_k\rvert\\
M_{k,2}=\lvert f_{k+1}-\mathrm{avg}_k\rvert,
\end{cases}
$$

where $M_{k,1}$ is the difference image between the kth frame and the background model, and $M_{k,2}$ is the difference image between the (k+1)-th frame and the background model.

  4. The difference images are binarized; the two binary difference images are then combined with a logical "AND", and the combined image is denoised with a mean filter to obtain the moving target recognition result for the kth frame.

  5. Whether k is smaller than m − 1 is determined, where m is the total number of video frames. If it is, then k = k + 1 and the procedure returns to Step (2); if not, the processed video is output.
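The steps above can be condensed into a short sketch. The following Python/NumPy fragment is a minimal illustration (not the authors' MATLAB code, and assuming grayscale uint8 frames): the mean background model of Equation (4) is kept as a running average, the two differences of Equation (5) are binarized at their mean gray values (Section 4.3) and combined with a logical AND, and the result is smoothed with a mean filter.

```python
import cv2
import numpy as np

def improved_three_frame_difference(frames):
    """Steps (1)-(5): mean background model (Eq. (4)), two differences against it
    (Eq. (5)), logical AND of the binarized differences, mean-filter denoising.

    frames: list of grayscale uint8 images; yields one mask per processed frame k.
    """
    running_sum = np.zeros(frames[0].shape, dtype=np.float64)
    m = len(frames)
    for k in range(m - 1):                           # loop while k < m - 1
        running_sum += frames[k]
        avg_k = running_sum / (k + 1)                # Equation (4): mean of f_1..f_k
        m1 = np.abs(frames[k].astype(np.float64) - avg_k)        # Equation (5)
        m2 = np.abs(frames[k + 1].astype(np.float64) - avg_k)
        b1 = (m1 > m1.mean()).astype(np.uint8) * 255  # binarize at the mean gray value
        b2 = (m2 > m2.mean()).astype(np.uint8) * 255
        mask = cv2.bitwise_and(b1, b2)               # logical AND of the two masks
        yield cv2.blur(mask, (3, 3))                 # mean filter for denoising
```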

Figure 3: The flow of video processing after improvement.

4 Experimental analysis

4.1 Experimental environment

In this study, the improved three-frame difference method was simulated in MATLAB [15]. The experiments were conducted on a laboratory server configured with the Windows 7 operating system, an Intel i7 processor, and 16 GB of memory.

4.2 Experimental data

The video used in the simulation experiment came from three surveillance cameras of a substation in Shanghai. The cameras were installed at the entrance and exit of the substation and in the area where important equipment is placed. The video extracted from the three cameras was checked to ensure that enough moving targets, mainly workers entering and leaving, appeared during the captured period. The basic parameters of the videos extracted from the cameras are shown in Table 1.

Table 1:

Basic video parameters for simulation experiment.

Video number | Number of frames | Resolution | Frame rate
No. 1 | 580 | 720*1024 | 30 frames/s
No. 2 | 690 | | 30 frames/s
No. 3 | 560 | | 15 frames/s

4.3 Experimental project

The moving target recognition was carried out on the above three videos using three recognition methods. The learning rate was set as 0.02 when the Gaussian mixture background method updated the background model; the traditional and improved three-frame difference methods needed to binarize the differential image, and the threshold of binarization was the average value of the current differential image.
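The two settings above can be written down directly; the snippet below is a hypothetical Python/OpenCV sketch of how they might be applied, where the learning rate is passed to a stock background subtractor and the binarization threshold is taken as the mean of the difference image.

```python
import cv2
import numpy as np

ALPHA = 0.02                                        # learning rate for the background model
subtractor = cv2.createBackgroundSubtractorMOG2()   # Gaussian mixture background model

def binarize_at_mean(diff_image):
    """Binarize a difference image using its own mean gray value as the threshold."""
    return (diff_image > diff_image.mean()).astype(np.uint8) * 255

# inside the frame loop, the Gaussian mixture method would be updated as:
#   fg_mask = subtractor.apply(gray_frame, learningRate=ALPHA)
```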

In addition to the above simulation experiments, the three moving target detection algorithms were tested in practice. The test was carried out in the substation from which the simulation video was obtained, and the three algorithms were used in the unattended monitoring of the substation for one month. Although the substation is unattended, personnel were dispatched to assist the monitoring so that the effect of the three algorithms could be evaluated. The staff responsible for recording worked in three shifts of 8 h each and recorded the actual number of moving targets entering and leaving the monitored area as well as the number of illegal intrusions.

4.4 Experimental results

The total number of monitoring video frames used in the simulation experiment was large; limited by space, only one frame of the original image and the corresponding results of the three moving target detection algorithms are displayed. As shown in Figure 4, (1) is a frame of the original video, in which two substation staff members are inspecting the equipment, and the background mainly includes substation equipment, roads, sky, and a single-story building; (2) is the result of the Gaussian mixture background model method: most of the background is removed and the basic outlines of the staff are preserved, but the residual contour of the single-story building interferes with the staff, and the staff have only rough contours with cavities in the middle; (3) is the result of the traditional three-frame difference method, which retains only the outline of the staff from the original image, with large cavities in the middle; (4) is the result of the improved method, which highlights the staff and has no cavities.

Figure 4: Moving target detection results of three algorithms after processing videos.

In this study, the performance of the three algorithms in detecting moving targets in the simulation videos was measured by the precision and recall ratios. As shown in Figure 5, the average precision and recall ratios of the Gaussian mixture background model method were 66.3 and 51.0%, respectively; those of the traditional three-frame difference method were 87.6 and 78.5%, respectively; and those of the improved three-frame difference method were 95.3 and 88.9%, respectively.
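Precision and recall are understood here in their standard sense for detection tasks; for reference (the exact ground-truth counting protocol is not specified above), with TP, FP, and FN denoting true positives, false positives, and false negatives, respectively:

$$
\text{precision}=\frac{TP}{TP+FP},\qquad \text{recall}=\frac{TP}{TP+FN}.
$$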

Figure 5: The detection performance of three algorithms for simulation video.

In addition to the simulation experiments, this study applied the three algorithms to the video monitoring of the substation to test their practical effect; the results are shown in Table 2. The actual number of illegal intrusions into the substation in one month, obtained by manual monitoring and recording, was 76: 70 were wild animals that strayed into the substation facilities, four of the remaining six were ordinary people who entered the monitoring range outside the substation after getting lost, and only two were lawbreakers deliberately entering the substation to carry out illegal acts. For the monitoring of illegal intrusion, the number of accurate detections and the number of false positives were 53 and 25, respectively, for the Gaussian mixture background model method, 69 and 9 for the traditional three-frame difference method, and 75 and 2 for the improved three-frame difference method.

Table 2:

The actual monitoring effect of three algorithms in substation.

Method | Accurate times/n | Number of false positives/n
Gaussian mixture background model method | 53 | 25
The traditional three-frame difference method | 69 | 9
The improved three-frame difference method | 75 | 2
Times of illegal invasion/n | 76 |

The above experimental results reflect the recognition effect of the three algorithms for moving targets in an actual intelligent substation. The monitoring records showed that deliberate intrusions by lawbreakers accounted for only a small share of the detected events, while wild animals entering by mistake were the most common. Intelligent substations are often located in remote areas, which in theory makes them easier targets for criminals, but the complex terrain of such areas also makes intrusions by wild animals frequent. To distinguish wild animals from regular staff in the actual substation test, the body sizes of the regular staff were recorded and upper and lower size limits were established; the sizes of small wild animals usually differ from those of the human body. After processing the monitoring image, the three algorithms therefore estimated the body shape of the extracted moving objects, and a record and an alarm were generated whenever the estimate exceeded the limits. Both the shape estimation and the comparison against the limits depend directly on the accuracy of the moving target recognition. The simulation results demonstrated that the contour of the moving target produced by the improved three-frame difference method was the clearest, with the least background interference; therefore, it interfered least with the body size estimation and produced the fewest false alarms.
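The size-based screening described above can be sketched as follows. This Python/OpenCV fragment is a hypothetical illustration only; the limits MIN_AREA and MAX_AREA are placeholders standing in for the staff body-size bounds recorded during the actual test.

```python
import cv2

# Placeholder limits; in the actual test the upper and lower bounds were
# derived from the recorded body sizes of the regular substation staff.
MIN_AREA, MAX_AREA = 5_000, 60_000      # foreground pixel-area limits (assumed)

def size_based_alarm(foreground_mask):
    """Return True if any detected moving object falls outside the staff-size
    limits, i.e. is likely a small animal or another abnormal intruder."""
    contours, _ = cv2.findContours(foreground_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < 100:                  # ignore residual noise blobs
            continue
        if not (MIN_AREA <= area <= MAX_AREA):
            return True
    return False
```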

Among the three moving target recognition algorithms, the traditional three-frame difference method only needs to compare the current frame with the previous and next frames, so its computational load is small. The Gaussian mixture background method must establish a background model and update it continuously during the difference process, which leads to a large computational load. In the improved three-frame difference method, the difference with the next frame is cheap to compute, but the difference with the background model calculated by the Gaussian mixture background method makes the background modeling account for a very large proportion of the computation; therefore, the computational load of the improved three-frame difference method is larger than that of the traditional three-frame difference method.

5 Conclusion

To improve the recognition accuracy of unmanned monitoring in the intelligent substation, this study introduced the Gaussian mixture background method and the three-frame difference method and improved the latter: the background model under the current frame was obtained with the Gaussian mixture background method and used in place of the previous frame in the difference with the current frame. To verify its effectiveness, the improved three-frame difference method was compared with the Gaussian mixture background method and the traditional three-frame difference method in simulation experiments and in an actual substation. The final results are as follows. (1) The simulation results showed that the contour of the moving target produced by the Gaussian mixture background method was fuzzy and had holes, and the background was not eliminated thoroughly; the traditional three-frame difference method removed the background more thoroughly, but the outline of the main active target still contained cavities; the improved three-frame difference method eliminated the background completely, and the contour of the moving target was clear and without cavities. (2) The precision and recall ratios of the improved three-frame difference method were the highest, followed by those of the traditional three-frame difference method and the Gaussian mixture background method. (3) In the actual substation test, the improved three-frame difference method identified the moving targets in the monitoring image more accurately than the Gaussian mixture background method and the traditional three-frame difference method, especially the small animals that intruded by mistake.


Corresponding author: Weijie Tang, State Grid Shanghai Municipal Electric Power Company, No. 1122, Yuanshen Road, Pudong New District, Shanghai 200122, China, E-mail:

  1. Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: None declared.

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

References

1. Zhang, MT, Zhang, WM, Hu, JY, Wang, Y, Yang, G, Li, J. Target image detection algorithm for substation equipment based on HOG feature. In: International conference on mechatronics and intelligent robotics. Cham: Springer; 2017, vol. 691:500–6 pp. https://doi.org/10.1007/978-3-319-70990-1_72.

2. Wang, TZ, Yu, H, Lu, ZM. Research and application toward the atlas variation detection technology of soft sensor in substation's state intelligence analysis system. Eng Technol Res; 2016. https://doi.org/10.12783/dtetr/iceta2016/7051.

3. Zhang, X, Li, H. Detection and recognition of a ground moving target under random dynamic conditions. Microw Opt Technol Lett 2020;62:2463–72. https://doi.org/10.1002/mop.32345.

4. Liu, Y, Cai, Z, Si, Y. Moving object detection algorithm in rural substation based on time-space-frequency-domain. Trans Chin Soc Agric Eng 2018;34:207–14. https://doi.org/10.11975/j.issn.1002-6819.2018.15.026.

5. Yang, J, Wang, X, Zang, X, Dai, ZY. Cloud computing and visual attention based object detection for power substation surveillance robots. In: Canadian conference on electrical and computer engineering 2015; 2015:337–42 pp. https://doi.org/10.1109/CCECE.2015.7129299.

6. Gao, J, Zhu, H. Moving object detection for video surveillance based on improved ViBe. In: Control & decision conference; 2016:6259–63 pp. https://doi.org/10.1109/CCDC.2016.7532124.

7. Li, JX, Zhang, Z, Meng, Y. Research on moving target recognition for vehicle driving robot with remote operation based on OpenCV. IOP Conf Ser Mater Sci Eng 2018;392:062193. https://doi.org/10.1088/1757-899X/392/6/062193.

8. Yuan, S, Liu, X, Zhou, X, Bing, P. Noise reduction from two frame speckle-shifting ghost images with morphology algorithms. J Mod Optic 2019;66:1–8. https://doi.org/10.1080/09500340.2019.1629702.

9. Du, B, Sun, Y, Cai, S, Wu, C, Du, Q. Object tracking in satellite videos by fusing the kernel correlation filter and the three-frame-difference algorithm. Geosci Rem Sens Lett IEEE 2018;15:168–72. https://doi.org/10.1109/LGRS.2017.2776899.

10. Ju, J, Xing, J. Moving object detection based on smoothing three frame difference method fused with RPCA. Multimed Tool Appl 2019;78:29937–51. https://doi.org/10.1007/s11042-018-6710-1.

11. Mo, SW, Deng, XP, Wang, S, Jiang, D, Zhu, ZP. Moving object detection algorithm based on improved visual background extractor. Acta Opt Sin 2016;36:0615001. https://doi.org/10.3788/AOS201636.0615001.

12. Khuhro, MA, Huang, D, Huang, S, Niyigena, P, Oad, A. A modified Gaussian mixture background model for moving object detection. J Comput Theor Nanosci 2017;14:3672–8. https://doi.org/10.1166/jctn.2017.6655.

13. Guo, J, Wang, J, Bai, R, Zhang, Y, Li, Y. A new moving object detection method based on frame-difference and background subtraction. In: IOP conference; 2017, vol. 242. https://doi.org/10.1088/1757-899X/242/1/012115.

14. Zhang, J, Cao, J, Mao, B. Moving object detection based on non-parametric methods and frame difference for traceability video analysis. Procedia Comput Sci 2016;91:995–1000. https://doi.org/10.1016/j.procs.2016.07.132.

15. Cho, J, Jung, Y, Kim, D, Lee, S, Jung, Y. Design of moving object detector based on modified GMM algorithm for UAV collision avoidance. J Semicond Technol Sci 2018;18. https://doi.org/10.5573/JSTS.2018.18.4.491.

Received: 2020-08-26
Accepted: 2020-11-02
Published Online: 2020-11-13

© 2020 Walter de Gruyter GmbH, Berlin/Boston
