
Substation Equipment 3D Identification Based on KNN Classification of Subspace Feature Vector

Published/Copyright: October 25, 2017

Abstract

Aiming to realize rapid and efficient three-dimensional (3D) identification of substation equipment, this article proposes a new method in which the 3D identification of substation equipment is based on K-nearest neighbor (KNN) classification of a subspace feature vector. First, the article uses octree encoding to simplify and denoise the point cloud data obtained by a 3D laser scanner. Secondly, position calibration and size standardization are applied to the pretreated point cloud. Then, the normalized point cloud is divided into a number of cubes of the same size. The cosine of the angle between the positive direction of the z axis and the vector from the global centroid of the point cloud to the centroid of each subspace is taken as the feature of that subspace; the cosines of all subspaces constitute the feature vector of the point cloud. Finally, we classify the subspace feature vector with the KNN algorithm and improve classification accuracy with the particle swarm optimization algorithm. The simulation results show that the identification accuracy of the proposed method for unknown substation equipment is about 90% and that the method is applicable to point clouds with low degrees of loss. This method can therefore accurately identify substation equipment in 3D. At the same time, increasing the number of subspaces improves accuracy but increases recognition time.

1 Introduction

With the progress of image recognition technology, recognition is shifting from two-dimensional (2D) images to three-dimensional (3D) objects. With the rapid development and wide application of the 3D laser scanner, 3D object data can be acquired ever more accurately and efficiently, which is the foundation of 3D object recognition. 3D object recognition obtains the surface sampling points of a device by scanning its surface with a 3D laser scanner. The point cloud formed by these surface sampling points reflects the shape characteristics of the device. Features of the point cloud data are then extracted for classification or matching to obtain a prediction result. The aim of substation equipment 3D identification is to identify the type and model of unknown substation equipment from point cloud data. As a key technology for substation reconstruction, recognition of substation equipment helps reproduce the physical environment faithfully as digital information on a computer. It also accelerates the development and construction of intelligent substations.

3D recognition technology plays an important role in many fields, such as architecture, medicine, and industry, and it receives more and more attention [8]. At present, 3D recognition algorithms can be divided into two categories: 3D recognition based on global features and 3D recognition based on local features. Global features, such as volume and elevation, reflect the overall characteristics of objects, but they do not express local information accurately, which harms recognition when the shape of an object changes greatly. Local features, such as boundary curvature and directional projection contour, are robust to overlap and complex backgrounds [5]. However, the characteristics of an object are not well expressed if the extracted local features are chosen poorly or if the object is partially occluded. Compared with 2D recognition, methods of 3D recognition are more complex. Besl and McKay [1] first proposed a method of recognizing 3D objects by precise registration. The algorithm searches for the matching sample with the highest accuracy as the classification result by iterative matching. Registration and recognition are accurate in this method; however, the registration speed is slow and the recognition efficiency is low for massive data. In order to overcome this problem, Dai et al. [2] presented an improved iterative closest point (ICP) algorithm based on feature points. The algorithm contains initial registration and precise registration: it achieves initial registration by using eigenvectors of the point cloud and uses curvature feature points and a K-D tree to find the nearest point. The efficiency of the ICP algorithm is improved; however, the efficiency is still far from actual demand. Mian et al. presented a novel 3D model-based algorithm in which a 3D model of an object is automatically constructed offline from its multiple unordered range images and similarity calculation is then used for online identification [13]. Smeets et al. presented an algorithm in which the scale invariant feature transform is used for 3D face recognition: salient points on a 3D facial surface are detected as mean curvature extrema in scale space, and the neighborhood of each salient point is described by a feature vector of concatenated histograms of shape indices and slant angles, which improves the robustness of the recognition system [17]. Tao et al. [18] applied neural network methods to the feature recognition of 3D models and analyzed a variety of neural-network-based 3D model feature recognition technologies. Lowe [11] presented a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. Xiao et al. utilized both regional surface properties and a shape rigidity constraint to align a partial object surface with its corresponding complete surface; the efficiency of this new method for 3D partial-surface registration was demonstrated by experiment [19]. In order to extract characteristic images from a complex environment, Li et al. [9] put forward a method of establishing a classification decision tree to realize gesture recognition. Guo et al. [3] presented a hierarchical 3D object recognition algorithm that rotationally projects the neighboring points of a feature point onto 2D planes.

This article proposes a new method in which substation equipment 3D identification is based on K-nearest neighbor (KNN) classification of a subspace feature vector. The subspace feature vector is formed from the angle features of the subspaces obtained by dividing the point cloud into many same-size subspaces; this feature vector is used to classify and recognize substation equipment. To select a subspace size appropriate to actual classification requirements, this article further studies the effect of different subspace sizes on classification accuracy. Compared with the improved ICP classification method [21], the experimental results show that the KNN algorithm classifies faster, and the particle swarm optimization (PSO) algorithm significantly improves classification accuracy.

2 Structure of Total Identification System

Point cloud data scanned by a scanner is preprocessed to obtain appropriate experimental data, including point cloud simplifying and denoising. The octree coding method used for simplifying and denoising can retain the characteristics of the point cloud completely and reduce the number of point cloud data significantly. Because features extracted from point cloud data are related to position, spatial direction, and size ratio, point cloud data need to be calibrated and standardized after pretreatment.

After the above treatments, the subspace features of the point cloud can be extracted. Feature extraction includes two steps. The first step divides the point cloud into subspaces in sequence: the minimum bounding box of the point cloud is divided into same-size cubes as subspaces. The second step extracts each subspace feature, namely the cosine of the angle between the positive direction of the z axis and the vector from the global centroid of the point cloud to the centroid of that subspace. The feature vector is composed of the features extracted from all subspaces, so its length is decided by the number of subspaces. With known template data, the feature vector of unknown substation equipment is classified by the KNN algorithm to produce a prediction result. To enhance the effect of helpful subspaces and weaken the effect of unhelpful ones, the PSO algorithm is used to optimize the weight of each subspace, which significantly improves the final classification accuracy. The structure of the total identification system is shown in Figure 1.

Figure 1: Structure of Total Identification System.

2.1 Pretreatment

The octree coding method is used for pretreatment, which includes point cloud simplifying and denoising. The principle of octree coding is as follows: the minimum bounding box of the point cloud is divided into small cubes of the same size, and each small cube is in turn divided into same-size subcubes. This recursive division continues until the minimal cube size is reached; the usual termination condition is that the side length of the minimal cube reaches a specified dot pitch. Each point in the point cloud has a certain index value. After binary conversion of the index value, each point can be encoded by combining the corresponding coefficients produced by the binary conversion. Points in the same minimal cube have the same code [20].

Simplification processes the points of each minimal cube in turn: in each minimal cube, the point nearest the central point is retained and the other points are deleted. Then, the simplified point cloud is divided into many minimal cubes again. The number of points of the point cloud in one minimal cube is usually dozens or hundreds, whereas noise points in one minimal cube are few. Therefore, if the number of points in a minimal cube is less than a threshold we set, we consider the points in that minimal cube noise points; the goal of denoising is to remove them. Occasionally, some removed points are not noise points, because their count in the minimal cube also falls below the threshold, just as for noise points. In this case, the removed points do not affect the characteristics of the overall point cloud, because there are few enough of them to ignore. The process of simplifying and denoising consists of the following steps:

  1. Determine the number of the layers of octree division n, according to the specified dot pitch d0.

  2. Code each point p(x, y, z) in the point cloud.

    Convert x, y, z into index values i, j, k by the following formula:

    (1) i = [(x − xmin)/d0], j = [(y − ymin)/d0], k = [(z − zmin)/d0],

    where xmin, ymin, zmin are the minimum coordinate values on the x, y, z axes, respectively. The index values can be expressed in binary as follows:

    (2) i = i_0·2^0 + i_1·2^1 + … + i_m·2^m + … + i_{n−1}·2^{n−1},
        j = j_0·2^0 + j_1·2^1 + … + j_m·2^m + … + j_{n−1}·2^{n−1},
        k = k_0·2^0 + k_1·2^1 + … + k_m·2^m + … + k_{n−1}·2^{n−1},

    where i_m, j_m, k_m ∈ {0, 1} and m ∈ {0, 1, …, n−1}. p(x, y, z) is coded as Q = q_{n−1} … q_m … q_1 q_0, where q_m = i_m + j_m·2^1 + k_m·2^2.

  3. The same coding values are stored in the same minimal cube sorted by coding values.

  4. When the data cloud is simplified, the nearest point to the central point is retained and the other points are deleted in each minimal cube. When the data cloud is denoised, if the number of points in each minimal cube is less than the threshold set by us, we consider the points in this minimal cube noise points.
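The steps above can be sketched in Python (the article's simulations use MATLAB; this is a minimal numpy sketch in which the flat (i, j, k) cube index plays the role of the octree code Q, since points with equal indices share a minimal cube, and the dot pitch and noise threshold are assumed example values):

```python
import numpy as np

def simplify_and_denoise(points, d0=0.05, noise_threshold=3):
    """Voxel-style simplification and denoising in the spirit of the
    octree coding above. points: (N, 3) array; returns an (M, 3) array."""
    mins = points.min(axis=0)
    # Index values (i, j, k) of the minimal cube of each point, as in
    # Formula (1); points with the same index share a minimal cube.
    idx = np.floor((points - mins) / d0).astype(int)
    cubes = {}
    for p, key in zip(points, map(tuple, idx)):
        cubes.setdefault(key, []).append(p)
    kept = []
    for key, pts in cubes.items():
        pts = np.asarray(pts)
        # Denoising: a sparsely populated minimal cube is treated as noise.
        if len(pts) < noise_threshold:
            continue
        # Simplifying: retain only the point nearest the cube's central point.
        centre = mins + (np.array(key) + 0.5) * d0
        kept.append(pts[np.argmin(np.linalg.norm(pts - centre, axis=1))])
    return np.array(kept)
```

A dense cluster collapses to one representative point per minimal cube, while isolated points below the threshold are dropped.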

2.2 Calibration and Standardization

Because the features extracted from point cloud data are related to position, spatial direction, and size ratio, the point cloud data need to be calibrated and standardized after pretreating. The method of principal component analysis (PCA) is used to calibrate the position of point cloud. The principle of PCA is as follows: three eigenvectors corresponding to three maximum eigenvalues are selected as the axis of a new coordinate system. Point cloud data are converted to the new coordinate system by transforming coordinate, so as to realize position calibration [4, 12, 15]. Position calibration consists of the following steps:

  1. Calculate the covariance matrix C of point cloud data P(X, Y, Z), as follows, where X, Y, Z are the column vectors of the three dimensions of the point cloud data:

    (3) C = ( cov(X,X)  cov(X,Y)  cov(X,Z)
              cov(Y,X)  cov(Y,Y)  cov(Y,Z)
              cov(Z,X)  cov(Z,Y)  cov(Z,Z) ),

    (4) cov(A,B) = Σ_{i=1}^{n} (A_i − Ā)(B_i − B̄) / (n − 1).

    In Formula (4), A̅ is the mean value of A and B̅ is the mean value of B.

  2. Calculate the eigenvalues and eigenvectors of the covariance matrix C. Eigenvectors V1, V2, V3 correspond to the three eigenvalues sorted from largest to smallest. The rotation matrix S=(V3, V2, V1) is formed from V1, V2, V3.

  3. Point cloud data P(x, y, z) is converted to the new coordinate system by transforming coordinate P1=P·S to get calibrated point cloud data P1(x, y, z).

In order to facilitate the division of point cloud, we need to scale the point cloud to a bounding box with a given size before extracting features from subspaces. In this article, the bounding box of the point cloud is uniformly scaled to a cuboid whose values of length dx, width dy, and height dz are 3, 3, and 6, respectively. Scaled point data P1(x′, y′, z′) are

(5) x′ = x·dx/(xmax − xmin), y′ = y·dy/(ymax − ymin), z′ = z·dz/(zmax − zmin).
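The PCA calibration steps and the scaling of Formula (5) can be sketched together (a numpy sketch, not the article's MATLAB implementation; the eigenvector ordering follows the rotation matrix S = (V3, V2, V1) defined above):

```python
import numpy as np

def calibrate_and_standardize(points, dx=3.0, dy=3.0, dz=6.0):
    """PCA position calibration followed by the size standardization of
    Formula (5). points: (N, 3) array; returns the calibrated, scaled array."""
    # Covariance matrix C of the coordinate columns, Formulas (3)/(4).
    C = np.cov(points, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # V1, V2, V3: largest to smallest
    V1, V2, V3 = eigvecs[:, order].T
    S = np.column_stack([V3, V2, V1])        # rotation matrix S = (V3, V2, V1)
    p1 = points @ S                          # P1 = P * S
    # Scale the bounding box to dx x dy x dz per axis, Formula (5).
    spans = p1.max(axis=0) - p1.min(axis=0)
    return p1 * np.array([dx, dy, dz]) / spans
```

With S = (V3, V2, V1), the direction of largest variance is mapped to the z axis, matching the tall 3 × 3 × 6 target box.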

2.3 Feature Extraction of Subspace

Feature extraction of subspaces requires the point cloud to be divided first. As Figure 2 shows, the bounding box of the point cloud is divided into same-size cubes, which are called subspaces X1, X2, …, Xn (n is the number of subspaces). In this article, the side length of a subspace is 0.5. For the 3 × 3 × 6 bounding box, the x axis is divided into six units, the y axis into six units, and the z axis into 12 units; thus, the whole bounding box is divided into 432 cube subspaces. In the simulation experiments of this article, the effect of the selected side length on recognition accuracy is studied further.

Figure 2: Division of Subspaces.

The feature of each subspace Ti is extracted after subspace division. The feature extracted is the cosine of the angle between the positive direction of z axis and a vector from the global centroid of the point cloud to the centroid of each subspace, as shown in Figure 3. The extracted feature of each subspace can be obtained as follows:

(6) Ti = cos θ = L·Li / (|L||Li|),

where L is a reference vector parallel to the z axis and Li is the vector from the global centroid M(X, Y, Z) of the point cloud to the centroid Mi(x, y, z) of each subspace. The whole feature vector of the point cloud, T=(T1, T2, …, Tn), is thus formed from the features of all subspaces.
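The feature extraction of Formula (6) can be sketched as follows (a numpy sketch; assigning feature 0 to empty subspaces is an assumption, consistent with the zeros visible in Table 1):

```python
import numpy as np

def subspace_features(points, side=0.5, dx=3.0, dy=3.0, dz=6.0):
    """Angle feature T_i of each subspace, Formula (6): the cosine of the
    angle between the +z axis and the vector from the global centroid to
    the subspace centroid. Empty subspaces get feature 0.

    points: (N, 3) array already scaled into the dx x dy x dz box.
    Returns a 1-D feature vector of length (dx/side)*(dy/side)*(dz/side)."""
    nx, ny, nz = int(dx / side), int(dy / side), int(dz / side)
    mins = points.min(axis=0)
    idx = np.minimum(np.floor((points - mins) / side).astype(int),
                     [nx - 1, ny - 1, nz - 1])    # clamp boundary points
    M = points.mean(axis=0)                       # global centroid
    T = np.zeros(nx * ny * nz)
    flat = idx[:, 0] * ny * nz + idx[:, 1] * nz + idx[:, 2]
    for code in np.unique(flat):
        Mi = points[flat == code].mean(axis=0)    # subspace centroid
        Li = Mi - M                               # vector centroid -> subspace
        norm = np.linalg.norm(Li)
        if norm > 0:
            T[code] = Li[2] / norm                # cos(theta) with +z axis
    return T
```

For side 0.5 this yields the 432-dimensional feature vector described above; since L is a unit vector along +z, L·Li/(|L||Li|) reduces to the z component of Li divided by its norm.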

Figure 3: Angle Feature of a Subspace.

After subspace division, whether a subspace contains points is itself a characteristic. By comparing the corresponding subspaces of different devices, we obtain an obvious characteristic difference between devices. Besides, the height of the centroid differs between devices because of differences in shape. For subspaces that contain points, the extracted feature reflects the position of the points in the subspace relative to the centroid of the overall point cloud.

Five devices are randomly selected from the known template data to illustrate the characteristic differences of different devices in Figure 4. From left to right, they are kV500_CB_B, kV500_CP_B, kV500_DS_1B, kV500_LA_E, and kV500_LT_D, respectively. Because the feature vector is long, only parts of the feature vectors of the five devices are shown in Table 1 to illustrate the characteristic differences directly. It can be seen that the characteristics of the different devices are markedly different; thus, the characteristic differences can be used as a basis for identification.

Figure 4: Five Devices Shown in Software QTReader.

Table 1:

Feature Vectors of Five Devices.

Subspaces     X128      X129      X130      X131      X132      X133      X134      X135   X136   X137
kV500_CB_B    0         0         −0.1829   −0.2932   −0.3512   0         0         0      0      0
kV500_CP_B    −0.3720   −0.2899   −0.2285   −0.1936   −0.1627   −0.2852   −0.2957   0      0      0
kV500_DS_1B   0         0         0         0.0819    0.0476    0         0         0      0      −0.4006
kV500_LA_E    0         0         −0.3840   −0.2828   −0.1847   0         0         0      0      0
kV500_LT_D    0         −0.3627   −0.3619   −0.3410   −0.2878   0         0         0      0      0

2.4 KNN Algorithm

The KNN algorithm classifies by measuring the distance between feature vectors [6, 10, 14, 16]. It addresses the problem of matching one test object against many training objects at the same time. The basic principle of KNN is as follows: if most of the K most similar samples (the K nearest samples) to a test sample in feature space belong to one category, the test sample belongs to that category. The neighbors are template samples whose categories are known in advance, i.e. objects that have already been correctly classified; the category of the test sample is determined from only the one or several nearest samples.

Given the training samples and their labels, the characteristics of a test sample are compared with the training set to find the K most similar samples; the category assigned to the test sample is the majority category among these K nearest samples. The specific steps of the algorithm are as follows:

  1. Calculate the distance d between the test sample data T and the training samples data MT by

    (7) d(T, MT) = √( Σ_{i=1}^{n} (Ti − MTi)² ).
  2. According to the sequence of distance increasing, K points with minimum distances are selected. The value of K is usually determined by cross validation. Generally, the empirical value of K is lower than the square root of the number of training samples. In this article, the value of K is chosen as 3.

  3. Find the category of the most samples of the K nearest samples.

  4. The category of the most samples is the classification result of the test sample data.
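The four steps above can be sketched as follows (a Python sketch using the Euclidean distance of Formula (7) and the article's choice K = 3):

```python
import numpy as np
from collections import Counter

def knn_classify(T, templates, labels, k=3):
    """KNN classification of a subspace feature vector, following
    steps 1-4 above.

    T: feature vector of the test device.
    templates: (M, n) array of template feature vectors MT.
    labels: list of M category labels."""
    d = np.linalg.norm(templates - T, axis=1)    # distances, Formula (7)
    nearest = np.argsort(d)[:k]                  # K smallest distances
    votes = Counter(labels[i] for i in nearest)  # count categories
    return votes.most_common(1)[0][0]            # majority category
```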

2.5 Optimization of Subspace Feature Weights

The PSO algorithm searches for an optimal solution by iterating from an initial set of particles, constantly updating the position and velocity of each particle and following the best particle in the space [7]. The usual termination condition is that a given number of iterations has run or a certain accuracy has been reached. The PSO algorithm has the advantages of fast convergence and easy implementation.

In this article, the PSO algorithm is used to optimize the weights of the subspace features. Subspace features that play a helpful role in identification are enhanced by adjusting the weight of each subspace feature, which improves classification accuracy. When PSO is used to optimize the feature weights, the classification error rate is defined as the fitness function of the optimization algorithm, and the position of a particle represents the weights of the subspace features. The specific process is as follows:

  1. Initialize the positions and velocities of particles.

  2. Update the positions and velocities of particles by

    (8) Vi(t+1) = w·Vi(t) + c1·r1·(pbest(t) − Xi(t)) + c2·r2·(gbest(t) − Xi(t)),
    (9) Xi(t+1) = Xi(t) + Vi(t+1),
    (10) w = wmax − t·(wmax − wmin)/Tmax.

    The values of learning factors c1, c2 are both 1.5 in this article. pbest(t) is the individual extreme or the best position of the individual that each particle learns from its own experience and searches for in its flight. Similarly, gbest(t) is the global extreme or the best position of the whole group that each particle learns from the experience of the group and searches for in flight history. The best position represents a point where the minimum value of fitness function is obtained.

    Calculate the fitness function, as follows:

    (11) Fitness = 1 − m/N.

    The function represents the classification error rate, where parameter N is the total number of test sample data and parameter m is the number of correct classifications.

  3. Update the individual extreme and global extreme.

  4. If the specified number of iterations has been reached, the iteration stops and the parameters are output; otherwise, steps 2–4 are repeated.
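A minimal sketch of this weight optimization, following Formulas (8)–(11) (the swarm size, iteration count, inertia bounds wmax, wmin, and initialization range are assumed example values; only c1 = c2 = 1.5 comes from the article):

```python
import numpy as np

def pso_optimize_weights(fitness, n_dims, n_particles=20, t_max=50,
                         c1=1.5, c2=1.5, w_max=0.9, w_min=0.4, seed=0):
    """PSO over subspace feature weights. fitness maps a weight vector to
    a classification error rate, Formula (11). Returns (weights, error)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 2.0, (n_particles, n_dims))   # positions = weights
    V = np.zeros((n_particles, n_dims))
    pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for t in range(t_max):
        w = w_max - t * (w_max - w_min) / t_max        # inertia, Formula (10)
        r1, r2 = rng.random((2, n_particles, n_dims))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Formula (8)
        X = X + V                                      # Formula (9)
        f = np.array([fitness(x) for x in X])
        better = f < pbest_f                           # update individual extremes
        pbest[better], pbest_f[better] = X[better], f[better]
        g = np.argmin(pbest_f)                         # update global extreme
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

In the article the fitness would be the KNN error rate over all test data with the candidate weights applied to the subspace features; any error function of a weight vector can stand in for it here.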

3 Implementation Process

The implementation process mainly includes two parts, as shown in Figure 5. The first part extracts the feature vectors of the known template data MT and of the test point cloud data T and obtains the fitness function by using the KNN algorithm. To classify accurately, we must make sure that the known template data and the test point cloud data are processed in the same way: data pretreatment, position calibration, and size standardization are performed equally on both. Thereafter, features are extracted to obtain the feature set of the template data and the feature vector of the point cloud of a test device. The distance between the feature vector of the test device and that of each template device in the feature set measures the gap between the test device and each template device. Then, the K devices with the smallest gaps are selected; among these K devices, the category of the most samples is the final classification result.

Figure 5: Implementation Process.

The second part updates the position and velocity of the particles and calculates the fitness function and extremes to optimize classification. Based on the prediction of the first part, the classification error rate over all test point cloud data is regarded as the fitness function for optimizing the subspace feature weights. Because the PSO algorithm finds the minimum of a fitness function, we use PSO to reach the minimum classification error rate over all test cloud data. The classification error rate is the ratio of the number of wrong classifications to the number of test point cloud samples. The optimized weights are the particle position at which the classification error rate is lowest; the highest classification accuracy is obtained by searching for this minimum.

4 Simulation Results and Analysis

The experimental data, marked as kV500, were provided by Henan Tenglong Information Engineering Company Limited in May 2016, and include 54 known template data and 90 test point cloud data. The experimental data were obtained by scanning substation equipment with a 3D laser scanner, and had been segmented and simply pretreated. As shown in Table 2, the data cover nine types of substation equipment, such as circuit breaker, transformer, trap, and so on, with several models for each type. For example, circuit breaker (CB) includes four models (CB_A, CB_B, CB_BB, and CB_C); the suffixes (A, B, BB, and C) are assigned in the process of segmentation. In this article, we use MATLAB R2014a software for simulation.

Table 2:

Types and Models of Substation Equipment.

CB: kV500_CB_A, kV500_CB_B, kV500_CB_BB, kV500_CB_C
CP: kV500_CP_A, kV500_CP_B, kV500_CP_C, kV500_CP_D, kV500_CP_DD, kV500_CP_DDD, kV500_CP_E, kV500_CP_F, kV500_CP_G, kV500_CP_H, kV500_CP_I
ES: kV500_ES_A
LA: kV500_LA_A, kV500_LA_B, kV500_LA_C, kV500_LA_D, kV500_LA_E, kV500_LA_F, kV500_LA_G
PT: kV500_LA_A, kV500_LA_B, kV500_LA_C, kV500_LA_CC, kV500_LA_D, kV500_LA_E, kV500_LA_F
DS: kV500_DS_1A, kV500_DS_1AA, kV500_DS_1B, kV500_DS_1BB, kV500_DS_1C, kV500_DS_1D, kV500_DS_2A, kV500_DS_2B, kV500_DS_3A, kV500_DS_3B, kV500_DS_3C, kV500_DS_3CC, kV500_DS_3D, kV500_DS_3DD, kV500_DS_3EE
CT: kV500_CT_A, kV500_CT_AA, kV500_CT_B
MT: kV500_MT_A, kV500_MT_B
LT: kV500_LT_A, kV500_LT_B, kV500_LT_C, kV500_LT_D

A simulation experiment is carried out with the above method. The optimization process of PSO on the subspace weights is shown in Figure 6. The classification accuracy is 78.89% before optimization and 87.78% after optimization. It can be seen that using the PSO algorithm to improve classification accuracy is effective: we obtain more accurate classification results after optimization. It can also be seen that the PSO algorithm responds rapidly and converges quickly in the optimization process.

Figure 6: Result of the Process of PSO.

When not optimized, the feature weights of all subspaces are 1. The feature weights of all subspaces after optimization are shown in Figure 7. The abscissa is the subspace code, representing the sequence of the 432 subspaces; the ordinate is the value of the subspace weight, with each point in Figure 7 giving the weight of one subspace. It is obvious that the weights differ considerably between subspaces, which shows that subspaces contribute differently to improving classification accuracy. Therefore, it is necessary to optimize the subspace feature weights.

Figure 7: Result of Subspace Feature Weights.

In order to prove the validity and accuracy of denoising, 90 test point cloud data before denoising and after denoising are recognized, respectively. The effect of denoising on recognition accuracy is shown in Table 3. It can be seen that denoising is helpful to improve recognition accuracy.

Table 3:

Effect of Denoising on Recognition Accuracy.

Before denoisingAfter denoising
Recognition accuracy85.56%87.78%

Figure 8 is the point cloud of a test device before denoising compared with after denoising. Point cloud before denoising is on the left, and the point cloud after denoising is on the right. As shown in the figure, noise points around the point cloud have been removed. The point cloud becomes clearer.

Figure 8: Comparison of Point Cloud Before and After Denoising.

The above results are experimental results when the side length of subspace is 0.5. The following discussion is about the effect of different side lengths of subspace on classification. In order to facilitate subspace division, the point cloud data are scaled firstly. Consequently, when the side length of the subspace is different, the number of subspaces is different. When the side length of subspace is 1, 0.8, 0.6, 0.5, and 0.4, respectively, the results of classification are shown in Table 4.

Table 4:

Classification Accuracy Comparison with Different Subspace Sizes.

Side length            1          0.8        0.6        0.5         0.4
Number of subspaces    54         128        250        432         960
Before optimization
 Accuracy              68.89%     78.89%     78.89%     78.89%      85.56%
 Time                  0.022 s    0.024 s    0.025 s    0.027 s     0.033 s
After optimization
 Accuracy              81.11%     84.44%     84.44%     87.78%      91.11%
 Time                  64.820 s   74.101 s   91.878 s   122.950 s   168.187 s

As shown in Table 4, the number of subspaces grows rapidly as the side length of the subspace decreases (roughly with the cube of its reciprocal). Decreasing the side length is, on the whole, advantageous for classification and recognition: the point cloud is divided in more detail, so its local information is compared with the template data more specifically. If the side length of the subspace is reduced further, recognition accuracy may improve further; in the limit of a sufficiently small side length, the method amounts to extracting a feature for each point of the point cloud. A smaller subspace thus yields higher classification accuracy; however, the larger number of subspaces increases the dimension of the feature vector, which reduces computation speed.
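The subspace counts in Table 4 follow from splitting each axis of the 3 × 3 × 6 bounding box into ceil(extent/side) units; a quick check:

```python
import math

def subspace_count(side, dx=3, dy=3, dz=6):
    """Number of subspaces for a given side length: each axis of the
    3 x 3 x 6 bounding box is split into ceil(extent / side) units.
    Rounding to 9 decimals first guards against floating-point quotients
    such as 3/0.4 = 7.4999999999999996."""
    def units(extent):
        return math.ceil(round(extent / side, 9))
    return units(dx) * units(dy) * units(dz)

# Reproduces the "Number of subspaces" row of Table 4:
for s in (1, 0.8, 0.6, 0.5, 0.4):
    print(s, subspace_count(s))  # e.g. subspace_count(0.5) -> 6*6*12 = 432
```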

In the field, it is difficult to collect complete point cloud data of all parts of a device; thus, point clouds generally have different degrees of loss. The influence of the degree of point cloud data loss on recognition accuracy is briefly discussed in the following. Because there is almost no loss in our test data of substation equipment, we remove some data artificially. From left to right, Figure 9 shows the complete point cloud of a test device and the point cloud with 10%, 20%, and 30% losses, respectively. The complete point cloud and the point clouds with 10% and 20% losses can be identified; however, the point cloud with 30% losses cannot. According to the experiment, this point cloud can be identified with up to about 20% losses. The recognition test with different degrees of losses was performed on all test data; the average allowed loss is about 13%. To ensure the current identification accuracy, the degree of point cloud loss should stay below the minimum of all allowed losses, namely within 10%. Thus, as the proposed method is applicable only to low degrees of loss, it needs to be improved; the identification of point clouds with high degrees of loss is our next research topic.

Figure 9: Point Cloud of a Test Device with Different Degrees of Losses.

In order to prove the effectiveness of the method proposed in this article, its recognition performance is compared with an improved ICP algorithm [21], one of the most effective methods at present, whose projection profile is used to reduce recognition time while keeping a high accuracy. In this experiment, the side length of the subspace is 0.4. The 90 devices used in the preceding paragraphs are still used as test data, covering 47 device models. The two methods are compared in Table 5. The improved ICP algorithm is better than the method proposed in this article in terms of recognition accuracy; however, its average recognition time per device is much longer. It can be seen that the method proposed in this article has satisfactory recognition accuracy and, at the same time, better recognition efficiency.

Table 5:

Comparison of Two Methods on Recognition Effect.

Device    Improved ICP algorithm            Method proposed in this article
          Recognized   Time (s)             Recognized   Time (s)
CB_A_1    Yes          81.94                Yes          0.14
CB_B_2    Yes          58.76                Yes          0.15
CP_D_1    Yes          135.27               No           0.14
CT_A_3    Yes          135.71               Yes          0.20
PT_A_3    Yes          85.71                Yes          0.21
LT_C_1    Yes          126.93               Yes          0.19
Total     Accuracy 98.8%, average 85.91 s   Accuracy 91.1%, average 0.19 s

5 Conclusions

This article proposes a new method in which substation equipment 3D identification is based on KNN classification of a subspace feature vector. The advantage of the KNN algorithm is that it is fast; its disadvantage is that it is not accurate enough on its own. The classification accuracy improves greatly after the PSO algorithm is used to optimize the weight of each subspace. The subspace feature vector proposed in this article is based on dividing the point cloud into same-size subspaces and extracting the angle feature of each subspace. The size of the subspace has a great influence on the classification result: the classification accuracy can be improved by dividing subspaces in more detail, at the cost of classification time. In general, we decide the size of the subspace according to the classification accuracy and time required by the application. The comparison test shows that the method proposed in this article has good recognition accuracy and, at the same time, more satisfactory recognition efficiency.

Acknowledgments

This work was supported by the Science and Technology Key Project of Henan province (grant no. 152102210036), the Young Teacher Foundation of Henan province (grant no. 2015GGJS-148), and the Industry-University-Research Collaboration Project of Henan province (grant no. 152107000058).

Bibliography

[1] P. J. Besl and N. D. Mckay, A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Intell. 14 (1992), 239–256. doi:10.1109/34.121791

[2] J. Dai, Z. Chen and X. Ye, The application of ICP algorithm in point cloud alignment, J. Image Graph. 12 (2007), 517–521.

[3] Y. Guo, F. Sohel, M. Bennamoun, M. Lu and J. Wan, Rotational projection statistics for 3D local surface description and object recognition, Int. J. Comput. Vis. 105 (2013), 63–86. doi:10.1007/s11263-013-0627-y

[4] X. Hu and J. Wang, Similarity analysis of three-dimensional point cloud based on eigenvector of subspace, Infrar. Laser Eng. 4 (2014), 1316–1321.

[5] P. Jia, N. Xu and Y. Zhang, Automatic target recognition based on local feature extraction, Opt. Precis. Eng. 21 (2013), 1898–1905. doi:10.3788/OPE.20132107.1898

[6] S. Jiang, G. Pang, M. Wu and L. Kuang, An improved K-nearest-neighbor algorithm for text categorization, Expert Syst. Appl. 39 (2012), 1503–1509. doi:10.1016/j.eswa.2011.08.040

[7] J. Kennedy and R. Eberhart, Particle swarm optimization, in: IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995. doi:10.1109/ICNN.1995.488968

[8] Q. Li, M. Zhou and J. Liu, A review on 3D objects recognition, J. Image Graph. 5 (2000), 985–993.

[9] R. Li, C. Cao and L. Wang, Hand posture recognition using depth image and appearance feature, J. Huazhong Univ. Sci. Technolog. (Nat. Sci. Ed.) 39 (2011), 88–91.

[10] X. B. Lin, T. S. Qiu, F. Morain-Nicolier and S. Ruan, A topology preserving non-rigid registration algorithm with integration shape knowledge to segment brain subcortical structures from MRI images, Pattern Recognit. 43 (2010), 2418–2427. doi:10.1016/j.patcog.2010.01.012

[11] D. G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2004), 91–110. doi:10.1023/B:VISI.0000029664.99615.94

[12] Z. Lv, J. Wu and Y. Gong, Improvement of a three-dimensional coordination transformation model adapted to big rotation angle based on quaternion, Geom. Inf. Sci. Wuhan Univ. 4 (2016), 547–553.

[13] A. S. Mian, M. Bennamoun and R. Owens, Three-dimensional model-based object recognition and segmentation in cluttered scenes, IEEE Trans. Pattern Anal. Mach. Intell. 28 (2006), 1584–1601. doi:10.1109/TPAMI.2006.213

[14] A. Miloud-Aouidate and A. R. Baba-Ali, An efficient ant colony instance selection algorithm for KNN classification, Int. J. Geotech. Earthq. Eng. 4 (2013), 47–64. doi:10.4018/ijamc.2013070104

[15] K. Picos, V. H. Diaz-Ramirez, V. Kober, A. S. Montemayor and J. J. Pantrigo, Accurate three-dimensional pose recognition from monocular images using template matched filtering, Opt. Eng. 55 (2016), 1–11. doi:10.1117/1.OE.55.6.063102

[16] J. Sankaranarayanan, H. Samet and A. Varshney, A fast all nearest neighbor algorithm for applications involving large point-clouds, Comput. Graph. 31 (2007), 157–174. doi:10.1016/j.cag.2006.11.011

[17] D. Smeets, J. Keustermans, D. Vandermeulen and P. Suetens, MeshSIFT: local surface features for 3D face recognition under expression variations and partial data, Comput. Vis. Image Underst. 117 (2013), 158–169. doi:10.1016/j.cviu.2012.10.002

[18] P. Tao, B. Zhang and Z. Ye, Neural network method in 3D model feature recognition, Comput. Integra. Manuf. Sys. 8 (2002), 912–918.

[19] G. Xiao, S. H. Ong and K. W. C. Foong, Efficient partial-surface registration for 3D objects, Comput. Vis. Image Underst. 98 (2005), 271–294. doi:10.1016/j.cviu.2004.10.001

[20] Q. Xie and X. Xie, Point cloud data reduction methods of octree-based coding and neighborhood search, in: Proceedings of IEEE Conference on Electronic & Mechanical Engineering and Information Technology, vol. 7, pp. 3800–3803, Harbin, Heilongjiang, 2011. doi:10.1109/EMEIT.2011.6023069

[21] G. Yulan, M. Lu, Z. Tan and J. Wan, Fast target recognition in Ladar using projection contour features, Chin. J. Lasers 39 (2012), 200–205. doi:10.3788/CJL201239.0209003

Received: 2017-06-06
Published Online: 2017-10-25

©2019 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
