Article Open Access

Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures

  • Quanbao Li, Fajie Wei and Shenghan Zhou
Published/Copyright: May 5, 2017

Abstract

Linear discriminant analysis (LDA) is one of the most popular methods of linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used feature extraction approaches usually require linearity, independence, or large-sample conditions. In real-world applications, however, these assumptions are not always satisfied or cannot be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA combines the advantages of parametric and nonparametric algorithms and achieves higher classification accuracy. The quartic unilateral kernel function may provide better prediction robustness than the other kernels. LKNDA offers an alternative solution for discriminant cases with complex nonlinear feature extraction or unknown features. Finally, an application of LKNDA to the complex feature extraction of financial market activities is proposed.

1 Introduction

Linear discriminant analysis (LDA) generally refers to Fisher’s discriminant analysis (1936), an excellent method for dimensionality reduction and classification using a projection algorithm [1]. LDA is widely used in pattern recognition, business intelligence and genome sciences for its good classification accuracy and high computational efficiency. Numerous extensions of LDA have been proposed in recent decades. These extensions serve to, for example, address the small sample size (SSS) problem, handle large incremental samples, relax the normality assumption, and extract nonlinear and nonparametric features.

Currently, a prevalent extension for the nonlinear problem is kernel discriminant analysis (KDA) [2–4]. KDA first maps low-dimensional data into a high-dimensional space and subsequently projects the high-dimensional data back onto a low-dimensional one. It is able to recognize certain simple nonlinear relationships. However, for complex nonlinear structures, KDA is not as effective as nonparametric methods with a local classifier, such as the k-nearest neighbor (KNN) method. Nonparametric discriminant analysis (NDA) relaxes the normality assumption of traditional LDA [5]. NDA provides a unified view of the parametric nearest-mean reclassification algorithm and the nonparametric valley-seeking algorithm. Diaf combined NDA and KDA to introduce a non-parametric Fisher’s discriminant analysis with kernels [6]. Weighted LDA is commonly used to handle unbalanced samples [7]. Nearest neighbor discriminant analysis (NNDA) can be regarded as an extension of NDA using a new between-class scatter matrix [8]. The above discriminant analyses are parametric and nonparametric methods with a global classifier, which identify nonlinear features to varying degrees. Fan proposed a parametric discriminant analysis with a local classifier in 2011, named local linear discriminant analysis (LLDA), which handles complex nonlinear structures well [9]. For each testing sample, LLDA first extracts the k-nearest subset from the entire training set and then classifies it by LDA; the k-nearest subsets are determined by Euclidean distance. Shi and Hu (2012) presented an LLDA utilizing a composite kernel derived from a combination of local linear models with interpolation [10]. Li et al. proposed NDA with kernels and verified the feasibility of the algorithm on 3D model classification [11]. Zeng (2014) proposed weighted marginal NDA to efficiently utilize the marginal information of the sample distribution [12].
NDA has also been extended to a semi-supervised dimensionality reduction technique that combines the discriminating power of the NDA method with the locality-preserving power of manifold learning [13]. Du (2013) embedded sparse representation in NDA for face recognition [14]. Adaptive slow feature discriminant analysis is an attractive biologically inspired learning method that extracts discriminant features for classifying time series [15]. Fast incremental LDA feature extraction is derived by optimizing the step size in each iteration using steepest descent and conjugate direction methods [16].

In this paper, we generalize LLDA to local kernel nonparametric discriminant analysis (LKNDA), a nonparametric discriminant analysis with a local classifier. LKNDA performs more accurately and robustly than LLDA in almost all cases. LKNDA improves conventional discriminant analysis with inspiration from nonparametric statistics: it weights the samples within each local subset and modifies the kernel function of nonparametric statistics into a unilateral kernel function. LKNDA also defines a generalized nearest neighbor function, which will be useful in real cases of further study. LKNDA relaxes the normality assumption and performs well on nonlinear or nonparametric problems. Compared with the KNN method, LKNDA has the same time complexity and higher accuracy at class margins. The second half of this paper proposes an application to the complex feature extraction of financial market activities.

2 Methodology

2.1 Overview of the conventional NDA

NDA, developed by Fukunaga and Mantock (1983), is a nonparametric extension of Fisher’s LDA [5]. Discriminant analysis is designed to identify the optimal projection direction that maximizes the ratio of between-class scatter to within-class scatter. NDA introduces a weighting function to emphasize boundary information. Thus, it can address non-normal data distributions by incorporating data direction and boundary structure.

To maximize the objective function of NDA:

J(w) = (w^T S_B w) / (w^T S_W w).    (1)

w is a projection matrix. Sw is the within-class scatter matrix and SB is the between-class scatter matrix. Two scatter matrices are defined as follows [5, 6]:

S_W = (1/N) Σ_{i=1}^{C} Σ_{l=1}^{N_i} (x_l^i − μ_i)(x_l^i − μ_i)^T    (2)
S_B = (1/N) Σ_{i=1}^{C} Σ_{l=1}^{N_i} Σ_{j=1, j≠i}^{C} ω(i, j, l) (x_l^i − m_j(x_l^i))(x_l^i − m_j(x_l^i))^T    (3)

where

m_j(x_l^i) = (1/k) Σ_{p=1}^{k} nn(x_l^i, j, p)    (4)
ω(i, j, l) = min{d^α(x_l^i, nn(x_l^i, i, k)), d^α(x_l^i, nn(x_l^i, j, k))} / [d^α(x_l^i, nn(x_l^i, i, k)) + d^α(x_l^i, nn(x_l^i, j, k))].    (5)

μ_i is the mean vector of class i. m_j(x_l^i) represents the mean vector of the k nearest neighbors from class j to the vector x_l^i ∈ X_i. nn(x_l^i, j, k) is the k-th nearest neighbor from class j to the sample x_l^i ∈ X_i. d(x_l^i, nn(x_l^i, i, k)) is the Euclidean distance from x_l^i to its k-th nearest neighbor from class i. ω(i, j, l) is a weighting function that de-emphasizes the effect of samples with large magnitudes, which lie far from the decision boundary.
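To make the definitions above concrete, the following NumPy sketch computes S_W (Eq. 2) and S_B (Eq. 3) with the weight ω of Eq. (5). It is an illustrative implementation, not the authors' code; the function names and the small epsilon guarding the weight denominator are our own additions.

```python
import numpy as np

def nda_scatter(X, y, k=3, alpha=1.0):
    """Illustrative NDA scatter matrices S_W (Eq. 2) and S_B (Eq. 3).

    X: (N, D) samples; y: (N,) integer class labels.
    """
    N, D = X.shape
    classes = np.unique(y)
    Sw = np.zeros((D, D))
    Sb = np.zeros((D, D))

    def knn_mean_and_dist(x, Xc, kk):
        # local k-NN mean m_j (Eq. 4) and distance to the k-th neighbor
        d = np.linalg.norm(Xc - x, axis=1)
        idx = np.argsort(d)[:kk]
        return Xc[idx].mean(axis=0), d[idx[-1]]

    for i in classes:
        Xi = X[y == i]
        mu_i = Xi.mean(axis=0)
        for x in Xi:
            diff = (x - mu_i)[:, None]
            Sw += diff @ diff.T                       # within-class term
            # within-class k-NN distance for omega; k+1 skips x itself
            _, d_ii = knn_mean_and_dist(x, Xi, min(k + 1, len(Xi)))
            for j in classes:
                if j == i:
                    continue
                Xj = X[y == j]
                m_j, d_ij = knn_mean_and_dist(x, Xj, min(k, len(Xj)))
                # weight omega of Eq. (5); epsilon avoids division by zero
                w = min(d_ii**alpha, d_ij**alpha) / (d_ii**alpha + d_ij**alpha + 1e-12)
                db = (x - m_j)[:, None]
                Sb += w * (db @ db.T)                 # between-class term
    return Sw / N, Sb / N
```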

2.2 Local kernel nonparametric discriminant analysis

Local kernel nonparametric discriminant analysis is a nonparametric method for solving complex nonlinear problems. For each testing sample, LKNDA fetches a local subset from the entire training set via the nearest neighbor function. LKNDA also weights this subset: the more similar a training sample is to the sample being estimated, the larger its weight.

(1) Local subset extraction

To discriminate each sample x_E of the testing set, we first extract a nearest neighbor subset with bandwidth K. nn(x_E, k) is the nearest neighbor function; it returns the k-th nearest neighbor of x_E in the training set under a given similarity measure.

The generalized similarity calculation model satisfies the following conditions:

  1. Non-negativity: for all i and j, 0 ≤ s(i, j) ≤ 1; in particular, s(i, i) = 1.

  2. Symmetry: for all i and j, s(i, j) = s(j, i).

There are many similarity measures, each suited to specific conditions. Euclidean distance is the most common spatial distance for R^n; cosine similarity is another spatial measure; and the Jaccard coefficient measures attribute similarity. In applying LKNDA, the choice of distance formula should reflect the data characteristics and the research purpose.
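As an illustration of the three measures just mentioned, here are minimal NumPy versions. The function names and the distance-to-similarity mapping 1/(1 + d) for the Euclidean case are our own choices, not prescribed by the paper; note that cosine similarity is bounded in [0, 1] only for vectors with non-negative components.

```python
import numpy as np

def euclidean_similarity(a, b):
    # map Euclidean distance d to a similarity in (0, 1]; 1/(1+d) is an assumption
    return 1.0 / (1.0 + np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def cosine_similarity(a, b):
    # cosine of the angle between a and b
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def jaccard_similarity(a, b):
    # |intersection| / |union| for binary attribute vectors
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return float((a & b).sum() / max((a | b).sum(), 1))
```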

Parameter K can be manually specified, or the optimal value can be determined by cross-validation. One can also follow the selection principle of Fan’s LLDA [9]: for a large sample, let K be 5%–10% of the sample size; for a small sample, let K be 10%–20%.

NN denotes the ordered subset of the top K nearest neighbor samples. Within NN, the number of classes is C, the sample size is K, and the sample size of class i is N_i. Thus,

NN = {nn(x_E, k) | k = 1, …, K}    (6)
K = Σ_{i=1}^{C} N_i    (7)

(2) Local subset prediction

The maximization objective function of NDA is:

J(w) = (w^T S_B w) / (w^T S_W w)    (8)

with within-class scatter matrix and between-class scatter matrix

S_W = (1/K) Σ_{i=1}^{C} Σ_{l=1}^{N_i} (x_l^i − μ_i)(x_l^i − μ_i)^T    (9)
S_B = (1/K) Σ_{i=1}^{C} Σ_{l=1}^{N_i} Σ_{j=1, j≠i}^{C} K((r(x_l^i) − 1)/K) (x_l^i − μ_j)(x_l^i − μ_j)^T    (10)

where μ_i is the mean vector of class i in the local ordered subset NN, and w ∈ R^{D×d} is the projection matrix, the parameter of J(w). D is the pre-projection dimension, equal to the number of attributes in LKNDA, while d is the post-projection dimension, which satisfies d ≤ C − 1. nn(x_l^i, j, k) is the k-th nearest neighbor from class j to the sample x_l^i ∈ X_i, and r(x_l^i) is the rank of x_l^i in NN. K(·) is the unilateral kernel function discussed in the next subsection; for x ∈ [0, 1), K(x) > 0.

J(w) is invariant under any scalar multiple of w. To solve the optimization problem, we therefore add a constraint fixing the denominator to 1:

max:  J(w) = (w^T S_B w) / (w^T S_W w)    (11)
s.t.:  w^T S_W w = 1.    (12)

Using Lagrangian method,

c(w) = w^T S_B w − λ(w^T S_W w − 1)    (13)
∂c/∂w = 2 S_B w − 2λ S_W w = 0    (14)
S_B w = λ S_W w.    (15)

As a consequence, maximizing J(w) is equivalent to obtaining a projection matrix w_opt whose columns are the eigenvectors corresponding to the largest eigenvalues of the eigenequation S_B w = λ S_W w.
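A minimal NumPy sketch of this eigen-solution follows; the small ridge term added to S_W is our assumption for numerical stability on tiny local subsets, not part of the paper's derivation.

```python
import numpy as np

def projection_matrix(Sb, Sw, d, ridge=1e-8):
    """Solve S_B w = lambda S_W w (Eq. 15) and return the d eigenvectors
    with the largest eigenvalues as the columns of w_opt."""
    Sw = Sw + ridge * np.eye(Sw.shape[0])            # keep S_W invertible
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))  # eig of Sw^{-1} Sb
    order = np.argsort(vals.real)[::-1]              # descending eigenvalues
    return vecs[:, order[:d]].real
```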

2.3 Unilateral kernel function

Nonlinear sciences have enormous potential for applied mathematics [17]. There are two different definitions of kernel function: in machine learning, a kernel maps a sample set into a high-dimensional space; in nonparametric statistics, a kernel is a weighting function for nonparametric estimation. LKNDA allows neighbor samples to differ in importance: the greater the distance between x_E and a sample in NN, the lower the reliability of the information it provides. Thus, we modify the kernel function of nonparametric statistics into a unilateral kernel function, which expresses these reliability differences. Table 1 shows six types of unilateral kernel functions, which are drawn in Figure 1.

Figure 1: Six types of unilateral kernel functions

Table 1

Six types of unilateral kernel function

Kernel type     Function
Uniform         K(u) = 1{0 ≤ u ≤ 1}
Triangular      K(u) = 2(1 − u) · 1{0 ≤ u ≤ 1}
Epanechnikov    K(u) = (3/2)(1 − u²) · 1{0 ≤ u ≤ 1}
Quartic         K(u) = (15/8)(1 − u²)² · 1{0 ≤ u ≤ 1}
Gaussian        K(u) = √(2/π) e^(−u²/2) · 1{0 ≤ u ≤ 1}
Cosine          K(u) = (π/2) cos(πu/2) · 1{0 ≤ u ≤ 1}

An ordered subset of neighbors contains K samples. According to the unilateral kernel function in Table 1, the unnormalized weight of the i-th training sample is gi and the normalized weight is hi.

g_i = K((i − 1)/K)    (16)
h_i = K((i − 1)/K) / Σ_{j=1}^{K} K((j − 1)/K)    (17)

Figure 1 shows the shapes of the different unilateral kernel functions. For 0 ≤ u_1 < u_2 ≤ 1, K(u_1) ≥ K(u_2). The kernel weighting thus guarantees that, of any two samples, the more similar one receives a weight at least as large as the less similar one. The accuracy of LKNDA is usually affected by the neighbor size K; nevertheless, the unilateral kernel weighting makes LKNDA more robust, as demonstrated in Section 3.2.
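The six kernels of Table 1 and the weights of Eqs. (16)–(17) can be sketched as follows; this is illustrative code, not the authors' implementation, and the dictionary-based organization is our own.

```python
import numpy as np

# The six unilateral kernels of Table 1, evaluated on u in [0, 1).
KERNELS = {
    "uniform":      lambda u: np.ones_like(u),
    "triangular":   lambda u: 2 * (1 - u),
    "epanechnikov": lambda u: 1.5 * (1 - u**2),
    "quartic":      lambda u: 15 / 8 * (1 - u**2) ** 2,
    "gaussian":     lambda u: np.sqrt(2 / np.pi) * np.exp(-(u**2) / 2),
    "cosine":       lambda u: np.pi / 2 * np.cos(np.pi * u / 2),
}

def neighbor_weights(K, kernel="quartic"):
    """Normalized weights h_i (Eq. 17) for an ordered K-neighbor subset."""
    u = np.arange(K) / K            # (i - 1) / K for i = 1..K
    g = KERNELS[kernel](u)          # unnormalized weights g_i, Eq. (16)
    return g / g.sum()
```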

If the nearest neighbor function is Euclidean distance, the kernel type is uniform, the bandwidth K is a fixed percentage of the total sample, and NDA is replaced by LDA in the local subset computation, then LKNDA degenerates into LLDA.

2.4 Pseudo-code of LKNDA

1:  input data: xTest, xTrain, bandwidth K, kernel function K(·)
2:  𝓚 ← {K((i − 1)/K) | i = 1, …, K}
3:  for each x_E in xTest do
4:      NN ← {nn(x_E, i) | i = 1, …, K}
5:      if all samples in NN have the same class C_i then
6:          assign x_E to class C_i
7:      else
8:          calculate S_W and S_B using NN and 𝓚
9:          w ← the d eigenvectors of S_W^{−1} S_B corresponding to its d largest eigenvalues
10:         assign x_E to the nearest class of w^T x in NN
11:     end if
12: end for
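Under assumptions already noted (Euclidean neighbors, the quartic kernel by default, and a small ridge term added for numerical stability), the pseudo-code above can be sketched in Python roughly as follows. This is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def lknda_predict(X_train, y_train, X_test, K=10, d=1, kernel=None):
    """Illustrative LKNDA sketch following the pseudo-code (lines noted)."""
    if kernel is None:
        kernel = lambda u: 15 / 8 * (1 - u**2) ** 2      # quartic kernel
    weights = kernel(np.arange(K) / K)                   # line 2: K((i-1)/K)
    preds = []
    for xe in X_test:
        # lines 3-4: ordered K-nearest-neighbor subset NN (Euclidean)
        order = np.argsort(np.linalg.norm(X_train - xe, axis=1))[:K]
        NN, yNN = X_train[order], y_train[order]
        classes = np.unique(yNN)
        if len(classes) == 1:                            # lines 5-6: pruning
            preds.append(classes[0])
            continue
        # line 8: weighted local scatter matrices (Eqs. 9-10)
        D = X_train.shape[1]
        Sw, Sb = np.zeros((D, D)), np.zeros((D, D))
        mus = {c: NN[yNN == c].mean(axis=0) for c in classes}
        for r, (x, c) in enumerate(zip(NN, yNN)):
            dw = (x - mus[c])[:, None]
            Sw += dw @ dw.T
            for c2 in classes:
                if c2 == c:
                    continue
                db = (x - mus[c2])[:, None]
                Sb += weights[r] * (db @ db.T)
        Sw += 1e-6 * np.eye(D)                           # stability ridge (assumption)
        # line 9: top-d eigenvectors of Sw^{-1} Sb
        vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
        w = vecs[:, np.argsort(vals.real)[::-1][:d]].real
        # line 10: nearest projected class mean
        z = xe @ w
        preds.append(min(classes, key=lambda c: np.linalg.norm(z - mus[c] @ w)))
    return np.array(preds)
```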

2.5 Time complexity

LDA is a highly efficient classification algorithm; its time complexity is O(d³ + nd² + md), where n is the sample size of the whole training set, m is the size of the testing set, and d is the dimension after projection. Usually d is very small and m < n, so the time complexity of LDA can be expressed as O(n).

The time complexity of LKNDA is O(mkn + t(d³ + kd² + d)), where n and m are the numbers of observations in the training and testing sets respectively, the local subset has k samples, and t is the number of discriminant analyses performed after the pruning step. d is the dimension after projection. Usually d and k are notably small and t < m < n, so the time complexity of LKNDA can be expressed as O(mn), the same as KNN.

3 Comparison

In this section, we assess LKNDA in two ways. First, we compare the accuracy of six methods on two- and three-dimensional composite data. Second, we compare parameters across different combinations of bandwidth and unilateral kernel type.

3.1 Classification methods comparison

In this paper, simulations of composite data are used to evaluate the methods. Two- and three-dimensional data can visually reflect the characteristics of the data. In the following discussion, we apply six data generating processes, shown in Figure 2: simple small samples, mildly hybrid and unbalanced simple triangle samples, multi-cluster samples, the Taiji diagram, superimposed curve samples, and 3D spiral samples. To avoid the particularity of any single dataset, we repeated each data generating process 100 times to obtain 100 datasets. For each dataset, we performed 10 rounds of random sampling, each extracting 1/3 of the observations as the testing sample and the rest as the training sample. In this way, we obtained 1000 pairs of testing and training samples for each data generating process.
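The paper does not specify the exact generating processes, so as one hedged example, a 3D spiral resembling Figure 2(f) could be generated like this; all constants (radius growth, pitch, noise level) are our assumptions.

```python
import numpy as np

def spiral_3d(n=300, classes=3, noise=0.05, seed=0):
    """Illustrative three-class 3D spiral generator (constants assumed)."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0, 4 * np.pi, n)            # position along the spiral
    c = rng.integers(0, classes, n)             # class label
    phase = 2 * np.pi * c / classes             # angular offset per class
    X = np.c_[np.cos(t + phase) * t,            # growing-radius helix
              np.sin(t + phase) * t,
              t]
    return X + noise * rng.normal(size=X.shape), c
```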

Figure 2: Six data generating processes for composite data

This section computes the accuracy of six classification models: the naive Bayesian classifier (NBC), the C5.0 decision tree classifier (C5.0), k-nearest neighbor (KNN), linear discriminant analysis (LDA), the support vector machine (SVM), and local kernel nonparametric discriminant analysis (LKNDA). To ensure a credible comparison, we use the same 1000 pairs of testing and training data for each classification model. The mean and standard deviation of prediction accuracy are listed in Table 2.

Table 2

Prediction performance of six kinds of composite data

                                     NBC     C5.0    KNN     LDA     SVM     LKNDA
(a) simple small samples     mean    0.8756  0.8921  0.9496  0.9952  0.9833  0.9902
                             s.d.    0.0836  0.0808  0.0550  0.0157  0.0306  0.0235
(b) unbalanced triangle      mean    0.9223  0.9156  0.9180  0.9234  0.9228  0.9153
    samples                  s.d.    0.0143  0.0134  0.0134  0.0139  0.0130  0.0138
(c) multi-cluster samples    mean    0.6537  0.9959  0.9999  0.3899  0.9999  1.0000
                             s.d.    0.0906  0.0043  0.0006  0.1429  0.0005  0.0003
(d) the Taiji diagram        mean    0.7797  0.9283  0.9537  0.8048  0.8805  0.9716
                             s.d.    0.0255  0.0248  0.0123  0.0257  0.0211  0.0107
(e) superimposed curve       mean    0.5041  0.5146  0.8788  0.5044  0.6083  0.9635
    samples                  s.d.    0.0368  0.0733  0.0190  0.0350  0.0464  0.0115
(f) 3D spiral samples        mean    0.3959  0.8605  0.5623  0.3956  0.5359  0.9429
                             s.d.    0.0291  0.1644  0.0304  0.0288  0.0500  0.0181

Figure 2(a) shows simple small samples with linear characteristics and presents a simple classification problem. LDA, SVM and LKNDA achieve nearly 100% accuracy of classification. KNN and C5.0 are inefficient for small sample classification and not ideal for this case. NBC strictly depends on the independence assumption and thus reaches a poor result.

Figure 2(b) shows mildly hybrid and unbalanced simple triangle samples. The data have a single structure, obviously linear boundaries, and adequate samples; thus, all six methods predict with approximately 92% accuracy. By inference, for simple classification problems with adequate samples, the methods show no significant difference.

Figure 2(c) shows multi-cluster samples. The structure is intuitively clear, with obvious category boundaries. C5.0, KNN, SVM and LKNDA classify with accuracy close to 100%. NBC and LDA perform poorly in this case. NBC is based on the marginal probability distribution and the independence assumption; the green and blue samples in Figure 2(c) have the same marginal probability distribution, which makes NBC unable to distinguish them. LDA is a projection method based on class means and variances; the three classes have the same mean, causing LDA to fail.

Figure 2(d) shows the Taiji diagram. It is an identification problem with complex nonlinear structure and clear edge margins. LKNDA performs best, then KNN, followed by C5.0. These three nonparametric methods have high accuracy. LDA, SVM and NBC do not perform well because they can only explain a linear or simple nonlinear classification situation.

Figure 2(e) shows superimposed curve samples. It is an identification problem with complex nonlinear structure and linear regularity. LKNDA performs best. KNN often misclassifies the points near the intersection point. Other methods fail in this case.

Figure 2(f) shows 3D spiral samples. It is a three-class identification problem with complex nonlinear structure and clear edge margins. LKNDA performs best, then C5.0. Other methods are invalid.

According to the above analysis, the conditions suiting the six classifiers can be summarized as in Table 3. NBC requires a large sample size and an independence assumption; it cannot handle dependence relationships, whether linear or nonlinear. C5.0 is a decision tree classifier based on information entropy. It requires a large sample size, and its dividing surfaces are limited to the x or y direction, which leads to classification errors. KNN is a competent nonparametric classification algorithm capable of solving complex nonlinear problems given an adequate sample size; however, it cannot capture the regularity of points near the class interface. LDA is a linear projection classifier; it performs well with small samples but is invalid in nonlinear environments. SVM solves simple nonlinear problems by establishing a classification hyperplane but remains insufficient for complex nonlinear problems. LKNDA, proposed in this paper, strives to absorb the advantages of both nonparametric and parametric methods: it takes the nonparametric advantage in solving complex nonlinear problems while drawing on the parametric benefit of pattern recognition with a small sample size.

Table 3

Conditions of classification algorithm

                     NBC   C5.0  KNN   LDA   SVM   LKNDA
Small sample size    ×     ×     ×     ✓     ✓     ✓
Non-independence     ×     ✓     ✓     ✓     ✓     ✓
Simple nonlinear     ×     ✓     ✓     ×     ✓     ✓
Complex nonlinear    ×     ×     ✓     ×     ×     ✓
Overall pattern      ✓     ×     ×     ✓     ✓     ✓
Local pattern        ×     ×     ×     ×     ×     ✓

In small-sample classification tasks, if the data are linear, LDA can be used properly. If the data are simple nonlinear, SVM works well. If the data are complex nonlinear with adequate samples, KNN is often applied. If KNN behaves poorly, or if further prediction accuracy is desired, one can turn to LKNDA.

3.2 Parameter comparison of LKNDA

There are two parameters in LKNDA: bandwidth K and kernel type. In this section, we conduct two classification tasks using the Taiji diagram (Figure 2(d)) and the superimposed curve samples (Figure 2(e)). In each task, we let K = 5, 10, 15, 20, 30, 40 for each of the six unilateral kernel functions in Table 1. Table 4 summarizes the simulation predictions for the different parameter combinations of LKNDA. We run 1000 tests for each combination to obtain the mean forecast accuracy; all standard deviations are approximately 0.15 and are not listed in the table. Different parameter values lead to different results, as shown in Table 4.

Table 4

Mean of simulation accuracy for different bandwidth and unilateral kernel functions

                              K = 5   K = 10  K = 15  K = 20  K = 30  K = 40
Taiji diagram
  Uniform                     0.9657  0.9716  0.9691  0.9650  0.9537  0.9350
  Triangular                  0.9660  0.9735  0.9736  0.9728  0.9671  0.9565
  Epanechnikov                0.9663  0.9728  0.9723  0.9707  0.9635  0.9501
  Quartic                     0.9654  0.9735  0.9734  0.9726  0.9672  0.9569
  Gaussian                    0.9660  0.9723  0.9708  0.9674  0.9577  0.9406
  Cosine                      0.9663  0.9729  0.9726  0.9711  0.9642  0.9515
Superimposed curve samples
  Uniform                     0.9530  0.9635  0.9652  0.9612  0.9370  0.9054
  Triangular                  0.9526  0.9639  0.9668  0.9671  0.9599  0.9486
  Epanechnikov                0.9529  0.9644  0.9666  0.9660  0.9552  0.9376
  Quartic                     0.9523  0.9638  0.9671  0.9674  0.9613  0.9523
  Gaussian                    0.9531  0.9640  0.9657  0.9635  0.9444  0.9165
  Cosine                      0.9529  0.9643  0.9665  0.9664  0.9566  0.9407

Training data, 333 testing data, 1000 times simulation

In Figures 3(a) and 3(b), the x-coordinate is the parameter K and the y-coordinate is the mean simulation accuracy over 1000 runs. In each graph, the six colored lines represent the six types of unilateral kernel functions. Each line in Figure 3 displays a downward parabolic shape, which means there is an optimal K_opt: the more K deviates from K_opt, the faster the accuracy declines. Both panels show that the kernel functions vary in prediction accuracy. The quartic and triangular kernels perform best; the cosine and Epanechnikov kernels are moderately good; the Gaussian and uniform kernels are less effective. Considering K and kernel type together, we conclude that K mainly determines prediction accuracy and kernel type mainly determines the robustness of prediction. With a suitable unilateral kernel function, the sensitivity of accuracy to K decreases: the robustness of prediction increases markedly while the accuracy increases slightly.

Figure 3: Line chart of mean simulation accuracy for different bandwidths and unilateral kernel functions

4 An application for financial timing signal mixer

4.1 General framework of timing system

Market timing is a strategy of deciding when to buy or sell a financial asset by trying to predict future market price movements. The key to market timing is identifying the market trend, a perceived tendency of financial markets to move in a particular direction over time. Quantitative timing traders always attempt to identify market trends using technical indicators. There are dozens of common technical indicators and many more developed by financial institutions. Technical indicators differ in their algorithms, types, trading signals and asset scopes. The effectiveness of an indicator varies as the market environment changes; indicators that perform well in-sample often fail in out-of-sample prediction.

In practice, a quantitative team usually needs to carry out single-indicator optimization, single-indicator filtering and timing signal mixing to construct the timing trading system. In Figure 4, the single-indicator optimization stage selects robust optimal parameters in the feasible parameter region according to preset rules. The single-indicator filtering stage screens usable technical indicators via the in-sample test, out-of-sample test and extrapolation test. The timing signal mixing stage generates integrated trading signals through the integration of multiple indicators.

Figure 4: General flow chart of the timing system

There are many ways to create a timing signal mixer. The most commonly used methods are the equal voting system and general linear weighting; more complex methods include mixed-integer genetic algorithms [18, 19] and neural networks [20–22]. This section only discusses the application of the LKNDA method to construct a timing signal mixer and the classification prediction of LKNDA in the mixer construction process. Index selection, parameter optimization, the timing system and the mixer’s effect are not discussed further.

4.2 An application for timing signal mixer

The CSI 300 is an influential stock market index covering the Shanghai and Shenzhen stock exchanges, with a wealth of related financial products. In the following study, CSI 300 data are used. The feature (class label) is calculated from the yield of the next five days. The independent variables, serving as signal sources, are two market state indicators and six timing indicators. LKNDA is used to build the mixer.

  1. Feature:

    Market trend: F_T = 1 if FRET_T > 1%;  −1 if FRET_T < −1%;  0 otherwise    (18)

    where T is the calculation day and FRET_T = (CLOSE_{T+5} − CLOSE_{T+1}) / CLOSE_{T+1} is the yield of the next five days.

  2. Market state indicators:

    Relative position:

    S_{1,T} = (CLOSE_T − min{LOW_i}) / (max{HIGH_i} − min{LOW_i}),  i ∈ [T − 119, T]    (19)

    Oscillation intensity:

    S_{2,T} = Σ_i |CLOSE_i − CLOSE_{i−1}| / (max{HIGH_i} − min{LOW_i}),  i ∈ [T − 59, T]    (20)
  3. Technical indicators:

Six technical indicators I_{1,T}, I_{2,T}, …, I_{6,T} are listed in Table 5.
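As a sketch of Eqs. (18)–(20) on daily OHLC arrays: the −1/0 label codes and the summation inside the oscillation intensity are our reconstructions of partially garbled formulas, and the function names are illustrative.

```python
import numpy as np

def market_trend_label(close, T):
    """Eq. 18: class label from the next five days' yield.
    The -1/0 codes for short/flat are reconstructed, matching the
    long/flat/short classes of Table 6."""
    fret = (close[T + 5] - close[T + 1]) / close[T + 1]
    return 1 if fret > 0.01 else (-1 if fret < -0.01 else 0)

def state_indicators(close, high, low, T):
    """Eqs. 19-20 for day T. The summation in the oscillation intensity
    is an assumption (the extracted formula shows a single term)."""
    w1 = slice(T - 119, T + 1)                        # 120-day window
    s1 = (close[T] - low[w1].min()) / (high[w1].max() - low[w1].min())
    w2 = slice(T - 59, T + 1)                         # 60-day window
    s2 = np.abs(np.diff(close[T - 60:T + 1])).sum() / (high[w2].max() - low[w2].min())
    return s1, s2
```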

Table 5

Sample back-test performance of six technical indicators

               Optimal parameters, in-sample period    Extrapolated optimal parameters
               Ann. yield   Sharpe   Calmar            Ann. yield   Sharpe   Calmar
Indicator 1    41.0%        1.29     1.23              28.5%        1.09     0.90
Indicator 2    38.9%        1.22     1.19              18.0%        0.70     0.88
Indicator 3    36.9%        1.18     1.22              26.1%        1.00     1.22
Indicator 4    36.9%        1.16     1.03              13.2%        0.50     0.41
Indicator 5    32.3%        1.02     1.29              21.4%        0.81     0.58
Indicator 6    27.0%        0.85     0.52              12.1%        0.45     0.29

Taking the market trend feature as the class variable and the state and technical indicators as independent variables, the mixer was constructed. The signal mixer was tested by extrapolation, in which the sample set {F_i, S_{·,i}, I_{·,i} | i ∈ [1, T − 5]} was used to predict F_T for day T. The results predicted by LKNDA are shown in Table 6. The prediction accuracy of LKNDA was 79.8%, and the probability of a contrary signal between the predicted and actual values was only 7.3%. The prediction accuracies of SVM, LDA, C5.0, NBC and KNN were 78.7%, 77.6%, 75.8%, 72.2% and 70.5%, respectively.

Table 6

Proportion of predicted value to actual value in total sample

                   Actual Long   Actual Flat   Actual Short
Predicted Long     36.4%         2.1%          3.3%
Predicted Flat     2.8%          15.4%         9.4%
Predicted Short    4.0%          3.9%          28.1%

According to the integrated signal generated by the mixer, the performance of the strategy is shown in Figure 5. The CSI 300 index is the gray line on the secondary axis. The red areas mark long signals, the green areas mark short signals, and the white bars mark empty (flat) signal periods. The net value of the back-test is the gray line on the primary axis. The strategy achieved an annualized yield of 43.2%, a Sharpe ratio of 1.51 and a Calmar ratio of 1.82, far higher than the single-indicator extrapolation performance on the right side of Table 5. The timing signal mixer thus obtained excellent performance. On the one hand, LKNDA adapts well to extracting complex features; on the other hand, it automatically adjusts the weights of the different strategies according to the market state, maximizing the effect of the strategy group.

Figure 5: Integrated transaction signal of the mixer and net value of back-testing

5 Conclusions

This paper presents a supervised classification algorithm, local kernel nonparametric discriminant analysis, which relaxes the normality assumption of conventional discriminant analysis. Compared with NBC, LDA and SVM, LKNDA can effectively identify the classification of complex nonlinear structures. Compared with KNN and C5.0, it can accurately identify the characteristics of the local sample. The bandwidth K primarily determines prediction accuracy, and the type of unilateral kernel function primarily determines the robustness of prediction. Compared with KNN, LKNDA has the same time complexity O(mn), higher accuracy, and a smaller required sample size. The method was applied to the construction of a timing signal mixer and proved very effective in extracting complex features of the financial system; the mixer performed well in the CSI 300 back-test. In future studies, the method will be tested on more cases.

Acknowledgement

This work was supported by the National Natural Science Foundation of China (Nos. 71501007, 71332003, 71672006). The authors would like to thank the referees and the editor who handled this manuscript for their invaluable comments and suggestions.

References

[1] Fisher R., The use of multiple measurements in taxonomic problems, Annals of Human Genetics, 1936, 7, 179-188. doi:10.1111/j.1469-1809.1936.tb02137.x

[2] Roth V., Steinhage V., Nonlinear discriminant analysis using kernel functions, Adv. Neural Inf. Process. Syst., 1999, 568-574

[3] Mika S., Rätsch G., Weston J., Schölkopf B., Müller K., Fisher discriminant analysis with kernels, Neural Networks Signal Process. IX, IEEE, 1999, 41-48. doi:10.1109/NNSP.1999.788121

[4] Baudat G., Anouar F., Generalized discriminant analysis using a kernel approach, Neural Comput., 2000, 12(10), 2385-2404. doi:10.1162/089976600300014980

[5] Fukunaga K., Mantock J.M., Nonparametric discriminant analysis, IEEE Trans. Pattern Anal. Mach. Intell., 1983, 6, 671-678. doi:10.1109/TPAMI.1983.4767461

[6] Diaf A., Boufama B., Benlamri R., Non-parametric Fisher’s discriminant analysis with kernels for data classification, Pattern Recognit. Lett., 2013, 34(5), 552-558. doi:10.1016/j.patrec.2012.10.030

[7] Jarchi D., Boostani R., A new weighted LDA method in comparison to some versions of LDA, Proc. World Acad. Sci. Eng. Technol., 2006, 12, 233-238

[8] Qiu X., Wu L., Nearest neighbor discriminant analysis, Int. J. Pattern Recognit. Artif. Intell., 2006, 20, 1245-1259. doi:10.1142/S0218001406005186

[9] Fan Z., Xu Y., Zhang D., Local linear discriminant analysis framework using sample neighbors, IEEE Trans. Neural Networks, 2011, 22, 1119-1132. doi:10.1109/TNN.2011.2152852

[10] Shi Z., Hu J., Local linear discriminant analysis with composite kernel for face recognition, IEEE Int. Conf. Neural Networks, 2012, 20, 1-5. doi:10.1109/IJCNN.2012.6252385

[11] Li J., Sun W., Wang Y., Tang L., 3D model classification based on nonparametric discriminant analysis with kernels, Neural Comput. Appl., 2013, 22(3-4), 771-781. doi:10.1007/s00521-011-0768-2

[12] Zeng Q., Weighted marginal discriminant analysis, Neural Comput. Appl., 2014, 24(3-4), 503-511. doi:10.1007/s00521-012-1293-7

[13] Xing X., Du S., Jiang H., Semi-supervised nonparametric discriminant analysis, IEICE Trans. Inf. Syst., 2013, E96.D(2), 375-378. doi:10.1587/transinf.E96.D.375

[14] Du C., Zhou S., Sun J., Sun H., Wang L., Discriminant embedding by sparse representation and nonparametric discriminant analysis for face recognition, J. Cent. South Univ., 2013, 20(12), 3564-3572. doi:10.1007/s11771-013-1882-3

[15] Gu X., Liu C., Wang S., Zhao C., Feature extraction using adaptive slow feature discriminant analysis, Neurocomputing, 2015, 154, 139-148. doi:10.1016/j.neucom.2014.12.010

[16] Ghassabeh Y.A., Rudzicz F., Moghaddam H.A., Fast incremental LDA feature extraction, Pattern Recogn., 2015, 48(6), 1999-2012. doi:10.1016/j.patcog.2014.12.012

[17] Pérez-García V.M., Fitzpatrick S., Pérez-Romasanta L.A., Pesic M., Schucht P., Applied mathematics and nonlinear sciences in the war on cancer, Appl. Math. Nonlinear Sci., 2016, 1(2), 423-436. doi:10.21042/AMNS.2016.2.00036

[18] Lin Y.C., Hwang K.S., Wang F.S., A mixed-coding scheme of evolutionary algorithms to solve mixed-integer nonlinear programming problems, Comput. Math. Appl., 2004, 47(8-9), 1295-1307. doi:10.1016/S0898-1221(04)90123-X

[19] Chung T.S., Wang Z.Y., Li Y.Z., Optimal generation expansion planning via improved genetic algorithm approach, Int. J. Elec. Power Energy Syst., 2004, 26(8), 655-659. doi:10.1016/j.ijepes.2004.04.012

[20] McCulloch W.S., Pitts W., A logical calculus of the ideas immanent in nervous activity, Neurocomputing: Found. Res., MIT Press, 1943. doi:10.1007/BF02478259

[21] Werbos P., Beyond regression: New tools for prediction and analysis in the behavioral sciences, PhD thesis, Harvard University, 1974

[22] Feng J., Shi D., Complex network theory and its application research on P2P networks, Appl. Math. Nonlinear Sci., 2016, 1(1), 45-52. doi:10.21042/AMNS.2016.1.00004

Received: 2017-1-3
Accepted: 2017-2-23
Published Online: 2017-5-5

© 2017 Q. Li et al.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
