Article Open Access

Multi-task feature learning by using trace norm regularization

Published/Copyright: November 10, 2017

Abstract

Multi-task learning can exploit the correlation among multiple related machine learning problems to improve performance. This paper considers applying the multi-task learning method to learn a single task. We propose a new learning approach, which employs the mixture of experts model to divide a learning task into several related sub-tasks, and then uses trace norm regularization to extract a common feature representation of these sub-tasks. A nonlinear extension of this approach using kernels is also provided. Experiments conducted on both simulated and real data sets demonstrate the advantage of the proposed approach.

1 Introduction

Multi-task learning has been researched extensively in the machine learning community for many years [1]. It handles many related regression or classification tasks simultaneously to obtain a better model for each task. It has been used in many fields such as bioinformatics [2, 3], visual classification [4], web search ranking [5], industrial inspection [6] and so on. A better predictive function for each task can be obtained by multi-task learning if the tasks have something in common. Besides, when a new related task appears, a precise model for it can be developed from just a few training samples by utilizing the relationships among the related tasks.

There are many methods to define the correlation among multiple tasks [7, 8, 9, 10, 11, 12]. One important way is to assume that the tasks share common features: the same subset of features is chosen to represent the input-output correlation in each task. L2,1 norm regularized minimization, a kind of group Lasso problem, is commonly used to find the features shared among different learning tasks [13, 14]. Another way to describe the correlation among multiple tasks is to assume that the linear predictors of the different tasks lie in a low rank subspace. Argyriou et al. proposed a convex multi-task feature learning formulation to learn a common sparse representation across tasks [15]. Their formulation is essentially equivalent to an approach employing the trace norm as a regularizer, introduced to replace the nonconvex rank function. The trace norm (also known as the nuclear norm) of a matrix is the sum of its singular values, so trace norm regularization applies absolute shrinkage to the singular values of the coefficient matrix and encourages many of them to be zero [16, 17]. Trace norm regularization is a promising heuristic for finding the low rank structure of the coefficients across different tasks [18].

Many machine learning methods use the divide and conquer strategy to deal with complex classification or regression problems: a complex problem is divided into multiple simpler subproblems. Motivated by the success of multi-task learning techniques in learning multiple tasks, in this paper we attempt to use the multi-task learning method to improve the performance of the divide and conquer strategy. The divide and conquer strategy often divides the input space into many local regions, and the training data in each region may be insufficient. Since all subproblems arise from studying the same target, there exists some intrinsic relatedness among them. Therefore, we can utilize the multi-task learning approach to improve the generalization performance of the divide and conquer strategy.

In order to fulfill a single task through the multi-task learning method, we use the mixture of experts (MOE) [19] method to divide a complex machine learning problem into subproblems. MOE is a probabilistic tree-based model that uses a mixture of conditional density models to approximate the conditional distribution of the output. The mixing coefficients of MOE depend on the inputs and are determined by the gating functions, and the local conditional density models are called experts. A comprehensive survey of the mixture of experts can be found in Ref. [20]. Recently, many novel MOE methods have been proposed to handle high dimensional data [21, 22, 23]. In this paper, a new trace norm regularized MOE model is proposed. We choose trace norm regularization to extract the connection among the expert models and gating functions. Trace norm regularization is a feature learning technique, so the trace norm regularized MOE model can uncover the shared underlying characteristics of the training inputs. Different from previous studies on the MOE model, which often aim to select a set of sparse features [22], the trace norm regularized MOE model obtains a small set of underlying characteristics that can be represented as linear combinations of the original features. Moreover, trace norm regularization allows us to work in a kernel space and handle high dimensional (or infinite dimensional) features. In this respect, the trace norm regularized MOE model is more flexible than the MOE with sparse feature selection [22].

This paper is organized as follows. Section 2 reviews the standard framework of the MOE model and presents the proposed trace norm regularized MOE model; the optimization procedure and the kernel extension are also provided there. In section 3, we demonstrate the performance of the proposed method by experiments conducted on both synthetic and real data sets. We conclude and discuss future research in section 4.

2 Mixture of experts model with trace norm regularization

In this section, we first briefly present the basic elements of the MOE model. Then we present the new MOE model with trace norm regularization and give its kernel extension.

2.1 Mixture of experts model

Let (x_i, y_i), i = 1, 2, …, N denote N pairs of input/output data, where y_i is a response variable and x_i ∈ R^p is a p-dimensional input vector. The MOE model aims to estimate the conditional probability distribution:

p(y|x)    (1)

The MOE model approximates this distribution through a mixture of multiple local distributions, called experts. The MOE model with K experts can be expressed as:

p(y|x) = \sum_{k=1}^{K} \pi_k(x) p_k(y|x)    (2)

In the MOE model, the conditional distribution is decomposed as a weighted combination of K expert models. The weight function π_k(x), called the gating function, satisfies π_k(x) > 0 and \sum_k π_k(x) = 1. The gating functions divide the input data into multiple regions. p_k(y|x) is the expert model, which models the data assigned to it by the gating functions. Different from the mixture of Gaussians or the mixture of linear regressions, MOE allows the weight coefficients to differ across samples: the mixture proportions depend on the input x.

The MOE model often uses the softmax function as the gating function:

\pi_k(x) = \frac{\exp(r_k^T x)}{\sum_{j=1}^{K} \exp(r_j^T x)}    (3)

where r_i ∈ R^p, i ∈ {1, 2, …, K}.

The expert models are also often linear, such as generalized linear models. The density function p_k(y|x) can be expressed as:

p_k(y|x) = h(y, w_k^T x)    (4)

where w_i ∈ R^p, i ∈ {1, 2, …, K} are the parameters of the expert models. For example, we can use the logistic function for two-class classification:

p_k(y|x) = h(y, w_k^T x) = \frac{1}{1 + \exp(-y \, w_k^T x)}    (5)

where y ∈ {1, −1}.
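As a concrete illustration, the gating and expert definitions in Eqs. (2), (3) and (5) can be evaluated as follows. This is a minimal sketch; the random parameter values and helper names are ours, not from the paper.

```python
# Illustrative sketch: evaluating the MOE conditional density p(y|x)
# with K softmax gates (Eq. (3)) and logistic experts (Eq. (5)).
import numpy as np

def softmax_gates(x, R):
    """pi_k(x) = exp(r_k^T x) / sum_j exp(r_j^T x); R is p x K."""
    s = R.T @ x
    s -= s.max()                      # shift for numerical stability
    e = np.exp(s)
    return e / e.sum()

def logistic_expert(y, x, w):
    """p_k(y|x) = 1 / (1 + exp(-y * w^T x)), y in {+1, -1}."""
    return 1.0 / (1.0 + np.exp(-y * (w @ x)))

def moe_density(y, x, R, W):
    """Mixture p(y|x) = sum_k pi_k(x) p_k(y|x) of Eq. (2)."""
    pi = softmax_gates(x, R)
    experts = np.array([logistic_expert(y, x, W[:, k]) for k in range(W.shape[1])])
    return float(pi @ experts)

rng = np.random.default_rng(0)
p, K = 5, 3
R, W = rng.normal(size=(p, K)), rng.normal(size=(p, K))
x = rng.normal(size=p)
# the mixture is a proper conditional distribution over y in {+1, -1}
total = moe_density(1, x, R, W) + moe_density(-1, x, R, W)
assert abs(total - 1.0) < 1e-9
```

Because each logistic expert satisfies p_k(1|x) + p_k(−1|x) = 1 and the gates sum to one, the mixture is automatically normalized.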

The expectation-maximization (EM) algorithm can be used to train a MOE model. Introducing a K-dimensional binary random variable z = (z_1, z_2, …, z_K) ∈ {0, 1}^K, the MOE model (2) can be expressed as:

p(y|x) = \sum_{z} p(z|x) p(y|z, x)    (6)

z has a 1-of-K representation in which a particular element z_k is equal to 1 and all other elements are equal to 0. In (6), the distribution over z is specified in terms of the weight coefficients, such that

p(z_k = 1|x) = \pi_k(x)    (7)

and

p(y|z, x) = \prod_{k=1}^{K} h(y, w_k^T x)^{z_k}    (8)

Every latent variable z_i corresponds to a training pair (x_i, y_i). We summarize the training inputs in a matrix X ∈ R^{p×N}, whose columns are given by x_i, the training outputs in a vector Y ∈ R^N, and the latent variables in a matrix Z, whose columns are given by z_i.

Let θ denote {w_k, r_k}_{k=1,…,K}. The EM algorithm alternately optimizes the following objective function to obtain the maximum likelihood estimate of θ:

F(\theta, q) = \sum_{Z} q(Z) \ln \frac{P(Y, Z|X, \theta)}{q(Z)}    (9)

In the E-step, θ is fixed and the posterior distribution of the latent variable z_i is estimated by:

q(Z_i) = \arg\max_{q(Z)} F(\theta, q) = P(Z_i | x_i, y_i, \theta)    (10)

Specifically, we have

q(z_{ni} = 1) = \alpha_{ni} = \frac{p(y_n | x_n, w_i, z_{ni}) \, p(z_{ni} | x_n, r_i)}{\sum_{j=1}^{K} p(y_n | x_n, w_j, z_{nj}) \, p(z_{nj} | x_n, r_j)}    (11)

In the M-step, we optimize θ to maximize the expected complete-data log likelihood over the posterior distribution of the latent variables estimated in the E-step:

\theta = \arg\max_{\theta} F(\theta, q) = \arg\max_{\theta} L(\theta) = \arg\max_{\theta} \sum_{n=1}^{N} \sum_{i=1}^{K} \alpha_{ni} \left[ \log P(y_n | x_n, z_{ni} = 1, \theta) + \log P(z_{ni} = 1 | x_n, \theta) \right]    (12)

The EM algorithm repeats the E-step and M-step until either the parameters θ or the log likelihood converges.
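The E-step/M-step alternation above can be sketched as follows. This is a minimal illustration of the unregularized EM iteration of Eqs. (11) and (12), using a single gradient ascent step per M-step rather than a full inner maximization; all names, the toy data set, and the settings are ours, not the authors'.

```python
# Minimal sketch of EM for a MOE with softmax gates and logistic experts.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def e_step(X, y, W, R):
    """Responsibilities alpha_{nk} of Eq. (11); X is N x p, y in {+1,-1}^N."""
    G = X @ R                                   # gating scores, N x K
    G -= G.max(axis=1, keepdims=True)           # numerical stability
    Pi = np.exp(G); Pi /= Pi.sum(axis=1, keepdims=True)
    Lik = sigmoid(y[:, None] * (X @ W))         # expert likelihoods, N x K
    A = Pi * Lik
    return A / A.sum(axis=1, keepdims=True)

def m_step(X, y, W, R, A, lr=0.1):
    """One gradient step on the expected complete-data log likelihood."""
    G = X @ R; G -= G.max(axis=1, keepdims=True)
    Pi = np.exp(G); Pi /= Pi.sum(axis=1, keepdims=True)
    R += lr * X.T @ (A - Pi) / len(y)           # gates move toward responsibilities
    Err = A * (1 - sigmoid(y[:, None] * (X @ W)))
    W += lr * X.T @ (y[:, None] * Err) / len(y) # weighted logistic regression step
    return W, R

rng = np.random.default_rng(1)
N, p, K = 200, 4, 2
X = rng.normal(size=(N, p))
y = np.sign(X[:, 0] * X[:, 1] + 1e-9)           # a nonlinear toy target
W, R = rng.normal(size=(p, K)), rng.normal(size=(p, K))
for _ in range(100):
    A = e_step(X, y, W, R)
    W, R = m_step(X, y, W, R, A)
A = e_step(X, y, W, R)
assert A.shape == (N, K) and np.allclose(A.sum(axis=1), 1.0)
```

The paper's M-step additionally adds trace norm regularization on W and R; this unregularized version only illustrates the alternation structure.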

2.2 Mixture of experts model with the trace norm regularization

Feature extraction is a commonly used approach in machine learning to improve model accuracy when the training samples are insufficient. Some feature extraction methods obtain the underlying characteristics by estimating a matrix that projects the original input features into a low dimensional subspace. Formally, the extracted input feature can be expressed as F^T x, where F ∈ R^{p×s}, s ≪ p. Subsequently, a regression or classification model is developed on the extracted feature F^T x. Combining feature extraction and the MOE model, and using projection matrices F_G and F_E for the gating functions and the expert models respectively, we obtain

\pi_k(x) = P(z_k = 1 | x, r_k) = p(z_k = 1 | F_G^T x, g_k)    (13)

h(y, w_k^T x) = P(y | x, w_k) = P(y | F_E^T x, h_k)    (14)

where g_k ∈ R^s and h_k ∈ R^s, k = 1, …, K. The model coefficients w_k, r_k are thus substituted by w_k = F_E h_k and r_k = F_G g_k. Let W = (w_1, …, w_K) ∈ R^{p×K} and R = (r_1, …, r_K) ∈ R^{p×K}. When the projection matrices F_G and F_E are known, we can use the new inputs F_G^T x and F_E^T x to train a new MOE model. In this paper, we attempt to learn the projection matrices and the MOE model built on the extracted features simultaneously. Following the work of [15, 24], we use trace norm regularization for simultaneous feature extraction and model learning. As the predictions depend only on the products F_E H and F_G G, where H = (h_1, …, h_K) and G = (g_1, …, g_K), we can add Frobenius norm regularizers to the above EM algorithm to control the magnitudes of F_E, F_G, H and G. Adding the regularization terms does not change the posterior distribution of the latent variable z, so the E-step is unchanged. In the M-step, the optimization problem is reformulated as:

\max_{F_E, F_G, H, G} L(F_E H, F_G G, X, Y) - C_G \left( \frac{1}{2} \|F_G\|_F^2 + \frac{1}{2} \|G\|_F^2 \right) - C_E \left( \frac{1}{2} \|F_E\|_F^2 + \frac{1}{2} \|H\|_F^2 \right)    (15)

where CG > 0, CE > 0 are the regularization parameters. The optimization problem (15) is nonconvex. However, following Ref. [24], the non-convex optimization problem can be converted into a trace norm regularization problem.

The trace norm ∥⋅∥_tr of a matrix is defined as the sum of the singular values of the matrix:

\|W\|_{tr} = \sum_{i} \gamma_i    (16)

where γ_i is the i-th singular value of W.

According to [25], the trace norm has the following property:

\|W\|_{tr} = \min_{W = FG} \frac{1}{2} \left( \|F\|_F^2 + \|G\|_F^2 \right)    (17)
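Property (17) can be checked numerically: for the SVD W = U diag(γ) V^T, the factorization F = U diag(γ^{1/2}), G = diag(γ^{1/2}) V^T attains the minimum. The following small sketch is our own illustration, not code from the paper.

```python
# Numerical check of Eqs. (16)-(17): the balanced SVD factorization of W
# attains (||F||_F^2 + ||G||_F^2)/2 = ||W||_tr.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(6, 4))
U, gamma, Vt = np.linalg.svd(W, full_matrices=False)
trace_norm = gamma.sum()                        # Eq. (16): sum of singular values

F = U * np.sqrt(gamma)                          # U diag(sqrt(gamma))
G = np.sqrt(gamma)[:, None] * Vt                # diag(sqrt(gamma)) V^T
assert np.allclose(F @ G, W)                    # a valid factorization W = FG
bound = 0.5 * (np.linalg.norm(F, 'fro')**2 + np.linalg.norm(G, 'fro')**2)
assert np.isclose(bound, trace_norm)            # Eq. (17) attained
```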

The problem (15) can then be rewritten as:

\max_{W, R, c, d} \sum_{n=1}^{N} \sum_{i=1}^{K} \alpha_{ni} \left[ \log P(y_n | x_n, z_{ni} = 1, w_i) + \log P(z_{ni} = 1 | x_n, r_i) \right] - C_E \|W\|_{tr} - C_G \|R\|_{tr}    (18)

The trace norm is the convex envelope of the matrix rank [26]. Therefore, trace norm regularization is often used in multi-task learning and matrix completion to obtain low rank solutions. The idea of using the trace norm to extract the features of the MOE model comes from multi-task learning, where the trace norm regularizer is often used to obtain a few features common across the tasks. The MOE model divides the data into multiple regions, and the data in each region may be insufficient to train the local expert model. Since multi-task learning can improve generalization performance when only limited training data are available for each task, we can use it to improve the performance of the expert models and the gating functions in the MOE model. Consequently, with the aid of the MOE model, we can apply the multi-task learning technique to a single task learning problem.

The optimization problem (18) can be divided into two independent trace norm regularization problems:

\max_{W, c} \sum_{n=1}^{N} \sum_{i=1}^{K} \alpha_{ni} \log h(y_n, w_i^T x_n + c_i) - C_E \|W\|_{tr}    (19)

and

\max_{R, d} \sum_{n=1}^{N} \sum_{i=1}^{K} \alpha_{ni} \log \frac{\exp(r_i^T x_n + d_i)}{\sum_{j=1}^{K} \exp(r_j^T x_n + d_j)} - C_G \|R\|_{tr}    (20)

When h(y_n, w_i^T x_n) is log-concave, the optimization with trace norm regularization is a convex, but non-smooth, optimization problem. It can be formulated as a semi-definite program (SDP) and solved by an existing SDP solver such as SDPT3 [26]. Recently, many efficient algorithms such as block coordinate descent [15], the accelerated proximal gradient method (APG) [16] and ADMM [27] have been developed to solve trace norm minimization problems. In this paper, we use the APG algorithm due to its fast convergence rate.
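The workhorse of APG for such problems is the proximal operator of the trace norm, which is singular value soft-thresholding. The sketch below illustrates this standard step (it is not the authors' code, and the threshold value is arbitrary).

```python
# Singular value soft-thresholding: the proximal operator of tau*||.||_tr,
# the key step of APG for trace norm regularized problems.
import numpy as np

def svt(W, tau):
    """prox_{tau ||.||_tr}(W): shrink each singular value toward zero by tau."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.maximum(s - tau, 0.0)                # absolute shrinkage on spectrum
    return (U * s) @ Vt

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 5))
Z = svt(W, tau=1.0)
# shrinkage reduces the trace norm and typically the rank of the iterate
assert np.linalg.svd(Z, compute_uv=False).sum() <= np.linalg.svd(W, compute_uv=False).sum()
```

Each APG iteration takes a gradient step on the smooth (negative log likelihood) term and then applies this operator, which is why the coefficient matrices are driven toward low rank.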

In summary, the MOE model with trace norm regularization can be trained by iterating the following two steps:

E-step: evaluate the output of the gating functions and the expert models by using the current parameters, and then evaluate the posterior probability of latent variables using eq. (11).

M-step: update the parameters w_i and r_i by solving the two trace norm regularized problems (19) and (20).

2.3 Nonlinear extension with kernel

The MOE model with linear experts can handle nonlinear data directly. However, when the feature space is very high dimensional (or infinite dimensional), working in the original feature space is computationally expensive. The above method can be extended to work on the kernel matrix. Following the representer theorem for trace norm regularization [15], we can obtain the following theorem:

Theorem 2.1

If W and R are the optimal solutions of the MOE model, then w_k, r_k, k ∈ {1, 2, …, K} can be expressed as:

w_k = \sum_{i=1}^{N} u_{ki} x_i, \quad r_k = \sum_{i=1}^{N} v_{ki} x_i    (21)

where u_{ki} and v_{ki} are the linear combination coefficients of the k-th expert and gating function. The proof of this theorem is similar to the proof in Ref. [15]. According to Theorem 2.1, the optimal W and R can be represented as:

W = X U_0, \quad R = X V_0    (22)

where U_0(i, j) = u_{ji} and V_0(i, j) = v_{ji}. Let ζ = span{x_i, i = 1, 2, …, N}. We consider a matrix P whose columns form an orthonormal basis of ζ. According to (22), there are matrices U_1 and V_1 such that

W = P U_1, \quad R = P V_1    (23)

As P^T P = I, we have

\|W\|_{tr} = \mathrm{trace}\left( (P U_1 U_1^T P^T)^{\frac{1}{2}} \right) = \mathrm{trace}\left( (U_1 U_1^T)^{\frac{1}{2}} \right) = \|U_1\|_{tr}, \quad \|R\|_{tr} = \|V_1\|_{tr}    (24)

Substituting (24) into the objectives of (19) and (20) yields the following problems:

\max_{U_1, c} \sum_{n=1}^{N} \sum_{i=1}^{K} \alpha_{ni} \log h(y_n, u_{1i}^T P^T x_n + c_i) - C_E \|U_1\|_{tr}    (25)

\max_{V_1, d} \sum_{n=1}^{N} \sum_{i=1}^{K} \alpha_{ni} \log \frac{\exp(v_{1i}^T P^T x_n + d_i)}{\sum_{j=1}^{K} \exp(v_{1j}^T P^T x_n + d_j)} - C_G \|V_1\|_{tr}    (26)

where u_{1i} and v_{1i} are the i-th columns of U_1 and V_1. The problems (25) and (26) can thus be regarded as using the modified inputs B = (β_1, β_2, …, β_N) = P^T X. As the columns of P form a basis of ζ, P can be expressed as:

P = X Q    (27)

for some coefficient matrix Q (written Q here to avoid confusion with the gating parameter matrix R). Thus, B = P^T X = Q^T X^T X = Q^T K, where K is the Gram matrix:

K = \begin{pmatrix} \langle x_1, x_1 \rangle & \cdots & \langle x_1, x_N \rangle \\ \vdots & \ddots & \vdots \\ \langle x_N, x_1 \rangle & \cdots & \langle x_N, x_N \rangle \end{pmatrix}    (28)

If the matrix Q is known, the above trace norm regularized MOE model depends only on the inner products between pairs of samples. When the input feature x is mapped into a kernel feature space by a nonlinear map φ(x), we can use the kernel function to evaluate the inner products in that space. The matrix Q can be estimated from the Gram matrix K: compute the eigendecomposition of the N × N Gram matrix, K = U D U^T, where D is the diagonal matrix containing the eigenvalues of K and the columns of U are the corresponding eigenvectors; then Q = U D^{−1/2}. Q can also be computed by Gram-Schmidt orthogonalization [15].

To make a prediction for a new sample x, the expert models only need to evaluate w_i^T x = (X Q u_{1i})^T x = (Q u_{1i})^T (X^T x) = (Q u_{1i})^T \tilde{K}(x), and the gating functions only need to evaluate r_i^T x = (Q v_{1i})^T \tilde{K}(x), where \tilde{K}(x) = (k(x_1, x), k(x_2, x), …, k(x_N, x))^T. In summary, using the above procedures, we can build and make predictions with a trace norm regularized MOE model without the original features.
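The basis construction Q = U D^{−1/2} and the kernel-only prediction identity above can be sketched as follows for the linear kernel. This is our own illustration with invented data; the variable names are ours.

```python
# Sketch: orthonormal basis of span{x_i} from the Gram matrix, and
# prediction without the original features (linear kernel case).
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(3, 10))           # columns are the N = 10 samples, p = 3
K = X.T @ X                            # Gram matrix for the linear kernel
d, U = np.linalg.eigh(K)               # K = U D U^T (ascending eigenvalues)
keep = d > 1e-10                       # drop numerically null eigendirections
Q = U[:, keep] / np.sqrt(d[keep])      # Q = U D^{-1/2}
P = X @ Q                              # P = X Q, orthonormal basis of span{x_i}
assert np.allclose(P.T @ P, np.eye(keep.sum()), atol=1e-8)

# kernel-only prediction: w = P u implies w^T x = (Q u)^T k(x),
# where k(x) = (k(x_1, x), ..., k(x_N, x))^T
u = rng.normal(size=keep.sum())
w = P @ u
x_new = rng.normal(size=3)
kvec = X.T @ x_new                     # linear-kernel evaluations k(x_i, x)
assert np.isclose(w @ x_new, (Q @ u) @ kvec)
```

For a nonlinear kernel, K and kvec would be filled with kernel evaluations instead of inner products, and the rest is unchanged, which is exactly why the method never needs the explicit features.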

3 Experiment

In this section, we present numerical experiments on a synthetic data set and several real data sets to demonstrate the performance of the proposed method. We studied two-class classification problems and used logistic regression models as the local expert models.

The proposed trace norm regularized MOE model is compared with the L1 norm regularized MOE model [22], support vector machines (SVM), linear logistic regression with L1 norm regularization, an SVM ensemble with bagging, and AdaBoost using decision trees as weak classifiers. The parameters of these methods are selected by 3-fold cross-validation with grid search.

3.1 Synthetic data

To generate the synthetic data set, we first construct 2-dimensional positive and negative samples as shown in Figure 1, where the positive samples are randomly drawn from a 2-dimensional Gaussian distribution with zero mean and covariance cov = diag(4, 4). The i-th negative sample is generated as (6 cos(2πu_i) + v_i, 6 sin(2πu_i) + w_i), where u_i, v_i and w_i are randomly drawn from the standard normal distribution. Then, we generate a 50 × 2 random orthogonal projection matrix to project the 2-dimensional samples into a 50-dimensional linear space. Finally, we further append 50 noise features to the projected samples; the noise is zero-mean Gaussian with standard deviation 1. Consequently, we obtain 100-dimensional input data. The labels of these high dimensional data are determined by the labels in the original 2-dimensional feature space. We generate a total of 200 samples, including 100 positive samples and 100 negative samples.
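The generation procedure above can be sketched as follows (the random seed and variable names are ours, not the authors').

```python
# Sketch of the synthetic data generation: a Gaussian blob (positives)
# surrounded by a noisy ring (negatives), embedded in 100 dimensions.
import numpy as np

rng = np.random.default_rng(5)
n = 100
pos = rng.multivariate_normal([0, 0], np.diag([4, 4]), size=n)   # positives
u, v, w = rng.standard_normal((3, n))
neg = np.column_stack([6*np.cos(2*np.pi*u) + v, 6*np.sin(2*np.pi*u) + w])

X2 = np.vstack([pos, neg])                       # 200 x 2 base samples
y = np.hstack([np.ones(n), -np.ones(n)])         # labels from the 2-D space

# random 50 x 2 orthogonal projection via QR of a random 50 x 2 matrix
P, _ = np.linalg.qr(rng.normal(size=(50, 2)))
X50 = X2 @ P.T                                   # project into 50 dimensions
noise = rng.normal(size=(200, 50))               # 50 unit-variance noise features
X = np.hstack([X50, noise])                      # final 200 x 100 inputs
assert X.shape == (200, 100)
```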

Figure 1: 2-dimensional positive and negative samples

Since the data are generated from 2-dimensional data, we can plot the separating hyperplane to illustrate the classification accuracy of the different methods. We use 75% of the samples as the training set to build the classifiers. Figure 2 shows the separating hyperplanes obtained by the L1 norm regularized logistic model, SVM with a Gaussian kernel, MOE with trace norm regularization, and the classic MOE model without regularization. In the MOE models, the number of experts is set to 10. In Figure 2, we use the following procedure to obtain the 2-dimensional separating hyperplane. First, we use the projection matrix that was used to generate the 50-dimensional input features to project the points of the 2-dimensional plane into the 50-dimensional feature space. Then, we append 50 all-zero features to these projected points. Finally, we use the classifiers built by the different methods to decide the labels of the points in the 2-dimensional plane, and draw the separating hyperplane.

Figure 2: Separating hyperplane obtained by a) L1 norm regularized logistic regression, b) SVM, c) trace norm regularized MOE, d) classic MOE

Although we specify the number of experts as 10, the trace norm regularized MOE model uses about 4 line segments to separate the samples successfully. As the synthetic data set is linearly inseparable, the logistic regression model cannot classify these data correctly. Meanwhile, SVM predicts many negative samples as positive due to the disturbance of the noise features. Figure 2 shows that the performance of the classic MOE on the high dimensional synthetic data is very poor. Therefore, we do not evaluate the classic MOE model in the following experiments.

Next, we adopt 10 hold-out partitions to compute the average classification error of these methods. For each partition, we first randomly select 50% of the samples as training samples and then use a 3-fold cross-validation procedure on the training set to obtain suitable parameters for each method. We evaluate the predictive accuracy on the remaining 50% of the samples. Table 1 shows the average classification error on the synthetic data set. TMOE stands for the proposed trace norm regularized MOE model, and RMOE stands for the L1 norm regularized MOE model.

Table 1

Average (std. deviation) classification error on synthetic dataset

Method                   Classification error
TMOE (Gaussian kernel)   40.6% ± 2.3%
RMOE                     15.9% ± 4.2%
Logistic                 50.9% ± 4.4%
SVM (Gaussian kernel)    42.1% ± 5.2%
AdaBoost                 13.2% ± 2.2%
Bagging-SVM              30.6% ± 4.5%

The results in Table 1 show that AdaBoost obtains the best result on the synthetic data set. The results obtained by TMOE and RMOE are comparable with those of AdaBoost. AdaBoost cannot extract linear combinations of features, but the MOE model can combine multiple linear models to describe the complex nonlinear relationship between the input and output variables.

3.2 Real data sets

We test the performance of the proposed method on 4 real data sets: Ionosphere, Musk-1, LSVT, and Sonar. These data sets are taken from the UCI Machine Learning Repository and concern two-class classification. The main characteristics of each data set are described in Table 2.

Table 2

Detail of real datasets used for experiments

Dataset name   No. samples   No. covariates
Ionosphere     351           34
Musk-1         476           166
LSVT           126           309
Sonar          208           60

We also use 10 hold-out partitions to evaluate the average classification error. For each partition, we select 50% of the samples for training and use the remaining samples for testing. Table 3 shows the average classification errors on these real data sets.

Table 3

Average (std. deviation) classification error on real datasets

Method          Ionosphere      Musk-1          LSVT            Sonar
TMOE (linear)   12.3% ± 2.1%    10.9% ± 2.8%    11.2% ± 1.7%    21.2% ± 4.5%
TMOE (kernel)   6.0% ± 1.4%     10.7% ± 3.4%    23.3% ± 6.7%    19.8% ± 5.0%
RMOE            12.2% ± 3.1%    14.8% ± 3.1%    17.5% ± 4.8%    23.5% ± 4.7%
Logistic        12.2% ± 2.4%    20.0% ± 2.1%    19.2% ± 6.2%    27.2% ± 4.8%
SVM             6.1% ± 1.2%     12.1% ± 2.4%    23.2% ± 5.5%    23.0% ± 4.2%
AdaBoost        10.3% ± 1.7%    18.5% ± 2.3%    18.1% ± 3.8%    20.9% ± 4.7%
Bagging-SVM     7.0% ± 1.3%     13.1% ± 0.5%    15.7% ± 1.6%    15.7% ± 1.5%

In the experiments, the SVM method obtains better results with the Gaussian kernel on the Ionosphere and Sonar data sets and with the polynomial kernel on the Musk-1 and LSVT data sets. Therefore, SVM uses the Gaussian kernel and the polynomial kernel respectively on the four data sets, and the kernel function used in TMOE is the same as that used in SVM. The comparison results in Table 3 show that the trace norm regularized MOE model generally performs better than the L1 norm regularized MOE model. Since the linear experts can preserve more common features among the sub-tasks, TMOE with linear experts obtains the best results on the higher dimensional data sets, Musk-1 (166-dimensional) and LSVT (309-dimensional). In contrast, TMOE with kernel obtains the best result on the Ionosphere data set (34-dimensional, 351 samples), which is lower dimensional with a larger sample size. Regularized MOE models often perform better than linear models because they use a combination of multiple linear models to describe nonlinear relationships. The experiments on the real data sets demonstrate the good performance of the proposed method.

4 Conclusions

In this paper, trace norm regularization is introduced into the mixture of experts model to extract a common feature representation of the expert models and gating functions. The combination of MOE and trace norm regularization can improve the generalization performance of the MOE model. Moreover, trace norm regularization allows us to handle high dimensional data more flexibly by working in the kernel space. The experiments on a synthetic data set and four real data sets demonstrate the superiority of the proposed method over the classic L1 norm regularized MOE model. However, the experimental results also show that the performance of the proposed method does not always match that of classical sophisticated algorithms (such as Bagging-SVM and AdaBoost) in small sample or lower dimensional cases. In the future, the kernel selection approach will be optimized to improve classification performance, and the combination of Bayesian multi-task learning techniques with the MOE model will be considered to avoid the cross-validation computation in the proposed method.

Acknowledgement

This work is supported in part by National Natural Science Foundation of China (Grant No. 61501385), Science and Technology Planning Project of Sichuan Province, China (Grant Nos. 2016JY0242, 2016GZ0210), and Foundation of Southwest University of Science and Technology (Grant Nos. 15kftk02, 15kffk01).

References

[1] Pan S.J., Yang Q., A Survey on Transfer Learning, IEEE Trans. Knowl. Data Eng., 2010, 22, 1345–1359. doi:10.1109/TKDE.2009.191

[2] Kshirsagar M., Carbonell J., Klein-Seetharaman J., Multitask learning for host–pathogen protein interactions, Bioinformatics, 2013, 29, 217–226. doi:10.1093/bioinformatics/btt245

[3] Bickel S., Bogojeska J., Multi-task learning for HIV therapy screening, Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 2008, 56–63. doi:10.1145/1390156.1390164

[4] Yuan X.T., Yan S., Visual classification with multi-task joint sparse representation, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, 2010, 3493–3500. doi:10.1109/CVPR.2010.5539967

[5] Chapelle O., Shivaswamy P., Multi-task learning for boosting with application to web search ranking, Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, USA, 2010, 1189–1198. doi:10.1145/1835804.1835953

[6] He J.R., Zhu Y.D., Hierarchical Multi-task Learning with Application to Wafer Quality Prediction, Proceedings of the 12th IEEE International Conference on Data Mining, Brussels, Belgium, 2012, 290–298. doi:10.1109/ICDM.2012.63

[7] Caruana R., Multitask Learning, Mach. Learn., 1997, 28, 41–75. doi:10.1007/978-1-4615-5529-2_5

[8] Baxter J., A Model of Inductive Bias Learning, J. Artif. Intell. Res., 2011, 12, 149–198. doi:10.1613/jair.731

[9] Schwaighofer A., Tresp V., Yu K., Learning Gaussian process kernels via hierarchical Bayes, Neural Inf. Process. Syst., 2004, 1209–1216.

[10] Yu K., Tresp V., Schwaighofer A., Learning Gaussian Processes from Multiple Tasks, Proceedings of the 22nd International Conference on Machine Learning, New York, USA, 2005, 1012–1019. doi:10.1145/1102351.1102479

[11] Zhang J., Ghahramani Z., Yang Y., Learning multiple related tasks using latent independent component analysis, Neural Inf. Process. Syst., 2005, 1585–1592.

[12] Evgeniou T., Pontil M., Regularized Multi-task Learning, Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, USA, 2004, 109–117. doi:10.1145/1014052.1014067

[13] Liu J., Ji S., Ye J., Multi-task feature learning via efficient l2,1-norm minimization, Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, Montreal, Canada, 2009, 339–348.

[14] Nie F., Huang H., Cai X., Ding C.H., Efficient and robust feature selection via joint l2,1-norms minimization, Neural Inf. Process. Syst., 2010, 1813–1821.

[15] Argyriou A., Evgeniou T., Pontil M., Convex multi-task feature learning, Mach. Learn., 2008. doi:10.1007/s10994-007-5040-8

[16] Toh K.C., Yun S., An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Linear Least Squares Problems, Pac. J. Optim., 2010, 6(3), 615–640.

[17] Pong T.K., Tseng P., Ji S.W., Ye J.P., Trace Norm Regularization: Reformulations, Algorithms, and Multi-Task Learning, SIAM J. Optim., 2010, 20, 3465–3489. doi:10.1137/090763184

[18] Recht B., Fazel M., Parrilo P.A., Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization, SIAM Rev., 2010, 52, 471–501. doi:10.1137/070697835

[19] Jacobs R.A., Jordan M.I., Nowlan S.J., Hinton G.E., Adaptive Mixtures of Local Experts, Neural Comput., 1991, 3, 79–87. doi:10.1162/neco.1991.3.1.79

[20] Yuksel S.E., Wilson J.N., Gader P.D., Twenty Years of Mixture of Experts, IEEE Trans. Neural Netw. Learn. Syst., 2012, 23, 1177–1193. doi:10.1109/TNNLS.2012.2200299

[21] Bo L., Sminchisescu C., Kanaujia A., Metaxas D., Fast algorithms for large scale conditional 3D prediction, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, USA, 2008, 1–8. doi:10.1109/CVPR.2008.4587578

[22] Peralta B., Soto A., Embedded local feature selection within mixture of experts, Inform. Sci., 2014, 269, 176–187. doi:10.1016/j.ins.2014.01.008

[23] Khalili A., New estimation and feature selection methods in mixture-of-experts models, Can. J. Stat., 2010, 38, 519–539. doi:10.1002/cjs.10083

[24] Amit Y., Fink M., Srebro N., Ullman S., Uncovering Shared Structures in Multiclass Classification, Proceedings of the 24th International Conference on Machine Learning, New York, USA, 2007, 17–24. doi:10.1145/1273496.1273499

[25] Srebro N., Rennie J.D.M., Jaakkola T.S., Maximum-Margin Matrix Factorization, Neural Inf. Process. Syst., 2005, 1329–1336.

[26] Fazel M., Hindi H., Boyd S.P., A rank minimization heuristic with application to minimum order system approximation, Proceedings of the 2001 American Control Conference, Arlington, USA, 2001, 4734–4739. doi:10.1109/ACC.2001.945730

[27] Yang J., Yuan X., Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization, Math. Comput., 2013, 82, 301–329. doi:10.1090/S0025-5718-2012-02598-1

Received: 2017-6-16
Accepted: 2017-9-17
Published Online: 2017-11-10

© 2017 Jiangmei et al.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
