Open Access Article

Application of nonlinear clustering optimization algorithm in web data mining of cloud computing

  • Yan Zhang
Published/Copyright: February 2, 2023

Abstract

To improve data-mining and clustering performance, and thereby the efficiency of the cloud computing platform, the author proposes a bionic optimized clustering data-extraction algorithm for cloud computing platforms. With the Gaussian distribution function graph, the degree of aggregation of a category and the distribution of data points within it can be judged intuitively. Cloud platform data are large in volume and high in dimension, and because the optimization function must be re-executed after every center-point update when computing the distances from all sample points to the centers, an efficient optimizer is required. Clustering quality is evaluated mainly with the PBM-index and DB-index. The simulation object is the Iris dataset from UCI, with N = 500 samples selected for simulation. The results show that when P (the number of attributes) is not greater than 15, the PBM value changes very little, while at P = 20 the PBM performance of all four clustering algorithms drops significantly. When the sample size grows from 50,000 to 100,000, the DB performance of the proposed algorithm changes little and the DB value tends to stabilize. In terms of running time, K-means has a clear advantage, DBSCAN is the most time-consuming, and wolf pack clustering and Mean-shift lie in between. In practice, the number of samples per training round can be adjusted dynamically according to actual needs, improving the applicability of the wolf pack clustering algorithm in specific application scenarios. For data clustering on cloud computing platforms, the proposed algorithm also shows better PBM and DB performance than common clustering algorithms.

1 Introduction

With the rapid development of social informatization, data of many kinds, such as mobile Internet data, Internet of Things data, GIS data, meteorological data, and medical data, have exploded [1]. Faced with such complex and diverse masses of data, it is wasteful for a company to discard them or merely store them, yet traditional queries and simple statistics can no longer meet practitioners' actual needs. People expect to obtain information of greater value to their industry from data, that is, to obtain knowledge from data. Such knowledge has guiding significance for production and life: it can, for example, help sellers discover the sales pattern of a class of products and predict the next stage of sales, or help medical researchers determine which factors are more closely related to a disease from diagnostic data [2]. In today's environment of "rich data but lack of information," the new technology of data mining came into being, and it has received extensive attention from academia and the information industry [3]. Data mining is a young discipline, driven by practical needs and developed from fields including statistics, machine learning, databases, and pattern recognition. A data mining task can usually be described as extracting implicit, previously unknown, and potentially valuable knowledge and information from data [4]. It is a complex, multi-stage process: a complete data mining pipeline typically includes data acquisition, data preprocessing, feature engineering, knowledge discovery, and result presentation. By task, data mining divides into two types: descriptive and predictive mining. Association rule mining and cluster analysis are typical methods of the former, whose purpose is to describe the general characteristics of data in a concise and general way.
Classification and regression are typical methods of the latter: by modeling existing data, predictive analysis is performed on new data. Figure 1 shows the data mining process. Clustering is an important direction of data mining. A clustering task can be briefly described as dividing a given dataset into several subsets, each of which is a cluster, so that objects within a cluster are as similar as possible and dissimilar to objects in other clusters. Cluster analysis has been widely applied in different scenarios, for example, web community mining and DNA sequence analysis [5]. Because different scenarios deal with different data, there is no universal method that works well on all data types. Non-uniform data are common in real life; their typical feature is that one dataset contains clusters with large differences in sample counts and sample densities, and their study is a difficult problem in current data mining research [6]. Against this background, the author proposes a bionic optimized clustering data-extraction algorithm based on a cloud computing platform. With the Gaussian distribution function graph, the degree of aggregation of categories and the distribution of data points of the same category can be judged more intuitively. Cloud platform data are large in volume and high in dimension; since the optimization function must be re-executed after every center-point update when computing the distances from all sample points to the centers, an efficient optimizer is needed. Clustering quality is evaluated mainly with the PBM-index and DB-index, and the simulation object is the Iris dataset from UCI with N = 500 samples.
The experimental results show that, in actual application, the number of samples per training round can be dynamically adjusted according to actual needs, improving the applicability of the wolf pack clustering algorithm in specific application scenarios. Compared with common clustering algorithms on cloud-computing data clustering, the proposed algorithm shows better PBM and DB performance.

Figure 1. The process of data mining.

2 Literature review

In response to this research question, Zhang compared the performance of distributed and MapReduce methods on four large-scale datasets with respect to mining accuracy and efficiency. The results show that, regardless of how many computer nodes are used, the classification performance of MapReduce-based programs is very stable, outperforming the baseline stand-alone and distributed programs [7]. Gavrylenko and Dvornyk performed global optimization of service level agreements through a genetic algorithm in cloud computing; service clustering reduces the problem's search space, and association rules compose services based on their history to improve service composition efficiency. Their experiments confirm higher efficiency than similar related work [8]. Heraguemi performed privacy-preserving data mining in cloud computing environments using homomorphic encryption, combined with the extraction of frequent closed patterns in distributed environments such as clouds, with the aim of preserving site privacy during data mining tasks [9]. Wang et al. used LDA and an enhanced SVM approach for ECG signals in cloud computing: an SVM with a weighted kernel classifies features of the input ECG signal such as right bundle branch block, premature ventricular contractions, and premature atrial contractions. Wang et al. also proposed a Meta Cloud Data Storage architecture to protect data in a cloud computing environment; this framework enables efficient data mining and more business insights in the cloud [10].
Reddy and Chittineni, through a deep learning artificial neural network (DLANN), constructed a feed-forward multilayer ANN for modeling high-level data abstractions; results on three real IoT datasets show that both ANN and DLANN can provide highly accurate results [11]. The work of Yin and Cui relates to a method for on-vehicle analysis of vehicle data, together with a system and method implementing a cloud-based distributed data-stream mining algorithm that detects patterns from vehicle diagnostics and correlates them with contextual data [12]. The algorithm newly developed by Zubar and Balamurugan can extract rules, similar to or more concise than those generated by symbolic methods, from neural networks; they describe a data mining process using neural networks with an emphasis on rule extraction [13]. Lv et al. proposed a new neutrosophic association rule algorithm, which generates association rules by handling the membership, uncertainty, and non-membership functions of items, yielding an efficient decision-making system that considers all fuzzy association rules [14]. Balamurugan et al. combined the RFM model (a model for measuring customer value and customer profitability in sales) with clustering techniques to distinguish user groups for targeted marketing [15]. According to how the data are partitioned, Geng et al. divided such algorithms into hard and soft partitioning: when the classical K-means and K-modes algorithms partition data, a data object can belong to only one cluster, whereas the FCM algorithm, by introducing fuzzy theory, assigns a fuzzy value to cluster membership and is thus a soft partitioning method [16]. Because different scenarios deal with different data, there is no universal method that works well on all data types.
Non-uniform data are common in real life; their typical feature is that one dataset contains clusters with large differences in sample counts and sample densities, and their study remains a difficult problem in current data mining research. Building on this body of work, the author proposes a bionic optimized clustering data-extraction algorithm for cloud computing platforms, using the PBM-index and DB-index as the main evaluation methods and the UCI Iris dataset (N = 500 samples) for simulation.

3 Methods

3.1 Similarity clustering

Generally, the similarity of data categories is judged from the distance between the center point and other points: the distance between two points is compared with a preset threshold to determine whether a point belongs to the same category as the center point. In practice, the distance between the two points is not used directly; instead, a Gaussian function is introduced to assist in calculating the similarity. In this way, according to the Gaussian distribution function graph, the degree of aggregation of categories and the distribution of data points of the same category can be judged more intuitively [17]. Assuming a center point $x_i$, the similarity between other points and the center point is calculated as follows:

(1) $S_{ij} = \begin{cases} \exp\left(-\dfrac{\|x_i - x_j\|^2}{2\sigma^2}\right), & i \neq j, \\ 0, & i = j. \end{cases}$
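As an illustration, Eq. (1) can be sketched in a few lines of Python. This is a hypothetical helper (not from the paper), with NumPy assumed and `sigma` as the Gaussian width:

```python
import numpy as np

def gaussian_similarity(X, sigma=1.0):
    """Pairwise similarity per Eq. (1): Gaussian kernel off-diagonal, 0 on the diagonal."""
    # Squared Euclidean distances between all pairs of rows of X.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    S = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(S, 0.0)  # S_ii = 0 for the i = j case
    return S

X = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 5.0]])
S = gaussian_similarity(X, sigma=1.0)
# Nearby points get similarity close to 1; distant pairs close to 0.
```

The matrix is symmetric by construction, which matches the intuition that similarity between two points does not depend on which one is the center.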

In the clustering process, in order to effectively classify all points in the dataset and to minimize the distance between all classified data points and the centers of their categories, the following optimization function is established:

(2) $\varepsilon = \sum_i \Big\| x_i - \sum_{j,\, x_j \in \hat{N}(x_i)} S_{ij} x_j \Big\|^2,$

where $x_j \in \hat{N}(x_i)$ indicates that $x_j$ ranges over all nodes other than $x_i$ among the $N$ sample points, subject to the constraints

(3) $\sum_{j,\, x_j \in \hat{N}(x_i)} S_{ij} = 1, \qquad S_{ij} \geq 0.$

(4) $\varepsilon = \sum_i \Big\| x_i - \sum_{j,\, x_j \in \hat{N}(x_i)} S_{ij} x_j \Big\|^2 = \sum_i \sum_{j,k,\, x_j, x_k \in \hat{N}(x_i)} S_{ij} S_{ik} (x_i - x_j)^{T} (x_i - x_k),$

where x k represents the kth node. To simplify Eq. (4), let

(5) $G_{jk} = (x_i - x_j)^{T} (x_i - x_k).$

Substitute Eq. (5) in Eq. (4) to get Eq. (6).

(6) $\varepsilon = \sum_i \sum_{j,k,\, x_j, x_k \in \hat{N}(x_i)} S_{ij} G_{jk} S_{ik}.$

Therefore, the clustering optimization problem reduces to solving

(7) $\min_{S_{ij}} \; \sum_{j,k,\, x_j, x_k \in \hat{N}(x_i)} S_{ij} G_{jk} S_{ik}.$

After solving for $S_{ij}$, the similarity matrix is obtained, from which it can be determined whether $x_i$ and $x_j$ belong to the same class.
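For each fixed center $x_i$, Eq. (7) is a quadratic program over the probability simplex defined by Eq. (3). The paper solves it within the wolf pack framework; purely as an illustrative alternative, a projected-gradient sketch (hypothetical helper names, NumPy assumed) might look like:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def solve_weights(xi, neighbors, n_iter=200, lr=0.01):
    """Projected-gradient sketch for Eq. (7): minimize s^T G s with
    G_jk = (x_i - x_j)^T (x_i - x_k), s constrained to the simplex of Eq. (3)."""
    D = xi - neighbors                  # row j holds x_i - x_j
    G = D @ D.T                         # Gram matrix of Eq. (5)
    s = np.full(len(neighbors), 1.0 / len(neighbors))
    for _ in range(n_iter):
        s = project_simplex(s - lr * 2.0 * G @ s)  # gradient of s^T G s is 2 G s
    return s

xi = np.array([0.0])
neighbors = np.array([[-1.0], [2.0]])
s = solve_weights(xi, neighbors)
# The optimal weights reconstruct x_i: s[0]*(-1) + s[1]*2 is close to 0.
```

Since $s^T G s = \|x_i - \sum_j s_j x_j\|^2$, driving the objective to zero means the weighted neighbor combination reproduces the center exactly.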

3.2 Combination of similarity clustering and wolf pack algorithm

Similarity clustering mainly solves for the optimal solution of formula (2). Given the large data volume and high dimensionality of the cloud computing platform, Eq. (2) must be re-evaluated after every center-point update when computing the distances from all sample points to the centers; under a preset iteration budget it is then difficult to reach the global optimum, so an efficient algorithm needs to be introduced into the clustering process to obtain the global optimum quickly [18]. The steps combining similarity clustering with the wolf pack algorithm are as follows:

  1. After initializing K center points, establish a clustering model;

  2. According to formulas (1) and (2), the optimization function of K center points is obtained;

  3. In formula (2), optimization is carried out with the wolf pack algorithm; when the iteration stops, the position of the head wolf is the cluster center, so the positions of the K center points are determined;

  4. Solve the optimal S i j of formula (7) according to K center points;

  5. After substituting S i j in Eq. (2), continue to use the wolf pack algorithm to update the position of the center point;

  6. Repeatedly update the position of the center point until the clustering requirements are met.
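The steps above rely on the wolf pack algorithm as the global optimizer. A much-simplified, self-contained sketch of such an optimizer follows; it is illustrative only, with generic stand-ins for the head-wolf, scouting, and renewal rules rather than the authors' exact variant (the paper's λ and T_num parameters are not modeled):

```python
import numpy as np

def wolf_pack_minimize(f, dim, n_wolves=20, n_iter=100, bounds=(-5.0, 5.0),
                       step=0.5, seed=0):
    """Simplified wolf pack optimizer: the best wolf leads; the others take
    raid steps toward it with random perturbation; the weakest wolves are
    re-initialized each generation ("survival of the fittest")."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for _ in range(n_iter):
        fitness = np.array([f(w) for w in wolves])
        head = wolves[np.argmin(fitness)].copy()   # head wolf = current best
        for i in range(n_wolves):
            # move toward the head wolf with a random raid step plus noise
            wolves[i] += step * (head - wolves[i]) * rng.random() \
                         + 0.1 * rng.normal(size=dim)
        wolves = np.clip(wolves, lo, hi)
        # re-seed the two weakest wolves to keep exploring
        worst = np.argsort([f(w) for w in wolves])[-2:]
        wolves[worst] = rng.uniform(lo, hi, size=(2, dim))
        wolves[0] = head                           # elitism: never lose the head wolf
    fitness = np.array([f(w) for w in wolves])
    return wolves[np.argmin(fitness)]

# Toy objective: squared distance to (1.5, 1.5), standing in for Eq. (2).
best = wolf_pack_minimize(lambda w: np.sum((w - 1.5) ** 2), dim=2)
```

In the combined scheme, `f` would be the clustering objective of Eq. (2) and `best` the position of one cluster center at convergence.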

3.3 Evaluation of clustering results

The choice of evaluation index affects the applicability of a clustering algorithm. On cloud computing platforms, data differ greatly and have high dimensionality, which makes data mining more difficult; the evaluation of clustering results directly affects the mining effect, and different application requirements emphasize different aspects of that evaluation. The commonly used clustering evaluation methods include the PBM-index and DB-index, and the author mainly uses these two [19].

The mathematical description of the PBM-index is as follows. Suppose the samples are divided into K categories with a total of K center points, the k-th center point is $c_k$, and $\mu_{kj}$ denotes the membership (correlation) factor between cluster k and sample j, collected in the matrix $[\mu_{kj}]_{K \times n}$. Then,

(8) $E_K = \sum_{k=1}^{K} \sum_{j=1}^{n} \mu_{kj} \| x_j - c_k \|,$

(9) $D_K = \max_{i,j = 1,\dots,K} \| c_i - c_j \|,$

where $E_K$ is the total weighted distance of all nodes $x_j$ from their center points, and $D_K$ is the largest distance between any two of the K center points [20]. From formulas (8) and (9), the clustering evaluation index is obtained as follows:

(10) $\mathrm{PBM}(K) = \left( \frac{1}{K} \times \frac{E_1}{E_K} \times D_K \right)^{\sigma},$

where $E_1$ is the value of $E_K$ when K = 1, that is, the total distance of all nodes $x_j$ from the single center point; $\sigma$ is the evaluation exponent, with range $\sigma \geq 1$.
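Under a hard assignment ($\mu_{kj} \in \{0, 1\}$), Eq. (10) can be sketched as follows. This is a hypothetical helper (NumPy assumed); $E_1$ is computed as $E_K$ for the single-cluster case, i.e., distances to the grand centroid:

```python
import numpy as np

def pbm_index(X, labels, centers, sigma=2.0):
    """PBM index per Eq. (10); larger values indicate better clustering.
    E1: distances to the global centroid (K = 1 case);
    EK: sum of within-cluster sample-to-center distances;
    DK: largest distance between any two centers."""
    K = len(centers)
    E1 = np.sum(np.linalg.norm(X - X.mean(axis=0), axis=1))
    EK = sum(np.linalg.norm(X[labels == k] - centers[k], axis=1).sum()
             for k in range(K))
    DK = max(np.linalg.norm(centers[i] - centers[j])
             for i in range(K) for j in range(K))
    return ((1.0 / K) * (E1 / EK) * DK) ** sigma

# Two tight, well-separated clusters should score far higher than a bad split.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels = np.array([0, 0, 1, 1])
centers = np.array([[0.0, 0.5], [10.0, 10.5]])
score = pbm_index(X, labels, centers)
```

The index rewards a small within-cluster spread (EK) relative to the one-cluster baseline (E1), scaled by the separation of the centers (DK).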

The mathematical description of the DB-index is as follows. Suppose the samples are divided into K categories with a total of K center points, where the center point of the k-th class is $c_k$ and that of the k'-th class is $c_{k'}$; then the evaluation formula is

(11) $\mathrm{DB}(K) = \frac{1}{K} \sum_{k=1}^{K} \max_{k' \neq k} \frac{\bar{d}_k + \bar{d}_{k'}}{\| c_k - c_{k'} \|},$

where $\bar{d}_k$ denotes the mean distance $\overline{\| x - c_k \|}$ of the samples in class k to their center $c_k$. Lower DB values indicate more compact, better-separated clusters.
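Similarly, Eq. (11) can be sketched with a hypothetical helper (NumPy assumed), implementing $\bar{d}_k$ as the mean sample-to-center distance:

```python
import numpy as np

def db_index(X, labels, centers):
    """Davies-Bouldin index per Eq. (11): lower values mean more compact,
    better-separated clusters."""
    K = len(centers)
    # mean distance of each cluster's samples to its own center
    d = [np.linalg.norm(X[labels == k] - centers[k], axis=1).mean()
         for k in range(K)]
    total = 0.0
    for k in range(K):
        total += max((d[k] + d[kk]) / np.linalg.norm(centers[k] - centers[kk])
                     for kk in range(K) if kk != k)
    return total / K

# Same toy data as before: a good split yields a small DB value.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
good = db_index(X, np.array([0, 0, 1, 1]), np.array([[0.0, 0.5], [10.0, 10.5]]))
```

Note the opposite orientation to PBM: the experiments below treat a decreasing DB value as improving performance.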

4 Results and analysis

In order to verify the performance of the bionic optimized clustering algorithm, it is compared with the commonly used K-means, Mean-shift, and DBSCAN algorithms, and MATLAB is used for the example simulations [21].
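The paper runs this comparison in MATLAB; a rough Python equivalent of the experimental setup, using scikit-learn's implementations of the three baseline algorithms (the wolf pack algorithm has no scikit-learn implementation and is omitted; the Iris data here stand in for the paper's simulation samples), might look like:

```python
import time
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans, MeanShift, DBSCAN

X = load_iris().data  # stand-in for the UCI simulation data

results = {}
for name, algo in [("K-means", KMeans(n_clusters=3, n_init=10, random_state=0)),
                   ("Mean-shift", MeanShift()),
                   ("DBSCAN", DBSCAN(eps=0.8, min_samples=5))]:
    t0 = time.perf_counter()
    labels = algo.fit_predict(X)
    results[name] = {"seconds": time.perf_counter() - t0,
                     # DBSCAN marks noise as -1; count only real clusters
                     "clusters": len(set(labels) - {-1})}
```

Each entry records the wall-clock fitting time and the number of clusters found, which mirrors the running-time comparison reported in Section 4.2.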

4.1 PBM and DB evaluation of clustering performance

The simulation data object is the Iris dataset from UCI. N = 500 samples are selected for simulation, the initial value K of similarity clustering is set to 3, and the initial values of the wolf pack algorithm are $\lambda = 0.5$ and $T_{\mathrm{num}} = 100$. Wolf pack similarity clustering was performed in the experiment, and the visualization of the clustering result is shown in Figure 2.

Figure 2. Clustering visualization of data samples when K = 3.

As can be seen from Figure 2, the 500 samples are well divided into three categories. Although the center points of two of the categories are relatively close, every one of the 500 sample points is assigned to some category, and no isolated sample points remain.

1. PBM performance evaluation

Although the clustering is completed using wolf pack optimization, Figure 2 cannot reflect the numerical indicators of clustering, nor the advantages of this algorithm over other clustering algorithms [22]. The following takes the PBM evaluation index as the simulation object and simulates data samples of different sample sizes and attribute counts with different clustering algorithms. After the simulation iterations end, the maximum (Max), minimum (Min), and mean (Mean) PBM values are calculated for each configuration of each algorithm according to formula (10); the specific results are listed in Table 1, where N represents the sample size and P the number of sample attributes [23].

Table 1

PBM performance with different sample sizes when P = 10

PBM K-means Mean-shift DBSCAN Wolf pack clustering
N = 2,000 Max 0.07213 0.07269 0.07257 0.07386
Mean 0.07034 0.07128 0.06921 0.07224
Min 0.0681 0.06844 0.06611 0.07021
N = 10,000 Max 0.05117 0.05255 0.03922 0.05287
Mean 0.04786 0.05158 0.03566 0.05130
Min 0.0371 0.04787 0.03377 0.05022
N = 50,000 Max 0.02833 0.02825 0.02736 0.03137
Mean 0.02871 0.02832 0.02632 0.02836
Min 0.01726 0.01739 0.01534 0.01801
N = 100,000 Max 0.00933 0.00837 0.00736 0.01166
Mean 0.00888 0.00832 0.00632 0.01021
Min 0.00726 0.00739 0.00534 0.00866

Ten attributes of each sample in the vehicle dataset are selected for cluster analysis. From Table 1, it can be concluded that as the number of training samples increases, the model complexity of the training samples increases and the PBM values calculated by all clustering algorithms decrease [24]. When the wolf pack algorithm computes the center positions, the running time and the preset iteration count may degrade its PBM performance, but compared with the other three clustering algorithms, its PBM value is larger; when the sample size reaches 100,000, only this algorithm's mean PBM exceeds 0.01. In comparison, the wolf pack clustering algorithm is better in terms of PBM performance.

When the sample size is fixed at 50,000, the statistical results of the PBM performance represented by different numbers of attributes in the vehicle dataset are shown in Table 2.

Table 2

PBM performance with different numbers of data attributes when N = 50,000

PBM K-means Mean-shift DBSCAN Wolf pack clustering
P = 4 Max 0.02853 0.02865 0.02775 0.03138
Mean 0.02822 0.02875 0.02664 0.02896
Min 0.01796 0.01798 0.01524 0.01824
P = 10 Max 0.02822 0.02874 0.02765 0.03136
Mean 0.02898 0.02885 0.02675 0.02887
Min 0.01795 0.01796 0.01524 0.01822
P = 15 Max 0.02872 0.02824 0.02725 0.03189
Mean 0.02798 0.02852 0.02657 0.02864
Min 0.01742 0.01712 0.01545 0.01787
P = 20 Max 0.01772 0.01921 0.01764 0.02561
Mean 0.01272 0.01353 0.01225 0.02063
Min 0.00843 0.00912 0.00741 0.01649

As can be seen from Table 2, as the number of sample attributes increases, the PBM performance of all clustering algorithms degrades. Under the same experimental conditions, the PBM value of the wolf pack clustering algorithm is higher than that obtained by the other three clustering algorithms [25]. When P is not greater than 15, the PBM value changes very little, and when P = 20, the PBM performance of all four clustering algorithms decreased significantly. Therefore, in practical applications, it can be considered to optimize the sample attributes and control the number of attributes within 15 in order to obtain better PBM performance.

2. DB performance evaluation

In the following, the DB evaluation index is used as the simulation object, and the example simulation is carried out. The results of DB performance for different sample sizes are listed in Table 3.

Table 3

DB performance with different sample sizes when P = 10

DB K-means Mean-shift DBSCAN Wolf pack clustering
N = 2,000 Max 0.7174 0.7069 0.6545 0.6386
Mean 0.6333 0.6425 0.6169 0.6127
Min 0.6039 0.5964 0.5778 0.5744
N = 10,000 Max 0.7014 0.6735 0.6335 0.6023
Mean 0.6374 0.6235 0.6076 0.5834
Min 0.6085 0.5889 0.5634 0.5559
N = 50,000 Max 0.6086 0.5888 0.5652 0.5532
Mean 0.5243 0.5045 0.4828 0.4275
Min 0.4756 0.4638 0.4228 0.3886
N = 100,000 Max 0.6087 0.5868 0.5676 0.5534
Mean 0.5232 0.5021 0.4875 0.4228
Min 0.4714 0.4645 0.4285 0.3836

It can be seen from Table 3 that as the number of samples increases, the DB values of all four clustering algorithms decrease, which shows that their DB performance is improving. Under the same experimental conditions, however, the DB performance of the wolf pack optimization algorithm is better than that of the other three clustering algorithms. Comparison also shows that when the sample size increases from 50,000 to 100,000, the DB performance of this algorithm changes little and the DB value tends to stabilize. When the sample size is fixed at 50,000, the statistical results of DB performance for different attribute counts are listed in Table 4.

Table 4

DB performance with different numbers of data attributes

DB K-means Mean-shift DBSCAN Wolf pack clustering
P = 4 Max 0.6039 0.5861 0.5637 0.5528
Mean 0.5242 0.5042 0.4842 0.4275
Min 0.4675 0.4614 0.4272 0.3874
P = 10 Max 0.6063 0.5815 0.5654 0.5574
Mean 0.5286 0.5016 0.4862 0.4282
Min 0.4777 0.4675 0.4254 0.3883
P = 15 Max 0.7255 0.7085 0.6653 0.6234
Mean 0.6411 0.6372 0.6052 0.5835
Min 0.5947 0.5624 0.5352 0.5136
P = 20 Max 1.0078 1.0074 0.97741 0.9337
Mean 0.9093 0.8945 0.8845 0.8139
Min 0.7945 0.7776 0.7678 0.7038

As can be seen from Table 4, as the number of sample attributes increases, the DB performance of all clustering algorithms declines; especially when the number of attributes exceeds 15, the DB value increases significantly, so the data dimension has a large impact on DB performance. Under the same experimental conditions, the wolf pack optimization algorithm shows the better DB performance.

4.2 Operation time of different clustering algorithms

It follows from Section 4.1 that when the number of training-sample attributes is P = 10, all four clustering algorithms show good clustering performance on the UCI dataset used here. The author simulated the running time of the four clustering algorithms under the conditions P = 10, PBM > 0.2, and DB < 0.5. The statistical results are listed in Table 5 and plotted in Figure 3.

Table 5

Computation time of different clustering algorithms

Sample size K-means Mean-shift DBSCAN Wolf pack clustering
100 5.333 8.208 9.319 7.241
200 8.112 12.321 13.659 12.409
500 12.391 39.423 41.449 37.5839
50,000 32.731 127.220 177.221 129.176
100,000 181.709 623.541 933.322 626.481
Figure 3. The computation time of each clustering algorithm with different sample sizes.

It can be seen from Table 5 and Figure 3 that in terms of clustering operation time, the K-means algorithm has obvious advantages, the DBSCAN algorithm is the most time-consuming, and the operation time of wolf pack clustering and Mean-shift is in the middle. In the actual application process, the number of samples for each training can be dynamically adjusted according to actual needs, so as to improve the applicability of the wolf pack clustering algorithm in specific application scenarios.

5 Conclusion

The author proposes the application of a nonlinear clustering optimization algorithm to web data mining in cloud computing, exploiting the advantages of bionic optimization algorithms in handling complex data and using the cloud computing platform to extract data, so that the more valuable parts of massive cloud data can be obtained. Among the many types of bionic optimization algorithms, the wolf pack algorithm is adopted here for its ability to handle large amounts of data. The PBM and DB evaluation methods are used to test the clustering effect; wolf pack optimization and similarity clustering are performed repeatedly until the predefined clustering metric requirements are met. The experiments show that, in comparison, the wolf pack optimized clustering algorithm achieves a better clustering effect and faster convergence on cloud computing platforms with large amounts of data. In the future, research can continue from the perspective of the clustering objective function and the distribution characteristics of non-uniform data, and related work can be extended to the Spark platform.

Funding information: The author states no funding involved.

Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.

Conflict of interest: The author states no conflict of interest.

References

[1] Sun H, Yao Z, Miao Q. Design of macroeconomic growth prediction algorithm based on data mining. Mob Inf Syst. 2021;2021(7):1–8. doi:10.1155/2021/2472373.

[2] Huang X, Cheng S. Optimization of k-means algorithm based on MapReduce. J Phys. 2021;1881(3):032069. doi:10.1088/1742-6596/1881/3/032069.

[3] Yang G, Pan Q. Application of mining algorithm in personalized Internet marketing strategy in massive data environment. J Intell Syst. 2022;31(1):237–44. doi:10.1515/jisys-2022-0014.

[4] Venkataraman A. Application of DCS for level control in nonlinear system using optimization and robust algorithms. ADCAIJ. 2020;9(1):29–50. doi:10.14201/ADCAIJ2020912950.

[5] Wang Q. Application of clustering algorithm in ideological and political education in colleges and universities. J Phys. 2021;1852(3):032041. doi:10.1088/1742-6596/1852/3/032041.

[6] Zou H. Clustering algorithm and its application in data mining. Wirel Personal Commun. 2020;110(1):21–30. doi:10.1007/s11277-019-06709-z.

[7] Zhang B. Optimization of FP-growth algorithm based on cloud computing and computer big data. Int J Syst Assur Eng Manag. 2021;12(4):853–63. doi:10.1007/s13198-021-01139-2.

[8] Gavrylenko O, Dvornyk V. Application of clustering methods to determine the areas of activity of candidates in recruitment for IT-companies. Syst Technol. 2021;3(134):126–34. doi:10.34185/1562-9945-3-134-2021-14.

[9] Heraguemi K. Whale optimization algorithm for solving association rule mining issue. IJCDS. 2021;10(1):332–42. doi:10.12785/ijcds/100133.

[10] Wang Y, Ding S, Wang L, Du S. A manifold p-spectral clustering with sparrow search algorithm. Soft Comput. 2022;26(4):1765–77. doi:10.1007/s00500-022-06741-5.

[11] Reddy GS, Chittineni S. Entropy based c4.5-SHO algorithm with information gain optimization in data mining. PeerJ Comput Sci. 2021;7(2):e424. doi:10.7717/peerj-cs.424.

[12] Yin Z, Cui W. Outlier data mining model for sports data analysis. J Intell Fuzzy Syst. 2020;40(2):1–10. doi:10.3233/JIFS-189315.

[13] Zubar AH, Balamurugan R. Green computing process and its optimization using machine learning algorithm in healthcare sector. Mob Netw Appl. 2020;25(4):1307–18. doi:10.1007/s11036-020-01549-9.

[14] Lv W, Tang W, Huang H, Chen T. Research and application of intersection clustering algorithm based on PCA feature extraction and k-means. J Phys. 2021;1861(1):012001. doi:10.1088/1742-6596/1861/1/012001.

[15] Balamurugan R, Ratheesh S, Venila YM. Classification of heart disease using adaptive Harris hawk optimization-based clustering algorithm and enhanced deep genetic algorithm. Soft Comput. 2021;26(5):2357–73. doi:10.1007/s00500-021-06536-0.

[16] Geng X, Chen M, Wang K. Application of the nonlinear steepest descent method to the coupled Sasa-Satsuma equation. East Asian J Appl Math. 2020;11(1):181–206. doi:10.4208/eajam.220920.250920.

[17] Nithyanandakumari K. Assessment of ant colony optimization algorithm for DAG task scheduling in cloud computing. Int J Adv Trends Comput Sci Eng. 2020;9(4):5278–86. doi:10.30534/ijatcse/2020/159942020.

[18] Subhash LS, Udayakumar R. Sunflower whale optimization algorithm for resource allocation strategy in cloud computing platform. Wirel Personal Commun. 2021;116(4):3061–80. doi:10.1007/s11277-020-07835-9.

[19] Safi S, Farhang M. Sensitivity of cosmological parameter estimation to nonlinear prescription from galaxy clustering. Astrophys J. 2021;914(1):65. doi:10.3847/1538-4357/abfa18.

[20] Ji K, Wen R, Ren Y, Dhakal YP. Nonlinear seismic site response classification using k-means clustering algorithm: Case study of the September 6, 2018 Mw6.6 Hokkaido Iburi-Tobu earthquake, Japan. Soil Dyn Earthq Eng. 2020;128:105907. doi:10.1016/j.soildyn.2019.105907.

[21] Cimmelli VA, Jou D, Sellitto A. Nonlinear thermoelastic waves in functionally graded materials: Application to Si1–cGec nanowires. J Therm Stresses. 2020;43(5):1–17. doi:10.1080/01495739.2020.1730283.

[22] Azizi T, Kerr G. Application of stability theory in study of local dynamics of nonlinear systems. J Appl Math Phys. 2020;8(6):1180–92. doi:10.4236/jamp.2020.86089.

[23] Bertuzzi A, Conte F, Papa F, Sinisgalli C. Applications of nonlinear programming to the optimization of fractionated protocols in cancer radiotherapy. Information. 2020;11(6):313. doi:10.3390/info11060313.

[24] Xiao Q, Zhong X, Zhong C. Application research of KNN algorithm based on clustering in big data talent demand information classification. Int J Pattern Recognit Artif Intell. 2020;34(6). doi:10.1142/S0218001420500159.

[25] Adhikary S, Basu M. Nonlinear pulse reshaping in a typically designed silicon-on-insulator waveguide and its application to generate a high repetition rate pulse train. J Optics. 2021;23(12):125506. doi:10.1088/2040-8986/ac34e5.

Received: 2022-03-25
Revised: 2022-07-28
Accepted: 2022-08-12
Published Online: 2023-02-02

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.

Downloaded on 31.12.2025 from https://www.degruyterbrill.com/document/doi/10.1515/nleng-2022-0239/html