Article Open Access

A comparative study of different neural networks in predicting gross domestic product

Han Lai
Published/Copyright: May 17, 2022

Abstract

Gross domestic product (GDP) reflects the development of the economy well, and predicting GDP helps grasp future economic trends. In this article, three different neural network models were analyzed: the genetic algorithm – back-propagation neural network (GA-BPNN) model, the particle swarm optimization (PSO) – Elman neural network (Elman NN) model, and the bat algorithm – long short-term memory (BA-LSTM) model. The GDP data of Sichuan province from 1992 to 2020 were collected to compare the performance of the three models in predicting GDP. It was found that the mean absolute percentage error values of the three models were 0.0578, 0.0236, and 0.0654, respectively; the root-mean-square error values were 0.0287, 0.0166, and 0.0465, respectively; and the PSO-Elman NN model had the best performance in GDP prediction. The experimental results demonstrate that neural networks are reliable in predicting GDP and can be used for further applications in practice.

1 Introduction

Gross domestic product (GDP) [1] reflects economic operation comprehensively and is an important basis for the relevant national departments to formulate economic development strategies and plans. With the rapid development of productivity, GDP data have received more and more attention. The prediction of GDP is related to the formulation of various economic and monetary policies and is closely connected with the development of society. Many methods have been applied to data prediction. Suzuki et al. [2] predicted micro-meteorological data based on a support vector regression model and found, through experiments on temperature prediction in Sapporo, that the method reduced the prediction error by 0.1% and the computation time by 98.7%. Rau et al. [3] predicted the development of liver cancer within six years of a type II diabetes diagnosis. They established artificial neural network (ANN) and logistic regression models on 2,060 cases and found that the ANN had better performance: it correctly predicted 75.7% of diabetic patients receiving a future diagnosis of liver cancer and 75.5% of those not being diagnosed with liver cancer. Bo et al. [4] designed a homologous gray prediction model with one variable and one first-order equation (HGEM(1,1)) to predict the total energy consumption of China's manufacturing industry and found experimentally that the method had favorable comprehensive performance. Xu et al. [5] studied the prediction of short-term time series of customer electricity consumption, reduced the dimensionality of the data with principal component analysis, and used a long short-term memory (LSTM) network for prediction. The experimental results showed that the method had high prediction performance and generality. Jiang et al. [6] extracted data related to COVID-19 from two hospitals in Wenzhou, Zhejiang, China, predicted severe cases using an artificial intelligence framework, and found through experiments that the accuracy of the method was 70–80%. Ingle and Deshmukh [7] used term frequency–inverse document frequency features extracted from online news data of companies in the Bombay Stock Exchange along with other stock market features and predicted the next day's stock price with an ensemble deep learning framework, finding that the method had an accuracy of about 85%. Most current research on data prediction concerns the optimization of a particular prediction method; there are fewer studies comparing the performance of different methods on the same data set. Thus, this article compared the performance of several different neural network models for GDP prediction and carried out experiments using GDP data from Sichuan Province to find the model with the best performance. By comparing different neural network models, this article verifies the reliability of the particle swarm optimization (PSO) – Elman neural network (Elman NN) model in predicting GDP and provides a more accurate method for data prediction.

2 Different neural network models

The performance of the model can be significantly improved by optimizing the data model through metaheuristic algorithms and swarm intelligence algorithms. The principle of these algorithms is to solve optimization problems that are difficult to solve directly by defining group behavior and individual behavior so that the group has group evolutionary diversity and behavioral directionality. These algorithms are simple to implement and will not affect the solution of the problem due to individual failures, so they have been extensively used. For example, Noori et al. [8] optimized the ANN by the PSO algorithm and achieved higher precision in predicting the performance of tunnel boring machines. Mikaeil et al. [9] determined the clustering of tunnel engineering risks with the metaheuristic algorithm (genetic algorithm [GA]). Salemi et al. [10] classified the concrete lining of tunnels with a GA. Guido et al. [11] evaluated road safety data by combining PSO and GA with the K-means algorithm. Therefore, to obtain better effects in GDP prediction, this article optimized different prediction models with different algorithms.

2.1 GA-optimized back-propagation neural network (BPNN) model for GDP prediction

BPNN [12] has good performance in the prediction of time series, including short-term and long-term prediction, and has successful applications in the prediction of industrial [13] and medical [14] data. To enhance the performance of BPNN, this article uses a GA to improve it.

Taking a simple three-layer BPNN as an example, let the nodes of the input layer be

(1) X = (x_1, x_2, …, x_n)^T,

the nodes of the output layer be

(2) Y = (y_1, y_2, …, y_m)^T,

and the nodes of the hidden layer be

(3) H = (h_1, h_2, …, h_p)^T,

the input of hidden node j be h_j, its output be h_j′, the input of output node k be u_k, and its output be y_k. Then,

(4) h_j = Σ_i w_{ij} x_i,

(5) h_j′ = f(h_j),

(6) u_k = Σ_j w_{jk} h_j′,

(7) y_k = f(u_k),

where w_{ij} is the weight between the input layer and the hidden layer, w_{jk} is the weight between the hidden layer and the output layer, and f(·) is the activation function, i.e., the sigmoid function

(8) f(x) = 1/(1 + e^(−x)),

in this article.

Then, the error is back-propagated to optimize the weights and thresholds, and the objective function of the model can be written as:

(9) E_n = (1/2) Σ_k (y_{nk} − ŷ_{nk})²,

where y_{nk} and ŷ_{nk} are the expected and actual outputs of the kth neuron in the output layer for the nth sample.
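As an illustration, the forward pass and the objective above can be sketched in Python with NumPy; the 4–9–1 layer sizes and the random weights below are illustrative assumptions, not the trained model from this article:

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def bpnn_forward(x, w_ih, w_ho):
    # Forward pass of a three-layer BPNN:
    # input -> hidden (sigmoid) -> output (sigmoid)
    h = sigmoid(w_ih @ x)   # hidden-layer outputs
    y = sigmoid(w_ho @ h)   # output-layer outputs
    return y

def half_squared_error(y_true, y_pred):
    # Objective E_n = 1/2 * sum_k (y_nk - y_nk_hat)^2
    return 0.5 * np.sum((y_true - y_pred) ** 2)

rng = np.random.default_rng(0)
w_ih = rng.normal(size=(9, 4))  # input -> hidden weights (4-9-1 network)
w_ho = rng.normal(size=(1, 9))  # hidden -> output weights
x = rng.random(4)               # one normalized GDP window (illustrative)
y_pred = bpnn_forward(x, w_ih, w_ho)
err = half_squared_error(np.array([0.5]), y_pred)
```

In practice the thresholds (biases) would be added to each layer before the activation; they are omitted here to keep the sketch close to the equations above.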

A GA is a biological evolution-based method applicable to the optimization of complex systems [15], enabling the search for optimal solutions through the simulation of natural evolutionary processes. It can be well integrated with other algorithms [16]. The specific steps of the GA-BPNN model are as follows.

  1. The population is initialized, and the parameters to be optimized are encoded. For a BPNN network with an N–M–L structure, the length S of chromosomes in GA is as follows:

    (10) S = L × ( N + 1 ) + M × ( L + 1 ) .

  2. The adaptation function used is the error of the BPNN:

    (11) E_n = (1/2) Σ_k (y_{nk} − ŷ_{nk})².

  3. The selection, crossover, and mutation operations are performed. The selection is done by random traversal sampling. The crossover is done by a single-point crossover operator. The mutation is performed using a random method: the gene subject to mutation is selected, e.g., if the selected gene has a code of 1, it is mutated as 0; otherwise, it is mutated as 1.

  4. Steps (2) and (3) are repeated until the error meets the requirements, and the optimized parameters are output.

  5. The obtained parameters are input into BPNN to build the GDP prediction model.
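A minimal sketch of step (1) and the genetic operators of step (3), assuming binary chromosomes; the helper names are hypothetical:

```python
def chromosome_length(N, M, L):
    # Eq. (10): S = L*(N+1) + M*(L+1) for an N-M-L network
    return L * (N + 1) + M * (L + 1)

def single_point_crossover(a, b, point):
    # Single-point crossover operator on two binary chromosomes
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(chrom, idx):
    # Flip the selected gene: code 1 mutates to 0, and 0 to 1
    chrom = chrom[:]
    chrom[idx] = 1 - chrom[idx]
    return chrom

# 4-9-1 network from Section 3.2 -> 23 parameters to optimize
S = chromosome_length(4, 9, 1)
```

A full GA would also need the fitness evaluation (the BPNN error of eq. (11)) and the random-traversal selection step, which are omitted here.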

2.2 Particle swarm algorithm-optimized Elman NN model for GDP prediction

Elman NN is a local regression network [17], which is characterized by an additional undertaking layer compared to BPNN. Let the input layer be U, the output layer be Y, the hidden layer be X, and the undertaking layer be X c . The expressions of each layer can be written as:

(12) Y(k) = g(w_3 X(k) + b_2), X(k) = f(w_1 U(k−1) + w_2 X_c(k) + b_1), X_c(k) = X(k−1),

where w_1 is the weight from the input layer to the hidden layer, w_2 is the weight from the undertaking layer to the hidden layer, w_3 is the weight from the hidden layer to the output layer, b_1 is the threshold value of the hidden layer, b_2 is the threshold value of the output layer, g(·) is the activation function of the output layer, and f(·) is the activation function of the hidden layer.

Similar to BPNN, the weights and thresholds of Elman NN also affect the performance of the network. The Elman NN was optimized using the PSO algorithm in this article. The PSO algorithm is a global stochastic optimization algorithm [18] based on the predation behavior of birds [19]; every bird is considered a particle. Suppose there are m particles in a D-dimensional space, where the position of the ith particle is

(13) x_i = (x_{i1}, x_{i2}, …, x_{iD}),

the velocity is

(14) v_i = (v_{i1}, v_{i2}, …, v_{iD}),

the current optimal position is

(15) p_i = (p_{i1}, p_{i2}, …, p_{iD}),

and the optimal global position is

(16) p_g = (p_{g1}, p_{g2}, …, p_{gD}).

Then, the update formula for the velocity and position of the particle can be written as:

(17) v_{id}(t+1) = v_{id}(t) + c_1 r_1 (p_{id} − x_{id}(t)) + c_2 r_2 (p_{gd} − x_{id}(t)),

(18) x_{id}(t+1) = x_{id}(t) + v_{id}(t+1),

where c 1 and c 2 are non-negative constants and r 1 and r 2 are random numbers in [0, 1].
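The velocity and position updates of equations (17) and (18) can be sketched as follows; `pso_step` is a hypothetical helper, and the fitness evaluation and best-position bookkeeping of a full PSO loop are omitted:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, c1=2.0, c2=2.0, rng=None):
    # One PSO update per eqs. (17)-(18):
    # v <- v + c1*r1*(p_best - x) + c2*r2*(g_best - x); x <- x + v
    rng = rng if rng is not None else np.random.default_rng()
    r1 = rng.random(x.shape)  # random numbers in [0, 1]
    r2 = rng.random(x.shape)
    v_new = v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new
```

Note that a particle sitting at both its personal best and the global best keeps its current velocity, since both attraction terms vanish.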

The specific steps of the PSO-Elman NN model are as follows.

  1. An Elman NN model is developed, and its network structure is determined.

  2. The parameters of PSO are initialized.

  3. The initial fitness value of the particle is calculated, and then the particle is updated according to the formula above.

  4. The updated particle fitness value is calculated. The current optimal position and the global optimal position are updated until the maximum number of iterations is reached.

  5. The obtained parameters are input into Elman NN to build the GDP prediction model.

2.3 Bat algorithm (BA) – LSTM neural network model for GDP prediction

LSTM is an improvement of recurrent neural network (RNN) [20], which has very wide applications in language models [20], artificial intelligence, etc. [21], and also performs well in the prediction of various data [22]. It has higher prediction accuracy when it is combined with other algorithms. In this article, the BA is combined with LSTM for GDP prediction.

The difference between LSTM and RNN is that the LSTM includes a memory unit to keep historical information and three gates in the hidden layer: the input gate i_t, the forgetting gate f_t, and the output gate o_t. The update formulas of the LSTM at time step t are written as:

(19) i_t = σ(w_i × [h_{t−1}, x_t] + b_i),
f_t = σ(w_f × [h_{t−1}, x_t] + b_f),
o_t = σ(w_o × [h_{t−1}, x_t] + b_o),
c̃_t = tanh(w_c × [h_{t−1}, x_t] + b_c),
c_t = f_t c_{t−1} + i_t c̃_t,
h_t = o_t tanh(c_t),

where h_{t−1} refers to the last output, x_t refers to the current input, σ refers to the activation function, w_i and b_i are the weight and bias of the input gate, w_f and b_f are the weight and bias of the forgetting gate, w_o and b_o are the weight and bias of the output gate, w_c and b_c are the weight and bias of the candidate state c̃_t, c_{t−1} is the status of the previous memory unit, and c_t is the status of the current memory unit.
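A single step of equation (19) can be sketched with NumPy; the gate parameters `W` and `b` below are random illustrative values, not trained weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    # One LSTM step per eq. (19); W and b hold the four gate
    # parameter sets keyed "i", "f", "o", "c".
    z = np.concatenate([h_prev, x_t])       # [h_{t-1}, x_t]
    i_t = sigmoid(W["i"] @ z + b["i"])      # input gate
    f_t = sigmoid(W["f"] @ z + b["f"])      # forgetting gate
    o_t = sigmoid(W["o"] @ z + b["o"])      # output gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate state
    c_t = f_t * c_prev + i_t * c_tilde      # memory-unit update
    h_t = o_t * np.tanh(c_t)                # current output
    return h_t, c_t
```

Because the output gate lies in (0, 1) and tanh lies in (−1, 1), every component of h_t is strictly bounded by 1 in absolute value.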

The loss function used in this article is the mean square error. The Adam function is used as the optimizer, the network is trained using the small-batch gradient descent algorithm, and the network parameters are optimized using the BA.

The BA is a heuristic algorithm [23], which finds the optimal global solution by simulating the bat localization process [24]. Let the number of bats in a D-dimensional space be N, the upper and lower limits of the pulse frequency be Q_max and Q_min, the position of the ith bat be

(20) X_i = (x_{i1}, x_{i2}, …, x_{iD}),

and the velocity be

(21) V_i = (v_{i1}, v_{i2}, …, v_{iD}),

then the update equations of its frequency, velocity, and position are as follows:

(22) Q_i = Q_min + β × (Q_max − Q_min),

(23) v_i(t) = v_i(t−1) + Q_i × (x_i − g_best),

(24) x_i(t) = x_i(t−1) + v_i(t),

where β ∈ [0,1] and g_best is the optimal global position. A random number is generated and compared with the pulse emission rate r_i; if the random number is larger, a random perturbation is performed:

(25) X i ( t ) = g best + A t × δ ,

and if the random number is smaller, the cross-border processing is performed:

(26) X i ( t ) = judgebound ( X i ( t ) ) .

A_t is the average loudness of the bats, and δ ∈ [−1,1]. The judgebound() function implements the cross-border processing. When the bat finds the target, the update formulas for the loudness and the pulse emission rate are as follows:

(27) A i ( t + 1 ) = α A i ( t ) ,

(28) r_i(t+1) = r_i(0)[1 − exp(−εt)],

where α and ε are constants. If the new fitness value is smaller than the global optimal fitness value, then

(29) x i ( t ) = g best .

The BA-optimized parameters are input into the LSTM model. The specific steps of the BA-LSTM model are as follows.

  1. The LSTM model is established, and the network structure is set up.

  2. The parameters of the BA are initialized, and all bats are iterated.

  3. The frequency, speed, and position of the bat are updated according to the formula.

  4. The fitness value is calculated. The frequency and loudness are updated until the termination condition is satisfied. The optimal parameters are output.

  5. The obtained parameters are input into the LSTM to build the GDP prediction model.
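The frequency, velocity, position, loudness, and pulse-rate updates used in steps (2)–(4) (equations (22)–(24), (27), and (28)) can be sketched as follows; the helper names and parameter values are illustrative assumptions:

```python
import numpy as np

def bat_step(x, v, g_best, q_min=0.0, q_max=2.0, rng=None):
    # Frequency, velocity, and position updates per eqs. (22)-(24)
    rng = rng if rng is not None else np.random.default_rng()
    beta = rng.random()                 # beta in [0, 1]
    q = q_min + beta * (q_max - q_min)  # eq. (22)
    v_new = v + q * (x - g_best)        # eq. (23)
    return x + v_new, v_new             # eq. (24)

def update_loudness_rate(A, r0, t, alpha=0.9, eps=0.9):
    # Loudness decay (eq. 27) and pulse-rate growth (eq. 28)
    return alpha * A, r0 * (1.0 - np.exp(-eps * t))
```

As with the PSO sketch, a bat already sitting at g_best with zero velocity stays put, since the frequency term multiplies the zero displacement.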

3 Experimental analysis

3.1 Experimental data

This article focuses on the GDP forecast of Sichuan Province. In 2020, the GDP of Sichuan Province reached 4,859.88 billion yuan, an increase of 3.8% over the previous year; economic growth was fast, the industrial system was complete, and industries such as nuclear power equipment and heavy-duty gas turbines ranked among the top in the country. The GDP data of Sichuan province between 1992 and 2020 provided by the National Bureau of Statistics are shown in Table 1. The GDP of the first four years was used as input and the GDP of the fifth year as output, and so on. The model was validated using the data between 2010 and 2020, as shown in Table 2.

Table 1

GDP of Sichuan Province between 1992 and 2020 (unit: 100 million yuan) [25]

Time GDP
1992 1177.3
1993 1496.1
1994 2001.4
1995 2443.2
1996 2871.7
1997 3241.5
1998 3474.1
1999 3649.1
2000 3928.2
2001 4293.5
2002 4725.0
2003 5346.2
2004 6304.0
2005 7195.9
2006 8494.7
2007 10562.1
2008 12756.2
2009 14190.6
2010 17224.8
2011 21050.9
2012 23922.4
2013 26518.0
2014 28891.3
2015 30342.0
2016 33138.5
2017 37905.1
2018 42902.1
2019 46363.8
2020 48598.8
Table 2

Validation sample

Sample number Model input Model output
1 2007–2010 GDP 2011 GDP
2 2008–2011 GDP 2012 GDP
3 2009–2012 GDP 2013 GDP
4 2010–2013 GDP 2014 GDP
… … …
10 2016–2019 GDP 2020 GDP
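The windowing scheme of Tables 1 and 2 (the GDP of four consecutive years as input, the fifth year as output) can be sketched as:

```python
def make_windows(series, width=4):
    # Build (input, output) samples: `width` consecutive values
    # as input, the immediately following value as output.
    samples = []
    for i in range(len(series) - width):
        samples.append((series[i:i + width], series[i + width]))
    return samples

# First rows of Table 1 (1992-1996, 100 million yuan)
gdp = [1177.3, 1496.1, 2001.4, 2443.2, 2871.7]
pairs = make_windows(gdp)
# pairs[0] -> ([1177.3, 1496.1, 2001.4, 2443.2], 2871.7)
```

Applied to the full 1992–2020 series, this yields 25 samples, of which the last 10 (outputs 2011–2020) form the validation set of Table 2.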

In the evaluation of a prediction model, the prediction effect is generally expressed by the prediction error, i.e., the distance between the actual value and the predicted value. Commonly used indicators include the mean absolute error, the median absolute error, etc. The performance of the different models was analyzed using the following two indicators:

Mean absolute percentage error (MAPE): it normalizes the error of every point, reflecting prediction precision, and its calculation formula is as follows:

(30) MAPE = (1/N) × Σ_{i=1}^{N} |(y_i − ŷ_i)/y_i| × 100%.

Root-mean-square error (RMSE): it reflects the degree of deviation between the predicted value and the actual value, and its calculation formula is as follows:

(31) RMSE = √((1/N) × Σ_{i=1}^{N} (y_i − ŷ_i)²),

where y_i and ŷ_i are the actual and predicted values of GDP, respectively.
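The two indicators can be implemented directly; a minimal sketch:

```python
import numpy as np

def mape(y_true, y_pred):
    # Eq. (30): mean absolute percentage error, in percent
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def rmse(y_true, y_pred):
    # Eq. (31): root-mean-square error
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```

For example, with actual values [100, 200] and predictions [110, 190], the MAPE is 7.5% and the RMSE is 10. (The MAPE and RMSE values reported in Section 3.3 appear to be computed on normalized GDP data, since they are dimensionless small numbers.)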

3.2 Experimental setup

As GDP is a typical time series, a general three-layer structure was sufficient for the neural network models; therefore, the GDP of the first four years was used as the input, i.e., the number of input nodes was 4, and the GDP of the fifth year was used as the output, i.e., the number of output nodes was 1. The other settings are as follows.

  1. GA-BPNN model: according to the experimental data, the node N of the input layer in the BPNN was 4, the node L of the output layer was 1, and the node M of the hidden layer was determined as 9 using the empirical formula M = 2N + 1. A network with a 4–9–1 structure was obtained. The number of parameters that the GA needed to optimize was determined as 23 according to S = L × (N + 1) + M × (L + 1). The population size was 60, the maximum number of iterations was 70, the mutation probability was 0.1, the crossover probability was 0.7, the training number of the model was 100, and the learning rate was 0.1.

  2. PSO-Elman NN model: referring to the BPNN model, the structure of the Elman NN was also set as 4–9–1. The population size of the PSO was 20, r_1 = 1, r_2 = 0.5, c_1 = c_2 = 2, and the maximum number of iterations was 1,000.

  3. BA-LSTM model: the node of the input layer was 4, the node of the output layer was 1, the node of the hidden layer was 10, the time step was 1, the maximum number of iterations was 1,000, the population size of BA was 30, and the frequency and loudness were 0.5.

3.3 Predicted results

The GDP prediction results of different models are shown in Figure 1.

Figure 1

GDP prediction results of different models.

It is seen from Figure 1 that there were some differences among the prediction results of the different models. The results of the GA-BPNN and BA-LSTM models deviated considerably from the actual values and showed large volatility, indicating unstable prediction performance, while the prediction results of the PSO-Elman NN model agreed well with the actual values. To further examine model performance, the MAPE and RMSE values were calculated and compared, and the results are shown in Figures 2 and 3.

Figure 2

Comparison of MAPE values between different models.

Figure 3

Comparison of RMSE values between different models.

It is seen from Figure 2 that the MAPEs of the three models were 0.0578, 0.0236, and 0.0654, i.e., the MAPE of the PSO-Elman NN model was the smallest, followed by the GA-BPNN and BA-LSTM models. The MAPEs of the GA-BPNN and BA-LSTM models were above 0.05. The MAPE of the PSO-Elman NN model was 0.0236, which was 0.0342 smaller than that of the GA-BPNN model and 0.0418 smaller than that of the BA-LSTM model. These results indicated that the PSO-Elman NN model had a smaller prediction error, showing better performance in predicting GDP.

It is seen from Figure 3 that the RMSE values of all the models were above 0.01; the RMSE value of the PSO-Elman NN model was the smallest, followed by the GA-BPNN and BA-LSTM models. The RMSE of the BA-LSTM model was 0.0465, the largest, and the RMSE of the PSO-Elman NN model was 0.0166, which was 0.0121 smaller than that of the GA-BPNN model and 0.0299 smaller than that of the BA-LSTM model. The smaller the RMSE value, the smaller the difference between the prediction result and the actual value, i.e., the better the fit. The results indicated that the PSO-Elman NN model had better performance in GDP prediction and its prediction results were closer to the actual values; thus, it could achieve better applications in practice.

4 Discussion

A neural network is a way to process data by simulating the information processing mechanism of the human brain, which can realize large-scale parallel processing. Neural networks have strong adaptive and self-organizing abilities. The neural network model consists of basic neurons, each of which is independent but interconnected, so it has good performance in processing complex information and has been very widely used in industrial control [26], pattern recognition [27], and prediction estimation [28]. This article mainly compared three neural network models, BPNN, Elman NN, and LSTM models, and optimized every model to achieve a better prediction of GDP.

It is seen from the data in Table 1 that the GDP of Sichuan Province showed a trend of steady growth. To be specific, in the primary industry, the farming system of Sichuan Province yields three harvests a year, so the yields of food crops and cash crops are high and steadily improving; in the secondary industry, Sichuan Province has a full range of industrial sectors, strategic emerging industries such as information technology and new energy are developing rapidly, and the construction industry is also growing fast; in the tertiary industry, Sichuan Province has a wide range of financial institutions and a high degree of openness, and domestic trade and the foreign economy are also developing rapidly. It is seen from Figure 1 that the prediction results of the GA-BPNN and BA-LSTM models deviated considerably from the actual values and fluctuated significantly, indicating that the stability of these models was relatively limited, while the prediction results of the PSO-Elman NN model almost coincided with the actual values. From the comparison of MAPE and RMSE values (Figures 2 and 3), the MAPE value of the PSO-Elman NN model was 59.17% smaller than that of the GA-BPNN model and 63.91% smaller than that of the BA-LSTM model; the RMSE value of the PSO-Elman NN model was 42.16% smaller than that of the GA-BPNN model and 64.30% smaller than that of the BA-LSTM model.

The experimental results verify the accuracy of the PSO-Elman NN model in GDP prediction, and the model can be applied in practical cases. For example, it can predict the future GDP trend of provinces and cities to help governments to make timely and accurate responses to economic changes and provide scientific guidance for the next development decision. Changes in GDP are affected by indicators, such as residents’ consumption, cash circulation, gold reserves, foreign investment, and taxation. According to the forecast results of GDP, in the future development, Sichuan Province can further implement policies favorable to industrial development, promote residents’ consumption, and flexibly apply the forecast model to grasp the GDP trend of Sichuan Province in time and make adjustments to relevant policies.

5 Conclusion

This article compared several different models, GA-BPNN, PSO-Elman NN, and BA-LSTM models, for the prediction problem of GDP in Sichuan Province, and trained and tested the models using the GDP data of Sichuan Province from 1992 to 2020. It was found that the PSO-Elman NN model had an MAPE value of 0.0236 and an RMSE value of 0.0166, showing the best prediction performance. The PSO-Elman NN model can be further studied and applied to realize the accurate prediction of GDP and stable and healthy development of the economy. However, this study also has some limitations, such as the limited comparison of experimental data and insufficient comparison and optimization of neural network models. In future research, experiments will be conducted on larger-scale data, more neural network models will be investigated, and more in-depth research on the improvement of PSO will also be conducted to further improve the performance of the models.

Conflict of interest: The author states no conflict of interest.

References

[1] Lakštutienė A. Correlation of the indicators of the financial system and gross domestic product in European Union countries. Eng Econ. 2015;32:7–18.

[2] Suzuki Y, Kaneda Y, Mineno H. Analysis of support vector regression model for micrometeorological data prediction. Comput Sci Inf Technol. 2015;3:37–48. doi: 10.13189/csit.2015.030202.

[3] Rau H, Fuad A, Hsu CY, Wei LM, Lin YA, Hsu MH, et al. Development of a web-based liver cancer prediction model for type II diabetes patients by using an artificial neural network. Comput Methods Programs Biomed. 2016;125:58–65. doi: 10.1016/j.cmpb.2015.11.009.

[4] Bo Z, Zhou M, Zhang J. Forecasting the energy consumption of China's manufacturing using a homologous grey prediction model. Sustainability. 2017;9:1975. doi: 10.3390/su9111975.

[5] Xu K, Hou R, Ding X, Tao Y, Xu Z. Short-term time series data prediction of power consumption based on deep neural network. IOP Conf Ser Mater Sci Eng. 2019;646:012027. doi: 10.1088/1757-899X/646/1/012027.

[6] Jiang X, Coffee M, Bari A, Wang J, Jiang X, Huang J, et al. Towards an artificial intelligence framework for data-driven prediction of coronavirus clinical severity. Comput Mater Con. 2020;62:537–51. doi: 10.32604/cmc.2020.010691.

[7] Ingle V, Deshmukh S. Ensemble deep learning framework for stock market data prediction (EDLF-DP). Glob Transit Proc. 2021;2:47–66. doi: 10.1016/j.gltp.2021.01.008.

[8] Noori AM, Mikaeil R, Mokhtarian M, Haghshenas SS, Foroughi M. Feasibility of intelligent models for prediction of utilization factor of TBM. Geotech Geol Eng. 2020;38:3125–43. doi: 10.1007/s10706-020-01213-9.

[9] Mikaeil R, Shaffiee Haghshenas S, Sedaghati Z. Geotechnical risk evaluation of tunneling projects using optimization techniques (case study: the second part of Emamzade Hashem tunnel). Nat Hazards. 2019;97:1099–113. doi: 10.1007/s11069-019-03688-z.

[10] Salemi A, Mikaeil R, Haghshenas SS. Integration of finite difference method and genetic algorithm to seismic analysis of circular shallow tunnels (case study: Tabriz urban railway tunnels). KSCE J Civ Eng. 2018;22:1978–90. doi: 10.1007/s12205-017-2039-y.

[11] Guido G, Haghshenas SS, Haghshenas SS, Vitale A, Astarita V, Haghshenas AS. Feasibility of stochastic models for evaluation of potential factors for safety: a case study in Southern Italy. Sustainability. 2020;12:7541. doi: 10.3390/su12187541.

[12] Arthur CK, Temeng VA, Ziggah YY. Performance evaluation of training algorithms in backpropagation neural network approach to blast-induced ground vibration prediction. Ghana Mining J. 2020;20:20–33. doi: 10.4314/gm.v20i1.3.

[13] Li M, Wu H, Wang Y, Handroos H, Carbone G. Modified Levenberg–Marquardt algorithm for backpropagation neural network training in dynamic model identification of mechanical systems. J Dyn Syst Meas Control. 2017;139:031012. doi: 10.1115/1.4035010.

[14] Puspita JW, Jaya AI, Gunadharma S. Classification of epileptiform and wicket spike of EEG pattern using backpropagation neural network. AIP Conf Proc. 2017;1825:020018. doi: 10.1063/1.4978987.

[15] Stevanovic A, Martin P, Stevanovic J. VisSim-based genetic algorithm optimization of signal timings. Transp Res Rec. 2015;2035:59–68. doi: 10.3141/2035-07.

[16] Wu J, Long J, Liu M. Evolving RBF neural networks for rainfall prediction using hybrid particle swarm optimization and genetic algorithm. Neurocomputing. 2015;148:136–42. doi: 10.1016/j.neucom.2012.10.043.

[17] Liu H, Tian HQ, Liang XF, Li YF. Wind speed forecasting approach using secondary decomposition algorithm and Elman neural networks. Appl Energ. 2015;157:183–94. doi: 10.1016/j.apenergy.2015.08.014.

[18] Wang D, Tan D, Lei L. Particle swarm optimization algorithm: an overview. Soft Comput. 2018;22:387–408. doi: 10.1007/s00500-016-2474-6.

[19] Khan SU, Yang S, Wang L, Liu L. A modified particle swarm optimization algorithm for global optimizations of inverse problems. IEEE Trans Magn. 2016;52:1–4. doi: 10.1109/TMAG.2015.2487678.

[20] Sundermeyer M, Ney H, Schluter R. From feedforward to recurrent LSTM neural networks for language modeling. IEEE/ACM Trans Audio Speech. 2015;23:517–29. doi: 10.1109/TASLP.2015.2400218.

[21] Fan H, Zhu L, Yang Y. Cubic LSTMs for video prediction. Proc AAAI Conf Artif Intell. 2019;33:8263–70. doi: 10.1609/aaai.v33i01.33018263.

[22] Lam PD, Ahn H, Kim K, Kim K. Process-aware enterprise social network prediction and experiment using LSTM neural network models. IEEE Access. 2021.

[23] Raghavan S, Sarwesh P, Marimuthu C, Chandrasekaran K. Bat algorithm for scheduling workflow applications in cloud. 2015 International Conference on Electronic Design, Computer Networks and Automated Verification, 29–30 Jan 2015. Shillong, India: IEEE; 2015. p. 139–44. doi: 10.1109/EDCAV.2015.7060555.

[24] Ashok KN, Rao GS, Lavanya B. Novel BAT algorithm for position estimation of a GPS receiver located in coastal region of Southern India. Proc Comput Sci. 2018;143:860–7. doi: 10.1016/j.procs.2018.10.369.

[25] National Bureau of Statistics of China. https://data.stats.gov.cn/easyquery.htm?cn=E0103&zb=A0201&reg=510000&sj=2020.

[26] El-Sousy F, Abuhasel KA. Adaptive nonlinear disturbance observer using a double-loop self-organizing recurrent wavelet neural network for a two-axis motion control system. IEEE Trans Ind Appl. 2018;54:764–86. doi: 10.1109/IAS.2016.7731869.

[27] Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115–8. doi: 10.1038/nature21056.

[28] Alanis AY. Electricity prices forecasting using artificial neural networks. IEEE Lat Am Trans. 2018;16:105–11. doi: 10.1109/TLA.2018.8291461.

Received: 2021-11-24
Revised: 2022-02-27
Accepted: 2022-03-01
Published Online: 2022-05-17

© 2022 Han Lai, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
