
Research on an English translation method based on an improved transformer model

Hongxia Li and Xin Tuo
Published/Copyright: April 29, 2022

Abstract

With the expansion of people’s needs, the translation performance of traditional models is increasingly unable to meet current demands. This article mainly studied the Transformer model. First, the structure and principle of the Transformer model were briefly introduced. Then, the model was improved with a generative adversarial network (GAN) to enhance its translation performance. Finally, experiments were carried out on the Linguistic Data Consortium (LDC) dataset. It was found that, compared with the Transformer model, the average Bilingual Evaluation Understudy (BLEU) value of the improved Transformer model increased by 0.49 and the average perplexity value decreased by 10.06, while the computation speed was not greatly affected. The translation results of two example sentences showed that the translations of the improved Transformer model were closer to human translations. The experimental results verify that the improved Transformer model can improve translation quality and can be further promoted and applied in practice to meet real-life application needs.

1 Introduction

With the progress and development of society, the need for cross-language communication has increased [1]; therefore, translation between different languages has become particularly important. Traditionally, translation has been done by humans [2], with high accuracy; however, with the development of globalization, the speed and cost of human translation cannot meet current demand, so machine translation (MT) has emerged [3]. MT is a technology that enables the interconversion of different languages through computers; it is fast and low-cost and has been widely used in many large-scale translation scenarios. To better serve society, improving the quality of MT has become a very important issue. Lee et al. [4] introduced a neural machine translation (NMT) model that mapped the source character sequence to the target character sequence without any segmentation and used a character-level convolutional network with max pooling in the encoder. Experiments on a many-to-one translation task showed that the model achieved high translation quality. Wu et al. [5] improved the NMT model using source and target dependency trees. The new encoder enriched each source state with dependency relationships in the tree, and during decoding, the tree structure was used as context to facilitate word generation. Experiments showed that the model was effective in improving translation quality. Choi et al. [6] contextualized the word embedding vectors using a nonlinear bag-of-words representation of the source sentence and represented special tokens with typed symbols to facilitate the translation of words that are less suitable for translation through continuous vectors. Experimental results on En-Fr and En-De demonstrated the effectiveness of the model in improving translation quality. Hewavitharana and Vogel [7] proposed a phrase alignment method that aligns parallel sections while bypassing nonparallel sections of a sentence and verified its effectiveness in translation systems for Arabic-English and Urdu-English, where it yielded improvements of up to 1.2 Bilingual Evaluation Understudy (BLEU) points over the baseline. Current NMT still has problems, such as over-translation and under-translation, which degrade translation quality, so further improvement and research are needed. Therefore, this work studied the currently mainstream Transformer model and innovatively improved it by combining it with a generative adversarial network (GAN). In the experiments, the reliability of the improved model in Chinese-English translation was verified by comparing the traditional Transformer model with the improved Transformer model. It was found that the translation results of the improved Transformer model were closer to the semantics of the source language and differed less from the reference translations. The improved Transformer model is conducive to further improving translation quality and to the better application of the Transformer model in Chinese-English translation.

2 Transformer model

With the development of artificial intelligence technology, MT has gradually developed from the earliest rule-based MT [8] to the early statistical MT [9] and, more commonly now, NMT [10], which is mainly based on an encoder-decoder framework. NMT uses an encoder to map the source-language sentence to a computable semantic vector and a decoder to decode the semantic vector and generate the target-language sentence. Improving the translation effect of NMT is a key and difficult issue in current research. This work mainly studied the Transformer model.

Compared with the traditional NMT model, the Transformer model [11] completely abandons the recurrent neural network (RNN) structure and uses only the attention mechanism to implement MT [12], which helps reduce computation and improve the translation effect. In the Transformer model, an input $(x_1, x_2, \ldots, x_n)$ is mapped to $z = (z_1, z_2, \ldots, z_n)$ through an encoder, and an output sequence $(y_1, y_2, \ldots, y_n)$ is generated through a decoder. The overall structure of the model is shown in Figure 1.

Figure 1: The structure of the Transformer model.
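As a concrete illustration of this encoder-decoder mapping, the following is a minimal sketch using PyTorch's built-in nn.Transformer with the hyperparameters later listed in Table 1; the token IDs and sequence lengths are random placeholders, not the paper's actual data pipeline.

```python
# Minimal sketch of the encoder-decoder mapping described above, using
# PyTorch's built-in Transformer. All sizes follow Table 1; inputs are toys.
import torch
import torch.nn as nn

d_model, vocab_size = 512, 32000           # word vector dimension, vocabulary size
model = nn.Transformer(d_model=d_model, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6,
                       dim_feedforward=2048, dropout=0.1,
                       batch_first=True)
embed = nn.Embedding(vocab_size, d_model)  # token ids -> vectors
project = nn.Linear(d_model, vocab_size)   # decoder states -> output logits

src = torch.randint(0, vocab_size, (1, 10))  # source token ids (x_1 ... x_n)
tgt = torch.randint(0, vocab_size, (1, 8))   # shifted target ids (y_1 ... y_m)
out = model(embed(src), embed(tgt))          # encode to z, then decode with z
logits = project(out)                        # distribution over the next token
print(logits.shape)                          # torch.Size([1, 8, 32000])
```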

“Attention” in the Transformer model refers to scaled dot-product attention. Let the dimension of the input queries and keys be $d_k$ and the dimension of the values be $d_v$. The queries, keys, and values are packed into matrices $Q$, $K$, and $V$. The output matrix can be written as:

(1) $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\dfrac{QK^{\mathrm{T}}}{\sqrt{d_k}}\right)V,$

where $Q \in \mathbb{R}^{m \times d_k}$, $K \in \mathbb{R}^{m \times d_k}$, and $V \in \mathbb{R}^{m \times d_v}$. The dimension of the output matrix is $\mathbb{R}^{m \times d_v}$.

Multi-head attention is used in the Transformer model. First, a linear mapping is performed on $Q$, $K$, and $V$: the matrices, whose input dimension is $d_{\text{model}}$, are mapped to $Q \in \mathbb{R}^{m \times d_k}$, $K \in \mathbb{R}^{m \times d_k}$, and $V \in \mathbb{R}^{m \times d_v}$. Then, the result is calculated through scaled dot-product attention. These steps are repeated $h$ times, and the attention heads obtained from the $h$ operations are concatenated to obtain multi-head attention. The detailed calculation formula is:

(2) $\mathrm{MultiHeadAttention}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_h),$

(3) $\mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V),$

where $W_i^Q$, $W_i^K$, and $W_i^V$ are parameter matrices.
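The following is a from-scratch sketch of equations (1)-(3) for a single sequence. The shapes and head count follow Table 1; the output projection after concatenation is omitted, and the random matrices are illustrative stand-ins for trained parameters.

```python
# A from-scratch sketch of scaled dot-product and multi-head attention.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    # Equation (1): softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ V

def multi_head_attention(Q, K, V, W_q, W_k, W_v, h):
    # Equations (2)-(3): project per head, attend, then concatenate.
    heads = [scaled_dot_product_attention(Q @ W_q[i], K @ W_k[i], V @ W_v[i])
             for i in range(h)]
    return torch.cat(heads, dim=-1)

m, d_model, h = 5, 512, 8
d_k = d_model // h
x = torch.randn(m, d_model)                          # one sequence of m tokens
W_q = [torch.randn(d_model, d_k) for _ in range(h)]  # parameter matrices W_i^Q
W_k = [torch.randn(d_model, d_k) for _ in range(h)]  # W_i^K
W_v = [torch.randn(d_model, d_k) for _ in range(h)]  # W_i^V
out = multi_head_attention(x, x, x, W_q, W_k, W_v, h)  # self-attention
print(out.shape)  # torch.Size([5, 512])
```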

Since the encoder and decoder of the model cannot capture sequence order information by themselves, position encoding is used in the Transformer model, which can be written as:

(4) $\mathrm{PE}(pos, 2i) = \sin\!\left(pos / 10{,}000^{2i/d_{\text{model}}}\right),$

(5) $\mathrm{PE}(pos, 2i+1) = \cos\!\left(pos / 10{,}000^{2i/d_{\text{model}}}\right),$

where pos refers to the position of a word in a sentence and i is the dimensional subscript of a word.
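A compact sketch of equations (4) and (5) follows; it produces the position-encoding matrix that is added to the word embeddings, with sentence length and dimension taken from Table 1.

```python
# Sinusoidal position encoding, equations (4) and (5).
import numpy as np

def positional_encoding(max_len, d_model):
    pe = np.zeros((max_len, d_model))
    pos = np.arange(max_len)[:, None]       # position of the word in the sentence
    i = np.arange(0, d_model, 2)[None, :]   # dimensional subscript (even indices)
    pe[:, 0::2] = np.sin(pos / 10000 ** (i / d_model))   # equation (4)
    pe[:, 1::2] = np.cos(pos / 10000 ** (i / d_model))   # equation (5)
    return pe

print(positional_encoding(50, 512).shape)  # (50, 512)
```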

3 The improved Transformer model

In order to further improve the performance of the Transformer model, this article improved it by combining it with a GAN, which is based on the Nash equilibrium of game theory [13] and has extensive applications in image processing [14] and natural language processing [15]. A GAN obtains the desired data through an adversarial game between a generator (G) and a discriminator (D) [16], back-propagating the feedback of D to update the parameters of G and thus enabling the model to learn what kind of utterance is considered a good translation.

The improved Transformer model uses the Transformer model as the generator (G) and the convolutional neural network (CNN) [17] as the discriminator (D). The goal of G is to generate a sequence from the initial state that maximizes the final desired reward, written as:

(6) $J(\theta) = \sum_{Y_{1:T}} G_\theta(Y_{1:T} \mid X)\, R_{D,Q}^{G_\theta}(Y_{1:T-1}, X, y_T, Y^*),$

where $\theta$ refers to the parameters of the generator G, $Y_{1:T}$ refers to the generated target sentence, $X$ refers to the source sentence, $Y^*$ refers to the given standard target sentence, and $R_{D,Q}^{G_\theta}$ refers to the action-value function from source sentence $X$ to the target sequence.

The sentence-level BLEU value is used as the static reward for the generator: the n-gram precision of the generated sentence $y_g$ is computed against the target standard sentence $y_d$ to obtain the reward $Q(y_g, y_d)$. The calculation formula of $R_{D,Q}^{G_\theta}$ is:

(7) $R_{D,Q}^{G_\theta}(Y_{1:T-1}, X, y_T, Y^*) = \gamma \left( D(X, Y_{1:T}) - b(X, Y_{1:T}) \right) + (1 - \gamma)\, Q(Y_{1:T}, Y^*),$

where $b(X, Y)$ stands for the base value, which is set to 0.5 for simplicity of calculation, and $\gamma$ stands for a hyperparameter. To obtain a stable reward, an $N$-time Monte Carlo search is used to estimate the action value, and the formula is:

(8) $\{ Y_{1:T_1}^1, \ldots, Y_{1:T_N}^N \} = \mathrm{MC}^{G_\theta}((Y_{1:t}, X), N),$

where $T_i$ refers to the length of the sequence in the $i$th Monte Carlo search, $(Y_{1:t}, X)$ refers to the present state, $Y_{1:T_N}^N$ refers to a sentence generated according to the policy $G_\theta$, and $\mathrm{MC}$ refers to the Monte Carlo search function.
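To make the reward computation concrete, the following is a schematic sketch of equations (7) and (8). The rollout, discriminator, and BLEU functions are toy stand-ins assumed for illustration; in the model itself they would be the Transformer generator, the CNN discriminator, and the sentence-level BLEU reward.

```python
# Schematic Monte Carlo action-value estimate, equations (7) and (8).
import random

def rollout(prefix, X):          # toy stand-in for MC^{G_theta}: complete the prefix
    return prefix + [random.randint(0, 9) for _ in range(3)]

def D(X, Y):                     # toy stand-in discriminator score in [0, 1]
    return random.random()

def Q(Y, Y_star):                # toy stand-in for the static BLEU reward
    return len(set(Y) & set(Y_star)) / max(len(Y_star), 1)

def action_value(prefix, X, Y_star, gamma=0.7, b=0.5, N=20):
    samples = [rollout(prefix, X) for _ in range(N)]        # equation (8)
    rewards = [gamma * (D(X, Y) - b) + (1 - gamma) * Q(Y, Y_star)
               for Y in samples]                            # equation (7)
    return sum(rewards) / N                                 # average for stability

print(action_value(prefix=[1, 2], X=[3, 4, 5], Y_star=[1, 2, 6, 7]))
```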

The generator is further optimized by updating the discriminator, which serves as the reward function; the discriminator is trained by minimizing:

(9) $\min\left( -\mathbb{E}_{(X,Y) \sim P_{\text{data}}}[\log(D(X, Y))] - \mathbb{E}_{(X,Y) \sim G}[\log(1 - D(X, Y))] \right),$

where $\mathbb{E}$ is the expectation operator. The derivative of the objective function $J(\theta)$ with respect to the generator parameters $\theta$ is:

(10) $\nabla_\theta J(\theta) = \dfrac{1}{T} \sum_{t=1}^{T} \sum_{y_t} R_{D,Q}^{G_\theta}(Y_{1:t-1}, X, y_t, Y^*)\, \nabla_\theta \left( G_\theta(y_t \mid Y_{1:t-1}, X) \right).$

The generator parameter is updated as follows:

(11) $\theta \leftarrow \theta + \alpha_h \nabla_\theta J(\theta),$

where $\alpha_h$ refers to the learning rate at the $h$th step.
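The discriminator update in equation (9) and the policy-gradient update in equations (10) and (11) can be combined into one adversarial training step. The following is a minimal sketch under stated assumptions: `generator.sample` and `generator.log_prob` are hypothetical interfaces, and `discriminator(X, Y)` is assumed to return the probability that a (source, target) pair is a human translation. It illustrates the update rules; it is not the authors' actual training code.

```python
# One adversarial update step; interfaces are hypothetical stand-ins.
import torch

def train_step(generator, discriminator, g_opt, d_opt, X, Y_star, action_value):
    Y_g = generator.sample(X)   # target sampled from G_theta (detached token ids)

    # Discriminator update, equation (9):
    # minimize -E_data[log D(X, Y*)] - E_G[log(1 - D(X, Y_g))]
    d_loss = -(torch.log(discriminator(X, Y_star))
               + torch.log(1.0 - discriminator(X, Y_g)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update, equations (10) and (11): weight each token's
    # log-probability by its action value, then ascend grad J(theta)
    # (implemented as descending -J with a standard optimizer).
    log_probs = generator.log_prob(X, Y_g)   # log G_theta(y_t | Y_{1:t-1}, X)
    R = torch.tensor([action_value(Y_g[:t], X, Y_star)
                      for t in range(1, len(Y_g) + 1)])
    g_loss = -(R * log_probs).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```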

4 Experimental analysis

The previous sections introduced two NMT models: the traditional Transformer model and the improved Transformer model combined with a GAN. This section describes the experimental setup, introduces the indicators used to evaluate model performance, and analyzes in detail the experiments conducted on the Linguistic Data Consortium (LDC) dataset.

4.1 Experimental setup

The baseline system was the Transformer model in the open-source framework THUMT from Tsinghua University, whose parameters are shown in Table 1.

Table 1: Parameters of the Transformer model

Parameters Value
Number of layers in the encoder 6
Number of layers in the decoder 6
Size of Chinese and English word lists 32k
Word vector dimension 512
The hidden layer state dimension of the feedforward neural network 2,048
The number of heads in multi-head attention 8
Dropout ratio 0.1
Number of words in each batch 6,250
The largest number of words in a sentence 50

The experimental task was a Chinese-English translation task on the LDC dataset. NIST06 was used as the development set, and NIST02, NIST03, NIST04, NIST05, and NIST08 were used as the test sets to compare the performance of the Transformer model and the improved Transformer model.

4.2 Evaluation criteria

BLEU [18]: the more similar the model's translation is to the human translation, the better the model's performance. The calculation method of BLEU is shown below; an implementation sketch follows the list.

  1. The maximum number of occurrences of an n-gram in the reference translation, $\mathrm{MaxRefCount}(n\text{-gram})$, was calculated. Then, the number of occurrences of the n-gram in the model's translation, $\mathrm{Count}(n\text{-gram})$, was calculated. The smaller of the two was taken as the final matching count:

    (12) $\mathrm{Count}_{\text{clip}}(n\text{-gram}) = \min\{ \mathrm{Count}(n\text{-gram}), \mathrm{MaxRefCount}(n\text{-gram}) \}.$

  2. After obtaining $\mathrm{Count}_{\text{clip}}(n\text{-gram})$, the BLEU value was calculated:

(13) $\mathrm{BLEU} = \mathrm{BP} \times \exp\left( \sum_{n=1}^{N} w_n \log p_n \right),$

(14) $p_n = \dfrac{\sum_{C \in \{\text{candidates}\}} \sum_{n\text{-gram} \in C} \mathrm{Count}_{\text{clip}}(n\text{-gram})}{\sum_{C' \in \{\text{candidates}\}} \sum_{n\text{-gram}' \in C'} \mathrm{Count}(n\text{-gram}')},$

(15) $\mathrm{BP} = \begin{cases} 1, & \text{if } c > r, \\ e^{(1 - r/c)}, & \text{if } c \le r, \end{cases}$

where BP refers to a penalty factor, $w_n$ refers to the weight of the n-gram, $p_n$ refers to the precision score, $c$ refers to the length of the target text produced by the model, and $r$ refers to the length of the reference target text.
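As an illustration, here is a direct single-sentence implementation sketch of equations (12)-(15); the corpus-level sums in equation (14) reduce to one candidate/reference pair here, and uniform weights $w_n = 1/N$ are assumed.

```python
# Sentence-level BLEU, equations (12)-(15), for a single reference.
import math
from collections import Counter

def modified_precision(candidate, reference, n):
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    # Equation (12): clip candidate counts by the reference counts.
    clipped = {g: min(c, ref[g]) for g, c in cand.items()}
    return sum(clipped.values()) / max(sum(cand.values()), 1)   # equation (14)

def bleu(candidate, reference, N=4):
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)                  # equation (15)
    p = [modified_precision(candidate, reference, n) for n in range(1, N + 1)]
    if min(p) == 0:          # avoid log(0); real toolkits use smoothing instead
        return 0.0
    w = 1.0 / N              # uniform weights w_n
    return bp * math.exp(sum(w * math.log(pn) for pn in p))     # equation (13)

cand = "the EU office is in the same building".split()
ref = "the EU office is housed in the same building".split()
print(round(bleu(cand, ref), 4))
```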

Perplexity [19]: this is another criterion for testing model performance. The larger the probability the model assigns to each word of the translation, the more accurate the word is, so a lower perplexity indicates a better model. The trained model was tested to obtain the final translation result, and the word probabilities were calculated. For a text of $N$ words, the calculation formula of perplexity is:

(16) $\mathrm{PP}(T) = p(w_1 w_2 \cdots w_N)^{-1/N} = \sqrt[N]{\prod_{n=1}^{N} \dfrac{1}{p(w_n \mid w_1 w_2 \cdots w_{n-1})}},$

where $p(w_n \mid w_1 w_2 \cdots w_{n-1})$ stands for the translation probability of word $w_n$ given the preceding words.
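A short sketch of equation (16) follows, computed in log space for numerical stability; the word probabilities are illustrative values rather than model output.

```python
# Perplexity as the inverse geometric mean of per-word probabilities, eq. (16).
import math

def perplexity(word_probs):
    N = len(word_probs)
    # PP = (prod 1/p(w_n | history))^(1/N), evaluated via log probabilities
    return math.exp(-sum(math.log(p) for p in word_probs) / N)

probs = [0.20, 0.10, 0.35, 0.25, 0.05]   # p(w_n | w_1 ... w_{n-1}), illustrative
print(round(perplexity(probs), 2))
```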

4.3 Translation results

The BLEU values of the two models are shown in Figure 2.

Figure 2: Comparison of BLEU values between different models.

As seen from Figure 2, the BLEU value of the improved Transformer model showed some improvement: it increased by 0.66 on NIST02, 0.36 on NIST03, 0.1 on NIST04, and 0.48 on NIST05, and NIST08 improved the most, from 32.26 under the Transformer model to 33.09 under the improved Transformer model, i.e., an improvement of 0.83. The average BLEU value of the Transformer model was 40.47, while that of the improved Transformer model was 40.96, an improvement of 0.49. These results verified the reliability of the improved Transformer model in improving translation quality.

Then, the perplexity values of the two models were compared, and the results are shown in Figure 3.

Figure 3: Comparison of perplexity values between different models.

As seen from Figure 3, the perplexity value of the improved Transformer model was significantly lower. Specifically, with the improved Transformer model, the perplexity value decreased by 10.27 on NIST02, 9.16 on NIST03, 10.47 on NIST04, 9.45 on NIST05, and 10.98 on NIST08. The average perplexity value of the Transformer model was 26.89, while that of the improved Transformer model was 16.83, a reduction of 10.06. These results indicated that the probability of the model selecting the correct translation increased somewhat; thus, the model produced results closer to human translation.

The computational speed of the two models was compared, and the results are shown in Figure 4.

Figure 4: Comparison of computing speed between different models.

As seen from Figure 4, the computational speed of the Transformer model was 27,850 words per second, while that of the improved Transformer model was 25,890 words per second, only 7.04% lower, indicating that improving the Transformer model with a GAN did not have a particularly significant impact on its computational speed.

The translation results of the two models were compared and analyzed, as shown in Tables 2 and 3.

Table 2: Translation example 1

Source language 欧盟办事处与澳洲大使馆在同一建筑内。
Candidate translation 1 The EU mission is in the same building with the Australian embassy
Candidate translation 2 The European Union office and the Australian embassy are both located in the same building
Candidate translation 3 The European Union office is in the same building as the Australian embassy
Candidate translation 4 The EU office and the Australian embassy are housed in the same building
The translation of the Transformer model The European Union offices with the Australian embassy in the same building
The translation of the improved Transformer model The EU office is housed in the same building as the Australian Embassy
Table 3: Translation example 2

Source language 经过处理后的“中水”将率先在城市绿化浇灌中使用。
Candidate translation 1 The treated reclaimed water will first be used in city greenbelt irrigation
Candidate translation 2 The treated reclaimed water will be first used to irrigate urban greenery
Candidate translation 3 The treated middle-water will first be used in watering the trees in and around the city
Candidate translation 4 The treated reclaimed water will be first used in urban green area irrigation
The translation of the Transformer model The treated intermediate water will be the first to be used in urban green irrigation
The translation of the improved Transformer model The treated reclaimed water will be first used in urban green area irrigation

As seen from Table 2, the Transformer model translated “欧盟办事处” as “The European Union offices,” while the improved Transformer model translated it as “The EU office,” which was more accurate. The reason may be that the Transformer model did not treat “欧盟办事处” as a proper noun, leading to a wrong translation.

In Chinese, the term “中水” means reclaimed (recycled) water. As seen from Table 3, the Transformer model translated “中水” directly as “intermediate water” without considering its specific semantics. In addition, the Transformer model translated “率先…” as “be the first to,” which leans toward the meaning of “the first time to,” while the improved Transformer model translated it as “be first used,” which was closer to the original meaning of the source language and the reference translations.

5 Discussion

Advances in artificial intelligence have led to the rapid development of NMT, which has become a new paradigm for MT [20]. Compared with statistical MT, NMT does not require steps such as word alignment and reordering; its translation process relies entirely on the self-learning of neural networks, which greatly reduces model complexity and significantly improves translation performance. However, current NMT still has problems, such as handling low-frequency and unknown words [21]; therefore, research on NMT is of great importance.

This article mainly focused on the Transformer model. In order to improve the translation effect, a GAN was introduced to improve the Transformer model, and the translation performance of the two models was compared and analyzed on the LDC dataset. The comparison of BLEU and perplexity values showed that the improved Transformer model had larger BLEU values and smaller perplexity values, which indicated that the similarity between the model's translations and human translations was high, i.e., the translation quality of the improved Transformer model was high. In addition, the comparison of translation speed (Figure 4) suggested that the computational speed of the improved Transformer model decreased by only 7.04% compared with the traditional Transformer model, indicating that the improved model did not greatly increase computational complexity. Finally, the two translation examples (Tables 2 and 3) demonstrated that, compared with the Transformer model, the translations obtained by the improved Transformer model were more consistent with the meaning of the source language and matched the reference translations better, which verified the reliability of its translation effect and its feasibility for application in practice. At present, Chinese-English translation is in broad demand in many fields; the improved Transformer model can provide simultaneous translation for multilingual communication at international conferences and support cross-language retrieval in academic fields, and it has great application value in scenarios such as foreign trade and overseas travel.

Although this article has obtained some results in the study of English translation based on the improved Transformer model, there are some shortcomings. For example, the amount of experimental data was small and the translated sentences were short. In future work, experiments will be conducted on a larger dataset, and the performance of the improved Transformer model on English paragraph translation will be investigated.

6 Conclusion

This article improved the Transformer model by combining GAN and conducted experiments on the LDC dataset. The performance of the Transformer model and the improved Transformer model in English translation was compared. The results showed that:

  1. The average BLEU value and perplexity value of the improved Transformer model were 40.96 and 16.83, respectively, which were superior to those of the Transformer model;

  2. The computational speed of the improved Transformer model was 25,890 words per second, which was only 7.04% lower than that of the Transformer model;

  3. The translation results of the improved Transformer model were closer to the reference translations.

The experimental results verify that the improved Transformer model can improve translation quality effectively while ensuring computational speed. In order to realize better application in actual translation scenarios, the improved Transformer model will be further studied in the future to analyze its performance in translating long and complicated sentences and paragraphs.

Conflict of interest: Authors state no conflict of interest.

References

[1] Liu H, Zhang M, Fernández AP, Xie N, Li B, Liu Q. Role of language control during interbrain phase synchronization of cross-language communication. Neuropsychologia. 2019;131:316–24. doi:10.1016/j.neuropsychologia.2019.05.014.

[2] Lumeras MA, Way A. On the complementarity between human translators and machine translation. Hermes. 2017;56:21. doi:10.7146/hjlcb.v0i56.97200.

[3] Liu HI, Chen WL. Re-transformer: a self-attention based model for machine translation. Proc Comput Sci. 2021;189:3–10. doi:10.1016/j.procs.2021.05.065.

[4] Lee J, Cho K, Hofmann T. Fully character-level neural machine translation without explicit segmentation. Trans Assoc Comput Linguist. 2017;5:365–78. doi:10.1162/tacl_a_00067.

[5] Wu S, Zhang D, Zhang Z, Yang N, Li M, Zhou M. Dependency-to-dependency neural machine translation. IEEE/ACM T Audio Speech. 2018;26:2132–41. doi:10.1109/TASLP.2018.2855968.

[6] Choi H, Cho K, Bengio Y. Context-dependent word representation for neural machine translation. Comput Speech Lang. 2017;45:149–60. doi:10.1016/j.csl.2017.01.007.

[7] Hewavitharana S, Vogel S. Extracting parallel phrases from comparable data for machine translation. Nat Lang Eng. 2016;22:549–73. doi:10.1017/S1351324916000139.

[8] Sghaier MA, Zrigui M. Rule-based machine translation from Tunisian dialect to modern standard Arabic. Proc Comput Sci. 2020;176:310–9. doi:10.1016/j.procs.2020.08.033.

[9] Germann U. Sampling phrase tables for the Moses statistical machine translation system. Prague Bull Math Linguist. 2015;104:39–50. doi:10.1515/pralin-2015-0012.

[10] Luong MT. Addressing the rare word problem in neural machine translation. Bull Univ Agric Sci Vet Med Cluj-Napoca. 2015;27:82–6. doi:10.3115/v1/P15-1002.

[11] Popel M, Bojar O. Training tips for the transformer model. Prague Bull Math Linguist. 2018;110:43–70. doi:10.2478/pralin-2018-0002.

[12] Lin F, Zhang C, Liu S, Ma H. A hierarchical structured multi-head attention network for multi-turn response generation. IEEE Access. 2020;8:46802–10. doi:10.1109/ACCESS.2020.2977471.

[13] Wang HG, Li X, Zhang T. Generative adversarial network based novelty detection using minimized reconstruction error. Front Inf Tech El. 2018;01:119–28.

[14] Wolterink JM, Leiner T, Viergever MA, Isgum I. Generative adversarial networks for noise reduction in low-dose CT. IEEE T Med Imaging. 2017;36:2536–45. doi:10.1109/TMI.2017.2708987.

[15] Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA. Generative adversarial networks: an overview. IEEE Signal Proc Mag. 2017;35:53–65. doi:10.1109/MSP.2017.2765202.

[16] Wang KF, Gou C, Duan YJ, Lin YL, Zheng XH, Wang FY. Generative adversarial networks: the state of the art and beyond. Acta Autom Sin. 2017;43:321–32.

[17] Ren Q, Su Y, Wu N. Research on Mongolian-Chinese machine translation based on the end-to-end neural network. Int J Wavel Multi. 2020;18:46–59.

[18] Shereen A, Mohamed A. A cascaded speech to Arabic sign language machine translator using adaptation. Int J Comput Appl. 2016;133:5–9. doi:10.5120/ijca2016907799.

[19] Brychcin T, Konopik M. Latent semantics in language models. Comput Speech Lang. 2015;33:88–108. doi:10.1016/j.csl.2015.01.004.

[20] Choi H, Cho K, Bengio Y. Fine-grained attention mechanism for neural machine translation. Neurocomputing. 2018;284:171–6. doi:10.1016/j.neucom.2018.01.007.

[21] Hasigaowa, Wang S. Research on unknown words processing of Mongolian-Chinese neural machine translation based on semantic similarity. 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS); 2019. p. 370–4. doi:10.1109/CCOMS.2019.8821725.

Received: 2022-01-11
Revised: 2022-03-10
Accepted: 2022-03-22
Published Online: 2022-04-29

© 2022 Hongxia Li and Xin Tuo, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
