Open Access Article

A study on automatic correction of English grammar errors based on deep learning

  • Mengyang Qin
Published/Copyright: June 2, 2022

Abstract

Grammatical error correction (GEC) is an important element in language learning. In this article, based on deep learning, the application of the Transformer model to GEC is briefly introduced. Then, to improve its performance on GEC, the model was optimized with a generative adversarial network (GAN). Experiments were conducted on two data sets, and the performance of the GAN-combined Transformer model was significantly better than that of the plain Transformer model. The F0.5 value of the optimized model was 53.87 on CoNLL-2014, 2.69 higher than that of the Transformer model; the generalized language evaluation understanding (GLEU) value of the optimized model was 61.77 on JFLEG, 8.81 higher than that of the Transformer model. The optimized model also showed a favorable correction performance on an actual English essay. The experimental results verify the reliability of the GAN-combined Transformer model for automatic English GEC, suggesting that the model can be further promoted and applied in practice.

1 Introduction

In English learning, grammar is one of the key elements [1]. To improve their grammar, learners often do many writing exercises. In teaching practice, correcting these exercises takes much time and is burdensome for both learners and teachers; therefore, the efficiency of grammar learning could be significantly improved if automatic English grammatical error correction (GEC) were implemented. With the improvement of technology, GEC has been widely studied [2]. Solyman et al. [3] studied an Arabic GEC model, introduced a multi-convolutional-layer model containing an attention mechanism, and found through experiments that the method obtained high accuracy. Park et al. [4] analyzed correction and over-correction problems in GEC and pointed out that current GEC models might make unnecessary changes to correct sentences. Acheampong and Tian [5] improved the seq2seq model using a neural cascading strategy and found through experiments that the method was more effective in correcting grammatical errors in a low-resource setting. Liu and Liu [6] suggested training the GEC model with unlabeled data, used an attention-based neural network, and verified the advantages of the method through experiments. Lin et al. [7] regarded GEC as a multi-classification task, integrated different language embedding models and deep learning models to correct ten part-of-speech errors in Indonesian texts, and found that the average F0.5 of the model reached 0.551, suggesting a good performance. Zhou and Liu [8] established a basic GEC model based on the classification model and found through experiments that the model could steadily improve accuracy and correction efficiency during learning. Facing the problem of grammatical errors in Chinese, Wang et al. [9] established word vectors using the Bidirectional Encoder Representations from Transformers (BERT) model, designed and implemented a BERT-BiLSTM-CRF-based Chinese grammatical error detection model, and found that the model was feasible and highly accurate. This article studies automatic English GEC based on deep learning, designs a method that combines the Transformer model with a generative adversarial network (GAN), and verifies the effectiveness of the method through experiments on public data sets. This work contributes to further improving GEC.

2 Deep learning models

Natural language is highly flexible and uncertain, and English in particular has an extensive vocabulary and complex grammar, which makes correcting English grammar difficult. Automatic English GEC [10] can not only reduce teachers' burden and save their resources but also help learners get feedback on grammar learning faster. Deep learning has been successfully applied in image processing [11] and speech recognition [12], and its application in natural language processing (NLP) has been continuously researched [13].

Deep learning models, such as convolutional neural networks (CNN) [14] and recurrent neural networks (RNN) [15], continuously learn automatically extracted data features to obtain richer feature information for classification and prediction. In NLP, RNN is one of the most common models; however, since RNN computes strictly sequentially, its capability for parallel computation is poor, which motivated the Transformer model [16]. This article realizes automatic English GEC based on the Transformer model.

The Transformer model follows the encoder–decoder framework, where the encoder and decoder each consist of six identical stacked layers. Every encoder layer includes two sub-layers, multi-head attention and a feed-forward network; every decoder layer includes a third sub-layer, which performs multi-head attention over the encoder output. The most important element of the model is the self-attention mechanism, which assigns every word three vectors: query, key, and value. The attention is calculated as:

(1) $\text{Attention}(Q, K, V) = \text{softmax}\left(\dfrac{QK^{T}}{\sqrt{d_k}}\right)V$,

where $d_k$ is the dimension of the key vector (512 in this article).

With different attention heads, different information can be obtained. Then, these attention heads are combined to get the output of multi-head attention. The relevant calculation formulas are written as:

(2) $\text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, \text{head}_2, \ldots, \text{head}_n)W^{O}$,

(3) $\text{head}_i = \text{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})$.
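
As a concrete illustration of equations (1)–(3), the following PyTorch sketch implements scaled dot-product attention and the multi-head mechanism. It is a minimal reference implementation rather than the tensor2tensor code used in the experiments; all class and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    # Eq. (1): softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ V

class MultiHeadAttention(torch.nn.Module):
    # Eqs. (2)-(3): n parallel heads, concatenated and projected by W^O.
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.w_q = torch.nn.Linear(d_model, d_model)  # W^Q for all heads
        self.w_k = torch.nn.Linear(d_model, d_model)  # W^K for all heads
        self.w_v = torch.nn.Linear(d_model, d_model)  # W^V for all heads
        self.w_o = torch.nn.Linear(d_model, d_model)  # output projection W^O

    def forward(self, q, k, v):
        B, T, _ = q.shape
        # Split (B, T, d_model) into (B, n_heads, T, d_head) per projection.
        split = lambda x: x.view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        heads = scaled_dot_product_attention(
            split(self.w_q(q)), split(self.w_k(k)), split(self.w_v(v)))
        # Concatenate the heads and apply the output projection.
        return self.w_o(heads.transpose(1, 2).contiguous().view(B, T, -1))
```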

The formula for the feed-forward network (FFN) is written as:

(4) $\text{FFN}(x) = \max(0, xW_1 + b_1)W_2 + b_2$,

where $W_1$, $W_2$, $b_1$, and $b_2$ are trainable parameters.

The Transformer model implements the position encoding of the sequence through sin and cos functions. The calculation formulas are written as:

(5) $\text{PE}(pos, 2i) = \sin\left(pos / 10000^{2i/d_{\text{model}}}\right)$,

(6) $\text{PE}(pos, 2i+1) = \cos\left(pos / 10000^{2i/d_{\text{model}}}\right)$,

where $pos$ is the position of the token in the sequence and $i$ indexes the embedding dimension.
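
The sinusoidal encoding of equations (5) and (6) can be precomputed as a matrix that is added to the token embeddings. A minimal sketch, in the same PyTorch style as above:

```python
import torch

def sinusoidal_positional_encoding(max_len, d_model=512):
    # Eqs. (5)-(6): even dimensions use sin, odd dimensions use cos.
    pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)   # (max_len, 1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)            # even dims 2i
    angle = pos / torch.pow(10000.0, i / d_model)                   # (max_len, d_model/2)
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(angle)
    pe[:, 1::2] = torch.cos(angle)
    return pe  # added to the token embeddings before the first encoder layer
```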

Automatically correcting English grammatical errors with the Transformer model is framed as a translation task, i.e., "translating" grammatically incorrect sentences into grammatically correct ones. During training, maximum likelihood estimation is adopted to maximize the likelihood of the model on the training data $S$. The computational formula is written as:

(7) $\hat{\alpha} = \arg\max_{\alpha} \sum_{(x,y)\in S} \log p(y \mid x; \alpha)$.
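
In practice, the objective in equation (7) reduces to token-level cross-entropy on the corrected sentences. A minimal sketch, assuming a seq2seq model that returns per-token vocabulary logits:

```python
import torch.nn.functional as F

def mle_loss(logits, target_ids, pad_id=0):
    # Eq. (7): maximizing sum log p(y | x; alpha) over the training set S
    # is equivalent to minimizing cross-entropy on the gold corrections.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (batch * seq_len, vocab_size)
        target_ids.reshape(-1),               # gold corrected token ids y
        ignore_index=pad_id,                  # padding tokens do not contribute
    )
```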

In order to further improve the performance of the Transformer model on GEC, this article optimized the Transformer model with GAN.

3 GEC approach combined with GAN

GAN [17] consists of two independent networks, a generator and a discriminator, which are trained by adversarial learning. In this article, the generator is the Transformer model, and the discriminator is a CNN-based classification model. An error–correction sentence pair is written as $(X, Y)$. Let the parameters of generator $G$ be $\theta$, the erroneous sentence on the source side be

(8) $x = (x_1, x_2, \ldots, x_m)$,

the corrected sentence at the target end be

(9) $y = (y_1, y_2, \ldots, y_n)$,

and the corrected sentence generated by the generator be

(10) $y' = (y'_1, y'_2, \ldots, y'_n)$.

When the generator outputs a corrected sentence, it is fed to the discriminator together with the source-side erroneous sentence; the discriminator computes the probability that the sentence is manually annotated and feeds it back to the generator as a reward. The goal of the whole adversarial learning process is to maximize the expected reward. The adversarial training proceeds as follows:

  1. The generator is pre-trained on (X, Y) using maximum likelihood estimation.

  2. Taking (X, Y) as the positive sample, a negative sample (X, Y′) is established using the generator to pre-train the discriminator.

  3. Generator update: a subset $(X_{\text{batch}}, Y_{\text{batch}})$ is sampled from $(X, Y)$. The generator decodes the source-side erroneous sentences $X_{\text{batch}}$ to obtain $Y'_{\text{batch}}$. A Monte Carlo search is then performed at every position of $Y'_{\text{batch}}$ to compute per-position reward values, and the generator is updated with the policy gradient method.

  4. Discriminator update: a subset $(X_{\text{batch}}, Y_{\text{batch}})$ is sampled from $(X, Y)$, and the negative sample $(X_{\text{batch}}, Y'_{\text{batch}})$ is generated to further train the discriminator. (Steps 3 and 4 are sketched in the code below.)
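
The sketch below illustrates steps 3 and 4 under simplifying assumptions: `generator.sample` and `discriminator.prob_human` are hypothetical interfaces standing in for the Transformer generator and the CNN discriminator, and a single sentence-level reward replaces the per-position Monte Carlo rewards described above.

```python
import torch

def generator_step(generator, discriminator, X_batch, opt_G):
    # Step 3: sample corrections Y' and reinforce those the discriminator
    # judges likely to be human-annotated (policy gradient / REINFORCE).
    Y_fake, log_probs = generator.sample(X_batch)           # log p(y'_t | x) per token
    with torch.no_grad():
        reward = discriminator.prob_human(X_batch, Y_fake)  # scalar reward per pair
    loss = -(reward.unsqueeze(-1) * log_probs).sum(dim=-1).mean()
    opt_G.zero_grad(); loss.backward(); opt_G.step()

def discriminator_step(generator, discriminator, X_batch, Y_batch, opt_D):
    # Step 4: real pairs (X, Y) vs generated negatives (X, Y').
    with torch.no_grad():
        Y_fake, _ = generator.sample(X_batch)
    p_real = discriminator.prob_human(X_batch, Y_batch)
    p_fake = discriminator.prob_human(X_batch, Y_fake)
    loss = -(torch.log(p_real + 1e-8) + torch.log(1 - p_fake + 1e-8)).mean()
    opt_D.zero_grad(); loss.backward(); opt_D.step()
```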

4 Experimental analysis

4.1 Experimental setup

The Transformer model was implemented with the open-source tensor2tensor toolkit. There were eight heads in the multi-head attention. In the FFN, the dimension of the hidden layer was 2,048. GELU was used as the activation function, dropout was 0.1, and the learning rate was 0.01 with the Adam optimizer. The batch size was 20, the number of epochs was 40, the maximum sentence length was 50, the beam size was 8, and the length penalty parameter was 0.6. Training stopped when the model showed no improvement on the development set for three consecutive epochs. In the CNN discriminator, the word vector dimension was 300, the convolutional window was 3 × 3, the pooling window was 2 × 2, the first convolutional layer had 128 feature maps, the second had 256, and the dimension of the hidden layer was 128. For adversarial training, the RMSprop algorithm [18] was used with an initial learning rate of 0.0003 and a batch size of 128. When training the discriminator, 5,000 samples were randomly drawn as positive samples, and the corresponding negative samples were created.
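
For reference, the hyperparameters listed above can be collected into plain configuration dictionaries. The key names below are illustrative and do not correspond to actual tensor2tensor flags:

```python
# Generator (Transformer) settings as reported in Section 4.1.
transformer_config = {
    "num_heads": 8, "ffn_hidden_dim": 2048, "activation": "gelu",
    "dropout": 0.1, "learning_rate": 0.01, "optimizer": "adam",
    "batch_size": 20, "epochs": 40, "max_sentence_length": 50,
    "beam_size": 8, "length_penalty": 0.6, "early_stopping_patience": 3,
}
# Discriminator (CNN) settings.
discriminator_config = {
    "word_vector_dim": 300, "conv_window": (3, 3), "pool_window": (2, 2),
    "conv1_feature_maps": 128, "conv2_feature_maps": 256, "hidden_dim": 128,
}
# Adversarial training settings.
adversarial_config = {"optimizer": "rmsprop", "learning_rate": 3e-4, "batch_size": 128}
```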

4.2 Experimental data set

Training sets: ① Lang-8: a corpus extracted from the language-learning social network Lang-8, with multiple language versions, as shown in Table 1. The English portion was extracted as a training set.

Table 1

Lang-8 data set

Language Quantity
English 1,069,549
Japanese 925,588
Mandarin 136,203
Korean 93,955
Spanish 51,829
French 58,918
German 37,886

② CLC FCE: it includes 1,244 exam scripts written by candidates of the Cambridge English for Speakers of Other Languages examinations, containing original texts, labels, error comments, etc. It covers 80 types of grammatical errors and contains about 1.36 million data items.

③ NUCLE: it includes about 1,400 essays written by undergraduates of the National University of Singapore, annotated and corrected by professional English teachers. Incorrect sentences account for 42.4% of the corpus, involving 28 types of grammatical errors.

Test sets: ① CoNLL-2014: a standard GEC data set that includes 1,312 sentences and 28 types of grammatical errors. The evaluation metric is F0.5.

② JFLEG: a common GEC data set designed to evaluate not only error correction but also the fluency of the output. It includes 747 sentences. The evaluation metrics are GLEU and F0.5.

Table 2

Confusion matrix

Model output results
Positive sample Negative sample
Manual labeling results Positive sample TP FN
Negative sample FP TN

4.3 Evaluation indicators

  1. F0.5: F refers to the weighted harmonic mean of precision and recall; in F0.5, precision is weighted twice as heavily as recall. Samples are divided according to the confusion matrix shown in Table 2 (a computation sketch is given after this list).

    Then, the precision (P) of the model is

    (11) $P = \dfrac{\text{TP}}{\text{TP} + \text{FP}}$,

    and the recall rate (R) is

    (12) $R = \dfrac{\text{TP}}{\text{TP} + \text{FN}}$.

    The calculation formula of F 0.5 is

    (13) $F_{0.5} = \dfrac{(0.5^{2} + 1) \times P \times R}{0.5^{2} \times P + R}$.

  2. MaxMatch (M2): In GEC, an erroneous sentence may admit several corrected results. If every answer is regarded as a branch of a node, the whole error correction process can be viewed as a connected graph, and evaluating the correction result amounts to scoring the edges that lead to the final correct answer. For source-side sentences

    (14) $S = \{s_1, s_2, \ldots, s_n\}$,

    let the edit sets of the corrected sentences output by the model be

    (15) $E = \{e_1, e_2, \ldots, e_n\}$,

    and the manually annotated gold edit sets be

    (16) $G = \{g_1, g_2, \ldots, g_n\}$.

    The specific calculation formulas are as follows:

    (17) $P = \dfrac{\sum_{i=1}^{n} |e_i \cap g_i|}{\sum_{i=1}^{n} |e_i|}$,

    (18) $R = \dfrac{\sum_{i=1}^{n} |e_i \cap g_i|}{\sum_{i=1}^{n} |g_i|}$,

    (19) $F_{0.5} = \dfrac{(0.5^{2} + 1) \times P \times R}{0.5^{2} \times P + R}$.

  3. GLEU: In translation tasks, BLEU measures the similarity between candidate and reference sentences. Its calculation formulas are as follows:

(20) $\text{BLEU} = \text{BP} \times \exp\left(\sum_{n=1}^{N} w_n \log P_n\right)$,

(21) $\text{BP} = \begin{cases} 1, & \text{if } c > r, \\ e^{1 - r/c}, & \text{if } c \le r, \end{cases}$

where $P_n$ is the precision of n-gram level words, BP is the brevity penalty, and $c$ and $r$ are the lengths of the machine translation and the reference translation, respectively. However, unlike machine translation, in the GEC task an unchanged word is not necessarily wrong, and words in the source sentence may be modified; therefore, to evaluate GEC, BLEU is modified into GLEU, whose calculation formulas are as follows:

(22) $\text{Count}_B(n\text{-gram}) = \sum_{n\text{-gram}' \in B} d(n\text{-gram}, n\text{-gram}')$,

(23) $\text{GLEU}(C, R, S) = \text{BP} \times \exp\left(\sum_{n=1}^{N} w_n \log P'_n\right)$,

(24) $P'_n = \dfrac{\sum_{n\text{-gram} \in G} \left[\text{Count}_{R\setminus S}(n\text{-gram}) - \gamma\left(\text{Count}_{S\setminus R}(n\text{-gram})\right) + \text{Count}_R(n\text{-gram})\right]}{\sum_{n\text{-gram}' \in G'} \text{Count}_S(n\text{-gram}') + \sum_{n\text{-gram} \in R\setminus S} \text{Count}_{R\setminus S}(n\text{-gram})}$,

where $C$, $R$, and $S$ denote the candidate set, the reference set, and the source-side sentences, respectively; $P'_n$ is the modified n-gram precision after correction; and $\gamma$ is the penalty rate, used to penalize wrong n-grams that appear in the source sentence but not in the reference set.
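
A minimal sketch of the computations in equations (11)–(13) and (17)–(19), assuming each edit is represented as a hashable tuple; a full M2 scorer additionally maximizes the match over alternative gold annotations, which is omitted here:

```python
def f_beta(tp, fp, fn, beta=0.5):
    # Eqs. (11)-(13): beta = 0.5 weights precision twice as heavily as recall.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    if p == 0.0 or r == 0.0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

def m2_f05(system_edits, gold_edits):
    # Eqs. (17)-(19): e_i (system) and g_i (gold) are per-sentence edit sets;
    # an edit counts as correct when it appears in both.
    matched = sum(len(e & g) for e, g in zip(system_edits, gold_edits))
    proposed = sum(len(e) for e in system_edits)
    gold = sum(len(g) for g in gold_edits)
    return f_beta(tp=matched, fp=proposed - matched, fn=gold - matched)
```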

4.4 Experimental results

The experimental results on CoNLL-2014 are shown in Figure 1.

Figure 1: GEC results of the model on CoNLL-2014.

It is seen from Figure 1 that the P-value, R-value, and F0.5 of the basic Transformer model were 61.21, 30.91, and 51.18, respectively. The P-value of the GAN-combined Transformer model was 63.27, which was 2.06 higher than that of the Transformer model; the R-value of the optimized model was 34.52, 3.61 higher; and the F0.5 of the optimized model was 53.87, 2.69 higher. These results show that combining GAN improved the performance of the Transformer model on the GEC task, suggesting that the improvement is effective.

The experimental results on JFLEG are shown in Figure 2.

Figure 2: GEC results of the model on JFLEG.

It is seen from Figure 2 that on the JFLEG data set, the GLEU value of the Transformer model was 52.96, while that of the GAN-combined Transformer model was 61.77, 8.81 higher, indicating that the output of the GAN-combined Transformer model was closer to the manually annotated references. These results again suggest that the GAN-combined Transformer model is superior.

To further examine the practical applicability of the GAN-combined Transformer model, a student's English essay was taken as an example. Its grammatical errors were corrected automatically by the model designed in this article. The original text is as follows.

Technology is all around us, changing the way our live. The emergence of ships, trains, planes, etc. has made transportation more convenience. No matter where you want to go, as long as you find a suitable means of transportation, you can get to there quickly. On the same time, the development of science and technology has promoted the birth of computer, which has further narrowed the distant between people, allowing people to communicate to distant friends without leaving home, and can also realize online shopping, entertainment, work and so on. The development of technology have enabled us to live a better life.

The correction results of the model are shown in Table 3.

Table 3

English essay correction example

Source-side sentence Model output
Technology is all around us, changing the way our live Technology is all around us, changing the way we live
The emergence of ships, trains, planes, etc. has made transportation more convenience The emergence of ships, trains, planes, etc. has made transportation more convenient
No matter where you want to go, as long as you find a suitable means of transportation, you can get to there quickly No matter where you want to go, as long as you find a suitable means of transportation, you can get to there quickly
On the same time, the development of science and technology has promoted the birth of computer At the same time, the development of science and technology has promoted the birth of computers
allowing people to communicate to distant friends without leaving home allowing people to communicate with distant friends without leaving home
The development of technology have enabled us to live a better life The development of technology has enabled us to live a better life

It is seen from Table 3 that the GAN-combined Transformer model corrected tense, pronoun, and singular/plural errors, but some errors went undetected; for example, in the sentence "which has further narrowed the distant between people," the word "distant" should be corrected to "distance." These results suggest that the GAN-combined Transformer model still falls short of manual annotation, which needs to be addressed in future research to further improve its GEC performance.

5 Discussion

With the development of globalization, the number of English learners keeps increasing, which puts pressure on English teaching. Grammar, as an important part of English learning, requires a lot of practice, and relying entirely on teachers' manual corrections consumes a lot of time and energy. Therefore, automatic GEC has received more and more research attention. Deep learning is widely applied in machine translation; this article accordingly designed an automatic GEC method based on deep learning.

First, on the CoNLL-2014 data set, the P-value, R-value, and F0.5 of the traditional Transformer model were 61.21, 30.91, and 51.18, respectively, while the GAN-combined Transformer model had a P-value of 63.27 (3.37% higher than the Transformer model), an R-value of 34.52 (11.7% higher), and an F0.5 value of 53.87 (5.26% higher). It was concluded that the GAN-combined Transformer model had significantly improved correction performance, indicating that adding adversarial learning greatly improved the model's GEC ability. Then, on the JFLEG data set, the GLEU value of the GAN-combined Transformer model was 16.64% higher than that of the Transformer model (61.77 vs 52.96). The above results suggest that the GAN-combined Transformer model performs better in GEC, further supporting its reliability.

Finally, the performance of the GAN-combined Transformer model in the practical correction task showed that it could not only detect grammatical errors but also effectively correct mistakes in tense and pronouns. The model can be applied in practice to reduce teachers' burden in correcting students' homework and help teachers respond faster to students' needs.

6 Conclusion

In this article, based on deep learning, the Transformer model was combined with GAN to study GEC. The experimental analysis found that the GAN-combined Transformer model performed well on both the CoNLL-2014 and JFLEG data sets, with better F0.5 and GLEU values than the traditional Transformer model. The results on an actual English essay also showed that the model corrected grammatical errors effectively. The GAN-combined Transformer model can therefore be promoted and applied in practice.

Conflict of interest: The author states no conflict of interest.

References

[1] Hos R, Kekec M. Unpacking the discrepancy between learner and teacher beliefs: what should be the role of grammar in language classes? Eur Educ Res J. 2015;4:70–6. doi:10.12973/eu-jer.4.2.70.

[2] Madi N, Al-Khalifa HS. Grammatical error checking systems: a review of approaches and emerging directions. 2018 Thirteenth International Conference on Digital Information Management (ICDIM); 2018. p. 142–7. doi:10.1109/ICDIM.2018.8847020.

[3] Solyman A, Wang Z, Tao Q. Proposed model for Arabic grammar error correction based on convolutional neural network. 2019 International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE); 2019. p. 1–6. doi:10.1109/ICCCEEE46830.2019.9071310.

[4] Park C, Yang Y, Lee C, Lim H. Comparison of the evaluation metrics for neural grammatical error correction with overcorrection. IEEE Access. 2020;8:106264–72. doi:10.1109/ACCESS.2020.2998149.

[5] Acheampong KN, Tian W. Toward end-to-end neural cascading strategies for grammatical error correction. 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE); 2019. p. 1265–72. doi:10.1109/ISKE47853.2019.9170364.

[6] Liu ZR, Liu Y. Exploiting unlabeled data for neural grammatical error detection. J Comput Sci Technol. 2017;32:758–67. doi:10.1007/s11390-017-1757-4.

[7] Lin N, Chen B, Lin X, Wattanachote K, Jiang S. A framework for Indonesian grammar error correction. ACM Trans Asian Low-Resour Lang Inf Process. 2021;20:1–12. doi:10.1145/3440993.

[8] Zhou S, Liu W. English grammar error correction algorithm based on classification model. Complexity. 2021;2021:1–11. doi:10.1155/2021/6687337.

[9] Wang H, Zhang YJ, Sun XM. Chinese grammatical error diagnosis based on sequence tagging methods. J Phys Conf Ser. 2021;1948:012027. doi:10.1088/1742-6596/1948/1/012027.

[10] Solyman A, Wang Z, Tao Q, Elhag AAM, Toseef M, Aleibeid Z. Synthetic data with neural machine translation for automatic correction in Arabic grammar. Egypt Inform J. 2020;22:303–15. doi:10.1016/j.eij.2020.12.001.

[11] Won YS, Han DG, Jap D, Bhasin S, Park JY. Non-profiled side-channel attack based on deep learning using picture trace. IEEE Access. 2021;9:22480–92. doi:10.1109/ACCESS.2021.3055833.

[12] Fayek HM, Lech M, Cavedon L. Evaluating deep learning architectures for speech emotion recognition. Neural Netw. 2017;92:60–8. doi:10.1016/j.neunet.2017.02.013.

[13] Tsuruoka Y. Deep learning and natural language processing. Brain Nerve. 2019;71:45–55.

[14] Milletari F, Ahmadi SA, Kroll C, Plate A, Rozanski VE, Maiostre J, et al. Hough-CNN: deep learning for segmentation of deep brain regions in MRI and ultrasound. Comput Vis Image Underst. 2017;164:92–102. doi:10.1016/j.cviu.2017.04.002.

[15] Shi H, Xu M, Li R. Deep learning for household load forecasting – a novel pooling deep RNN. IEEE Trans Smart Grid. 2018;9:5271–80. doi:10.1109/TSG.2017.2686012.

[16] Wu H, Shen GQ, Lin X, Li M, Li CZ. A transformer-based deep learning model for recognizing communication-oriented entities from patents of ICT in construction. Autom Constr. 2021;125:103608. doi:10.1016/j.autcon.2021.103608.

[17] Wolterink JM, Leiner T, Viergever MA, Isgum I. Generative adversarial networks for noise reduction in low-dose CT. IEEE Trans Med Imaging. 2017;36:2536–45. doi:10.1109/TMI.2017.2708987.

[18] Tieleman T, Hinton G. RMSProp: divide the gradient by a running average of its recent magnitude. COURSERA: Neural Netw Mach Learn. 2012;4:26–31.

Received: 2022-02-24
Revised: 2022-03-23
Accepted: 2022-04-20
Published Online: 2022-06-02

© 2022 Mengyang Qin, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
