Article Open Access

Research on target feature extraction and location positioning with machine learning algorithm

  • Licheng Li
Published/Copyright: December 31, 2020

Abstract

The accurate positioning of targets is an important link in robot technology. Based on machine learning algorithms, this study first analyzed the positioning principle of robot binocular vision, then extracted target features with the speeded-up robust features (SURF) method, located the target with a back-propagation neural network (BPNN), and tested the method through experiments. The experimental results showed that feature extraction with the SURF method was fast, taking about 0.2 s, and was little affected by noise. The positioning results showed that the position output by the BPNN method was basically consistent with the actual position, with very small errors in the X, Y and Z directions, which could meet the positioning needs of a robot. The experimental results verify the effectiveness of the machine learning method and provide theoretical support for its further promotion and application in practice.

MSC 2010: 68T40

1 Introduction

With the development of technology, intelligent robots have been used more and more widely in people's work and life, including civil [1], industrial [2], agricultural [3], medical [4] and military [5] applications, and especially in environments such as space and the ocean that human beings cannot reach. Intelligent robots are an important embodiment of a country's level of science, technology and industry. To make robot services more intelligent, robot control technologies such as target positioning and path planning need to be improved [6]. Taking an industrial robot as an example, in the process of completing a task the robot needs to locate the target accurately before subsequent work such as recognition, detection, grasping and classification. Robot positioning has therefore become a problem of wide concern among researchers [7]. Zhang et al. [8] designed a piecewise-fitting monocular vision ranging method for the positioning of humanoid robots and found through experiments that the average error of the method was 1.7 mm, suggesting relatively accurate positioning. Luo et al. [9] studied the positioning of the picking points of a grape-picking robot based on an improved clustering image-segmentation algorithm. Experiments carried out in the environment of OpenCV 2.3.1 and Visual C++ showed that the coincidence rate between the picking points obtained by the proposed method and the manually set picking points reached 88.33%, which could meet the positioning requirements of the robot. Yan et al. [10] designed a positioning method based on passive radio frequency identification (RFID), carried out simulation experiments, and found that the absolute error of the method was smaller than 10.16 cm and the calculation time was short. Wu et al.
[11] preprocessed images with pixel projection, then located and recognized targets with a deep convolutional neural network (DCNN), and found through experiments that the method was effective and could recognize the type of workpiece quickly. Binocular vision robots are widely applied in the industrial field and have obvious advantages in modern, automated production, and their positioning is an important and difficult problem, so research on it has great practical value. At present, the accuracy of many methods cannot meet the requirements of robot work. To better realize target feature extraction and positioning for a binocular vision robot, this study analyzed machine learning algorithms, designed a method that extracts features using speeded-up robust features (SURF) and positions the target using a back-propagation neural network (BPNN), and carried out simulation experiments. The present study provides theoretical support for the application of the method in practice and contributes to the better application of robots.

2 Positioning principle of binocular vision

Vision is a key technology of robots, enabling them to identify and locate targets. According to the number of cameras used, one or two, robot vision can be divided into monocular and binocular vision. Binocular vision has higher positioning accuracy and can obtain three-dimensional information about the target, so it is applied more extensively [12]. The principle of binocular vision is similar to that of human eyes. Because there is a distance between the left and right eyes, an observed object projects to different positions on the left and right retinas; this deviation in position is the parallax. Binocular vision observes the target through two cameras [14]. From the parallax of the target, its position can be calculated. When the two cameras observe target A at the same time, two image coordinates,

(1) $A_{\mathrm{left}} = (x_{\mathrm{left}}, y_{\mathrm{left}})$,
(2) $A_{\mathrm{right}} = (x_{\mathrm{right}}, y_{\mathrm{right}})$,

are obtained.

According to the trigonometric relations, there is:

(3) $x_{\mathrm{left}} = f\dfrac{x_c}{z_c},\quad x_{\mathrm{right}} = f\dfrac{x_c - B}{z_c},\quad y = f\dfrac{y_c}{z_c}$,

where B refers to the baseline, the distance between the center points of two cameras. If the binocular vision parallax

(4) $D = x_{\mathrm{left}} - x_{\mathrm{right}}$,

then the camera coordinates of A can be expressed as:

(5) $x_c = \dfrac{B\,x_{\mathrm{left}}}{D},\quad y_c = \dfrac{B\,y}{D},\quad z_c = \dfrac{B f}{D}$.

If parallax D, baseline B and focal length f are determined, the coordinates of the target can be obtained.
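The recovery of camera coordinates from Eqs (4)-(5) can be sketched in Python as follows. This is a minimal illustration with made-up values, not the paper's implementation; units of B set the units of the result.

```python
import numpy as np

def triangulate(x_left, y_left, x_right, f, B):
    """Recover camera coordinates of a matched point pair (Eqs 4-5).

    x_left, y_left : image coordinates in the left view
    x_right        : x image coordinate of the match in the right view
    f              : focal length, B : baseline between camera centers
    """
    D = x_left - x_right          # binocular parallax (Eq. 4)
    if D == 0:
        raise ValueError("zero parallax: point at infinity")
    z_c = B * f / D               # depth
    x_c = B * x_left / D
    y_c = B * y_left / D
    return np.array([x_c, y_c, z_c])
```

For example, with f = 1 and B = 0.1, a point imaged at (0.2, 0.1) in the left view and 0.1 in the right view has parallax 0.1 and reconstructs to depth 1.0.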

3 Target feature extraction

For positioning, the target area must be extracted from the two images collected by binocular vision.

Commonly used methods include the scale-invariant feature transform (SIFT) algorithm and the SURF algorithm. SIFT extracts image features accurately, even from scaled and rotated images, but its poor real-time performance makes it difficult to apply in practice. SURF, an improvement on SIFT, overcomes this defect: it extracts features faster while still detecting stable feature points, and therefore performs well in feature extraction [15]. For these reasons, this study extracted target features with the SURF algorithm, based on the texture features of the image.

For image I, one point is set as

(6) $X = (x, y)^{T}$,

then Hessian matrix H (X, σ) of scale σ at X can be expressed as:

(7) $H(X, \sigma) = \begin{pmatrix} L_{xx}(X,\sigma) & L_{xy}(X,\sigma) \\ L_{xy}(X,\sigma) & L_{yy}(X,\sigma) \end{pmatrix}$,

where $L_{xx}(X,\sigma)$ stands for the convolution of the Gaussian second-order partial derivative $\partial^2 g(\sigma)/\partial x^2$ with image I at point X, and similarly for $L_{xy}$ and $L_{yy}$.

A box filter is used to approximate these Gaussian derivatives: Lxx, Lxy and Lyy are replaced by Dxx, Dxy and Dyy to obtain the determinant of the approximate matrix Happrox:

(8) $\det H_{\mathrm{approx}} = D_{xx} D_{yy} - (\bar{\omega} D_{xy})^2$,

where $\bar{\omega}$ stands for a regulation parameter.
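For a single pixel, the blob response of Eq. (8) is just an arithmetic combination of the three box-filter responses. In the sketch below the default weight 0.9 comes from the original SURF paper, not from this study:

```python
def hessian_response(Dxx, Dyy, Dxy, w=0.9):
    """Approximate Hessian determinant of Eq. (8); w plays the role of
    the regulation parameter (0.9 in the original SURF paper)."""
    return Dxx * Dyy - (w * Dxy) ** 2
```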

The SURF algorithm builds an image pyramid using the indirect method, divides the scale space into groups, calculates the image extreme points in each layer of the pyramid, sets a threshold value, and performs non-maximum suppression in a 3×3×3 neighbourhood on the candidate points. Only a point larger than all 26 points in its neighbourhood is kept as a feature point.
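The 3×3×3 non-maximum suppression described above can be sketched as follows. This is a plain-NumPy illustration over a response volume of shape scale × row × column, not the paper's code:

```python
import numpy as np

def local_maxima_3d(responses, threshold):
    """Return a boolean mask of points that exceed `threshold` and are
    maxima of their 3x3x3 scale-space neighbourhood (26 neighbours)."""
    padded = np.pad(responses, 1, constant_values=-np.inf)
    mask = np.zeros(responses.shape, dtype=bool)
    for i in range(responses.shape[0]):
        for j in range(responses.shape[1]):
            for k in range(responses.shape[2]):
                window = padded[i:i + 3, j:j + 3, k:k + 3]
                mask[i, j, k] = (responses[i, j, k] > threshold
                                 and responses[i, j, k] == window.max())
    return mask
```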

To ensure the rotation invariance of the feature vector, a circular region is constructed with the feature point as the center and 6s (s is the scale of the feature point) as the radius. The Haar wavelet responses are computed within this region, and the direction corresponding to the maximum accumulated Haar response is taken as the main direction of the feature point. Then, taking the point as the center, a square region with side length 20s is selected and divided into 16 subregions. Each 5s × 5s subregion is sampled and the Haar wavelet responses are calculated. A four-dimensional vector is then obtained:

(9) $v = \left( \sum d_x,\ \sum |d_x|,\ \sum d_y,\ \sum |d_y| \right)$.

A 64-dimensional descriptor is obtained by concatenating the four-dimensional vectors of the 16 subregions.
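Assembling the 64-dimensional descriptor of Eq. (9) might look like the sketch below, which takes precomputed Haar responses on a 20×20 sample grid as input; the final unit-length normalisation is standard SURF practice rather than something stated in this paper.

```python
import numpy as np

def surf_descriptor(dx, dy):
    """Concatenate (sum dx, sum |dx|, sum dy, sum |dy|) over the 16
    5x5 subregions of a 20x20 response grid -> 64-D vector (Eq. 9)."""
    desc = []
    for i in range(0, 20, 5):
        for j in range(0, 20, 5):
            sx, sy = dx[i:i + 5, j:j + 5], dy[i:i + 5, j:j + 5]
            desc += [sx.sum(), np.abs(sx).sum(), sy.sum(), np.abs(sy).sum()]
    v = np.array(desc)
    return v / np.linalg.norm(v)  # unit length for contrast invariance
```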

Finally, the steps of the SURF-based feature extraction method are as follows. First, the feature points of the template and of the left and right images, Ileft and Iright, are obtained through SURF. Then the feature points that match the template are searched for and put into sets Sleft and Sright. For the extracted points, the corresponding Euclidean distances are calculated. When the ratio of the two minimum Euclidean distances (nearest to second-nearest) is smaller than the threshold value, the matching of the feature points is successful.
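The nearest/second-nearest distance-ratio test in the matching step can be sketched as follows; the 0.7 threshold is a common choice, not a value given in the paper:

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.7):
    """Match rows of desc_a to rows of desc_b: accept the nearest
    neighbour only when it is clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```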

4 Machine learning location algorithm

Machine learning algorithms include the decision tree algorithm, Bayesian algorithms, support vector machines (SVM), etc. In this study, the target was positioned using a neural network. A neural network, which simulates the human brain, performs better than other methods on complex nonlinear problems and also has good adaptivity and fault tolerance, so it can be used for positioning. BPNN [16] is the most widely used neural network; its structure is shown in Figure 1.

Figure 1: BPNN structure

When the target is positioned by BPNN, the image is collected by binocular vision, and the feature points of the target are obtained by feature extraction. The pixel coordinates of the target feature points are taken as the input of BPNN, and the world coordinates of the feature points are the output. The specific steps are as follows.

  1. The input vector of BPNN is set as:

(10) $X = (x_1, x_2, \ldots, x_l)$,

where l is the number of input nodes. The input-to-hidden weights are denoted wij. The input vector of the hidden layer is set as:

(11) $S = (s_1, s_2, \ldots, s_m)$,

where m stands for the number of hidden neurons, and the output vector of the hidden layer is set as:

(12) $Y = (y_1, y_2, \ldots, y_m)$.

Then:

(13) $s_j = \sum_{i=0}^{l} w_{ij} x_i$,
(14) $y_j = f(s_j) = f\!\left( \sum_{i=0}^{l} w_{ij} x_i \right), \quad j = 1, 2, \ldots, m$.
  2. The input vector of the output layer is set as:

(15) $R = (r_1, r_2, \ldots, r_n)$,

and its output vector is set as:

(16) $Z = (z_1, z_2, \ldots, z_n)$,

where n stands for the number of output nodes and the hidden-to-output weights are denoted vjk. Then:

(17) $r_k = \sum_{j=0}^{m} v_{jk} y_j$,
(18) $z_k = f(r_k) = f\!\left( \sum_{j=0}^{m} v_{jk} y_j \right), \quad k = 1, 2, \ldots, n$.
  3. The expected output vector of BPNN is:

(19) $Q = (q_1, q_2, \ldots, q_n)$;

the error between the expected output vector Q and the actual output vector Z is the error signal E:

(20) $E = \dfrac{1}{2} \sum_{k=1}^{n} (q_k - z_k)^2$.

After expansion:

(21) $E = \dfrac{1}{2} \sum_{k=1}^{n} \left[ q_k - f\!\left( \sum_{j=0}^{m} v_{jk}\, f\!\left( \sum_{i=0}^{l} w_{ij} x_i \right) \right) \right]^2$.

The corrections Δvjk and Δwij of the weights follow from gradient descent on E (f′ denotes the derivative of the activation function):

(22) $\Delta v_{jk} = \eta\, (q_k - z_k)\, f'(r_k)\, y_j$,
(23) $\Delta w_{ij} = \eta \sum_{k=1}^{n} (q_k - z_k)\, f'(r_k)\, v_{jk}\, f'(s_j)\, x_i$,

where η stands for the learning coefficient.

  4. When the error signal meets the requirement, the calculation stops and the result, i.e., the world coordinates of the target feature points, is output.
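The four steps above can be condensed into a small NumPy sketch. The sigmoid activation, the batch-averaged updates and the toy data are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 1.0 / (1.0 + np.exp(-x))          # sigmoid activation (assumed)

def train_bpnn(X, Q, m=8, eta=0.5, epochs=500):
    """Single-hidden-layer BPNN trained with the error signal of Eq. (20)
    and gradient updates in the spirit of Eqs (22)-(23), averaged over
    the batch. X: (N, l) inputs, Q: (N, n) targets in (0, 1)."""
    l, n = X.shape[1], Q.shape[1]
    W = rng.normal(0.0, 0.5, (l, m))            # input->hidden weights w_ij
    V = rng.normal(0.0, 0.5, (m, n))            # hidden->output weights v_jk
    losses = []
    for _ in range(epochs):
        Y = f(X @ W)                            # hidden output (Eq. 14)
        Z = f(Y @ V)                            # network output (Eq. 18)
        losses.append(0.5 * np.sum((Q - Z) ** 2))   # error signal E (Eq. 20)
        dZ = (Q - Z) * Z * (1 - Z)              # output delta, f'(r) = z(1-z)
        dY = (dZ @ V.T) * Y * (1 - Y)           # hidden delta
        V += eta * Y.T @ dZ / len(X)            # cf. Eq. (22)
        W += eta * X.T @ dY / len(X)            # cf. Eq. (23)
    return W, V, losses

# toy regression: map 2-D "pixel" inputs to a 1-D coordinate in (0, 1)
X = rng.uniform(size=(50, 2))
Q = (X[:, :1] + X[:, 1:]) / 2.0
W, V, losses = train_bpnn(X, Q)
```

In the actual positioning task the inputs would be the four pixel coordinates of a matched feature-point pair and the outputs the three world coordinates, suitably normalised.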

5 Simulation experiment

The target positioning of the binocular vision robot was tested using the SURF and BPNN method proposed in this study. The target was photographed by two charge-coupled device (CCD) cameras, and programming was performed in C++ in the Visual Studio 2008 environment. First, the feature extraction method was analyzed. Taking an image collected by the camera as an example (Figure 2), the time required for feature extraction was measured with no noise and with noise multiples of 0.01, 0.03, 0.05 and 0.07. The results are shown in Figure 3.

Figure 2: The image of the target

Figure 3: Feature extraction time under different noise conditions

Figure 4: Error changes of SVM

It can be seen from Figure 3 that SURF had the highest feature extraction speed with no noise, taking only 0.245 s. As the noise multiple increased, the calculation time of the algorithm increased gradually, but by a very small amplitude: when the noise reached 0.07 times, the calculation time was 0.271 s, only 0.026 s longer than in the noise-free case, indicating that the feature extraction algorithm designed in this study can meet the real-time requirement.

Two hundred groups of sample images were collected, and the pixel coordinates of the corresponding feature points were extracted, giving 200 groups of sample data. One hundred and fifty groups were used for BPNN training, and the remaining 50 groups for the positioning test. The pixel coordinates pl, ql, pr and qr of the feature points in the left and right images obtained by binocular vision were the input of the BPNN, and the world coordinates xw, yw and zw of the feature points were the output. The BPNN method used in this study was compared with the SVM method [17]. The test results are shown in Table 1.

Table 1: Positioning results

| No. | Expected output (xw, yw, zw) | SVM output (xw, yw, zw) | BPNN output (xw, yw, zw) |
|-----|------------------------------|--------------------------|---------------------------|
| 1 | 271, 110, 0 | 273.1625, 110.918, 0.0435 | 271.3648, 110.0254, 0.0000 |
| 2 | 289, 133, 0 | 291.1452, 134.4194, 0.0546 | 290.3311, 133.5268, 0.0001 |
| 3 | 310, 112, 0 | 311.9584, 113.303, 0.0514 | 310.2587, 111.5896, 0.0000 |
| 4 | 456, 128, 0 | 457.8526, 129.5284, 0.0425 | 455.9658, 128.6358, 0.0000 |
| 5 | 359, 136, 0 | 361.0215, 137.6771, 0.0442 | 358.1258, 136.7845, 0.0001 |
| 6 | 224, 127, 0 | 225.4962, 128.7792, 0.0454 | 225.2615, 127.8866, 0.0001 |
| 7 | 268, 109, 0 | 270.8219, 109.9513, 0.0485 | 268.1258, 109.0587, 0.0001 |
| 8 | 316, 129, 0 | 318.4733, 130.4213, 0.0436 | 315.6258, 129.5287, 0.0000 |
| 9 | 478, 164, 0 | 481.2193, 166.578, 0.0465 | 478.5264, 165.6854, 0.0000 |
| 10 | 532, 151, 0 | 535.5889, 152.3678, 0.0485 | 531.6589, 150.5248, 0.0000 |
| 11 | 276, 154, 0 | 278.5819, 155.278, 0.0484 | 277.3256, 154.3854, 0.0001 |
| 12 | 289, 167, 0 | 290.8136, 169.1523, 0.0436 | 288.1028, 168.2597, 0.0000 |
| … | … | … | … |
| 50 | 315, 156, 0 | 316.4565, 157.1201, 0.0462 | 315.9215, 155.6914, 0.0001 |

It can be seen from Table 1 that the output of SVM differed slightly from the expected output, while the actual output of BPNN was very close to the expected output; in particular, the error of the zw output was almost 0. The error of each positioning trial of the two methods was computed, and the results are shown in Figures 4 and 5.

Figure 5: Error changes of BPNN

After calculation, it was found that the maximum errors of xw, yw and zw for SVM were 3.5889, 2.7437 and 0.0546 respectively, and the minimum errors were 0.8732, 0.7694 and 0.0411 respectively; for BPNN, the maximum errors of xw, yw and zw were 1.9954, 1.9812 and 0.0001 respectively, and the minimum errors were 0.0280, 0.0069 and 0.0000 respectively. These results show that the error of SVM was significantly larger than that of BPNN, i.e., the positioning precision of BPNN was higher. The error of BPNN in the X and Y directions was slightly larger, while the error in the Z direction was almost 0. The error curves show that the error was always smaller than 2, which can meet the positioning requirements of robots.

6 Discussion

Machine learning draws on many fields of knowledge, such as probability theory and statistics, and is a very important research direction in artificial intelligence [18]. It has been widely used to solve complex engineering and scientific problems, such as natural language processing [19], pattern recognition [20], biological information processing [21] and machine vision [22]. Machine learning algorithms include decision trees [23], random forests [24], Bayesian methods [25], etc. In this study, the target positioning method was designed using the BPNN method.

Before a robot can position a target, feature extraction of the target is needed. In this study, target features were extracted with the SURF method, and the pixel coordinates of the extracted feature points were then used as the input of a BPNN to locate the target. The experimental results showed that the SURF method performed well in target feature extraction: it completed extraction in a short time, around 0.2 s, and was little affected by noise. As seen from Figure 3, the calculation time increased only slightly as the noise increased. The positioning experiment showed that the positioning precision of BPNN was significantly superior to that of SVM: the position coordinates output by BPNN were very close to the actual coordinates, and the errors in the three directions were very small, which can meet the needs of robots well in actual work.

This study found that the SURF method and the BPNN model performed well and had strong applicability to the robot positioning problem; however, some limitations need to be addressed in future work:

  1. a comparative study of more machine learning methods was not carried out;

  2. the BPNN method was not further optimized to enhance positioning precision.

7 Conclusion

To solve the robot target location problem, target features were extracted with the SURF method, and the target was then located with the BPNN model. The experiments showed that:

  1. the SURF method was little affected by noise and had a high extraction speed, performing well in target feature extraction;

  2. the positioning result of the BPNN method was superior to that of SVM, with very small errors; the errors in the X and Y directions were slightly larger, but never more than 2.

The experimental results verify the effectiveness of the proposed method for target feature extraction and location, so it can be promoted and applied in practice.

References

[1] E. Clotet, D. Martínez, J. Moreno, M. Tresanchez and J. Palacín, Development of a High Mobility Assistant Personal Robot for Home Operation, Adv. Intell. Syst. Comput. 376 (2015), 65-73. doi:10.1007/978-3-319-19695-4_7

[2] R. J. Guo and J. S. Zhao, Topological principle of strengthened connecting frames in the stretchable arm of an industry coating robot, Mech. Mach. Theory 114 (2017), 38-59. doi:10.1016/j.mechmachtheory.2017.03.017

[3] S. Erfani, A. Jafari and A. Hajiahmad, Comparison of two data fusion methods for localization of wheeled mobile robot in farm conditions, Artif. Intell. Agric. 1 (2019), 48-55. doi:10.1016/j.aiia.2019.05.002

[4] K. Zhang, X. Huang, Y. Gao, W. Liang, H. Xi, et al., Robot-Assisted Versus Laparoscopy-Assisted Proximal Gastrectomy for Early Gastric Cancer in the Upper Location, Cancer Control 25 (2018), 1073274818765999. doi:10.1177/1073274818765999

[5] S. Y. Choi and J. H. Yang, Analysis of the Human Performance and Communication Effects on the Operator Tasks of Military Robot Vehicles by Using Extended Petri Nets, Korean J. Comput. Design Eng. 22 (2017), 162-171. doi:10.7315/CDE.2017.162

[6] H. K. Kim, H. S. Sim and W. J. Hwang, A Study on a Path Planning and Real-Time Trajectory Control of Autonomous Travelling Robot for Unmanned FA, J. Korean Soc. Ind. Converg. 19 (2016), 75-80. doi:10.21289/KSIC.2016.19.2.075

[7] V. Belle and H. J. Levesque, Robot location estimation in the situation calculus, J. Appl. Logic 13 (2015), 397-413. doi:10.1016/j.jal.2015.02.004

[8] L. Zhang, H. Liu, C. Luo, G. Bian and W. Wu, Target recognition of indoor trolley for humanoid robot based on piecewise fitting method, Int. J. Adapt. Control 33 (2019), 1319-1327. doi:10.1002/acs.2994

[9] L. Luo, X. Zou, J. Xiong, Y. Zhang, H. Peng, et al., Automatic positioning for picking point of grape picking robot in natural environment, Trans. Chin. Soc. Agric. Eng. 31 (2015), 14-21.

[10] L. Yan and D. Xiong, Mobile motion robot indoor passive RFID location research, Int. J. RF Tech. Res. Appl. 9 (2018), 113-129. doi:10.3233/RFT-17101

[11] X. Wu, X. Ling and J. Liu, Location Recognition Algorithm for Vision-Based Industrial Sorting Robot via Deep Learning, Int. J. Pattern Recogn. 33 (2019), 1955009. doi:10.1142/S0218001419550097

[12] P. Ueareeworakul and S. Saiyod, Obstacle detection algorithm for unmanned aerial vehicles using binocular stereoscopic vision, 2017 9th International Conference on Knowledge and Smart Technology (KST) (2017), 332-337.

[14] J. Turski, On binocular vision: The geometric horopter and Cyclopean eye, Vision Res. 119 (2016), 73-81. doi:10.1016/j.visres.2015.11.001

[15] G. Dan, M. A. Khan and V. Fodor, Characterization of SURF and BRISK Interest Point Distribution for Distributed Feature Extraction in Visual Sensor Networks, IEEE T. Multimedia 17 (2015), 591-602. doi:10.1109/TMM.2015.2406574

[16] J. Fan, J. Zhong, J. Zhao and Y. Zhu, BP neural network tuned PID controller for position tracking of a pneumatic artificial muscle, Tech. Health Care 23 (2015), S231-S238. doi:10.3233/THC-150958

[17] D. Chen, L. Wang and L. Li, Position computation models for high-speed train based on support vector machine approach, Appl. Soft Comput. 30 (2015), 758-766. doi:10.1016/j.asoc.2015.01.017

[18] M. I. Jordan and T. M. Mitchell, Machine learning: Trends, perspectives, and prospects, Science 349 (2015), 255-260. doi:10.1126/science.aaa8415

[19] Y. Ni, D. Barzman, A. Bachtel, M. F. Griffey, A. Osborn, et al., Finding warning markers: Leveraging natural language processing and machine learning technologies to detect risk of school violence, Int. J. Med. Inform. 139 (2020), 104137. doi:10.1016/j.ijmedinf.2020.104137

[20] D. S. Bulgarevich, S. Tsukamoto, T. Kasuya, M. Demura and M. Watanabe, Pattern recognition with machine learning on optical microscopy images of typical metallurgical microstructures, Sci. Rep. 8 (2018), 2078. doi:10.1038/s41598-018-20438-6

[21] N. Kriegeskorte, Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing, Ann. Rev. Vis. Sci. 1 (2015), 417-446. doi:10.1146/annurev-vision-082114-035447

[22] R. Shanmugamani, M. Sadique and B. Ramamoorthy, Detection and classification of surface defects of gun barrels using computer vision and machine learning, Measurement 60 (2015), 222-230. doi:10.1016/j.measurement.2014.10.009

[23] Y. M. Shi, W. W. Chen and Y. F. Zhu, Study on Prediction Model of Number of Rainstorm Days in Summer Based on C5.0 Decision Tree Algorithm, Meteorol. Environ. Res. 10 (2019), 60-64.

[24] M. G. Appley, S. Beyramysoltan and R. A. Musah, Random Forest Processing of Direct Analysis in Real-Time Mass Spectrometric Data Enables Species Identification of Psychoactive Plants from Their Headspace Chemical Signatures, ACS Omega 4 (2019), 15636-15644. doi:10.1021/acsomega.9b02145

[25] N. Bin, J. W. Wu and F. Hu, Spam Message Classification Based on the Naïve Bayes Classification Algorithm, IAENG Int. J. Comput. Sci. 46 (2019), 46-53.

Received: 2020-07-15
Accepted: 2020-10-20
Published Online: 2020-12-31

© 2020 L. C. Li, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
