
Broadcast speech recognition and control system based on Internet of Things sensors for smart cities

  • Min Qin, Ravi Kumar, Mohammad Shabaz, Sanjay Agal, Pavitar Parkash Singh and Anooja Ammini
Published/Copyright: October 28, 2023

Abstract

With the wide popularization of Internet of Things (IoT) technology, the design and implementation of intelligent speech equipment have attracted increasing attention from researchers. Speech recognition is one of the core technologies for controlling intelligent mechanical equipment. An industrial IoT sensor-based broadcast speech recognition and control system is presented to address the problem of integrating broadcast speech recognition and control with IoT sensors for smart cities. In this work, a design approach for an intelligent voice control system based on the Robot operating system (ROS) is provided. The speech recognition control program for ROS is built with the Baidu intelligent voice software development kit, and the experiment is run on a specific robot platform. ROS uses communication modules to implement network connections between the various system modules, mostly via topic-based asynchronous data transmission; the processes that make up ROS communicate over a point-to-point network structure. The hardware consists mainly of the main controller, a motor drive module, a power module, a WiFi module, a Bluetooth module, a laser ranging module, etc. According to the experimental findings, the control system can identify the gathered sound signals, translate them into control instructions, and then direct the robot platform to carry out the necessary actions in accordance with those instructions. Over 95% of speech is recognized correctly. The control system has a high recognition rate and is simple to use, which is what most industrial controls require. It has significant implications for the advancement of control technology and may considerably increase production and life efficiency.

1 Introduction

As we speak much faster than we write, voice-first technology not only increases safety but also improves the customer experience by making communication more convenient. Communication has improved by allowing spoken commands to be sent to touchless control devices. Beyond the low-effort, high-comfort experience for users, several other factors support the shift toward voice-first, touchless interaction: devices with speech recognition are portable and available anywhere; smart speakers are widely adopted in contemporary homes; advances in natural language processing (NLP) enable sentiment analysis and broad context understanding using machine learning; and developments in artificial intelligence provide tailored experiences for the now-widespread voice-controlled IoT devices.

The IoT is growing rapidly. As one of the most cutting-edge technologies, the IoT may be utilized for managing, connecting, and monitoring devices linked to the Internet. Obtaining an adequate range of IoT devices, security, and related IoT device services for everyday life, including proposing novel use-case value propositions, are some of the major challenges [1]. A hyper-connected environment, in which items are connected to one another, to mobile devices, and to the Internet, has emerged as a result of the rapid expansion of information technology (IT). The core of this highly linked network is the IoT, also known as machine-to-machine communication [2].

The IoT has been used to integrate data and voice into IoT applications, such as voice over long-term evolution, offering a flexible way to support human interaction and communication; with proper implementation and consideration of cost and human factors, it can provide a more cost-effective user interface than traditional methods such as touch screens or manual data input. In [3], the Android framework and Firebase as the cloud back end are used as a paradigm for smart home automation, with restrictions on each device imposed by the Arduino NodeMCU Wi-Fi module.

As an emerging strategic industry, the Industrial Internet of Things (IIoT) serves as an important foundation and core technology in industrial and agricultural production, transportation, and other fields. Sensor technology plays an important role in the development of the IIoT [4]. On this basis, it is necessary to conduct a comprehensive and in-depth study of the application of intelligent sensors in the IoT, to achieve continuous optimization and upgrading of the IIoT [5].

The user end of the IoT can be effectively expanded and extended to all objects; its ultimate goal is to connect things with people and things with things. Through the effective connection between the network and all objects, information can be exchanged, and effective identification, management, and control can then be achieved [6]. With the continuous development and progress of society, the unique advantages of the IoT are gradually becoming apparent [7].

The IoT, the enabling technology that has made widespread digitization possible and produced the idea of smart cities, is at the center of smart city projects. For devices to transfer data to the cloud and receive instructions for carrying out tasks, they must be constantly connected to the Internet; this phenomenon is known as the “Internet of Things.” In the IoT, data are gathered and analytics operations are carried out to extract information that aids decision- and policy-making. According to estimates, there will be more than 75 billion Internet-connected devices by 2025 [8].

Speech is the form of interaction most consistent with natural human habits, and with the development of speech interaction technology, its application in human–machine interfaces and multimedia is increasingly common. Language is the most natural and effective means of human communication and, among the many carriers of information, is the signal with the largest amount of information and the highest level of intelligence. This requires that the robot have the capability of natural language understanding (NLU). A lexer can be used to realize various methods of lexical analysis, such as right-linear grammars and regular-expression automata, which are equivalent in descriptive power but differ in complexity [9]. Considering the simple and elegant character of regular expressions, the system uses regular expressions for lexical analysis to realize NLU [10].

2 Literature review

The essence of a sensor is a detection device that can accurately perceive the quantity to be measured and transform the relevant information into electrical signals or other forms suitable for transmission. With the effective support of sensors, automatic detection and control are comprehensively enhanced. As IT develops further, Internet technology keeps pace with it, and to fully meet the demands of IoT development, a high-performance microprocessor and intelligence technology are combined with traditional sensors to derive a new class of intelligent sensors [11]. With the current rapid development of high and new technologies, intelligent sensors can be realized in the following ways: the first is the non-integrated approach, in which the traditional sensor, the signal processing circuit, and the microprocessor are combined scientifically and reasonably to create an intelligent sensing system [12].

Since the beginning of the new century, and especially with the rapid development of artificial intelligence in 2017, artificial intelligence has ushered in a genuinely new era, and intelligent voice, as an important branch of artificial intelligence, will become the next outlet and key direction. With the rapid development of robot technology, traditional control methods such as the joystick, keyboard, and handle are gradually being replaced by voice control because of their complicated operation and low efficiency. Owing to its advantages, such as high control efficiency and simple operation, voice control has been widely used.

For 8 years running, China has dominated the global market for industrial robots. This is part of a larger national effort to deal with an aging population and to use cutting-edge technologies to drive industrial upgrading. Rapid advances in artificial intelligence and robotics have begun to resolve major productivity issues while accelerating the replacement of human labor by robots, which presents enormous challenges for the human workforce. Because intelligent robots influence labor supply and demand, the wage equilibrium may shift. Intelligent robots have the potential to replace humans in several occupations in the near future, lowering wages, and over time the contribution of labor to national income will decline as intelligent robots take over more work. Increased minimum wages have a substantial influence on the deployment of robots by more productive, private, skilled-labor-intensive, coastal businesses [13].

Controlling and monitoring complex autonomous and semi-autonomous robotic systems is a difficult task. The Robot operating system (ROS) was created to operate as robotic middleware running on Ubuntu Linux. It permits, among other things, hardware abstraction, message passing between individual processes, and package management. However, there is a lack of active support for ROS apps on smartphones and tablets. The ROS visualization tool (rviz), which is included in a full desktop installation, is used to view robot data, set marker locations, or define boundary perimeters. To make it easier to interact with various robotic systems, an Android application is needed that offers a variety of control and visualization options similar to rviz and is also adaptable and simple to use. Unfortunately, rviz is not available for mobile devices running Android, which account for 75% of all mobile devices [14].

In this article, an intelligent voice control system based on the ROS platform is designed, and the speech recognition software is built with the Baidu voice recognition software development kit (SDK). The control system can recognize voice commands and convert them into text information, issue the corresponding instruction according to the words in the received text, and control the movement of the intelligent robot. Voice-based control overcomes the shortcomings of traditional robot operation, such as tedious procedures and low efficiency, and provides a basis for flexible control of intelligent robots.

3 Method

3.1 Current status of relevant technology development

3.1.1 ROS introduction

As a complex research object involving multiple nodes and multiple tasks, a robot needs a matching set of systems, including physical peripherals, a human–machine interface, communication interfaces, and simulation tools, to complete the corresponding design [15]. Since the beginning of the new century, the field of robot control has been changing with each passing day, and traditional robot design methods are no longer suited to the requirements of the times [16]. In 2010, the open-source ROS was released, providing a new solution for the development of software and hardware for robot systems.

With the development of science and technology, the social division of labor has become progressively finer, which is reflected in the multilevel character of robot-related hardware: DC motors, sensors, drivers, controllers, and other components come from different manufacturers. ROS, as an operating system suited to robot programming, can integrate the various components of a robot in a loosely coupled way, providing a communication architecture for the interaction between robot subsystems.

As shown in Figure 1, ROS, often described as a meta-operating system, differs from operating systems in the conventional sense. As an open-source system, ROS organizes the various environmental control decisions of a robot. The publish–subscribe messaging architecture offered by ROS is meant to facilitate the rapid and simple creation of distributed computing systems. It offers a wide range of tools for setting up, launching, introspecting, troubleshooting, visualizing, logging, testing, and terminating distributed computing systems. With a focus on mobility, manipulation, and perception, it also offers a large selection of libraries that implement practical robot functions. With a strong emphasis on integration and documentation, a sizable community supports and advances ROS. The hundreds of ROS packages offered by developers worldwide may be found and learned about at ros.org, which serves as a one-stop shop for this purpose.

Figure 1: Comparison of operating systems.

As shown in Figure 2, ROS uses communication modules to realize the network connections among the various modules in the system, mainly through topic-based asynchronous data communication. ROS is composed of a variety of processes that communicate with each other through a point-to-point network structure.

Figure 2: Communication structure of ROS.

The peer-to-peer communication mode of ROS can greatly reduce the computational pressure caused by multiple complex functions, especially in multi-robot scenarios. In addition, ROS supports multiple programming languages, and its language-neutral framework structure can accommodate the preferences of different developers [17].
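As a concrete illustration of the topic-based asynchronous communication described above, the following minimal sketch shows a rospy publisher and subscriber pair. The node names, the /voice_cmd topic, and the plain-text payload are illustrative assumptions, not the interfaces used on the authors' platform.

# Minimal rospy talker/listener sketch (topic name and message type are assumed).
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node('voice_cmd_talker')
    pub = rospy.Publisher('/voice_cmd', String, queue_size=10)
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data='forward'))  # asynchronous: no reply is expected
        rate.sleep()

def on_command(msg):
    rospy.loginfo('received command: %s', msg.data)

def listener():
    rospy.init_node('voice_cmd_listener')
    rospy.Subscriber('/voice_cmd', String, on_command)
    rospy.spin()  # messages arrive over peer-to-peer connections negotiated via the master

Run the two functions in separate nodes; the ROS master only brokers the connection, after which the data flow directly between the processes.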

ROS incorporates existing open-source project code, bringing a wide variety of functional projects into its development architecture [18]. Compared with the traditional, complex way of constructing robot systems, this is more convenient and faster.

3.1.2 Introduction to intelligent speech technology

Since the beginning of the 21st century, advances in machine learning algorithms and computer performance have brought about more efficient methods for training deep neural networks, which exhibit better accuracy and performance than traditional models. Voice interaction with NLU has been thoroughly investigated on desktop computers, mobile devices, and human–robot interfaces. However, limited research has been done on voice interaction with NLU in augmented reality (AR). In AR, adopting voice interaction has advantages such as high naturalness and being hands-free. Because NLU supports a broader grammar, it may be implemented with a more flexible speech interface. We look at how much this flexibility may increase interaction accuracy when using voice in augmented reality. The NLU component determines the intent of a user's voice command. Without prior constraints, users can use different verbalizations of the same command. Intents, entities, keywords, and semantic interpretations are all automatically extracted from text using NLU. NLU is frequently used in combination with voice recognition to examine the text extracted from spoken language. Deep learning techniques such as the attention-based encoder–decoder and the recursive autoencoder form the foundation of most NLU algorithms.

With the application of deep learning and the continuous progress of natural speech algorithms, speech recognition technology has developed greatly. The speech recognition rate of some companies has reached 97% in quiet environments, and the stability and recognition rate have met the requirements for wide application. Meanwhile, the rapid development of NLP technology has greatly reduced the difficulty for machines to understand the semantics of human speech and improved their ability to communicate with people. The principle of speech recognition is shown in Figure 3.

Figure 3: Principles of voice control flow in speech recognition.

3.2 Overall design of the system

Based on a summary of existing research achievements, combined with real application scenarios and the development status of intelligent speech recognition technology, this article exploits the advantages and characteristics of ROS to design an intelligent speech control system. The system is composed of software and hardware, and its structure is shown in Figure 4.

Figure 4: Software and hardware of the control system.

The hardware part mainly comprises the main controller, a motor drive module, a power module, a WiFi module, a Bluetooth module, a laser ranging module, etc. [19]. The main controller on the robot platform receives the control signal sent by the PC and directly or indirectly controls the movement or the operation of a particular function of the robot platform [20].

3.3 Hardware constitution

The intelligent speech robot platform uses Ubuntu as its operating system, and the ROS development environment is configured on top of it. The ROS-based intelligent speech robot is an easy-to-assemble, low-cost platform on which the required applications can be built, and it offers good overall performance.

The robot operating platform is widely used in enterprise, teaching, and research settings during early-stage development. It adopts a highly integrated modular design and can be improved and customized according to user needs, from the hardware and software to the content and applications.

The hardware components of the robot platform mainly include the controller, driver, light detection and ranging (LiDAR), DC motor, etc.

Master controller: The robot platform uses the Raspberry Pi 3B, a tiny ARM-based motherboard that is about the size of a credit card but contains almost all the basic functions of a PC. This controller runs Ubuntu MATE 16.04 [21]. The main controller accepts the control instructions sent by the PC and completes the corresponding actions through the drive board to realize the various functions [22].

Motor drive module: The platform uses an STM32 as the drive board. The drive board accepts the commands issued by the Raspberry Pi (the main control end) and drives the DC motors to control the movement of the robot platform, realizing basic functions such as moving forward and backward and turning left and right, and, on this basis, functions such as turning around and tracking.
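The article does not specify the wire protocol between the Raspberry Pi and the STM32 drive board, so the following is only a sketch under the assumption of a newline-terminated ASCII command set sent over a USB serial link with pyserial; the port name, baud rate, and command strings are placeholders.

import serial  # pyserial

# Hypothetical link between the Raspberry Pi main controller and the STM32 drive board.
def send_drive_command(command: str, port: str = '/dev/ttyUSB0', baud: int = 115200) -> None:
    """Send one newline-terminated motion command, e.g. 'FWD', 'BACK', 'LEFT', 'RIGHT', 'STOP'."""
    with serial.Serial(port, baud, timeout=1) as link:
        link.write((command + '\n').encode('ascii'))

# Example: drive forward, then stop.
# send_drive_command('FWD')
# send_drive_command('STOP')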

Laser ranging module: The main sensor is a simultaneous localization and mapping (SLAM) LiDAR. Using the ranging principle, point clouds from two different times are matched and compared, and the distance and attitude changes of the radar relative to its movement are calculated to complete localization. Satellite laser ranging (SLR) is the only space geodetic approach in which the zenith total delay (ZTD) and the slant tropospheric delays are determined purely from in situ meteorological measurements used as fixed values; the mapping function, and consequently the slant delay, in SLR relies on the temperature records gathered concurrently with the laser ranging data. The ZTD in SLR depends on the air pressure, the humidity, and the location of the SLR station. This approach is explained by the comparatively few observations made in SLR during its early years of operation, which precluded determining the ZTD directly, and by the relatively low sensitivity of optical wavelengths to wet delays in comparison with microwave approaches [23].

With more precise and expansive approaches, optical measuring systems now provide new ways to measure distance, deformation, or vibration. Numerous components, particularly optical ones, have been improved because of technological advancements. Thus, it is imperative to create fundamental measuring techniques in order to keep up with technological growth. The three kinds of optical distance measuring techniques are interferometry, telemetry, and triangulation. Telemetry is based on calculating the so-called time-of-flight. With the help of theoretical knowledge about the detection range and a grasp of fundamental physical principles, optical measuring systems have made significant developments in recent years. New breakthroughs in measuring techniques have resulted from advancements in the production of lasers, integrated optical devices, and electronic transmitters and receivers [24].

3.4 Software module

Based on the characteristics of the detailed and multilevel design of the robot industry, the overall architecture of the robot platform is designed as shown in Figure 5. In the PC, the system should complete the functions of voice interaction and monitoring, control the movement of the robot with real-time voice, and monitor the movement status of the robot.

Figure 5: Architecture of the robot platform.

Monitoring module: Multiple sensors can be configured on the robot platform. Sensors such as the LiDAR, camera, and gyroscope constantly generate data during operation, and the control system should process the data transmitted by each sensor in a timely manner. To better detect whether the running status of each node is normal, it is necessary to monitor the running data of each node in real time and raise an alarm when the data are abnormal.
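A minimal sketch of this monitoring idea is given below, assuming the LiDAR and gyroscope publish on the conventional /scan and /imu topics; the node name and the 2-second staleness threshold are arbitrary illustrative values.

import rospy
from sensor_msgs.msg import LaserScan, Imu

# Track the last time each sensor published and warn when data stop arriving.
last_seen = {}

def make_callback(name):
    def cb(_msg):
        last_seen[name] = rospy.get_time()
    return cb

def monitor():
    rospy.init_node('sensor_monitor')
    rospy.Subscriber('/scan', LaserScan, make_callback('lidar'))
    rospy.Subscriber('/imu', Imu, make_callback('gyroscope'))
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        now = rospy.get_time()
        for name, stamp in last_seen.items():
            if now - stamp > 2.0:  # no data for 2 s -> raise an alarm
                rospy.logwarn('%s data are stale (%.1f s old)', name, now - stamp)
        rate.sleep()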

Voice control module: A launch file starts the Baidu voice recognition node, which obtains voice commands, converts them into text, and publishes them to the corresponding topic. The robot control node receives the text messages by subscribing to that topic and then commands the main controller on the robot platform to control the movement of the platform.
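The following sketch illustrates this subscribe-then-command pattern; the /recognized_text topic, the /cmd_vel velocity interface, and the English command vocabulary are assumptions made for illustration only.

import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

# Map recognized words to (linear, angular) velocities; vocabulary is illustrative.
COMMANDS = {
    'forward':  (0.2, 0.0),
    'backward': (-0.2, 0.0),
    'left':     (0.0, 0.5),
    'right':    (0.0, -0.5),
    'stop':     (0.0, 0.0),
}

def on_text(msg, pub):
    linear, angular = COMMANDS.get(msg.data.strip().lower(), (0.0, 0.0))
    twist = Twist()
    twist.linear.x = linear
    twist.angular.z = angular
    pub.publish(twist)

def control_node():
    rospy.init_node('voice_control')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/recognized_text', String, on_text, callback_args=pub)
    rospy.spin()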

Navigation module: The parameters of the navigation function package are configured according to the shape and size of the robot. When the robot platform is running ROS, the transform tree and sensor data are published as ROS message types to realize the autonomous navigation function of the robot platform.
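As a sketch of publishing the transform tree and odometry that the ROS navigation stack consumes, the snippet below broadcasts an odom→base_link transform and a matching Odometry message; the frame names and the source of the pose estimate are assumptions.

import rospy
import tf
from nav_msgs.msg import Odometry

def publish_pose(x, y, yaw, broadcaster, odom_pub):
    now = rospy.Time.now()
    quat = tf.transformations.quaternion_from_euler(0.0, 0.0, yaw)
    # Transform tree: odom -> base_link
    broadcaster.sendTransform((x, y, 0.0), quat, now, 'base_link', 'odom')
    # Matching odometry message on /odom
    odom = Odometry()
    odom.header.stamp = now
    odom.header.frame_id = 'odom'
    odom.child_frame_id = 'base_link'
    odom.pose.pose.position.x = x
    odom.pose.pose.position.y = y
    odom.pose.pose.orientation.x = quat[0]
    odom.pose.pose.orientation.y = quat[1]
    odom.pose.pose.orientation.z = quat[2]
    odom.pose.pose.orientation.w = quat[3]
    odom_pub.publish(odom)

def odom_node():
    rospy.init_node('odom_publisher')
    broadcaster = tf.TransformBroadcaster()
    odom_pub = rospy.Publisher('/odom', Odometry, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        # In a real system, x, y, yaw would come from wheel encoders or the LiDAR SLAM node.
        publish_pose(0.0, 0.0, 0.0, broadcaster, odom_pub)
        rate.sleep()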

3.5 Automatic speech recognition (ASR)

At present, most intelligent voice control systems adopt hardware modules as the speech recognition unit. Commonly used ones are the LD3320 speech recognition chip from ICROUTE and the YQ5969 series intelligent voice control module from Shenzhen Renmai Information. These are speaker-independent speech recognition chips that recognize a fixed command set in a single language regardless of the speaker's age and gender. Such chips increase the hardware cost of the control system, and their recognition rate is not high, which leads to unstable speech recognition performance.

ROS has some available speech tools, such as Sphinx and Festival, but they are not widely used because of their low recognition accuracy and poor adaptability to dialects.

The Baidu intelligent speech development platform provides an intelligent speech SDK, which makes secondary development more convenient and allows the Baidu voice application programming interface to be used to build speech recognition and text-to-speech (TTS) function packages, so that speech recognition, synthesis, and other functions can be realized easily. The robot control node receives written instructions by subscribing to the corresponding topic and sends them to the main controller of the robot platform, which controls the movement of the DC motors through the drive module, thereby completing the voice control. The voice control process is shown in Figure 6.
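A hedged sketch of invoking Baidu speech recognition from Python is shown below, assuming the publicly available baidu-aip SDK; the credentials are placeholders, and the dev_pid language-model parameter is only an example value, not necessarily the configuration used by the authors.

from aip import AipSpeech  # baidu-aip SDK (assumed available)

# Placeholder credentials for the Baidu speech service.
APP_ID, API_KEY, SECRET_KEY = 'your-app-id', 'your-api-key', 'your-secret-key'
client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)

def recognize(wav_path: str) -> str:
    """Return the recognized text for a 16 kHz mono WAV recording, or '' on failure."""
    with open(wav_path, 'rb') as f:
        audio = f.read()
    result = client.asr(audio, 'wav', 16000, {'dev_pid': 1537})  # 1537: Mandarin model (example)
    if result.get('err_no', 0) == 0 and result.get('result'):
        return result['result'][0]
    return ''

The recognized string can then be published on the speech recognition topic for the control node described above.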

Figure 6: Voice control flow.

3.6 Software testing

The software module is tested mainly for the speech recognition software and the control programs. The commonly used parameter and evaluation standard in speech recognition is the word error rate (WER). To align the recognized word sequence with the standard word sequence, certain words must be substituted, deleted, or inserted. The WER is the total number of inserted, substituted, and deleted words divided by the number of words in the standard word sequence, expressed as a percentage, and is calculated as follows:

\[
\mathrm{WER} = \frac{S + I + D}{N} \times 100\% \qquad (1)
\]

where S is the number of substituted words, I the number of inserted words, D the number of deleted words, and N the total number of words in the standard sequence. Some words may be omitted or mistranscribed when voice recognition software converts spoken words into text. WER counts the differences between the reference transcript and the predicted output by comparing the two word by word. When calculating WER, three types of mistakes are taken into account (a minimal implementation is sketched after the list below):

  • Insertion: the predicted output contains extra words not found in the reference transcript;

  • Deletion: words that exist in the reference transcript are missing from the predicted output;

  • Substitution: words in the reference transcript are replaced by incorrect words in the predicted output.
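For reference, a minimal word-level edit-distance implementation of Eq. (1) might look as follows; it takes the minimum total number of substitutions, insertions, and deletions as the error count.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (S + I + D) / reference length, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return 100.0 * d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substitution in a five-word reference gives WER = 20%.
# wer("turn left and move forward", "turn right and move forward")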

ASR software with a lower WER is generally more accurate at recognizing speech; a higher WER therefore denotes lower ASR accuracy. The test of the control system is carried out on the robot platform. Six groups of 100 voice signals are collected and input to the system for testing. For the different control commands, the words obtained from each speech recognition run are recorded [25]. The output word information is compared with the input voice command, and the numbers of deleted, substituted, and inserted words are counted [26].

4 Results and discussion

WER measures how many word-level mistakes there are in a transcript relative to its length. A lower WER means better speech-to-text accuracy. When transcribing sentences and paragraphs of meaningful words (such as from book pages or newspaper articles), WER is especially useful. It indicates how many words must be inserted, deleted, or substituted to change one statement into another:

\[
\mathrm{WER} = \frac{N_{\mathrm{inserted}} + N_{\mathrm{deleted}} + N_{\mathrm{substituted}}}{N_{\mathrm{words\;in\;the\;reference}}}
\]

WER is a standard statistic for assessing how accurately speech recognition APIs create transcripts. Along with powering complex IVRs and voice-activated devices such as the Amazon Echo, speech recognition APIs are used to extract useful information from vast amounts of audio data. By constructing a test, the precision of a custom model can be checked. The test requires a number of audio recordings and the transcriptions that go with them. The precision of a custom model can be assessed against a baseline speech-to-text model or another custom model. After receiving the test results, the WER is analyzed in relation to the outcomes of the voice recognition test. WERs of 5–10% are regarded as high quality and suitable for use. A WER of around 20% is acceptable, but further training should be considered. A WER of 30% or more indicates poor quality and necessitates customization and training.

WER and accuracy here were calculated under different input commands, as shown in Table 1.

Table 1

WER and accuracy of instruction identification

Voice command          S    I    D    WER (%)    Accuracy (%)
Left-handed rotation   0    2    1    3          95
Right-handed rotation  1    0    2    3          95
Forward                1    2    2    5          97
Backward               2    0    1    3          95
Spin                   1    2    2    5          99
Stop                   0    1    0    1          97

According to the experimental results in Figure 7, the WER of speech recognition for the six groups of different voice commands is within 5% in every case. Compared with a speech recognition chip used as the speech recognition unit, speech recognition software built on the speech recognition SDK has higher efficiency, a lower hardware cost, and a lower WER [27,28]. To call the speech service through the Speech SDK, a SpeechConfig object must be constructed; this class holds details about the subscription, such as the key and any related endpoint, host location, or authorization tokens. Speech recognition uses both acoustic and language modeling techniques. Acoustic modeling describes the link between audio signals and the linguistic units of speech, whereas language modeling connects sounds to word sequences to help distinguish between similar-sounding words or phrases [29,30]. Furthermore, hidden Markov models are frequently utilized to capture temporal speech patterns and hence increase system accuracy.

Figure 7: Experimental accuracy analysis diagram.

5 Conclusion

This article proposes a speech recognition and control system for broadcasting based on IIoT sensors. Speech recognition and robot control nodes are programmed on the robot platform using ROS: human voice signals are collected, the speech recognition node processes the speech signals, and the robot control node receives the resulting commands and controls the robot's movement. A voice-controlled system frees human hands, solves the problems of cumbersome traditional robot operation and system delay, reduces the need for extensive interpersonal interaction, greatly improves the efficiency of production regulation, and is better suited to the needs of market development. The experimental results show that the control system can recognize the acquired sound signals, convert them into control instructions, and then direct the robot platform to move according to those instructions. The speech recognition rate exceeds 95%. The system's high recognition rate and ease of use meet the requirements of most industrial systems. It potentially has a substantial impact on production and life efficiency, as well as on the development of control technologies, and the module responds comparatively quickly. The major limitation of this research lies in the losses that may arise from the use of various devices and from the interaction between machines and humans. Furthermore, a large data set is required to train the speech module so that it recognizes the sound signals more accurately. To address these limitations, future work may include more appliances to reduce losses and noise, train the speech recognition module to adapt to different user voice conditions, and add a voice recognition module to ensure security for home automation. Additional authentication methods can also be added.

  1. Author contributions: Min Qin: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Roles/Writing – original draft. Ravi Kumar: Conceptualization; Formal analysis; Investigation; Methodology; Roles/Writing – original draft. Mohammad Shabaz: Conceptualization; Supervision; Resources; Software; Validation; Visualization; Writing – review & editing. Sanjay Agal: Conceptualization; Formal analysis; Software; Visualization; Writing – review & editing. Pavitar Parkash Singh: Resources; Software; Validation; Visualization; Writing – review & editing. Anooja A: Resources; Software; Validation; Visualization; Writing – review & editing.

  2. Conflict of interest: The authors declare that there is no conflict of interest.

  3. Data availability statement: Data shall be made available on request from corresponding author.

References

[1] Ismail A, Abdlerazek S, El-Henawy IM. Development of smart healthcare system based on speech recognition using support vector machine and dynamic time warping. Sustainability. 2020;12(6):2403. doi: 10.3390/su12062403.

[2] Poongothai M, Sundar K, Vinayak B. Implementation of IoT based intelligent voice controlled laboratory using Google Assistant. Int J Comput Appl. 2018;182(16):6–10. doi: 10.5120/ijca2018917808.

[3] Sethy K, Varalakshmi L, Rajkumar E, Pandey RR, Karthika RN, Vijayakumar P. IoT-based speech recognition system. AIP Conf Proc. 2022;2393:020096. doi: 10.1063/5.0074140.

[4] Qian H. Optimization of intelligent management and monitoring system of sports training hall based on the Internet of Things. Wirel Commun Mob Comput. 2021;2021(2):1–11. doi: 10.1155/2021/1465748.

[5] Jiang W, Zou D, Zhou X, Zuo G, Li HJ. Research on key technologies of multi-task-oriented live maintenance robots for ultra-high voltage multi-split transmission lines. Ind Robot. 2020;48(1):17–28. doi: 10.1108/IR-03-2020-0066.

[6] Ren K, Chen Z, Ling Y, Zhao J. Recognition of freezing of gait in Parkinson’s disease based on combined wearable sensors. BMC Neurol. 2022;22(1):1–13. doi: 10.1186/s12883-022-02732-z.

[7] Kravchenko V, Hryshchenko O, Skrypnik V, Dudarieva H. Investigation of the dependence of the structure of shift indexes vectors on the properties of ring codes in the mobile networks of the Internet of Things. Informatyka Automatyka Pomiary w Gospodarce i Ochronie Środowiska. 2021;11(1):62–4. doi: 10.35784/iapgos.2404.

[8] Lele A. Internet of Things (IoT). In: Disruptive technologies for the militaries and security. Smart innovation, systems and technologies. Vol. 132. Singapore: Springer; 2019. doi: 10.1007/978-981-13-3384-2_11.

[9] Chen D, Zhao Z, Qin X, Luo Y, Liu A. MAGLeak: A learning-based side-channel attack for password recognition with multiple sensors in an IoT environment. IEEE Trans Ind Inform. 2020;18(1):1. doi: 10.1109/TII.2020.3045161.

[10] Gleason B, Manca S. Curriculum and instruction: Pedagogical approaches to teaching and learning with Twitter in higher education. On the Horizon. 2020;28(1):1–8. doi: 10.1108/OTH-03-2019-0014.

[11] Sarampote CS, Friedman‐Hill SR, Cohen ED, Murphy ER, Pacheco J, Garvey MA. Annual research review: The contributions of the doc research framework on understanding the neurodevelopmental origins, progression, and treatment of mental illnesses. J Child Psychol Psychiatry. 2022;63(4):360–76. doi: 10.1111/jcpp.13543.

[12] Nam C, Lee S, Lee J, Sang HC, Park SK. A software architecture for service robots manipulating objects in human environments. IEEE Access. 2020;8:1. doi: 10.1109/ACCESS.2020.3003991.

[13] Zhao Y, Said R, Ismail NW, Hamzah HZ. Effect of industrial robots on employment in China: An industry level analysis. Comput Intell Neurosci. 2022;2022:13, Article ID 2267237. doi: 10.1155/2022/2267237.

[14] Rottmann N, Studt N, Ernst F, Rueckert E. ROS-Mobile: An Android application for the Robot Operating System; 2020.

[15] Lu W, Yu R, Wang S, Wang C, Huang H. Sentence semantic matching based on 3D CNN for human–robot language interaction. ACM Trans Internet Technol. 2021;21(4):1–24. doi: 10.1145/3450520.

[16] Nancy GA, Kalpana R, Nandhini S. A study on pressure ulcer: influencing factors and diagnostic techniques. Int J Low Extremity Wounds. 2022;21(3):254–63. doi: 10.1177/15347346221081603.

[17] Younus MU, Khan MK, Anjum MR, Afridi S, Jamali AA. Optimizing the lifetime of software-defined wireless sensor network via reinforcement learning. IEEE Access. 2020;9(2021):259–72. doi: 10.1109/ACCESS.2020.3046693.

[18] Toth AJ, Szilagyi B, Fozer D, Haaz E, Mizsey P. Membrane flash index: Powerful and perspicuous help for efficient separation system design. ACS Omega. 2020;5(25):15136–45. doi: 10.1021/acsomega.0c01063.

[19] Muji E, Drakuli U. Design and implementation of a fuzzy control system for egg incubator based on IoT technology. IOP Conf Ser: Mater Sci Eng. 2021;1208(1):012038. doi: 10.1088/1757-899X/1208/1/012038.

[20] Lee HS, Eom K, Park M, Ku SB, Lee K, Lee HM. High-density neural recording system design. Biomed Eng Lett. 2022;12(3):251–61. doi: 10.1007/s13534-022-00233-z.

[21] Yan G, Qin Q. The application of edge computing technology in the collaborative optimization of intelligent transportation systems based on information physical fusion. IEEE Access. 2020;8:1. doi: 10.1109/ACCESS.2020.3008780.

[22] Kaabar M, Kalvandi V, Eghbali N, Samei M, Siri Z, Martínez F. A generalized ML-Hyers-Ulam stability of quadratic fractional integral equation. Nonlinear Eng. 2021;10(1):414–27. doi: 10.1515/nleng-2021-0033.

[23] Drożdżewski M, Sośnica K. Tropospheric and range biases in satellite laser ranging. J Geod. 2021;95:100. doi: 10.1007/s00190-021-01554-0.

[24] Fonseca J, Baptista A, Martins M, Torres JP. Distance measurement systems using lasers and their applications. Appl Phys Res. 2017;9:33. doi: 10.5539/apr.v9n4p33.

[25] Ajay P, Nagaraj B, Arun Kumar R, Huang R, Ananthi P. Unsupervised hyperspectral microscopic image segmentation using deep embedded clustering algorithm. Scanning. 2022;2022:9, Article ID 1200860. doi: 10.1155/2022/1200860.

[26] Liu X, Su Y-X, Dong S-L, Deng W-Y, Zhao B-T. Experimental study on the selective catalytic reduction of NO with C3H6 over Co/Fe/Al2O3/cordierite catalysts. Ranliao Huaxue Xuebao/J Fuel Chem Technol. 2018;46(6):743–53. doi: 10.1016/S1872-5813(18)30051-3.

[27] Selva D, Pelusi D, Rajendran A, Nair A. Intelligent network intrusion prevention feature collection and classification algorithms. Algorithms. 2021;14:224. doi: 10.3390/a14080224.

[28] Sharma A, Kumar R. Performance comparison and detailed study of AODV, DSDV, DSR, TORA, and OLSR routing protocols in ad hoc networks. In: 2016 Fourth International Conference on Parallel, Distributed and Grid Computing (PDGC). Waknaghat, India: IEEE; 2016. doi: 10.1109/PDGC.2016.7913218.

[29] Farag SG. Application of smart structural system for smart sustainable cities. In: Proceedings of the 2019 4th MEC International Conference on Big Data and Smart City (ICBDSC), Muscat, Oman; 15–16 January 2019. p. 1–5. doi: 10.1109/ICBDSC.2019.8645582.

[30] Syed AS, Sierra-Sosa D, Kumar A, Elmaghraby A. IoT in smart cities: A survey of technologies, practices and challenges. Smart Cities. 2021;4:429–75. doi: 10.3390/smartcities4020024.

Received: 2023-05-23
Revised: 2023-08-20
Accepted: 2023-08-30
Published Online: 2023-10-28

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
