Abstract
Deaf-mute people have much potential to contribute to society. However, communication between deaf-mutes and non-deaf-mutes remains a problem that isolates deaf-mutes and prevents them from interacting with others. In this study, an information technology intervention, intelligent gloves (IG), a prototype of a two-way communication glove, was developed to facilitate communication between deaf-mutes and non-deaf-mutes. IG consists of a pair of gloves, flex sensors, an Arduino nano, a screen with a built-in microphone, a speaker, and an SD card module. To enable communication from the deaf-mute to the non-deaf-mute, the flex sensors sense the hand gestures and, through connected wires, transmit the hand movement signals to the Arduino nano, where they are translated into words and sentences. The output is displayed on a small screen attached to the gloves and is also played as voice through the speaker attached to the gloves. For communication from the non-deaf-mute to the deaf-mute, the built-in microphone in the screen captures the voice, which is then transmitted to the Arduino nano and translated into sentences and sign language, displayed on the screen using a 3D avatar. Unit testing of IG showed that it performed as expected without errors. In addition, IG was tested on ten participants and was shown to be both usable and accepted by the target users.
1 Introduction
Hearing and speaking are natural human abilities. Unfortunately, many people do not have these abilities and cannot easily communicate with others. The World Health Organization has stated that approximately 70 million people in the world are deaf-mutes. A total of 360 million people are deaf, and 32 million of them are children. Moreover, by 2050, one in every four people is expected to face some degree of hearing loss [1].
One of the main problems that deaf-mutes face is communicating with others. Reliance on lip-reading and the use of sign language interpreters are two solutions for communication between deaf-mutes and non-deaf-mutes [2]. However, lip-reading is ineffective in some cases, especially when the parties to a conversation need to wear masks. In addition, the use of sign language interpreters is costly; in New Jersey, for example, interpreters typically charge from $50 to $70 per hour [3]. Moreover, learning sign language is difficult for many people because of the time required and the need for professional training [4]. Therefore, communication between the two parties remains a challenge, and more attention from information systems/information technology (IT) researchers is needed to solve this two-way communication problem.
Multiple studies have addressed the problem of communication between deaf-mutes and non-deaf-mutes, and numerous systems have been developed using sensors, image processing, and smartphones to facilitate communication between the two parties. However, most of these systems are limited to one direction of communication, from the deaf-mute to the non-deaf-mute, and not vice versa. Thus, this research study proposes an IT intervention called intelligent gloves (IG), which aims to raise the level of communication between deaf-mute and non-deaf-mute individuals. IG assists in communication without the need for mediators (sign language interpreters). As a result, this intervention will help deaf people communicate with other members of society to satisfy their needs and purposes, such as purchasing, occupations, and learning. In addition, deaf people can communicate without difficulty or embarrassment and can thus contribute positively to the community by sharing their ideas and helping in social development. The main contributions of this study are the following:
Designing and developing a prototype of a system, “Intelligent Gloves,” that facilitates two-way communication between deaf-mutes and non-deaf-mutes by translating sign language into text and voice that non-deaf-mute people can understand, and by translating the non-deaf-mute’s voice into text and sign language presented to the deaf-mute person through an avatar video on the screen.
Evaluating the proposed system using functional and usability testing.
The study is organized as follows. In Section 2, relevant studies of the proposed system are discussed. Section 3 presents the research methodology. Section 4 presents the evaluation phase. Finally, the conclusion and the future work of the study are given in Section 5.
2 Related work
Hand gestures can assist and facilitate communication among people by providing a meaningful interaction [5]. Several applications embed hand gestures, such as vision-based recognition systems, game control systems, and human–robot interaction [6]. Researchers usually use wearable sensors to collect hand movements, and the data are then processed with one of the hand gesture recognition approaches [5]. In the literature, there are mainly two approaches: vision-based and sensor-based [7,8]. The vision-based approach relies on processing digital images and videos and uses machine learning and deep learning methods for gesture recognition. A real-time system for hand gesture recognition based on You Only Look Once (YOLO) v3 and DarkNet-53 convolutional neural networks (CNNs) was proposed in ref. [9]. The system was evaluated on a labeled dataset of hand gesture images in both Pascal VOC and YOLO format and attained an accuracy of 97.68%. A hybrid automated system that translates sign language to text and speech was built in ref. [10]. The system uses CNN, natural language processing, language translation, and text-to-speech algorithms to recognize, interpret, express, and convert hand gesture images into speech in ten different languages. The system is based on a pre-existing dataset of American Sign Language (ASL) and attained an accuracy of 99.63%. A hand gesture recognition system to predict the emergency signs of Indian Sign Language (ISL) was developed in ref. [11]. The system uses a three-dimensional convolutional neural network (3D CNN) and a combination of a pre-trained CNN (VGG-16), long short-term memory, and the YOLO v3 algorithm to detect hand gestures. The system is based on a video ISL dataset [12] and attained a mean average precision of 99.6% for detecting the hand gestures. The vision-based approach requires cameras, which are usually available on smartphones, to capture images or videos of the hand gestures. Despite the low cost of cameras, the main drawback of this approach lies in the complex and time-consuming processing required to recognize the hand gestures, which is negatively affected by background noise and illumination.
Sensor-based gesture recognition involves using sensors, such as flex sensors, to measure the bending angle, movements, orientations, and alignments of the fingers and the position of the palm, and then uses these measurements to recognize the gestures. A system consisting of two parts, gloves and a mobile application running on Windows Mobile (Windows 7 and 8), was developed by a research team called “QuadSquad” in ref. [13]. The gloves contained flex sensors that captured hand gestures compliant with ASL and transmitted them via Bluetooth to the mobile application. The application recognized and accepted the input data from the glove sensors and transformed the data into text, which then appeared on the device screen, and into speech using a text-to-speech engine [13]. A glove with a simple design that included several sensors, a small screen, and a mini speaker attached to the glove was developed in ref. [14]. The glove sensors translated the hand gestures into text that appeared on the screen and into speech through the speaker using a “text-to-speech chip” [14]. SignAloud is another glove developed to translate sign language into text and speech displayed on a computer [15,16]. The glove recorded the hand positions via a collection of sensors and sent the recorded data wirelessly via Bluetooth from the Arduino controller on the glove to the Arduino controller linked to the computer screen. If the data matched one of the gestures saved in the computer, the word associated with the gesture was spoken through a speaker [15,16]. A glove with attached flex sensors that convert ASL to audio using an Arduino circuit board was developed by Rajapandian et al. [17]. Furthermore, the board converted the audio to text using analog-to-digital converters (ADCs) to be displayed on the LCD screen [17]. Likewise, a glove consisting of flex sensors, an Arduino board, and an LCD was developed in ref. [18]. The sensors transmitted the sign language to the Arduino board, which processed the entered data using a microcontroller and then sent the processed data to be displayed on the LCD [18]. Although the sensors might be slightly more expensive than cameras, the main advantages of the sensor-based approach are its high accuracy and the fact that it does not require complex data processing, as the data needed for gesture recognition are obtained directly from the sensors’ readings. The accuracy and fast processing of the sensor-based approach offset its higher cost, since the speed of communication between the deaf-mutes and the non-deaf-mutes is an important factor. Thus, the authors of the current study chose the sensor-based approach rather than the vision-based one.
Regarding two-way communication between deaf-mutes and non-deaf-mutes, the authors in ref. [19] developed two separate Android applications for communication between non-deaf-mutes and deaf-mutes. The non-deaf-mute application converts speech to visual contexts and gestures: the speaking person gives a speech input to the system, the system converts the speech to text using the Google speech-to-text application programming interface (API), and it then identifies the keywords in the text and shows visual images and gestures to the deaf-mute based on those keywords. In the deaf-mute application, the user enters gestures or vibrotactile input through a mobile interface, and the input is then converted to speech: the system matches the given gesture with the gestures stored in the system’s database and presents the predefined words associated with the gestures as speech to the non-deaf-mute person using the Google text-to-speech API. The two applications are connected via Bluetooth to pass information between the devices. One of the main drawbacks of this system is the reliance on the mobile interface to input the user gestures, which might be less accurate than the glove sensor-based approach.
All the IT interventions mentioned above were mainly designed to simplify communication and to help deaf-mute people be heard. The vision-based approach, which utilizes machine and deep learning algorithms, involves complex processing and incurs additional recognition time, which might be detrimental to the speed of communication between the deaf-mute and the non-deaf-mute. In addition, the vision-based approach requires developing an accurate model to recognize hand gestures when dynamic images are used as input, which is hard to achieve [10]. Despite its cost, the sensor-based approach is considered the most accurate way of capturing input data for sign language translation systems. Sensors are not affected by the external environment when collecting hand gesture data and therefore provide improved recognition accuracy. To the best of our knowledge, none of the proposed sensor-based systems have integrated two-way communication between deaf-mutes and non-deaf-mutes, i.e., translation from sign language to text and voice and vice versa, in a single system. Thus, this work provides a sensor-based glove prototype that integrates two-way communication between deaf-mutes and non-deaf-mutes without the need for Bluetooth technology, thereby reducing the power consumption of the system.
3 Research methodology
The methodology of this study’s system design focuses mainly on the research aim of facilitating two-way communication between deaf-mute and non-deaf-mute individuals. Therefore, the following phases were adopted: feasibility analysis, intervention design and implementation, and testing. All phases are described in detail below.
3.1 Feasibility analysis
A questionnaire was created and published to understand whether there is a clear need for this system and whether the stakeholders would use the proposed solution. The questionnaire questions are presented in Table 1. The questionnaire was distributed randomly through different channels, such as email and social media, targeting people in Saudi Arabia, and reached 335 deaf-mute and non-deaf-mute participants. Three hundred and nineteen (95.2%) of the responses were from non-deaf-mutes, 246 (77%) of whom had difficulty communicating with deaf-mute people. According to the questionnaire’s results, 228 (71.5%) communicate with deaf-mutes by hand gestures, 162 (50.8%) by writing, and 42 (13.2%) by photographs. Around 255 (80%) of the non-deaf-mute respondents saw the potential of the proposed solution to assist in communicating with deaf people. Approximately 16 (5%) of the respondents were deaf-mutes, 11 (68.8%) of whom had difficulty communicating with others. Around 9 (56.3%) of the deaf-mute respondents communicate with others by hand gestures, 9 (56.3%) by writing, and 4 (25%) by photographs. Almost 223 (70%) of the respondents said that this product would simplify communication between deaf-mutes and other people. Therefore, the proposed solution would assist in raising the level of communication between deaf and non-deaf individuals.
Table 1: Questionnaire questions

| # | Question |
|---|---|
| 1 | Are you deaf and mute? |
| 2 | Do you have difficulty communicating with (non-)deaf-mute people? |
| 3 | What is your method of communication with (non-)deaf-mute people? |
| 4 | If there is a product that converts deaf-mute gestures to voice and written text and, at the same time, converts the speech of non-deaf-mute people to written text and deaf-mute gestures, do you think this product will help communication between deaf-mute people and non-deaf-mute people? |
3.2 Intervention design and implementation
To enable bi-directional communication between deaf-mutes and non-deaf-mutes, a glove-based instrument was designed. The two-way interaction between the deaf-mute and the non-deaf-mute through the glove instrument is shown in Figure 1. The deaf-mute uses sign language gestures, which the instrument converts into voice and sentences that the non-deaf-mute can hear and read. The non-deaf-mute can interact with the deaf-mute through the same instrument using voice, which is converted into on-screen text and sign language movements shown on the attached screen by an avatar. Either user, whether deaf-mute or non-deaf-mute, can start a conversation at any time, and the corresponding output will be presented by the device.

Figure 1: An illustration of the interaction between a deaf-mute and a non-deaf-mute using IG.
The main components of an IG are the gloves, which are made of fabric, ten flex sensors, an Arduino nano board, a small screen with a built-in microphone, a speaker, and an SD (Secure Digital) card module, all attached to the fabric of the gloves. The flex sensors sense the movements of the user’s hand and fingers and measure the amount of deviation, bending, and movement angles with a high level of accuracy. They are lightweight and can be easily attached to the fabric. Figure 2 shows the flex sensors, which are attached to the gloves and connected to the Arduino using wires. The Arduino nano board collects the hand gestures received from the flex sensors and converts them to an output: voice and written words on the screen. The SD card module is attached to the Arduino board; a database stored on the SD card is used either to match the hand gestures of the deaf-mute and convert them into written words and voice for the non-deaf-mute, or to match the words spoken by the non-deaf-mute and convert them into written words and a visual representation (an avatar video) for the deaf-mute. The small screen and the speaker are used to display the text and play the audio obtained from the SD card module. Finally, the microphone is used to capture the voice of the words spoken by the non-deaf-mute. Figure 3(a) shows how the two gloves are connected to each other and to the LCD. The final prototype of the IG system is shown in Figure 3(b).

Figure 2: Gloves with sensors.

Figure 3: IG prototype. Two gloves connected to each other and the LCD (a) and the final prototype of the IG system (b).
The circuit diagram of the proposed IG system is presented in Figure 4. The circuit contains ten flex sensors (five for the right hand and five for the left hand), a speaker, an SD card module, an Arduino nano board, and a battery. The flex sensors are powered by connecting them to the VCC (5 V) and ground (GND) pins available on the Arduino board. For supplying the flex sensors with the positive charge, they are connected to the analog pins of the Arduino nano board, with the right-hand flex sensors connected to pins A0–A4 and the left-hand flex sensors connected to pins A5–A10. In addition, a static resistor is connected to each wire supplying the positive charge to create a voltage divider, so that a variable voltage can be measured by the ADC of the microcontroller. The other terminal of each flex sensor is connected to the GND pins of the Arduino nano board for the negative charge.

Figure 4: Circuit diagram of the IG system.
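For reference, the voltage-divider relationship implied by this wiring can be summarized as follows; the exact arrangement (which element the ADC node taps) and the fixed resistor value are not stated in the article, so the expression below is an illustrative assumption:

$$V_{ADC} = V_{CC}\,\frac{R_{flex}}{R_{flex} + R_{fixed}}$$

Under this assumption, as the sensor bends and its resistance R_flex increases, the voltage read at the analog pin rises, which is what allows the ADC reading to track the bending angle.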
Flex sensors measure the degree of bending. As the flex sensor bends, its resistance changes in proportion to the radius of bending, increasing as the sensor bends further. The flex sensors transmit the measured bending angle as a voltage signal to the Arduino board. The conversion from an analog to a digital signal is done by the microcontroller on the board, which runs a program that converts the analog voltage detected by the flex sensor into digital form using the ADC. The digital form of the flex sensor readings is mapped to text by indexing into a file containing the text equivalent of that reading. After that, the text is mapped to the equivalent voice playback, which is stored on the SD card under the same index.
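As a rough illustration of this read-and-match flow, the minimal Arduino-style (C++) sketch below reads the flex sensors, matches the readings against stored reference patterns, and selects the corresponding keyword. The pin assignments, reference values, tolerance, and file handling are assumptions for demonstration only and do not reproduce the authors’ firmware.

```cpp
// Illustrative sketch of the read-and-match flow described above.
// Pin numbers, reference patterns, and the tolerance are assumed values.
#include <SPI.h>
#include <SD.h>

const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};   // right-hand sensors (assumed)
const int NUM_GESTURES = 3;

// Each stored gesture is a reference pattern of five ADC readings (placeholder values).
const int GESTURES[NUM_GESTURES][5] = {
  {320, 310, 305, 300, 315},    // e.g., "hello"
  {600, 590, 610, 605, 595},    // e.g., "bye"
  {320, 600, 610, 605, 315},    // e.g., "what is your name"
};
const char* KEYWORDS[NUM_GESTURES] = {"hello", "bye", "what is your name"};

// Return the index of the stored gesture closest to the current readings,
// or -1 if no gesture is within the tolerance.
int matchGesture(const int reading[5]) {
  const int TOLERANCE = 40;                       // per-sensor tolerance in ADC counts (assumed)
  for (int g = 0; g < NUM_GESTURES; g++) {
    bool match = true;
    for (int s = 0; s < 5; s++) {
      if (abs(reading[s] - GESTURES[g][s]) > TOLERANCE) { match = false; break; }
    }
    if (match) return g;
  }
  return -1;
}

void setup() {
  Serial.begin(9600);
  SD.begin(10);                                   // SD card module chip-select pin (assumed)
}

void loop() {
  int reading[5];
  for (int s = 0; s < 5; s++) reading[s] = analogRead(FLEX_PINS[s]);

  int idx = matchGesture(reading);
  if (idx >= 0) {
    Serial.println(KEYWORDS[idx]);                // text forwarded to the attached screen
    // The matching voice clip stored on the SD card (e.g., "0.wav") would be
    // played through the speaker at this point.
  }
  delay(200);                                     // simple pause between readings
}
```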
The ten flex sensors cost about 785 S.R. ($209.30 US dollars). The Arduino board costs about 40 S.R. ($10.67 US dollars). The small screen costs about 125 S.R. ($33.33 US dollars). The speakers cost about 90 S.R. ($24 US dollars). The SD card module costs 10 S.R. ($2.67 US dollars). Thus, the estimated total cost of the system is about 1050 S.R. (about $280 US dollars). Figure 5(a) and (b) illustrates the workflow using the Business Process Model and Notation (BPMN) diagram.

Figure 5: The BPMN diagram of (a) converting sign language to voice and text and (b) converting voice to sign language and text.
Figure 6 shows the design of the system, which follows a three-layer architecture. This allows each layer to be modified separately without affecting the other layers, making the system easy to maintain and extend in the future. The first layer is the interface layer, which includes the gloves, the screen, the speaker, and the microphone. The second layer is the logic layer, which includes all the functional components of the system, written in C++, the Arduino programming language, and Java, such as the use of the sensors to translate movements into sentences. The functional components are as follows: (1) translating sign language to voice, (2) translating voice to text, and (3) translating voice to sign language. The third layer is the data access layer, which includes the database of sign language gestures and written and spoken words used for matching the input from the gloves or the microphone and displaying the corresponding output. The database used in the IG system consists of text keywords and their representations in different formats: sequences of digits representing the gesture readings of the keywords, recorded voices of the keywords, and animated videos using an avatar.

Figure 6: IG’s three-layer system architecture.
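To make the data access layer more concrete, the sketch below shows one possible record layout for a keyword entry that bundles the gesture reading pattern, the recorded voice clip, and the avatar video. The field names, types, and file names are assumptions for illustration; the article does not specify the actual database schema.

```cpp
// Illustrative record layout for one keyword in the data access layer.
// Field names, value ranges, and file names are assumed for demonstration.
struct KeywordEntry {
  const char* keyword;       // text shown on the screen, e.g., "hello"
  int gesturePattern[10];    // reference readings for the ten flex sensors
  const char* voiceFile;     // recorded voice clip on the SD card, e.g., "hello.wav"
  const char* avatarVideo;   // animated avatar clip for the sign, e.g., "hello.mp4"
};

// Two example entries with placeholder sensor values.
const KeywordEntry DATABASE[] = {
  {"hello", {320, 310, 305, 300, 315, 330, 325, 318, 312, 308}, "hello.wav", "hello.mp4"},
  {"bye",   {600, 590, 610, 605, 595, 612, 598, 603, 607, 599}, "bye.wav",   "bye.mp4"},
};
```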
The study followed the Scrum framework in the design of the gloves. The work was broken down into smaller parts, known as sprints, each providing high-level details of the desired functionality for a specific user. The sprints were arranged from the highest to the lowest priority. The backlog of the IG contains four sprints: A, B, C, and D.
Sprints A and B cover the deaf-mute side of the system and involve programming the Arduino in the gloves. Sprint A is the programming of the Arduino so that it translates hand gestures into voice emitted from the speaker. The Arduino was programmed using the Arduino programming language and C++ in the Android Studio environment to monitor hand gestures as a range of numbers for each movement; the numbers were compared with the words stored on the SD card, and the matching word or sentence was then pronounced. Sprint B is the programming of the Arduino, using the Arduino programming language and Java in the Android Studio environment, to translate hand gestures into text that appears on the screen. Sprints C and D cover the non-deaf-mute side of the system and involve programming the screen. Sprint C involves programming the interface using Java in the Android Studio environment to enable the screen to display the non-deaf-mute’s voice translated into text. Sprint D involves programming the interface using Java in the Android Studio environment to enable the screen to display an avatar that matches the translated text obtained in Sprint C. The duration of each sprint was 2 weeks, resulting in a total of 8 weeks for the design and implementation of the gloves. Figure 7 shows the prototype scenario of each sprint.

Figure 7: Prototype scenario of each sprint: (a) converting sign language into voice, (b) converting sign language into text, (c) converting voice into text, and (d) converting voice into sign language shown by an avatar on the screen.
3.2.1 Communication between deaf-mutes and non-deaf-mutes
Deaf-mute people normally use sign language to communicate with non-deaf-mute people. Deaf-mutes and non-deaf-mutes communicate with each other using very simple sentences composed of a few nouns and verbs. The system in this study was designed for this kind of basic communication between deaf-mutes and non-deaf-mutes. The database in our system consists of simple keywords related to basic conversational sentences that would be used by users of different age groups in different contexts of their daily life activities. Using the IG system does not require the non-deaf-mute to learn sign language, since the sign language is converted by the system into on-screen text and voice. However, the non-deaf-mute needs to know that sign language is composed of simple sentences with uncomplicated vocabularies of nouns and verbs. The communication process starts when the deaf-mute user wears the IG. The user’s hand gestures and finger movements while using sign language are sensed by the flex sensors and sent to the SD card module, where they are matched with the gestures stored in the database. Using this database, gestures are converted into words, which are then presented on the small screen as text and as a voice that is audible to the non-deaf-mute through the speaker. In the other direction, the system converts the words uttered by the non-deaf-mute into written words on the screen and into sign language representations shown on the screen by an avatar. The process starts when the non-deaf-mute speaks the words into the microphone. The voices sensed by the microphone are sent to the SD card module, where they are matched with the voices of the words stored in the database. The corresponding sign language gestures are then shown on the screen using an avatar.
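A hedged sketch of this reverse-direction lookup is shown below: it assumes the spoken input has already been recognized as a text phrase and simply matches it against stored keywords to pick the avatar clip to display. The structure, phrase set, and file names are illustrative assumptions, not the authors’ implementation.

```cpp
// Illustrative reverse-direction lookup: a recognized spoken phrase is matched
// against stored keywords, and the corresponding avatar clip is selected for
// display together with the on-screen text. Names and files are assumed.
#include <string.h>

struct AvatarEntry {
  const char* keyword;       // recognized spoken phrase, e.g., "relax"
  const char* avatarVideo;   // avatar clip shown on the screen, e.g., "relax.mp4"
};

const AvatarEntry AVATAR_DB[] = {
  {"relax", "relax.mp4"},
  {"what is your problem", "problem.mp4"},
};
const int AVATAR_DB_SIZE = sizeof(AVATAR_DB) / sizeof(AVATAR_DB[0]);

// Returns the avatar clip for a recognized phrase, or nullptr if the phrase
// is not in the database.
const char* lookupAvatarClip(const char* recognizedPhrase) {
  for (int i = 0; i < AVATAR_DB_SIZE; i++) {
    if (strcmp(AVATAR_DB[i].keyword, recognizedPhrase) == 0) {
      return AVATAR_DB[i].avatarVideo;
    }
  }
  return nullptr;
}
```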
4 Evaluation
To evaluate the IG intervention, functional testing and usability testing were performed. Functional testing was done to ensure that all the system components work as expected, and usability testing was done to ensure that the system is easy to use by the target users. The evaluations performed in this study are similar to the evaluations of the systems proposed in refs [18,19].
Functional testing focuses on the overall validation of the system’s behavior and its ability to perform as expected. The functional testing involved unit and integration testing. In software development, unit testing is often performed as the first step in testing any software [20]. Unit testing is a practical method used to increase a piece of software’s correctness and quality [21]; one of its main advantages is that it helps fix bugs and save costs during the development cycle [21]. Thus, each function in the IG intervention was considered a unit, and each unit was tested individually during the development phase and before recruiting the participants. After ensuring the correctness of each unit, the next step was to test the integration of two or more units, which is called integration testing. The main aim of integration testing is to ensure that all parts interact with each other in their intended environment and match the expected results [20]. Converting sign language to voice, converting sign language to text, converting voice to text, and converting voice to sign language represented by an avatar are some of the integrations that were tested. The findings indicated that there were no errors in the system, as all components interacted correctly and matched the expected results, showing that the IG intervention was technically working. Table 2 shows some examples of the performed unit testing results.
Table 2: Unit testing results

| Unit test ID | Description | Expected result | Actual result | Result status | Remark |
|---|---|---|---|---|---|
| Scenario A: Converting sign language into voice | | | | | |
| 1 | Hand movement | Voice: "hello" | Voice: "hello" | Pass | Gloves translate the sign language into voice |
| 2 | Hand movement | Voice: "bye" | Voice: "bye" | Pass | Gloves translate the sign language into voice |
| Scenario B: Converting sign language into text | | | | | |
| 1 | Hand movement | "Hello" displayed on the screen | "Hello" | Pass | Gloves translate the sign language into words/sentences that appear on the screen |
| 2 | Hand movement | "What is your name" displayed on the screen | "What is your name" | Pass | Gloves translate the sign language into words/sentences that appear on the screen |
| Scenario C: Converting voice into text | | | | | |
| 1 | Non-deaf-mute person says: "hello" | "Hello" displayed on the screen | "Hello" | Pass | The voice is translated into words/sentences that appear on the screen |
| 2 | Non-deaf-mute person says: "Where is she?" | "Where is she" displayed on the screen | "Where is she" | Pass | The voice is translated into words/sentences that appear on the screen |
| Scenario D: Converting voice into sign language represented by an animated avatar video on the screen | | | | | |
| 1 | Non-deaf-mute person says: "Relax" | Sign language shown by an animated avatar video | Figure 8(a) | Pass | The voice is translated into on-screen text and into sign language shown by the animated video |
| 2 | Non-deaf-mute person says: "What is your problem?" | Sign language shown by an animated avatar video | Figure 8(b) | Pass | The voice is translated into on-screen text and into sign language shown by the animated video |

Figure 8: Translating the voice into a word (a) and translating the voice into a sentence (b).
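For illustration, a minimal example of what a unit test for the gesture-to-text mapping could look like is given below, using plain C++ assertions. The function under test, its expected values, and the test harness are assumptions for demonstration; the article does not state which testing tools were used.

```cpp
// Illustrative unit test for a gesture-index-to-text mapping, in the spirit of
// the unit tests described above. The function under test and the expected
// values are assumed for demonstration purposes.
#include <cassert>
#include <cstring>
#include <iostream>

// Hypothetical function under test: maps a matched gesture index to its keyword.
const char* gestureIndexToText(int index) {
  static const char* KEYWORDS[] = {"hello", "bye", "what is your name"};
  if (index < 0 || index >= 3) return "";
  return KEYWORDS[index];
}

int main() {
  // Scenario B style checks: a recognized gesture index yields the expected text.
  assert(std::strcmp(gestureIndexToText(0), "hello") == 0);
  assert(std::strcmp(gestureIndexToText(1), "bye") == 0);
  // An out-of-range index yields an empty string rather than crashing.
  assert(std::strcmp(gestureIndexToText(99), "") == 0);
  std::cout << "All unit tests passed\n";
  return 0;
}
```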
To test the usability of the IG system, a convenience sampling technique was used to recruit ten participants during the testing phase: five deaf-mutes and five non-deaf-mutes. The participants were college undergraduate students, and their ages ranged from 18 to 23 years. The non-deaf-mute participants had not interacted with deaf-mute people before. Before the user testing, simple written instructions on the operation of the IG system were given to the participants. The deaf-mutes were asked to wear the IG gloves and use sign language gestures to communicate with the non-deaf-mutes, so that the matching voices could be heard by the non-deaf-mutes and the matching text would appear on the screen. The non-deaf-mute participants were asked to use the built-in microphone to communicate with the deaf-mutes, so that the matching text and avatar would appear on the screen, as seen in Figure 9(a and b). Participants were asked to perform the scenarios listed in Table 2. The system was able to recognize the signs performed by the deaf-mute participants in scenarios A and B and to display the matching voices and texts. In addition, the system effectively recognized the words said by the non-deaf-mute participants in scenarios C and D and displayed the matching text and animated video. Some users expressed discomfort when using the system because of its cabling. Nonetheless, all the participants agreed that the IG system is, overall, easy to use and suitable for two-way translation of sign language.

Figure 9: Samples of sign language gestures with the corresponding text displayed on the LCD (a) and (b).
Regarding the comparison between the proposed prototype and existing IT gloves: unlike the systems developed in refs [13,18,19], which focused on only one direction of communication, from the deaf-mute to the non-deaf-mute, the proposed approach investigates the feasibility of integrating two-way communication in a single system to facilitate interaction between deaf-mutes and non-deaf-mutes. The results of evaluating the IG system show that it is feasible to integrate two-way communication in one device and that the system is easy to use by the target users. In addition, unlike some of the systems proposed in refs [13,16,18,19], in this study a questionnaire was administered to collect the system requirements and assess its acceptability by the users.
5 Conclusion and future work
Loss of hearing and speech is a major issue that many people around the world face. More than 5% of the world’s population needs rehabilitation to address their hearing disability, and this proportion is estimated to nearly double by 2050. This shows the urgent need to design and develop IT interventions that help these people communicate with others normally, without the need for a translator in the middle. The proposed IT intervention, IG, is a promising solution that would help deaf-mute and non-deaf-mute people communicate easily. The system works in both directions, as it translates from sign language to voice and text and vice versa. However, the current IG system is wired, which might constrain the user’s freedom of movement. In the future, the system could allow more flexible movement by utilizing wireless communication such as Bluetooth or WiFi. In addition, reducing the price of IG by replacing some of its components with cheaper parts would make the system more accessible. The researchers also plan to add a customization feature, so that, for example, the avatar changes based on the user’s gender; the avatar may also be customized based on the user’s age for better usability across age groups. Finally, an accelerometer sensor will be added to improve the detection accuracy achieved in the current study.
Conflict of interest: The authors declare that there is no conflict of interest regarding the publication of this article.
References
[1] World Health Organization. WHO: 1 in 4 people projected to have hearing problems by 2050; 2021. https://www.who.int/news/item/02-03-2021-who-1-in-4-people-projected-to-have-hearing-problems-by-2050 [accessed 1 Dec 2021].
[2] National Association of the Deaf. Community and Culture – Frequently Asked Questions. https://www.nad.org/resources/american-sign-language/community-and-culture-frequently-asked-questions/ [accessed 1 Dec 2021].
[3] Pezzino JM. Ethnography of deaf individuals: a struggle with health literacy. Rutgers University-Graduate School-Newark; 2021.
[4] Mohd Jalani NN, Zamzuri ZF. iMalaySign: Malaysian sign language recognition mobile application using Convolutional Neural Network (CNN). Malaysia: Akademi Pengajian Bahasa; 2021.
[5] Oudah M, Al-Naji A, Chahl J. Hand gesture recognition based on computer vision: a review of techniques. J Imaging. 2020;6(8):73. 10.3390/jimaging6080073.
[6] Gadekallu TR, Alazab M, Kaluri R, Maddikunta PKR, Bhattacharya S, Lakshmanna K. Hand gesture classification using a novel CNN-crow search algorithm. Complex Intell Syst. 2021;7(4):1855–68. 10.1007/s40747-021-00324-x.
[7] Rosero-Montalvo PD, Godoy-Trujillo P, Flores-Bosmediano E, Carrascal-Garcia J, Otero-Potosi S, Benitez-Pereira H, et al. Sign language recognition based on intelligent glove using machine learning techniques. 2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM). Cuenca, Ecuador: IEEE; 2018. 10.1109/ETCM.2018.8580268.
[8] Adeyanju I, Bello O, Adegboye M. Machine learning methods for sign language recognition: A critical review and analysis. Intell Syst Appl. 2021;12:200056. 10.1016/j.iswa.2021.200056.
[9] Kim G-M, Baek J-H. Real-time hand gesture recognition based on deep learning. J Korea Multimed Soc. 2019;22(4):424–31. 10.3390/app11094164.
[10] Kulkarni A. Dynamic sign language translating system using deep learning and natural language processing. Turk J Comput Math Educ. 2021;12(10):129–37. 10.17762/turcomat.v12i10.4060.
[11] Areeb QM, Nadeem M, Alroobaea R, Anwer F. Helping hearing-impaired in emergency situations: A deep learning-based approach. IEEE Access. 2022;10:8502–17. 10.1109/ACCESS.2022.3142918.
[12] Adithya V, Rajesh R. Hand gestures for emergency situations: A video dataset based on words from Indian sign language. Data Brief. 2020;31:106016. 10.1016/j.dib.2020.106016.
[13] Bukhari J, Rehman M, Malik SI, Kamboh AM, Salman A. American sign language translation through sensory glove; SignSpeak. Int J u-e-Service Sci Technol. 2015;8(1):131–42. 10.14257/ijunesst.2015.8.1.12.
[14] Gurbanova KS. Gesture language: History, development stage and current state. İTP Jurnalı. 2018;9:94–9. 10.25045/jpis.v09.i1.10.
[15] MailOnline. ‘SignAloud’ gloves translate sign language gestures into spoken English; 2016. http://www.dailymail.co.uk/sciencetech/article-3557362/SignAloudgloves-translate-sign-language-movements-spoken-English.html [accessed 25 Aug 2021].
[16] Shenoy K, Dastane T, Rao V, Vyavaharkar D. Real-time Indian sign language (ISL) recognition. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). Bengaluru, India: IEEE; 2018. 10.1109/ICCCNT.2018.8493808.
[17] Rajapandian B, Harini V, Raksha D, Sangeetha V. A novel approach as an AID for blind, deaf and dumb people. 2017 Third International Conference on Sensing, Signal Processing and Security (ICSSS). Chennai, India: IEEE; 2017. 10.1109/SSPS.2017.8071628.
[18] Cotoros D, Stanciu A, Hutini A. Innovative device for enhancing deaf-mute persons communication possibilities. Int J Model Optim. 2021;11(2):53–7. 10.7763/IJMO.2021.V11.777.
[19] Sobhan M, Chowdhury MZ, Ahsan I, Mahmud H, Hasan MK. A communication aid system for deaf and mute using vibrotactile and visual feedback. 2019 International Seminar on Application for Technology of Information and Communication (iSemantic). Semarang, Indonesia: IEEE; 2019. 10.1109/ISEMANTIC.2019.8884323.
[20] Delamaro ME, Maldonado J, Mathur AP. Interface mutation: An approach for integration testing. IEEE Trans Softw Eng. 2001;27(3):228–47. 10.1109/32.910859.
[21] Cheon Y, Leavens GT. A simple and practical approach to unit testing: The JML and JUnit way. European Conference on Object-Oriented Programming. Málaga, Spain: Springer; 2002. 10.1007/3-540-47993-7_10.
© 2023 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.