Abstract
The inclusion of technologies such as telepractice and virtual reality in the field of communication disorders has transformed the approach to providing healthcare. This research article proposes the employment of a similarly advanced technology, social robots, by providing context and scenarios for the potential implementation of social robots as supplements to stuttering intervention. The use of social robots has shown potential benefits across age groups in the field of healthcare. However, such robots have not yet been leveraged to aid people who stutter. We offer eight scenarios involving social robots that can be adapted for stuttering intervention with children and adults. The scenarios in this article were designed by human–robot interaction (HRI) and stuttering researchers and revised according to feedback from speech-language pathologists (SLPs). The scenarios specify extensive details that are amenable to clinical research. A general overview of stuttering, technologies used in stuttering therapy, and social robots in healthcare is provided as context for treatment scenarios supported by social robots. We propose that existing stuttering interventions can be enhanced by placing state-of-the-art social robots as tools in the hands of practitioners, caregivers, and clinical scientists.
1 Introduction
Innovative technical solutions have been proposed and incorporated in treatments for several conditions and have positively impacted people's lives. The field of communication disorders has also benefited from the inclusion of technology [1]. This article is offered as a position paper to illustrate the potential and utility of a similarly advanced technology, namely, social robots, for stuttering interventions. In this article, we propose scenarios that have been designed with input from human–robot interaction (HRI) and stuttering researchers. Feedback from speech-language pathologists (SLPs) was included to place the scenarios within a clinician-centric framework involving three phases of treatment. This position article is intended for the varied stakeholders in the field of stuttering, including SLPs, researchers, and clients, as well as stakeholders in the field of HRI who design and adapt robots for clinical and healthcare applications. Overall, this article provides a starting point for productive collaborations among these stakeholders. It should be noted that, to the authors' knowledge, no empirical research has been conducted in this area, and, thus, this article initiates research in a new direction. This is the first article proposing clinician-centric HRI scenarios in which social robots are presented as tools to aid SLPs and people who stutter.
The present article is structured as follows: Section 2 presents a brief review of stuttering. Section 3 presents a review of technologies used in stuttering intervention. In Sections 4 and 5, we introduce social robots and review the available literature on social robotics in healthcare and communication disorders. In Section 6, we describe eight play scenarios to illustrate the potential applications of social robots in the treatment of children and adults who stutter. The article concludes with a summary and discussion of the contributions, limitations, and future steps for stuttering research using social robots as tools.
2 Overview of stuttering
Stuttering, a developmental speech disorder prevalent worldwide, has been defined as “disorders in the rhythm of speech, in which the individual knows precisely what [they] wish to say, but at the time is unable to say it because of an involuntary, repetitive prolongation or cessation of a sound” [2] (World Health Organization, 1977). The primary symptoms of stuttering are repetitions, prolongations, pauses, and/or blocks that disrupt the rhythmic flow of speech [3]. These primary symptoms may be accompanied by physical (e.g., involuntary eye-blinking, jaw jerking) and/or affective (e.g., avoidance behaviors, negative emotions) secondary behaviors [2]. Researchers have pointed to multiple possible causes of stuttering, including sensorimotor, neurological, linguistic, and/or genetic differences. The most common form of stuttering is a developmental disorder that begins in early childhood and follows either a recovery trajectory or a chronic trajectory [3]. A rare, acquired form of stuttering is also recognized, which has an onset during adulthood, usually tied to neurological disease [4]. The majority (65–75%) of children diagnosed with developmental stuttering recover naturally by the age of 8 years over a duration of 1 to 4 years, whereas 25–35% of children do not recover [5].
A review of 44 international studies of school-aged children reported a 1% prevalence rate for stuttering [6]. The incidence rate of stuttering is approximately 5%, although a more recent large-scale survey suggests 5–8% [7], with the onset occurring primarily during the preschool years [8]. Stuttering is more prevalent in biological males than biological females, with an approximate ratio of 4 males to 1 female [9]. The ratio is less pronounced in young children, reportedly being 2:1 males to females [10].
Stuttering negatively impacts quality of life, with coping difficulties extending beyond speech into the social, emotional, and psychological domains and eventually affecting major life choices [11,12]. These difficulties are exacerbated from childhood into adolescence and adulthood. While a range of evidence-based treatment programs (e.g., the Comprehensive Stuttering Program [13]) address the behavioral, social, and emotional components of stuttering, the inclusion of social robotics could enhance these treatment options. A comprehensive survey covering a variety of stuttering treatment approaches reported in the past 20 years (2000–2020), including formal programs, fluency induction techniques, and adjunct therapy approaches, can be found in ref. [14].
3 Technology used in stuttering interventions
Technological advancement has prompted the exploration and application of technology in speech and language therapy. The following is a brief description of the technological tools used in stuttering therapy, which provide contrast and context for considering possibilities of social robots.
Altered auditory feedback (AAF): The most common AAF device is the SpeechEasy,[1] which is a self-contained, in-the-ear fluency aid that alters the frequency and delay of the user’s speech [15]. AAF devices can be used in any language, and improvements in fluency from these devices are generally stable over time. However, these devices are not curative and gains in fluency are contingent on using the device [15].
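To make the delay component of AAF concrete, the following is a minimal sketch of delayed auditory feedback (DAF) only (frequency shifting is not shown); it is illustrative rather than a clinical tool, is not derived from any commercial device, and assumes the third-party `sounddevice` and `numpy` Python packages plus a working microphone and speakers.

```python
# Minimal sketch of delayed auditory feedback (DAF): microphone input is
# played back after a fixed delay. Illustrative only; assumes the third-party
# `sounddevice` and `numpy` packages and a working microphone and speakers.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000                     # Hz
DELAY_MS = 75                            # DAF delays are often in the 50-200 ms range
delay_samples = int(SAMPLE_RATE * DELAY_MS / 1000)

ring = np.zeros((delay_samples, 1), dtype="float32")   # circular delay buffer
pos = 0

def callback(indata, outdata, frames, time, status):
    """Output the sample captured DELAY_MS ago, then store the current one."""
    global pos
    for i in range(frames):
        outdata[i] = ring[pos]           # delayed sample out
        ring[pos] = indata[i]            # fresh sample in
        pos = (pos + 1) % delay_samples

with sd.Stream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
    input("DAF running - press Enter to stop.")
```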
Metronomes: Metronome-paced speech, or speaking to a beat, is a fluency induction technique based on controlling speech rate [16]. Metronomes have been used to establish target speech rates in the early stage of therapy [17]. Metronomes have also been miniaturized into electronic metronomes, which can be worn behind the ear like hearing-aids.
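As a simple illustration of how a target speech rate can be paced, the sketch below implements a basic console metronome; the rate and beat count are placeholder values that a clinician would set according to the treatment plan.

```python
# Minimal metronome sketch for pacing syllable-timed speech. The rate below
# is a placeholder; in practice the clinician sets the target syllable rate.
import time

def metronome(beats_per_minute: float, total_beats: int) -> None:
    """Emit evenly spaced ticks at the requested rate (one tick per syllable)."""
    interval = 60.0 / beats_per_minute
    for beat in range(1, total_beats + 1):
        print(f"\a tick {beat}")         # "\a" sounds the terminal bell where supported
        time.sleep(interval)             # simple timing; small drift is acceptable for a sketch

metronome(beats_per_minute=90, total_beats=20)   # e.g., 90 syllables per minute
```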
EMG biofeedback: A technique that is not widely used but has potential for fluency induction uses surface electromyography (EMG) to reduce excessive muscular tension in the speech articulators (e.g., larynx, tongue). Electrodes are placed over muscles where excessive tension is presumed to occur. The client and clinician monitor muscular tension via a visual or auditory signal and employ therapy techniques to reduce the tension to a subjective threshold [18].
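The core feedback loop amounts to comparing each reading against a clinician-set threshold; the short sketch below illustrates that logic with simulated tension values (the readings, units, and threshold are illustrative assumptions, not output from any EMG system).

```python
# Minimal sketch of the thresholding logic behind EMG biofeedback.
# The readings are simulated values in arbitrary units, not real EMG data.
def biofeedback(tension_readings, threshold):
    """Yield a simple cue for each reading: relax if tension exceeds the threshold."""
    for reading in tension_readings:
        cue = "RELAX - above threshold" if reading > threshold else "ok"
        yield f"{reading:5.1f} -> {cue}"

for message in biofeedback([55.2, 48.0, 41.3, 37.9, 30.5], threshold=40.0):
    print(message)
```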
Virtual reality (VR): VR provides advanced human–computer interfaces in which a user is immersed in simulated, three-dimensional versions of real-world situations. A VR immersion interface typically includes a visual interface, such as a VR headset, for interacting with virtual environments through vision and sometimes audition. Researchers have reported that individuals who stutter display similar levels of affective, behavioral, and cognitive symptoms in virtual and real environments [19]. VR has been effectively used for desensitization therapy (e.g., reducing fear of public speaking [20]) and the adaptation effect (i.e., gradually reducing dysfluencies with repeated exposure to the same stimulus [21]). Such findings suggest that VR can function as a systematic, controlled, and confidential method to supplement treatment.
Telepractice: Telepractice is the delivery of professional healthcare services using telecommunications technology, typically the Internet. Telepractice can be synchronous (e.g., video or audio conferencing), asynchronous (e.g., e-mail), or hybrid [22]. A recent systematic review by McGill et al. [23] indicates that synchronous telepractice is a promising service delivery system that has been successfully incorporated in stuttering treatments, such as the Lidcombe and Camperdown programs. Benefits of telepractice include cost and time effectiveness and increased accessibility to healthcare (e.g., decreased travel time, access from remote areas, and access during situations like the COVID-19 pandemic [23]). Limitations of telepractice include concerns about patient or client privacy, technological difficulties (e.g., Internet speed), and poor patient or client attendance [23].
Mobile apps: Given the increased utilization of smartphones, mobile applications for stuttering management are being developed [24,25]. Available applications include different stuttering modification and fluency shaping techniques (e.g., AAF). Popular mobile applications for stuttering management include Speech4Good,[2] DAF Professional,[3] Dysfluency Index Counter,[4] Stamurai,[5] MyLynel,[6] SPEECHTOOLS,[7] and Speechagain.[8] These applications have been designed by professionals in the fields of speech and language therapy, software engineering, and artificial intelligence, and they show promise in supporting users’ therapy goals.
Considering the ongoing innovation in the field of stuttering intervention, inclusion of advanced AI-based technologies, such as social robots, is an important step toward increasing the accessibility and potential effectiveness of stuttering intervention.
4 Social robots
Social robots are one of the contemporary technologies that have been investigated and used in education, healthcare, fitness, and entertainment [26,27,28]. According to Breazeal et al. [26], “social robots are designed to interact with people in human-centric terms and to operate in human environments alongside people.” Social robots “engage people in an interpersonal manner, [by] communicating and coordinating their behavior with humans through verbal, nonverbal, and/or affective modalities” [26]. Social robots interact with the environment using different sensors and actuators, such as cameras, microphones, motors, and speakers, with software algorithms supporting speech recognition and generation. As shown in Figure 1, the physical appearance of social robots ranges from human-like (humanoid) to non-humanoid (animal- or toy-like) [29]. Such robots have the capacity for social interaction using verbal and non-verbal modalities, such as interpreting sounds and speech, speaking, and performing gestures with their arms (e.g., pointing), face (e.g., emotional expressions, eye gaze, and joint attention [30]), or full body (e.g., walking [31]). Social robots can be operated using different modes of operation, including fully autonomous, semi-autonomous, and Wizard-of-Oz (WoZ) [29,31]. Fully autonomous social robots sense their environment, make decisions, and perform tasks without human intervention. Semi-autonomous social robots have some pre-programmed operations, while others are controlled remotely by a human operator. In the WoZ mode of operation, social robots are teleoperated by a hidden human operator. The design of HRI scenarios can include one-to-one interaction (e.g., a social robot and a human participant) or multiparty interaction (e.g., interaction between one or more social robots and one or more human participants). Further information regarding human–robot interaction and social robots can be found in ref. [31]. There are numerous advantages to incorporating social robots in the field of stuttering interventions.

Figure 1: Examples of social robots that have been used in the fields of therapy and healthcare. Left to right: Pepper (2014–present, source: SoftBank Robotics); Kaspar (2009–present, source and credit: the Adaptive Systems research group, University of Hertfordshire); Furhat (2014–present, source: Furhat Robotics AB); Nao (2006–present, source: SoftBank Robotics); QT (2018–present, source: LuxAI).
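As a schematic illustration of the three modes of operation described above, the sketch below shows how a robot's action selection might branch on its mode; the mode names follow the text, while the action names and sensor fields are hypothetical and not tied to any specific robot's API.

```python
# Schematic sketch of fully autonomous, semi-autonomous, and WoZ operation.
# Action names and sensor fields are hypothetical, not any robot's real API.
from enum import Enum, auto
from typing import Optional

class OperationMode(Enum):
    AUTONOMOUS = auto()        # senses, decides, and acts without human intervention
    SEMI_AUTONOMOUS = auto()   # pre-programmed behaviours plus remote operator commands
    WIZARD_OF_OZ = auto()      # every action chosen by a hidden human operator

def select_action(mode: OperationMode,
                  sensors: dict,
                  operator_command: Optional[str]) -> str:
    """Pick the robot's next action according to its mode of operation."""
    if mode is OperationMode.WIZARD_OF_OZ:
        return operator_command or "idle"          # operator drives everything
    if mode is OperationMode.SEMI_AUTONOMOUS and operator_command is not None:
        return operator_command                    # operator overrides when present
    # Autonomous behaviour (also the semi-autonomous default): a stand-in policy.
    return "greet_user" if sensors.get("face_detected") else "idle"

print(select_action(OperationMode.SEMI_AUTONOMOUS, {"face_detected": True}, None))
```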
4.1 Technology-focused advantages
Social robots excel at repetitive tasks: Stuttering treatments often require time-intensive practice, for which the continual presence of the SLP is not essential. Social robots could function as assistants for repetitive tasks because they maintain task consistency. In addition, social robots' ability to record sessions and performance data can provide clinicians with more options to evaluate and chart progress. The application of robots in this manner potentially allows clinicians to see more clients, focus on individual needs, and reduce wait lists.
Social robots are programmable and adaptable: Such robots can be tailored to accommodate individual needs [32], which can address the variation presented by people who stutter. These robots can be customized according to speech patterns and stuttering severity, as well as age, gender, and personality. They can play different roles and exhibit varied behaviors in order to make activities incorporating social robots appropriate for children, adolescents, and adults. Moreover, they could be tailored according to the particular intervention used by a clinician.
Social robots have a physical presence: Unlike other technologies, such as mobile applications or virtual reality, social robots are physically present during an interaction. Studies in the field of education have demonstrated that students learned more and faster in the presence of a physically embodied social robot in comparison to alternative technologies because physical presence offered a multimodal and richer pedagogical atmosphere [33,34]. Similarly, interactions with social robots have been reported to be more motivating and engaging than a virtual reality counterpart [35]. Furthermore, through their survey on people’s perception of physically present social robots versus virtual agents, ref. [36] found that physically present social robots were perceived more positively and considered more persuasive.
Social robots compared to other AI-driven technologies: Several studies have shown that people prefer social robots over other technologies such as tablets or smartphones. Westlund et al. [37] reported that, compared to learning from an iPad or a human teacher, children preferred learning new words from a robot. They also considered the social robot to be more like a person than the iPad. Similarly, Zhexenova et al. [38] compared children's knowledge of Latin script in three conditions: with a tablet, with a robot plus a tablet, and with a human teacher. The results showed that while children gained the same amount of knowledge in all three conditions, they reported higher likeability and more positive mood change in the “robot plus tablet” condition than in the other two conditions. Further, Deublein and Lugrin [39] conducted a user study with 84 students in a smart office environment in which a tablet, a non-expressive social robot, or an expressive social robot randomly requested the participants to perform activities associated with physical well-being. The findings suggested that, compared to the tablet, both the non-expressive and expressive robots were rated significantly higher in social presence and interaction within the smart office environment.
4.2 Human-focused advantages
Social robots are enjoyable/engaging: Social robots are novel contemporary technologies, and therefore, they tend to inherently generate interest and/or excitement regardless of whether they appear anthropomorphic or toy-like [40]. Using them in conventional interventions could introduce an element of fun, curiosity, and excitement, which engages clients and makes the therapy experiences and exercises more enjoyable [35]. Social robots might be particularly attractive to children, who find interaction with a robot intrinsically rewarding, thereby assisting in early intervention [41].
Social robots are non-judgemental: One of the key advantages of using social robots with clinical populations is that people feel little to no judgement from them, which could particularly facilitate therapeutic outcomes in stuttering. Both the verbal and non-verbal behaviors of a social robot can be consistently regulated to ensure the atmosphere is comfortable, supportive, and non-judgemental [42,43]. In certain therapeutic contexts, social robots can be particularly useful for mediating human–human interactions, as robots do not show unconscious behaviors that could limit the engagement and comfort of the clients. This has been recognized in HRI research on children with special needs, who prefer social robots for some therapy activities [44,45]. We expect that this non-judgemental element, along with social robots' human-like communication and interaction, could, in turn, facilitate practice of certain fluency skills [46].
Social robots as companions: Socially assistive robots (SARs) have shown promising results in providing companionship to children and adults in healthcare facilities, where one might experience loneliness and/or psychological distress [47,48,49]. For example, Alemi et al. [50] employed the NAO robot as a tool for robot-assisted therapy in the hands of psychologists to reduce levels of anxiety, anger, and depression among children with cancer. During invasive treatment procedures, the robot played different playful roles (e.g., peer, nurse, doctor) to put the patient at ease. The findings suggested positive effects on the levels of anxiety, anger, and depression among the participants. Similarly, SARs have been used to help older adults foster social connections with others, combat loneliness and depression, and improve mood and quality of life [51].
4.3 Limitations of social robots
Social robots are machines: As a complex machine consisting of hardware and software components, the capabilities of a social robot are dependent on the functionalities and limitations of these components. By leveraging these components, a social robot can perform certain human-like behaviors reliably; however, this is far from having human-level perception and decision-making capabilities [52].
Dependency on pre-programming: Social robots depend heavily on pre-programming, regardless of whether the robot is teleoperated or fully autonomous. Some of the technical challenges associated with social robots include natural-language understanding [53], speech recognition [53,54], social-signal processing, and action selection among others [31].
Limited capabilities: Even though social robots can reply to questions and communicate with participants, they have limited capabilities to understand the semantics and context of a situation and do not learn these elements in real time [31]. Thus, a natural, human-like, open-ended dialogue with a robot is not possible at present.
Cost: The cost of social robots varies based on their level of functionality [28]. While social robots have demonstrated effectiveness and potential, the targeted users, including researchers, teachers, doctors, therapists, and parents, may not have the funds to acquire a social robot [55]. However, efforts are continuously being made to develop cost-effective and affordable robots [56]. Particularly promising are initiatives promoting open-source robot development,[9] which could allow others to replicate the robot hardware and use previously developed software. An example is the recently developed MyJay robot, designed to support robot-assisted play for children with physical disabilities [57].
Maintenance: Like any other computer-based technology, robots may also require periodic maintenance. Common maintenance issues include hardware or software malfunction, software updates, and charging dependency. Further, a trained human technician is usually needed to complete the technical set-up required for the robot.

Social robots are not humans: Social robots are not suitable for sensitive situations where human-level perceptual abilities, emotions, empathy, and general human-level intelligence and expertise are essential [58]. Ethics-related concerns should be tackled meticulously to make such robots safe and secure for use, so that social robots can be employed effectively in different contexts with human supervision and intervention.
However, despite these limitations, targeted programming and designs allow social robots to interact successfully with humans. Overall, social robots present a unique avenue for research and intervention for serving clinical populations, such as people who stutter. In the context of the proposed scenarios presented later, a social robot cannot replace the experience, training, judgement, or flexibility of the SLPs; however, their functionality complements therapy activities within the scope of the aforementioned benefits.
5 Use of social robots in healthcare
In the field of healthcare, social robots have taken on the roles of assistants, companions, guides, and trainers to boost performance and provide encouragement while clients learn a certain task [59] (see Figure 2). Social robots have also been used with clients of all ages in therapeutic and assistive contexts, such as therapy for children with autism spectrum disorder (ASD), people with dementia, or individuals with diabetes or cancer [59]. In addition, robots have provided companionship to children during extended stays at hospitals [60,61] and acted as a distraction tool during medical procedures [62,63]. Social robots have been used in clinical studies to train literacy, self-management, and awareness [48,64,65], alongside recent proposals to use social robots for social anxiety interventions [66]. Social robots with different appearances have been used (e.g., humanoid, animal-like, cartoon-like, machine-like). Examples include the Kaspar,[10] Zeno,[11] Nao, Aibo,[12] Paro,[13] Pleo,[14] and Keepon[15] robots.
Regarding ASD, social robots have fostered verbal and non-verbal communication skills [67,68,69], enhanced joint attention [70], collaborative skills [71], visual perspective-taking [72], and social/emotional engagement [73], which are specific difficulties for children with ASD [74]. Social robots have also been used for long-term interventions outside controlled lab settings and without extensive technical supervision [46]. For example, Scassellati et al. [75] used social robots to provide home-based social communication skill practice for children with special needs. In this study, the children engaged in a triadic interaction with the robot and their caregiver for 30 minutes per day for a month, which led to improvements in joint attention skills with caregivers. For older adults and adults with special needs and medical conditions, such as dementia, social robots have been used as companions and service robots. Companion robots are defined as robots that fulfil certain tasks, e.g., provide cognitive, physical, or social assistance in activities of daily living, in a socially acceptable manner, to enhance the overall health and well-being of the individual [76]. Some of the most common companion robots are PARO, Aibo, Nao, Furhat,[16] Pepper,[17] Zorabots,[18] Buddy,[19] and AIDO.[20] PARO robots, which have been widely accepted, have demonstrated positive outcomes, especially in dementia care [77]. Older adults with dementia showed improved activity levels [78], strengthened social connections, and reduced stress levels [79]. Service robots, such as the Care-O-Bot,[21] the Hobbit robot,[22] and Pearl [80], have supported independent living, e.g., by providing medication reminders, monitoring activity levels, assisting with cleaning services and basic daily activities [81], supporting exercise and navigation [82], and maintaining safety [83]. The success of these verbal and nonverbal interaction activities mediated by social robots holds considerable potential for children and adults who stutter by embedding fluency goals within communicative contexts. For example, turn-taking games combined with fluency goals could be implemented during early therapy and during transfer and maintenance.
Figure 2: Social robots interacting with people in healthcare settings. (a) The social robot Kaspar, which has been designed for interactions with children with special needs, engaging a child in an activity with educational and therapeutic objectives [84]; (b) the Nao robot playing a self-management educational game with a child with diabetes [64]; (c) a robot assisting an autism therapist in an ASD diagnosis training session [68]; (d) a robot engaging in therapy tasks with a child with special needs [69]; (e) a robot as a collaborator promoting engagement and performance for gait rehabilitation of a patient with a neurological disorder during a therapy session; (f) a training assistant robot providing encouragement and motivation to a patient in a cardiac rehabilitation training session.
5.1 Social robots as tools in communication disorders
Communication can be impacted by disorders of speech (e.g., speech sound disorders, stuttering, cleft palate), language (e.g., developmental language disorder, language delay), and social communication [74]. While research on social robots in communication disorders is limited, some promising results have been reported. In research involving children with special needs, participants' verbal production and engagement in the therapy sessions increased when the session was conducted with the social robots Nao and CommU [85]. Another study found that when two social robots were placed in a disability unit for adolescents with special needs for two years, improvements in articulation, verbal participation, and spontaneous conversations were noted [67]. Similarly, the robot Kaspar, used in a long-term study by caregivers in a nursery school for children with ASD, has shown beneficial outcomes for the participants. This study also found good acceptance among teachers and caregivers who used the robot without direct supervision by researchers [46]. Among children with pervasive developmental disorder, the humanoid robot iRoboi was found to positively impact their communication skills using augmentative and alternative communication strategies [86]. Robles-Bykbaev et al. [87] used a low-cost robot, named SPELTRA (Speech and Language Therapy Robotic Assistant), to support therapy sessions for children with neurodevelopmental disorders such as cerebral palsy, intellectual disability, and dysarthria, among others. Specifically, the robot was programmed to register the participants' information and results from the therapy session, as well as to support the client outside the clinic[23] to reinforce the skills learned in-clinic. The participants quickly adapted to SPELTRA and showed improvements in phonological, morphosyntactical, and semantic communication measures. Other researchers have proposed different applications of social robots in the context of communication disorders, which hints at the multitude of potential ways in which social robots can be leveraged to enhance interventions. For patients with aphasia, Pereira et al. [89] proposed the implementation of the social robot Nao as a mediator in a memory game between a speech therapist and a client with aphasia, to promote understanding of imperative statements. Ramamurthy and Li [88] developed an application with the social robot Buddy for children with cleft lip and palate to practise articulation. Castillo et al. [90] developed an application using a desktop social robot, called Mini, to assist therapists with rehabilitation exercises for adults with apraxia. In regard to stuttering, Kwaśniewicz et al. [91] described an application of the social robot Nao to provide “echo” (a combination of delayed auditory feedback and choral speech) as clients practised their fluency skills. The authors predicted that Nao could potentially enhance the echo effect because the robot also provides visual feedback through arm movements and a sense of company while the clients practise therapy tasks. Still, the impact of the social robot is yet to be validated with empirical evidence [91]. Although significant progress has been made in utilizing social robots in the field of healthcare and clinical practice, implementing these robots in the field of stuttering interventions has not been explored to date. In the next section, we introduce possible applications of social robots in stuttering intervention.
6 Proposed play scenarios with social robots
In this section, we propose higher-level conceptualizations of social robots in stuttering intervention. Formulating and specifying human–robot interaction scenarios is viewed as the first and fundamental step for introducing social robots in a principled manner within established interventions and clinical research. The scenarios are intended to guide and inform future research. The exact details of the scenarios are likely to change during empirical research, which will require co-design, system development and evaluation, and feedback from clients, teachers, caregivers, therapists, and other stakeholders, cf. [84]. The format of the following play scenarios is inspired by previously developed robot-assisted play scenarios [84,92] and broadly conforms to several current robot-assisted interventions that have been used with children, in particular children with ASD.

The scenarios are presented within a therapeutic framework common in stuttering interventions that involves three general phases: establishment (an alternative term for acquisition), transfer, and maintenance. These phases are described by Kully [13] in the context of the Comprehensive Stuttering Program. According to Kully [13], establishment is the first phase of treatment, during which clients learn strategies or skills to support their fluency goals (acknowledging that these skills will differ depending on the stuttering treatment approach). Transfer is the second phase, in which clients apply their skills and strategies in diverse contexts. These speaking activities are finely sequenced to gradually build on factors that influence the client's ability to implement the strategies in simulated and real-world situations. The clinician works closely with the client to structure transfer activities that provide functional practice and build the client's confidence. Maintenance is the last phase, which lasts beyond treatment, in which the client continues to practice treatment skills in everyday life and over the long term.

The training and experience of the SLP are most necessary during the establishment phase, when treatment skills relevant for reducing dysfluency are initially taught and negative attitudes are evaluated. Once the client has learned the relevant treatment skills, the clinician can progress to the transfer phase, where skills are elaborated in varying communication contexts and with diverse conversation partners. The clinician must work closely with the client to appropriately choose sequential transfer activities and adjust the complexity of the task to support the client's progress. Since social robots excel at repetition and are inherently non-judgemental, they may have the most relevance for transfer, when multiple, intensive repetitions of new skills are needed. The clinician may find that structured human–robot transfer activities reduce the time requirements for direct clinician interaction while still achieving transfer goals. Maintenance is introduced as the last phase of treatment, when the client is prepared to sustain learned treatment skills and continue to work towards their personal communication goals. As maintenance of treatment gains often involves continued practice, a social robot could support the client by providing a practice partner and potentially certain forms of feedback related to the client's accuracy in producing treatment skills. Other treatment approaches may vary in their approach to establishment, transfer, and maintenance.
Each of the following scenarios is categorized according to a particular phase of treatment, with treatment objectives set by the clinician (often with input from the client) and social robots programmed accordingly by engineers and roboticists. We strive to propose scenarios from a clinician's perspective and base them on our established experience of translating clinical activities into programs carried out by robots. These scenarios include suggested activities only and describe possible applications of social robots in stuttering therapy both in and outside the clinic. The proposed scenarios would require some adjustment for alternating implementation between clinicians, caregivers, and teachers. As with other social robot experiences, the client is often an active participant in determining the extent and frequency of robot participation.
6.1 Components of play scenarios
The first item in each scenario is the objective of the proposed scenario, which is based on the phase of treatment (establishment, transfer, or maintenance), followed by categories that list the treatment domains (speech, social, and emotional), treatment technique, type of play, and interaction technique for each scenario. Other items in the scenarios are as follows: (a) participant roles and behaviors, (b) specification of robot configuration and mode of operation, (c) setting and time-frame, (d) possible variations, and (e) benefits that social robots offer to the clients and SLPs. The details are aimed at providing SLPs, roboticists, and researchers with starting points for leveraging robots to enhance stuttering interventions. The following sections discuss the integration of play and different interaction techniques within the scenarios.
6.2 Integration of play
The proposed scenarios are play-based because play is a crucial aspect of life that promotes the cognitive, physical, social, and emotional well-being of children and youth [93]. Play is “fun, educational, creative, stress-relieving and encourages positive social interactions and communication” [94]. Play therapies are powerful modalities for working with different populations [95]. With regard to social robots, play can make human–robot interaction more enjoyable and motivating. In the aforementioned examples of social robots for children with ASD or people with dementia, the interaction with the robot was inherently rewarding, in the absence of any additional, explicit reward [84]. The proposed scenarios incorporate social and cognitive play, which is typically used with children but can be extended to adolescents and adults. Social play is further divided into the four categories of solitary, parallel, associative, and cooperative play [96], whereas cognitive play, proposed by Piaget [97], includes three categories: practice play, symbolic play, and play with rules. These types of play encourage participants to enhance their skills in the social, cognitive, psychological, emotional, and communication domains, all of which could benefit persons who stutter.
6.3 Integration of interaction techniques
The interaction techniques used in the proposed scenarios are intended to promote engagement and provide priming for participants. These techniques are described as follows:
Peer learning. This interaction technique has been widely used in the field of education. Peer learning is a reciprocal learning method, in which participants play the roles of both teachers and students [98]. Further, it is a useful technique because participants are required to take on the responsibility for their own learning through communication, provision, and reception of feedback [99]. Chandra et al. [100] used peer learning in their study on human–robot interaction during a writing activity where the children acted as peers and the social robot (Nao) as instructor. Their findings suggest that peer learning enhanced the participants’ learning gains [100,101].
Learning-by-teaching. Also known as peer-tutoring, in this interaction technique, a student takes on the role of the teacher and teaches other learners, which enhances their own learning [102]. Prior to teaching the other learners, the student engages with the task and specific content [103]. Teaching the material taps into the three core aspects of learning interactions, which are structuring, taking responsibility, and reflecting [103]. There is substantial evidence suggesting that teaching others is an effective method for personal learning [104]. Several studies on HRI also found that peer-tutoring was an effective method for improving children’s handwriting capabilities [101,105]. In these studies, children taught a robot how to write, which led to improvements in their own writing.
Support groups. Support groups are gatherings of five or more people with a common problem that may or may not include a trained professional [106]. Support groups are beneficial because they can provide a sense of community and a safe environment for self-disclosure, help foster new friendships, offer resources and recommendations for coping with difficulties, facilitate improvement in social skills, and reduce the level of distress [106]. Birmingham et al. [108] used a Nao robot as a mediator to investigate trust dynamics in a support group of students who were strangers. The results of this study indicated that the robot-mediated support group could improve interpersonal trust among the group members.
Model-rival method. Developed by Dietmar Todt, the model-rival method involves a three-way interaction between two researchers and one student [109]. One of the researchers acts as an instructor, while the other models the behaviors of interest and is the student's rival for the instructor's attention. With this interaction technique, the goal is to indirectly teach the student the behaviors of interest. The student observes the response the model receives from the instructor when the model responds correctly (i.e., reinforcement of interest to the student) or incorrectly (i.e., punishment) to a prompt [110]. The student observes these interactions and learns the targeted behavior in order to receive the reinforcement of interest. Pepperberg successfully implemented this technique to teach her parrot, Alex, colours, shapes, and numbers [110]. Inspired by this, Fishman [111] proposed using the model-rival method for preschool children with complex communication needs, where the rival could be replaced with low-tech devices (e.g., displays, speech-generating devices, or a helping doll) to promote greater communication partner involvement. However, this technique has not yet been explored in the field of HRI.
6.4 Integration of treatment techniques
The treatment techniques used in the proposed scenarios are behavioral interventions typically used for stuttering. These techniques fall into two categories: fluency shaping and stuttering modification techniques.
Fluency-shaping techniques: Also known as speech restructuring/modification or prolonged speech treatments, these techniques promote fluent speech by teaching new speech production patterns to the client [112,113,114]. Examples of fluency-shaping techniques include prolonged speech, regulated breathing, and syllable-timed speech [112,115].
Prolonged speech: Slowed or prolonged speech is an effective speech restructuring technique that focuses on controlling speech rate. It typically involves learning to produce elongated speech segments at a very slow rate, often timed at a syllable-by-syllable level. The overall goal is to gradually increase speech rate while maintaining fluency and naturalness [115,116].
Regulated breathing: Also known as habit reversal, regulated breathing seeks to reduce stuttering by teaching more effective speech-related respiratory behaviors that are incompatible with stuttering [117,118]. It is a multi-component approach that includes awareness, relaxation, competing response, motivation, and generalization training [119].
Syllable-timed speech: This treatment involves timing each syllable to a rhythmic beat. For example, a client is instructed to produce syllables of equal duration in time to a metronome beat. This technique is believed to stabilize the speech motor system by reducing the variations in linguistic stress [120,121].
Stuttering modification techniques: These techniques are anxiolytic (i.e., anxiety-reducing) in nature and aim to promote desensitization to stuttering, awareness of stuttering moments, and acceptance of one’s stuttering [115]. The focus of stuttering modification techniques is to reduce fear, anxiety, low self-esteem, shame, and avoidance of stuttering moments, or speaking situations among individuals who stutter [112,122]. Examples of these techniques include pseudo-stuttering and self-disclosure of stuttering [112].
Voluntary stuttering: Also called pseudo-stuttering, this technique entails the deliberate production of overt dysfluencies that resemble stuttering by the client or clinician [123,124]. It is used to provide desensitization to stuttering and to reduce fear, negative emotions associated with stuttering, and feelings of loss of control [123,125].
Operant conditioning: Several stuttering interventions incorporate the principles of operant conditioning, which is described as “the process by which the frequency of a response is changed as a result of the consequences of that response” [126]. Contingent consequences, such as positive reinforcement, are typically used in operant conditioning techniques to increase or decrease the frequency of a specific response [127,128]. For individuals who stutter, treatment techniques include parents' verbal contingencies for stuttered and stutter-free speech. For example, the clinician teaches parents to reinforce fluent utterances and to correct dysfluent utterances [129].
Time-out: This operant conditioning technique, which is also known as “response-contingent time-out,” involves a deliberate cessation of speech after a stuttering episode [130]. This technique can incorporate clinician-led time-outs or self-administered time-outs by the client. This technique is typically used for reducing dysfluency in teenagers and adults [130].
6.5 Robot as a tool box – case study
In addition to the scenarios presented in Tables 1, 2, 3, 4, 5, 6, 7, and 8, a social robot can function as an embodied toolbox, i.e., a fluency buddy, that provides different tools for enhancing fluency in varied settings. The robot can be customized by the SLP with different fluency-inducing applications (e.g., delayed auditory feedback), fluency techniques (e.g., stretched speech), speed/timing of applications, variable feedback delivery, and language difficulty/complexity. In addition, the fluency buddy could complement clinical activities by allowing the client to practice targeted skills in an engaging manner outside the clinic.
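To indicate how such customization might be specified in software, the sketch below shows one possible configuration structure for a fluency buddy; every field name and value is an illustrative assumption rather than part of any existing robot system or API.

```python
# Illustrative sketch of an SLP-facing "fluency buddy" configuration.
# All field names and values are hypothetical, not an existing robot API.
fluency_buddy_config = {
    "client_profile": {
        "age_group": "adult",
        "stuttering_severity": "moderate",
    },
    "fluency_tool": "delayed_auditory_feedback",     # e.g., DAF, metronome, stretched speech
    "tool_settings": {"daf_delay_ms": 75},
    "target_syllables_per_minute": 120,
    "feedback_delivery": "encouraging_verbal",       # variable feedback delivery
    "stimulus_complexity": "short_phrases",          # language difficulty/complexity
    "session_minutes": 15,
    "log_performance": True,                         # for later review by the SLP
}
```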
Description of scenario 1
Scenario 1: Beats with peers | |
---|---|
Objectives | Acquisition and transfer of fluency skills |
Treatment domain | Speech domain |
Treatment technique | Syllable-timed speech |
Play type (social ∣ cognitive) | Cooperative and practice play |
Interaction technique | Peer learning |
Participants’ role & behavior | There are two participants in this scenario, a social robot and the individual who stutters. Both participants alternate as instructors and learners |
Activity description | |
This scenario entails a cooperative task between a social robot and an individual who stutters. A metronome, downloaded on a tablet, will be adjusted to a target syllable rate. At the beginning of the scenario, the robot and the client are learners who practice syllable-timed speech. As a co-learner, the robot can make the establishment phase more engaging by actively modelling the technique and interacting with the client as a peer. After establishing the basic skill, the robot and the client can engage in a peer learning game, where they alternate between tasks of modelling and learning syllable-timed speech. The participant who is modelling will demonstrate how to pace their speech according to the metronome, and the other party will repeat after the modelling. Then the participants will switch roles |
Robot configuration & mode of operation | A social robot that has speech capability will be used, such as the Pepper robot [131]. This social robot will also have a tablet embedded in its chest to display the words or phrases as well as the metronome. This robot can be operated in a Wizard-of-Oz (WoZ) or semi-autonomous manner |
Setting & time | This scenario can be carried out in a home, school, or clinic setting over multiple sessions. Telepractice is also a feasible option for both clinic and school settings. The duration and frequency of the sessions would be set according to the clinician’s treatment plan |
Variation | The level of difficulty can be adjusted (i.e., increasing the number of syllables per words and/or sentence lengths) in order to adapt the scenario to other age groups and treatment hierarchy. The activity can also include more participants to promote transfer to group speaking situations |
Benefits | In this scenario, the individual who stutters will benefit from a social robot's ability to offer customized, repetitive, and non-judgemental practice in a play-based format. If effective, such practice could reduce time demands, allowing the SLP to see more clients or provide more focused intervention. The social robot's ability to record the client's performance could help in monitoring progress |
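As a rough sketch of how the peer-learning turn structure in Scenario 1 could be sequenced in software, the loop below alternates the modelling and repeating roles; the stimuli are placeholders, and print statements stand in for the robot's speech and prompts rather than any specific robot's text-to-speech API.

```python
# Rough sketch of Scenario 1's turn alternation: robot and client take turns
# modelling and repeating syllable-timed stimuli. Stimuli are placeholders,
# and print statements stand in for the robot's speech and on-screen prompts.
stimuli = ["ba-na-na", "com-pu-ter", "to-mor-row morn-ing"]

for turn, stimulus in enumerate(stimuli):
    model, learner = ("robot", "client") if turn % 2 == 0 else ("client", "robot")
    print(f"[{model}] models '{stimulus}' paced to the metronome beat")
    print(f"[{learner}] repeats '{stimulus}' using syllable-timed speech")
```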
Description of scenario 2
Scenario 2: Musical modelling | |
---|---|
Objectives | Transfer of treatment techniques |
Treatment domain | Social and speech domains |
Treatment technique | Syllable-timed speech, regulated breathing, or voluntary stuttering along with positive reinforcement |
Play type (social ∣ cognitive) | Cooperative and symbolic play |
Interaction technique | Peer learning and support group [132,133,134] |
Participants’ role & behavior | A speech-language pathologist, a small group of individuals who stutter, and one social robot are involved. The social robot will model treatment techniques, while the speech-language pathologist mediates group interactions and activities |
Activity description | |
In this scenario, the group of individuals who stutter will review a stuttering treatment technique that they previously learned. During a review session, the social robot will model the stuttering treatment technique for the participants as the clinician explains it. After this review, the group of individuals who stutter will engage in a game similar to musical chairs (see note below). The participants will pass a ball while the music plays, and when it stops, the participant with the ball will select an activity under their name on a tablet. The social robot will model the selected activity (as many times as required), and then the participant will follow the social robot's lead to complete the activity. The SLP will monitor the participants and mediate when required (e.g., providing an alternative model if a participant has difficulty with the task) |
Robot configuration & mode of operation | A social robot with speech capability, like Pepper, that can operate in a Wizard-of-Oz (WoZ) [133] or semi-autonomous manner will be used |
Setting & time | This scenario can be conducted in a school or clinical setting over multiple sessions and duration as prescribed by the SLP |
Variation | The words or phrases displayed on the tablet can be customized to different reading levels, stuttering severity, and treatment goals |
Benefits | Through this scenario, individuals can transfer a therapy technique to a group setting, which can build confidence and introduce variation in using a skill. The social robot can record the progress of a participant, which the SLP can review at any time. The time requirements for direct SLP interaction during practice sessions can be reduced |
Note: A game in which a set of chairs is arranged in a circle and there are fewer chairs than participants. While music plays, players walk around the chairs, and when the music stops abruptly, every player must occupy a chair. The player who fails to find a chair is eliminated from the game. For the next round, a chair is removed, and the process repeats until one player remains in the game and is declared the winner.
Description of scenario 3
Scenario 3: Bouncy bingo | |
---|---|
Objectives | Desensitization to stuttering |
Treatment domain | Social and emotional domains |
Treatment technique | Voluntary stuttering |
Play type (social ∣ cognitive) | Cooperative play and games with rules |
Interaction technique | Peer learning |
Participants’ role & behavior | In this play scenario, a child who stutters and a social robot operate as peers |
Activity description | |
This scenario consists of a cooperative game with rules between a social robot and a child client. Before starting the game, the clinician or a caretaker will review voluntary stuttering with the client, and the social robot will be programmed to model voluntary stuttering. After the review, the client and the social robot engage in a game of Bingo! (see note below) [136], mediated by an application run by the robot. On each turn, a participant selects a card presented on a tablet, which has words or phrases adjusted to individual treatment goals. Then the client and the social robot use voluntary stuttering to produce the stimulus, provide feedback to each other, potentially assign a score, and then continue the game |
Robot configuration & mode of operation | A social robot with the capability to speak and operate autonomously, such as Pepper, QT, Nao |
Setting & time | This play scenario can be conducted in a home or clinic setting over multiple sessions as prescribed. Telepractice can also be incorporated in this scenario |
Variation | The number of participants or the difficulty of stimuli can be increased to facilitate transfer. The fluency technique can also be varied, such as using cancellations or pull-outs |
Benefits | The client benefits from the social robot’s non-judgemental engagement that also helps to foster a comfortable play environment and more communication |
Note: Players are provided a card with random numbers in different arrangements. A host announces different numbers that the players mark on their cards. The player who completes the card by marking 5 numbers in a row or column yells “Bingo!” to stop the game. If all the marked numbers were actually announced and marked correctly, then the player is the winner and a new round is started.
Description of scenario 4
Scenario 4: Talking robots | |
---|---|
Objectives | To promote practice of a fluency skill |
Treatment domain | Speech domain |
Treatment technique | Prolonged speech and operant conditioning |
Play type (social ∣ cognitive) | Associative and practice play |
Interaction technique | Model-rival method [109,110,137] |
Participants’ role & behavior | In this scenario, there will be a social robot as a model/rival, a clinician as an instructor, and a client as a learner |
Activity description | |
This scenario entails associative and practice play between an individual who stutters, a clinician who indirectly teaches the client a fluency-enhancing technique, and a social robot that will assume the role of model-rival for the client. The clinician will first train the social robot on the fluency technique intended for the client. The clinician will then provide verbal and non-verbal positive reinforcement each time the social robot produces the target behavior. The client will first observe the target skill and the positive reinforcement from the clinician. After the practice round with the social robot, the clinician will repeat the task with the client in the same format to model the skill and provide appropriate reinforcement |
Robot configuration & mode of operation | A social robot with speech capability and appropriate emotional expression, such as Pepper and Nao, functioning in a WoZ or semi-autonomous manner |
Setting & time | This scenario can be conducted in a clinic or school setting over multiple sessions and duration. Telepractice can be incorporated in both settings |
Variation | Different fluency skills and complexity can be used based on the treatment goals. The roles and order of participation can also be varied. As an alternative, the robot can be controlled by the SLP in a WoZ mode to question “bumpy” vs “smooth” speech as part of a modified Lidcombe approach |
Benefits | The use of model-rival framework and positive reinforcement can encourage clients to practise therapy skills modelled by the social robots in a more engaging routine |
Description of scenario 5
Scenario 5: Musical Jeopardy! | |
---|---|
Objectives | To promote transfer of a fluency skill |
Treatment domain | Speech and social domains |
Treatment technique | Variable treatment skills – e.g., prolonged speech |
Play type (social ∣ cognitive) | Cooperative, parallel play and play with rules |
Interaction technique | Support group |
Participants’ role & behavior | In this scenario, there will be four or more clients who stutter and two social robots, which will model speech behaviors for the participants in a setting resembling a support group |
Activity description | |
This scenario involves cooperative and parallel play and play with rules between multiple clients who stutter and two social robots. The clients are divided into teams consisting of at least two clients and a social robot to play a modified version of Jeopardy! (see note below). Using a Jeopardy!-style game pre-programmed into the robots, each group will take turns selecting a category (e.g., movies, geography, music). In each category, there will be an option to select one of five speech tasks arranged in a hierarchy of increasing difficulty (e.g., 200 points for the easiest speech task and 1,000 points for the most difficult speech task). On each turn, a speech task will be displayed on the robot's tablet; the robot will model a fluency skill and prompt the client to complete the task using the skill. The teams can compete for the highest score at the discretion of the SLP |
Robot configuration & mode of operation | A social robot with speech capability, such as Pepper, Nao, or QT, operating in a semi-autonomous manner |
Setting & time | School or clinic |
Variation | In an alternate version, the social robot can lead the first session as a model, followed by the clients, who alternate as leaders. The complexity of the reading material can be varied according to reading level and treatment hierarchy |
Benefits | The clients practice treatment skills in a play-based transfer session where the speech tasks can be customized for the treatment goals, with modelling provided by social robots and other group members |
Note: Jeopardy is a game show where participants engage in a general knowledge quiz. Each participant selects a dollar value and a category from the game board, and responds to a trivia prompt in the form of a question [138].
Description of scenario 6
Scenario 6: Smooth skits | |
---|---|
Objectives | To promote transfer and maintenance of fluency skills through a skit |
Treatment domain | Speech and social domains |
Treatment technique | Fluency-shaping or stuttering modification techniques |
Play type (social ∣ cognitive) | Cooperative, parallel, and practice play |
Interaction technique | Peer learning |
Participants’ role & behavior | In this scenario, the client who stutters and a social robot each perform a role in a skit |
Activity description | |
This scenario consists of an activity that entails elements of cooperative, parallel, and practice play with the social robot as a peer. The social robot and the client each adopt one of the roles in a two-actor skit, which could be a simulated job interview, television show, or movie. Both the robot and the client read the lines of their assigned role using a target fluency skill. After receiving feedback from the clinician, teacher, or a caregiver, they can switch roles and repeat the skit. The robot can provide the client with positive verbal feedback during and after the performance (a simple way to sequence such a skit is sketched after this table) |
Robot configuration & mode of operation | A social robot with the capability to speak and operate semi-autonomously, such as Pepper, QT, or Nao |
Setting & time | School, clinic, or home setting. In addition, telepractice can be incorporated in school or clinic settings |
Variation | The skits can be adjusted to the age and interests of the clients. For example, an adult who stutters might be interested in practicing skits rooted in real-life social interactions, such as job interviews. The number of participants can be increased to promote social interaction and generalization of learned skills |
Benefits | The client can apply fluency skills in different contexts by engaging in scripted simulations of everyday and imaginative situations to promote transfer and maintenance |
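A possible way to sequence such a skit on a semi-autonomous robot is sketched below: the script alternates lines between the robot's role and the client's role, and swapping the robot_role argument realizes the role switch mentioned above. The skit text and the robot_say/prompt_client callables are illustrative assumptions, not part of any existing robot API.

```python
# Minimal sequencing sketch for the "Smooth skits" scenario. The two callables
# are placeholders: robot_say would wrap the robot's text-to-speech, and
# prompt_client could display the client's next line on the robot's tablet.

SKIT = [  # (role, line) - a toy job-interview skit for illustration only
    ("interviewer", "Thank you for coming in today. Tell me about yourself."),
    ("candidate", "I am happy to be here. I enjoy solving problems with people."),
    ("interviewer", "What is your greatest strength?"),
    ("candidate", "I stay calm and use my smooth speech, even when I am busy."),
]

def perform_skit(robot_role, robot_say, prompt_client):
    """Step through the skit, giving the robot its lines and cueing the client."""
    for role, line in SKIT:
        if role == robot_role:
            robot_say(line)
        else:
            prompt_client(role, line)

if __name__ == "__main__":
    # First pass: robot plays the interviewer; second pass: roles are swapped.
    say = lambda text: print(f"[robot] {text}")
    cue = lambda role, line: print(f"[client as {role}] your line: {line}")
    perform_skit("interviewer", say, cue)
    perform_skit("candidate", say, cue)
```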
Description of scenario 7
Scenario 7: Scavenger hunt | |
---|---|
Objectives | To promote acquisition, transfer, and maintenance of therapy techniques |
Treatment domain | Speech and social domains |
Treatment technique | Fluency-shaping or stuttering modification techniques |
Play type (social ∣ cognitive) | Cooperative and symbolic play |
Interaction technique | Peer interaction |
Participants’ role & behavior | In this scenario, the client who stutters and a social robot engage in a search and discover activity |
Activity description | |
This scenario contains elements of cooperative and symbolic play. A client who stutters and the social robot engage in a search-and-discover activity in which both search for hidden items, each containing a cut-out that specifies a speech task (i.e., words or phrases of varying complexity). Once a cut-out is found, the client is prompted to complete the speech task using a targeted therapy technique. As needed, the social robot can model the speech task for the participant, and feedback can be provided by the clinician. After completing the task successfully, the participants search for the next item |
Robot configuration & mode of operation | A social robot with the capability to speak, move, and operate semi-autonomously, such as Pepper, QT, or Nao |
Setting & time | Clinic, school, or home setting |
Variation | The tasks can be adjusted to the age and reading level of the clients |
Benefits | This scenario allows clients to practice a variety of therapy skills while engaging the social robot’s haptic abilities to make the practice session more interesting and entertaining |
Description of scenario 8
Scenario 8: Your wish is my command! | |
---|---|
Objectives | To promote acquisition of fluency in imperative speech |
Treatment domain | Speech, emotional, and social domains |
Treatment technique | Fluency-shaping or stuttering modification techniques |
Play type (social ∣ cognitive) | Cooperative and practice play |
Interaction technique | Peer learning |
Participants’ role & behavior | In this scenario, the client who stutters acts as instructor and the robot as the assistive agent |
Activity description | |
This scenario consists of elements of cooperative and practice play. The premise of this scenario is rooted in the difficulty clients who stutter may have with imperative statements and requests for service. In this scenario, the client could take on the role of an instructor or of an individual seeking a service in different everyday social situations (e.g., ordering food, calling a store, or requesting a song), while the social robot complies with the requests. The imperative commands need to be produced fluently in order to elicit a response from the social robot (a rough prototype of this compliance rule is sketched after this table) |
Robot configuration & mode of operation | A social robot with the capability to speak and operate semi-autonomously, such as Pepper, QT, or Nao |
Setting & time | This scenario can be used in a school or home setting |
Variation | The imperative commands can be scripted and presented to the individual who stutters on a tablet in order to provide the client with a point of reference. The context in the scenario can involve the most relevant social situations for the client |
Benefits | Clients can practice imperative statements and requests in an engaging yet non-judgemental environment. For individuals who stutter, the social robot can function as a stepping stone toward establishing the skills and confidence needed outside the clinic |
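The compliance rule at the core of this scenario, namely that the robot acts only on requests judged fluent, could be prototyped roughly as follows. The fluency judgement is deliberately left as an external input (a clinician keypress in a WoZ setup, or the output of an automatic dysfluency classifier such as those cited in the conclusion [141,142]); the command list, responses, and robot_say callable are assumptions made for illustration only.

```python
# Rough prototype of the request-compliance loop in "Your wish is my command!".
# Whether an utterance counts as fluent is NOT detected here; it is passed in,
# e.g., from a clinician's WoZ keypress or an external dysfluency classifier.

ACTIONS = {
    # scripted requests -> scripted robot responses (illustrative only)
    "play a song": "Playing your favourite song now!",
    "order a pizza": "One pizza, coming right up!",
    "call the store": "Calling the store for you.",
}

ENCOURAGEMENT = "Let's try that again together, nice and smoothly."

def handle_request(request: str, judged_fluent: bool, robot_say) -> None:
    """Comply only when the request was judged fluent; otherwise encourage."""
    if not judged_fluent:
        robot_say(ENCOURAGEMENT)
        return
    response = ACTIONS.get(request.strip().lower())
    robot_say(response if response else "I am not sure how to do that yet.")

if __name__ == "__main__":
    say = lambda text: print(f"[robot] {text}")
    handle_request("Play a song", judged_fluent=True, robot_say=say)
    handle_request("order a pizza", judged_fluent=False, robot_say=say)
```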
7 Conclusion
Stuttering impacts a large segment of the population: approximately 1% of people stutter. Individuals with the chronic form experience significant reductions in their quality of life, with many facing bullying or peer rejection, emotional embarrassment and frustration, and psychological sequelae (e.g., self-stigma and social anxiety). Given such reductions in quality of life, it is critical to investigate technologically intensive interventions that have the potential to enhance current treatment options. Research studies using technologies such as virtual reality and telepractice have revealed the potential and benefits of these technologies for people who stutter. Similarly, we propose social robots as a state-of-the-art technology that could enhance stuttering interventions. In this position paper, we have provided a context and scenarios for practical applications of social robots to supplement varied stuttering interventions, which seek to promote fluency, offset the negative impacts of stuttering, and promote effective, low-effort communication.
Social robots are designed to interact with people in interpersonal, social, and cooperative ways. They offer a multitude of benefits, including the ability to facilitate non-judgemental therapeutic environments, maintain task consistency, offer adaptability, and generate interest and engagement while being physically present during an interaction. Given these characteristics and the available literature demonstrating the effectiveness of incorporating social robots in interventions for children with special needs and for people with dementia, diabetes, or cancer, we believe that social robots hold tremendous potential for persons who stutter. To exploit these benefits, this article proposes novel robot-assisted play scenarios that combine emerging technologies with stuttering intervention practices. Each play scenario presented in Tables 1–8 consists of several components, including play types, treatment domain, and treatment techniques. These components are presented in considerable detail and are tangible contributions that we hope can be applied in therapy and clinical research (see Table 9). The proposed scenarios have been designed to partner with SLPs, caregivers, and teachers, according to a therapeutic framework that involves three phases – establishment, transfer, and maintenance – with social robots mostly contributing to transfer activities (see Table 9). A limitation of the proposed scenarios is social robots’ presently limited ability to automatically recognize and generate speech, but this is an active area of research in which great progress has been made [139,140], including automated techniques to recognize stuttering [141,142]. Turn-taking, programmed speech production, and game applications have already been implemented in many commercial robots (e.g., Nao or Pepper), making the development of such scenarios feasible through collaborations between engineers, clients, clinicians, and researchers. These scenarios are proposed as first steps towards productive collaborations between stakeholders in the fields of stuttering and HRI research that could further expand and enhance stuttering treatment options.
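To make the last point concrete, one plausible integration is sketched below under the assumption that a per-utterance dysfluency score is available from an external classifier (e.g., of the kind cited above [141,142]): the scores are simply logged per scenario and treatment phase so the SLP can review progress across establishment, transfer, and maintenance. The file name, score convention, and helper functions are hypothetical and do not describe an existing implementation.

```python
# Hypothetical glue code: store per-utterance dysfluency scores produced by an
# external classifier so a clinician can review them across sessions/phases.
# The classifier itself is out of scope; scores are assumed to be floats in
# [0, 1], where higher means more dysfluent speech was detected.
from __future__ import annotations

import csv
from datetime import datetime
from statistics import mean

LOG_FILE = "session_scores.csv"  # illustrative path

def log_utterance(phase: str, scenario: str, score: float) -> None:
    """Append one utterance-level score with a timestamp."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), phase, scenario, score])

def session_summary(phase: str) -> float | None:
    """Mean dysfluency score for a given treatment phase, if any data exists."""
    try:
        with open(LOG_FILE, newline="") as f:
            scores = [float(row[3]) for row in csv.reader(f) if row[1] == phase]
    except FileNotFoundError:
        return None
    return mean(scores) if scores else None

if __name__ == "__main__":
    log_utterance("transfer", "Musical Jeopardy!", 0.12)
    log_utterance("transfer", "Smooth skits", 0.30)
    print(session_summary("transfer"))
```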
Table 9: Summary of the proposed play scenarios
Scenario | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
---|---|---|---|---|---|---|---|---|
Scenario name | Beats with peers | Musical modelling | Bouncy Bingo | Talking robots | Musical Jeopardy! | Smooth skits | Scavenger hunt | Your wish is my command! |
Play Type | Cooperative, practice | Cooperative, symbolic | Cooperative, games with rules | Associative, practice | Cooperative, parallel, game with rules | Cooperative, parallel, practice | Cooperative, symbolic | Cooperative, practice |
Play medium | Digital | Hybrid | Hybrid | Digital | Digital | Digital | Hybrid | Digital |
Treatment domain | Speech | Social, speech | Social, emotional | Speech | Speech, social | Speech, social | Speech, social | Speech, emotional and social |
Treatment technique | Syllable-timed speech | Syllable-timed speech, regulated breathing, voluntary stuttering, with positive reinforcement | Voluntary stuttering | Prolonged speech with operant conditioning | Variable treatment skills, e.g., prolonged speech | Fluency-shaping or stuttering modification techniques | Fluency-shaping or stuttering modification techniques | Fluency-shaping or stuttering modification techniques |
Interaction technique | Peer learning | Peer learning, support group | Peer learning | Model-rival method | Support group | Peer learning | Peer learning | Peer learning |
Participants’ role | R: Instructor/learner C: Instructor/learner | R: Model C: Player SLP: Moderator | R: Peer C: Peer | R: Model C: Rival SLP: Instructor | R1: Model R2: Model C: Player | R: Peer C: Peer | R: Peer C: Peer | R: Peer C: Peer |
Number of participants | R: 1 C: 1 | R: 1 C: 2 or more | R: 1 C: 1 | R: 1 C: 1 SLP: 1 | R1: 1 R2: 1 C: 4 or more | R: 1 C: 1 | R: 1 C: 1 | R: 1 C: 1 |
Setting | Home, school, or clinic (including telepractice) | School or clinic | Home, school, or clinic (including telepractice) | School or clinic (including telepractice) | Home, school, or clinic | Home, school (including telepractice) or clinic (including telepractice) | Clinic, school, or home | School or home |
Note: 1. Digital: The interaction is mediated by an electronic tablet or another electronic device. 2. Hybrid: The interaction is mediated by an electronic tablet and other non-electronic play material, such as a ball. 3. “R” stands for social robot, “C” stands for the client/human participant, and “SLP” stands for the clinician.
To conclude, technological advancements in robotics and HRI show no signs of slowing down and have the potential to significantly benefit clinical populations such as people who stutter. Hence, we argue for the need to explore the potential of these new technologies through surveys and pilot studies with relevant stakeholders. Exploring contemporary technologies, such as social robots, for stuttering and other communication disorders not only provides a platform to test these technologies in clinical and healthcare domains but also potentially expands the scope of treatments. We hope that the appropriate introduction of social robots in the treatment of people who stutter can benefit both speech-language pathologists and clients. We are currently planning experiments and surveys in collaboration with speech-language pathologists to explore and further refine the aforementioned scenarios and to investigate therapy contexts in which social robots can serve as effective tools in the hands of clinicians.
Acknowledgments
We thank Holly Lomheim and Jessica Harasym for relevant feedback on the article and HRI scenarios.
- Funding information: This research was funded, in part, by the Canada 150 Research Chair Programme (Shruti Chandra, Garima Gupta, Kerstin Dautenhahn).
- Author contributions: SC contributed to the conceptualization, methodology, writing, and editing of the original draft preparation. GG worked on the conceptualization and editing of the original draft. TL supervised, reviewed, and edited the manuscript. KD supervised, reviewed, and edited the manuscript, and acquired the funding. The authors applied the SDC (sequence determines credit) approach for the sequence of authors.
- Conflict of interest: The authors state no conflict of interest.
- Informed consent: Not applicable; the research reported here did not involve a study with human participants, and thus no informed consent was obtained.
- Ethical approval: The conducted research is not related to either human or animal use.
- Data availability statement: Data sharing is not applicable to this article, as no datasets were generated during the research work.
References
[1] A. Almudhi, “Evolution in technology and changes in the perspective of stuttering therapy: A review study,” Saudi J. Biol. Sci., vol. 28, no. 1, p. 623, 2021. 10.1016/j.sjbs.2020.10.051
[2] World Health Organization, Manual of the international statistical classification of diseases, injuries, and causes of death, Geneva (Switzerland), WHO, 1977, vol. 1.
[3] J. E. Prasse and G. E. Kikano, “Stuttering: an overview,” American Family Physician, vol. 77, no. 9, pp. 1271–1276, 2008.
[4] C. Theys, A. Van Wieringen, and F. Luc, “A clinician survey of speech and non-speech characteristics of neurogenic stuttering,” J. Fluency Disorders, vol. 33, no. 1, pp. 1–23, 2008. 10.1016/j.jfludis.2007.09.001
[5] E. Yairi and N. G. Ambrose, Early Childhood Stuttering, Pro Ed, Austin, USA, 2004.
[6] O. Bloodstein and N. Bernstein-Ratner, A Handbook on Stuttering, 6th edn, New York, NY: Thomson-Delmar, 2008.
[7] S. Reilly, M. Onslow, A. Packman, M. Wake, E. L. Bavin, M. Prior, et al., “Predicting stuttering onset by the age of 3 years: A prospective, community cohort study,” Pediatrics, vol. 123, no. 1, pp. 270–277, 2009. 10.1542/peds.2007-3219
[8] H. Månsson, “Childhood stuttering: Incidence and development,” J. Fluency Disorders, vol. 25, no. 1, pp. 47–57, 2000. 10.1016/S0094-730X(99)00023-6
[9] A. Craig, K. Hancock, Y. Tran, M. Craig, and K. Peters, “Epidemiology of stuttering in the community across the entire life span,” J. Speech Language Hearing Res., vol. 45, no. 6, pp. 1097–1105, 2002. 10.1044/1092-4388(2002/088)
[10] E. Yairi and N. Ambrose, “Epidemiology of stuttering: 21st century advances,” J. Fluency Disorders, vol. 38, no. 2, pp. 66–87, 2013. 10.1016/j.jfludis.2012.11.002
[11] O. Bloodstein and N. Bernstein Ratner, A Handbook on Stuttering, Clifton Park, NY: Thompson/Delmar, 2008.
[12] J. S. Yaruss, “Assessing quality of life in stuttering treatment outcomes research,” J. Fluency Disorders, vol. 35, no. 3, pp. 190–202, 2010. 10.1016/j.jfludis.2010.05.010
[13] D. Kully, “Intensive treatment of stuttering in adolescents and adults,” Stuttering Related Disorders Fluency, pp. 213–232, 2007.
[14] G. Gupta, S. Chandra, T. Loucks, and K. Dautenhahn, “Stuttering treatment approaches from the past two decades: Comprehensive survey and review,” J. Student Res., 2022, in press. 10.47611/jsr.v11i2.1562
[15] J. Kalinowski, V. K. Guntupalli, A. Stuart, and T. Saltuklaroglu, “Self-reported efficacy of an ear-level prosthetic device that delivers altered auditory feedback for the management of stuttering,” Int. J. Rehabil. Res., vol. 27, no. 2, pp. 167–170, 2004. 10.1097/01.mrr.0000128063.76934.df
[16] J. H. Davidow, “Systematic studies of modified vocalization: the effect of speech rate on speech production measures during metronome-paced speech in persons who stutter,” Int. J. Language Commun. Disorders, vol. 49, no. 1, pp. 100–112, 2014. 10.1111/1460-6984.12050
[17] V. A. Coppola and E. Yairi, “Rhythmic speech training with preschool stuttering children: An experimental study,” J. Fluency Disorders, vol. 7, no. 4, pp. 447–457, 1982. 10.1016/0094-730X(82)90020-1
[18] S. Block, M. Onslow, R. Roberts, and S. White, “Control of stuttering with EMG feedback,” Adv. Speech Language Pathol., vol. 6, no. 2, pp. 100–106, 2004. 10.1080/14417040410001708521
[19] S. B. Brundage and A. B. Hancock, “Real enough: Using virtual public speaking environments to evoke feelings and behaviors targeted in stuttering assessment and treatment,” Am. J. Speech-Language Pathol., vol. 24, no. 2, pp. 139–149, 2015. 10.1044/2014_AJSLP-14-0087
[20] A. Moïse-Richard, L. Ménard, S. Bouchard, and A.-L. Leclercq, “Real and virtual classrooms can trigger the same levels of stuttering severity ratings and anxiety in school-age children and adolescents who stutter,” J. Fluency Disorders, vol. 68, p. 105830, 2021. 10.1016/j.jfludis.2021.105830
[21] A. Almudhi, “Evaluating adaptation effect in real versus virtual reality environments with people who stutter,” Expert Rev. Medical Devices, vol. 19, no. 1, pp. 1–7, 2021. 10.1080/17434440.2021.1894124
[22] E. Haynes and M. Langevin, “Telepractice at the institute for stuttering treatment and research (ISTAR),” in: Paper presented at the 13th International Stuttering Awareness Day Online Conference for ISAD on-line Forum, October 2010.
[23] M. McGill, N. Noureal, and J. Siegel, “Telepractice treatment of stuttering: A systematic review,” Telemed. e-Health, vol. 25, no. 5, pp. 359–368, 2019. 10.1089/tmj.2017.0319
[24] R. N. Madeira, P. Macedo, P. Pita, Í. Bonança, and H. Germano, “Building on mobile towards better stuttering awareness to improve speech therapy,” in: Proceedings of International Conference on Advances in Mobile Computing and Multimedia, 2013, pp. 551–554. 10.1145/2536853.2536911
[25] M.-C. Yuen, S. Y. Chu, C. H. Wong, and K. F. Ng, “Development and pilot test for stuttering self-monitoring solution using telehealth,” in: 2021 International Conference on COMmunication Systems and NETworkS (COMSNETS), IEEE, 2021, pp. 650–655. 10.1109/COMSNETS51098.2021.9352924
[26] C. Breazeal, K. Dautenhahn, and T. Kanda, “Social robotics,” in: Springer Handbook of Robotics, Springer, Cham, Switzerland, 2016, pp. 1935–1972. 10.1007/978-3-319-32552-1_72
[27] A. Lotfi, C. Langensiepen, and S. W. Yahaya, “Socially assistive robotics: Robot exercise trainer for older adults,” Technologies, vol. 6, no. 1, p. 32, 2018. 10.3390/technologies6010032
[28] T. Belpaeme and F. Tanaka, “Social robots as educators,” OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots, OECD Publishing, Paris, 2021, pp. 143–147. 10.1787/1c3b1d56-en
[29] B. Scassellati, H. Admoni, and M. Matarić, “Robots for use in autism research,” Annual Rev. Biomed. Eng., vol. 14, pp. 275–294, 2012. 10.1146/annurev-bioeng-071811-150036
[30] S. Sani-Bozkurt and G. Bozkus-Genc, “Social robots for joint attention development in autism spectrum disorder: A systematic review,” Int. J. Disabil. Develop. Edu., pp. 1–19, 2021. 10.1080/1034912X.2021.1905153
[31] C. Bartneck, T. Belpaeme, F. Eyssel, T. Kanda, M. Keijsers, and S. Šabanović, Human-Robot Interaction: An Introduction, Cambridge, United Kingdom: Cambridge University Press, 2020. 10.1017/9781108676649
[32] H.-L. Cao, P. G. Esteban, A. De Beir, R. Simut, G. v. d. Perre, D. Lefeber, et al., “A survey on behavior control architectures for social robots in healthcare interventions,” Int. J. Humanoid Robot., vol. 14, no. 04, p. 1750021, 2017. 10.1142/S0219843617500219
[33] J. Kennedy, P. Baxter, and T. Belpaeme, “The robot who tried too hard: Social behavior of a robot tutor can negatively affect child learning,” in: 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, 2015, pp. 67–74. 10.1145/2696454.2696457
[34] T. Belpaeme, J. Kennedy, A. Ramachandran, B. Scassellati, and F. Tanaka, “Social robots for education: A review,” Sci. Robot., vol. 3, no. 21, 2018. 10.1126/scirobotics.aat5954
[35] H. Köse, P. Uluer, N. Akalın, R. Yorgancı, A. Özkul, and G. Ince, “The effect of embodiment in sign language tutoring with assistive humanoid robots,” Int. J. Soc. Robot., vol. 7, no. 4, pp. 537–548, 2015. 10.1007/s12369-015-0311-1
[36] J. Li, “The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents,” Int. J. Human-Comput. Stud., vol. 77, pp. 23–37, 2015. 10.1016/j.ijhcs.2015.01.001
[37] J. K. Westlund, L. Dickens, S. Jeong, P. Harris, D. DeSteno, and C. Breazeal, “A comparison of children learning new words from robots, tablets, & people,” in: Proceedings of the 1st International Conference on Social Robots in Therapy and Education, 2015.
[38] Z. Zhexenova, A. Amirova, M. Abdikarimova, K. Kudaibergenov, N. Baimakhan, B. Tleubayev, et al., “A comparison of social robot to tablet and teacher in a new script learning context,” Front. Robot. AI, vol. 7, p. 99, 2020. 10.3389/frobt.2020.00099
[39] A. Deublein and B. Lugrin, “(Expressive) social robot or tablet? – on the benefits of embodiment and non-verbal expressivity of the interface for a smart environment,” in: International Conference on Persuasive Technology, Springer, Cham, Switzerland, 2020, pp. 85–97. 10.1007/978-3-030-45712-9_7
[40] C. D. Kidd and C. Breazeal, “Effect of a robot on user perceptions,” in: 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), IEEE, vol. 4, 2004, pp. 3559–3564. 10.1109/IROS.2004.1389967
[41] E. Ferrari, B. Robins, and K. Dautenhahn, “Therapeutic and educational objectives in robot assisted play for children with autism,” in: RO-MAN 2009 – The 18th IEEE International Symposium on Robot and Human Interactive Communication, IEEE, 2009, pp. 108–114. 10.1109/ROMAN.2009.5326251
[42] J. G. G. daSilva, D. J. Kavanagh, T. Belpaeme, L. Taylor, K. Beeson, and J. Andrade, “Experiences of a motivational interview delivered by a robot: qualitative study,” J. Med. Internet Res., vol. 20, no. 5, p. e116, 2018. 10.2196/jmir.7737
[43] J. K. Lee and C. Breazeal, “Human social response toward humanoid robot’s head and facial features,” in: CHI’10 Extended Abstracts on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, USA, 2010, pp. 4237–4242. 10.1145/1753846.1754132
[44] C. L. Bethel, J. E. Cossitt, Z. Henkel, and K. Baugus, “Qualitative interview techniques for human–robot interactions,” in: Human-Robot Interaction, Springer, Cham, Switzerland, 2020, pp. 145–174. 10.1007/978-3-030-42307-0_6
[45] L. J. Wood, K. Dautenhahn, A. Rainer, B. Robins, H. Lehmann, and D. S. Syrdal, “Robot-mediated interviews: A field trial with a potential real-world user,” Interact. Stud., vol. 21, no. 2, pp. 243–267, 2020. 10.1075/is.18031.woo
[46] D. S. Syrdal, K. Dautenhahn, B. Robins, E. Karakosta, and N. C. Jones, “Kaspar in the wild: Experiences from deploying a small humanoid robot in a nursery school for children with autism,” Paladyn, J. Behav. Robot., vol. 11, no. 1, pp. 301–326, 2020. 10.1515/pjbr-2020-0019
[47] M. Kyrarini, F. Lygerakis, A. Rajavenkatanarayanan, C. Sevastopoulos, H. R. Nambiappan, K. K. Chaitanya, et al., “A survey of robots in healthcare,” Technologies, vol. 9, no. 1, p. 8, 2021. 10.3390/technologies9010008
[48] C. S. González-González, V. Violant-Holz, and R. M. Gil-Iranzo, “Social robots in hospitals: a systematic review,” Applied Sci., vol. 11, no. 13, p. 5976, 2021. 10.3390/app11135976
[49] F. Werner, “A survey on current practices in user evaluation of companion robots,” in: Human-Robot Interaction, Springer, Cham, Switzerland, 2020, pp. 65–88. 10.1007/978-3-030-42307-0_3
[50] M. Alemi, A. Meghdari, A. Ghanbarzadeh, L. J. Moghadam, and A. Ghanbarzadeh, “Impact of a social humanoid robot as a therapy assistant in children cancer treatment,” in: International Conference on Social Robotics, Springer, 2014, pp. 11–22. 10.1007/978-3-319-11973-1_2
[51] L. Pu, W. Moyle, C. Jones, and M. Todorovic, “The effectiveness of social robots for older adults: a systematic review and meta-analysis of randomized controlled studies,” Gerontologist, vol. 59, no. 1, pp. e37–e51, 2019. 10.1093/geront/gny046
[52] J. Fox and A. Gambino, “Relationship development with humanoid social robots: Applying interpersonal theories to human–robot interaction,” Cyberpsychol. Behav. Soc. Networking, vol. 24, no. 5, pp. 294–299, 2021. 10.1089/cyber.2020.0181
[53] T. Williams, D. Thames, J. Novakoff, and M. Scheutz, ““Thank you for sharing that interesting fact!” effects of capability and context on indirect speech act use in task-based human–robot dialogue,” in: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 298–306. 10.1145/3171221.3171246
[54] J. Kennedy, S. Lemaignan, C. Montassier, P. Lavalade, B. Irfan, F. Papadopoulos, et al., “Child speech recognition in human–robot interaction: evaluations and recommendations,” in: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 2017, pp. 82–90. 10.1145/2909824.3020229
[55] V. Wang and T. F. Osborne, “Social robots and other relational agents to improve patient care,” Using Technology to Improve Care of Older Adults, vol. 166, 2017, pp. 227–245. 10.1891/9780826142436.0011
[56] C. Vandevelde, F. Wyffels, B. Vanderborght, and J. Saldien, “Do-it-yourself design for social robots: An open-source hardware platform to encourage innovation,” IEEE Robot. Automat. Magazine, vol. 24, no. 1, pp. 86–94, 2017. 10.1109/MRA.2016.2639059
[57] H. Mahdi, S. Saleh, E. Sanoubari, and K. Dautenhahn, “User-centered social robot design: Involving children with special needs in an online world,” in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), IEEE, 2021, pp. 844–851. 10.1109/RO-MAN50785.2021.9515417
[58] G. Veruggio, F. Operto, and G. Bekey, “Roboethics: Social and ethical implications,” in: Springer Handbook of Robotics, Springer, Cham, Switzerland, 2016, pp. 2135–2160. 10.1007/978-3-319-32552-1_80
[59] J. Casas, N. Cespedes, M. Munera, and C. A. Cifuentes, “Human-robot interaction for rehabilitation scenarios,” in: Control Systems Design of Bio-Robotics and Bio-mechatronics with Advanced Applications, Elsevier, London, UK, 2020, pp. 1–31. 10.1016/B978-0-12-817463-0.00001-0
[60] S. Jeong, D. E. Logan, M. S. Goodwin, S. Graca, B. O’Connell, H. Goodenough, et al., “A social robot to mitigate stress, anxiety, and pain in hospital pediatric care,” in: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, 2015, pp. 103–104. 10.1145/2701973.2702028
[61] D. E. Logan, C. Breazeal, M. S. Goodwin, S. Jeong, B. O’Connell, D. Smith-Freedman, et al., “Social robots for hospitalized children,” Pediatrics, vol. 144, no. 1, p. e20181511, 2019. 10.1542/peds.2018-1511
[62] T. N. Beran, A. Ramirez-Serrano, O. G. Vanderkooi, and S. Kuhn, “Reducing children’s pain and distress towards flu vaccinations: A novel and effective application of humanoid robotics,” Vaccine, vol. 31, no. 25, pp. 2772–2777, 2013. 10.1016/j.vaccine.2013.03.056
[63] C. A. Cifuentes, M. J. Pinto, N. Céspedes, and M. Munera, “Social robots in therapy and care,” Current Robotics Reports, pp. 1–16, 2020. 10.1007/s43154-020-00009-2
[64] O. A. B. Henkemans, B. P. Bierman, J. Janssen, R. Looije, M. A. Neerincx, M. M. van Dooren, et al., “Design and evaluation of a personal robot playing a self-management education game with children with diabetes type 1,” Int. J. Human-Comput. Stud., vol. 106, pp. 63–76, 2017. 10.1016/j.ijhcs.2017.06.001
[65] R. Looije, M. A. Neerincx, J. K. Peters, and O. A. B. Henkemans, “Integrating robot support functions into varied activities at returning hospital visits,” Int. J. Soc. Robot., vol. 8, no. 4, pp. 483–497, 2016. 10.1007/s12369-016-0365-8
[66] S. Rasouli, G. Gupta, E. Nilsen, and K. Dautenhahn, “Potential applications of social robots in robot-assisted interventions for social anxiety,” Int. J. Soc. Robot., pp. 1–32, 2022. 10.1007/s12369-021-00851-0
[67] D. Silvera-Tawil, D. Bradford, and C. Roberts-Yates, “Talk to me: The role of human–robot interaction in improving verbal communication skills in students with autism or intellectual disability,” in: 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2018, pp. 1–6. 10.1109/ROMAN.2018.8525698
[68] K. Baraka, F. S. Melo, and M. Veloso, “Interactive robots with model-based “autism-like” behaviors: Assessing validity and potential benefits,” Paladyn, J. Behav. Robot., vol. 10, no. 1, pp. 103–116, 2019. 10.1515/pjbr-2019-0011
[69] K. Baraka, F. S. Melo, M. Couto, and M. Veloso, “Optimal action sequence generation for assistive agents in fixed horizon tasks,” Autonom. Agents Multi-Agent Syst., vol. 34, no. 2, pp. 1–36, 2020. 10.1007/s10458-020-09458-7
[70] P. Chevalier, K. Kompatsiari, F. Ciardo, and A. Wykowska, “Examining joint attention with the use of humanoid robots – a new approach to study fundamental mechanisms of social cognition,” Psychonom. Bulletin Rev., vol. 27, no. 2, pp. 1–20, 2019. 10.3758/s13423-019-01689-4
[71] J. Wainer, B. Robins, F. Amirabdollahian, and K. Dautenhahn, “Using the humanoid robot Kaspar to autonomously play triadic games and facilitate collaborative play among children with autism,” IEEE Trans. Autonom. Mental Development, vol. 6, no. 3, pp. 183–199, 2014. 10.1109/TAMD.2014.2303116
[72] G. Lakatos, L. J. Wood, D. S. Syrdal, B. Robins, A. Zaraki, and K. Dautenhahn, “Robot-mediated intervention can assist children with autism to develop visual perspective taking skills,” Paladyn, J. Behav. Robot., vol. 12, no. 1, pp. 87–101, 2020. 10.1515/pjbr-2021-0007
[73] O. Rudovic, J. Lee, M. Dai, B. Schuller, and R. W. Picard, “Personalized machine learning for robot perception of affect and engagement in autism therapy,” Sci. Robot., vol. 3, no. 19, 2018. 10.1126/scirobotics.aao6760
[74] American Psychiatric Association, Diagnostic and statistical manual of mental disorders (DSM-5®), American Psychiatric Publishing, 2013. 10.1176/appi.books.9780890425596
[75] B. Scassellati, L. Boccanfuso, C.-M. Huang, M. Mademtzi, M. Qin, N. Salomons, et al., “Improving social skills in children with ASD using a long-term, in-home social robot,” Sci. Robot., vol. 3, no. 21, p. eaat7544, 2018. 10.1126/scirobotics.aat7544
[76] E. Martinez-Martin and A. P. del Pobil, “Personal robot assistants for elderly care: An overview,” Personal Assistants: Emerging Computational Technologies, pp. 77–91, 2018. 10.1007/978-3-319-62530-0_5
[77] K. Wada, T. Shibata, T. Saito, and K. Tanie, “Psychological and social effects of robot assisted activity to elderly people who stay at a health service facility for the aged,” in: 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), vol. 3, IEEE, 2003, pp. 3996–4001. 10.1109/ROBOT.2003.1242211
[78] S. Šabanović, C. C. Bennett, W.-L. Chang, and L. Huber, “Paro robot affects diverse interaction modalities in group sensory therapy for older adults with dementia,” in: 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR), IEEE, 2013, pp. 1–6. 10.1109/ICORR.2013.6650427
[79] K. Wada and T. Shibata, “Social and physiological influences of living with seal robots in an elderly care house for two months,” Gerontechnol., vol. 7, no. 2, p. 235, 2008. 10.4017/gt.2008.07.02.172.00
[80] M. E. Pollack, L. Brown, D. Colbry, C. Orosz, B. Peintner, S. Ramakrishnan, et al., “Pearl: A mobile robotic assistant for the elderly,” AAAI Workshop on Automation as Eldercare, vol. 2002, 2002.
[81] B. Graf, M. Hans, and R. D. Schraft, “Care-O-bot II – development of a next generation robotic home assistant,” Autonom. Robots, vol. 16, no. 2, pp. 193–205, 2004. 10.1023/B:AURO.0000016865.35796.e9
[82] B. Graf, “Reactive navigation of an intelligent robotic walking aid,” in: Proceedings 10th IEEE International Workshop on Robot and Human Interactive Communication, ROMAN 2001 (Cat. No. 01TH8591), IEEE, 2001, pp. 353–358. 10.1109/ROMAN.2001.981929
[83] S. Bahadori, A. Cesta, G. Grisetti, L. Iocchi, R. Leone, D. Nardi, et al., “Robocare: Pervasive intelligence for the domestic care of the elderly,” Intell. Artif., vol. 1, no. 1, pp. 16–21, 2004.
[84] B. Robins, K. Dautenhahn, E. Ferrari, G. Kronreif, B. Prazak-Aram, P. Marti, et al., “Scenarios of robot-assisted play for children with cognitive and physical disabilities,” Interaction Studies, vol. 13, no. 2, pp. 189–234, 2012. 10.1075/is.13.2.03rob
[85] N. Charron, E. D. Kim Lindley-Soucy, L. Lewis, and M. Craig, “Robot therapy,” New Hampshire J. Edu., vol. 21, no. Fall, p. 10983, 2019.
[86] K. H. Jeon, S. J. Yeon, Y. T. Kim, S. Song, and J. Kim, “Robot-based augmentative and alternative communication for nonverbal children with communication disorders,” in: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2014, pp. 853–859. 10.1145/2632048.2636078
[87] V. Robles-Bykbaev, M. Ochoa-Guaraca, M. Carpio-Moreta, D. Pulla-Sánchez, L. Serpa-Andrade, M. López-Nores, et al., “Robotic assistant for support in speech therapy for children with cerebral palsy,” in: 2016 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), IEEE, 2016, pp. 1–6. 10.1109/ROPEC.2016.7830603
[88] J. Pereira, M. de Melo, N. Franco, F. Rodrigues, A. Coelho, and R. Fidalgo, “Using assistive robotics for aphasia rehabilitation,” in: 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE), IEEE, 2019, pp. 387–392. 10.1109/LARS-SBR-WRE48964.2019.00074
[89] P. Ramamurthy and T. Li, “Buddy: A speech therapy robot companion for children with cleft lip and palate (CL/P) disorder,” in: Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 359–360. 10.1145/3173386.3177830
[90] J. C. Castillo, D. Alvarez-Fernandez, F. Alonso-Martin, S. Marques-Villarroya, and M. A. Salichs, “Social robotics in therapy of apraxia of speech,” J. Healthcare Eng., vol. 2018, p. 11, 2018. 10.1155/2018/7075290
[91] Ł. Kwaśniewicz, W. Kuniszyk-Jóźkowiak, G. M. Wójcik, and J. Masiak, “Adaptation of the humanoid robot to speech disfluency therapy,” Bio-Algorithms and Med-Syst., vol. 12, no. 4, pp. 169–177, 2016. 10.1515/bams-2016-0018
[92] B. Robins and K. Dautenhahn, “Tactile interactions with a humanoid robot: Novel play scenario implementations with children with autism,” Int. J. Soc. Robot., vol. 6, no. 3, pp. 397–415, 2014. 10.1007/s12369-014-0228-0
[93] K. R. Ginsburg, and the Committee on Communications and the Committee on Psychosocial Aspects of Child and Family Health, “The importance of play in promoting healthy child development and maintaining strong parent-child bonds,” Pediatrics, vol. 119, no. 1, pp. 182–191, 2007. 10.1542/peds.2006-2697
[94] C. E. Schaefer, Foundations of Play Therapy, John Wiley & Sons, Hoboken, New Jersey, 2011.
[95] H. L. Stulmaker and D. C. Ray, “Child-centered play therapy with young children who are anxious: A controlled trial,” Children Youth Services Rev., vol. 57, pp. 127–133, 2015. 10.1016/j.childyouth.2015.08.005
[96] M. B. Parten, “Social participation among pre-school children,” J. Abnormal Soc. Psychol., vol. 27, no. 3, p. 243, 1932. 10.1037/h0074524
[97] J. Piaget, Play, Dreams and Imitation in Childhood, vol. 25, Routledge, Abingdon, Oxon, 2013. 10.4324/9781315009698
[98] D. Boud, “Making the move to peer learning,” Peer Learning in Higher Education: Learning from and with Each Other, vol. 1, pp. 1–21, 2001.
[99] M. Keppell, E. Au, A. Ma, and C. Chan, “Peer learning and learning-oriented assessment in technology-enhanced environments,” Assessment Evaluat. Higher Edu., vol. 31, no. 4, pp. 453–464, 2006. 10.1080/02602930600679159
[100] S. Chandra, P. Alves-Oliveira, S. Lemaignan, P. Sequeira, A. Paiva, and P. Dillenbourg, “Children’s peer assessment and self-disclosure in the presence of an educational robot,” in: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2016, pp. 539–544. 10.1109/ROMAN.2016.7745170
[101] S. Chandra, P. Dillenbourg, and A. Paiva, “Children teach handwriting to a social robot with different learning competencies,” Int. J. Soc. Robot., vol. 12, pp. 1–28, 2019. 10.1007/s12369-019-00589-w
[102] S. Stollhans, “Learning by teaching: Developing transferable skills,” in: Employability for Languages: A Handbook, Research-publishing.net, Dublin, Ireland, 2016, pp. 161–164. 10.14705/rpnet.2016.cbg2016.478
[103] G. Biswas, K. Leelawong, D. Schwartz, N. Vye, and the Teachable Agents Group at Vanderbilt, “Learning by teaching: A new agent paradigm for educational software,” Appl. Artif. Intell., vol. 19, no. 3–4, pp. 363–392, 2005. 10.1080/08839510590910200
[104] L. Fiorella and R. E. Mayer, “The relative benefits of learning by teaching and teaching expectancy,” Contemp. Educat. Psychol., vol. 38, no. 4, pp. 281–288, 2013. 10.1016/j.cedpsych.2013.06.001
[105] D. Hood, S. Lemaignan, and P. Dillenbourg, “When children teach a robot to write: An autonomous teachable humanoid which uses simulated handwriting,” in: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2015, pp. 83–90. 10.1145/2696454.2696479
[106] M. P. Boyle, “Assessment of stigma associated with stuttering: Development and evaluation of the self-stigma of stuttering scale (4S),” J. Speech Language Hearing Res., vol. 56, no. 5, 2013. 10.1044/1092-4388(2013/12-0280)
[107] I. Funck-Brentano, C. Dalban, F. Veber, P. Quartier, S. Hefez, D. Costagliola, et al., “Evaluation of a peer support group therapy for HIV-infected adolescents,” AIDS, vol. 19, no. 14, pp. 1501–1508, 2005. 10.1097/01.aids.0000183124.86335.0a
[108] C. Birmingham, Z. Hu, K. Mahajan, E. Reber, and M. J. Mataric, “Can I trust you? A user study of robot mediation of a support group,” arXiv:2002.04671, 2020. 10.1109/ICRA40945.2020.9196875
[109] N. Clayton, Book Review: The Alex Studies: Cognitive and Communicative Abilities of Grey Parrots, Harvard University Press, London, England, 2001. 10.1080/713932751
[110] I. M. Pepperberg and D. Sherman, “Proposed use of two-part interactive modelling as a means to increase functional skills in children with a variety of disabilities,” Teach. Learn. Med., vol. 12, no. 4, pp. 213–220, 2000. 10.1207/S15328015TLM1204_10
[111] I. Fishman, “What Alex the parrot can teach us about working with children with complex communication needs,” Perspect. Augment. Alternat. Commun., vol. 17, no. 4, pp. 144–149, 2008. 10.1044/aac17.4.144
[112] M. Blomgren, “Stuttering treatment for adults: an update on contemporary approaches,” in: Seminars in Speech and Language, Thieme Medical Publishers, New York, vol. 31, no. 04, 2010, pp. 272–282. 10.1055/s-0030-1265760
[113] A. Alpermann, W. Huber, U. Natke, and K. Willmes, “Construct validity of modified time-interval analysis in measuring stuttering and trained speaking patterns,” J. Fluency Disorders, vol. 37, no. 1, pp. 42–53, 2012. 10.1016/j.jfludis.2011.11.006
[114] C. Nye, M. Vanryckeghem, J. B. Schwartz, C. Herder, H. M. Turner III, and C. Howard, “Behavioral stuttering interventions for children and adolescents: A systematic review and meta-analysis,” J. Speech Lang. Hear. Res., vol. 56, no. 3, p. 921, 2013. 10.1044/1092-4388(2012/12-0036)
[115] M. Blomgren, “Behavioral treatments for children and adults who stutter: a review,” Psychol. Res. Behav. Manag., vol. 6, p. 9, 2013. 10.2147/PRBM.S31450
[116] D. Matthews and M. Blomgren, “Modifying phonation intervals (MPI) stuttering therapy compared to standard prolonged speech treatment,” Evidence-Based Commun. Assessment Intervent., vol. 10, no. 1, pp. 25–31, 2016. 10.1080/17489539.2016.1165336
[117] K. A. Freeman and P. C. Friman, “Using simplified regulated breathing with an adolescent stutterer: Application of effective intervention in a residential context,” Behav. Modificat., vol. 28, no. 2, pp. 247–260, 2004. 10.1177/0145445503259267
[118] D. W. Woods, M. P. Twohig, R. W. Fuqua, and J. M. Hanley, “Treatment of stuttering with regulated breathing: Strengths, limitations, and future directions,” Behav. Therapy, vol. 31, no. 3, pp. 547–568, 2000. 10.1016/S0005-7894(00)80030-1
[119] C. A. Conelea, K. A. Rice, and D. W. Woods, “Regulated breathing as a treatment for stuttering: A review of the empirical evidence,” J. Speech Lang. Pathol.-Appl. Behav. Anal., vol. 1, no. 2, p. 94, 2006. 10.1037/h0100191
[120] C. Andrews, S. O’Brian, M. Onslow, A. Packman, R. Menzies, and R. Lowe, “Phase II trial of a syllable-timed speech treatment for school-age children who stutter,” J. Fluency Disorders, vol. 48, pp. 44–55, 2016. 10.1016/j.jfludis.2016.06.001
[121] T. Law, A. Packman, M. Onslow, C. K.-S. To, M. C.-F. Tong, and K. Y.-S. Lee, “Rhythmic speech and stuttering reduction in a syllable-timed language,” Clin. Linguistic. Phonetic., vol. 32, no. 10, pp. 932–949, 2018. 10.1080/02699206.2018.1480655
[122] M. Blomgren, “Review of the successful stuttering management program,” in: The Science and Practice of Stuttering Treatment: A Symposium, Wiley Online Library, 2012. 10.1002/9781118702796.ch8
[123] M. Rami, J. Kalinowski, A. Stuart, and M. Rastatter, “Self-perceptions of speech language pathologists-in-training before and after pseudostuttering experiences on the telephone,” Disability Rehabil., vol. 25, no. 9, pp. 491–496, 2003. 10.1080/0963828031000090425
[124] S. Hughes, “Ethical and clinical implications of pseudostuttering,” Perspect. Fluency Fluency Disorders, vol. 20, no. 3, pp. 84–96, 2010. 10.1044/ffd20.3.84
[125] C. T. Byrd, Z. Gkalitsiou, J. Donaher, and E. Stergiou, “The client’s perspective on voluntary stuttering,” Am. J. Speech-Lang. Pathol., vol. 25, no. 3, pp. 290–305, 2016. 10.1044/2016_AJSLP-15-0018
[126] S. Nittrouer and C. Cheney, “Operant techniques used in stuttering therapy: A review,” J. Fluency Disorders, vol. 9, no. 3, pp. 169–190, 1984. 10.1016/0094-730X(84)90011-1
[127] M. Donaghy, E. Harrison, S. O’Brian, R. Menzies, M. Onslow, A. Packman, et al., “An investigation of the role of parental request for self-correction of stuttering in the Lidcombe Program,” Int. J. Speech-Lang. Pathol., vol. 17, no. 5, pp. 511–517, 2015. 10.3109/17549507.2015.1016110
[128] M. C. Swift, M. Jones, S. O’Brian, M. Onslow, A. Packman, and R. Menzies, “Parent verbal contingencies during the Lidcombe Program: Observations and statistical modelling of the treatment process,” J. Fluency Disorders, vol. 47, pp. 13–26, 2016. 10.1016/j.jfludis.2015.12.002
[129] M. Onslow, A. Packman, and R. E. Harrison, The Lidcombe Program of Early Stuttering Intervention: A Clinician’s Guide, Pro-ed, Austin, TX, 2003.
[130] S. Hewat, M. Onslow, A. Packman, and S. O’Brian, “A phase II clinical trial of self-imposed time-out treatment for stuttering in adults and adolescents,” Disability Rehabil., vol. 28, no. 1, pp. 33–42, 2006. 10.1080/09638280500165245
[131] A. K. Pandey and R. Gelin, “A mass-produced sociable humanoid robot: Pepper: The first machine of its kind,” IEEE Robot. Autom. Magazine, vol. 25, no. 3, pp. 40–48, 2018. 10.1109/MRA.2018.2833157
[132] J. K. Deodhar and S. S. Goswami, “Structure, process, and impact of a staff support group in an oncology setting in a developing country,” Indust. Psychiat. J., vol. 26, no. 2, p. 194, 2017. 10.4103/ipj.ipj_59_16
[133] C. Floriana, C. Luca, and G. Simona, “Effectiveness of a short-term psychotherapeutic group for doctors and nurses in a hospice in Southern Europe,” Progress Palliative Care, vol. 27, no. 2, pp. 58–63, 2019. 10.1080/09699260.2019.1612136
[134] U. Peterson, G. Bergström, M. Samuelsson, M. Åsberg, and Å. Nygren, “Reflecting peer-support groups in the prevention of stress and burnout: Randomized controlled trial,” J. Adv. Nursing, vol. 63, no. 5, pp. 506–516, 2008. 10.1111/j.1365-2648.2008.04743.x
[135] L. D. Riek, “Wizard of Oz studies in HRI: A systematic review and new reporting guidelines,” J. Human-Robot Interact., vol. 1, no. 1, pp. 119–136, 2012. 10.5898/JHRI.1.1.Riek
[136] A. Coco, I. Woodward, K. Shaw, A. Cody, G. Lupton, and A. Peake, “Bingo for beginners: A game strategy for facilitating active learning,” Teaching Sociology, vol. 29, no. 4, pp. 492–503, 1998. 10.2307/1318950
[137] I. M. Pepperberg, The Alex Studies: Cognitive and Communicative Abilities of Grey Parrots, Harvard University Press, London, England, 2009. 10.2307/j.ctvk12qc1
[138] A. Metrick, “A natural experiment in ‘Jeopardy!’,” Am. Econom. Rev., vol. 85, no. 1, pp. 240–253, 1995.
[139] U. Kamath, J. Liu, and J. Whitaker, Deep Learning for NLP and Speech Recognition, vol. 84, Springer, Cham, Switzerland, 2019. 10.1007/978-3-030-14596-5
[140] Z. Zhang, J. Geiger, J. Pohjalainen, A. E.-D. Mousa, W. Jin, and B. Schuller, “Deep learning for environmentally robust speech recognition: An overview of recent developments,” ACM Trans. Intell. Syst. Technol. (TIST), vol. 9, no. 5, pp. 1–28, 2018. 10.1145/3178115
[141] N. Mishra, A. Gupta, and D. Vathana, “Optimization of stammering in speech recognition applications,” Int. J. Speech Technol., vol. 24, no. 3, pp. 1–7, 2021. 10.1007/s10772-021-09828-w
[142] M. Jouaiti and K. Dautenhahn, “Dysfluency classification in stuttered speech using deep learning for real-time applications,” in: IEEE ICASSP 2022, Hybrid Event, Singapore (in press), 2022. 10.1109/ICASSP43922.2022.9746638
© 2022 Shruti Chandra et al., published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.