Article Open Access

Investigating the relationship between the speed of automatization and linguistic abilities: data collection during the COVID-19 pandemic

  • Ashley Blake and Ewa Dąbrowska
Published/Copyright: January 5, 2024

Abstract

Our research explores the relationship between cognition and language. The focus of this paper is to discuss how we embarked upon remote data collection with children during the COVID-19 pandemic. In this study we investigate the cognitive processes of non-verbal intelligence, working memory, implicit statistical learning, and speed of automatization (measured with the multiple-trial Tower of Hanoi puzzle). Here we focus primarily on the speed of automatization, partly because of its theoretical interest and partly because it is more difficult to adapt to an online format due to the motor component of the task. We established a hybrid method of data collection in which the researcher was present online to guide children through a battery of language and cognitive tasks. We used a videoconferencing platform, a digital visualizer, and a physical puzzle which we posted to each child prior to commencing the research sessions. We also designed an online version of the puzzle with support from the Getting Data project. We discuss the methodology of our study and the lessons learned during remote data collection.

1 Introduction

The global COVID-19 pandemic presented many challenges to psycholinguistic research, especially experimental studies dependent upon face-to-face methods of data collection such as research involving children or special populations. The focus of this paper is to discuss how we adapted specific tasks to an online format to continue with data collection with children during the pandemic. We also compare two different approaches utilized to measure the speed of automatization in children and adults (using the Tower of Hanoi [ToH] puzzle).

Experimental research involving complex research design has typically utilized moderated, in-person methods of data collection. Usually, this involves participants visiting a laboratory or the experimenter visiting participants in schools or public settings. There are many advantages to this approach. In-person laboratory-based testing allows the experimenter to control the environment for stringent test designs or where specialized equipment is required. The experimenter can monitor participants’ attention on experimental tasks, which is particularly beneficial with children, where engagement and attention can decline quickly and affect the quality of data collection. Furthermore, humans are social beings, and for many people, physical interaction is more natural and engaging. Face-to-face communication makes it easier for researchers to read non-verbal cues and reassure participants who may be reluctant or nervous to participate in scientific research.

Online methods of data collection have grown in popularity in recent years, increasingly so because of the COVID-19 pandemic. Before the pandemic, our laboratory utilized a combination of moderated and unmoderated methods, but worldwide lockdowns meant that we had to tailor many of our tasks previously used for in-person data collection to an online format. However, studies involving children or special populations create additional issues that require careful consideration. For example, we asked ourselves: would online versions of a task measure the same ability as the traditional versions? Would we be able to maintain children’s attention? Would it be possible to collect data with special populations online? Certain tasks are easier to adapt to an online format, for example, vocabulary and grammatical tasks, although consideration needs to be given to how to present the stimuli to the child (especially in the case of standardized tests) and how the child responds online. Furthermore, tasks that involve physical equipment (in our case, the ToH puzzle) require more in-depth programming to operate online.

In our research, we used a combination of standardized language measures and cognitive tasks to explore non-verbal intelligence, working memory, implicit statistical learning, and the speed of automatization, measured with the ToH puzzle. For our study with children, we used a physical puzzle as we specifically planned for them to learn using a tangible object. We also designed an online version of the ToH for use with adults. The differences between these two methods will be discussed.

1.1 Background

This section provides a brief background regarding our research and why we are interested in the speed of automatization. As we do not present results in this paper, this is purely to provide general context to the present study, which investigates linguistic ability in language-typical children and children with developmental language disorder (DLD), defined as a persistent difficulty learning and using language in the absence of impaired hearing, low general intelligence, or neurological impairment (Bishop et al. 2017; Leonard 2014). In our research, we investigate the relationship between language and cognition and how the speed of automatization predicts linguistic ability.

Automatization refers to a skill or action which is performed without conscious thought, for example, being able to touch-type, play tennis, or drive a car. Learning a new skill is initially difficult and cognitively demanding, but through practice, one develops competence or even mastery. In our research, we explore the notion of language as a complex cognitive skill. This is not a new approach, and we join several academics who have explored this concept (Chater and Christiansen 2018; Chater et al. 2016; Johnson 1996; Kamhi 2019). Humans produce and comprehend linguistic utterances very quickly and largely automatically. This kind of performance occurs with little conscious effort and would not be possible if the underlying cognitive processes were not highly automatized. In our research, we apply the processes involved in language acquisition to distinct stages involved in skill learning, namely the cognitive, associative, and autonomous phases (Anderson 1982, 1993; Anderson et al. 2004; Fitts 1964; Fitts and Posner 1967; Taatgen et al. 2008). More in-depth description of the underlying theory regarding our approach can be found in Dąbrowska and Blake (2022).

1.2 Exploring the speed of automatization using the Tower of Hanoi puzzle

We use the ToH puzzle to explore the ability to acquire a complex cognitive skill and to assess individual differences in the speed of automatization. The ToH is a wooden puzzle comprising three equidistant vertical rods mounted on a wooden base and discs which range in size. For this study, we used a four-disc version of the puzzle (see Figure 1). To solve the puzzle, the participant is asked to move the discs from the leftmost rod to the rightmost rod while observing two rules: (i) only one disc can be moved at a time, and (ii) a larger disc cannot be placed on top of a smaller disc. The optimal solution requires 15 moves.

Figure 1: The Tower of Hanoi puzzle.

This puzzle has been used in psychological and neuropsychological research as a measure of executive functioning (Beaunieux et al. 2006; Lezak et al. 2004) and has also been used to investigate cognitive skill learning. In the latter application, participants are asked to solve the puzzle repeatedly and the researcher measures the improvement in performance, either in the number of moves or in the time taken to reach the solution (Beaunieux et al. 2006; Hubert et al. 2007).

To solve the puzzle, the participant is required to plan and hold sub-goals in memory (e.g., to solve the puzzle, they must first move the largest disc to the rightmost rod; to achieve this goal, they must first remove the three discs which are on top of it; and so on). The solution to the puzzle has a recursive structure and parallels can be drawn with language. For example, a sentence is not simply a linear string of words: sequences of words combine to form higher-order units (phrases) which in turn combine with other units to form even larger phrases or clauses. In terms of relating this to the cognitive processes involved in solving the ToH, individual moves can be equated to words, and sub-goals to the recursive structure of phrases. Through practice over numerous trials, the procedure for solving the puzzle becomes proceduralized and eventually fully automatic. The same applies to language learning: through practice, units become entrenched for fluent processing.
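The recursive sub-goal structure described above can be sketched as a short program. This is purely an illustrative solver, not part of the study materials; the rod names are arbitrary labels.

```python
def solve_hanoi(n, source="left", target="right", spare="middle"):
    """Return the optimal move sequence for an n-disc Tower of Hanoi.

    Each move is a (disc, from_rod, to_rod) tuple; disc 1 is the smallest.
    The recursion mirrors the sub-goal structure described above: to move
    disc n to the target rod, first clear the n-1 discs above it onto the
    spare rod, then re-stack them on top of disc n.
    """
    if n == 0:
        return []
    return (
        solve_hanoi(n - 1, source, spare, target)    # clear the discs on top
        + [(n, source, target)]                      # move the largest free disc
        + solve_hanoi(n - 1, spare, target, source)  # re-stack the smaller discs
    )

moves = solve_hanoi(4)
print(len(moves))  # 2**4 - 1 = 15 moves for the four-disc puzzle
```

Note that the first move produced by this solver places the smallest disc on the middle rod, which is the same opening move the experimenter demonstrated to children (following Beaunieux et al. 2006).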

We also developed an online version of the puzzle; the differences between these two approaches are discussed in Section 3.

2 Methodology

2.1 How we approached data collection during the pandemic

We originally planned to collect data in individual face-to-face sessions with children. However, we received ethical clearance for our study in March 2020, at the same time as lockdowns were beginning due to the COVID-19 pandemic. After careful consideration, we made the decision to move our research study online. We faced two challenges: (i) how to recruit children for our study, and (ii) how to conduct our experiments online using a format that would be accessible to children and provide the same quality of data as we would expect from in-person data collection.

2.1.1 Recruitment

Recruiting adult participants for online studies is possible through platforms such as Amazon Mechanical Turk (https://www.mturk.com) and Prolific (https://www.prolific.co). However, recruiting children and special populations is more difficult, even in normal circumstances. There are platforms that connect families to available research studies, for example, Lookit (Scott and Schulz 2017) and Children Helping Science (https://childrenhelpingscience.com), but in this study, we found that sharing our study with parents and guardians on social media provided the most success in recruiting participants.

We recruited participants by asking schools to share information with parents and guardians, and we advertised through social media platforms such as Facebook and Twitter (now X), which provided a wider geographical reach. This saved time and travel expenses and was particularly advantageous in recruiting children with DLD. Whilst DLD is a common disorder, affecting approximately 7 percent of the population (Norbury et al. 2017; Tomblin et al. 1997), unfortunately it is not a very well-known condition. Therefore, it can be difficult to find children with DLD who are willing to take part in scientific research, especially as the very tasks being assessed are those that children with DLD find difficult. Facebook offered the opportunity of joining specialist groups such as “Keeping children entertained during lockdown” and DLD parental support groups, both of which generated positive interest.

2.1.2 Adapting tasks to an online format

We used a hybrid approach to the experimental tasks where the researcher communicated with the child using the videoconferencing platform Zoom (https://zoom.us). This interaction was important as it allowed the experimenter to explain each task and assess understanding as they would have done in person. It also enabled the experimenter to monitor progress and encourage and motivate children through each of the online sessions.

Many of the tasks in our study relied on standardized cognitive and language assessments which are usually administered in person. Permission was sought from the publishers of the assessments, and they recommended using a digital visualizer to project tasks to participants, using the screen share option on Zoom. Further information as to how each of the tasks was adapted for online use is provided in Sections 2.3.1 and 2.3.2.

2.2 Participants

Ninety-seven participants aged between 6;9 and 10;4 (years;months), with a mean age of 8.1 years, took part in our study. All children spoke English as a first language. Of this group, 73 children had typically developing language and 24 children had DLD, as reported by parents or guardians.

2.3 Procedure

After giving consent for their child to participate, parents or guardians were asked to complete an online background questionnaire on Qualtrics (https://www.qualtrics.com) which included questions regarding demographics and speech and language difficulties, if applicable.

We posted a ToH puzzle to every family that participated in our study and asked parents and guardians to ensure that their child did not open the parcel until the first session when it was opened in front of the experimenter. The aim of this was to ensure that children did not get the opportunity to practice solving the puzzle prior to the first session, but it also increased suspense to make participation in the study more exciting. The puzzle was beautifully wrapped, which added to the anticipation of starting the study. After receiving the puzzle, parents booked the online sessions.

All tasks were administered over three online sessions. Each session was approximately 45 min and the experimenter guided children through the tasks as explained below. Figure 2 demonstrates the order of the research sessions and the tasks that were completed during each session.

Figure 2: The tasks completed in each of the three online sessions with children.

2.3.1 Cognitive tasks

Multiple-trial Tower of Hanoi (MToH) task: Both the experimenter and the child had a physical puzzle in front of them at the start of the task. The experimenter explained the rules and showed the child examples of legal and illegal moves. Following Beaunieux et al. (2006), the experimenter showed the child the first move by putting the smallest disc on the middle rod of the puzzle. Children were encouraged to ask questions to verify their understanding and were invited to do a practice trial before starting, to learn the rules of the task and to get used to manipulating the discs. Participants were asked to solve the puzzle 30 times, with a secondary task in the last five trials. The secondary task involved tapping in response to a recording consisting of a random sequence of the words one and two, with a four-second pause between the words. Participants were asked to solve the puzzle with their dominant hand and to tap with their non-dominant hand in response to the numbers (two taps in response to the word one and one tap in response to the word two). The aim of the secondary task was to test automatization: if the procedure for solving the puzzle is fully automatized, then doing the secondary task should not interfere with performance on the puzzle. For every trial, we recorded the time taken to complete the task and the number of moves used. As children repeated the task, their time per trial decreased, and so, after several trials, the experimenter would tell them their time. This was motivating for participants, as they enjoyed trying to improve their time, and parents and guardians often shared in this success. Videos 1 and 2 demonstrate the different stages of the MToH task: the cognitive phase (associated video-1-blake & dabrowska.mp4) and the autonomous phase (associated video-2-blake & dabrowska.mp4).
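As an illustration of the dual-task logic, the tapping rule and its scoring can be sketched as follows. The function names and the proportion-correct measure are our own assumptions for illustration; in the study itself, performance was measured as solution time and number of moves per trial.

```python
import random

# The recording is a random sequence of the words "one" and "two"; the
# participant taps twice for "one" and once for "two". The incompatible
# number-to-taps mapping is what makes the secondary task demanding
# unless puzzle solving is already automatized.
EXPECTED_TAPS = {"one": 2, "two": 1}

def make_stimulus_sequence(n_words, seed=0):
    """Generate a random sequence of the two spoken words."""
    rng = random.Random(seed)
    return [rng.choice(["one", "two"]) for _ in range(n_words)]

def score_secondary_task(words, taps_per_word):
    """Proportion of words answered with the correct number of taps."""
    correct = sum(
        EXPECTED_TAPS[w] == taps for w, taps in zip(words, taps_per_word)
    )
    return correct / len(words)

words = make_stimulus_sequence(10)
error_free = [EXPECTED_TAPS[w] for w in words]
print(score_secondary_task(words, error_free))  # 1.0 for error-free tapping
```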

Raven’s Coloured Progressive Matrices (Raven et al. 1938): This is a non-verbal intelligence test for children from age 5 to age 11. It consists of 36 items in three sets of 12 problems to assess the development of intellectual maturity. Each diagrammatical problem has a “piece” missing and the participant is required to select the missing piece from one of six options. The child selects their answer by naming the number corresponding to the missing piece of the puzzle.

Backwards colour span task (Riches 2012; Zoelch et al. 2005): This is a working memory measure involving a visual variation on the backward digit span task. The child watches as various coloured tennis balls are placed in a physical tube (see Figure 3). Their task is to work out the order in which the balls will come out of the tube. Nine colours are used which correspond to digit recall stimuli: black, white, red, blue, green, pink, grey, and brown are equivalent to eight one-syllable stimuli, and one two-syllable stimulus, yellow, corresponds to the number seven. The task comprises five blocks of six stimuli each: two items per stimulus in block 1; three in block 2; four in block 3; five in block 4; and six in block 5. To pass a block, the child must achieve four out of six correct responses, and after three failures in a block, the test is discontinued. Before starting, the experimenter showed the child all the colours and ensured that they were able to name each one before proceeding. This task was adapted for online use by using a digital visualizer and screen sharing to present the physical tube and coloured balls. Real objects were used to try to make the task more enjoyable for the child.

Figure 3: Backwards colour span task.
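The block structure, pass criterion, and discontinue rule of the colour span task lend themselves to a compact scoring sketch. This is a hypothetical implementation of the rules as described above; the actual task was administered and scored by the experimenter.

```python
# Hypothetical scoring sketch for the backwards colour span task: each
# block has six trials, a block is passed with at least four correct
# responses, and testing stops after three failures within a block.
BLOCK_LENGTHS = [2, 3, 4, 5, 6]  # items per stimulus in blocks 1-5

def colour_span(block_results):
    """Return the longest span passed, given per-block trial outcomes.

    `block_results` is a list of lists of booleans (True = correct recall),
    one inner list per block, in order of increasing span length.
    """
    span = 0
    for length, trials in zip(BLOCK_LENGTHS, block_results):
        failures = 0
        for correct in trials:
            if not correct:
                failures += 1
            if failures == 3:      # discontinue rule: three failures in a block
                return span
        if sum(trials) >= 4:       # pass criterion: four out of six correct
            span = length
    return span
```

For example, a child who passes blocks 1 and 2 but makes three errors in block 3 receives a span of 3.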

Embedded triplet task (adapted from Arciuli and Simpson 2011): This task was developed for our study on Gorilla Experiment Builder (https://gorilla.sc; Anwyl-Irvine et al. 2020). Gorilla is a cloud-based platform enabling researchers to create and run online experiments through a graphical user interface, which means that programming knowledge is not required. Furthermore, additional support is available from Gorilla’s scripting consultancy service. Participants access tasks on the Gorilla platform through a link which launches the task without the need to download additional software. The embedded triplet task used in our study is an implicit statistical learning task which involves a familiarization stage and a test stage. During familiarization, participants see cartoon figures of 12 friendly aliens which appear on the screen one by one. The aliens form triplets which are always presented in the same order; the order of the triplets is random. In the test phase, four new “impossible triplets” are created by combining one alien from each of the original triplets. In each trial, the child is presented with one of the original triplets and an impossible triplet, that is, six stimuli in total, presented sequentially with a short pause after the first triplet. The child’s task is to choose which triplet appeared previously during familiarization. There are 64 test trials, and the entire task takes about 15 min. In the original task designed by Arciuli and Simpson (2011), participants responded verbally during the test phase and their responses were recorded manually. In our online task, participants made their choice by clicking on one of two options (A or B), and their responses were scored on the Gorilla platform.
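The stimulus structure of the triplet task can be illustrated with a short sketch. The grouping of 12 aliens into four fixed triplets follows the description above; the particular foil construction and function names are our own assumptions for illustration.

```python
import random

# 12 aliens (here just indices) grouped into four fixed triplets.
ALIENS = list(range(12))
TRIPLETS = [ALIENS[i:i + 3] for i in range(0, 12, 3)]

def familiarization_stream(n_repetitions, seed=0):
    """Aliens appear one by one: triplet-internal order is fixed,
    while the order of the triplets themselves is random."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n_repetitions):
        order = TRIPLETS[:]
        rng.shuffle(order)
        for triplet in order:
            stream.extend(triplet)
    return stream

def impossible_triplets():
    """Four foils, each combining one alien from a different original
    triplet, so no within-foil transition ever occurred during
    familiarization (an assumed construction for illustration)."""
    return [
        [TRIPLETS[k][0], TRIPLETS[(k + 1) % 4][1], TRIPLETS[(k + 2) % 4][2]]
        for k in range(4)
    ]
```

On this construction, each foil preserves the positional slot of its aliens (initial, medial, final) but pairs them in combinations that never co-occurred, which is what makes rejecting the foil a test of implicit statistical learning.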

2.3.2 Language tasks

The experimenter used a digital visualizer to project language tasks to participants, using the screen sharing option on Zoom. The use of a visualizer allowed us to present printed materials to the child such as standardized language tasks which often involve a picture selection format. Where a child would normally point to a picture to indicate their choice, we asked them to say the number of the picture that matched the sentence or word (as applicable). These are minimal variations imposed by the online format. As many of the measures in our study are standardized tasks, we had to ensure that the testing was as similar as possible to face-to-face testing. In each instance, the task was explained to children, and they were able to do a few practice trials before beginning the task. The language assessments used are described below.

Test for Reception of Grammar (TROG-2; Bishop 2003): TROG-2 is a receptive language test which evaluates understanding of grammatical contrasts which are marked by inflections, function words, and word order. The format comprises a picture selection task consisting of 80 four-choice items. Grammatical contrasts are grouped in blocks of four items and a block is passed if all four items are answered correctly. The experimenter reads a sentence aloud to the child and they are asked to select the correct picture in response to the sentence.

British Picture Vocabulary Scale (BPVS3; Dunn et al. 2009): The BPVS3 assesses a child’s receptive vocabulary through a picture selection task format. Each item in the test consists of four illustrations and the child’s task is to select a picture that best illustrates the meaning of a stimulus word spoken by the experimenter. There are 14 sets of 12 test items, which increase in difficulty.

Recalling sentences sub-test of Clinical Evaluation of Language Fundamentals (CELF-5; Wiig et al. 2013): This task evaluates the ability to listen to and repeat spoken sentences which increase in length and complexity. The experimenter reads the sentences aloud and asks the child to repeat the sentences without changing the content. The test is discontinued after five consecutive sentences with four or more errors in each sentence.

Narrative task: The Frog Story (based on Frog, Where Are You?; Mayer 1969): The narrative task involves the experimenter showing the child The Frog Story, a short picture story without words. The original story contains black and white pictures, but we used an adaptation with colour illustrations by Monica Pascual Holder (made available by Esther Pascual). The child is advised that they will see the picture story twice, and on the second occasion, the child is asked to tell the story in their own words. The narrative is recorded for transcription purposes.

Expository discourse (Nippold et al. 2008): In this task, the child explains their favourite game, sport, or hobby. The experimenter uses specific prompts to encourage discussion; for example, the experimenter asks the child to pretend that they are talking to an alien from outer space, who has no knowledge of how things work on planet earth. The child’s response is recorded for transcription purposes.

3 MToH task: differences between physical and virtual modalities

In parallel with the child study, we developed an online MToH task for use with adults (see Figure 4). The development of this task was supported by funding from the Getting Data project (https://gettingdata.humanities.uva.nl) and by Gorilla Experiment Builder’s scripting consultancy. In this section, we discuss the differences in modality between the online MToH task and the physical puzzle.

Figure 4: MToH online task.

In the online MToH task, participants were advised that they would complete the puzzle 25 times and, in doing so, would progress through different levels. When they reached the highest level, they would be asked to do a concurrent task. This differed from the secondary task which we used with children. In the online version of the secondary task, participants were presented with shapes that appeared on the screen whilst they completed the puzzle. They were asked to tap the space bar twice in response to a single shape, and once in response to two shapes, whilst completing the puzzle at the same time. The response window for the secondary task was 1,000 ms. After the instructions, participants did three practice trials with feedback before starting the five trials of the secondary task.

There are distinct advantages to providing children with a tactile ToH puzzle. We know that procedural memory is dependent on psychomotor abilities, and Hubert et al. (2007) showed that the anterior part of the cerebellum is activated during the associative phase of cognitive skill learning. In Beaunieux et al. (2006), psychomotor abilities correlated with procedural performance from the fifteenth trial onwards, during the proceduralization stage. Furthermore, a physical puzzle is more engaging: in our study, children visibly enjoyed the tangible element of this task, and they were delighted to keep the puzzle afterwards.

There are also positives regarding the online version of the MToH task, and we are currently using this format to explore cognitive skill learning in adults with and without DLD. The online version of the puzzle does not require in-person demonstration. Instructions are provided during the task by video, which also demonstrates legal and illegal moves. This format is also practical from a logistical perspective as participants can complete the task at a time that suits them and can participate from anywhere in the world. Additionally, it provides a cost-effective alternative to posting participants a physical puzzle. Finally, scoring is automatic, which saves time for the experimenter.

A potential disadvantage of the virtual format is that it may be less enjoyable and more repetitive than the physical puzzle, and participants do not have the experimenter present to validate performance or encourage completion. The motor component is stronger in the physical version of the puzzle, and this could have implications for how learning happens. As such, we cannot be absolutely certain that the physical and online versions of the task assess exactly the same abilities. Further research is required to explore possible differences between these tasks.

4 Lessons learned

4.1 Challenges

One concern regarding online methods is that the experimenter cannot control the participant’s environment. In our study, there were a few occasions where there were difficulties in maintaining a child’s attention. As our study was moderated, these situations were easily resolved with encouragement from the experimenter and by the parent or guardian helping to refocus their child’s attention. However, in unmoderated studies, participants may be more likely to drop out if they find the tasks boring or too difficult. This can be mitigated by providing extrinsic motivation through rewards and by ensuring that online tasks are enjoyable.

Online data collection is reliant on participants having their own device (desktop computer, laptop, tablet, or mobile phone), and there is a risk of digital poverty excluding some participants from taking part. Another disadvantage is the inability to control for uniformity of equipment across participants, although online platforms such as Gorilla Experiment Builder provide the option to restrict the type of device, so that the experimenter can specify whether a task is completed on a desktop or tablet.

Another aspect of online research that can be problematic is that it is not possible to control the speed and stability of participants’ home internet connections. In our study, this did not affect the scoring of our assessments, but we had to discard linguistic data from two participants because poor connections made it impossible to transcribe their linguistic samples accurately.

Whilst online research provides the potential to reach a wider population, there is still the risk of recruiting WEIRD participants (those from Western, educated, industrialized, rich, and democratic societies; see Henrich et al. 2010). Online research also poses the risk of bias, and these aspects are difficult to control. For example, there can be self-selection bias, with the research only reaching literate people, or those with access to the internet. In our sample, demographic data revealed that participants were middle class and well educated.

4.2 Advantages

One of the advantages of conducting our study online was that people could take part at a time and place to suit them. This is especially beneficial when working with children, as they could participate from home, in a familiar and comfortable environment. We were not governed by the time constraints of the school day, and we were able to meet families online at their chosen time. Online research is also faster and more affordable, as it removes the limitations of travel.

Interaction between the experimenter and participants was vital to ensure that children remained on task and that the data was of a high quality. The experimenter was also able to reassure and encourage children throughout the research process. This was important for some children, who initially lacked confidence to complete tasks online. We recruited almost 100 children to our study, and only one child was unable to commit to online sessions. Aside from this, all children remained in the study and attended all three sessions.

Parental engagement was vital to the success of our study and their support was invaluable for many reasons. Parents and guardians were responsible for making online appointments and ensuring that their child attended the research sessions. They acted as “co-researchers” by ensuring that the home environment was quiet (where possible), and they encouraged their child to respond and remain on task, if required. They were usually present during the sessions, seated alongside their child. This was a positive aspect as they were able to take part in the research process and share and celebrate their child’s achievements. Parents were mindful of their role as observers in the study and there were very few occasions where parents tried to answer a question or intervene on behalf of their child. However, this success could be an outcome of the specific sample, and we do not know if other parents would be equally cooperative.

A key to maintaining participants’ attention in online research is task design: keeping tasks short, simple, and engaging is fundamental, especially for children.

5 Conclusions

The aim of this paper was to provide a methodological overview of how we continued with research during the COVID-19 pandemic. Most importantly, despite the initial lockdowns delaying our study, we were able to redesign our methodology to continue with data collection during this difficult time. The lessons we learned here are also applicable to doing research in normal times, especially when that research involves reaching people in remote locations or specialist clinical groups. These days, children are digital natives and are increasingly accustomed to online formats for socializing with friends, playing games, and learning online. Some adults also enjoy socializing and playing games online. With the potential to create fun and engaging tasks to test psychological and linguistic theories, and the possibility of reaching a vast range of participants worldwide, we look forward to the continued development of online data collection.


Corresponding author: Ashley Blake, University of Birmingham, Birmingham, UK

Award Identifier / Grant number: 1195918

Acknowledgement

This project was supported by an Alexander von Humboldt Professorship (grant number ID-1195918) awarded to the second author. We are grateful to the Getting Data project for partially funding the design of our online MToH task. Finally, we would like to take the opportunity to thank the children and their families for participating in our study.

  1. Research funding: This work was supported by Alexander von Humboldt-Stiftung (http://dx.doi.org/10.13039/100005156) under the grant no. 1195918.

References

Anderson, John R. 1982. Acquisition of cognitive skill. Psychological Review 89(4). 369–406. https://doi.org/10.1037/0033-295X.89.4.369.

Anderson, John R. (ed.). 1993. Rules of the mind. Hillsdale, NJ: Lawrence Erlbaum Associates.

Anderson, John R., Daniel Bothell, Michael D. Byrne, Scott Douglass, Christian Lebiere & Yulin Qin. 2004. An integrated theory of the mind. Psychological Review 111(4). 1036–1060. https://doi.org/10.1037/0033-295X.111.4.1036.

Anwyl-Irvine, Alexander L., Jessica Massonnié, Adam Flitton, Natasha Kirkham & Jo K. Evershed. 2020. Gorilla in our midst: An online behavioral experiment builder. Behavior Research Methods 52(1). 388–407. https://doi.org/10.3758/s13428-019-01237-x.

Arciuli, Joanne & Ian C. Simpson. 2011. Statistical learning in typically developing children: The role of age and speed of stimulus presentation. Developmental Science 14(3). 464–473. https://doi.org/10.1111/j.1467-7687.2009.00937.x.

Beaunieux, Hélène, Valérie Hubert, Thomas Witkowski, Anne-Lise Pitel, Sandrine Rossi, Jean-Marie Danion, Béatrice Desgranges & Francis Eustache. 2006. Which processes are involved in cognitive procedural learning? Memory 14(5). 521–539. https://doi.org/10.1080/09658210500477766.

Bishop, Dorothy V. M. 2003. The test for reception of grammar: TROG-2, Version 2. London: Pearson.

Bishop, Dorothy V. M., Margaret J. Snowling, Paul A. Thompson, Trisha Greenhalgh & the CATALISE-2 Consortium. 2017. Phase 2 of CATALISE, a multinational and multidisciplinary Delphi consensus study of problems with language development: Terminology. Journal of Child Psychology & Psychiatry 58(10). 1068–1080. https://doi.org/10.1111/jcpp.12721.

Chater, Nick & Morten H. Christiansen. 2018. Language acquisition as skill learning. Current Opinion in Behavioral Sciences 21. 205–208. https://doi.org/10.1016/j.cobeha.2018.04.001.

Chater, Nick, Stewart M. McCauley & Morten H. Christiansen. 2016. Language as skill: Intertwining comprehension and production. Journal of Memory & Language 89. 244–254. https://doi.org/10.1016/j.jml.2015.11.004.

Dąbrowska, Ewa & Ashley Blake. 2022. Speed of automatization predicts performance on “decorative” grammar in second language learning. In Thorsten Piske & Anja K. Steinlen (eds.), Cognition and second language acquisition: Studies on pre-school, primary school and secondary school children (Multilingualism and Language Teaching Band 4), 285–309. Tübingen: Narr Francke Attempto.

Dunn, Lloyd M., Douglas M. Dunn, Ben Styles & Julie Sewell. 2009. The British picture vocabulary scale, 3rd edn. London: GL Assessment.

Fitts, P. M. 1964. Perceptual-motor skill learning. In A. W. Melton (ed.), Categories of human learning, 243–285. New York: Academic Press. https://doi.org/10.1016/B978-1-4832-3145-7.50016-9.

Fitts, P. M. & M. I. Posner. 1967. Human performance. Belmont, CA: Brooks/Cole.

Henrich, Joseph, Steven J. Heine & Ara Norenzayan. 2010. The weirdest people in the world? Behavioral & Brain Sciences 33(2–3). 61–83. https://doi.org/10.1017/S0140525X0999152X.

Hubert, Valérie, Hélène Beaunieux, Gaël Chételat, Hervé Platel, Brigitte Landeau, Jean-Marie Danion, Fausto Viader, Béatrice Desgranges & Francis Eustache. 2007. The dynamic network subserving the three phases of cognitive procedural learning. Human Brain Mapping 28(12). 1415–1429. https://doi.org/10.1002/hbm.20354.

Johnson, K. 1996. Language teaching and skill learning. Oxford: Wiley.

Kamhi, Alan G. 2019. Speech-language development as proceduralization and skill learning: Implications for assessment and intervention. Journal of Communication Disorders 82. 105918. https://doi.org/10.1016/j.jcomdis.2019.105918.

Leonard, Laurence B. 2014. Children with specific language impairment. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/9152.001.0001.

Lezak, Muriel D., Diane B. Howieson, David W. Loring, Julia H. Hannay & Jill S. Fischer (eds.). 2004. Neuropsychological assessment, 4th edn. Oxford: Oxford University Press.

Mayer, Mercer. 1969. Frog, where are you? New York: Dial Books for Young Readers.

Nippold, Marilyn A., Tracy C. Mansfield, Jesse L. Billow & J. Bruce Tomblin. 2008. Expository discourse in adolescents with language impairments: Examining syntactic development. American Journal of Speech-Language Pathology 17(4). 356–366. https://doi.org/10.1044/1058-0360(2008/07-0049).

Norbury, Courtenay Frazier, George Vamvakas, Debbie Gooch, Gillian Baird, Tony Charman, Emily Simonoff & Andrew Pickles. 2017. Language growth in children with heterogeneous language disorders: A population study. Journal of Child Psychology & Psychiatry 58(10). 1092–1105. https://doi.org/10.1111/jcpp.12793.

Raven, J. C., J. E. Raven & J. H. Court. 1938. Progressive matrices. Oxford: Oxford Psychologists Press and Psychological Corporation.

Riches, Nick G. 2012. Sentence repetition in children with specific language impairment: An investigation of underlying mechanisms. International Journal of Language & Communication Disorders 47(5). 499–510. https://doi.org/10.1111/j.1460-6984.2012.00158.x.

Scott, Kimberly & Laura Schulz. 2017. Lookit (part 1): A new online platform for developmental research. Open Mind 1(1). 4–14. https://doi.org/10.1162/OPMI_a_00002.

Taatgen, Niels A., David Huss, Daniel Dickison & John R. Anderson. 2008. The acquisition of robust and flexible cognitive skills. Journal of Experimental Psychology: General 137(3). 548–565. https://doi.org/10.1037/0096-3445.137.3.548.

Tomblin, J. Bruce, Nancy L. Records, Paula Buckwalter, Xuyang Zhang, Elaine Smith & Marlea O’Brien. 1997. Prevalence of specific language impairment in kindergarten children. Journal of Speech, Language, & Hearing Research 40(6). 1245–1260. https://doi.org/10.1044/jslhr.4006.1245.

Wiig, Elisabeth H., Eleanor Messing Semel & Wayne Secord. 2013. CELF-5: Clinical evaluation of language fundamentals, 5th edn. Bloomington, MN: Pearson.

Zoelch, Christof, Katja Seitz & Ruth Schumann-Hengsteler. 2005. From rag(bag)s to riches: Measuring the developing central executive. In Wolfgang Schneider, Ruth Schumann-Hengsteler & Beate Sodian (eds.), Young children’s cognitive development: Interrelationships among executive functioning, working memory, verbal ability, and theory of mind, 39–69. Mahwah, NJ: Lawrence Erlbaum.


Supplementary Material

This article contains supplementary material (https://doi.org/10.1515/lingvan-2021-0145).

Video 1
Video 2

Received: 2021-12-14
Accepted: 2023-09-05
Published Online: 2024-01-05

© 2023 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
