
Some comments on robustness in comparative grammar research: commentary on “Replication and methodological robustness in quantitative typology” by Becker and Guzmán Naranjo

  • Martin Haspelmath
Published/Copyright: August 4, 2025

Over the last two decades, an increasing number of scientists have expressed worries because it has become increasingly clear that in a range of fields, a large proportion of scientific results could not be repeated (or “replicated”, or “reproduced”). Is this perhaps also a problem for comparative linguistics?

For comparative grammar (or typology), reproducibility was first discussed two decades ago (Plank 2006), and as more and more research projects use advanced quantitative methods, it is of course important to have some discussion of which methods are best, and whether different statistical approaches give the same results. It seems clear from discussions in other fields that the way scientists use statistical methods is often highly problematic. In 2005, John Ioannidis shocked researchers by suggesting that “for most study designs and settings, it is more likely for a research claim to be false than true” (Ioannidis 2005). And as recently as 2023, anthropologist Richard McElreath made some highly critical comments:[1]

“[T]he status quo in the biological and behavioral sciences is terrible. It makes little sense to invest in ambitious research projects when a typical researcher has no ability to define a non-null model of a phenomenon, to explore the implications of such a model, and to evaluate those implications with evidence. Success in the sciences does not, in the present day, depend upon those skills. But we should fix the foundations before wasting more public money.”

Thus, it is no doubt important to examine the tools of quantitative inference closely, just as we need to critically examine other aspects of our methodology – be it elicitation methods, or comparative corpus methods, or transcription methods. I also find it important to compare competing approaches, rather than to pursue only one approach without considering alternatives. The contribution by Becker and Guzmán Naranjo (2025), henceforth B&GN, is thus highly welcome, and may play a similar role in advancing methodological awareness in quantitative typology as was played by Bell’s (1978) seminal paper about language sampling. I also welcome their plea (in the supplementary Appendix) for complete and transparent datasets (ideally including example sentences), something that could not be taken for granted until recently.

In this short comment, I will discuss a few further factors that seem to be important for robustness and reproducibility in our field, in the spirit of the “manifesto for reproducible science” (Munafò et al. 2017). As B&GN observe, robustness also means stability of results (i) across different language samples and (ii) across alternative analyses and annotations. Munafò et al. mention quite a few further ways of enhancing robustness, such as (iii) protection against cognitive biases, (iv) improving methodological training, (v) encouraging team science, and (vi) improving incentives.

  1. Robustness across different language samples: B&GN mention a few papers where this has been addressed, but there could (and should) of course be a lot of additional work trying to replicate earlier results with new data. Many readily searchable open-access grammars have been published over the last 20 years (many of them accessible via ALT’s Grammar Watch site),[2] and there is no reason why such replications should not be assigned to master’s or even bachelor’s students (a minimal sketch of what such a sample-robustness check might look like is given after this list). However, some linguists have questioned whether comparative-grammar research should be based on language samples, preferring analyses based on phylogenetic trees (e.g. Jäger and Wahle 2021; Levinson et al. 2011). The advantages and downsides of both methods have not been discussed thoroughly by typologists,[3] but in any event, when a phylogeny-based study basically confirms the results of earlier sample-based studies (as in Jäger and Wahle’s results on word order correlations), this may be taken as reassuring. Phylogeny-based methods seem to work best (or be applicable exclusively) when one has data on a very large number of languages (e.g. the entire Grambank sample, as in Verkerk et al. 2023), which is rarely the case in typological studies.

  2. Alternative analyses and annotations: This is perhaps the greatest stumbling block for reproducibility in comparative linguistics, because the comparative concepts used for cross-linguistic comparison are not standardized in the field, and traditional terms are often understood and used in diverse ways across scholars and subcommunities. For example, what exactly is the difference between “case (marking)” and marking by an adposition? If one wanted to replicate claims about case (e.g. the universality of Blake’s (2001: 156) “case hierarchy”: nom > acc/erg > gen > dat > loc > abl/inst > others), one would have to have a clear way of distinguishing between case affixes and adpositions. The difficulty is mentioned in a paper by two generative linguists:

    “The truth of [the claim above] can only be judged … if one is able to distinguish reliably between an object NP in dative case (which can be agreed-with in languages of the non-Indo-European type) and PPs with a directional adposition like English to (which cannot be agreed with in either type of language). Settling such questions takes considerable work, work of exactly the type that the methodology of generative syntax requires and encourages.” (Baker and McCloskey 2007: 292)

    Baker and McCloskey emphasize that “settling” such questions is “labour-intensive”, but every practitioner of the field knows that they cannot be settled in the sense that all linguists (or all generative linguists) agree – there are simply too many moving parts in the generative approach, despite frequent appeals to “restrictiveness”. Over the last few decades, generative linguists have engaged in many discussions and have discovered a lot of interesting phenomena, but none of the larger issues or analytical questions has been “settled”. By contrast, in the comparative-concept approach (Haspelmath 2018), one defines cross-linguistically applicable concepts in a uniform way, but if one uses traditional terms (as is usual in typology), the definitions are often considered unsatisfactory by other linguists. For example, Dryer (2005) defines article words (as opposed to affixed articles) as elements that do not always occur directly next to the noun, so that the Basque article -a is regarded as a word, not as an affix. Because it is written as a suffix (e.g. etxe ‘house’, etxea ‘the house’, etxe berria ’the new house’), this decision is regarded as unintuitive by many. Thus, typological “analyses” are either subjective (as in generative grammar) or unintuitive (as in the comparative-concept approach), which means that the details of annotation in cross-linguistic studies are often contentious, and cross-linguistic databases such as WALS or Grambank are often regarded as “unreliable”. In the perception of many linguists, this is a far more evident source of uncertainty than non-reproducible quantitative analyses.[4]

  3. Protection against cognitive biases: This contributing factor to robustness has been discussed repeatedly in metascience, but it has played almost no role in the discussions that linguists have had. As Munafò et al. (2017) note, “an effective solution to mitigate self-deception and unwanted biases is blinding”, an approach that has long been used by experimentalists in biomedical fields. In comparative grammar, a principal investigator could perhaps apply “blinding” in the sense that she might instruct her research assistants to examine particular aspects of languages without telling them about the hypothesis that she is interested in. However, de facto linguists have strong cognitive biases, and often they are not even aware of their groupthink: They often “adopt” a certain framework at the beginning of the research and analyze the incoming data from the perspective of that framework and its commitments. The framework is typically an approach practiced by a community of scholars (perhaps with their own conferences and journals), which (by groupthink) reinforces the tendency to view the data exclusively from a particular perspective. This particular problem may not be so acute with large-scale studies of the type discussed by B&GN, but otherwise it is unfortunately quite typical of the field of (non-quantitative) linguistics.[5]

  4. Improving methodological training: This is a very practical matter that is perhaps less relevant to a small field such as typology, because it is not possible to organize larger training sessions for the small number of comparative grammarians. Examining a substantial number of languages worldwide requires substantial background knowledge and willingness to engage with highly technical grammatical descriptions, and acquiring a comprehensive knowledge of statistics in addition is a tall order. Psychologists have included training in statistics in academic curricula for many decades, but in linguistics, this is still rare.[6] Moreover, in view of the uncertainties surrounding different statistical approaches (McElreath 2020), it is not immediately clear what aspects of statistics one needs most urgently. Perhaps a solution could be collaborative research, or team science, i.e. point (v).

  5. Encouraging team science: Munafò et al. (2017) note that many research results are highly uncertain because of low statistical power, and this is certainly true of comparative grammar: Hypotheses about typological connections have often been based on just a few languages, or just languages of particular world regions. In the field of psychology, the “Many Labs” project brought together dozens of psychologist teams in order to increase the replicability of psychological research results (e.g. Klein et al. 2014), and there is a more recent project in developmental psychology called “Many Babies” (Frank et al. 2017). So could there be something like an analogous project “Many Languages”, where dozens of linguists working in different locations collaborate to improve the methodology of comparative grammar? The challenges of experimental research (as in psychology and psycholinguistics) are surely different, but more collaboration will probably be helpful in linguistics as well. There is a great deal of interesting research by individual scholars, often in dissertations, that involves many languages, but does it add up to something bigger, and are the individual studies reliable? Our knowledge would be more certain if we had more collaboration, although comparative linguistics is of course a far smaller field than psychology, and there may not be sufficient incentives (which brings us to the next topic).

  6. Improving incentives: Scientists are driven by curiosity, but also by funding options, and while they want to improve their science, they also need to survive. Improving reproducibility thus also requires a way to incentivize it (basically, to fund it). In a different disciplinary context (open hardware for microscopy), physicist Julian Stirling observes:

    “Making hardware open source takes considerable time, the noble benefits … do not align well within our current systems for ranking research” (Stirling 2024: 2).

    The same applies in many other fields, including comparative grammar. Current incentives in terms of career prospects and project funding emphasize innovative research, not necessarily robust research. For example, the European Research Council funds “ground-breaking research” that “leads to advances at the frontiers of knowledge” (ERC 2025). Whether such innovations are robust seems to be of secondary importance. Open science and reproducibility efforts have played an increasing role in various fields and have attracted some funding over the last two decades, but whether there will be sustained funding also depends on high-level political decisions, and not so much on what individual researchers do. In a glossy university magazine, a headline such as “Two thirds of last year’s research papers successfully replicated” will not impress readers too much. But of course, the mood may shift, and our societies may want researchers to pay much more attention to robustness, especially at a time when public trust in science can no longer be taken for granted (e.g. Cologna et al. 2025).
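
To make point (1) above slightly more concrete, the following is a minimal sketch (my own illustration, not anything proposed by B&GN) of what a simple sample-robustness check could look like: it repeatedly draws one language per family from a small, entirely hypothetical dataset and recomputes a crude association measure between two binary features, so that one can see how stable a putative correlation is across different genealogically balanced samples. All language names, family names and feature values below are invented for illustration.

```python
# Hypothetical sketch of a sample-robustness check: resample one language
# per family many times and see how stable an association between two
# binary typological features is. All data below are invented.
import random
from collections import defaultdict

# (language, family, feature_A, feature_B) -- purely illustrative records
data = [
    ("Lang1", "FamA", 1, 1),
    ("Lang2", "FamA", 1, 0),
    ("Lang3", "FamB", 0, 0),
    ("Lang4", "FamC", 1, 1),
    ("Lang5", "FamC", 0, 0),
    ("Lang6", "FamD", 0, 1),
    ("Lang7", "FamD", 1, 1),
    ("Lang8", "FamE", 0, 0),
]

# Group the languages by family so that each resample is family-balanced.
by_family = defaultdict(list)
for row in data:
    by_family[row[1]].append(row)

def association(sample):
    """Crude association measure: share of languages where A and B agree."""
    return sum(1 for (_, _, a, b) in sample if a == b) / len(sample)

# Draw one language per family, repeatedly, and record the statistic.
estimates = []
for _ in range(1000):
    sample = [random.choice(languages) for languages in by_family.values()]
    estimates.append(association(sample))

estimates.sort()
low, high = estimates[25], estimates[-26]  # rough central 95% range
print(f"association across family-balanced resamples: {low:.2f} to {high:.2f}")
```

A real study would of course use a much larger dataset with documented family affiliations and a proper statistical model of the kind discussed by B&GN; the point of the sketch is only that redrawing the sample and re-running the analysis is cheap, and that this is exactly the kind of replication exercise that could be assigned to students.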

B&GN’s paper will hopefully lead to further reflection and to improving comparative grammar research even more, although we should keep in mind that large-scale comparative research is (and will always be) a small corner of the larger field of language science. The great majority of linguists will always study only one major language, and will be able to compare at most a few languages (e.g. Chinese and English, or Spanish and Portuguese, or Polish, Ukrainian and Russian). And there is of course a great danger that linguistics will continue to be subject to all kinds of biases, not only political biases, but also biases deriving from particular perspectives such as strong convictions either about innateness or non-innateness (cf. Pinker 2003), biases deriving from the perceived success or non-success of Large Language Models (cf. Piantadosi 2024), and so on. Bias-free and robust research may be more difficult than we used to think, but there is no alternative to trying harder.


Corresponding author: Martin Haspelmath [maʁti:n 'haspl̩maːt], Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany, E-mail:

References

Baker, Mark C. & Jim McCloskey. 2007. On the relationship of typology to theoretical syntax. Linguistic Typology 11(1). 285–296. https://doi.org/10.1515/LINGTY.2007.023.

Becker, Laura & Matías Guzmán Naranjo. 2025. Replication and methodological robustness in quantitative typology. Linguistic Typology 29(3). 463–505. https://doi.org/10.1515/lingty-2023-0076.

Bell, Alan. 1978. Language samples. In Joseph H. Greenberg (ed.), Universals of human language, vol. 1, 123–156. Stanford: Stanford University Press.

Berg, Thomas. 2020. Nominal and pronominal gender: Putting Greenberg’s Universal 43 to the test. STUF – Language Typology and Universals 73(4). 525–574. https://doi.org/10.1515/stuf-2020-1018.

Blake, Barry J. 2001. Case (Cambridge Textbooks in Linguistics), 2nd edn. Cambridge: Cambridge University Press.

Bugaeva, Anna, Johanna Nichols & Balthasar Bickel. 2022. Appositive possession in Ainu and around the Pacific. Linguistic Typology 26(1). 43–88. https://doi.org/10.1515/lingty-2021-2079.

Cologna, Viktoria, Niels G. Mede, Sebastian Berger, John Besley, Cameron Brick, Marina Joubert, Edward W. Maibach, Sabina Mihelj, Naomi Oreskes, Mike S. Schäfer, Sander van der Linden, Nor Izzatina Abdul Aziz, Suleiman Abdulsalam, Nurulaini Abu Shamsi, Balazs Aczel, Indro Adinugroho, Eleonora Alabrese, Alaa Aldoh, Mark Alfano, Innocent Mbulli Ali, Mohammed Alsobay, Marlene Altenmüller, R. Michael Alvarez, Richard Amoako, Tabitha Amollo, Patrick Ansah, Denisa Apriliawati, Flavio Azevedo, Ani Bajrami, Ronita Bardhan, Keagile Bati, Eri Bertsou, Cornelia Betsch, Apurav Yash Bhatiya, Rahul Bhui, Olga Białobrzeska, Michał Bilewicz, Ayoub Bouguettaya, Katherine Breeden, Amélie Bret, Ondrej Buchel, Pablo Cabrera-Álvarez, Federica Cagnoli, André Calero Valdez, Timothy Callaghan, Rizza Kaye Cases, Sami Çoksan, Gabriela Czarnek, Steven De Peuter, Ramit Debnath, Sylvain Delouvée, Lucia Di Stefano, Celia Díaz-Catalán, Kimberly C. Doell, Simone Dohle, Karen M. Douglas, Charlotte Dries, Dmitrii Dubrov, Małgorzata Dzimińska, Ullrich K. H. Ecker, Christian T. Elbaek, Mahmoud Elsherif, Benjamin Enke, Tom W. Etienne, Matthew Facciani, Antoinette Fage-Butler, Md. Zaki Faisal, Xiaoli Fan, Christina Farhart, Christoph Feldhaus, Marinus Ferreira, Stefan Feuerriegel, Helen Fischer, Jana Freundt, Malte Friese, Simon Fuglsang, Albina Gallyamova, Patricia Garrido-Vásquez, Mauricio E. Garrido Vásquez, Winfred Gatua, Oliver Genschow, Omid Ghasemi, Theofilos Gkinopoulos, Jamie L. Gloor, Ellen Goddard, Mario Gollwitzer, Claudia González-Brambila, Hazel Gordon, Dmitry Grigoryev, Gina M. Grimshaw, Lars Guenther, Håvard Haarstad, Dana Harari, Lelia N. Hawkins, Przemysław Hensel, Alma Cristal Hernández-Mondragón, Atar Herziger, Guanxiong Huang, Markus Huff, Mairéad Hurley, Nygmet Ibadildin, Maho Ishibashi, Mohammad Tarikul Islam, Younes Jeddi, Tao Jin, Charlotte A. Jones, Sebastian Jungkunz, Dominika Jurgiel, Zhangir Kabdulkair, Jo-Ju Kao, Sarah Kavassalis, John R. Kerr, Mariana Kitsa, Tereza Klabíková Rábová, Olivier Klein, Hoyoun Koh, Aki Koivula, Lilian Kojan, Elizaveta Komyaginskaya, Laura König, Lina Koppel, Kochav Koren Nobre Cavalcante, Alexandra Kosachenko, John Kotcher, Laura S. Kranz, Pradeep Krishnan, Silje Kristiansen, André Krouwel, Toon Kuppens, Eleni A. Kyza, Claus Lamm, Anthony Lantian, Aleksandra Lazić, Oscar Lecuona, Jean-Baptiste Légal, Zoe Leviston, Neil Levy, Amanda M. Lindkvist, Grégoire Lits, Andreas Löschel, Alberto López Ortega, Carlos Lopez-Villavicencio, Nigel Mantou Lou, Chloe H. Lucas, Kristin Lunz-Trujillo, Mathew D. Marques, Sabrina J. Mayer, Ryan McKay, Hugo Mercier, Julia Metag, Taciano L. Milfont, Joanne M. Miller, Panagiotis Mitkidis, Fredy Monge-Rodríguez, Matt Motta, Iryna Mudra, Zarja Muršič, Jennifer Namutebi, Eryn J. Newman, Jonas P. Nitschke, Ntui-Njock Vincent Ntui, Daniel Nwogwugwu, Thomas Ostermann, Tobias Otterbring, Jaime Palmer-Hague, Myrto Pantazi, Philip Pärnamets, Paolo Parra Saiani, Mariola Paruzel-Czachura, Michal Parzuchowski, Yuri G. Pavlov, Adam R. Pearson, Myron A. Penner, Charlotte R. Pennington, Katerina Petkanopoulou, Marija B. Petrović, Jan Pfänder, Dinara Pisareva, Adam Ploszaj, Karolína Poliaková, Ekaterina Pronizius, Katarzyna Pypno-Blajda, Diwa Malaya A. Quiñones, Pekka Räsänen, Adrian Rauchfleisch, Felix G. Rebitschek, Cintia Refojo Seronero, Gabriel Rêgo, James P. Reynolds, Joseph Roche, Simone Rödder, Jan Philipp Röer, Robert M. Ross, Isabelle Ruin, Osvaldo Santos, Ricardo R. 
Santos, Philipp Schmid, Stefan Schulreich, Bermond Scoggins, Amena Sharaf, Justin Sheria Nfundiko, Emily Shuckburgh, Johan Six, Nevin Solak, Leonhard Späth, Bram Spruyt, Olivier Standaert, Samantha K. Stanley, Gert Storms, Noel Strahm, Stylianos Syropoulos, Barnabas Szaszi, Ewa Szumowska, Mikihito Tanaka, Claudia Teran-Escobar, Boryana Todorova, Abdoul Kafid Toko, Renata Tokrri, Daniel Toribio-Florez, Manos Tsakiris, Michael Tyrala, Özden Melis Uluğ, Ijeoma Chinwe Uzoma, Jochem van Noord, Christiana Varda, Steven Verheyen, Iris Vilares, Madalina Vlasceanu, Andreas von Bubnoff, Iain Walker, Izabela Warwas, Marcel Weber, Tim Weninger, Mareike Westfal, Florian Wintterlin, Adrian Dominik Wojcik, Ziqian Xia, Jinliang Xie, Ewa Zegler-Poleska, Amber Zenklusen & Rolf A. Zwaan. 2025. Trust in scientists and their role in society across 68 countries. Nature Human Behaviour 9. 713–730. https://doi.org/10.1038/s41562-024-02090-5.

Dryer, Matthew S. 2005. Definite articles. In Martin Haspelmath, Matthew S. Dryer, David Gil & Bernard Comrie (eds.), The world atlas of language structures, 154–157. Oxford: Oxford University Press. Available at: http://wals.info/chapter/37.

ERC. 2025. ERC Work Programme 2025. Available at: https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/wp-call/2025/wp_horizon-erc-2025_en.pdf.

Fedden, Sebastian & Greville G. Corbett. 2017. Gender and classifiers in concurrent systems: Refining the typology of nominal classification. Glossa: A Journal of General Linguistics 2(1). 34. https://doi.org/10.5334/gjgl.177.

Frank, Michael C., Elika Bergelson, Christina Bergmann, Alejandrina Cristia, Caroline Floccia, Judit Gervain, J. Kiley Hamlin, Erin E. Hannon, Melissa Kline, Claartje Levelt, Casey Lew-Williams, Thierry Nazzi, Robin Panneton, Hugh Rabagliati, Melanie Soderstrom, Jessica Sullivan, Sandra Waxman & Daniel Yurovsky. 2017. A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. Infancy 22(4). 421–435. https://doi.org/10.1111/infa.12182.

Haspelmath, Martin. 2018. How comparative concepts and descriptive linguistic categories are different. In Daniël Van Olmen, Tanja Mortelmans & Frank Brisard (eds.), Aspects of linguistic variation: Studies in honor of Johan van der Auwera, 83–113. Berlin: De Gruyter Mouton. https://doi.org/10.1515/9783110607963-004. Available at: https://zenodo.org/record/3519206.

Ioannidis, John P. A. 2005. Why most published research findings are false. PLoS Medicine 2(8). e124. https://doi.org/10.1371/journal.pmed.0020124.

Jäger, Gerhard & Johannes Wahle. 2021. Phylogenetic typology. Frontiers in Psychology 12. 2852. https://doi.org/10.3389/fpsyg.2021.682132.

Klein, Richard A., Kate A. Ratliff, Michelangelo Vianello, Reginald B. Adams Jr, Štěpán Bahník, Michael J. Bernstein, Konrad Bocian, Mark J. Brandt, Beach Brooks, Claudia Chloe Brumbaugh, Zeynep Cemalcilar, Jesse Chandler, Winnee Cheong, William E. Davis, Thierry Devos, Matthew Eisner, Natalia Frankowska, David Furrow, Elisa Maria Galliani, Fred Hasselman, Joshua A. Hicks, James F. Hovermale, S. Jane Hunt, Jeffrey R. Huntsinger, Hans IJzerman, Melissa-Sue John, Jennifer A. Joy-Gaba, Heather Barry Kappes, Lacy E. Krueger, Jaime Kurtz, Carmel A. Levitan, Robyn K. Mallett, Wendy L. Morris, Anthony J. Nelson, Jason A. Nier, Grant Packard, Ronaldo Pilati, Abraham M. Rutchick, Kathleen Schmidt, Jeanine L. Skorinko, Robert Smith, Troy G. Steiner, Justin Storbeck, Lyn M. Van Swol, Donna Thompson, A. E. van ‘t Veer, Leigh Ann Vaughn, Marek Vranka, Aaron L. Wichman, Julie A. Woodzicka & Brian A. Nosek. 2014. Investigating variation in replicability. Social Psychology 45(3). 142–152. https://doi.org/10.1027/1864-9335/a000178.

Levinson, Stephen C., Simon J. Greenhill, Russell D. Gray & Michael Dunn. 2011. Universal typological dependencies should be detectable in the history of language families. Linguistic Typology 15. 509–534. https://doi.org/10.1515/lity.2011.034.

McElreath, Richard. 2020. Statistical rethinking: A Bayesian course with examples in R and Stan (Texts in Statistical Science Series), 2nd edn. Boca Raton: CRC Press. https://doi.org/10.1201/9780429029608.

Munafò, Marcus R., Brian A. Nosek, Dorothy V. M. Bishop, Katherine S. Button, Christopher D. Chambers, Nathalie Percie du Sert, Uri Simonsohn, Eric-Jan Wagenmakers, Jennifer J. Ware & John P. A. Ioannidis. 2017. A manifesto for reproducible science. Nature Human Behaviour 1(1). 1–9. https://doi.org/10.1038/s41562-016-0021.

Piantadosi, Steven T. 2024. Modern language models refute Chomsky’s approach to language. In Edward Gibson & Moshe Poliak (eds.), From fieldwork to linguistic theory, 353–414. Berlin: Language Science Press. Available at: https://langsci-press.org/catalog/view/434/4519/2779-1.

Pinker, Steven. 2003. The blank slate: The modern denial of human nature. New York [etc.]: Penguin Books.

Plank, Frans (ed.). 2006. Re-doing typology. Linguistic Typology 10(1). 67–128. https://doi.org/10.1515/LINGTY.2006.004.

Stirling, Julian. 2024. Open instrumentation, like open data, is key to reproducible science. Yet, without incentives it won’t thrive. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 382(2274). 20230215. https://doi.org/10.1098/rsta.2023.0215.

Verkerk, Annemarie, Hannah J. Haynie, Olena Shcherbakova, Hedvig Skirgård, Quentin Atkinson, Simon Greenhill & Russell Gray. 2023. Global-scale inference of typological universals using Grambank. Paper presented at the Grambank workshop, MPI-EVA Leipzig. Available at: https://www.youtube.com/watch?v=zyT0G-mILyk.

Received: 2025-04-08
Accepted: 2025-05-15
Published Online: 2025-08-04
Published in Print: 2025-10-27

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
