Open Access

A desmemic architecture for autotyp: a review article

  • Adam J. R. Tallman
Published/Copyright: 11 March 2022

Reviewed Publication:

Good, Jeff. 2016. The linguistic typology of templates. Cambridge: Cambridge University Press. ISBN 978-1-107-01502-9.


Jeff Good’s The linguistic typology of templates develops a typological approach to investigating templatic structures. The notion of a ‘template’ is polysemous, covering a wide array of structural relations. For Good, the core concept unifying different notions of template is that they represent “surprising” instances of linear stipulation. While some notion of linear order has played a part in various typological projects (e.g. word order universals), Good’s study is unique in that it investigates linear stipulation across different domains of grammar and develops a sophisticated description language that allows relevant typological variables to be coded with a high degree of granularity. While the research is in its preliminary stages in terms of the number of languages and constructions coded, the program envisioned by Good cuts to shreds an oft-recited mantra about typological research necessarily being coarse-grained and superficial. Good’s approach also represents a significant advance in AUTOTYP-style method (Bickel 2010; Bickel and Nichols 2002; Witzlack-Makarevich et al. 2022), as he shows that typological variables can be baked into Head-Driven Phrase Structure Grammar (HPSG)-style nested feature-value structures. The resulting database can be analyzed with graph-theoretic methods, which Good shows to be flexible enough to engage with a diverse range of research questions.

The book is divided into five chapters. The first chapter provides a literature review of the uses of the notion of ‘template’ in linguistics, dividing the uses into different subclasses while teasing out a comparative concept for the notion. Good describes morphophonological, morphosyntactic, phonological, and syntactic templates. A morphophonological template is one where some constituent is subject to constraints on linear order that can be stated in phonological terms. A morphosyntactic template is one whose linear stipulations refer to “morphosyntactic or morphosemantic” categories. A phonological template is one which specifies the linear order of phonological elements based on phonological constraints (e.g. the order of consonants, vowels etc. in syllabification rules). A syntactic template is one that involves an analysis whereby the linear realization of elements involves syntactic notions such as the phrase. Good abstracts away from the differences between templates in these distinct domains with the following definition/comparative concept for a template.

(1)
Template: An analytic device used to characterize the linear realization of a linguistic constituent whose linear stipulations are unexpected from the point of view of a given linguist’s approach to linguistic analysis.

Good then provides a discussion of what it means for linear stipulation to be “unexpected”. He points out that the notion of template as it is used in the literature is not rigorous enough for typological study. The reason is that it is unclear how we should discern a priori that some pattern of linear stipulation is “unexpected”. Good’s passage is worth quoting in full since it shows a perceptive understanding of the degree to which linguistic analysis is overly subjective while also explaining the motivation for his own study.

An additional condition must be met [for some instance of linear ordering to be considered templatic]: the nature of the stipulation must, in some way, be considered to deviate from expectations. Unfortunately, on the whole, such expectations are merely implicit, and linguists lack anything resembling a generalized theory (or even descriptive model) of what kinds of linear stipulation are “normal” for a given class of linguistic elements. It is even hard to find explicit statements regarding basic generalizations that would almost certainly be uncontroversial: for example, that smaller domains (such as words) allow for a more elaborated degree of linear stipulation than larger domains (such as sentences) or that phonology’s somehow more “intimate” connection to grammatical linearization when set against, for instance, syntax means that we should see a general correlation between the degree of a construction’s phonological specifications and linear ones. (p. 25)

Good discusses how such intuitions about unexpectedness are manifested in the assumptions of a few theoretical models in linguistics. Good does not mention this, but one might add that expectations about what linear stipulations are surprising could be derived from the languages a given linguist is more familiar with. An intuitive notion of unexpected patterns runs the risk of being biased towards specific over-studied language structures. Conspicuously, Good takes linear stipulations from English grammar as non-templatic (hence non-surprising) “controls” in his study (in Chapter 3, see below).

Since the implicit assumptions about what linear stipulations are surprising have not been explicitly drawn out, let alone motivated empirically, Good develops a comparative concept for the study of linear stipulation called the desmeme. The desmeme is a template but without the linguist’s intuitions about “expectedness” forming part of the definition. Chapter 1 introduces some fundamental notions in typological research, including the notion of a typological description language, comparative concepts, and multivariate approaches to typology.

Chapter 2 turns towards discussing the structure of the desmeme. Good’s description of the desmeme is coupled with the concomitant goal of stripping traditional descriptive and theoretical notions of their “chimerics”. For Good, chimerics are categories or linguistic notions that combine (and thereby inadvertently conflate) logically distinct linguistic properties. A “prefix” could be an example of a chimeric since it combines a notion of boundedness with a parameter of direction relative to some host. Chimerics blind us to the abstract similarities between different domains of grammar. The idea is that the internal structure of phonological, morphological, or syntactic constituents/templates might have more in common than first appears once we have rid ourselves of our chimeric notions.

Chapter 2 is somewhat dense since it introduces a new language for typological description. Good avoids naming the properties of desmemes after more familiar notions. He does this in order to prevent infelicitous conceptual conflation of his technical vocabulary with commonplace notions like ‘slot’ and ‘order’ (p. 45). Obviously a review such as this cannot do justice to the intricacies of Good’s description language, so I will focus on some of its most salient aspects. The desmeme is constructed out of components and a set of four ‘high-level features’. Components are basically like the slots of the traditional template, but elaborated with variables coding syntagmatic and other properties specific to each of them. The high-level desmeme features are features that hold over the whole desmeme. They are conditioning, violability, stricture and foundation. Conditioning tells us what type of domain of structure the desmeme characterizes, whether phonological, morphophonological, morphosyntactic or syntactic. Stricture characterizes what type of linear stipulation is involved: whether we are dealing with a constraint on length or the ordering of components. Foundation is a fairly sophisticated feature that concerns the relation the different components have to each other. Roughly, there are two types of foundation: (i) one with a head-like (‘keystone’) component and (ii) a headless type which is defined only by its edges (‘arches’).

I found the notion of violability somewhat problematic in light of Good’s critical exegesis of the template in linguistics. A violable template is one whose constraints “may simply fail to apply”. Good provides an example of a hypothetical language where some minimal size constraint applies to most lexical items except a few. While Good’s example is intuitive, it is unclear to me how we would go about distinguishing a poorly articulated template (i.e. one that simply gets the constraints wrong) from one that is violated under certain circumstances. In an ideal research project violability might need to be conceptualized as a stochastic notion distributed more or less over the lexicon of a language. How to operationalize such a notion strikes me as a challenging research project.

The desmeme contains a nested feature-value structure rather than a list of flat variables as in typical multivariate typological studies (e.g. Bickel 2010). Each of the features has associated properties with values. Good convincingly presents this as a methodological advance over typical multivariate analyses as it better captures the co-dependencies present in individual languages (more on this below). Each of the high-level features of desmemes described above can take on particular properties. Desmemes are additionally composed of components. Components are similar to ‘slots’ in more common usage. Each component has three features: filledness, elasticity, and stability. Filledness refers to how a component is filled, either by a single element or by a class of elements. For instance, a component can be filled by a single aspect marker or by a whole class of tense morphemes. Elasticity refers to whether the component can be filled by one or more elements. Roughly, an inelastic component is close to what most would consider to be a “slot” since there can only be one element at a time. An elastic component can have many elements inside it.[1] Stability refers to whether the component is dependent on other positions or not and in what sense. Such a component could be used for tackling cross-dependencies (e.g. extended exponence and other types of deviations from biuniqueness).

Figure 1 provides a simplified overview of Good’s desmeme. Each feature shown below can take a number of values. The description language is flexible enough to accommodate the addition of new properties if need be (desmemes are not closed, single-use systems).

Figure 1: Simplified desmeme structure.
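
To make this nested feature-value organization concrete, the sketch below shows one way a desmeme could be represented as data. The feature names (conditioning, violability, stricture, foundation; filledness, elasticity, stability) follow Good’s terminology as summarized above, but the value inventories and the example desmeme are invented for illustration and are not drawn from Good’s actual database or notation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    """A 'slot'-like piece of a desmeme, with its own component-level features."""
    name: str
    filledness: str   # e.g. "single element" vs. "class of elements"
    elasticity: str   # e.g. "inelastic" (one filler at a time) vs. "elastic" (several)
    stability: str    # whether and how the component depends on other components

@dataclass
class Desmeme:
    """A desmeme: four high-level features plus an ordered list of components."""
    label: str
    conditioning: str  # phonological / morphophonological / morphosyntactic / syntactic
    violability: str   # whether the linear stipulations may fail to apply
    stricture: str     # length restriction vs. component ordering
    foundation: str    # keystone (head-like component) vs. arch (edge-defined)
    components: List[Component] = field(default_factory=list)

# An invented example, loosely inspired by the discussion of position-class systems.
verb_template = Desmeme(
    label="hypothetical verb stem",
    conditioning="morphosyntactic",
    violability="non-violable",
    stricture="component ordering",
    foundation="keystone",
    components=[
        Component("subject agreement", filledness="class of elements",
                  elasticity="inelastic", stability="independent"),
        Component("root", filledness="single element",
                  elasticity="inelastic", stability="independent"),
        Component("aspect", filledness="class of elements",
                  elasticity="inelastic", stability="dependent on root"),
    ],
)
```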

A desmeme can consist of any number of components. An interesting question emerges at this point regarding whether the researcher should be a ‘templatic lumper’ or a ‘templatic splitter’. An enthusiastic templatic splitter would construct a desmeme for every linear stipulation. A templatic lumper would make desmemes as large as possible. One issue that I felt could have received more discussion is the extent to which a lumping versus a splitting strategy can or should be guided by empirical evidence and the extent to which it is simply contingent on one’s research questions. Good suggests that there is some optimal number of desmemes that can be determined empirically in each case: “Whether one should be a templatic lumper or splitter is not a question that can be answered generally but, rather, depends on the specific facts of each language” (p. 89). However, Good does not elaborate on this point and proceeds to provide examples where there is ambiguity between cutting a particular grammatical pattern into smaller or larger desmemes. Good states that he adopts a splitting approach for “metatheoretical” reasons:

To the extent that templatic patterns do exist, it seems reasonable to assume that they truly are unusual given linguists’ perception over decades that they are not the “normal” way that patterns of linearization are structured. The conventional wisdom may very well be wrong, but it seems reasonable to follow it at this state, and it would lead, in general to positing that templatic restrictions are simpler rather than more complex, all things being equal, which favors a splitting approach. (p. 89)

I do not completely follow the argument here. It seems to grate against Good’s earlier discussions of “unexpectedness” and “chimerics”, where linguists’ intuitions were subjected to more critical scrutiny. Also, Good does not provide sufficient examples to illustrate that the decision between lumping versus splitting is usually an empirical issue; in fact, he only provides arguments to the contrary. I would suggest that it is a matter of one’s research question (Tallman 2021b). I agree with the quote above that the lumping/splitting decision cannot be answered generally, but it is unclear to me how it is a strictly empirical issue, rather than one of research prerogatives. Filling out variables with abstract intuitive notions about what constitutes a surprising pattern runs the risk of prejudging answers to research questions we might have about, for instance, the real difference between linear stipulation at different domains of structure (how different is morphology from syntax in this regard?). Nevertheless, the issue is not too important at this stage in the research since, as Good emphasizes, his database is structured to allow the researcher to recode information according to a lumping strategy. Furthermore, the issue of how one goes about deciding whether a desmeme should be cut up is partially addressed (albeit obliquely) in Chapter 3 in the context of a comparison between Bantu and Nimboran.

Armed with the desmemic architecture, in Chapter 3 Good moves to illustrating his coding practices, discussing methodological issues along the way. He illustrates 16 desmemes coded in his database from a diverse range of languages, but with an overall skew towards Bantu languages. The following templates are discussed: (i) Turkish stems; (ii) Chintang prefixes; (iii) Nimboran; (iv) Bantu causative-transitive; (v) Bantu causative-applicative; (vi) Bantu applicative-reciprocal; (vii) Tiene verb stem; (viii) Chechen preverbal ’a; (ix) Serbo-Croatian je; (x) Serbo-Croatian topicalization; (xi) Aghem clauses; (xii) Mande clauses; (xiii) German clauses; (xiv) English plural; (xv) English verb phrase. The presentation goes from templates with smaller elements to those with larger ones. I can only share a few highlights.

The templatic restrictions on the Turkish (nucl1301) stem only become visible for CV roots that do not meet a bisyllabic minimality condition on morphologically complex forms. Good illustrates the purpose of the repairability variable (under violability) by showing that the noun and verb desmemes of Turkish stems are distinct in this regard: CV nouns are ineffable (cannot be uttered), whereas CV verb roots are repaired through insertion.

One of the challenging aspects of the desmeme coding system is knowing when to code linear restrictions between formatives as a single desmeme or as multiple desmemes. The problem seems most obvious when we compare the verb complexes of Nimboran to those of Bantu languages. The Nimboran (nucl1633) verb complex has been described in terms of nine structural positions. The most striking motivation for treating the Nimboran verb complex in terms of an array of position classes comes from blocking phenomena. Blocking phenomena refer to cases where two morphemes do not seem to be incompatible on semantic grounds, and yet they cannot co-occur; their distributional properties suggest they are in the same slot, and the slot can only be occupied by one element at a time. For example, the dual object marker -dar- and the dual subject marker -k- do not conflict with each other on semantic grounds, but only one of them can occupy position two (see Inkelas 1993 for further details and analysis). Good constructs a single desmeme out of positions which emerge from consideration of such distributional facts. In contrast, the Bantu verb complex is treated differently: instead of analyzing all Bantu suffixes into a single desmeme structure, each affix-affix pair is treated as a desmeme. The discussion does not provide an explicit articulation of why such a difference in treatment is motivated. However, a close reading of Good’s sections on Nimboran and Bantu languages provides the reader with a few clues as to why this differential treatment makes sense.

In Bantu languages, there is less evidence from blocking phenomena to motivate position classes. Any combination of causative, applicative, transitive etc. can co-occur. To the extent that there are constraints, they refer to individual elements (e.g. a causative must occur before an applicative if these two co-occur, otherwise affix ordering is variable). Thus, the desmemes refer to these elements. In Nimboran, by contrast, a host of blocking phenomena, superpositions and co-occurrence constraints provides evidence for a position-class structure. While Good’s detailed consideration of the differences between Nimboran and Bantu gives the reader a sense that the author made the right decision, more discussion of how to cut up a grammar into the right number of desmemes would have made the coding decisions more transparent and perhaps more replicable.

With regard to Chintang, following Bickel et al. (2007), Good analyzes variable prefix ordering in this language as emerging from a phonological selection constraint. The important point about Chintang is that prefixes can order variably with respect to one another without producing a difference in scope.

In these discussions it appears that Good has a strong tendency to provide prosodic or morphophonological analyses of linear stipulations. While it is true that there are some cases where a prosodic explanation seems necessary (e.g. Serbo-Croatian second position/Wackernagel clitics), it is not clear that all of the cases discussed by Good require a prosodic explanation, even where they can be given one. One wonders if some of the desmemes capture correct, but unnecessary, generalizations in light of other facts about the language. For instance, Good takes Bickel et al.’s (2007) analysis of Chintang (chhi1245) at face value, but it is unclear why variable prefix ordering in this language is not analyzed in a way that is analogous to that of free constituent order in Meskwaki (Algonquian). For Meskwaki (mesk1242), Good codes a postverbal component for XPs, which is elastic and thereby allows the relevant constituents to order variably. An analogous analysis could be given to Chintang prefixes: simply posit an elastic, but incoherent, component for prefixes, giving them a morphosyntactic stricture which forces them to occur before the verb root. While it is understandable that the author would not want to deviate from published analyses, in this case following Bickel et al. (2007) seems to make Good adopt coding practices that are at odds with one of his main research questions: what are the differences and similarities in linear stipulation across different domains of grammar? Coding Chintang the way Good does seems to prejudge the question, since the prosodic explanation is only seen as required because this type of variable prefix ordering is “unexpected” in the morphological domain, but not in the syntactic one. Apart from these minor and, I think, fixable issues, Chapter 3 provides a convincing proof of concept of the desmemic architecture as a useful comparative concept vis-à-vis a wide variety of case studies.

Chapter 4 is concerned with showing how desmemic structures can be compared quantitatively. It is in this chapter that the usefulness of desmemes for investigating typological variation and universals becomes clearer. Good shows that desmemes can be translated into graph structures (see Trudeau 1993 for an introduction and Kolaczyk and Csárdi 2020 for implementations in R). Armed with the mathematics of graph structures, Good provides some exemplary illustrations of metrics that can be used to assess the overall similarity between desmemes or their subparts. Good illustrates how desmemic graphs can be compared holistically using methods devised in research on genetics. A (dis)similarity matrix across desmemes can be derived by using a node-based similarity algorithm over desmemic structures. Familiar network or clustering methods can then be applied to assess the overall similarity between desmemes. Good illustrates one such method by applying a NeighborNet analysis to the derived dissimilarity matrix. The method is partially corroborated by the fact that it yields some expected results. For instance, we see that, overall, syntactic templates cluster with each other. Morphological templates are somewhat more diverse. Interestingly, the Nimboran desmeme comes close to clustering with clause patterns. The Nimboran morphological template is highly elaborate (perhaps ‘polysynthetic’). One might expect that languages whose morphological systems display highly elaborate syntax-like properties will be measured closer overall to clause-level templates.
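
The overall pipeline (desmemic graphs, a node-based (dis)similarity matrix, and then a clustering or network method) can be sketched as follows. The desmemes, the node labels, the Jaccard overlap measure and the hierarchical clustering step are all simplifying stand-ins of my own; Good uses a node-based similarity algorithm borrowed from genetics and a NeighborNet analysis, which are not reproduced here.

```python
# Minimal sketch: desmemic graphs -> node-based (dis)similarity matrix -> clustering.
import itertools
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Each (hypothetical) desmeme is reduced to the set of labelled nodes in its
# graph representation, e.g. "stricture=ordering".
desmemes = {
    "Nimboran verb": {"conditioning=morphosyntactic", "stricture=ordering",
                      "foundation=keystone", "elasticity=inelastic"},
    "Turkish stem": {"conditioning=morphophonological", "stricture=length",
                     "foundation=arch", "violability=repairable"},
    "German clause": {"conditioning=syntactic", "stricture=ordering",
                      "foundation=keystone", "elasticity=elastic"},
}

names = list(desmemes)
n = len(names)
dissim = np.zeros((n, n))
for i, j in itertools.combinations(range(n), 2):
    a, b = desmemes[names[i]], desmemes[names[j]]
    jaccard = len(a & b) / len(a | b)          # node-overlap similarity
    dissim[i, j] = dissim[j, i] = 1 - jaccard  # turn similarity into a distance

# Condensed distance vector for scipy, then agglomerative clustering.
condensed = dissim[np.triu_indices(n, k=1)]
tree = linkage(condensed, method="average")
clusters = dendrogram(tree, labels=names, no_plot=True)  # inspect or plot the groupings
```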

Good illustrates a more sophisticated method, called ‘similarity flooding’, for comparing graph structures (Melnik et al. 2002). Roughly, similarity flooding is used to assess the overall similarity of the pieces of desmemes based on their surrounding context across the desmemic structures of different languages. This methodology can be used to compare the interpredictability of different pieces of desmemic structure across languages. For example, Good’s results suggest that there is relatively high interpredictability between the stricture and foundation features. The stricture feature, which codes the type of linear stipulation (e.g. length versus component ordering), is correlated with foundation (whether the template has head-like elements). Good speculates that this is because length-type strictures typically involve two-unit restrictions that do not need to make reference to the more complex headed template structures.
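
The core intuition behind similarity flooding, that two nodes are similar to the extent that their neighbours are similar, can be illustrated with a stripped-down iterative computation like the one below. The toy graphs, labels, and the particular initialization and normalization choices are mine; Melnik et al.’s (2002) full algorithm additionally uses edge labels and a more refined propagation graph, which are omitted here.

```python
# Simplified similarity flooding: the similarity of a node pair (one node from
# each graph) is repeatedly reinforced by the similarity of neighbouring pairs,
# then renormalized.
import itertools
import networkx as nx

def similarity_flooding(g1, g2, iterations=10):
    # Initial similarity: 1.0 if node labels match, a small value otherwise.
    sigma = {(a, b): 1.0 if g1.nodes[a]["label"] == g2.nodes[b]["label"] else 0.1
             for a, b in itertools.product(g1.nodes, g2.nodes)}
    for _ in range(iterations):
        new = {}
        for (a, b), s in sigma.items():
            # Each neighbouring pair contributes its own similarity.
            flow = sum(sigma[(na, nb)]
                       for na in g1.neighbors(a)
                       for nb in g2.neighbors(b))
            new[(a, b)] = s + flow
        top = max(new.values())  # renormalize so values stay in [0, 1]
        sigma = {pair: value / top for pair, value in new.items()}
    return sigma

# Two toy "desmemic" graphs with labelled nodes.
g1 = nx.Graph()
g1.add_nodes_from([("s1", {"label": "stricture"}), ("f1", {"label": "foundation"})])
g1.add_edge("s1", "f1")
g2 = nx.Graph()
g2.add_nodes_from([("s2", {"label": "stricture"}), ("f2", {"label": "foundation"})])
g2.add_edge("s2", "f2")

scores = similarity_flooding(g1, g2)
print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])  # most similar node pairs
```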

The desmemic graph similarity metrics could be used to assess longstanding issues related to the relative autonomy of morphology vis-à-vis syntactic structure in the face of “boundary elements”. Does the typological prevalence of boundary cases (elements or constructions that mix morphological with syntactic properties) statistically swamp the morphology-syntax distinction cross-linguistically, or is the distinction motivated even in the face of some intermediate cases? I think that Good’s analyses suggest that cluster validation techniques over desmemic graph (dis)similarity matrices could provide insight into this issue. The novel methodology developed in Good’s book thus provides hope that linguists could tackle a theoretical question that was previously considered methodologically intractable (Haspelmath 2011; Tallman 2020).
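
One way such a cluster-validation step might look is sketched below: given a desmeme-by-desmeme dissimilarity matrix and a pre-assigned domain label for each desmeme, an index such as the silhouette score measures whether the morphological/syntactic split is actually reflected in the distances. The matrix and labels are invented placeholders, and the silhouette score is only one of several validation indices that could be used for this purpose.

```python
# Does a pre-assigned morphological/syntactic split match the measured distances?
import numpy as np
from sklearn.metrics import silhouette_score

dissim = np.array([   # 4 hypothetical desmemes, pairwise dissimilarities
    [0.0, 0.2, 0.7, 0.8],
    [0.2, 0.0, 0.6, 0.7],
    [0.7, 0.6, 0.0, 0.3],
    [0.8, 0.7, 0.3, 0.0],
])
labels = ["morphological", "morphological", "syntactic", "syntactic"]

score = silhouette_score(dissim, labels, metric="precomputed")
print(score)  # near 1: the split is well supported; near 0 or below: it is not
```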

Good’s illustration of feature-level desmeme analysis using similarity flooding and feature interpredictability also provides a fascinating new way of reconceptualizing typological issues. The methods could be used to investigate the relative (in)stability of linguistic structures through time in a way that does not abstract linguistic properties so far away from their language-internal structural context. This would allow linguists to test structure-based theories in a more rigorous fashion without relying so heavily on noisy and chimeric structural notions such as “X0”, “clitic”, “affix”, “auxiliary” or “phrase/XP”, whose definitions vary from author to author.

In Chapter 5, Good provides a discussion of the steps needed to develop the desmemic database further and of the research questions that could guide such a project. He envisions studies that focus on the patterns of linear stipulation found in particular functional domains or, with a smaller sample of languages, studies that construct a set of desmemic linear grammars of some well-described languages, so that templaticity can be investigated across grammatical domains. I find the latter idea particularly intriguing. The descriptive linguist is often forced to vacillate awkwardly and non-committally between structural notions that are posited because they seem to have a language-internal motivation, on the one hand, and structural notions that are posited for expositional reasons, on the other. Some structural concepts might be necessary to describe a language at all and others might be adopted because they serve as a scaffold to organize the description (Tallman 2021c). But it is sometimes unclear which of these the linguist means to adopt. I think a rigorous quantitative methodology that cuts across different domains of grammar, such as Good’s, could serve as a powerful tool for the analysis of language-particular structures.

Good also discusses the issue of coding consistency. Developing the type of research projects envisioned by Good would involve substantial collaboration between a number of researchers and would thus run the risk of inconsistencies in coding. As I have mentioned above, some of the ways that language facts were fit into desmemic structures seemed arbitrary to me, even if I have no reason to doubt the internal consistency of Good’s own coding practices. A well-connected research group would have to mutually enforce coding consistency to avoid results that might emerge from idiosyncratic differences between researchers. In such a context, an interesting question is the extent to which the enforcement of such coding practices should be guided by particular research questions or whether more general principles can serve as a guide (see, for example, Tallman et al. in prep).

The book ends by discussing further (meta)theoretical issues such as whether templates are psychologically real and how they arise diachronically. Good reiterates some of the themes discussed earlier in the book. He implies that not enough methodological and empirical groundwork has been laid to develop high-level causal theories that account for linear stipulation.

The main criticism one could level against Good’s discussion is a lack of methodological consistency (see my comments about Good’s incorporation of Bickel et al.’s 2007 analysis above). The force of Good’s argument about the need to dispel linguistic chimerics in typology is somewhat dampened by the extent to which some such chimerics seem to be implicitly adopted in Good’s own analyses. For instance, as stated above, Good makes a distinction between morphosyntactic and syntactic templates, but the definitions are not mutually exclusive. They both make reference to morphosyntactic categories, except that the syntactic level refers to a phrase, and Good does not clarify how to distinguish a word from a phrase. The methodology might need a more rigorous way of classifying templates than relying on author- or language-specific proposals about where the boundary between words and phrases lies. To illustrate where reliance on author- or language-specific categories might run into problems, we can consider Good’s analysis of Meskwaki clauses (pp. 203–204). Good provides an analysis of the Meskwaki clause that treats the verb as a single element in the template, as in (2).

(2)

However, a well-known property of some Algonquian languages is that the so-called ‘verbal word’ can be interrupted by full noun phrases (Dahlstrom 2000; Russell 1999). An example of such a construction from Meskwaki is provided in (3) below. The problem with modeling Meskwaki with the template displayed in (2) is that XPs (e.g. ke-taˑnes-a ‘your daughter’) can also interrupt the constituent V.

(3)
ne-pyeˑči ke-taˑnes-a waˑpam-aˑ-pena
1-come- 2-daughter-sg -look.at+direct-1pl
‘We have come to see your daughter’ (Dahlstrom 2000: 80)

It is not clear, therefore, why the desmeme is not modeled over the terminal elements of the structure in (4) (note that the constituent structure proposed there is for expositional purposes only).

(4)

This problem highlights a more general issue with the coding schema developed by Good. A distinction is made between morphosyntactic and syntactic templates, but there are clearly intermediate situations identified in the descriptive literature. Furthermore, from a more general typological perspective it is not clear that a distinction between morphosyntactic and syntactic structures can be made consistently (Tallman 2020). Perhaps a solution to this problem would be to ground the desmeme classification in terms of domains defined by specific wordhood or constituency tests. Rather than referring to morphosyntactic versus syntactic templates, we could simply refer to a domain of contiguity (Bickel and Zúñiga 2017), or noninterruptability domain (Tallman 2021b), for example.

Furthermore, Good makes use of some chimerical notions borrowed from the prosodic hierarchy which conflate logically distinct properties. The distinction between “prosodic word” and “phonological phrase” merges two distinct properties: (i) a domain around which phonological properties are supposed to cluster (but see Bickel et al. 2009); and (ii) a domain which is structurally close in some sense to either a morphosyntactic word or a syntactic phrase. It would be more consistent with Good’s approach to code separately for phonological observables (e.g. stress, tone, nasal harmony etc.) and for linguistic level (‘word’, ‘phrase’) (see Tallman 2021b, and especially Tallman 2020 for a critique of the notion of the prosodic word as a comparative concept). In many languages it is simply not clear whether a given phonological domain ought to be treated at the level of a prosodic word or a phonological phrase. Consider South Bolivian Quechua (sout2991), for instance. The language has a predictable pitch-accent assignment rule that interacts with rules of suffix deletion (Camacho-Rios and Tallman forthcoming).

(5)
(kuntan)LH* (t’iqpa-rpa-ysi-lla-sa-yki)LH*
(soon) (peel-suddenly-assist-only-2.Obj-1.sg)
‘I will soon help you to peel out (the dry corn) …’ (Camacho-Rios and Tallman forthcoming)

However, given that there are no productive phonological processes that occur below the pitch accent assignment domain (Camacho-Rios and Tallman forthcoming), and that there is currently no consensus about whether the orthographic word should be treated as a ‘word’ or a larger constituent (compare Muysken 1981 and Weber 1983), it is not clear whether the pitch-accent domain should be considered a phonological word or a phonological phrase.

I think this problem could be adjusted for by coding for p(honological)-domains in the manner of Schiering et al. (2010) (see also the papers in Tallman et al. in prep for a similar methodology that abstracts away from the distinction between prosodic words and phonological phrases).

Despite the tentativeness and hedging, Good has produced a piece of scholarship with potentially revolutionary implications. He has expanded our methods of typological research by synthesizing multivariate typology with some of the more salvageable aspects of the generative research tradition. In this vein, I think Good’s introductory statements about his study being primarily about hypothesis raising rather than hypothesis testing somewhat undersell the importance of the work. In typical HPSG studies (as in all generative studies), a formal model back-fit to describe data from a single language need not be applicable to the next. The open-endedness of the variables of generative models dampens or renders completely obsolete the hypothesis-testing function of such models. In the context of research on linear stipulation, I find it infelicitous to contrast hypothesis testing with hypothesis raising, if the former is supposed to stand in for the generativist work that Good reviews in the first chapter. Such work has shown itself to be incapable of developing testable hypotheses precisely because the generative program has not developed a description language that does not prejudge one or the other theory to be true (Tallman 2021a). The research programs proposed by Good have a real chance of overcoming this impasse if they can be carried through.


Corresponding author: Adam J. R. Tallman [æɾm̩ dʒemz ɹas talmn̩], Friedrich Schiller Universität, Jena, Germany; and Max Planck Institut für Evolutionäre Anthropologie, Leipzig, Germany, E-mail:

References

Bickel, Balthasar. 2010. Capturing particulars and universals in clause-linkage. In Isabelle Bril (ed.), Clause linking and clause hierarchy, 51–104. Amsterdam: John Benjamins. https://doi.org/10.1075/slcs.121.03bic.

Bickel, Balthasar, Goma Banjade, Martin Gaenszle, Elena Lieven, Netra Prasad Paudyal, Ichchha Purna Rai, Manoj Rai, Novel Kishore Rai & Sabine Stoll. 2007. Free prefix ordering in Chintang. Language 83(1). 43–73. https://doi.org/10.1353/lan.2007.0002.

Bickel, Balthasar, Kristine A. Hildebrandt & René Schiering. 2009. The distribution of phonological word domains: A probabilistic typology. In Janet Grijzenhout & Barış Kabak (eds.), Phonological domains: Universals and deviations, 47–75. Berlin: De Gruyter Mouton. https://doi.org/10.1515/9783110219234.1.47.

Bickel, Balthasar & Johanna Nichols. 2002. Autotypologizing databases and their use in fieldwork. In Peter Austin, Helen Dry & Peter Wittenburg (eds.), Proceedings of the International LREC Workshop on Resources and Tools in Field Linguistics, Las Palmas, 26–27 May 2002. Nijmegen: ISLE and DOBES.

Bickel, Balthasar & Fernando Zúñiga. 2017. The ‘word’ in polysynthetic languages: Phonological and syntactic challenges. In Michael Fortescue, Marianne Mithun & Nicholas Evans (eds.), The Oxford handbook of polysynthesis, 158–185. Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199683208.013.52.

Camacho-Rios, Gladys & Adam J. R. Tallman. Forthcoming. Word structure and constituency in Uma Piwra South Bolivian Quechua. In Adam J. R. Tallman, Sandra Auderset & Hiroto Uchihara (eds.), Constituency and convergence in the Americas. Berlin: Language Science Press.

Dahlstrom, Amy. 2000. Morphosyntactic mismatches in Algonquian: Affixal predicates and discontinuous verbs. In Arika Okrent & John P. Boyle (eds.), Proceedings from the Panels of the 36th Meeting of the Chicago Linguistic Society, 63–87. Chicago: Chicago Linguistic Society.

Haspelmath, Martin. 2011. The indeterminacy of word segmentation and the nature of morphology and syntax. Folia Linguistica 45(1). 31–80. https://doi.org/10.1515/flin.2011.002.

Inkelas, Sharon. 1993. Nimboran position class morphology. Natural Language and Linguistic Theory 11. 559–624. https://doi.org/10.1007/bf00993014.

Kolaczyk, Eric D. & Gábor Csárdi. 2020. Statistical analysis of network data with R, 2nd edn. Boston: Springer. https://doi.org/10.1007/978-3-030-44129-6.

Melnik, Sergey, Hector Garcia-Molina & Erhard Rahm. 2002. Similarity flooding: A versatile graph matching algorithm and its application to schema matching. In Proceedings of the 18th International Conference on Data Engineering (ICDE 2002), San Jose, CA. https://doi.org/10.1109/ICDE.2002.994702.

Muysken, Pieter C. 1981. Quechua word structure. In Frank Heny (ed.), Binding and filtering. Cambridge, MA: MIT Press.

Russell, Kevin. 1999. The ‘word’ in two polysynthetic languages. In T. Alan Hall & Ursula Kleinhenz (eds.), Studies on the phonological word, 203–222. Amsterdam: John Benjamins. https://doi.org/10.1075/cilt.174.08rus.

Schiering, René, Balthasar Bickel & Kristine A. Hildebrandt. 2010. The prosodic word is not universal, but emergent. Journal of Linguistics 46(3). 657–709. https://doi.org/10.1017/s0022226710000216.

Tallman, Adam J. R. 2020. Beyond grammatical and phonological words. Language and Linguistics Compass 14(2). e12364. https://doi.org/10.1111/lnc3.12364.

Tallman, Adam J. R. 2021a. Analysis and falsifiability in practice. Theoretical Linguistics 47(1–2). 95–112. https://doi.org/10.1515/tl-2021-2009.

Tallman, Adam J. R. 2021b. Constituency and coincidence. Studies in Language 45(2). 321–383. https://doi.org/10.1075/sl.19025.tal.

Tallman, Adam J. R. 2021c. Documentación y descripción lingüística. In Gladys Camacho-Rios & Gabriel Gallinate (eds.), Introducción a la lingüística en el contexto boliviano. Cochabamba: Linguistics Summer School Bolivia. Forthcoming.

Tallman, Adam J. R., Sandra Auderset & Hiroto Uchihara (eds.). In preparation. Constituency and convergence in the Americas. Berlin: Language Science Press.

Trudeau, Richard J. 1993. Introduction to graph theory. New York: Dover Publications.

Weber, David J. 1983. The relationship of morphology and syntax: Evidence from Quechua. In Work papers of the Summer Institute of Linguistics, University of North Dakota Session, vol. 27, 162–181. University of North Dakota. https://doi.org/10.31356/silwp.vol27.10.

Witzlack-Makarevich, Alena, Johanna Nichols, Kristine A. Hildebrandt, Taras Zakharko & Balthasar Bickel. 2022. Managing AUTOTYP data: Design principles and implementation. In Andrea L. Berez-Kroeker, Bradley McDonnell, Eve Koller & Lauren B. Collister (eds.), The open handbook of linguistic data management. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/12200.003.0061.

Published Online: 2022-03-11
Published in Print: 2022-10-26

© 2022 Adam J. R. Tallman, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
