
Indeterminacies and mismatches in grammatical systems

Volker Struckmeier and Andreas Pankau
Published/Copyright: January 22, 2021

Abstract

This introduction proposes to investigate mismatches and indeterminacies in languages much more thoroughly than has hitherto been done. Such seemingly unruly aspects of language(s), it is argued, are interesting since they may help shed light on the internal make-up of grammatical systems. The question of the internal make-up of grammar(s), it is argued, cannot be addressed by the normal modus operandi of linguistic research, which is to find matches (rather than mismatches) between the observable (sound and meaning) interface systems, and to find how the interface representations map onto each other deterministically: It is only in the “lo-fi” aspects of mappings that the internal mechanisms of the overall grammatical architecture may reveal themselves.

The introduction also points out that our concern is independent of the various theoretical orientations linguists may choose for their work, since the problem presents itself in all approaches to language research currently available, it seems – if in slightly different ways.

We propose, in sum, that mismatches and indeterminacies are an extremely worthwhile field for future linguistic research, and one that should be on the agenda (or minimally, within the field of view) for linguists of all theoretical convictions.

1 Fundamental questions this volume seeks to address

This installment of ZS Online concerns itself with the architecture of grammatical systems. Grammatical systems are composed of subcomponents in different ways in different grammatical theories. Many descriptions subdivide phenomena into phonological, morphological and syntactic ones, and use different subcomponents of the grammar to represent and explain each of these types of phenomena, respectively. Subcomponents of this type are not only kept apart by naming conventions, however: Different computational atoms, rules or operations, and (consequently) structures are employed by the different subcomponents, supplying the distinctions with actual theoretical substance.

Sometimes, distinctions proposed in the literature are abandoned by newer proposals, for example when morphology is considered entirely syntactic, or at least so similar to syntax as to warrant the assumption that there is no separate morphological component after all (as in Chomsky 1965). In other cases, the classic subdivisions are not only retained, but carried further. In some theories, certain grammatical subcomponents are themselves deconstructed into further parts. In Minimalist frameworks, syntax is divided up into core syntax and a separate branch of the grammatical architecture taking care of the mapping of core syntactic representations to PF (Chomsky 2007, 2008; Chomsky et al. 2019, a. o.). In yet other cases, subdivisions are created that run orthogonally to the classic ones. Inflectional morphology is considered a part of syntax, but word formation is considered a separate component (as part of a computational conception of the lexicon, cf. Chomsky 1970; Anderson 1982). Other theories assume that morphology as a whole is handled by the lexicon (Jackendoff 1972; Lapointe 1980; Selkirk 1982; Di Sciullo and Williams 1987). Yet other approaches completely deconstruct notions like lexicon and morphology into multiple different architectural subcomponents (Halle and Marantz 1993).

In this volume, we do not want to revive discussions as to which of these approaches are right, wrong, or better than others. Rather, we want to pose some questions which follow, we believe, more or less directly from any division of labor in grammars, i. e. questions that apply to all grammars that are decomposable systems at all.[1] As soon as any subcomponents are assumed, at least two families of questions come up.

I) The first question we can ask is how often we find cases where subcomponents that are expected to regulate their assigned aspects of structure building in fact shirk their duties, as it were, i. e. leave some pertinent phenomena from their domain unregulated, or leave aspects of regulated phenomena indeterminate, upon closer empirical scrutiny of the subcomponents’ actual respective outputs.

The terms unregulated and indeterminate are understood in a specific way in this introduction. A subcomponent is assigned a phenomenon to handle if the theory postulating that subcomponent explicitly describes phenomena of that type as being handled by it. For example, nobody will be surprised to hear that morphological subcomponents are often assigned tasks relating to word-internal structure composition, where the output structures are semantically meaningful. This sets the morphological subcomponents of many grammar models apart from syntax, whose structure-building describes output structures larger than words, but also from phonology, which describes structures that are not themselves meaning-bearing. Given these assignments of duties for the grammatical subcomponents, we are in a position (vis-à-vis each grammatical model that specifies such duties) to identify unexpected behaviors of subcomponents, where subcomponents do not carry out tasks that are, by assumption, expected of them. An unexpected dereliction of duty would therefore be for a morphological component to leave prima facie word-internal aspects of form open, or to fail to address the interpretive effects of such word-internal structural aspects. Similarly, syntactic theories that leave open questions of word order that seem genuinely syntactic in nature (or fail to address compositional/structural interpretations that would seem to follow) are, at least in part, neglecting tasks assigned to them.

Such failures can come about in different ways. A subcomponent of a grammar could leave certain pertinent aspects from its domain unregulated. By this term, we mean that the relevant subcomponent does not even attempt to specify regulations that would constrain a phenomenon from its intended domain of application. Let us try and imagine what unregulated phenomena could look like. If grammars involve subcomponents, it follows trivially that these subcomponents never consider linguistic phenomena as within their domain which would involve categories that are not defined in relation to the domain itself. For example, syntax may be concerned with the structure of sentences (amongst other things), but we certainly do not expect syntactic systems to regulate whether a sentence starts with an obstruent, since the category “obstruent” would have to be considered phonological by definition. Also, we do not expect syntactic theories to impose constraints on semantic interpretations such as “This clause is always false when uttered by a high priest of the Great Juju at the bottom of the sea”. The truth-functional evaluation of sentences and questions of speaker choice are simply not syntactic pursuits. These cases we take to be uncontroversial, in that we know of no grammatical theory that would disagree with such assessments. However, as soon as we stray from such clear cases of assignments of duties, we feel that we are actually relying, as a discipline, on more or less intuitive assessments. For example, the question of which aspects syntax can regulate has recently proven hard to answer. The actual speaker is, to our knowledge, never represented in any grammar, but the social distance between the interlocutors can be represented. This is what we find in languages with honorifics (Hasegawa 2006): the form of a verb or of a personal pronoun varies depending on the social distance between the speaker and (typically) the addressee. Similarly, speaker and addressee seem to play a fundamental role in syntax proper when we take phenomena like allocutive agreement into consideration (Antonov 2015; Miyagawa 2017; Oyharçabal 1993): in allocutive agreement, the addressee is encoded on the verb by a specific morpheme, similar to the subject. It is therefore no trivial task to determine, at the outset, the scope of the grammar as a whole, nor that of each specific subcomponent. In this volume, therefore, we try to address questions such as: What should be regulated by syntax? What is clearly a task for morphology? What would a compositional analysis of a meaning-bearing structure have to involve? Looking at the different ways that grammatical tasks have been carved up by different theories, as outlined above, we find that this question in fact becomes only more meaningful: Which aspects of structures have been proposed to be left unregulated by some postulated subcomponent of proposed grammars?

Some phenomena are not left completely unregulated by a grammatical subcomponent, but neither are they regulated to the greatest extent imaginable. Thus, the subcomponent in question issues some constraints with regard to the phenomenon, but it does so with poor detail resolution. The constraints do not suffice to regulate all the empirical distinctions that can be made out from the language data the grammar is designed to describe. For example, core syntax in current Chomskyan generative grammar does not address word order to the greatest extent possible. On the contrary, the operation Merge (currently the only structure-building operation standardly assumed to exist in core syntax) only generates unordered sets of syntactic objects (lexical items and larger structural objects). The set structures formed by Merge impose only some ordering constraints, which do not come close to precise predictions regarding actual word orders. Suppose that the structure-building operation Merge merges two syntactically atomic elements, X and Y, as in (1).

(1)

Merge (X, Y) → {X, Y}

It is important to point out that this operation is not a phrase-structure rule, and does not generate a phrase per se. Rather, Merge simply states that a syntactic object could exist that is composed of two component objects, X and Y in our example. However, no linear ordering is imposed by Merge. In other words, the set {X, Y} conforms to both the order X Y and the order Y X. As we see, then, all conceivable orders of two elements are allowed, and no ordering restriction at all is imposed at this step of a derivation. Ordering restrictions only come about when the recursive aspect of Merge is called upon, i. e., the ability of the operation to merge complex objects, the output of Merge, as the input to (other instances of) Merge.[2] Witness now, in (2), how such Merge instances play out.

(2)

Merge ({X, Y}, Z) → {{X, Y}, Z}

Upon merging a third element, Z, as in (2), Merge begins to show its potential to issue ordering restrictions. By common assumption, the set {{X, Y}, Z} does not conform to all six ordering options (3! = 3 × 2 × 1 = 6), but only to four orderings. The third element, Z, cannot intervene between X and Y, as shown in (3).

(3)

{{X, Y}, Z} conforms to:

a. X Y Z
b. Y X Z
c. Z X Y
d. Z Y X
e. *X Z Y (Z may not intervene between X and Y)
f. *Y Z X (Z may not intervene between X and Y)

To put it simply, upon every instance of Merge, the newly merged syntactic object may come to be linearized before or after the syntactic object generated by previous instances of Merge. This leads to a sizeable reduction of options when the number of such merged objects increases. Suppose instances of Merge have created a complex object such as {{{{{X, Y}, Z}, A}, B}, C}. Upon merging yet another object, O, again only two ordering options are admitted: for O to precede, or to follow, the complex object. As we see, then, every instance of Merge doubles the number of potential orders of the merged objects. Seven objects, assembled by six applications of Merge, could therefore be arranged in at most 2⁶ = 64 orders. In an unconstrained system, however, upon every instance of Merge the number of orders would multiply by the total number of merged objects. In the example just given, six merged objects (X, Y, Z, A, B, C) can form 6! linear orders. Adding a seventh element (O), this set of orders must be multiplied by 7 (since 7! = 6! × 7), yielding 7! = 5040 orders. Clearly, then, Merge (and ipso facto, core syntax) issues some ordering restrictions. Just as clearly, however, there are still many potential word orders associated with the output of core syntax. In this way, core syntax does not leave word order unregulated, but it does leave it indeterminate, i. e. it does not issue word order restrictions so strict and deterministic that they would result in the prediction of just a single word order, or some reasonably small set of word orders.
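To make these combinatorics concrete, here is a minimal sketch (our own illustration, not part of any of the proposals discussed) that enumerates the linearizations compatible with a left-nested Merge structure over seven atoms and compares the count with the unconstrained permutation count:

```python
from itertools import permutations

def merge(a, b):
    """Merge forms an unordered set; we model it as a frozenset of two objects."""
    return frozenset([a, b])

def linearizations(obj):
    """All linear orders compatible with a Merge structure: each sister may be
    linearized before or after the other, but no element may break a sister up."""
    if isinstance(obj, frozenset):
        a, b = tuple(obj)
        for x in linearizations(a):
            for y in linearizations(b):
                yield x + y   # first sister precedes second
                yield y + x   # second sister precedes first
    else:
        yield [obj]

# {{{{{{X, Y}, Z}, A}, B}, C}, O}: seven atoms, six applications of Merge.
struct = merge(merge(merge(merge(merge(merge("X", "Y"), "Z"), "A"), "B"), "C"), "O")
admitted = {tuple(order) for order in linearizations(struct)}
print(len(admitted))                       # 2**6 = 64 admissible orders
print(len(set(permutations("XYZABCO"))))   # 7! = 5040 unconstrained orders
```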

Therefore, many more detailed word order properties are not constrained deterministically by core syntax. In order to map the sets of (sets of...) structural objects onto linearly ordered descriptions of sentence forms, two additional operations have to be carried out (minimally). Firstly, the linearly unordered sets of elements have to be linearized. Assume that the operation Merge has been applied to two syntactic atoms, eat and cake, in the way shown in (4).

(4)

Merge (eat, cake) → {eat, cake}

This set of two lexical items must then be linearized at the relevant stage of the derivation, i. e. the unordered set must be mapped onto a linear order, as in (5).

(5)

{eat, cake} → eat cake

This latter operation, however, is not standardly assumed to be syntactic in nature (cf. Chomsky et al. 2019 for a recent assessment along these lines), but is relegated solely to an extra-syntactic linearization mechanism in the mapping to PF, illustrated with German in (6).

(6)

a. Merge (essen, Kuchen) → {essen, Kuchen} (= core syntactic representation)
b. {essen, Kuchen} → Kuchen essen (= mapping to PF)

In this way, then, the different VP word orders (OV for German, VO for English) are derived such that the similar (or identical) meaning constitution is handled by the same core syntactic operation: To merge a predicate such as eat/essen with an argument such as cake/Kuchen is to assign the argument the internal argument role of the predicate. Handling this assignment in this (order-independent) way seems attractive if one assumes (as is now also standardly done) that core syntax caters very directly to the requirements of the semantic interface. The identical (structural) meaning of the two VPs is brought about by the same mechanism, so that cake/Kuchen wind up as the patient arguments of eat/essen, respectively. Core syntax in this model is thus not at all concerned with properties of externalization, i. e. the operations required to derive the form side of a structure, word order being one property of the form side. In the example just described, the differences between the structures are lexical – the words used – and (for lack of a better word) “PFey”. Linearization effects on word order are not represented syntax-internally, according to this model. Rather, they are handled by a separate subcomponent of the grammar, which maps the output of core syntax onto linearly specified sequences. As we can see, then, the syntactic subcomponent of current Chomskyan generative grammar leaves word order massively underspecified – a state of affairs we want to represent by saying that word order properties are partially indeterminate within core syntax.
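The division of labor just described can be rendered as a toy sketch (our own illustration; the head-directionality rule is a deliberate simplification, not a claim about the actual PF mapping): core syntax delivers the same unordered set for both languages, and a language-particular linearization step at PF orders it.

```python
# Core syntax: the same unordered VP set for English and German (toy encoding).
vp = frozenset(["V:eat/essen", "N:cake/Kuchen"])

def linearize(syntactic_object, head_final):
    """A crude PF mapping: the verb follows its complement in head-final
    languages (German OV) and precedes it otherwise (English VO)."""
    verb = next(x for x in syntactic_object if x.startswith("V:"))
    comp = next(x for x in syntactic_object if x.startswith("N:"))
    return [comp, verb] if head_final else [verb, comp]

print(linearize(vp, head_final=False))  # English: ['V:eat/essen', 'N:cake/Kuchen']
print(linearize(vp, head_final=True))   # German:  ['N:cake/Kuchen', 'V:eat/essen']
```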

In a similar vein, elements that move in a sentence are now taken to be merged in more than one copy. Rather than literally moving an element as such, we now find multiple copies of the element in question. Suppose Merge has constructed some set of (sets of...) structural objects, and suppose further that the linearization of this set-theoretic object has already been taken care of. We then arrive at a representation such as (7).[3]

(7)

[Peter [does [not [Peter [v [eat cake]]]]]]

This linearized structure is then in need of yet another operation that is relevant for the determination of word order: The lexical item Peter appears in two copies. An algorithm called copy spellout has to decide which of these copies will be phonologically represented, since multiple spellouts of copies are not the norm (despite the fact that they have been argued to exist in some cases, in some languages). Only a linearized structure that has taken care of the spellout of copies of elements is supposed to be representative of the relevant word order the language displays. For English, the spellout algorithm chooses the structurally higher copy for spellout, as shown in (8).

(8)

[Peter [does [not [Peter [v [eat cake]]]]]] → [Peter [does [not [⟨Peter⟩ [v [eat cake]]]]]] (the copy in angle brackets is not designated for spellout)

This linearized structure with copies designated for spellout can now be read as a representation of the linear word order of Peter does not eat cake. Again, the spellout algorithm removes a task considered syntactic in many grammars (including previous versions of Chomskyan generative syntax) from the realm of core syntax. Since the spellout algorithm in modern Chomskyan generative syntax is not assumed to be syntactic, core syntax is again indeterminate. It supplies the copies that the spellout algorithm can decide over (restricting choices for word orders), but leaves the details of the decision unspecified otherwise (does not make spellout choices itself).
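A minimal sketch of such a spellout step (again our own illustration; actual proposals differ in their details) might look as follows: the linearized structure is scanned from the structurally highest position down, and only the first copy of each item is pronounced.

```python
# A linearized structure with copies, as flat (position, item) pairs;
# lower position numbers = structurally higher (toy encoding).
structure = [(0, "Peter"), (1, "does"), (2, "not"), (3, "Peter"),
             (4, "v"), (5, "eat"), (6, "cake")]

SILENT = {"v"}  # the light verb head is not pronounced anyway

def spell_out(struct):
    """Pronounce only the structurally highest copy of each item (English-style);
    other languages have been argued to spell out lower or even multiple copies."""
    seen, output = set(), []
    for _, item in sorted(struct):
        if item in seen:
            continue              # a lower copy: deleted at PF
        seen.add(item)
        if item not in SILENT:
            output.append(item)
    return " ".join(output)

print(spell_out(structure))       # Peter does not eat cake
```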

In sum, we see that word order is, in fact, only very partially restricted by core syntax in these approaches, and therefore word order qualifies as a property that is left indeterminate by standard current Chomskyan generative proposals: partially specified, but with a (very) limited resolution of word order detail.

II) Given these definitions of unregulated and indeterminate aspects of grammatical subcomponents, we can now turn to the second set of questions about the division of labour among the various grammatical subcomponents. First, are we in fact in a position to know which aspects of structure-building are unregulated by or indeterminate in specific subcomponents of our grammars?[4] The papers presented in this volume suggest (in various ways) that subcomponents may delegate tasks differently than the original assignment of duties would lead us to expect. We therefore submit that unregulated and indeterminate aspects of structure building may constitute very valuable data for finding out how the internal makeup of grammatical architectures can (or cannot) be construed, since that internal makeup is hard to observe otherwise. While speakers can intuit (sentence-level, compositional) meanings relatively precisely, and observe word and morpheme orders directly, we still have no way of looking into grammatical systems’ internal makeups. We only observe the interface representations of the system, never its internal representations or mechanisms.

Second, what happens when aspects of linguistic structures are left less than fully determined in one of these ways? We might wonder whether an indeterminacy left open by some subcomponent can be, or even always is, taken up by another subcomponent – and some of the articles collected in this volume argue just this. If the indeterminacy is taken up by some component, then the question arises whether this component is a subcomponent of the grammar or some grammar-external component interacting with it. The first position boils down to the claim that the blanks are filled by some other grammatical subsystem, whereas the second view suggests that aspects of language use, such as frequency, might kick in. In recent years, a trend has emerged that combines both views, arguing that grammatical subcomponents incorporate information about frequency. In OT, for example, constraints can be weighted, so that the ranking between two constraints A and B is not strictly a matter of A outweighing B, but of A outweighing B in, say, 70 % of evaluations. Whether this blurring of the division between language and language use is initially attractive or ultimately correct is of no real concern here. What matters is that this approach has implications for what counts as a grammatical subcomponent and how the interactions between subcomponents can be modeled.
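One way to cash out such weighted rankings is a stochastic evaluation procedure (a sketch under our own assumptions, loosely modeled on Stochastic OT; the ranking values and noise level are invented for illustration): each constraint carries a ranking value, noise is added at every evaluation, and the probability that A outranks B follows from the distance between their values.

```python
import random

RANKING = {"A": 101.5, "B": 100.0}   # hypothetical ranking values
NOISE = 2.0                          # evaluation noise (standard deviation)

def a_outranks_b():
    """On each evaluation, perturb both ranking values and compare."""
    a = random.gauss(RANKING["A"], NOISE)
    b = random.gauss(RANKING["B"], NOISE)
    return a > b

trials = 100_000
rate = sum(a_outranks_b() for _ in range(trials)) / trials
print(f"A outranks B in {rate:.0%} of evaluations")  # roughly 70 % with these values
```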

Third, we can ask further consequent questions. Are we supposed to believe that the subcomponent that takes up the slack is a system that directly interfaces with the subcomponent introducing the slack? Or can we assume that subcomponents only indirectly related to each other (i. e., which handle incommensurable representations via completely different operations) can also exploit options left unconstrained by the other? How loosely or strictly do grammatical subcomponents interact in this regard? Conversely, do we find cases where subcomponents of the grammar do not take up the slack for each other after all, but rather introduce actual mismatches between different linguistic levels? Mismatches can be defined as cases where there are distinctions between two (or more) structures on one level of description which are, however, obscured on other levels (or, minimally, on one other level).

Mismatches between different levels of representation have been argued for in descriptions of many languages, and some authors present in this volume argue for mismatches of this kind, too. To give a simple example of what mismatches may look like, take the interpretation of quantifier scope across languages. It could be argued that the operation of quantifier raising in English is simply a technical representation of a mismatch between sentence-level semantics and surface forms in English. Two very clearly distinct semantic readings of the sentence in the following example can be given [cf. (9-b) and (9-c)], and the two differ in truth value relative to situations. However, these two semantic representations are still expressed indeterminately by a single form, as in (9-a):

(9)

a. Sentence form: Every man loves a woman.
b. Meaning 1: For every man, there is one woman such that that man loves that woman.
c. Meaning 2: There is one woman, such that every man loves that woman.
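In standard predicate-logic notation (our addition, for explicitness), the two readings differ only in the relative scope of the two quantifiers:

Meaning 1 (9-b): ∀x (man(x) → ∃y (woman(y) ∧ love(x, y)))
Meaning 2 (9-c): ∃y (woman(y) ∧ ∀x (man(x) → love(x, y)))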

Current Chomskyan generative models do not assume quantifier raising as a designated syntactic operation. Rather, the combination of (internal) Merge of quantified phrases and the spellout algorithm of English results in a mismatch between two clearly distinguishable interpretations, (10-b) and (10-c), and one surface form (10-a).

(10)

a. Every man loves a woman.
b. [TP every man [vP ⟨every man⟩ loves a woman]] [cf. (9-b)]
c. [⟨a woman⟩ ... [TP every man [vP ⟨every man⟩ loves a woman]]] [cf. (9-c)]

(copies in angle brackets are, again, not designated for spellout)

We have chosen the examples so far from a single family of grammar frameworks, namely different versions of Chomskyan generative grammar. The impression could arise that the matters this volume discusses are only relevant for, or formulable in, or even caused by, Chomskyan generative approaches. This, however, is not what we are trying to say – and in fact, it is not true in the first place. We cannot discuss all grammatical theories ever proposed here, of course, but we can quickly demonstrate that different grammar frameworks warrant essentially similar questions, since they grapple with comparable issues. Consider a typical non-Chomskyan, non-derivational framework, such as Head-Driven Phrase Structure Grammar (HPSG).

There are two fundamental differences between Chomskyan frameworks and HPSG. First, HPSG’s conception of grammar is not that of a device that successively builds up a syntactic structure, but one where the grammar defines properties that syntactic structures need to satisfy. In other words, whereas a syntactic structure is generated in Chomskyan frameworks, the syntactic structure is given in HPSG, and the grammar inspects whether it satisfies all relevant properties. Let us illustrate this with a specific example from German. As is well known, German has verb-second declarative main clauses. Under a Chomskyan perspective, this requirement follows from an independent constraint in the grammar of German that the specifier position of the highest clausal projection be filled by some phrase (maybe for EPP or edge-feature reasons). For HPSG, however, the grammar of German contains a constraint requiring the relevant main clause type to have verb-second order. Any incoming structure violating this constraint will be rejected by the grammar. Second, HPSG has a radically different conception of “syntactic structure” to begin with. In many versions of Chomskyan grammar, syntactic structures are (some version of) phrase-structure trees, typically a succession of phrase-structure trees, or equivalents thereof (e. g. derivation trees). HPSG, however, does not employ phrase-structure trees but uses typed feature structures, that is, attribute-value matrices.

Although the two frameworks thus have radically different conceptions of grammar and of the type of structures they describe, our questions re-emerge in HPSG just as in any Chomskyan framework. Recall the above-mentioned constraint requiring verb-second order in German declarative main clauses. Obviously, this syntactic constraint interacts with constraints from other subcomponents of the grammar: the preverbal XP has to be morphologically well formed (for example in terms of case marking and agreement), and the words the XP is composed of have to be phonologically well formed as well. But there seems to be no component that regulates which XP appears in the preverbal position. It can be an argument or an adjunct of the main clause or of some embedded clause (if present). The syntactic component does not seem to care, and leaves room for a multitude of options. The question arises which components (if any) regulate which XP appears preverbally. Is there some meta-constraint at work, requiring the closest possible XP to be positioned in the CP? Are there discourse/information-structural restrictions (topic/focus structure, focus/background structure)? If so, are such factors subcomponents of the grammar or not? Can extralinguistic factors regulate which XP appears pre-verbally?
The answers to all these questions have implications for the architecture of grammar quite generally. More importantly, we find that the questions we address in this volume arise in more than one framework. We see, then, that there is nothing specific about Chomskyan architectures that would make them susceptible (or even only more susceptible) to the questions we have outlined above.
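To make the constraint-based (licensing) perspective from the HPSG discussion concrete, here is a minimal sketch (our own toy rendering, not actual HPSG machinery; the attribute names are invented): the grammar is a set of constraints over attribute-value structures, with the verb-second requirement as one such constraint – and, tellingly, nothing in it regulates which XP occupies the preverbal slot.

```python
# A "sign" as a bare-bones attribute-value structure (toy encoding).
clause = {
    "CLAUSE-TYPE": "decl-main",
    "DAUGHTERS": ["XP:heute", "V:kommt", "NP:Fritz"],  # 'heute kommt Fritz'
}

def verb_second(sign):
    """German declarative main clauses must have the finite verb in second position."""
    if sign["CLAUSE-TYPE"] != "decl-main":
        return True                       # constraint does not apply to other clause types
    daughters = sign["DAUGHTERS"]
    return len(daughters) >= 2 and daughters[1].startswith("V:")

GRAMMAR = [verb_second]                   # ... plus morphological, phonological constraints

def licensed(sign):
    """The grammar does not build the structure; it checks a given one."""
    return all(constraint(sign) for constraint in GRAMMAR)

print(licensed(clause))                   # True – but note: no constraint here
                                          # regulates WHICH XP appears preverbally
```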

Still, readers might point out, Chomskyan generative models and HPSG still share many assumptions. Maybe other types of grammars do not relate to our questions in the same way after all? We do not believe this to be the case. To counter this suspicion, consider grammars of the functional(ist) type. These come in different versions, of course (just as formal(ist) grammars do), but it can still be shown that a functionalist outlook on language does not take away from our topic.

Functionally oriented grammars can be distinguished by the status they assign to the formal description of language. Van Valin (2001), for example, assumes that there exist conservative functional grammars, which basically try to add a concern for communicative functions onto grammars that use the same formal means and mechanisms as formal(ist) grammars to characterize the forms of languages. These formal mechanisms are then simply taken to express the communicative functions that have been added to the picture. It stands to reason that conservative functional grammars will show the same fault lines between grammatical mechanisms – if and when they employ the same subcomponents as the grammars we have discussed above. Functionalist grammars of the moderate type (Van Valin 2001: 149–150) try to replace the formalist representations of linguistic structures with alternative representations that explain the forms that serve to express communicative functions. One such theory, for example, is Dik’s Functional Grammar (Dik 1997). In this grammar, language structures are considered primarily semanto-pragmatic formulae, and expression rules then help map the semanto-pragmatically represented speaker intentions onto a form a speaker may wish to use to express that intention. In this way, then, language forms are assigned to the semanto-pragmatic functions they help to implement linguistically. Crucially, the existence of such formal expressions of functions is not denied (unlike in extreme functional theories, cf. below). Maybe it does not come as a surprise, then, that many of the issues we have pointed out above for formally oriented grammars and conservative functionally oriented grammars constitute problems for moderate functional grammars as well: For example, the notorious scope ambiguity of English is representative of the fact that this facet of sentence-level semantics is not easily expressed in many types of constructions, which nevertheless have to be taken as “expressions” of the two relevant readings, if only ambiguously so. The question arises, therefore, why there is a mismatch between scopal semantics and at least these kinds of expressions. Moderate functional grammars could expect that there is an expression rule that helps express semantic argument status. In many languages, word orders and/or case markings can be taken to constitute such expressions of argument status. Here, too, mismatches between semanto-pragmatic function and formal expression are easy to point out. In German, it is mainly the case-marking aspect of the form system that helps hearers identify which argument phrases carry which semantic roles, because word order in German is rather free, and often offers no reliable clue as to the roles of argument phrases. However, for many types of phrases, and for many case, number and gender specifications, case markings are ambiguous and cannot express semantic roles clearly, so that the semanto-pragmatic function is in fact obscured, as in (11):

(11)

Die Katzen sehen die Frauen.
the cats see the women
‘The cats see the women.’ or ‘The women see the cats.’

This example reflects the fact that case distinctions are almost completely neutralized for verbal arguments that are plural in German.[5] For proper names, which never inflect for argument cases, this is the general state of affairs in the language:[6]

(12)

Fritz sieht Franz.
Fritz sees Franz
‘Fritz sees Franz.’ or ‘Franz sees Fritz.’

In languages like English, argument status is encoded mostly by word order, since the only overt case marking left is the genitive (which is not a verbal argument case). Arguments that come to precede finite verbal elements and negation are generally subject arguments. While this works fine for a majority of cases, there also exist dummy subjects. In formal terms, these occupy the preverbal position like subjects (in fact, they block it for other phrases), but they cannot reasonably be described as helping to express subject arguments semantically, while other phrases, now blocked from the subject position, could and should. In the following example (13-a), the semantic subject (the agent of running) would have to be a man, but the phrase does not appear in subject position, given the word order. In fact, given the presence of the expletive subject there, the semantically plausible subject, a man, cannot come to take up the subject position, cf. (13-b) and (13-c).

(13)

a. There is a man running around in the garden.
b. *a man there is running in the garden
c. *there a man is running in the garden[7]

Similarly, if verb agreement is taken to be functional (in that it helps hearers identify subjects), mismatches arise again when verbal agreement fails to point towards the phrases that are arguably the subjects of clauses, as in the following examples.[8]

(14)

a. There’s(sg?) [three men](pl) in the garden.

b. Ich friere. / Wir frieren. (verbs agree with nom. subject)
I.1sg.nom freeze.1sg / we.1pl.nom freeze.1pl
‘I am cold.’ / ‘We are cold.’

c. Mich friert. / Uns friert. (verbs do not agree with dat. subject)
me.1sg.dat freeze.3sg / us.1pl.dat freeze.3sg
‘I am cold.’ / ‘We are cold.’

Again, then, some semantic similarities are assigned different formal treatments, (14-b) and (14-c), or else formal distinctions typically used to signal functional differences may not be applied consistently across the board in all constructions of a language, (14-a). In cases such as (14-b) and (14-c), particularly, a semantic property that is readily expressible in the language (as with the nominative examples) can also receive a formal expression that fails to carry out the alleged expressive function (in the dative cases[9]). In sum, grammars that attempt to map semanto-pragmatic functions onto formal expressions face problems very similar to those of the formal(ist) models we discussed above: Some distinctions on some levels (say, argument roles in the examples just discussed) are not matched up with isomorphic distinctions on another level of representation (word order, case marking, and agreement, in our examples).

This leaves the type of functional theories that Van Valin (2001: 150) characterizes as “extreme”. In these kinds of theories, the existence of any type of structure-building operations of any generality is denied – for the simple reason that structures of any generality are denied in the first place (Hopper 1987). In these theories, consequently, even a notion like “compositional interpretation” is rejected, since there can be no formal means of expression that would systematically represent such meanings via linguistic forms. The only form-meaning pairings such extreme functionalist theories accept are idiosyncratic by definition, and may not even have any long-term stability: They are argued to be the subject of constant re-negotiations (of their meanings) in discourses (Hopper 1987). Now, it may seem as though the questions we raise above are not formulable in extreme functionalist theories. However, it seems to us that even these theories would have to discuss our questions, too, if only from the opposite perspective. Whereas we ask “which mismatches exist between subcomponents of grammar?”, they would still have to ask “are there really no matches between forms and meanings (other than what is found for individual constructions, used by individual speakers, in individual contexts)?”.

It seems, therefore, that unregulated, indeterminate, or mismatched aspects of grammars are not the result of some specific theoretical choice. In every theory we are aware of, similar questions arise somewhere, and no grammatical theory seems to address every aspect of linguistic forms and functions equally (let alone perfectly) coherently and elegantly. We therefore feel justified in submitting that answers to the questions we pose are required independently of theoretical choices. The present volume is an attempt to start a process of finding some answers to these fundamental questions.

2 Can we pose our questions meaningfully – or will the answers only ever be circular?

There are, of course, many problems with posing fundamental questions such as the ones we outlined in Section 1:

  1. Do we have any a priori expectations as to which aspects of structure building the subcomponents have to address? Are these a priori expectations in any way justified?

  2. Why would grammars produce mismatches, leave pertinent aspects of language unregulated, or fail to resolve sufficient detail in indeterminate cases; and since grammars are meant to represent how forms and meanings match up, which aspects of the mapping are regulated, and how?

  3. Conversely, are we not just restating our axiomatic assumptions about grammatical subcomponents, coupled with the sad admission that all our efforts to come up with formal descriptions of such subcomponents are simply lacking in coherence and sophistication?

In this section, we want to argue that the questions we pose are not admissions of failure, nor do they reflect just how poorly the discipline of linguistics has tackled the analysis of form-meaning mappings.

We do not subscribe to a pessimistic world view, where the differentiation of grammatical subcomponents boils down to a just-so story about languages, which may appear tidy, but is in fact entirely stipulative. On the contrary, many of the subcomponents commonly postulated in the history of linguistics have been supported quite convincingly, and by various, mutually supportive kinds of evidence.

Theoretically, the rules and operations that have been demonstrated to successfully model syntactic phenomena are, after all, not particularly similar to the rules, operations or structures of phonology. Also, while compositional semantics can be closely associated with formal structural means of expression, it stands to reason that the expressed meaning and its formal expression are fundamentally different entities ontologically, and their respective theoretical representations show it. By identifying which formal inventories seem similar to (or demonstrably different from) each other, we identify groupings of mechanisms, operations, etc., which outline grammatical subcomponents non-circularly, we believe. Labels for the groupings (like syntax and morphology) may not carry much weight empirically, but formal similarities between some operations, and differences between others, cannot be overlooked in the study of languages, and lead to a picture that is, despite many remaining questions, largely coherent.

The separation of levels is neither an easy task, nor are the specific separations uncontroversial. From a logical, or modeling, point of view, various arguments have shown that morphological and syntactic subcomponents share enough similarities to warrant the question whether they constitute a single, overarching structure component of grammar, which concerns itself with the construction of meaning-bearing structures both large (syntax) and small (morphology). We also find, conversely, that the differences between morphology and syntax never quite go away, at least not in all languages and regarding all data points. We would like to submit, therefore, that both the postulation of a grammar comprising morphology and syntax, and the possible subdifferentiation between the two levels of description, have argumentative support. Also, subdifferentiating morphology into word formation and inflectional components has been shown to have beneficial consequences for theories trying to model morphological phenomena (even though strong lexicalist positions may deny this, of course). Subdividing syntax into more semantically oriented sub-subcomponents (say, core syntax, in modern Chomskyan terms) and more surface-oriented sub-subcomponents (the mapping to PF in the same family of theories) has opened up interesting possibilities (and problems) for the description of syntactic structures. Ultimately, we find that the question of whether specific subcomponents exist is a question of the granularity of the description. Is morphology distinct from syntax? From a bird’s eye perspective, no. But from the point of view of higher-resolution descriptions (at least for some cases, in some languages), possibly yes. Whether morphology and syntax are used as the names of subcomponents that handle these phenomena, or whether we insist that these distinctions are just reflective of small-scale syntax vs. large-scale syntax, seems to be an entirely empty discussion, concerned more with naming than substance. Other distinctions, for example between lexicon and syntax, have also been disputed (Fried and Boas 2005; Hoffman and Trousdale 2016; Johnson and Postal 1980). Other differentiations have never, to our knowledge, been called into question. No theory we know of tries to conflate phonology and semantics, or other, equally incommensurable levels of linguistic description. Thus, even though the precise definition of levels may still be disputed, the usefulness of some such distinctions has been demonstrated beyond reasonable doubt, we maintain.

From a neuro- and psycholinguistic perspective, it has often been pointed out that at least some subcomponents that have proven helpful for theoretical purposes may, in fact, have independent support from the observation of processing operations, and/or from phenomena associated with language loss. The literature is replete with such connections, so we will only touch upon them very selectively here. To give one example from the processing literature, violations of linguistic expectations do not seem to be indistinguishably similar to each other. On the contrary, it is well established that event-related potentials like a P600 in the posterior temporal lobe are associated with violations of formal linguistic properties (like syntactic violations). It is also, of course, clear that brain regions are not single-purpose devices. P600 effects have been proposed to be caused by musical stimuli which violate certain structural expectations (Patel et al. 1998), as well as by phenomena ranging from syntactic violations in anomalous filler-gap dependencies (a competence-related phenomenon) to the processing of garden-path structures (a performance issue). However, it still seems reasonable to point out that these violations are of a formal, or minimally form-related, nature. They can, despite all internal differentiations, be kept apart from more semantics-related ERP effects, such as the N400 type (Kutas and Hillyard 1980, and much subsequent literature). While not every subdivision proposed for the purposes of theoretical modeling has been found reflected in psycho- or neurolinguistic observations, the overall picture, it seems to us, is still mostly one of compatibility (or even consilience, in the sense of Wilson 1999) between the different approaches to describing language structures and their processing. Some phenomena associated with language loss seem to selectively affect faculties that would belong to specific theoretical subcomponents (given the definitions of that subcomponent by some theory), but leave faculties from other subcomponents (like the lexicon) intact. This further corroborates the impression that theoretical subdistinctions – while not perfectly replicated in all their detail – seem to make some sense, at least in their rough outlines. If these observations from the linguistic literature are on the right track, then a differentiation into cognitively and neurologically real subcomponents of grammars seems a plausible assumption.

Ultimately, even when no independent support for some subcomponent has (yet) been offered, we can remind ourselves that the postulation of the subcomponent may, in the worst case, still constitute an interesting hypothesis about how language(s) could be organized on a purely abstract level. This holds even when the hypothesis subsequently leads to discoveries that make the subcomponent implausible, for independent reasons. However, as long as the hypotheses are not contradicted by independent research, they constitute valid proposals, and as such help drive research developments to fill in the blank spots we only know exist because we have posited our hypotheses clearly to begin with.

Nothing of what we have said so far is intended to mean that we deny the possibility of future surprises, especially when our theories are confronted with data from hitherto understudied languages. On the contrary, we positively expect that the investigation of more typologically diverse languages will unearth findings that would not have been expected given even the most thorough investigation of, say, East Coast American English. However, interesting as those future developments will certainly be, we are not currently expecting to see surprises of arbitrary magnitude. While cross-linguistic empirical research has demonstrated that languages differ with regard to the inventory of lexical and/or syntactic categories (Sasse 1993; Wunderlich 1996), it does not seem rational to hold our breath for the discovery of a language that makes absolutely no distinctions at all with regard to members of its lexical inventory of atoms of meaning or structure. Semantically, we would, as a discipline, be shocked to find that a certain language makes no use of function-argument pairings, as found in predicate-argument type structures (and many similar semantic constellations). We do not expect to find human languages that have only a finite set of calls (meaning-bearing, but non-compositional, atomic units of communication), despite the fact that such call systems are the only communicative inventory available to our closest biological relatives amongst the primates. Grammatically speaking, languages which have no potential for creative structure-building (which could be used to formally express an unbounded array of meanings) would be similarly shocking to find. Overall, then, in this way, too, linguistics has established a body of knowledge that many practitioners will expect not to be overthrown overnight.

However, while such extremely unexpected findings may not arise, questions about the internal organization of grammatical architectures are still interesting next questions to ask: If, after all is said and done, languages do show some similarities in their mappings from forms to meanings, the question still arises how “hi-fi” such mappings, in fact, are. Which types of mismatched, indeterminate, or unregulated aspects can be found in individual languages – and how do they compare cross-linguistically?

From a purely theoretical point of view, we readily admit, it could be considered surprising, at least a priori, that there should ever be “lo-fi” grammatical mappings in the first place. Grammars are systems that handle possible mappings from linguistic forms to meanings. To put this even more bluntly, grammars have no observable ontological status except as the systems representing such mappings. It thus may be slightly paradoxical to ask whether grammars in fact produce mismatches. And it may seem impossible to point out such mismatches. However, given our presentation above, we find that, at least with the theoretical proposals established so far, mismatches pop up again and again, despite our best efforts as a discipline. Why is this?

To immediately make a completely opposite argument, maybe we should not be surprised to find mismatches, given what it means to map forms onto meanings. The mapping relates some kinds of complex (and at least potentially extra-linguistic) cognitive objects (vulgo “thoughts”) to linearly ordered, rule-governed arrangements of atomic units, after all (according to all but the extreme functionalist grammars). Given the incommensurable nature of these two interface objects, a lossless mapping could probably not be expected, even with great optimism. Any conceivable mapping device that relates these ontologically different objects would no doubt have to make drastic changes to some representations on some levels of description. In fact, the mapping mechanism that language grammars seek to supply has to be lossy and fundamentally mismatched in some ways – namely, to the precise degree that we believe the ontological objects mapped onto each other to be incommensurable.

Of course, some proposals try to overcome some of the translational burden between such incomparable cognitive objects by denying that the two levels of representation are separated to such a degree. Take the proposal of a language of thought (Fodor 1975, 2008). If thoughts (at least the propositional thoughts of philosophical tradition) turned out to be generated by an essentially language-like system, the syntax-meaning interface could be conceived of as basically lossless. As we have seen above, some current syntactic proposals see no real reason to assume that a semantic component should exist independently from the syntactic component that generates meanings. For example, Chomsky argues (in sharp contradistinction to, e. g., Dowty 1979; Jackendoff 1972) that “[most] of what’s called ‘semantics’ is, in my opinion, syntax. It is the part of syntax that is presumably close to the interface system that involves the use of language. So there is that part of syntax and there certainly is pragmatics in some general sense of what you do with words and so on. But whether there is semantics in the more technical sense is an open question. I don’t think there’s any reason to believe that there is.” (Chomsky 2000: 74) Therefore, syntax may interface with extra-linguistic cognitive faculties, but not with a semantic subcomponent in the technical sense. Given Fodor’s language of thought, the additional question arises what language-external cognition even consists of, or at least which language-external elements of cognition are linguistically expressible at all (since only “propositional thoughts” may be). However, some authors have argued (supported by extremely interesting philosophical arguments and ingenious experiments) that the syntactic algorithms homo sapiens seems to employ to express aspects of meaning, and the species’ knack for complex thoughts, could still turn out to rely on the very same cognitive mechanism (Hinzen 2011; Hyde et al. 2011; Shusterman et al. 2011; Spelke 2003; Spelke and Tsivkin 2001). Maybe even more so than with the language of thought, shifting our view of what the syntactic machinery is could do away with much complexity at the interface between syntactic and semantic aspects, since there would then be no interface, and few distinctions between the systems. Under this “functionalism in reverse” perspective, languages do not express meanings which conform to a priori and language-external thoughts. Instead, this still relatively new direction of inquiry proposes that the mechanisms that underlie language may, in fact, be the same mechanisms that allow us to have thoughts in the first place (at least for certain types of complex, propositional thoughts), upending many of the conceptions we have discussed above completely. But no matter how this chicken-or-egg type of discussion may play out, shifting our conceptions around in this way will not render thoughts and the forms that express them indistinguishable, of course. Formal and meaning/thought-related aspects of linguistic or general cognitive structures will have to be kept apart, as reflected in the different ways that individual languages still uncontroversially supply different translations from thoughts into expressions.

To sum up this section, we propose that some mapping between forms and either meanings or thoughts (of some type to be determined) will stay with us. Thus, the need to investigate the architecture of the mapping device remains. Which interfaces are required between which subcomponents, and which tasks are unregulated or indeterminate in these components, are questions that can be meaningfully asked regardless.

3 Previous works on similar questions

As the last section has established, we think that the questions we try to ask here are interesting and fundamental to our discipline. Luckily, the present volume finds itself in a historical context where similar questions have already been asked, and most of the groundwork is thus already laid for our discussion. The following developments in the field were therefore not only influential for kick-starting our project, they also developed many of the tools, techniques and conceptions of language that we and the authors in this volume rely on.

One such older development is the long-standing (and ultimately unresolved) discussion about the functional underpinnings of grammars already mentioned above. While nobody denies that languages are used for communication, opinions still differ as to whether grammars (more specifically, grammatical rules or operations) are brought about by functional considerations, and whether they have communicative functions as their sole raison d’être. It is clear why this question is still open. While many good functional explanations can be found for many formal phenomena across languages’ grammars, mismatches between forms and meanings are also still found, and across most or all known languages, it seems. For many rules and operations that seem reasonable to posit from a formalist point of view, no functional explanation seems readily available. While the discussion may not be finished for now, the mismatches that have been pointed out to exist between formal structures and their semanto-pragmatic functions have planted in us the doubt that languages are “hi-fi” systems for the meaning-form mapping. If they were, functionalist conceptions of grammar should easily and unequivocally have won the debate, we assume.

It is also certainly no coincidence, then, that generative syntactic models in the Chomskyan tradition never shied away from postulating syntactic structures and operations that made a lot of sense from a purely formal perspective, but which upon even the most superficial inspection could potentially cause mismatches. Early generative models almost routinely made proposals that could not easily be matched up with semantic findings (lest the structures look too functionalist). In later developments (certainly in the so-called Minimalist Program), the syntactic machinery is extremely closely aligned with sentence-level meaning aspects of clause structures. However, not coincidentally, there must now be additional formal systems that map the outputs of syntactic derivations onto phonological representations which speakers would recognize as the forms of sentences. These mapping devices are not functionalist in nature; in fact, they may not even be fully systematic across languages in the first place. Over the course of the development of Chomskyan generative grammars, then, at least two fault lines were standardly assumed, in varying constellations: Syntax has to interface with a semantic subcomponent (call it LF) and a surface-form-oriented component (call it PF), while combining atomic elements from yet another, grammatically independent subcomponent, the mental lexicon. No completely lossless mapping between these interfacing systems was ever assumed. In the Y-model of generative syntax in the 1980s and early 1990s, the LF branch of the derivation represented mismatches between observed word orders and interpretative effects such as scopal interpretations or binding options. In more current models, the PF branch becomes a messy affair, responsible for representing many mismatches between forms and meanings. For example, the mismatches between scope and word order already discussed above were handled as LF-movements in the older Y-model, but are now represented as overt but invisible movements, i. e. as PF-regulated spellouts of low copies of multiply merged elements.

In sum, therefore, Chomskyan generative grammar never conceived of the mapping from sentential semantics to forms as lossless; it was only where and how the mismatches were represented that changed. Other grammars have to make similar choices.

Another influence on our thinking we would like to acknowledge here is a line of research that concerns itself not so much with architectural questions, such as the interfaces between subcomponents, but rather asks whether operations and structures could also be flawed internally to individual subsystems of the grammar. For example, discussions can be found in the literature as to whether we should not consider grammatical systems to generate mismatches much more frequently – and maybe even fruitfully – than had previously been assumed:

In an edited volume, Brandt and Fuß compile articles that address the possibilities of (and potential necessity for) repairs (Brandt and Fuß 2013b). The authors of that volume assume that grammatical operations on some level of description can generate structures that could be considered problematic in some respects, but which ultimately “can be put to service to economically code interpretations that are difficult or even perhaps impossible to express transparently” (Brandt and Fuß 2013a: 9). Brandt and Fuß, in turn, acknowledge that their own thoughts follow from older works like Reinhart (2006), and point out various linguistic phenomena which may show repairs at work. We refer our reader to Brandt and Fuß’s very interesting thoughts, which we need not replicate here.

There may arise the impression that the picture of grammars we are painting here stands in contradiction to other concepts of language, which consider languages to constitute perfect solutions to the issue of mapping thoughts onto expressions. For example, the strong minimalist thesis (SMT, Chomsky 2005, 2007) claims that grammars should constitute “an optimal way to link sound and meaning” (Chomsky 2008: 135). Brandt and Fuß position their conception of repairs quite explicitly as an alternative to the SMT. Somewhat similarly (but against a completely different conceptual background), functionally oriented grammars will want to argue that formal expressions mimic semanto-pragmatic functions relatively transparently, for the simple reason that the formal mechanisms are supposed to express communicative functions by definition. We believe, however, that the questions we ask (and the conception of language they presuppose) stand in no fundamental opposition to either of these assumptions. As regards the SMT, it is near-impossible to define what constitutes an optimal link between forms and meanings, a point that Chomsky himself discussed many times. We submit here that the individual operations a Chomskyan grammar provides could be optimally simple, but that, at the same time, the structures generated by such operations could prove problematic, as could the ways in which these outputs of syntax are passed on to, and handled in, other, non-syntactic subcomponents of the grammatical architecture. Similarly, functionalist tenets (at least in conservative or moderate functional grammars, adopting Van Valin’s terminology) about the semanto-pragmatic transparency of grammars can be met in general, even if it turns out that not every intermediate representation or grammatical factor involved in the mapping is without its complexities or (perhaps: necessary) contradictions. In this way, we do not think that our topic constitutes any a priori issue for any of the overarching ways we use to conceptualize language, grammar, etc., and we do not intend for it to argue against well-established conceptions directly.

Leaving such overarching questions behind, we find that many well-known concrete phenomena are representative of the kinds of mismatches in linguistic architectures we want to talk about. We will now present some of these cases in turn.

Lexical ambiguity and the sheer availability (or absence) of lexical atoms in languages are sources for various types of mismatches. While lexical inventories might seem like a phenomenon that is different from the interplay of grammatical subcomponents, we believe it may be fruitful to consider the way in which the lexicon integrates into the array of more centrally grammatical (i. e., non-lexical) subcomponents. This seems necessary not least because this very distinction has come under close scrutiny from construction-based approaches. How the lexicon is conceived of, and formally connected to the grammatical subcomponents, matters for many other approaches to syntax and morphology, too, as we show now.

On the one hand, in a lexicalist conception of Chomskyan generative grammar, lexical items (LIs) constitute ready-made building blocks, which drive syntactic composition unidirectionally. Syntax has to respect lexical properties (saturate argument roles of predicates, fulfill selectional requirements and restrictions, etc.) as well as morpho-syntactically relevant properties like agreement features, which enter the syntactic component(s) fully specified. In this way, mismatches between the lexicon and the grammatical components are perhaps not expected to occur often – since the lexicon basically dictates which lexical aspects the grammatical components will have to respect.

On the other hand, compare this with how an a priori lexicon plays out when no lexicalist morphology is assumed. In Distributed Morphology (DM), for example, stored lexical items are inserted at a late step in the derivation. This leads to an interface between the lexicon and the structure-building component that is again characterized by a potential for mismatches. In DM, the structural computation carried out by the grammatical subcomponents operates over feature bundles, which are spelled out by lexical items only at the point of lexical insertion. At this point in a derivation, various types of mismatches can arise. In the simplest case, the feature bundles arrived at by the derivation can be lexicalized either by items that perfectly match them or by items that do not. In a competition between items that could potentially lexicalize a feature bundle for which no perfectly matching item is available, suboptimal candidates can end up the winners of the competition for insertion: the item whose feature specification constitutes the largest subset of the required feature set is inserted, even though it does not fully express that feature set. This, of course, introduces a mismatch between the type of structure that is fed to the semantic component and to the surface form component, respectively. Syntactic structures can be mapped onto available lexical items in a lossy way: various syntactic feature bundles can be imagined for which the optimal candidate for lexical insertion is one and the same lexical item. Consequently, syntactically available distinctions are not replicated in the structure that has all lexical items inserted. The mismatch for such theories, then, is placed at the point of lexical insertion.
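
To make the subset-driven competition concrete, consider the following minimal sketch (in Python). The vocabulary items, feature labels, and the handling of ties are all invented for illustration; no actual DM analysis of any language is implied:

# A toy sketch of DM-style lexical insertion via a subset principle.
# Vocabulary items and feature bundles are hypothetical.
VOCABULARY = {
    "-s":  frozenset({"3", "sg"}),   # fully specified item
    "-en": frozenset({"pl"}),        # underspecified item
    "-0":  frozenset(),              # elsewhere (default) item
}

def insert(bundle):
    """Pick the item whose features form the largest subset of the
    syntactic feature bundle (ties are simply left unresolved here)."""
    candidates = [(item, feats) for item, feats in VOCABULARY.items()
                  if feats <= bundle]
    return max(candidates, key=lambda c: len(c[1]))[0]

print(insert({"3", "pl"}))   # -> "-en"
print(insert({"1", "pl"}))   # -> "-en": the 1st/3rd person distinction
                             #    made by syntax is lost at insertion

Two syntactically distinct feature bundles thus surface identically – precisely the lossy mapping described above.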

Alternatively, DM provides mechanisms for changing the constellation found for the lexical insertion process. With fusion, nodes that constituted multiple syntactic heads for the syntactic derivation are turned into a single feature set for the purposes of lexical insertion. Given this operation, the lexical insertion competition may play out differently – but now, a mismatch is created between syntactic derivations that handle one or more than one head, and the lexical insertion process, which, given fusion, cannot replicate the distinction between the two syntactic constellations. Similarly, the operation of fission splits up syntactic heads, so that the lexical insertion process now makes a distinction between cases where fission has applied (multiple insertion competitions take place) and cases where it has not (one insertion) – a distinction that is not replicated in the structure generated by the syntactic derivation. Similar arguments can be made for impoverishment, lowering, dislocation, and potentially other DM operations.
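
Continuing the toy model above, fusion and fission can be pictured as simple pre-insertion rearrangements of feature bundles (again a schematic stand-in, not a faithful rendition of the DM literature):

# Fusion: two syntactic heads become one insertion site.
def fuse(head_a, head_b):
    return head_a | head_b

# Fission: one syntactic head becomes two insertion sites.
def fission(bundle, split):
    return (bundle & split, bundle - split)

# Syntax built two heads, but insertion sees a single competition:
fused = fuse({"3"}, {"pl"})                    # -> {"3", "pl"}
# Syntax built one head, but insertion now runs twice:
site1, site2 = fission({"3", "pl"}, {"pl"})    # -> {"pl"}, {"3"}

The number of insertion competitions no longer tracks the number of syntactic heads – the mismatch described above.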

Since the lexicon that supplies candidates for insertion is to some degree language-specific, we arrive at a situation where some language L may carry out mismatch-inducing DM operations, whereas another language L′ will not do so. If the output sentences of L and L′ are, however, equivalent in meaning, it becomes apparent that fusion, fission, etc. are operations that create mismatches between sentence-semantic interpretations, and options for expressing such meanings formally.

Turning to what may be a more uncontroversial case, structural ambiguity is a quintessential example of the kinds of mismatches we have in mind: There is one surface form (some words in some order, potentially characterized by specific prosodic properties), but the mapping of this surface form to syntactic representations must fork into two different structural constellations, with different structural properties and semantic interpretations. To discuss but one example of this well-known phenomenon, a sentence such as we decided on the boat can be analyzed with the PP on the boat either as a locative adverbial or as a complement of the verb decide, which causes the difference in semantic interpretation. It becomes apparent, therefore, that there is a mismatch in English between the syntactic component (which differentiates between the two configurations) and the subcomponent concerned with deriving the surface form of structures (which does not).
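
The point can be made mechanically explicit with a toy encoding of the two analyses (the tree format is our own invention for exposition; nothing hinges on it):

# Two hypothetical analyses of "we decided on the boat": the PP as a
# VP adjunct (locative) vs. as the complement of "decide".
adjunct = ("S", ("NP", "we"),
                ("VP", ("VP", ("V", "decided")),
                       ("PP", "on", "the", "boat")))
complement = ("S", ("NP", "we"),
                   ("VP", ("V", "decided"),
                          ("PP", "on", "the", "boat")))

def surface(tree):
    """Spell-out keeps only the terminals, erasing the tree's shape."""
    if isinstance(tree, str):
        return tree
    label, *children = tree
    return " ".join(surface(c) for c in children)

assert surface(adjunct) == surface(complement) == "we decided on the boat"
assert adjunct != complement   # a distinction the surface form erases

Two distinct structures, one word string: the surface-oriented component cannot replicate what syntax distinguishes.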

Given the examples discussed so far, the question arises whether mismatches are limited to the interface between the grammatical subcomponents and some surface-oriented component. We do not believe that this is the case. Rather, a different, but ultimately comparable scenario can play out at the interface between the grammatical subcomponents and the semantic interface. In a phenomenon called spurious ambiguity (Karttunen1989; Steedman1991; Pankau et al.2010), the syntax supplies two structures, but the semantic component will not reflect this syntactic difference. Consider the two analyses of the following sentence:

(15)
a.

Wen glaubst du hat sie gesehen?
who think you has she seen
‘Who has she seen, do you think?’  (interpretation of both b and c)

b.

Wen [glaubst Du] hat sie gesehen? = parenthetical

c.

Wen glaubst Du [wen hat sie gesehen]? = extraction

The structure in (15-b) was proposed as a parenthetical analysis of the sentence in (15-a). The structure in (15-c) was proposed as an analysis that employs the syntactic extraction of the wh-phrase from the subordinate clause. Both analyses receive considerable support, and both seem needed to explain the various properties and restrictions that hold for clauses of this type. It has not been possible, at least to date, to propose a single analysis that explains all properties. Suppose, now, that this state of affairs is not due to a lack of imagination on our part, but actually constitutes an empirical fact of German. In that case, structures of this type reflect mismatches in the sense we are interested in here: The semantic component is incapable of assigning different meanings to two different syntactic constellations. This finding will no doubt make some linguists shrug, since the constellation of (multiple) structural options for a (single) interpretation seems innocuous from the point of view of the communicative expression of thoughts. As long as there are ways to explain the mappings from all possible meanings of the relevant structure to the surface forms they associate with, certainly no harm is done. From a more formal point of view, however, we submit that a distinction the structure-forming component can make cannot be replicated by the subcomponent that interprets the structures derived. We are thus potentially looking at another case of mismatch that relates to our questions.

Another harmless fact about many languages is that they can represent a single meaning via distinguishable surface forms. Some linguists tend to argue that there simply must be differences in meaning associated with the different structures (and such linguists are found in the functionally minded as well as the formal camp, e. g. in syntactic cartography; for some criticisms, cf. Struckmeier2014, 2017, 2020). While we do not want to pre-judge whether different meanings of distinguishable forms will be observed at some point in the future, we must point out that such meaning distinctions are clearly not readily perceivable (at least for the time being). In German, a syntactic phenomenon called scrambling can permute the order of arguments and adverbials in the so-called middle field of the German clause, as in:

(16)
a.

Heute hat Peter den Kindern die Kekse hinter der Scheune gegeben.

b.

Heute hat Peter die Kekse hinter der Scheune den Kindern gegeben.

c.

Heute hat Peter hinter der Scheune den Kindern die Kekse gegeben.
today has Peter behind the barn to-the kids the cookies given
‘Today, Peter gave the cookies to the kids behind the barn.’

The meaning given in the gloss is not only the meaning of the examples in (16), but also materializes for the various other permutations in which the arguments and the adverbial can be found (which are almost unrestricted: all conceivable 4! = 24 orders are possible). Truth-functionally, all of these different orders seem to be equivalent. If there is any difference in meaning associated with these sentences at all, it is found in the domain of information structure. For example, discourse-old materials are preferably positioned before discourse-new (or maybe, stressed) materials. However, this does not necessarily make scrambled structures differentiated meaning-wise, even if meaning is construed so widely as to include information structural effects. Firstly, no one-to-one mapping between scrambling constellations and information structural effects can be posited, despite much effort in this direction (Struckmeier2014, 2020). Secondly, there are sentences which fail to show even an information structural difference in the first place. Consider the two answers (17-b) and (17-c) to the question in (17-a).

(17)
a.

Q: Who did you give the money to?

b.

A: Ich habe dem KELLner das Geld gegeben.

c.

A: Ich habe das Geld dem KELLner gegeben.
I have the money to-the waiter given
‘I have given the money to the waiter.’

The two sentences in (17-b) and (17-c) are absolutely, completely and demonstrably identical in sentence-level semantic meaning: Not a single situation can be construed in which one of the two sentences would be false while the other would be true. Since the two sentences therefore fail to meet Cresswell’s minimal standard for what constitutes a difference in meaning, any difference in interpretation would have to be information structural in nature. However, the two sentences are legitimate answers to the same question, as the example just given shows. More generally, we challenge anybody to come up with a context in which (17-b) is an acceptable answer, and (17-c) is not.[10] If no such context can be found, it seems hard to argue that (17-b) and (17-c) differ in their information structural meaning. Since their truth-functional sentence-level meaning is identical, we would be hard-pressed to consider (17-b) and (17-c) as anything but identical with regard to every aspect of their meaning, even where meaning is widely construed. This, then, leads to a situation where the semantic component cannot differentiate between (17-b) and (17-c), but the structure-forming components that implement the word order difference can. German thus has redundant mechanisms to implement meanings in every case where scrambling options are available that do not result in meaning differences.[11] Such redundancy, then, is somewhat comparable to spurious ambiguity, in that the meaning system(s) cannot differentiate between structures that some structure-forming component(s) can (and must) be able to keep apart. Similar arguments have also been made for the operation(s) that implement the distribution of elements in the so-called pre-field of German, i. e. the position preceding the finite verb in a V2 clause: Here, too, different placements of phrases in the pre-field may or may not result in (demonstrably) different meanings, as reflected in the fact that, in generative treatments, pre-field movements have almost exclusively been considered as Ā-movements.
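
To see the redundancy at work mechanically, the following toy sketch (with a deliberately crude, invented “semantics” that records nothing but predicate-argument structure – by assumption, all that truth conditions see here) enumerates the orders of (16):

from itertools import permutations

# Hypothetical role assignments for the four middle-field constituents:
constituents = {
    "Peter":              ("AGENT",    "Peter"),
    "den Kindern":        ("GOAL",     "the kids"),
    "die Kekse":          ("THEME",    "the cookies"),
    "hinter der Scheune": ("LOCATION", "behind the barn"),
}

def interpret(order):
    """An order-insensitive meaning: who did what to whom, where."""
    return frozenset(constituents[c] for c in order)

orders   = list(permutations(constituents))
meanings = {interpret(order) for order in orders}
print(len(orders))    # 24 distinct forms (4! orders)
print(len(meanings))  # 1 meaning: 24 forms collapse to one interpretation

Twenty-four forms, one meaning: the structure-building system distinguishes what the (toy) semantic component cannot.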

From a theoretical point of view, what makes redundancy interesting is primarily the fact that it creates loci for optionality. Scrambling options remain even when semantic, pragmatic, discourse-structural, prosodic and, in fact, virtually all conceivable other factors for reining in the optionality have been drawn upon to no avail. In this way, when we find redundant ways of mapping forms onto meanings (even widely construed), we find that the language system is not as deterministic as some approaches to grammar seem to assume. The set of such approaches runs orthogonally to well-established fault lines in the field. Cartographic approaches to syntax have a strong tendency to try to rule out optionality (since optionality threatens to provide an alternative to the feature-driven, deterministic operations cartography deals in). Similarly, but against a completely different background, functionalist theories try to tie the existence of different forms to different meanings the forms (are supposed to) express, but they will find it difficult to accept that some formal mechanisms do not seem to cater to such a neat mapping from forms to functions.

The existence of formal redundancy also seems interesting from a psycholinguistic point of view, since redundancy arguably makes it harder for children to acquire a language: Firstly, and most trivially, multiple forms require a higher learning effort than a single form. Secondly, distinctions that are not tied to semantic differences may be harder to learn, since children seem to learn formal distinctions better and more easily when these are tied to differences in interpretation. Thirdly, from a more statistical perspective, the mere existence of alternative expressions should lead to differences in learnability: Given alternative forms to express a single meaning, each individual synonymous sentence can only ever be lower in frequency, given the (respective) alternatives adult speakers may choose to express that meaning. While we cannot gauge how much more difficult redundancy makes language acquisition, it stands to reason that it certainly does not help the process. Given that no functional effect is achieved by redundancy (the alternative forms express the same meaning by definition), we would like to point out that this mismatch seems in need of some explanation.

4 How important are the mismatches – i. e., how does “language” handle them?

As we have just seen, various types of mismatches, indeterminacies, and unregulated phenomena are already well known from the literature. In this section, we would like to argue that there are, at least potentially, many more loci in our current theoretical architectures that could be suspected to bring forth similar mismatches: In many theories of language, the notion of grammar has been decomposed into a multitude of subsystems. We are thus in a situation where more interfaces between these more finely deconstructed subcomponents exist than in older theories. Consequently, more interfaces than just, say, the one between “syntax and semantics”, or between similarly “large-scale” subcomponents, have to be attended to. Each of these interfaces would seem to have a certain potential for mismatches like the ones we have spoken about. Therefore, the questions we have posed above would have to be re-investigated with regard to such newer architectures, with their increased division of labor: Can we find mismatches akin to structural ambiguity, spurious ambiguity, redundancy, etc. at each of these interfaces? This is currently very much an open question, but one which we think can be addressed with fruitful results.

As soon as we are in a position to see which mismatches exist between the subcomponents a theory under discussion displays, we can ask an important follow-up question: How does the proposed language architecture handle the mismatches, wherever they may occur? There are several guesses we could venture off the top of our heads, currently all without much empirical evidence to support them. In order to give an idea of which more concrete avenues for research exist, consider the following (open) questions:

  1. Do grammars impose (arbitrary) restrictions which remove the mismatches by fiat? Maybe this does not seem plausible, since we still find some mismatches. However, we could imagine that grammars produce more mismatches than the ones we see – and that the ones we see are characterized by certain properties which the removed mismatches lacked.

  2. Could functional systems come to piggyback on the optionalities that are left open by formal grammatical systems? That is, could options left open by a formal system of grammar be exploited functionally, by assigning meanings to the different formal options, maybe in a diachronic cultural process, or more liberally in the context of discourses (borrowing at least a fraction of the assumptions made by extreme functionalism, e. g. Emergent Grammar)? This option does not seem too far-fetched, in fact: For example, at least some authors have proposed that information structure exploits word order options in German scrambling. The different word orders available in the German middle field thus do not strictly express information structural distinctions (in the way that case marking helps express argument status). Rather, some authors assume (Bayer and Kornfilt1994; Haider this volume; Fanselow2001, 2003; Struckmeier2014, 2017) that the grammar treats the word orders as truly optional choices. But the use of these optionally available structures can still reflect discourse distinctions, maybe also in connection with the prosodic markings that also go into the computation of such discourse-related requirements. Similarly, some authors have argued (Féry2008) that prosody is not a system that expresses information structural distinctions with any level of accuracy or determinism. Rather, information structural distinctions can piggyback on independently available prosodic options. In this volume, the article by Haider presents thoughts along these lines.

  3. Alternatively, do conventions arise in speaker communities to the effect that languages are only used in such a way that the usage of mismatch-demonstrating structures is avoided, despite the fact that such structures are, strictly speaking, grammatical? This would be at least somewhat reminiscent of an idea proposed by Newmeyer (2005) that grammars provide an option space for what is structurally possible in languages, but that functional considerations whittle away from such option spaces all structural options that are not functional. In that way, languages will display mostly (in the extreme, only) the functionally usable structures, even if more structures would be technically available.

  4. On a more optimistic note, could mismatch-demonstrating structures be in fact unproblematic? Could, for example, extra-linguistic utterance contexts supply enough information to make the mismatch-demonstrating structures usable? Can we demonstrate that this holds for all cases?

With answers to some of these questions, maybe old discussions of how functional grammars really are can be laid to rest, as artefacts of theoretical descriptions. From a formal point of view, grammars could unproblematically be expected to produce mismatches between forms and (their) meanings, since a particularly “hi-fi” mapping between these interfaces is not what such a formal system is concerned with. It is certainly no coincidence in this regard that formal proposals often point out the existence of non-communicative, in fact non-linguistic, applications of their formal machinery. Chomsky et al. (2019) argue that core syntax is in no way trying to generate the usable sentences. Instead, it generates whatever it can generate, with no concern for the usefulness of its output. In that sense, core syntax cannot overgenerate structures in any meaningful way (Chomsky et al.2019: 38), since there is no standard by which overgeneration could be judged. Furthermore, and maybe more interestingly, formal structures that fail to map onto sentences may still be cognitively legitimate structures from a different point of view. Consider (18):

(18)
a.

Wen glaubst Du, dass der Chef entlässt?
who think you that the boss fires
‘Who do you think the boss will fire?’

b.

??Wen ignorierst Du, dass der Chef entlässt?
who ignore you that the boss fires
‘Who do you ignore the boss will fire?’ (intended reading)

Suppose that (18-a) presupposes that the speaker does not know the identity of the person to be fired, and suppose also that the speaker assumes that the addressee knows that some person X will be fired. Then it seems a legitimate question to ask about the identity of X. However, for (18-b): if we assume that the speaker assumes that the addressee of the question ignores that the boss will fire X (but still knows who X is), why is (18-b) such an odd question from a grammatical point of view? Why is it, furthermore, that we know what (18-b) would mean if it were acceptable? Maybe sentences of this type (for an analysis, see Müller2011) demonstrate that the structures generated by core syntax are not, in fact, exclusively linguistic objects, but may correspond simply to “thoughts” in cases where they are not usable as expressions of such thoughts (for whatever reasons)?

Similarly, some authors assume that the structure-forming capacity humans display with language is not limited to linguistic expressions at all. Instead, its effects can be found in the structures of music (Lerdahl and Jackendoff1983), mathematics (Hauser et al.2002) or navigational skills (Shusterman et al.2011). In that way, structures that cannot map onto (all of) the linguistically relevant interface systems may still constitute legitimate outputs for other systems and their interfaces.

Given how contingent such considerations make what linguistic outputs must look like, we can also ask whether, in cross-linguistic comparisons, we can find evidence that it is the requirements of the interfaces specifically used by certain languages that constitute the mechanisms which whittle away unusable structures, as we put it above. Kremers (2013) points out that at least one linguistic factor that we have considered as indispensable for syntactic sentence formation above may be a logically independent factor, stemming from the contingencies of the externalization channel a language uses. We have assumed above that the linearization of structures, as well as questions of copy spellout, in Chomskyan generative grammar reflect the fact that languages certainly have to bring about word orders. As Kremers points out, however, sign languages employ a different (manual-visual) channel of externalization, which does not impose the same requirements: Multiple signs can be externalized simultaneously in such languages, at least to some degree. It seems reasonable, then, to ask whether linearization factors are, in fact, grammatical factors in the first place. It could be assumed, from a Chomskyan point of view, that syntax is completely unconcerned with linear orderings, and caters almost exclusively to the semantic interface. From a functional point of view, a similar argument could be made, since word orders are superficial properties for such theories, and semanto-pragmatic functions are more central to their inner workings. Again, we find that, at least at some level of granularity, the dispute between functionalist and formalist approaches to grammar may not be quite as substantial as has often been assumed. What certainly remains are the questions of which mismatches arise where in language, and why.

5 What can we hope to achieve?

In the sections above, we have tried to preempt certain types of criticism that some readers might level against the basis of the questions we are trying to pose. Far from constituting a meaningless or even self-contradictory program, we believe that finding indeterminacies, unregulated properties and (resulting) mismatches between the subcomponents of grammars is a worthwhile and interesting endeavor. By identifying points where grammars seem to slack off (creating apparent or real mapping issues), we arrive at points where we can try to observe (at least the shadowy outlines of) what it means to map thoughts onto expressions. Which aspects of the relevant interface systems tend to get lost in translation? Will we find that functional(ist) conceptions have it right, and mismatches and inconsistencies are only ever slight, and maybe explicable as historical contingencies, or even only apparent altogether?

If the mismatches are real, can we find reasons for any seemingly negligent behaviors? Can we identify subsystems that are particularly prone to creating mismatches and indeterminacies, and maybe exploiting the options creatively, similar to the way that Brandt and Fuß (2013a) envision? Can the mismatches that occur be squared with functionalist assumptions – which take it that languages should comprise relatively “hi-fi” translations from “meanings” to “expressions” – or do they constitute actual problems for (at least overly optimistic, or overly simplistic) functionalist hopes? We do not claim to have any answers to these questions, but we believe that the discipline is in a position at least to begin answering them.

As for the rewards of doing so, we believe that an analogy may be interesting to contemplate. In many psycholinguistic experiments, it is problems of processing that yield the best insights into how the processing mechanisms may work. It is production and comprehension errors, in other words, that often help outline what a processing device prone to making such errors may look like. In trying to identify structural mismatches between different levels of representation and between (or even within) grammatical subcomponents, we can, at least with some (admittedly necessary) optimism, hope to achieve something similar for the description of language structures and the knowledge speakers have about such structures. If a system of knowledge exists which transcends the processing mechanisms of neuro- and psycholinguistic interest at all, we may hope to find what the possibilities and limits of that system of knowledge are. We may hope to conclude from there what that system of knowledge (which we can never observe directly, but only in its outputs) may look like – and whether it appears to be a demonstrably different system from the processing machinery that neuro- and psycholinguistic research has already begun to outline. Mismatches and indeterminacies may provide a window into such knowledge systems (implementing competence), we hope, and show how they differ from or resemble processing errors and speaker/hearer workloads (relating to performance).

In sum, mismatches, indeterminacies and absent regulations may turn out to provide central evidence for the knowledge systems language users command – in addition to the matching of form-meaning pairs via precise regulations that grammars have already described successfully.

6 Contents of this volume

The articles collected in this volume address the overarching question of responsible subsystems neglecting their (assumed) duties, creating indeterminacies and/or mapping issues. The collection covers a wide range of possible connections between a partially indeterminate system (mostly syntax and/or morphology) and other systems that the apparently negligent systems relate to. Looking at these contributions, it seems to us that certain types of argument can be made out, and we have grouped the articles into the parts of the volume according to these approaches (as perceived by us).

Part I: Apparent indeterminacies can be explained away

  1. The contribution by Grohmann, Kambanaros, Leivada, and Pavlou addresses optionality in clitic placement that results from the bilingual situation in Cyprus, where both Standard Modern Greek (SMG) and Cypriot Greek (CG) are used as spoken languages, albeit in different contexts. One prominent difference between SMG and CG concerns the placement of object clitics: In a nutshell, contexts where SMG requires proclitic placement correspond to contexts where CG requires enclitic placement. Grohmann and colleagues show first that for bilingual Cypriot speakers clitic placement is truly free. Second, they argue that sociolinguistic variables are not good predictors for this optionality in clitic placement. Third, although clitic placement for Cypriot speakers hence looks like a prime example of optionality, they suggest that this optionality is due to the presence of two grammars, one with proclitic placement, the other with enclitic placement. Therefore, they conclude that this situation does not represent a case of optionality. Instead, speakers simply have two grammars, each of which operates deterministically. There is thus no indeterminacy anywhere, in any of the systems involved. Rather, multiple grammatical systems are at issue, which, confusingly, are potentially employed by one and the same speaker.

  2. Amaechi and Georgi discuss the apparent optionality between wh-movement and wh-in-situ in Igbo. That this optionality is only apparent is argued on the basis of a battery of tests that support the presence of wh-movement: As the authors show, both the structures with overt wh-movement and those without exhibit properties indicative of wh-movement. Therefore, what seems to represent a syntactic indeterminacy is in fact no such thing. Syntax deterministically derives a single output structure (with displaced wh-items in every single case). What the authors then show is that the choice between wh-movement and wh-in-situ does not reduce to optionality at PF either. That is, it is not the case that PF is free to choose which copy in a chain to spell out. Instead, information structure drives the decision deterministically. Amaechi and Georgi conclude that the Igbo data favor a view where PF and LF are not independent of each other, but communicate with each other: a distinction relevant at LF can influence a choice located at PF. As with Grohmann and colleagues, we do not observe any actual optionality, but only mismatches between systems driven by different interface requirements: syntax has to represent question semantics, but PF can integrate requirements of information structure.

  3. In their paper about verb agreement in Santiago Tz’utujil, Levin, Lyskawa, and Ranero focus on a curious property of the language: Agreement seems to be optional with 3rd person plural arguments. After excluding phonological and morphological factors that could be responsible for this optionality, they argue that the optionality goes along with a structural difference between the targets of the agreement process: Some arguments can be DPs, some can be NPs. Since the presence of the D0-head is crucial for the application of Agree according to Levin and colleagues, the apparent optionality reduces to deterministic derivations. This, however, is masked by the fact that D0 is empty in Santiago Tz’utujil, so that at PF the two structures look nearly identical. Again, there are no real indeterminacies to investigate, since all the derivations involved operate deterministically. However, as in other cases of structural ambiguity, the component that derives the surface forms of these deterministic derivations is unable to represent the differences between them.

Part II: (Real) Indeterminacies that represent the limited resolution of some subsystem

  1. Leivada in her contribution discusses so-called mid-level syntactic generalizations and what they reveal about the way grammars are organized. By mid-level syntactic generalizations, Leivada refers to generalizations that refer neither to purely abstract concepts nor to mere surface aspects of sentence structure. She discusses two such generalizations, Cinque’s adverb hierarchy and the Final-over-Final Constraint, and argues that both are too restrictive. That is, there exist real counterexamples that cannot be made compatible with them (unless ad hoc machinery is introduced). The relevance for the questions this volume addresses is that syntax is only partially restrictive: It is merely responsible for the merger of two items, regulated by very general principles. Everything beyond that is relegated to other components of the grammar. Syntax provides a multitude of possible well-formed structures. What looks like rigidity is really a reflection of syntactic indeterminacy, coupled with restrictions issued by other linguistic systems.

  2. The article by Haider investigates various “free” word order phenomena in German, Dutch, and some Slavic languages, and compares these to yet other languages and their word order properties. Haider argues strictly against a “deterministic” solution to such word order options, since no syntactic triggers of any kind can be formulated that would allow one to describe the available options – rather than just restate them in technical terms, as cartographic triggers have done. Haider therefore argues that certain types of languages allow for more variable argument placements. These languages have head-final predicate projections, i. e. phrase structures that allow arguments to be linearized to the left of their predicate heads. For example, in languages with an OV verb phrase (as well as in “T3” languages, which allow both OV and VO), the re-merger of a verbal argument “to the left” leaves the argument within the licensing domain of the predicate, and word order is therefore “free”. The same holds for other types of phrases, e. g. adjectival phrases (but not, e. g., head-initial NPs) in German. The options generated in such languages (which, by our definition, allow “redundant” mappings from semantics to syntax) may be exploited by other systems, in that information structure, prosody, sentence-level semantics, etc. come to make use of the available options. However, the options cannot be taken to “express” the distinctions of interfacing systems, Haider argues – instead, the system remains “redundant” with regard to these systems.

  3. Bader’s contribution deals with verb clusters. A verb cluster is a string of non-finite verbs plus one finite verb in embedded clauses in German. Verb clusters result from the OV-property of German: If the object of some verb is an infinitive, two adjacent verbs result: [[OV]V]. Since the object of the infinitive can itself be an infinitive, an in principle unlimited number of adjacent verbs can appear in German embedded clauses. What is interesting about verb clusters is that the order of the verbs involved is to a certain extent free. In particular, the finite verb can appear at various positions of the cluster (with dialectal preferences), and the order of the non-finite verbs can vary as well. Which verb follows which other verb in a verb cluster is thus basically optional. Bader shows first that one is truly dealing with optionality, because the different orderings have no semantic or information structural effect. Bader then addresses the question of how to interpret and implement this optionality. He rejects the simplest assumption, namely that the optionality reduces to frequency, because the acceptability of the various orders does not line up with their frequency. Instead, Bader suggests that the constraints responsible for the verb cluster orderings are weighted, and that the weights they are assigned are independent of frequency.

References

Anderson, Stephen R. 1982. Where is morphology? Linguistic Inquiry 13. 571–612.

Antonov, Anton. 2015. Verbal allocutivity in a crosslinguistic perspective. Linguistic Typology 19. 55–85.

Bayer, Josef & Jaklin Kornfilt. 1994. Against scrambling as an instance of Move-α. In Norbert Corver & Henk van Riemsdijk (eds.), Studies on scrambling, 17–60. Berlin & New York: Mouton de Gruyter.

Brandt, Patrick & Eric Fuß. 2013a. Introduction. In Patrick Brandt & Eric Fuß (eds.), Repairs: The added value of being wrong, 1–30. Berlin & Boston: de Gruyter Mouton.

Brandt, Patrick & Eric Fuß (eds.). 2013b. Repairs: The added value of being wrong. Berlin & Boston: de Gruyter Mouton.

Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.

Chomsky, Noam. 1970. Remarks on nominalization. In Roderick A. Jacobs & Peter S. Rosenbaum (eds.), Readings in English transformational grammar, 184–221. Waltham, MA: Ginn.

Chomsky, Noam. 2000. The architecture of language, ed. by Nirmalangshu Mukherji, Bibudhendra Naryan Patnaik & Rama Kant Agnihotri. Oxford: Oxford University Press.

Chomsky, Noam. 2005. Three factors in language design. Linguistic Inquiry 36. 1–22.

Chomsky, Noam. 2007. Approaching UG from below. In Uli Sauerland & Hans-Martin Gärtner (eds.), Interfaces + Recursion = Language?, 1–29. Berlin & New York: Mouton de Gruyter.

Chomsky, Noam. 2008. On phases. In Robert Freidin, Carlos P. Otero & Maria Luisa Zubizarreta (eds.), Foundational issues in linguistic theory: Essays in honor of Jean-Roger Vergnaud, 133–166. Cambridge, MA: MIT Press.

Chomsky, Noam, Ángel J. Gallego & Dennis Ott. 2019. Generative Grammar and the faculty of language: Insights, questions, and challenges. Catalan Journal of Linguistics Special Issue 2019. 229–261.

Dik, Simon C. 1997. The theory of Functional Grammar, ed. by Kees Hengeveld. Berlin & New York: Mouton de Gruyter.

Di Sciullo, Anna M. & Edwin Williams. 1987. On the definition of word. Cambridge, MA: MIT Press.

Dowty, David. 1979. Word meaning and Montague Grammar. Dordrecht: Reidel.

Fanselow, Gisbert. 2001. Features, theta-roles, and free constituent order. Linguistic Inquiry 32. 405–437.

Fanselow, Gisbert. 2003. Free constituent order: A minimalist interface account. Folia Linguistica 37(1–2). 191–231.

Féry, Caroline. 2008. Information structural notions and the fallacy of invariant correlates. In Caroline Féry, Gisbert Fanselow & Manfred Krifka (eds.), The notions of information structure (Interdisciplinary Studies of Information Structure 6), 161–184. Potsdam: University of Potsdam.

Fodor, Jerry A. 1975. The language of thought. Cambridge, MA: Harvard University Press.

Fodor, Jerry A. 2008. LOT 2: The language of thought revisited. Oxford: Oxford University Press.

Fried, Mirjam & Hans C. Boas (eds.). 2005. Grammatical constructions: Back to the roots. Amsterdam: Benjamins.

Halle, Morris & Alec Marantz. 1993. Distributed morphology and the pieces of inflection. In Ken Hale & Samuel J. Keyser (eds.), The view from Building 20: Essays in honor of Sylvain Bromberger, 111–176. Cambridge, MA: MIT Press.

Hasegawa, Nobuko. 2006. Honorifics. In Martin Everaert & Henk van Riemsdijk (eds.), The Blackwell companion to syntax, vol. 1, 493–543. Oxford: Blackwell Publishers.

Hauser, Marc D., Noam Chomsky & W. Tecumseh Fitch. 2002. The faculty of language: What is it, who has it, and how did it evolve? Science 298. 1569–1579.

Hinzen, Wolfram. 2011. The emergence of a systematic semantics. In Cedric Boeckx (ed.), The biolinguistic enterprise, 417–439. Oxford: Oxford University Press.

Hoffmann, Thomas & Graeme Trousdale (eds.). 2016. The Oxford handbook of Construction Grammar. Oxford: Oxford University Press.

Hopper, Paul. 1987. Emergent grammar. In Jon Aske, Natasha Beery, Laura Michaelis & Hana Filip (eds.), Proceedings of the Thirteenth Annual Meeting of the Berkeley Linguistics Society, 139–157. Berkeley, CA: Berkeley Linguistics Society.

Hyde, Daniel C., Nathan Winkler-Rhoades, Sang-Ah Lee, Veronique Izard, Kevin A. Shapiro & Elizabeth S. Spelke. 2011. Spatial and numerical abilities without a complete natural language. Neuropsychologia 49. 924–936.

Jackendoff, Ray. 1972. Semantic interpretation in Generative Grammar. Cambridge, MA: MIT Press.

Johnson, David E. & Paul M. Postal. 1980. Arc pair grammar. Princeton, NJ: Princeton University Press.

Karttunen, Lauri. 1989. Radical lexicalism. In Mark R. Baltin & Anthony S. Kroch (eds.), Alternative conceptions of phrase structure, 43–65. Chicago, IL: University of Chicago Press.

Kremers, Joost. 2013. Linearisation as repair. In Patrick Brandt & Eric Fuß (eds.), Repairs: The added value of being wrong, 207–236. Berlin & Boston: de Gruyter Mouton.

Kutas, Marta & Steven A. Hillyard. 1980. Reading senseless sentences: Brain potentials reflect semantic incongruity. Science 207(4427). 203–208.

Lapointe, Steve. 1980. A theory of grammatical agreement. Amherst, MA: University of Massachusetts dissertation.

Lerdahl, Fred & Ray Jackendoff. 1983. A generative theory of tonal music. Cambridge, MA: MIT Press.

Miyagawa, Shigeru. 2017. Agreement beyond phi. Cambridge, MA: MIT Press.

Müller, Sonja. 2011. (Un)informativität und Grammatik. Extraktion aus Nebensätzen im Deutschen. Tübingen: Stauffenburg.

Newmeyer, Frederick J. 2005. Possible and probable languages. Oxford: Oxford University Press.

Oyharçabal, Beñat. 1993. Verb agreement with non-arguments: On allocutive agreement. In José Ignacio Hualde & Jon Ortiz de Urbina (eds.), Generative studies in Basque linguistics, 89–114. Amsterdam: Benjamins.

Pankau, Andreas, Craig Thiersch & Kay-Michael Würzner. 2010. Spurious ambiguities and the Parentheticals debate. In Thomas Hanneforth & Gisbert Fanselow (eds.), Language and logos: Studies in theoretical and computational linguistics, 127–144. Berlin: Akademie Verlag.

Patel, Aniruddh D., Edward Gibson, Jennifer Ratner, Mireille Besson & Phillip J. Holcomb. 1998. Processing syntactic relations in language and music: An event-related potential study. Journal of Cognitive Neuroscience 10(6). 717–733.

Reinhart, Tanya. 2006. Interface strategies. Cambridge, MA: MIT Press.

Sasse, Hans-Jürgen. 1993. Syntactic categories and subcategories. In Joachim Jacobs, Arnim von Stechow, Wolfgang Sternefeld & Theo Vennemann (eds.), Syntax. Ein internationales Handbuch zeitgenössischer Forschung / An international handbook of contemporary research, vol. 1, 646–685. Berlin & New York: de Gruyter.

Selkirk, Elisabeth. 1982. The syntax of words. Cambridge, MA: MIT Press.

Shusterman, Anna, Sang Ah Lee & Elizabeth S. Spelke. 2011. Cognitive effects of language on human navigation. Cognition 120. 186–201.

Spelke, Elizabeth. 2003. What makes us smart? Core knowledge and natural language. In Dedre Gentner & Susan Goldin-Meadow (eds.), Language and mind: Advances in the study of language and thought, 277–311. Cambridge, MA: MIT Press.

Spelke, Elizabeth S. & Sanna Tsivkin. 2001. Language and number: A bilingual training study. Cognition 78. 45–88.

Steedman, Mark. 1991. Structure and intonation. Language 67. 260–296.

Struckmeier, Volker. 2014. Scrambling ohne Informationsstruktur? Prosodische, semantische und syntaktische Faktoren der deutschen Wortstellung. Berlin: Akademie Verlag.

Struckmeier, Volker. 2017. Against information structure heads: A relational analysis of German scrambling. Glossa 2. 1–29.

Struckmeier, Volker. 2020. Cartography cannot express scrambling restrictions: Experimental evidence for a relational approach. In Joost Kremers & Gerrit Kentner (eds.), Prosody in syntactic encoding, 265–301. Berlin & Boston: de Gruyter Mouton.

Van Valin, Robert D. Jr. 2001. Functional linguistics. In Mark Aronoff & Janie Rees-Miller (eds.), The handbook of linguistics, 319–336. Oxford: Blackwell.

Wilson, Edward O. 1999. Consilience: The unity of knowledge. New York: Vintage Books.

Wunderlich, Dieter. 1996. Lexical categories. Theoretical Linguistics 22. 1–48.

Published Online: 2021-01-22
Published in Print: 2021-02-26

© 2020 Struckmeier and Pankau, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
