Flat structure: a minimalist program for syntax

Giuseppe Varaschin and Peter W. Culicover

Published: July 19, 2024
Abstract

We explore the possibility of assuming largely flat syntactic structures in Simpler Syntax, suggesting that these are plausible alternatives to conventional hierarchical structures. We consider the implications of flat structure for analyses of various linguistic phenomena in English, including heavy NP shift, extraposition, topicalization and constituent order variation in the VP. We also sketch a general strategy to circumvent some of the problems flat structure is said to cause for semantic interpretation. Our proposals eliminate the need for movement, unpronounced copies and feature-bearing nodes postulated to trigger syntactic operations. We assume the Parallel Architecture and use declarative schemas to establish direct correspondences between phonology on the one hand and syntactic and semantic structures on the other. The resulting picture is one in which narrow syntax can be relatively stable across languages and constructions, largely reflecting the structure of human thought, and the main source of linguistic variation is the linearization of conceptual and syntactic structures. Unlike other minimalist theories that reach a similar conclusion, the theory we propose takes mappings to phonology to be central to the architecture of grammar.

1 Introduction

A defining characteristic of virtually all work in generative grammar is the idea that natural language expressions have a hierarchical structure that can be inferred on the basis of distributional evidence, i.e. so-called “constituency tests” (Harris 1951; Miller 1992; Müller 2023; Wells 1947, i.a.). Over the past 30 years, a significant body of work has gone beyond this simple claim, embracing a view in which syntactic representations are far more complex than what purely distributional analyses can reveal. On this approach, structures are populated by a plethora of covert elements such as empty heads, unpronounced copies and feature-bearing nodes postulated to trigger operations that move around parts of structures. Motivation for this typically comes from the additional assumption that syntax should provide a transparent and cross-linguistically stable mapping to meaning, on the one hand, and to linguistic form, on the other (Chomsky 2000, 2001, 2005; Cinque 1999, 2023; Cinque and Rizzi 2008; Trotzke 2015, i.a.).

In contrast, on the basis of learnability considerations, Simpler Syntax (Culicover and Jackendoff 2005) takes the view that syntactic structure should be as simple as possible and complexity should be realized in other components of grammar. A corollary is that syntactic structure should not be invoked if descriptions in terms of semantics, phonology, pragmatics, processing or a combination of these are explanatorily sufficient (Culicover 2013b; Culicover et al. 2022).

In this paper we carry out an experiment that takes seriously Simpler Syntax’s minimalist perspective on syntactic structure. We do not question that phrases are hierarchically structured in non-arbitrary ways. But we explore the consequences of assuming the minimal hierarchical structure consistent with the goal of accounting for the correspondence between the form of expressions and their meanings.

We suggest, based on the analysis of a range of constructions in English, that the structure of major phrases such as TP, VP and NP is much simpler than is typically assumed; in fact, it is largely flat. In particular, all complements and adjuncts are sisters of a head and, for each such head, there is no recursion of multiple phrasal projections. On this approach, there is only one segment per individual category, and no intermediate bar-levels or functional layers between a head and its maximal projection. We show that much of the descriptive work that has traditionally been done by syntactic structure can be done by positing more flexible interfaces with phonology and semantics, in a way that accords more autonomy to these components. Phenomena that have typically been treated in terms of movement in classical analyses can be characterized in terms of constructional correspondences between flat structure, semantic interpretation and linear order. In particular, instances of local movement are better analyzed in terms of alternative linearizations of a single flat structure, eliminating much of the motivation for recursive V- or N-headed projections. Of course, assuming flat structure in this sense is not without problems, and we point some of them out as we proceed.

The view that syntactic structure is flat, though out of the mainstream, is not without precedent. Flat structure in our sense was standardly assumed in early generative grammar (Chomsky 1965, 1970, 1981), as well as in non-transformational theories such as LFG (Bresnan 1982; Simpson and Bresnan 1983), GPSG (Gazdar et al. 1985; Uszkoreit 1986) and HPSG (Pollard and Sag 1994; Przepiórkowski 1999). Empirical and conceptual arguments for flat structure were offered by Nerbonne (1994), Dowty (1996), and Pollard (1996b), and more recently in works such as Culicover and Jackendoff (2005), Wetta (2015), Krivochen (2015), Miliorini (2021), McInnerney (2022) and Goto and Ishii (2022). In addition, there have been a number of proposals that assume flat structure in the analysis of particular languages, e.g. Borsley (2006); Carnie (2005); Hale (1983); Mohanan (1983).[1]

We explore here the possibility that flat structure is not only a plausible option from a syntactic perspective, but the correct option, based on the analysis of a number of particular constructions, some of which have not been invoked in prior studies. We work out an account of flat structure that is consistent with Simpler Syntax and that is entirely natural from the perspective of linearization and semantic interpretation embodied by the Parallel Architecture (Culicover 2021; Jackendoff 2002; Jackendoff and Audring 2020; Varaschin 2021).

The structure of the paper is as follows. In Section 2 we argue that much of the richness that is ascribed to syntactic structure in the contemporary literature follows from two foundational principles: Meaning in Structure and Order in Structure. In Section 3 we look at phenomena in English word order and interpretation that offer empirical evidence for flat structure. Regarding word order, we show that analyses of heavy NP shift (Section 3.1) and extraposition of relative and result clauses (Section 3.2) require surface filters that guarantee that the derived orders satisfy conditions of focus, weight, dependency and similar factors. The fact that such filters are necessary means that using movement to produce the hierarchical structures for the ordering is superfluous. A simpler approach that captures the same facts is one in which the ordering is free and the independently required filters do the descriptive work of accounting for linear order. Regarding interpretation, we review evidence that suggests that while some constraints on anaphora favor a left branching structure, others favor a right-branching structure (Section 3.3). This suggests that what is needed is a flat syntactic structure and a more sophisticated mapping between form and meaning.

Section 4 introduces a framework for describing constructions. Constructional descriptions in this framework have a crucial property that is required by flat structure: they allow for licensing of constituent order variation and free order without requiring movement. Section 5 illustrates how this licensing works for VP topicalization in English. Section 6 extends the analysis to linear order variation in the English VP.

Section 7 turns to the problem of interpreting flat structure. Typically, the interpretation of each node in a hierarchical structure is determined by the semantic types of its daughters. But if the structure is flat, this does not suffice. In a flat structure, subconstituents of VPs and NPs stand in a symmetric relation to each other, whereas the semantic scope relations between such constituents are asymmetric. We propose that some rules of interpretation for subconstituents of NP and VP can be formulated in terms of string-adjacency (Dowty 1996). This implies that, in addition to interpretive rules which relate syntax to semantics, there exist also direct connections between semantics and phonology.

Section 8 summarizes the paper and points to a number of implications. In order to facilitate readability and highlight the main points of the argument, technical details of the constructional framework are relegated to an Appendix.

2 Functions of hierarchical structure

In this section, we discuss some of the reasons why complex hierarchical structures came to be so widely accepted in syntactic theory. We argue that this practice has been motivated, at least in part, by two interdependent assumptions.[2]

  1. Meaning in Structure (MS): The syntactic structure of a phrase fully determines its meaning in a cross-linguistically stable way.

  2. Order in Structure (OS): The syntactic structure of a phrase determines the linear order of its phonological terminals.

We illustrate the idea of MS with the concrete example of the structure of the VP speak to Otto about Robin in the garden on Tuesday (see also McInnerney 2022: Ch. 4). There have been three families of proposals over the years: right-branching (Chomsky 1995; Hale and Keyser 2002; Kayne 2004; Larson 1988), left-branching (Jackendoff 1977; Lakoff and Ross 1976) and mixed approaches which assume both types of structures for different kinds of VP-internal constituents (Ernst 2002; Harley 2014; Müller to appear; Pollard and Sag 1994). For simplicity of presentation we focus on a uniform right-branching analysis like that of Schweikert (2005: 132) and Takamine (2010: 129), who posit a basic structure like (1) before movements:

(1)

The theoretical context for (1) is the idea that clausal structure follows a universal blueprint that is virtually invariant across languages – one that imposes strict binary branching, endocentricity and, in some approaches, a rigid hierarchical order among heads (Cinque 1999; Kayne 1994). Moreover, the organization of phrases in (1) is semantically transparent, reflecting a universal thematic hierarchy. This hierarchy is motivated primarily on the basis of cross-linguistic facts about unmarked word orders (Cinque 2023; Schweikert 2005; Takamine 2010), patterns of argument realization (Baker 1997; Hale and Keyser 2002; Pylkkänen 2008), and semantic phenomena like binding (Larson 1988; Pesetsky 1995; Miyagawa and Tsujioka 2004). Of course, it is necessary to appeal to several movement operations to map (1) to the appropriate linear order: e.g. movement of the verb and a series of remnant movements of the extended verbal projections.

As (1) suggests, MS goes beyond the principle of compositionality, narrowly construed. What defines MS is not just the idea that there is a systematic relation between the meaning of a phrase and the meaning of its parts, but that every aspect of the meaning of a phrase must be traced back to the meaning of one of its subconstituents – even when there is no language-internal distributional evidence for a subconstituent with that meaning. This implies, crucially, that there are no other factors (e.g. linear order, lexical representations, constructions) that may contribute to the meaning of a phrase beyond the discrete meanings of its immediate daughters. Thus, one has to posit constituents like Theme0 and Temp0 in (1) to encode thematic relations which could otherwise be encoded constructionally or as part of the lexical meaning of the verb.

Przepiórkowski (1999), Culicover and Jackendoff (2005), Wetta (2015), Miliorini (2021) and McInnerney (2022) review the traditional arguments for both right and left branching VP structures and argue that the evidence for them is not conclusive. In the absence of strong evidence for hierarchical structure, representational economy recommends a flat structure like (2) as the null hypothesis (Chomsky 1965: 196; Culicover and Jackendoff 2005: 109–110). In addition, (2) also does not require movements to derive the correct linear order.

(2)

The more general problem with approaches that assume structure beyond (2) is that the arguments for them are not based on purely distributional evidence, but crucially rely on some version of MS or OS – the latter especially in work following Kayne (1994). Moreover, as we see in more detail in Section 3, there is also empirical evidence in favor of a purely flat VP structure like (2).

Let us now turn to OS. What is kept constant throughout most developments in syntactic theory is the idea that the syntax generates one or more structures – i.e. phrase markers – on which there is defined an ordering of terminal elements. Associated with these elements are phonological representations, so that one of these syntactic structures – the last in a derivation – has an ordering of terminals that determines the phonological form of the sentence. The origin of this assumption is arguably Chomsky’s (1955) proposal to treat syntax as an algebra with syntactic categories as primitives and concatenation as the fundamental recursive operation, directly generating ordered strings as the output of the system.

This notion of a phrase-marker and the corresponding tenet that the strings of formatives that constitute the linguistic expression are constituents of the syntactic structure was implicitly adopted at least up to the Minimalist Program (Chomsky 1995, et seq.). Minimalism marked the abandonment of concatenation in favor of bottom-up unordered set-formation (Merge) as the core recursive operation in language.[3] As a result, linear order ceased to be a primitive property of the objects generated by syntax. Nonetheless, following Chomsky’s (1995: 340) reinterpretation of Kayne’s (1994) Linear Correspondence Axiom, some minimalist theories still assume a linearization algorithm that reads linear order off of asymmetric c-command relations in syntax, possibly also taking into account other syntactic features such as labels and headedness (Collins and Stabler 2016; Fukui and Takano 1998; Guimarães 2004; Stabler 2011). This means that, even though syntactic objects do not themselves embody precedence relations, the latter are nonetheless fully and uniquely determined by the information contained in syntactic objects.[4]
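To make the OS logic concrete, the following is a deliberately simplified Python sketch (ours, not a faithful implementation of the LCA or of any published algorithm): each binary node is encoded with its asymmetrically c-commanding daughter first, and the terminal string is obtained purely by reading the tree, with no independent ordering statements.

```python
# Toy rendering of an OS-style linearization: linear order is read off hierarchical
# structure rather than stated independently.  In this simplification, the daughter
# that asymmetrically c-commands its sister is encoded first at every binary node,
# and terminals inherit their order entirely from the tree.

def linearize(node):
    """Return the terminal string determined by the tree.
    A node is either a terminal string or a pair (c_commander, c_commandee)."""
    if isinstance(node, str):
        return [node]
    higher, lower = node
    return linearize(higher) + linearize(lower)

if __name__ == "__main__":
    # [TP Sandy [T' T [VP kissed Chris]]], each pair ordered (asymmetric c-commander, sister)
    tree = ("Sandy", ("T", ("kissed", "Chris")))
    print(" ".join(t for t in linearize(tree) if t != "T"))  # Sandy kissed Chris
```

On such a view, any reordering of the terminals requires a different syntactic structure; this is the assumption that the following sections argue against.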

Both the earlier view (where linear order is a property of syntactic structure) as well as the more recent view (where linear order is algorithmically derived from syntactic structure) are instances of OS. OS entails that two strings are adjacent only if they are sisters in the syntactic structure, or are located on neighboring edges of sisters in the syntactic structure.

As we proceed we provide evidence against such a view, showing that in many cases the order is free, subject to conditions that are external to narrow syntax.

3 Evidence for flat structure

In this section we look at several phenomena that we believe can best be accounted for by abandoning the correspondence between hierarchical structure and linear order in the English VP. These involve constructions like adjunct clauses, heavy NP shift, extraposition from subjects, objects and wh-phrases, and result clause extraposition. Since some of the richer structures posited in the literature have been motivated partly on the basis of binding and scope, we also address these considerations, and illustrate some cases where hierarchical VP structures lead to paradoxes involving binding theoretic constraints.

We show that tying the linear order and semantic binding relations to hierarchical structure leads to complex, unintuitive and arbitrary assumptions about the hierarchical structure. The alternative, that the structure is flat, leads to simple, intuitive and principled accounts. Such accounts require explicit reference to linear order, and link the linear order to aspects of semantics, information structure and prosody, not hierarchical syntactic structure. Crucially, even if hierarchical structure and movement are assumed in the derivation of linear order, it is necessary to refer to independent constraints on linear order anyway.[5] This renders the assumption of hierarchical structure superfluous. Thus, hierarchical structure is neither sufficient nor necessary to account for order.

3.1 Heavy NP shift

Consider first Heavy NP Shift (HNPS).[6] We review various ways in which HNPS might be viewed as movement, and argue that they are problematic at best. We conclude that the facts of HNPS are adequately captured under the assumptions that VP lacks internal syntactic structure, and that the order of constituents reflects ordering constraints stated over the flat structure.

In the analyses of Rochemont and Culicover (1990) and Culicover and Rochemont (1990), HNPS is ostensibly an A′ movement rule. One reason for this is that there is no canonical structure in the English VP that licenses an NP in final position.[7] That is, the canonical order of English is (3a). But the VP-final NP is more acceptable when it is ‘heavy’, as in (3b) (Ross 1967: 51ff).[8]

(3)
a.
Chris lent {it / money / a bicycle that had seen better days} to Sandy.
b.
Chris lent to Sandy {*it / ?money / a bicycle that had seen better days}.

In a theory in which hierarchical structure determines linear order, there must be a derived syntactic structure corresponding to the order in (3b). If the canonical order is (3a), then one of the following holds.

  1. The NP moves from the post-verbal position to the right (Ross 1967: 56).

  2. The NP is higher than the VP, and the VP moves to the left of the NP (Larson 1988: 347; Wallenberg 2015: 337).

The first option is problematic for several reasons. First, if movements are structure preserving in the narrow sense, there must be a canonical NP position following the PP for the NP to move to. But then there is no motivation for movement.

The alternative is that rightward HNPS is not narrowly structure preserving, but an A′ movement to an empty specifier position analogous to topicalization. However, A′ movement is typically unbounded, and HNPS is radically clause-bound, as Ross (1967) noted – see (4).

(4)
a.
Otto mentioned [that Benny [VP ate [the pizza that we ordered] i at breakfast]] to Claude.
b.
Otto mentioned [that Benny [VP ate t i at breakfast] [the pizza that we ordered] i ] to Claude.
c.
* Otto mentioned [that Benny [VP ate t i at breakfast] to Claude] [the pizza that we ordered] i .

The bounded nature of HNPS disfavors prima facie an A′ movement approach. We could state a principle that says that only leftward A′ movements are unbounded, but this would be a stipulation that does not follow from anything.

Moreover, taking A′ movement to be the product of (Internal) Merge, we should expect HNPS to exhibit reconstruction effects. For instance, a quantifier moved to the right should be able to bind a pronoun that precedes it, but the examples in (5) show that the opposite is the case (Culicover and Jackendoff 2005: 119–120).

(5)
a.
Chris bought [every book that was for sale] i from its i author.
b.
* Chris bought from its i author [every book that was for sale] i .
c.
* Chris bought [a book that she j had written] from every author j .
d.
Chris bought from every author j [a book that she j had written].

Finally, the idea that HNPS is rightward movement is difficult to reconcile with the view that movements check triggering features with an independently motivated functional head (Chomsky 2000, et seq.). The structure in VP-final position in (6) that could satisfy this requirement is not independently motivated.[9]

(6)

Consider the second option. Rochemont and Culicover (1997) explored the possibility that HNPS is actually leftward movement, with subsequent leftward movement of the remnant to a position above it. Such a derivation is shown in (7).

(7)

But this option – which is entirely feasible in a derivational approach – also fails to capture the fact that HNPS is bounded, as in (4). Moreover, HNPS, unlike a true A′ construction, cannot apply to the complement of a preposition (8).

(8)
a.
Who i were you talking to t i about the game?
b.
* I was talking to t i about the game [some guy that I met at the party] i .

However, Rochemont and Culicover (1997) failed to notice that Larson’s (1988) proposal for HNPS avoids the problem of bounding and the problem of preposition stranding. On the assumption that the direct object is the specifier of the VP and does not itself move, either V′, which contains [V0-PP], raises to the left, yielding the order [V0-PP]-NP, or V0 does, yielding the order V0-[NP-PP], as in (9).

(9)

Although this analysis avoids the problems of bounding and preposition stranding, there are a number of additional stipulations that have to be made in order for it to work properly. First, the higher V0 node has to be filled by something overt; it cannot be filled by an invisible element, otherwise the verb will not appear in the right position in the linear order. Second, the possibility of binding in (5d) poses further difficulties. Since the antecedent is embedded inside a VP, it would have to move covertly to a position where it c-commands the pronoun.[10]

Finally, and most importantly, if the NP is not heavy, e.g. if it is a pronoun or a one-word name, then raising of V′ yields an unacceptable sentence. So this derivation has to be filtered by a condition that licenses only heavy NPs in what is destined to be the final position. This filter renders the movement unnecessary if we allow free ordering of the complements of V0.

The situation is further complicated by the fact that the heavy NP need not be absolutely final in the VP; cf. (10).

(10)
a.
Chris put [the beer that he bought] very carefully in the refrigerator in the evening.
b.
Chris put very carefully [the beer that he bought] in the refrigerator in the evening.
c.
Chris put very carefully in the refrigerator [the beer that he bought] in the evening.
d.
Chris put very carefully in the refrigerator in the evening [the beer that he bought].

These various orderings can be derived by adjusting the hierarchical position in the tree of [NP the beer …] above the constituents that it precedes. Such variability of hierarchical structure undermines the assumption of a single canonical structure. But to derive all of the orderings from a single structure, it is also possible to assume that the underlying order is (11), and the verb picks up or leaves behind arbitrary adverbs and PPs as it moves up the tree. So, for example, (10) could be derived as shown in (11).

(11)

Another strategy would be to adapt Fanselow’s (2001, 2003) proposal for scrambling in German. On this proposal, complements and adjuncts of a verb can be merged into the VP in various orders, which derives the observed variation without movement. Again, deriving the linear order as a matter of syntactic structure leaves unresolved the problem that the order must be independently referred to in order to accommodate the heaviness condition on rightward shifted NPs.

Furthermore, the rightward placement of heavy constituents is not limited to NP, which further complicates the analysis. Clauses are preferred in VP-final position (12), as are PPs (13), adverbs (14) and adjective phrases (15).

(12)
a.
? I mentioned that I was anxiously planning to leave tomorrow to her.
b.
I mentioned to her that I was anxiously planning to leave tomorrow.
(13)
a.
? I talked about all of the tasks that needed to be completed to her.
b.
I talked to her about all of the tasks that needed to be completed.
(14)
a.
? I’ll finish painting the walls as quickly as I possibly can tomorrow.
b.
I’ll finish painting the walls tomorrow as quickly as I possibly can.
(15)
a.
? Chris seemed very unhappy about the outcome of the election to us.
b.
Chris seemed to us very unhappy about the outcome of the election.

It appears that we need constraints on order that are stated in terms of the relative weight of the constituents. We leave open the question of precisely what these constraints are. These may be complex, but they can be identified with some precision; see Büring (2013); Göbbel (2020); Hawkins (1994, 2004, 2014); Wasow (1997, 2002); Wasow and Arnold (2003), i.a. From our perspective, the main problem with any derivational analysis is this: if the XP is not heavy, e.g., if it is a pronoun, it cannot appear to the right in the VP. Such a derivation has to be filtered by a condition that licenses only heavy XPs in final position. This would be a filter constraining the relation between XPs and their position in the phonological string. But if this filter is necessary, there is no reason to derive linear order by movement – free ordering of the sisters of V0 in flat structure is sufficient.

Suppose, then, that the structure of the English VP is flat, following Culicover and Jackendoff (2005: Ch. 4). The basic syntactic constraint on order is that the phonological form of the verb must precede that of its complements. There are also ordering constraints or preferences connected to dependency, focus and constituent weight. Otherwise, the order of the constituents of VP is free. In this respect it resembles scrambling, a point to which we return below in Section 6.[11]
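As an illustration of how much descriptive work such surface conditions can do on their own, here is a minimal Python sketch (our own toy encoding, not the authors’ formalization): the only hard syntactic constraint is that the verb precedes its sisters, order is otherwise free, and a weight-based surface filter excludes light constituents such as pronouns from following heavier sisters. The word-count measure of weight is a stand-in assumption.

```python
# Minimal sketch of linear order over a flat VP: the verb precedes its sisters,
# order among the sisters is otherwise free, and a surface filter blocks a light
# constituent (e.g. a pronoun) from following a heavier sister.
# Weight = word count is a crude stand-in for a proper weight/prosodic measure.

from itertools import permutations

def weight(constituent: str) -> int:
    """Crude weight measure: number of words."""
    return len(constituent.split())

def heaviness_ok(sisters) -> bool:
    """Surface filter: a one-word constituent may not follow a heavier sister."""
    return all(
        not (weight(later) == 1 and weight(earlier) > weight(later))
        for i, earlier in enumerate(sisters)
        for later in sisters[i + 1:]
    )

def admissible_orders(verb: str, sisters):
    """Enumerate linearizations of a flat VP: verb first, sisters freely ordered,
    filtered by the weight condition."""
    return [(verb,) + perm for perm in permutations(sisters) if heaviness_ok(perm)]

if __name__ == "__main__":
    # cf. (3): the heavy NP may precede or follow 'to Sandy',
    # but '*lent to Sandy it' is filtered out.
    for order in admissible_orders("lent", ("a bicycle that had seen better days", "to Sandy")):
        print(" ".join(order))
    print(admissible_orders("lent", ("it", "to Sandy")))
```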

An interesting consequence of the flat structure approach to linear order is that it rules out a derivational account of the fact that shifted constituents are ‘frozen’, as in (16) (Wexler and Culicover 1980).

(16)
a.
* the problem that j I showed t i to the student [the solution to t j ] i
b.
the problem that j I showed the student [the solution to t j ]

Under a non-movement analysis, the apparent frozenness of HNPS must be accounted for outside of syntax proper, absent any other stipulations. A promising candidate is an account in terms of processing complexity (Culicover et al. 2022).

3.2 Extrapositions

We turn next to extraposition of relative clauses and result clauses. Examples are of the general form in (17). We refer to some people as the antecedent of the extraposed clause, and notate the relationship with subscripts.

(17)
Some people i were at the party [S who didn’t like pizza] i .

Extraposition raises issues that are related to, but different from, those concerning HNPS. It is generally agreed that extraposition is not derived by syntactic movement; cf. Webelhuth et al. (2013); Rochemont (2015); Göbbel (2020). Still at issue, however, is whether there is a correspondence in the analysis of extraposition constructions between the linear order and hierarchical syntactic structure.

At the center of the analysis of extraposition are these four observations (Rochemont and Culicover 1990).

Observation 1. Extraposition is possible from NPs in a range of positions in the classical hierarchical structure: object, subject, and Spec,CP. In addition, there is result-clause extraposition. The examples in (18) illustrate:

(18)
a.
I saw [some people] i when I went to the concert [that I recognized] i .
(Extraposition from object (OX))
b.
[Some people] i were at the concert [that I recognized] i .
(Extraposition from subject (SX))
c.
[So many people] i were at the concert [that I was stunned] i .
(Result clause extraposition (RX))
d.
[How many people] i did you say were at the concert just now [that you recognized] i ? (Extraposition from wh (WX))

Observation 2. The order of multiple extraposed clauses is the inverse of the order of their antecedents. The linear orders are illustrated in (19)–(23). We show only the more local relationships. While the examples are complex, the judgments appear to favor the inverse ordering.

(19)
OX ≪ SX
a.
A woman i solved the puzzle j yesterday [that was published in the Times (OX)] j [who used to work for me (SX)] i .
b.
* A woman i solved the puzzle j yesterday [who used to work for me (SX)] i [that was published in the Times (OX)] j .
(20)
SX ≪ RX
a.
[So many i people] j were at the concert yesterday [that we recognized (SX)] j [that we were astonished (RX)] i .
b.
* [So many i people] j were at the concert yesterday [that we were astonished (RX)] i [that we recognized (SX)] j .
(21)
SX ≪ WX
a.
?(?) Which room i did a man j enter last night [who had blond hair (SX)] j [that you had just finished painting (WX)] i ?
b.
* Which room i did a man j enter last night [that you had just finished painting (WX)] i [who had blond hair (SX)] j ?
(22)
OX ≪ WX
a.
Which article i did you find on a table j yesterday [that was in the living room (OX)] j [that you claimed was written by your best friend (WX)] i ?
b.
* Which article i did you find on a table j yesterday [that you claimed was written by your best friend (WX)] i [that was in the living room (OX)] j ?
(23)
RX ≪ WX
a.
? Which article i did so many people j criticize yesterday [that you were offended (RX)] j [that you recently published in LI (WX)] i ?
b.
* Which article i did so many people j criticize yesterday [that you recently published in LI (WX)] i [that you were offended (RX)] j ?

Observation 3. It is possible to account for the order by attaching each type of extraposed clause at a different position in a hierarchical structure, along the lines of (24).

(24)

This structure accounts for the fact that the linear ordering is OX ≪ SX ≪ RX ≪ WX.[12]

Observation 4. Assuming the structure in (24), ellipsis of VP should include OX but exclude SX, RX, and WX as in (25).

(25)
a.
Sandy solved the puzzle yesterday [that was published in the Times (OX)] but Chris didn’t < solve the puzzle yesterday [that was published in the Times (OX)] > .
b.
A woman solved the puzzle yesterday [that was published in the Times (OX)] [who used to work for me (SX)] and a man did < solve the puzzle yesterday [that was published in the Times (OX)] > [who never worked for me (SX)].
c.
How many professors went to the party that was in the gym who were not invited, and how many students did < go to the party that was in the gym > who were invited?

On the basis of these observations, the analysis proposed by Rochemont and Culicover (1990), closely following Guéron and May (1984), assumed ambiguity of attachment of extraposed clauses, as follows.

  1. A clause extraposed from objects (OX) is adjoined to VP.

  2. A clause extraposed from subjects (SX) is adjoined to IP or VP.

  3. A result clause (RX) is adjoined to CP or TP or VP.

On this analysis, a constituent further to the right is attached higher in the tree.

However, the motivations for the structure hypothesized in (24) are open to question. Contrary to Observation 4, it seems that ellipsis can target all types of extraposition. This suggests that all extraposed clauses can be constituents of VP, regardless of the height of their antecedents. Example (26a) shows that SX may undergo ellipsis, (26b) shows that RX may undergo ellipsis, and (26c) shows the same for WX.[13]

(26)
a.
Ten of the students were at the party [that I invited], and five weren’t < at the party [that I invited]. >
b.
So many students came to the concert [that I was astonished], and so many professors did < come to the concert [that I was astonished] > , too.
c.
Which students came to the concert [that you recognized] and which professors did < come to the concert [that you recognized] > ?

Moreover, the examples in (27) show that multiple extraposed clauses may undergo ellipsis.

(27)
a.
Ten of the students were at the party yesterday [that I gave] [that I invited], and five weren’t < at the party yesterday [that I gave] [that I invited] > . (OX and SX)
b.
So many students came to the party yesterday [that I gave] [that I was astonished], and so many professors did < come to the party yesterday [that I gave] [that I was astonished] > , too. (OX and RX)
c.
So many students came to the party yesterday [that I gave] [that I invited] [that I was astonished], and so many professors did < come to the party yesterday [that I gave] [that I invited] [that I was astonished] > , too. (OX, SX and RX)

So, contrary to what (24) might predict, ellipsis appears to treat OX, SX and RX alike as constituents of VP.[14]

Furthermore, the constructed examples in (28) show that a clause extraposed from a subject may precede a clausal complement in the VP; this suggests that such extraposed clauses may be VP-internal.

(28)
a.
Several people have suggested [who have some basis for knowing] [that there will be major increases in sea levels in the next decade].
b.
No one would ever claim [who is not absolutely certain] [that this evidence is conclusive].

An alleged motivation for the rich structure in (24) is the fact that it predicts interactions between extraposition and binding (e.g. Göbbel 2020). Condition C of the binding theory rules out c-command of an r-expression by a coindexed pronoun. On this view, examples such as (29a) suggest that RX is higher than TP, since the pronominal subject could not c-command the r-expression. By the same logic, there should not be a Condition C violation when a pronominal object is coindexed with an NP in OX (29b). But a pronominal subject should produce Condition C violations with OX, a prediction that is contradicted by (30).

(29)
a.
She i ate so many cookies [that Susan i got sick].
b.
I gave her i a present for Christmas yesterday [that Susan i didn’t like].
(30)
a.
She i is frequently the subject of gossip these days [that Susan i ’s idiot father always denies].
b.
She i ran over a man last week [that Robert says was only trying to give Susan i directions].

Given (30), non-flat accounts could stipulate that the NP containing the antecedent is covertly raised to a position above the antecedent, leaving behind a copy of itself with the r-expression replaced by a pronoun through vehicle change (see Göbbel 2020: 108–109). However, discussing naturally occurring examples similar to (30), Varaschin et al. (in press) propose an arguably simpler alternative that is compatible with flat structure in the VP (see also Sells 1987; Yashima 2015). They show that an r-expression can be c-commanded by a coindexed pronoun if it is in an anti-logophoric context (i.e. a context where its referent does not count as a perspective bearer). This is the case in the examples in (29). As (31) shows, when this condition is not met, coreference is not possible even when, according to (24), the pronoun does not c-command the r-expression.

(31)
a.
* I heard from her i about a lovely movie today that Susan i really liked.
b.
* That Susan i would finally win was expected by her i .
c.
* Her i greatest fear is that Susan i might lose the election.

Finally, the option of hierarchical attachment along with the possibility of uniform attachment to VP creates a paradox. While hierarchical attachment can account for linear ordering when there are multiple extraposed clauses, attachment to VP cannot. If we allow OX, SX and RX to be adjoined to VP, even optionally, we cannot use height of attachment to account for their ordering. We still have to impose an independent ordering condition on them, guaranteeing OX ≪ SX ≪ RX ≪ WX.[15] But if we require this condition, then we do not also need hierarchical structure to induce linear order. As with HNPS, hierarchical structure is neither necessary nor sufficient; the structure may then be a flat non-recursive VP.[16]
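To illustrate what such an independent ordering condition might look like when stated directly over the surface string, here is a small Python sketch (our own encoding of the generalization, not a piece of the theory): extraposed clauses are tagged by type, and the filter simply requires their left-to-right order to respect OX ≪ SX ≪ RX ≪ WX.

```python
# Sketch of a surface filter on multiple extraposed clauses: their linear order must
# respect OX << SX << RX << WX regardless of where they attach in the tree.
# The numeric ranks are our own encoding of the generalization in the text.

RANK = {"OX": 0, "SX": 1, "RX": 2, "WX": 3}

def extraposition_order_ok(sequence) -> bool:
    """True iff the left-to-right sequence of extraposed clause types is non-decreasing."""
    ranks = [RANK[t] for t in sequence]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

if __name__ == "__main__":
    print(extraposition_order_ok(["OX", "SX"]))        # True, cf. (19a)
    print(extraposition_order_ok(["SX", "OX"]))        # False, cf. (19b)
    print(extraposition_order_ok(["OX", "SX", "RX"]))  # True, cf. (27c)
```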

3.3 Asymmetry paradoxes

Our final argument for flat structure in the English VP is based on binding and coreference phenomena that have been claimed to require asymmetric c-command (Larson 1988, i.a.), but which, taken collectively, do not converge on a uniform structure. As a result, each phenomenon winds up entailing a different structural description for the same string (Barss and Lasnik 1986; Pesetsky 1995). This paradox undermines the binding-theoretic motivations for hierarchical structure. Upon closer examination, the asymmetries in question are better represented in linear order and semantics, which means that the syntactic structure itself can be flat.

We begin with quantifier binding. Assuming that binding is only sensitive to c-command, it has been suggested that the possibility of quantifier binding into adverbial clauses, as in (32), entails a right-branching structure where the object quantifier c-commands into the adjunct (Larson 1988, 1990, i.a.).

(32)
I hired every actor i [before he i met you].

The traditional left-branching analysis of clausal adjuncts (Ernst 2002; Harley 2014; Jackendoff 1977; Lakoff and Ross 1976) makes the wrong predictions here because the PP would be higher than the QNP in object position. The right-branching analysis (Hale and Keyser 2002; Kayne 2004; Larson 1988) posits the correct c-command relations. However, as several authors note, right-branching creates a paradox in connection to Condition C (Bianchi 2001: 5, i.a.). If the QNP c-commands the pronoun in (32), then we should see a Condition C violation in (33), but we don’t (Solan 1983).[17]

(33)
a.
I hired him i [before John i met you].
b.
I talked to her i [after Amy i arrived].

The paradox is that the object needs to asymmetrically c-command into the adverbial clause for the purposes of quantifier binding, but not for the purposes of coreference with an r-expression. Hornstein (1995: 110) attempts to resolve the paradox by assuming that the adverbial clause occupies a higher position in (33) and a lower position in (32). The problem with this view is that coreference and binding seem to be simultaneously possible in such structures; see (34).

(34)
I would introduce no actor i to her k [before Amy k commits to hiring him i ].

There are other ways of averting this problem, but they raise difficulties of their own. On the basis of different facts, Culicover (1992) proposed that quantifiers within the VP undergo quantifier raising to a higher A′ position inside the VP, allowing them to bind into adverbial clauses (assuming, contra Reinhart (1983, 2006), that quantifier binding is licensed at LF). The structure in (35) illustrates.[18]

(35)
[VP no actor i [introduce t i to her k ] [before Amy k commits to hiring him i ]]

A technical difficulty with this analysis is that it violates the typical constraints that rule out weak crossover in structures like (36). The QNP in (35) is simultaneously binding a trace and a pronoun with the former not c-commanding the latter. This is precisely the configuration invoked to exclude the LF of (36) after quantifier raising has taken place (Koopman and Sportiche 1983; Safir 1984, i.a.).

(36)
* I introduced his i co-star to no actor i .

Therefore, structures like (35) overgenerate when it comes to predicting the anaphoric possibilities of QNPs. In addition, quantifier raising to VP, in contrast to quantifier raising to CP, lacks the independent motivation of accounting for the fact that VP-internal QNPs can take wide scope over the subject.

Since the idea that anaphoric dependencies are solely governed by c-command leads to contradictory structures or to overgeneration, it is preferable to seek an alternative account. This point is reinforced by other counterexamples to c-command documented in Barker (2012). Following previous work, we hypothesize that, in addition to structural asymmetries (e.g. subject vs. object), binding and coreference are also sensitive to linear order and properties of the global discourse structure (Bruening 2014; Culicover 2013a; Varaschin 2021; Varaschin et al. in press, i.a.). These non-structural factors can explain why anaphoric dependencies are possible in structures like (34) while maintaining flat structure as a null hypothesis.

4 A framework for constructions

We now introduce a general descriptive framework that is suitable for stating the correspondences between syntactic structure, linear order and meaning in a way that overcomes the problems of previous approaches. The approach is spelled out and justified in greater detail in Culicover and Jackendoff (2005), Culicover (2021) and Varaschin (2021). We also provide more formal details in the Appendix.

We assume the Parallel Architecture of Jackendoff (2002), which models linguistic objects in terms of (at least) three independent levels of representation: phonological structure (phon), syntactic structure (syn) and conceptual structure (cs). Each of these structures represents different aspects of linguistic expressions. Thus, we avoid assigning to a single formal object the burden of modeling types of linguistic information as diverse as linear order, syntactic constituency, thematic roles and inference. A feature of this architecture that we make use of is the possibility of direct connections between phon and cs, bypassing syn, as indicated by the curved arrow at the bottom of Figure 1.

Figure 1: The Parallel Architecture (Jackendoff 2002).

Structures on each of the levels in Figure 1 are defined by their own characteristic primitives and are connected to structures on other levels by means of correspondence relations, which we notate with coindexing. Consider (37).

(37)

The structure in (37) represents the sentence Sandy kissed Chris. The symbol ⊕ is the string concatenation function, which yields immediate precedence relations (Chomsky 1955). Note that there is nothing in syn that signals that NP1 in (37) is pronounced as the string Sandy – this information is only represented in phon. Similarly, the string kissed is linked to both V0 and T0, as indicated by the coindex 2: it is a piece of phonology that corresponds to two different nodes in syn.[19]
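For concreteness, the division of labor can be rendered in a toy data structure (our own encoding, purely illustrative of (37)): phon carries the ordered string, syn carries the flat constituent structure, cs carries the predicate-argument structure, and shared indices state the correspondences.

```python
# Illustrative encoding of a Parallel Architecture representation like (37):
# linear order lives only in phon, constituency only in syn, meaning only in cs,
# and coindexing states which pieces of each level correspond to which.
# The concrete data structures and labels are our own.

representation_37 = {
    # phon: the concatenated string, each piece paired with a correspondence index
    "phon": [("Sandy", 1), ("kissed", 2), ("Chris", 3)],
    # syn: category-labelled nodes with indices; 'kissed' (index 2) corresponds to
    # both V0 and T0, and nothing here records pronunciation or linear order
    "syn": ("TP", [("NP", 1), ("T0", 2), ("VP", [("V0", 2), ("NP", 3)])]),
    # cs: a simplified conceptual structure carrying the same indices
    "cs": ("PAST", 2, ("KISS", ("SANDY", 1), ("CHRIS", 3))),
}

def phon_of(index, rep):
    """Recover the phonology linked to a given correspondence index, if any."""
    return [s for s, i in rep["phon"] if i == index]

if __name__ == "__main__":
    print(phon_of(2, representation_37))  # ['kissed'] -- linked to both V0 and T0 in syn
```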

A constructional schema, on this picture, is a partial finite description of what counts as a well-formed expression. This means that a single schema typically does not specify all of the details to license any single grammatical utterance: multiple schemas are needed in order to license the licit configurations in phon, syn and cs, as well as all the correspondences between these levels. To use the metaphor Pullum (2019: 62) proposed for McCawley’s (1968) formalization of node admissibility conditions, the grammar can be conceptualized as a finite library of pictures of linguistic expressions (i.e. the schemas), and an expression γ is grammatical iff every region of γ matches at least one of the pictures contained in the grammar.

Constructional schemas scale up from individual lexemes, as in (38), to idioms (39), to idiomatic expressions with variables (40), to general principles of phrase structure (41)–(43). For the sake of readability, in what follows, we use the correspondence indices themselves as variables over phon and cs objects:

(38)
(39)
(40)
(41)
head-complement schema
(42)
head-initial schema
(43)
complement order schema

The schema in (41) licenses configurations where lexical heads combine with their (optional) complements to yield a phrase of the same category. The lexical head is immediately dominated by the maximal projection and must be distinct from its complement. Given the absence of schemas stipulating other dominance possibilities, this excludes recursive projections of depth-n > 1, including recursive X′ structures. Bar levels are, thus, ruled out. It is of course possible to have multiple layers of X-headed constituents when the X-heads are different words.[20]

The schema in (42) says that a head may precede its complement. The notion of precedence employed here is not immediate precedence (⊕), but weak precedence (≪), which means that there can be material intervening between the phon of X0 and the phon of YP. The schema in (43) governs the ordering of any two complements of VP, basically licensing any arbitrary order among them. Free variables in constraints are implicitly existentially quantified: (42) says there should be at least one (but possibly more) X0 and YP daughters in the prescribed order, similarly for (43). Therefore, (41)–(43), by themselves, do not rule out the possibility of there being multiple X0s, YPs or XPs in the structures they license. This feature proves useful to account for n-ary branching structures and verbal complexes in languages like German (Culicover and Varaschin to appear).[21]

For an expression to be licensed, it suffices that each of its parts and correspondences fully instantiate some constraint: i.e. the relations defined over the basic units in phon, syn and cs and the correspondence relations defined between the units in each of these levels have to be a model of some schema in the grammar (Varaschin 2021: 196). For instance, for a language to be uniformly head-initial in all contexts, it does not suffice that its grammar includes (42). It must also be the case that there are no other schemas specifying alternative orderings for heads.[22] The grammar can be thought of as a big disjunction of statements like (38)–(42) and expressions are licensed insofar as they are models of such a disjunction.
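As a toy rendering of this model-theoretic view of licensing (our own encoding, in the spirit of the ‘library of pictures’ metaphor above), the sketch below treats the grammar as a finite set of local-tree schemas and checks every local region of a candidate syn structure against that set. Daughters are treated as an unordered bag, since order is not a property of syn; a flat VP is licensed, while a recursive VP-over-VP projection of the same head fails for want of a matching schema. The category inventory is invented for the example.

```python
# Toy rendering of licensing as node admissibility: the grammar is a finite set of
# local-tree "pictures" (mother category + bag of daughter categories), and a syn
# structure is licensed iff every local region matches at least one picture.
# Categories and the tuple encoding of trees are illustrative only.

from collections import Counter

SCHEMAS = [
    ("TP", Counter({"NP": 1, "T0": 1, "VP": 1})),
    ("VP", Counter({"V0": 1, "NP": 1, "PP": 2})),   # flat VP: V0 with all its sisters
    ("NP", Counter({"N0": 1})),
    ("PP", Counter({"P0": 1, "NP": 1})),
]

def local_trees(tree):
    """Yield (mother, Counter of daughter categories) for every branching node.
    A tree is (category, [subtrees]); a leaf is (category, [])."""
    cat, daughters = tree
    if daughters:
        yield cat, Counter(d[0] for d in daughters)
        for d in daughters:
            yield from local_trees(d)

def licensed(tree) -> bool:
    """True iff every local region of the tree matches some schema in the grammar."""
    return all(any(m == s and ds == c for s, c in SCHEMAS) for m, ds in local_trees(tree))

if __name__ == "__main__":
    np = ("NP", [("N0", [])])
    pp = ("PP", [("P0", []), np])
    flat_vp = ("VP", [("V0", []), np, pp, pp])
    nested_vp = ("VP", [("VP", [("V0", []), np]), pp, pp])  # recursive VP-over-VP
    print(licensed(flat_vp))    # True
    print(licensed(nested_vp))  # False: no schema licenses VP immediately over VP
```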

What is particularly useful for our purposes here is that formulations like (42)–(43) allow us to license variations in linear order given a single syntactic structure, without further stipulation. For illustration, suppose that a VP contains two PPs that can appear in either order, as in (44).

(44)
a.
Sandy made a cake [PP on Sunday] [PP in the kitchen].
b.
Sandy made a cake [PP in the kitchen] [PP on Sunday].

The schema (43) leaves the order between PPs underspecified. Since its content is simply ‘an XP daughter of VP may follow YP’, both orders are licensed. So, for that matter, is the ordering of the direct object with respect to the other constituents of VP, subject to ordering conditions as noted in Section 3.1. Therefore, (42) licenses the linear ordering of HNPS without hierarchical structure or movement.
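The underspecification at issue can be made explicit with a small sketch (again our own encoding, not the authors’ notation): weak precedence (≪) is modeled as ‘somewhere to the left of’ in phon, the head-initial schema (42) is the only hard order constraint, and the complement-order schema (43) adds nothing, so both orders in (44) are licensed from the same flat syn.

```python
# Sketch of order licensing over a flat VP: the head-initial schema requires only
# weak precedence (the verb occurs somewhere before each complement in phon), and
# the complement-order schema imposes no order among the complements themselves.
# The strings and the constraint encoding are illustrative.

def weakly_precedes(a: str, b: str, phon) -> bool:
    """Weak precedence (<<): a occurs somewhere before b in the phon string."""
    return phon.index(a) < phon.index(b)

def vp_order_licensed(verb: str, complements, phon) -> bool:
    """Head-initial schema: V0 << each complement.
    Complement-order schema: any relative order among complements, so no further check."""
    return all(weakly_precedes(verb, c, phon) for c in complements)

if __name__ == "__main__":
    comps = ["a cake", "on Sunday", "in the kitchen"]
    print(vp_order_licensed("made", comps,
                            ["made", "a cake", "on Sunday", "in the kitchen"]))  # True, cf. (44a)
    print(vp_order_licensed("made", comps,
                            ["made", "a cake", "in the kitchen", "on Sunday"]))  # True, cf. (44b)
```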

Many of our alternative solutions to the problems that have led to the postulation of non-flat structure will appeal to this kind of underspecification between properties of the phonological string and syntax. Underspecification of this sort is ruled out in approaches that assume that order is fully determined by syntax.

5 VP topicalization

In this section we focus on the internal structure of VP. The fact that some parts of the VP can be fronted leaving other material of the VP behind has standardly been viewed as a motivation for assuming richer VP structures. We argue that the evidence for rich hierarchical structure in the VP associated with VP topicalization can be accommodated under the conclusion of Section 3 that the structure of VP is flat.

5.1 VP topicalization paradoxes

VP topicalization is exemplified by (45).

(45)
I thought Chris would put all the money in the safe and [put all the money in the safe] he did.

The standard analysis of VP topicalization involves movement of a VP projection to an initial A′ position (Huang 1993; Müller 1998; Zagona 1988). Accordingly, (46) points to the existence of a hierarchy of multiple VP projections, along the lines of [ VP [ VP [ VP cook the potatoes] for fifteen minutes] in the morning].

(46)
They said that Chris would cook the potatoes
a.
and [cook the potatoes] i he did t i for fifteen minutes in the morning
b.
and [cook the potatoes for fifteen minutes] i he did t i in the morning
c.
and [cook the potatoes in the morning] i he did t i for fifteen minutes
d.
and [cook the potatoes for fifteen minutes in the morning] i he did t i

Examples like those in (47) suggest that there is a further split in the English VP, with the verbal head also being able to move independently of its complements (Culicover and Winkler 2019: 184–5).

(47)
a.
The other week, I went up to the Compendium bookshop in Camden Town, London NW1, to hear Iain Sinclair read from his latest novel. And read he did, the bit about the floating science fiction convention, from towards the end of Radon Daughters . The heavy metal lads rushed … [23]
b.
If you were not 100% in support of his crusade, you were his enemy to be destroyed and destroy he did a lot of good people . So we are very … [24]
c.
And write he did, a fair few gems including this one .[25]
d.
With that kind of eloquence, it is no wonder Jefferson was selected to write the Declaration. And write, he did, a document that still shines as bright today as it did [26]

Furthermore, it is possible to strand not only PPs, complement CPs and heavy NPs, but also extraposed complements and adjuncts of NPs (48)–(49).

(48)
a.
…and make the claim she did [that the Yankees would win it all this year].
b.
…and read a book he did [that he had taken with him to school].
c.
…and give a book he did to Mary [that he had taken with him to school].
d.
…and give a book he did [that he had taken with him to school] to the girl who was sitting next to him in class.
(49)
a.
…and buy a book she did about anti-reconstruction phenomena in Old High German.
b.
…and take a course she did offered by the inter-college consortium for higher learning.

In the analysis of Culicover and Winkler (2019), which is partly motivated by data like (46)–(49), what moves is actually a V. Following Bare Phrase Structure (Chomsky 1995), they assume that there are no categorial distinctions between various projection levels of the same head. All of the projections in a VP are V, as in (50), and any of them, including the lexical head alone, may topicalize.[27]

(50)

The assumption behind this analysis is that the initial VP is the same VP that is formed in the beginning of the derivation and subsequently fronted to the A′ position. Since the fronted VP is literally formed inside the VP in the base, this entails that hierarchical structure like (50) is possible for non-fronted VPs as well.

One prediction of this analysis is that a VP should be able to be fronted only if it can also appear in the ‘base’ position occupied by its trace/copy. Therefore, an initial problem for this analysis is the fact that it is possible to topicalize discontinuous portions of VPs, as shown in the variants of (45) in (51).

(51)
a.
and put in the safe Chris did all the money that he had last night.
b.
? and put last night Chris did all the money that he had in the safe.
c.
and put in the safe last night Chris did all the money that he had.

Examples such as these suggest one of two possibilities: (i) for VPs with many complements there are as many alternative structures of the form in (50) as there are possible combinations of verb+complement, or (ii) what is topicalized is not necessarily something that would be a constituent of VP, but an independently well-formed VP that corresponds to a coherent interpretation in cs and that can be matched with the remnant of the VP in situ (when there is one).[28]

The second alternative is the one consistent with flat structure. It is further supported by the fact that topicalized VPs in English exhibit morphological autonomy with respect to their putative positions in the VP domain. This is incompatible with the assumption that the structure of the fronted VPs is determined by their base position (Thoms and Walkden 2019: 174).

(52)
a.
We thought he would lose his temper, and [lose his temper] he has.
b.
He has {*lose/lost} his temper.

If the ill-formedness of (52b) is related to the auxiliary’s selectional properties, then the gap in (52a) can’t have exactly the same structure as the fronted VP.

In some cases, the auxiliary can even be doubled in the fronted VP (53a). If the auxiliary appeared doubled in its base position, this would be evidently ill-formed, as (53b) shows (Thoms and Walkden 2019: 175).

(53)
a.
[Willingly been examined by the committee] she certainly has been.
b.
She certainly has (*been) willingly been examined by the committee.

In the next section we first sketch out a constructional approach to filler-gap constructions, of which VP topicalization is a special case, and then show how it applies specifically to VP topicalization, avoiding the problems we mentioned.[29]

5.2 Licensing filler-gap configurations

We follow the standard approach of Zagona (1988), Huang (1993), Müller (1998), i.a. in treating VP fronting as an instance of a more general filler-gap structure which involves the extraction of a constituent to a clause-initial A′ position (Chomsky 1977; Levine and Hukari 2006; Pollard and Sag 1994). The fronting schema in (54) licenses any possible constituent (including a VP or any of its independently well-formed subparts) in clause-initial position, leaving behind the rest.

(54)
fronting schema [30]

The category symbol TP↾X stands for a TP which is missing a constituent of category X – a slash-category, in the sense of Categorial Grammar. The slash we use (↾) is the non-directional vertical slash of Kubota and Levine (2020). Unlike the complements of non-slashed categories, X need not be phrasal (i.e. XP). This allows us to capture cases like (47), where only V0 is fronted. As in HPSG, all categorial features (including valence information which distinguishes a head from its maximal projection) are required to be shared between the filler and the gap: i.e. the X in the slash and in the filler share all of their syntactic properties.

The general schema for licensing gaps is also common to all kinds of non-local dependency constructions in various languages. We follow early HPSG in treating gaps as ordinary lexical items as in (55) (Pollard and Sag 1994: 161):

(55)

The phon of a gap is the empty string (ɛ), its syn can be a slashed constituent of any category, and its cs is a gap variable of the appropriate semantic type (Z1 in (55)).[31] For VP fronting, the gap variable will typically be of type ⟨e, t⟩.

We also need a schema to license phrases containing slash categories. This schema should register the presence of the gap on the category of its mother by means of the slash feature. For this purpose, we assume (56), drawing from similar propagation mechanisms adopted in GPSG and HPSG (Borsley and Crysmann 2021: 539–543; Gazdar et al. 1985: 137–144; Pollard and Sag 1994: 159–164).

(56)
slash phrase schema

Along with (54), (56) ensures that information about the presence of a gap will be projected from the gap site upwards until a filler of the corresponding category is found. There is no restriction requiring X, Y and Z to be distinct: (56) only says that a phrase inherits the slash specification from its daughter. It is possible that the phrase and the daughter are projections of the same head. This is important for our account of partial VP topicalization, where we have a VP↾VP inside a VP.
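To show how information about a gap travels from the gap site to the filler, here is a toy Python sketch (our own string encoding, with ‘|’ standing in for the ↾ slash): the gap is an ordinary lexical entry with a slashed category, a phrase inherits the slash of a slashed daughter, and a filler of matching category at the top discharges it.

```python
# Toy sketch of slash propagation: a gap is a lexical item with empty phon and a
# slashed category 'X|X'; a phrase inherits the slash of any slashed daughter;
# a filler of matching category at the top discharges the slash.
# The string encoding of categories (e.g. "TP|VP") is purely illustrative.

def slash_of(category: str):
    """Return the slashed-away category, if any (e.g. 'TP|VP' -> 'VP')."""
    return category.split("|")[1] if "|" in category else None

def project(mother_cat: str, daughter_cats) -> str:
    """Slash-phrase schema: the mother inherits a daughter's slash specification."""
    slashes = [slash_of(d) for d in daughter_cats if slash_of(d)]
    assert len(slashes) <= 1, "one gap per path in this toy sketch"
    return f"{mother_cat}|{slashes[0]}" if slashes else mother_cat

def discharge(filler_cat: str, slashed_cat: str) -> str:
    """Fronting schema: a filler of category X combines with TP|X to yield TP."""
    top, gap = slashed_cat.split("|")
    assert filler_cat == gap, "filler and gap must share their category"
    return top

if __name__ == "__main__":
    # Partial VP topicalization, cf. (59): a VP|VP gap plus two PP sisters inside VP
    vp = project("VP", ["VP|VP", "PP", "PP"])   # -> 'VP|VP'
    tp = project("TP", ["NP", "T0", vp])        # -> 'TP|VP'
    print(discharge("VP", tp))                  # -> 'TP'
```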

Lastly, another schema is needed to accomplish the binding of gaps in cs. This is (57), where “/” is a term replacement function such that, for cs terms α, β of the same type, α/β is the result of replacing free variables in β by α.

(57)
gap binding schema

This schema guarantees that gaps are interpreted as λ-bound variables to be saturated by the meaning of the filler that the slashed phrase combines with.

Now we are ready to apply the filler-gap analysis to VP topicalization, assuming flat structure. Let us start with a case where an entire VP is topicalized. For the second conjunct in (58a), our schemas license the structure in (58b).

(58)
a. I knew Chris would cook the potato and cook the potato Chris did.

The fronted VP is directly licensed in initial position, given that it satisfies the independent licensing conditions for VPs in English: e.g. VP dominates V0, V0 is initial in phon. The syntactic connection between VP and the remainder of the clause is guaranteed by the slash feature and the semantic connection (i.e. “reconstruction”) is captured by the fact that the meaning of the slashed constituent is a functor over the meaning of the fronted VP in cs. This is ensured by (57), which establishes a correspondence between a phrase containing a gap and a function which results from λ-abstraction over the gap variable. The resulting β-reduced expression in cs is the same as the one we would get in a non-fronted structure. This analysis also accounts for V0 fronting cases like (47), with the only difference being that the filler and the gap are simple lexical items (i.e. V0).
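The semantic effect of (57) for (58a) can be made explicit with a schematic derivation (a simplification of ours, ignoring tense and using generic cs constants):

```latex
% Schematic cs derivation for (58a) "cook the potato Chris did": the gap-binding
% schema (57) makes the slashed TP a lambda-abstract over the gap variable; applying
% it to the cs of the fronted VP reproduces the non-fronted interpretation.
\begin{align*}
\text{fronted VP:}\quad & \lambda y.\,\text{cook}(y, \text{the-potato})
   && \text{type } \langle e,t\rangle\\
\text{TP}\!\upharpoonright\!\text{VP:}\quad & \lambda P.\,P(\text{chris})
   && \text{abstraction over the gap variable } P\\
\text{combined:}\quad & (\lambda P.\,P(\text{chris}))\,\bigl(\lambda y.\,\text{cook}(y, \text{the-potato})\bigr)
   \;=_{\beta}\; \text{cook}(\text{chris}, \text{the-potato})
\end{align*}
```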

Since this analysis does not assume that the topicalized VP is in situ at any stage, it avoids all of the movement paradoxes mentioned in connection to (52)–(53). The schemas only require that the fronted sequence be of the same syntactic category as the gap and that it correspond to a coherent interpretation that can be fed as an argument to the functor that results from λ-abstracting over the gap variable. Morphophonological properties – the kind of information that makes (52b) and (53b) bad – need not be shared between the filler and the gap, precisely because the filler is not derived from a VP in its base position.[32]

Let us now consider cases of partial VP topicalization like (51). What we have so far is almost enough to license these, but one extra ingredient needs to be added. When a partial VP is fronted leaving behind a remnant, we want to license a phon/syn structure like (59), where a VP gap is contained inside a VP.[33]

(59)

The head-complement schema in (41) licenses any sequence containing an X0 head plus a (possibly empty) subset of its complements as a constituent, with the mappings to phon and cs for this sequence subject to additional correspondence constraints. Any independently well-formed constituent can be the filler because the fronting schema in (54) imposes no categorial requirement on fronted elements. Therefore, any appropriately linearized and semantically coherent sequence including an initial V0 and a subset of its complements can be topicalized, including V0 alone. This is sufficient to license the fronted VP in (59).

The problem for us is the VP gap in situ. The head-complement schema does not license the combination between VP↾VP and its two complements in (59) because VP↾VP is not a lexical category like V0 (the category of a normal verb or a bare verb gap). Our solution, which is inspired by approaches to partial fronting in German (de Kuthy and Meurers 2001; Nerbonne 1994), is to say that partial VPs like those in (59) can only appear inside VPs as gaps. To account for the combination between a slashed constituent and its complements (the VP↾VP and the two PPs in (59)), we posit the following gap-complement schema.

(60)
gap-complement schema

Since the X variable in (60) need not be lexical, the gap-complement schema licenses recursion of XPs with the same head. Thus, corresponding to each filler containing an initial V0, the mechanisms underlying filler-gap dependencies can license a VP gap inside the VP which combines with a proper subset of the verb’s complements. The head-complement schema, on the other hand, does not allow this, because any X phrase appearing inside an XP can only be a lexical X0 – this is precisely what rules out recursive XP projections and ensures flat structure in our system. Therefore, non-gap variants of partial VPs are never licensed inside VPs; only slashed VPs are. The only position in which non-gap variants of partial VPs can appear is the fronted position, where they are licensed as separate constituents and can bind their corresponding partial VP gaps in situ.

Since VP fronting results in a recursive VP projection in the extraction path, this analysis could potentially be seen as weakening the case for flat structure. However, we do not think that this is so. First, note that both the fronted partial phrases and the stranded remnants in (59) are flat constituents: all of the dependents of the head are sisters. Second, in our non-derivational approach, allowing for partial VPs in fronted positions does not automatically commit us to assuming that these partial VPs could also appear in situ (which would contradict flat structure). In fact, we have already seen reasons for not making the latter assumption: namely, the movement paradoxes in (52)–(53).

This entails that, if a sequence of constituents in the left periphery includes V0 and satisfies the independent linear order, syntactic and semantic licensing conditions for VPs in English, it can be interpreted as a constituent even when it would not be a VP in situ. Hence all of the sequences in (61) are licensed as VPs in virtue of the fact that they are clause-initial and meet other constraints on English VPs.[34]

(61)
a.
… and [sleep in the park under the stars in a sleeping bag last night] Chris did.
b.
… and [sleep under the stars in a sleeping bag last night] Chris did in the park.
c.
… and [sleep in the park in a sleeping bag last night] Chris did under the stars.
d.
… and [sleep under the stars in a sleeping bag] Chris did in the park last night.
e.
… and [sleep in a sleeping bag last night] Chris did in the park under the stars.
f.
… and [sleep under the stars last night] Chris did in the park in a sleeping bag.
g.
… and [sleep in the park under the stars] Chris did in a sleeping bag last night.
h.
… and [sleep in the park in a sleeping bag] Chris did under the stars last night.
i.
… and [sleep in the park last night] Chris did under the stars in a sleeping bag.
j.
… and [sleep in the park] Chris did under the stars in a sleeping bag last night.

It is of course possible to assume that for each of the different orders in (61) there is a different syntactic structure with corresponding VPs along the lines of (50). Assuming that alternative linear orderings of constituents are derived by movements from a canonical structure, such an approach entails that there is an underlying fixed ordering of the adjuncts. Furthermore, the idea that adjuncts are all specifiers of some functional head and that all movement must be leftward and triggered by agreement yields an analysis that is breathtaking in its complexity (Borsley and Müller 2021). Moreover, since pretty much any sequence starting with V0 can in principle be fronted (provided suitable discourse conditions are met), this would entail spurious structural ambiguities for complex VPs (Pollard 1996b: 302–304). We will not attempt to sketch out the possibilities here – for some examples, see Cinque’s (1999) carefully worked out analysis, and for critical discussion, see Neeleman and van de Koot (2008) and the papers in Bailyn (2011).[35]

6 Linear order variation in the English VP

In this section we argue that variation in linear ordering in the English VP is similar to German scrambling. The key to the flat structure approach is that, as far as the correspondence between syntax and phonological form is concerned, linear order is free unless it is explicitly constrained. In other words, this correspondence may leave certain ordering possibilities underspecified.

‘Scrambling’ traditionally refers to (more or less) optional variation in the ordering of constituents of a phrase in ‘free word order’ languages. The examples in (62) from Fanselow (2003: 194) show scrambling in embedded clauses in German.

(62)
German
a.
dass der Mann dem Kind den Apfel gestern gab
that the.NOM man the.DAT child the.ACC apple yesterday gave
‘that the man gave the child the apple yesterday’
b.
dass dem Kind den Apfel der Mann gestern gab
c.
dass den Apfel dem Kind der Mann gestern gab
d.
dass der Mann dem Kind den Apfel gestern gab
e.
dass der Mann den Apfel dem Kind gestern gab

The classical approach to scrambling assumes that there is a uniform canonical underlying order of the constituents of a phrase, and that alternative orderings are derived by movement of these constituents to various non-canonical positions in a hierarchical syntactic structure (see e.g. Frey (1993), Hinterhölzl (2006: Ch. 2) and Frey (2015) for German and Miyagawa (2011) for Japanese). Within this general framework, the main theoretical questions are whether such movements are to specifier or adjoined positions, whether the movement is triggered by feature-checking requirements, whether such feature-checking is accomplished by invisible functional heads, and whether the movement is A′ or A movement (Fanselow 2003; Haider and Rosengren 2003; Salzmann to appear, i.a.). Since we are exploring the possibility that this type of linear order variation reflects correspondences between an unordered flat structure and linear order, we set aside questions that arise in movement analyses. This said, there is a substantial literature suggesting that scrambling can be neither A′ nor A movement (see Bayer and Kornfilt 1994; Fanselow 2001; Haider 2021, among others). And, as reviews such as Abels (2015) and Salzmann (to appear) show, there are problems under any approach, whether or not movement and hierarchical structure are assumed.[36]

Interestingly, the flat structure approach to scrambling was already worked out by Uszkoreit (1986) and further developed by Kasper (1994), Bouma and Van Noord (1998) and Wetta (2015). Uszkoreit noted that GPSG allowed for the dissociation of hierarchical structure/immediate dominance (ID) and linear precedence (LP). Citing Gazdar and Pullum (1981), Uszkoreit (1986: 884) pointed out that LP rules allow for the statement of “fixed order, ordering variation that depends on a certain syntactic feature, and free order variation”. Crucially “[t]he absence of LP rules that impose a linear order on a set of sibling constituents will permit all permutations of these constituents” (Uszkoreit 1986: 885), thereby making possible a flat structure analysis of the basic German clause.

Uszkoreit also observed that there are ordering preferences among constituents of VP formulated in terms of θ-roles that produce ‘unmarked’ orders (e.g., agent precedes theme). In addition, there are ordering preferences involving information and discourse structure, and prosodic weight (e.g., short constituents precede long ones). It is straightforward to adapt Uszkoreit’s intuition to the framework of Section 4. The key idea is that alternative linear orderings of the immediate constituents of a phrase are licensed if we simply let (most) information about ordering of sisters within a phrase be underspecified by the grammar.

Space limitations do not permit us to explore here a constructional analysis for German scrambling; we focus on the order of constituents in the English VP. The basic linear ordering in English is captured by the underspecified linearization constraints in (63)–(64). (63) says that XP sisters of a V0 follow V0, however many XPs and V0s there are (XP and V0 are interpreted as existentially quantified variables). (64) says that one complement has to precede the other (they can’t be simultaneous). This entails that any ordering of V0’s sisters is licensed, as long as the phon corresponding to each XP follows the phon of each V0.

(63)
head-initial vp schema
(64)
complement order schema

At its essence, this proposal predicts that the ordering of heads with respect to their complements and the ordering of complements with respect to each other are of a fundamentally different nature. The former can be more or less rigidly determined on the basis of categorial syn information (sisterhood and headedness). The latter is typically more flexible, in virtue of the fact that it is governed by a host of redundant constraints operating on different levels (cs, information structure, phon), as well as extra-grammatical factors (e.g. dependency length).
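To illustrate how little the grammar itself needs to say here, the following toy sketch in Python (ours, not part of the formalism) treats (63)–(64) as a single filter on candidate orderings: the head must precede all of its XP sisters, and the sisters must be totally ordered, but nothing further is fixed.

from itertools import permutations

def licensed_orders(head, sisters):
    # A candidate order is licensed iff the head precedes every XP sister;
    # the relative order of the sisters themselves is left underspecified,
    # to be narrowed by non-syntactic preferences (weight, information
    # structure, dependency length).
    return [" ".join(order)
            for order in permutations([head] + sisters)
            if order[0] == head]

# All six orderings of the three PPs after 'sleep' come out as licensed:
for order in licensed_orders("sleep", ["in the park", "under the stars", "last night"]):
    print(order)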

To summarize, a flat structure approach to variable constituent order appears feasible in principle. It is also preferable to the assumption that linear order corresponds to hierarchical structure, since it captures equivalent distributional phenomena without assuming movement and empty structure. Following Uszkoreit, we assume that all orderings of the constituents in a VP are possible, subject to non-syntactic constraints (relating to weight, information structure, etc.); crucially, these constraints are independently required regardless of how the linear order is derived. Note that this inverts the usual logic associated with scrambling: free order is the universal default, and not the result of a particular construction or a rule. Any ordering of sister constituents is in principle possible, unless linearization constructions in the language explicitly impose further requirements.

What remains is to explain how flat structure is interpreted. This question is addressed in Section 7.[37]

7 Interpreting flat structure

7.1 Correspondences

Having demonstrated the feasibility of flat structure from the perspective of the syntax-phonology correspondence, we must deal with the question of how flat structure is interpreted, with an eye towards capturing the scope-related phenomena which have motivated more complex branching structures for both NPs and VPs (Andrews 1983; Cinque 2005, 2006; Levine 2003; Pesetsky 1995).

In standard approaches, each branching node in a syntactic structure corresponds to a rule of interpretation (Heim and Kratzer 1998; Klein and Sag 1985). If one assumes (as we do) a type theory in the semantic component, all such rules can be summarized in a single schema (65), which expresses the idea that the semantic contribution of a complex phrase is determined by the semantic contribution of its component parts.

(65)
compositionality schema

This schema employs a version of Klein and Sag’s (1985) Functional Realization operator (FR), which we define as follows (Sag et al. 2020: 17).

(66)
If τ is a logical type and Σ is a multiset consisting of typed logical expressions σ₁, …, σₙ, then FR_τ(σ₁, …, σₙ) denotes a set of logical expressions of type τ that are derived by exhaustively applying some σᵢ to some σₖ until each member of Σ has been consumed exactly once.

This allows us to say that the semantics of a mother node is the result of exhaustively applying the semantics of its daughters to each other in a manner fully driven by their semantic types. In this way, both 1′(2′) and 2′(1′) are possible instances of the more general formula FR(1′, 2′). Which term is applied to the other is determined by their semantic types. The following two examples illustrate:

(67)
a.
b.

If syntactic structure is binary-branching, for any node n, the interpretation that (65) licenses for n can consist in applying the semantics of one of n’s daughters (typically the head) to the semantics of the other daughter. As a result, the cs of any phrase dominated by the daughter that is interpreted as a semantic argument will be under the scope of the cs of the daughter that is interpreted as a functor. In this way, the interpretation is essentially read off of the syntactic structure and semantic scope is fully determined by the height of attachment of phrases.

If, however, the syntactic structure is flat, type-driven compositionality will often underspecify the interpretation. This occurs because the functional realization of a multiset of cs formulae may contain more than one well-formed cs formula. This will happen, for instance, whenever a phrase has more than one daughter of the same semantic type.[38] As an illustration, consider the case of a ditransitive verb like give, which takes three entity arguments: the agent, the theme and the goal. Assuming that the latter two combine with the verb at the same level (the VP) and that to is semantically vacuous, both of the interpretations in (68) are licensed by (65):

(68)
a.
b.
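To see concretely how (65)/(66) make both options available here, the following executable sketch (our own simplification, which types give′ as ⟨e, ⟨e, t⟩⟩ and both internal arguments as e, setting the agent aside) computes the functional realization of a multiset by exhaustively applying functors to type-matching arguments:

def fr(exprs):
    # exprs: list of (term, type) pairs; a type is 'e', 't' or a pair (a, b)
    # standing for <a, b>. Returns every (term, type) pair obtainable by
    # applying members to one another until each has been consumed once.
    if len(exprs) == 1:
        return {exprs[0]}
    results = set()
    for i, (f, f_type) in enumerate(exprs):
        if not isinstance(f_type, tuple):
            continue                          # only functional types can apply
        dom, rng = f_type
        for j, (a, a_type) in enumerate(exprs):
            if i != j and a_type == dom:      # f is applicable to a
                rest = [e for k, e in enumerate(exprs) if k not in (i, j)]
                results |= fr(rest + [(f + "(" + a + ")", rng)])
    return results

E, T = "e", "t"
print(fr([("give'", (E, (E, T))), ("the-cake'", E), ("bob'", E)]))
# Both give'(the-cake')(bob') and give'(bob')(the-cake') come out as type t:
# the types alone do not decide which internal argument composes first.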

Further constraints will be needed in addition to (65) to guarantee that only (68a) is licensed as the correct interpretation for the string give the cake to Bob. One likely source of constraints is the hierarchy of grammatical functions. We could constrain (65) so that less oblique elements (e.g. direct objects) are composed with the meaning of V0 before more oblique ones (PP arguments, datives) (Büring 2005; Dowty 1982; Keenan and Comrie 1977). This correctly rules out (68b).

Another possible source of constraints on type-driven functional realization is the linear order of complements: semantically dependent phrases tend to cluster together in linear order. In the next section, we provide a preliminary exploration of this idea, showing how some aspects of interpretation can be expressed in terms of direct correspondences between phonological form and semantic representation without reference to the syntactic structure internal to phrases. The only relevant structure is the head/projection relation.[39] We look specifically at scopal interactions between modifiers of NPs (Section 7.2) and VPs (Section 7.3).

7.2 Flat NP

Culicover and Jackendoff (2005: 135–143) argue that the structure of NP is (69). A similar structure is proposed by Belk and Neeleman (2017).

(69)

This structure is completely flat: N0 and all of its dependents are in a symmetric relation with respect to each other. Scope, however, is intrinsically asymmetric: i.e., if a cs term α scopes over a cs term β, then β does not scope over α. To illustrate, consider the examples with intensional adjectives in (70). For the sake of simplicity, we ignore the representation of intensionality (i.e. the fact that former introduces a quantification over worlds/times) and assume that modifiers are always of type ⟨⟨e, t⟩, ⟨e, t⟩⟩ and nouns are of type ⟨e, t⟩.

(70)
a.
[NP former corrupt officials]
b.
[NP corrupt former officials]

In (70a), former′ scopes over corrupt′(officials′): i.e. the NP refers to a set of individuals who used to be corrupt officials. In (70b), we have the opposite scopal interaction, with corrupt′ scoping over former′(officials′): i.e. the reference is to a set of individuals who used to be officials but are (now) corrupt.[40]

The standard approach is to encode the asymmetry of scopal interpretation in terms of the asymmetric c-command relation between constituents in syntax (May 1985; Panayidou 2013; Teodorescu 2006). However, in an approach adopting (69), these differences in scope cannot be defined syntactically by appealing to the different height of attachment of former and corrupt. It is necessary to invoke some other formal relation in terms of which the scopal asymmetries can be licensed.

Our proposal here is that, at least for scopal interactions between modifiers and nouns, this relation is the linear ordering of the phon realizations of subconstituents of NP. The basic idea is that a nominal modifier can scope over the cs corresponding to whatever is on its right or left edge, as long as the resulting interpretation is semantically well-defined – i.e. as long as the string whose semantics is scoped over corresponds to a semantic type that is defined as a possible argument for the function denoted by the modifier. We don’t need complex branching syntactic structure, because all of the information in the structure is contained in the phon-cs correspondence, plus constraints on semantic types.

We posit two schemas to account for the fact that modifiers can appear both before and after the noun in English. (71a) handles cases where the modifier precedes the head, and (71b) the cases where the modifier follows the head.

(71)
flat np modifier schemas
a.
b.

Note that there is no constituent in syn with a subscript 2 in either of these constructions. The way the modifiers are integrated into the semantics of the NP is determined directly by properties of phon. These schemas license correspondences where modifiers scope over the semantics of the strings which are (left or right) adjacent to them inside the NP, regardless of whatever else is inside the NP. To start with a simple case, consider the examples in (72).

(72)
a.
[NP a furry dog]
b.
[NP a dog from Chicago]

The respective correspondences are given in (73).

(73)
a.
b.

We see that the correspondence in (73a) is licensed because it satisfies the conditions of (71a). The cs representation furry′ scopes over dog′, and the corresponding phon structure ‘furry’ precedes ‘dog’. Similarly for (73b), ceteris paribus. Crucially, there is no need to mention the head in the syntactic representation in the schema, although of course it plays a role in licensing the actual correspondence of the NP in virtue of independent constructions, like the head-complement schema, which ensures that XP must dominate an X0 (Jackendoff 1977).

The situation becomes more interesting when we consider the case of multiple modifiers in (70), repeated below:

(74)
a.
[NP former corrupt officials]
b.
[NP corrupt former officials]

The question is, when the cs representation is of the form A₁(A₂(N₃)) and the phon representation is 1⊕2⊕3, is the correspondence for the entire NP licensed by the schema in (71a)?

(75)

Consider first former. The string ‘former’ precedes ‘corrupt’⊕‘officials’ in phon and former′ scopes over the corresponding corrupt′(officials′) in cs. So this correspondence is licensed. Similarly, corrupt is licensed because ‘corrupt’ precedes ‘officials’ in phon and corrupt′ scopes over officials′ in cs. Notice that the interpretation of (74b), namely corrupt′(former′(officials′)), is not licensed for (74a), because the material in phon corresponding to the function with widest scope (corrupt′) does not precede a string that corresponds to what is in the scope of this function.

The other case of interest is where there is one modifier on either side of the head, and thus an opportunity for ambiguity depending on which one has wider scope. An example is given in (76).

(76)
[NP former professor with pink hair]

On one reading, the former professor had pink hair when they were a professor. On the other reading, the former professor has pink hair at present.

The two correspondences are given in (77).

(77)
a.
b.

In (77a), former scopes over professor with pink hair and precedes the corresponding phon. In (77b), former scopes over professor, and also precedes the corresponding form. Although it does not scope over the entire material that follows it, it satisfies (71) because the requirement is that it precede the material that it scopes over, not that it scope over everything that it precedes. Similarly for with pink hair. Thus, both (77a) and (77b) are licensed.[41]
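The reasoning in (74)–(77) can be emulated by a small procedure (a sketch under simplifying assumptions of our own, not part of the formalism): a prenominal modifier must precede in phon everything in its semantic scope, a postnominal modifier must follow it, and any functional nesting that respects this is licensed.

from itertools import permutations

def licensed_readings(items):
    # items: surface-ordered list of (word, kind) pairs, kind in
    # {'head', 'pre', 'post'}. A 'pre' modifier must precede everything in
    # its semantic scope in phon; a 'post' modifier must follow it.
    pos = {w: i for i, (w, _) in enumerate(items)}
    head = next(w for w, k in items if k == "head")
    mods = [(w, k) for w, k in items if k != "head"]
    readings = []
    for order in permutations(mods):            # order[0] has widest scope
        ok = True
        for i, (w, kind) in enumerate(order):
            scope = [head] + [w2 for w2, _ in order[i + 1:]]
            if kind == "pre" and not all(pos[w] < pos[s] for s in scope):
                ok = False
            if kind == "post" and not all(pos[w] > pos[s] for s in scope):
                ok = False
        if ok:
            formula = head + "'"
            for w, _ in reversed(order):        # build the cs from the inside out
                formula = w.replace(" ", "-") + "'(" + formula + ")"
            readings.append(formula)
    return readings

# 'former professor with pink hair' comes out two-ways ambiguous:
print(licensed_readings([("former", "pre"), ("professor", "head"),
                         ("with pink hair", "post")]))
# 'former corrupt officials' only gets the former' > corrupt' reading:
print(licensed_readings([("former", "pre"), ("corrupt", "pre"),
                         ("officials", "head")]))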

7.3 Flat VP

As in the case of the interpretation of adjectives that modify a noun, a VP adjunct scopes over the adjuncts that appear between it and the head. This gives us a possible way to derive some of the generalizations identified by Cinque (1999), without having to resort to rich branching VP structure and functional heads whose only motivation is to ensure the correspondence between linear order and scope. Consider Pesetsky’s (1995: 233) example with VP-final PPs.

(78)
a.
Chris plays quartets [in foreign countries] [on weekends].
b.
Chris plays quartets [on weekends] [in foreign countries].

In (78a), the most natural interpretation is that Chris’s playing in foreign countries is restricted to weekends; in (78b), Chris’s activity of playing on weekends is restricted to foreign countries. Under the assumption that scope requires c-command, a left-branching analysis would be required to capture these readings. However, the only observable differences between (78a) and (78b) are: (i) the linear order between the PPs; and (ii) the fact that in (78a), on weekends scopes over in foreign countries and, in (78b), the opposite scopal relation holds.
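Ignoring event quantification and treating the PPs as predicate modifiers (a simplification of ours), the two readings can be written as on-weekends′(in-foreign-countries′(play′(ag: chris, pat: quartets′))) for (78a) and as in-foreign-countries′(on-weekends′(play′(ag: chris, pat: quartets′))) for (78b): in each case the PP that comes last in phon takes widest scope in cs.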

As in the case of nominal modifiers discussed above, the difference in linear order in (78) correlates with the difference in scope. Therefore, in stating the principles determining the interpretation of VP modifiers, we also do not need to invoke anything beyond linear order in phon, assuming the semantic type theory is sufficiently rich to constrain other unwanted readings (e.g. a reading where the cs of on weekends scopes over that of countries in (78a)).[42] In other words, rather than taking the differences in scope between modifiers to be a reflection of differences in the height of attachment of the PPs, we can interpret these data as evidence that the interpretation of VP modifiers is subject to the direct linear order constraints in (79), which are exactly parallel to the correspondence constraints for NP modifiers proposed in Section 7.2.

(79)
flat vp modifier schemas
a.
b.

The constraint in (79a) licenses the correct interpretations for (78) exactly as discussed for the NPs in (74). Whenever we have a configuration with multiple modifiers where (79a) and (79b) can potentially apply, ambiguities analogous to the ones in (77) emerge. Consider the German examples from Müller (2023: 395):

(80)
a.
dass er das Buch nicht oft liest
that he the book not often reads
‘It is not the case that he reads the book often.’
b.
dass er das Buch oft nicht liest
that he the book often not reads
‘It is often not the case that he reads the book.’
c.
Oft liest er das Buch nicht.
often reads he the book not
‘It is often that he does not read the book.’ or,
‘It is not the case that he reads the book often.’

Example (80a) has the negation scoping over the semantics of oft. The opposite scopal interpretation holds in (80b). This follows from (79b): oft and nicht are adverbs, which can scope over the semantics of the string to their right within the VP, given that the latter is a predicate and both oft and nicht are interpreted as functions that map predicates to predicates (i.e. ⟨⟨e, t⟩, ⟨e, t⟩⟩). Crucially, only one reading is possible because there is nothing to the right of liest. Example (80c) is ambiguous like (77) because now (79a) can apply to license a wide-scope reading for nicht, in addition to (79b) for oft. This ambiguity follows from our account, since both oft and nicht can be interpreted as functors over the semantics of strings that are adjacent to them (to the right and left, respectively). In (80a–b) only one possible reading is licensed because only the string to the right of the adjuncts inside the VP is of a semantic type that could be combined with the adjuncts.[43]

This analysis also derives the observation that locative adjuncts only receive an object-oriented interpretation when there are no other event-scoping adjuncts (e.g. temporals) intervening between them and the head:

(81)
a.
Terry saw the accident [in the park] [yesterday].
b.
Terry saw the accident [yesterday] [in the park].

Only (81a) can be interpreted as situating the accident in the park (Koenig et al. 2003; Maienborn 2001; McInnerney 2022). This follows from the fact that, in virtue of (71b), in-the-park′ can be combined directly with the semantic contribution of the accident. In the case of (81b), the presence of the intervening element yesterday forces an event locative interpretation for in the park because yesterday is also interpreted as an operator over the whole event. As a result, the semantic type of the string adjacent to in the park narrows the interpretation of the PP to that of an event operator. We do not attempt to spell out the formal details of this proposal, but merely point out that the scopal properties can be fully determined by the linear order of the locatives with respect to verbal heads.

All of these phenomena have been cited as motivations for enriched left-branching VP structures (Ernst 2002; Maienborn 2001; McInnerney 2022; Müller 2023). They are, therefore, in conflict with other phenomena like anaphor and quantifier binding, which, as we saw in Section 3.3, many interpret as favoring a right-branching analysis (Hale and Keyser 1993, 2002; Kayne 2004; Larson 1988).

What is useful about the linear-order based approach sketched above is that it derives the effects that both of these proposals try to model. Scope is represented by a right-branching structure in cs, but constraints like (79a) allow elements to the right in the linear order to scope over elements to the left in the linear order (see Jackendoff 1990a; Riezler 1995; Culicover 2013a for other semantic constraints invoking linear order). Our proposal is thus similar to the hybrid proposals in Pesetsky (1995), Schweikert (2005) and Cinque (2006). The difference is that, rather than having the linear and the scopal information be part of different syntactic representations, we encode them in terms of phon and cs, respectively.

8 Conclusions and implications

In this paper we have explored the possibility of assuming flat syntactic structure. We motivated this on the basis of the facts of heavy NP shift, extraposition and asymmetry paradoxes. We have seen that flat structure is also plausible for the type of free ordering associated with scrambling and English VP topicalization. A flat structure approach to these phenomena successfully avoids the need to posit restructuring, phonologically null projections and other devices that depart from representational economy.[44]

A broader implication is that, given that syntactic structure turns out to be very simple, there is not as much room for genuinely syntactic variation as there is in other approaches. The structures that pertain to syn (e.g. head-complement phrases, adjunction, major syntactic categories like VP, NP, AP) are, to a considerable extent, a reflection of the structure of meaning (cs), which is arguably universal in humans (Berwick and Chomsky 2016; Bouchard 1991; Culicover 2021; Jackendoff 1983, 1990b, 2002; Ramchand and Svenonius 2014). Our proposal here is, therefore, compatible with the view that syntax is relatively stable (across both languages and constructions) and that the main source of linguistic variation is the relationship between syn/cs and phon – i.e. what Chomsky et al. (2019) call externalization and Sauerland and Alexiadou (2020) call compression.

However, unlike Chomsky et al. (2019) (but like Sauerland and Alexiadou 2020), our theory allows for the possibility of direct correspondences between meaning and the overt form of utterances (i.e. phon). Furthermore, rather than viewing mappings to phon as something “external to I-language”, we take them to be central to the architecture of grammar. In fact, as we have illustrated above, many of the phenomena that syntacticians are typically concerned with can be plausibly recast in terms of flexible constructional correspondences between syntax and semantics on the one hand, and phonology on the other.

Like Chomsky et al. (2019: 244), we assume that such correspondences can be quite “messy”, given the radically different nature of the systems involved. In particular, linearization statements – i.e. schemas relating cs/syn to phon – have to be sensitive to the syntactic categories of the mother and all of its daughters (as Abels and Neeleman (2012: 66) argue), as well as to the scopal relations holding between them in cs. In addition to these linearization statements that account for possible and impossible orderings we also assume non-syntactic soft constraints that establish linearization preferences among equally licensed orderings.

The assumption of flat structure of course has implications for a vast range of phenomena that have previously been analyzed in terms of conventional hierarchical structure, following the assumptions of Meaning in Structure and Order in Structure. We stress that flat structure at this point is part of a minimalist program for linguistic theory, as sketched out in Simpler Syntax, not a fully worked out theory. We have shown how to implement flat structure for a small number of syntactic phenomena, using the general framework of Simpler Syntax to specify the correspondence between syntax and linear order, on the one hand, and an approach to interpretation to specify the correspondence between syntax and meaning, on the other hand. A question for future research is the extent to which it is feasible to use linearization and interpretation to account for a fuller range of phenomena that have previously been described in terms of hierarchical structure and movement (Culicover and Varaschin to appear).


Corresponding author: Giuseppe Varaschin, Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, Berlin, Germany, E-mail:

Acknowledgments

We would like to thank David Adger, Ray Jackendoff, Antonio Machicao y Priemer, Andrew McInnerney, Stefan Müller, Geoff Pullum and Viola Schmitt for useful conversations about the topics covered in this paper. We are very grateful to two anonymous reviewers for their detailed and constructive comments, which have led to substantial improvements. Of course, all remaining errors are our responsibility. The research reported here was partially funded by the Deutsche Forschungsgemeinschaft (DFG) – SFB 1412, Project A04, ID 416591334.

Appendix: formalizing constructions

The core assumption in constraint-based formalisms is the idea that a grammar is a set of truth-evaluable statements that describe the properties of well-formed linguistic expressions (Pollard 1996a; Postal 2003). Therefore, if the Parallel Architecture (PA) is to be characterized as a constraint-based framework, it is important to be clear about what the theory takes to be the fundamental structures that linguistic expressions can have (i.e. the basic units that constitute them, and the relations defined over these units) and which kind of formal language is appropriate to describe these structures. Though we cannot do full justice to this goal here, we provide a rough outline below. See Varaschin (2021: Ch. 4) for more details.

As we saw, the PA posits three main levels of representation: phon, syn and cs. Each of these levels is designed to model different aspects of linguistic expressions. The structures the PA ascribes to linguistic expressions thus conform to what Jackendoff (1997: 41) calls representational modularity: “The overall idea is that the mind/brain encodes information in some finite number of distinct representational formats or ‘languages of the mind’”.

A fundamental assumption of this kind of framework is that “there are different kinds of informational dependencies among the parts of a sentence, and that these are best expressed using different formal structures” (Kaplan 1995: 10). This avoids having to overload a single representational format (e.g. hierarchical phrase-markers) with the burden of modeling types of linguistic information as diverse as linear order, syntactic constituency, grammatical functions and inference. This is precisely the intuition that led to the projection architecture of LFG (Bresnan and Kaplan 1982; Bresnan 2001; Dalrymple 2001), which is one of the major inspirations for the formalization of PA we present here.

The representations pertaining to each level of linguistic organization can be mathematically defined as relational structures: i.e. as finite sets of primitives and relations defined over these primitives. The primitive units in phon are strings of sounds (i.e. segments or lists thereof). We assume that the primitive relations native to phon are inclusion (⊂) and the concatenation function (⊕).[45] We represent inclusion by placing the substrings of a given string between brackets. According to this convention, 1 ⊂ 3 is equivalent to [1]₃. Concatenation maps two non-overlapping strings σ₁, σ₂ into a string σ₃ that includes σ₁ followed by σ₂ with nothing in between or in addition to σ₁ and σ₂. In particular examples of phon objects and their descriptions, we often omit the “⊕” symbol and represent concatenation simply by the linear arrangement of characters.

We define constructions by stating the relations pertaining to each level and the appropriate correspondences between their primitive units. To illustrate, (82) uses the concatenation relation to define the linear ordering constraint on the phonology of the English idiom by and large.

(82)

The constraint in (82) licenses the phon of by and large as part of the description of the idiom, but not variations like *large and by or *by and very large.

In addition to names of primitive elements and relations that define each level of representation, constraints may employ predicates which do not correspond directly to structures in the domain of described objects. In this spirit, we define over the units of phon the ternary relation of weak precedence (≪) as follows:

(83)
≪ =def For any strings σ_a, σ_b and σ_c, σ_a ≪ σ_b = σ_c iff σ_c is a sequence of strings σ_0, σ_1, …, σ_n such that σ_0 = σ_a and σ_n = σ_b and, for every σ_i, 0 ≤ i < n, σ_i ⊕ σ_i+1 ⊂ σ_c.

Weak precedence, as defined in (83), is the transitive closure of concatenation. Two strings stand in a weak precedence relation if one is pronounced before (but not necessarily immediately before) the other. We can use this relation to define a constraint on the order between the phonology of determiners and nouns in English. This allows us to license structures where Det and N are in strict adjacency (e.g. the book) as well as cases where they are not (e.g. the big yellow book), while at the same time ruling out structures where N precedes Det (e.g. *book the).
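As a sanity check on these definitions, phon objects can be modelled as tuples of word-sized strings, with ⊕ as tuple concatenation and ≪ holding whenever the first string occurs somewhere before the second inside the larger string (a toy rendering of ours in Python):

def concat(s1, s2):
    # ⊕: maps two non-overlapping strings (modelled as tuples) into their
    # concatenation, with nothing in between or in addition to them.
    return s1 + s2

def weakly_precedes(sa, sb, sc):
    # ≪: sa is pronounced before (not necessarily immediately before) sb in sc.
    for i in range(len(sc) - len(sa) + 1):
        if sc[i:i + len(sa)] != sa:
            continue
        for j in range(i + len(sa), len(sc) - len(sb) + 1):
            if sc[j:j + len(sb)] == sb:
                return True
    return False

the, big, yellow, book = ("the",), ("big",), ("yellow",), ("book",)
np = concat(concat(concat(the, big), yellow), book)
print(weakly_precedes(the, book, np))   # True: Det precedes N with material in between
print(weakly_precedes(book, the, np))   # False: *book the is ruled out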

The primitive units of syn, in turn, are nodes and syntactic features. There are two basic relations at this level: the mother function (M), which maps nodes onto nodes, and the label function (L), which maps nodes onto features (Kaplan 1995; Partee et al. 1990). In more explicit terms, each object in syn is an unordered tree, which consists of a quadruple ⟨N, F, M, L⟩, where:

(84)
N: set of nodes (n₁, n₂, n₃, …, nₙ)
F: set of syntactic features (e.g. V, N, [fin], selectional features)
M: a partial function from N into N (the mother function)[46]
L: a function from N into F (the label function)

The trees in syn are unordered because the linear left-right arrangement of nodes on the printed page does not make a difference to the syntactic representation. The only relations that matter to syn are the mother and label functions.
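A direct transcription of (84) into a toy data structure (ours, for illustration only) makes the unordered character of syn explicit: the daughters of a node come out as a set, so no left-right order is defined over them.

from dataclasses import dataclass

@dataclass
class SynTree:
    # An unordered tree <N, F, M, L>.
    nodes: set      # N: the set of nodes
    features: set   # F: the set of syntactic features
    mother: dict    # M: partial function from nodes to nodes (the root has no mother)
    label: dict     # L: function from nodes to features

    def daughters(self, n):
        # Daughters are unordered: linear arrangement plays no role in syn.
        return {m for m in self.nodes if self.mother.get(m) == n}

# [VP V0 NP] as an unordered tree:
vp = SynTree(nodes={"n1", "n2", "n3"},
             features={"VP", "V0", "NP"},
             mother={"n2": "n1", "n3": "n1"},
             label={"n1": "VP", "n2": "V0", "n3": "NP"})
print(vp.daughters("n1"))   # {'n2', 'n3'}: a set, not a sequence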

The fact that order is represented only in phon by means of the concatenation relation does not mean that syntax is unrelated to linear order. In languages like English, there is a fairly rigid correspondence between order and syntactic structure: strings that correspond to syntactic heads tend to precede strings that correspond to syntactic complements. However, instead of encoding these facts directly in syn (e.g. by defining concatenation as a relation between nodes), we can state them as correspondence constraints between syn and phon.

We describe a structure in syn by listing the defining relations M and L that hold among its primitive elements (nodes and features). Some syn constraints may also employ variables over features. Any occurrence of a free variable in a linguistic constraint is, by convention, interpreted as being existentially quantified. As an example, consider the abstract schema that licenses a constituent of type XP dominating a constituent of type X0 and another constituent of type YP. This is the analogue to the X-bar rule licensing head-complement structures:

(85)
M(n₂) = n₁ ∧ M(n₃) = n₁ ∧ L(n₁) = XP ∧ L(n₂) = X0 ∧ L(n₃) = YP

Rather than using these first-order logic representations with implicit existential quantification over variables, we adopt the more perspicuous labeled bracketing notation for representing constraints over syn. So (85) is equivalent to (86):

(86)

Examples of syn objects that are licensed by (85)/(86) are given in (87).

(87)
a.
b.
c.

Note that (85)/(86) says nothing about how many X0s and YPs there should be: it simply says that a structure is well-formed if there exists at least one constituent labeled XP that is a mother of at least one constituent labeled X0 and a constituent labeled YP.[47] Descriptions are always confined to local structures (i.e. trees of a maximum finite depth, but not necessarily of depth-1), given that the only primitive relation between nodes we assume is M.[48]

Lastly, consider the representations in cs. We assume that the basic units in cs are the Meaningful Expressions (MEs) of Montague (1974), which receive a model-theoretic interpretation along familiar lines. As is customary, each ME in cs is assigned to a semantic type which determines the kind of denotation it has. The notion of type is defined as follows (where e is short for entity, and t is short for truth-value):

(88)
a.
e and t are types.
b.
If a and b are types, then ⟨a, b⟩ is a type.
c.
Nothing else is a type.

Types defined by (88a) denote primitive objects: entities and truth-values. Types defined by the recursive clause in (88b) are called functional types because they are interpreted as functions from things of type a to things of type b; for example, a ME of type ⟨e, t⟩ corresponds to a function from entities to truth-values. The notion of a Meaningful Expression of type a (ME a ) is defined as follows:

(89)
a.
Every variable and constant of type a is in ME_a.
b.
If α ∈ ME_a and u is a variable of type b, then λu[α] ∈ ME_⟨b,a⟩.
c.
If α ∈ ME_⟨a,b⟩ and β ∈ ME_a, then α(β) ∈ ME_b.
d.
If α, β ∈ ME_a, then α = β ∈ ME_t.
e.
If ϕ, ψ ∈ ME_t, then ¬ϕ, [ϕ ∧ ψ], [ϕ ∨ ψ], [ϕ → ψ], [ϕ ↔ ψ] ∈ ME_t.
f.
If u is a variable and ϕ ∈ ME_t, then ∀u[ϕ], ∃u[ϕ] ∈ ME_t.
g.
If u is a variable of type a and ϕ ∈ ME_t, then ιu[ϕ] ∈ ME_a.
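To illustrate (89a–c) with a toy lexicon of our own: if chris′ is a constant of type e and sleep′ a constant of type ⟨e, t⟩, then sleep′(chris′) ∈ ME_t by (89c); and, for a variable x of type e, λx[sleep′(x)] ∈ ME_⟨e,t⟩ by (89b).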

For convenience, we assume a neo-Davidsonian overlay to the standard Montagovian system outlined above, where thematic predicates (ag, pat, exp, etc.) denote relations between individuals and the events they partake in, as in Parsons (1990). However, instead of representing quantification over events directly, we use an abbreviated notation, where labels for thematic roles are indexed to argument positions of event-describing predicates. In this setup, the cs representation for Chris broke the glass is (90a), which is equivalent to the standard representation in (90b):

(90)
a.
break′(ag: chris, pat: ιx[glass′(x)])
b.
∃e[break′(e) ∧ ag(e, chris) ∧ pat(e, ιx[glass′(x)])]

As we mentioned above, the Parallel Architecture posits (at least) three types of correspondence among the different structures which comprise linguistic expressions. These are depicted by the double arrows in Figure 1. In formal terms, a correspondence will be a binary symmetric relation between the minimal units in each level of representation. The three correspondences posited within the Parallel Architecture are defined in (91).

(91)
Correspondences:
a.
A symmetric relation C_phon-syn that holds between strings and nodes.
b.
A symmetric relation C_phon-cs that holds between strings and MEs.
c.
A symmetric relation C_syn-cs that holds between nodes and MEs.
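For concreteness, the three relations can be pictured as sets of pairs over a toy one-word structure (our own illustration, with a hypothetical lexical item):

# A hypothetical single-node structure for the word 'dog':
structure = {
    "phon": {("dog",)},                    # strings
    "syn": {"n1"},                         # nodes
    "cs": {"λy[dog'(y)]"},                 # meaningful expressions
    "C_phon-syn": {(("dog",), "n1")},      # string <-> node
    "C_syn-cs": {("n1", "λy[dog'(y)]")},   # node <-> ME
    "C_phon-cs": set(),                    # no direct phon-cs pairing here
}

def corresponds(structure, relation, a, b):
    # Correspondences are symmetric, so a pair may be stored in either order.
    return (a, b) in structure[relation] or (b, a) in structure[relation]

print(corresponds(structure, "C_phon-syn", "n1", ("dog",)))   # True

Constraints like (92) below then amount to requiring that certain such pairs be present in any well-formed structure.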

As with all other defining properties of linguistic objects, correspondences can also be used in descriptions of modeled structures in order to state well-formedness constraints. A simple example of a correspondence constraint is an individual word like cow:

(92)
L(n₁) = N0 ∧ C_phon-syn(cow, n₁) ∧ C_syn-cs(n₁, λy[cow′(y)])

Like the other constraints we propose, logical formulae such as (92) can be abbreviated by attribute-value matrices. Each correspondence relation is depicted by coindexing of structures in different levels. The constraint in (92) is, therefore, equivalent to (93):

(93)

An example of a phon-syn correspondence constraint requiring heads to precede their complements is (94) (where ≪ is the weak precedence relation over strings, φ₄ and φ₅ are variables over strings, x₁, x₂ and x₃ are variables over nodes, M is the mother function and L is the label function). The AVM abbreviation of (94) is given in (95).

(94)
M(x₂) = x₁ ∧ M(x₃) = x₁ ∧ L(x₁) = XP ∧ L(x₂) = X0 ∧ L(x₃) = YP ∧ φ₄ ≪ φ₅ ∧ C_phon-syn(φ₄, x₂) ∧ C_phon-syn(φ₅, x₃)
(95)

An example of a linguistic object that is licensed by (94)/(95) is given in (96):

(96)

Due to the pervasiveness of correspondences and their importance in the Parallel Architecture, the framework can also be called a Correspondence Architecture – a term often used in LFG work (Findlay 2016). This kind of architecture sets our framework and LFG apart from sign-based theories like HPSG and SBCG (Pollard and Sag 1994; Sag 2012). The latter use the same kind of data structure to model all aspects of linguistic objects: i.e. typed feature-structures. Different types of information are not related by means of modular correspondences, but in virtue of being values assigned to different attributes of the same sign, with each attribute representing a different type of linguistic information. The design of HPSG/SBCG does not make it clear that phonology, syntax and semantics are autonomous combinatorial systems. Combinatoriality only exists at the level of signs as a whole (e.g. in features like dtrs, which take lists of signs as values, instead of syntactic nodes).

References

Abels, Klaus. 2015. Word order. In Tibor Kiss & Artemis Alexiadou (eds.), Syntax – theory and analysis: An international handbook, vol. 2, 1400–1448. Berlin: De Gruyter Mouton.10.1515/9783110363708-017Search in Google Scholar

Abels, Klaus & Ad Neeleman. 2009. Universal 20 without the LCA. In José M. Brucart, Anna Gavarró & Jaume Solà (eds.), Merging features: Computation, interpretation, and acquisition, 60–79. Oxford: Oxford University Press.10.1093/acprof:oso/9780199553266.003.0004Search in Google Scholar

Abels, Klaus & Ad Neeleman. 2012. Linear asymmetries and the LCA. Syntax 15(1). 25–74. https://doi.org/10.1111/j.1467-9612.2011.00163.x.Search in Google Scholar

Andrews, Avery D. 1983. A note on the constituent structure of modifiers. Linguistic Inquiry 14(4). 695–697.Search in Google Scholar

Ariel, Mira. 1990. Accessing noun-phrase antecedents. London: Routledge.Search in Google Scholar

Ariel, Mira. 2001. Accessibility theory: An overview. In Ted J.M. Sanders, Joost Schilperoord & Wilbert Spooren (eds.), Text representation: Linguistic and psycholinguistic aspects, 29–87. Amsterdam: John Benjamins Publishing Company.10.1075/hcp.8.04ariSearch in Google Scholar

Bailyn, John Frederick. 2011. Review of ‘alternatives to cartography. Language 87(3). 665–671.10.1353/lan.2011.0067Search in Google Scholar

Baker, Mark. 1997. Thematic roles and syntactic structure. In Liliane Haegeman (ed.), Elements of grammar, 73–137. Dordrecht: Kluwer Academic Publishers.10.1007/978-94-011-5420-8_2Search in Google Scholar

Baltin, Mark. 1978. Toward a theory of movement rules. Cambridge: MIT dissertation.Search in Google Scholar

Baltin, Mark. 1981. Strict bounding. In C. Lee Baker & John McCarthy (eds.), The logical problem of language acquisition. Cambridge, MA: MIT Press.Search in Google Scholar

Baltin, Mark. 2006. The nonunity of VP-preposing. Language 82(4). 734–766. https://doi.org/10.1353/lan.2006.0181.Search in Google Scholar

Barker, Chris. 2012. Quantificational binding does not require c-command. Linguistic Inquiry 43(4). 614–633. https://doi.org/10.1162/ling_a_00108.Search in Google Scholar

Barss, Andrew & Howard Lasnik. 1986. A note on anaphora and double objects. Linguistic Inquiry 17. 347–354.Search in Google Scholar

Bayer, Josef & Jaklin Kornfilt. 1994. Against scrambling as an instance of move-alpha. In Norbert Corver & Henk van Riemsdijk (eds.), Studies on scrambling, 17–60. Berlin: Mouton de Gruyter.10.1515/9783110857214.17Search in Google Scholar

Belk, Zoë & Ad Neeleman. 2017. AP adjacency as a precedence constraint. Linguistic Inquiry 48(1). 1–45. https://doi.org/10.1162/ling_a_00234.Search in Google Scholar

Berwick, Robert C. & Noam Chomsky. 2016. Why only us: Language and evolution. Cambridge, MA: MIT Press.10.7551/mitpress/9780262034241.001.0001Search in Google Scholar

Bianchi, Valentina. 2001. Antisymmetry and the leftness condition: Leftness as anti-c-command. Studia Linguistica 55(1). 1–38. https://doi.org/10.1111/1467-9582.00073.Search in Google Scholar

Bobaljik, Jonathan D. 2004. Clustering theories. In Katalin É. Kiss & Henk van Riemsdijk (eds.), Verb clusters: A study of Hungarian, German, and Dutch, 121–146. Amsterdam & Philadelphia: John Benjamins Publishing Company.10.1075/la.69.08bobSearch in Google Scholar

Borsley, Robert D. 2006. On the nature of Welsh VSO clauses. Lingua 116(4). 462–490. https://doi.org/10.1016/j.lingua.2005.02.004.Search in Google Scholar

Borsley, Robert D. & Berthold Crysmann. 2021. Unbounded dependencies. In Robert D. Borsley, Stefan Müller, Anne Abeillé & Jean-Pierre Koenig (eds.), Head-driven phrase structure grammar: The handbook, 537–594. Berlin: Language Science Press.Search in Google Scholar

Borsley, Robert D. & Stefan Müller. 2021. HPSG and minimalism. In Robert D. Borsley, Stefan Müller, Anne Abeillé & Jean-Pierre Koenig (eds.), Head-driven phrase structure grammar: The handbook, 1253–1329. Berlin: Language Science Press.Search in Google Scholar

Bošković, Željko. 1997. Superiority effects with multiple wh-fronting in Serbo-Croatian. Lingua 102(1). 1–20. https://doi.org/10.1016/s0024-3841(96)00031-9.Search in Google Scholar

Bosque, Ignacio & Carme Picallo. 1996. Postnominal adjectives in Spanish DPs. Journal of Linguistics 32(2). 349–385. https://doi.org/10.1017/s0022226700015929.Search in Google Scholar

Bouchard, Denis. 1991. From conceptual structure to syntactic structure. In Katherine Leffel & Denis Bouchard (eds.), Views on phrase structure, 21–35. Dordecht: Springer.10.1007/978-94-011-3196-4_2Search in Google Scholar

Bouchard, Denis. 1998. The distribution and interpretation of adjectives in French: A consequence of bare phrase structure. Probus 10. 139–183. https://doi.org/10.1515/prbs.1998.10.2.139.Search in Google Scholar

Bouma, Gosse & Gertjan Van Noord. 1998. Word order constraints on verb clusters in German and Dutch. In Erhard Hinrichs, Andreas Kathol & Tsuneko Nakazawa (eds.), Complex predicates in nonderivational syntax, 43–72. San Diego: Academic Press.10.1163/9780585492223_003Search in Google Scholar

Bresnan, Joan. 1977. Variables in the theory of transformations. In Peter W. Culicover, Thomas Wasow & Adrian Akmajian (eds.), Formal syntax, 157–196. New York: Academic Press.Search in Google Scholar

Bresnan, Joan. 1982. Control and complementation. Linguistic Inquiry 13(3). 343–434.Search in Google Scholar

Bresnan, Joan. 2001. Lexical-functional syntax. Oxford: Blackwell.Search in Google Scholar

Bresnan, Joan & Ronald Kaplan. 1982. Lexical-functional grammar: A formal system for grammatical representation. In Joan Bresnan (ed.), The mental representation of grammatical relations, 173–281. Cambridge, MA: MIT Press.Search in Google Scholar

Bruening, Benjamin. 2014. Precede-and-command revisited. Language 90(2). 342–388. https://doi.org/10.1353/lan.2014.0037.Search in Google Scholar

Bruening, Benjamin. 2020. The head of the nominal is N, not D: N-to-D movement, hybrid agreement, and conventionalized expressions. Glossa: A Journal of General Linguistics 5(1). https://doi.org/10.5334/gjgl.1031.Search in Google Scholar

Bruening, Benjamin. 2022. Locative inversion, PP topicalization, and weak crossover in English. Journal of Linguistics 58(4). 739–757. https://doi.org/10.1017/s0022226721000414.Search in Google Scholar

Bruening, Benjamin & Eman Al Khalaf. 2019. No argument–adjunct asymmetry in reconstruction for Binding Condition C. Journal of Linguistics 55(2). 247–276. https://doi.org/10.1017/s0022226718000324.Search in Google Scholar

Büring, Daniel. 2005. Binding theory. Cambridge: Cambridge University Press.10.1017/CBO9780511802669Search in Google Scholar

Büring, Daniel. 2013. Syntax, information structure and prosody. In Marcel den Dikken (ed.), The cambridge handbook of generative syntax, 860–895. Cambridge: Cambridge University Press.10.1017/CBO9780511804571.029Search in Google Scholar

Carnie, Andrew. 2005. Flat structure, phrasal variability and VSO. Journal of Celtic Linguistics 9(1). 13–31.Search in Google Scholar

Chomsky, Noam. 1955. The logical structure of linguistic theory. Published as Chomsky 1975.Search in Google Scholar

Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.10.21236/AD0616323Search in Google Scholar

Chomsky, Noam. 1970. Remarks on nominalization. In Roderick A. Jacobs & Peter S. Rosenbaum (eds.), Readings in English transformational grammar, 184–221. Waltham, Massachusetts: Ginn.Search in Google Scholar

Chomsky, Noam. 1973. Conditions on transformations. In Stephen Anderson & Paul Kiparsky (eds.), A Festschrift for Morris Halle, 232–286. New York: Holt, Reinhart & Winston.Search in Google Scholar

Chomsky, Noam. 1975. The logical structure of linguistic theory. New York: Plenum Press.Search in Google Scholar

Chomsky, Noam. 1977. On wh-movement. In Peter W. Culicover, Thomas Wasow & Adrian Akmajian (eds.), Formal syntax, 71–132. New York: Academic Press.Search in Google Scholar

Chomsky, Noam. 1981. Lectures on government and binding. Dordecht: Foris.Search in Google Scholar

Chomsky, Noam. 1986. Barriers. Cambridge, MA: MIT Press.Search in Google Scholar

Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press.Search in Google Scholar

Chomsky, Noam. 2000. Minimalist inquiries: The framework. In Roger Martin, David Michaels & Juan Uriagereka (eds.), Step by step: Essays on minimalist syntax in honor of Howard Lasnik, 89–156. Cambridge, MA: MIT Press.Search in Google Scholar

Chomsky, Noam. 2001. Derivation by phase. In Michael Kenstowicz (ed.), Ken Hale: A life in linguistics, 1–52. Cambridge, MA: MIT Press.10.7551/mitpress/4056.003.0004Search in Google Scholar

Chomsky, Noam. 2005. Three factors in language design. Linguistic Inquiry 36(1). 1–22. https://doi.org/10.1162/0024389052993655.Search in Google Scholar

Chomsky, Noam. 2013. Problems of projection. Lingua 130. 33–49. https://doi.org/10.1016/j.lingua.2012.12.003.Search in Google Scholar

Chomsky, Noam, Ángel J. Gallego & Dennis Ott. 2019. Generative grammar and the faculty of language: Insights, questions, and challenges. Catalan Journal of Linguistics 18. 229–261. https://doi.org/10.5565/rev/catjl.288.Search in Google Scholar

Chomsky, Noam & Howard Lasnik. 1977. Filters and control. Linguistic Inquiry 8(3). 425–504.Search in Google Scholar

Cinque, Guglielmo. 1994. On the evidence for partial N movement in the Romance DP. In Guglielmo Cinque, Jan Koster, Jean-Yves Pollock, Luigi Rizzi & Rafaela Zanuttini (eds.), Paths towards Universal Grammar. Studies in honour of Richard S. Kayne, 85–110. Washington, D.C.: Georgetown University Press.Search in Google Scholar

Cinque, Guglielmo. 1999. Adverbs and functional heads: A cross-linguistic perspective. New York: Oxford University Press.10.1093/oso/9780195115260.001.0001Search in Google Scholar

Cinque, Guglielmo. 2005. Deriving Greenberg’s universal 20 and its exceptions. Linguistic Inquiry 36(3). 315–332. https://doi.org/10.1162/0024389054396917.Search in Google Scholar

Cinque, Guglielmo. 2006. Complement and adverbial PPs: Implications for clause structure. In Guglielmo Cinque (ed.), Restructuring and functional heads, 145–166. New York: Oxford University Press.10.1093/oso/9780195179545.003.0007Search in Google Scholar

Cinque, Guglielmo. 2010. The syntax of adjectives: A comparative study. Cambridge, MA: MIT Press.10.7551/mitpress/9780262014168.001.0001Search in Google Scholar

Cinque, Guglielmo. 2013. Cognition, typological generalizations, and universal grammar. Lingua 130. 50–65. https://doi.org/10.1016/j.lingua.2012.10.007.Search in Google Scholar

Cinque, Guglielmo. 2023. On linearization: Toward a restrictive theory. Cambridge, MA: MIT Press.10.7551/mitpress/14681.001.0001Search in Google Scholar

Cinque, Guglielmo & Luigi Rizzi. 2008. The cartography of syntactic structures. Studies in Linguistics 2. 42–58.Search in Google Scholar

Clemens, Lauren Eby & Maria Polinsky. 2017. Verb-initial word orders (primarily in Austronesian and Mayan languages). In Martin Everaert & Henk van Riemsdijk (eds.), The Blackwell companion to syntax, 2nd edn. Hoboken, NJ: Wiley-Blackwell.10.1002/9781118358733.wbsyncom056Search in Google Scholar

Collins, Chris & Edward Stabler. 2016. A formalization of minimalist syntax. Syntax 19(1). 43–78. https://doi.org/10.1111/synt.12117.Search in Google Scholar

Cooper, Robin. 1979. The interpretation of pronouns. In Frank Heny & Helmut S. Schnelle (eds.), Syntax and semantics 10: Selections from the Third Groningen Round Table, 61–92. New York: Academic Press.10.1163/9789004373082_004Search in Google Scholar

Corcoran, John, William Frank & Michael Maloney. 1974. String theory. The Journal of Symbolic Logic 39(4). 625–637. https://doi.org/10.2307/2272846.Search in Google Scholar

Culicover, Peter W. 1971. Syntactic and semantic investigations. Cambridge, MA: MIT dissertation.

Culicover, Peter W. 1992. A note on quantifier binding. Linguistic Inquiry 23(4). 659–663.

Culicover, Peter W. 2013a. The role of linear order in the computation of referential dependencies. Lingua 136. 125–144. https://doi.org/10.1016/j.lingua.2013.07.013.

Culicover, Peter W. 2013b. Simpler syntax and explanation. In Stefan Müller (ed.), The 20th international conference on head-driven phrase structure grammar, 263–283. Stanford, CA: CSLI.

Culicover, Peter W. 2021. Language change, variation and universals – a constructional approach. Oxford: Oxford University Press.

Culicover, Peter W. & Ray Jackendoff. 2005. Simpler syntax. Oxford: Oxford University Press.

Culicover, Peter W. & Ray Jackendoff. 2012. A domain-general cognitive relation and how language expresses it. Language 82(2). 305–340.

Culicover, Peter W. & Robert D. Levine. 2001. Stylistic inversion in English: A reconsideration. Natural Language & Linguistic Theory 19(2). 283–310. https://doi.org/10.1023/a:1010646417840.

Culicover, Peter W. & Michael Rochemont. 1990. Extraposition and the complement principle. Linguistic Inquiry 21(1). 23–47.

Culicover, Peter W. & Giuseppe Varaschin. to appear. Deconstructing syntactic theory: A critical review. Oxford: Oxford University Press.

Culicover, Peter W., Giuseppe Varaschin & Susanne Winkler. 2022. The radical unacceptability hypothesis: Accounting for unacceptability without universal constraints. Languages 7(2). 96. https://doi.org/10.3390/languages7020096.

Culicover, Peter W. & Susanne Winkler. 2008. English focus inversion. Journal of Linguistics 44. 625–658. https://doi.org/10.1017/s0022226708005343.

Culicover, Peter W. & Susanne Winkler. 2019. Why topicalize VP? In Verner Egerland, Valeria Molnar & Susanne Winkler (eds.), The architecture of topic. Berlin: Walter de Gruyter.

Curry, Haskell B. 1963. Some logical aspects of grammatical structure. In Roman Jakobson (ed.), Structure of language and its mathematical aspects: Proceedings of the Twelfth Symposium in Applied Mathematics, 56–68. Providence, RI: American Mathematical Society.

Dalrymple, Mary. 2001. Lexical functional grammar. San Diego, CA: Academic Press.

de Kuthy, Kordula & Walt Detmar Meurers. 2001. On partial constituent fronting in German. Journal of Comparative Germanic Linguistics 3(3). 143–205. https://doi.org/10.1023/a:1011926510300.

de Kuthy, Kordula & Walt Detmar Meurers. 2011. Integrating GIVENness into a structured meaning approach in HPSG. In Stefan Müller (ed.), Proceedings of the 18th International Conference on Head-Driven Phrase Structure Grammar, 209–301. Stanford, CA: CSLI Publications.

den Dikken, Marcel. 1996. The minimal links of verb (projection) raising. In Werner Abraham, Samuel David Epstein, Höskuldur Thráinsson & C. Jan-Wouter Zwart (eds.), Minimal ideas, 67–96. Amsterdam: John Benjamins Publishing Company.

Dowty, David. 1982. Grammatical relations and Montague grammar. In Pauline Jacobson & Geoffrey K. Pullum (eds.), The nature of syntactic representation, 79–130. Dordrecht: D. Reidel Publishing Company.

Dowty, David R. 1996. Toward a minimalist theory of syntactic structure. In Harry Bunt & Arthur van Horck (eds.), Discontinuous constituency, 11–62. Berlin: Mouton de Gruyter.

Ernst, Thomas B. 2002. The syntax of adjuncts. Cambridge: Cambridge University Press.

Fanselow, Gisbert. 2001. Features, theta-roles, and free constituent order. Linguistic Inquiry 32(3). 405–437. https://doi.org/10.1162/002438901750372513.

Fanselow, Gisbert. 2003. Free constituent order: A minimalist interface account. Folia Linguistica 37(1/2). 191–232. https://doi.org/10.1515/flin.2003.37.1-2.191.

Fanselow, Gisbert. 2006. On pure syntax (uncontaminated by information structure). In Patrick Brandt & Eric Fuß (eds.), Form, structure, and grammar: A festschrift presented to Günther Grewendorf on occasion of his 60th birthday, 137–157. Berlin: Akademie Verlag.

Fanselow, Gisbert & Damir Ćavar. 2002. Distributed deletion. In Artemis Alexiadou (ed.), Theoretical approaches to universals, 65–107. Amsterdam & Philadelphia: John Benjamins Publishing Company.

Fanselow, Gisbert & Caroline Féry. 2008. Missing superiority effects: Long movement in German (and other languages). In Jacek Witkoś & Gisbert Fanselow (eds.), Elements of Germanic and Slavic grammars: A comparative view, 67–87. Frankfurt: Lang.

Findlay, Jamie Yates. 2016. Mapping theory without argument structure. Journal of Language Modelling 4(2). 293–338. https://doi.org/10.15398/jlm.v4i2.171.

Fodor, Janet Dean. 1978. Parsing strategies and constraints on transformations. Linguistic Inquiry 9(3). 427–473.

Fox, Danny & Jon Nissenbaum. 1999. Extraposition and scope: A case for overt QR. In Sonya Bird, Andrew Carnie, Jason D. Haugen & Peter Norquest (eds.), Proceedings of the West Coast Conference on Formal Linguistics 18, 132–144. Somerville, MA: Cascadilla Press.

Fox, Danny & David Pesetsky. 2005. Cyclic linearization of syntactic structure. Theoretical Linguistics 31(1-2). 1–46. https://doi.org/10.1515/thli.2005.31.1-2.1.

Frey, Werner. 1993. Syntaktische Bedingungen für die semantische Interpretation: Über Bindung, implizite Argumente und Skopus. Berlin: Akademie Verlag.

Frey, Werner. 2015. Word order. In Tibor Kiss & Artemis Alexiadou (eds.), Syntax–theory and analysis: An international handbook, vol. 1, 514–562. Berlin: De Gruyter Mouton.

Fukui, Naoki & Yuji Takano. 1998. Symmetry in syntax: Merge and demerge. Journal of East Asian Linguistics 7(1). 27–86. https://doi.org/10.1023/a:1008240710949.

Gazdar, Gerald. 1981. Unbounded dependencies and coordinate structure. Linguistic Inquiry 12(2). 155–184.

Gazdar, Gerald, Ewan Klein, Geoffrey Pullum & Ivan A. Sag. 1985. Generalized phrase structure grammar. Oxford, England & Cambridge, MA: Blackwell Publishing and Harvard University Press.

Gazdar, Gerald & Geoffrey Pullum. 1981. Subcategorization, constituent order and the notion ‘head’. In Michael Moortgat, Harry van der Hulst & Teun Hoekstra (eds.), The scope of lexical rules, 107–123. Dordrecht: Foris.

Giurgea, Ion. 2009. Adjective placement and linearization. In Jeroen van Craenenbroeck (ed.), Alternatives to cartography, 275–324. Berlin: Mouton de Gruyter.

Göbbel, Edward. 2020. Extraposition from NP in English: Explorations at the syntax-phonology interface. Berlin: De Gruyter Mouton.

Goto, Nobu & Toru Ishii. 2022. Multiple nominative and form sequence: A new perspective to MERGE and form-set. Lingbuzz. Available at: https://ling.auf.net/lingbuzz/005931 (accessed 30 May 2021).

Guéron, Jacqueline & Robert May. 1984. Extraposition and logical form. Linguistic Inquiry 15(1). 1–32.

Guimarães, Maximiliano. 2004. Derivation and representation of syntactic amalgams. College Park, MD: University of Maryland dissertation.

Haegeman, Liliane. 1994. Verb raising as verb projection raising: Some empirical problems. Linguistic Inquiry 25(3). 509–521.

Haegeman, Liliane & Henk van Riemsdijk. 1986. Verb projection raising, scope, and the typology of verb movement rules. Linguistic Inquiry 17(3). 417–466.

Haider, Hubert. 2000. Towards a superior account of superiority. In Uli Lutz, Gereon Müller & Arnim von Stechow (eds.), Wh-scope marking, 231–248. Amsterdam: Benjamins.

Haider, Hubert. 2004. The superiority conspiracy: Four constraints and a processing effect. In Arthur Stepanov, Gisbert Fanselow & Ralf Vogel (eds.), Minimality effects in syntax, 147–175. Berlin: Mouton de Gruyter.

Haider, Hubert. 2021. A null theory of scrambling. Zeitschrift für Sprachwissenschaft 39(3). 375–405. https://doi.org/10.1515/zfs-2020-2019.

Haider, Hubert & Inger Rosengren. 2003. Scrambling: Nontriggered chain formation in OV languages. Journal of Germanic Linguistics 15(3). 203–267. https://doi.org/10.1017/s1470542703000291.

Hale, Ken. 1983. Warlpiri and the grammar of non-configurational languages. Natural Language & Linguistic Theory 1(1). 5–47. https://doi.org/10.1007/bf00210374.

Hale, Ken & Samuel Jay Keyser. 2002. Prolegomenon to a theory of argument structure. Cambridge, MA: MIT Press.

Hale, Kenneth & Samuel Jay Keyser. 1993. On argument structure and the lexical expression of syntactic relations. In Kenneth Hale & Samuel Jay Keyser (eds.), The view from building 20, 53–110. Cambridge, MA: MIT Press.

Halle, Morris & Alec Marantz. 1994. Some key features of distributed morphology. In Andrew Carnie & Heidi Harley (eds.), MITWPL 21: Papers on phonology and morphology, 275–288. Cambridge, MA: MIT.

Harley, Heidi. 2014. On the identity of roots. Theoretical Linguistics 40(3-4). 225–276. https://doi.org/10.1515/tl-2014-0010.

Harley, Heidi & Rolf Noyer. 1999. Distributed morphology. Glot International 4(4). 3–9.

Harris, Zellig. 1951. Methods in structural linguistics. Chicago, IL: University of Chicago Press.

Hawkins, John A. 1994. A performance theory of order and constituency. Cambridge: Cambridge University Press.

Hawkins, John A. 2004. Efficiency and complexity in grammars. Oxford: Oxford University Press.

Hawkins, John A. 2014. Cross-linguistic variation and efficiency. Oxford: Oxford University Press.

Heim, Irene & Angelika Kratzer. 1998. Semantics in generative grammar. Malden, MA: Blackwell.

Hinrichs, Erhard & Tsuneko Nakazawa. 1994. Linearizing AUXs in German verbal complexes. In John Nerbonne, Klaus Netter & Carl Pollard (eds.), German in head-driven phrase structure grammar, 11–37. Stanford, CA: CSLI.

Hinterhölzl, Roland. 2006. Scrambling, remnant movement, and restructuring in West Germanic. Oxford: Oxford University Press.

Hornstein, Norbert. 1995. Logical form: From GB to minimalism. Cambridge, MA: Basil Blackwell.

Hornstein, Norbert & Jairo Nunes. 2008. Adjunction, labeling, and bare phrase structure. Biolinguistics 2(1). 57–86. https://doi.org/10.5964/bioling.8621.

Huang, C.-T. James. 1993. Reconstruction and the structure of VP: Some theoretical consequences. Linguistic Inquiry 24(1). 103–138.

Jackendoff, Ray. 1972. Semantic interpretation in generative grammar. Cambridge, MA: MIT Press.

Jackendoff, Ray. 1977. X′ Syntax. Cambridge, MA: MIT Press.

Jackendoff, Ray. 1983. Semantics and cognition. Cambridge, MA: MIT Press.

Jackendoff, Ray. 1990a. On Larson’s treatment of the double object construction. Linguistic Inquiry 21(3). 427–455.

Jackendoff, Ray. 1990b. Semantic structures. Cambridge, MA: MIT Press.

Jackendoff, Ray. 1997. The architecture of the language faculty. Cambridge, MA: MIT Press.

Jackendoff, Ray. 2002. Foundations of language: Brain, meaning, grammar, evolution. Oxford: Oxford University Press.

Jackendoff, Ray & Jenny Audring. 2020. The texture of the lexicon. Oxford: Oxford University Press.

Kaplan, Ronald M. 1995. The formal architecture of lexical-functional grammar. In Formal issues in lexical-functional grammar. Stanford, CA: CSLI Publications.

Kasper, Robert. 1994. Adjuncts in the Mittelfeld. In John Nerbonne, Klaus Netter & Carl Pollard (eds.), German in head-driven phrase structure grammar, 39–69. Stanford, CA: CSLI.

Kayne, Richard S. 1994. The antisymmetry of syntax. Cambridge, MA: MIT Press.

Kayne, Richard S. 2004. Prepositions as probes. In Adriana Belletti (ed.), Structures and beyond: The cartography of syntactic structures, vol. 3, 192–212. Oxford: Oxford University Press.

Kayne, Richard S. 2022. Antisymmetry and externalization. Studies in Chinese Linguistics 43(1). 1–20. https://doi.org/10.2478/scl-2022-0001.

Keenan, Edward L. & Bernard Comrie. 1977. Noun phrase accessibility and universal grammar. Linguistic Inquiry 8(1). 63–99.

Kim, Jong-Bok. 2003. English locative inversion: A constraint-based approach. Korean Journal of Linguistics 28. 207–235.

Klein, Ewan & Ivan A. Sag. 1985. Type-driven translation. Linguistics and Philosophy 8(2). 163–201. https://doi.org/10.1007/bf00632365.

Koenig, Jean-Pierre, Gail Mauner & Breton Bienvenue. 2003. Arguments for adjuncts. Cognition 89(2). 67–103. https://doi.org/10.1016/s0010-0277(03)00082-9.

Koopman, Hilda & Dominique Sportiche. 1983. Variables and the bijection principle. The Linguistic Review 2. 139–160. https://doi.org/10.1515/tlir.1982.2.2.139.

Krivochen, Diego Gabriel. 2015. On phrase structure building and labeling algorithms: Towards a non-uniform theory of syntactic structures. The Linguistic Review 32(3). 515–572. https://doi.org/10.1515/tlr-2014-0030.

Kubota, Yusuke & Robert D. Levine. 2020. Type-logical syntax. Cambridge, MA: MIT Press.

Lakoff, George & John R. Ross. 1976. Why you can’t do so into the sink. In James D. McCawley (ed.), Notes from the linguistic underground. San Diego, CA: Academic Press.

Lamarche, Jacques. 1991. Problems for N-movement to NumP. Probus 3(2). 215–236. https://doi.org/10.1515/prbs.1991.3.2.215.

Larson, Richard. 1988. On the double object construction. Linguistic Inquiry 19(3). 335–392.

Larson, Richard. 1990. Double objects revisited: Reply to Jackendoff. Linguistic Inquiry 21(4). 589–632.

Levine, Robert D. 2003. Adjunct valents, cumulative scopings and impossible descriptions. In Jong-Bok Kim & Stephen Wechsler (eds.), The proceedings of the 9th international conference on Head-Driven Phrase Structure Grammar, 209–232. Stanford, CA: CSLI.

Levine, Robert D. & Thomas Hukari. 2006. The unity of unbounded dependency constructions. Stanford, CA: CSLI.

López, Luis. 2009. Ranking the Linear Correspondence Axiom. Linguistic Inquiry 40(2). 239–276. https://doi.org/10.1162/ling.2009.40.2.239.

Maienborn, Claudia. 2001. On the position and interpretation of locative modifiers. Natural Language Semantics 9(2). 191–240. https://doi.org/10.1023/a:1012405607146.

May, Robert. 1985. Logical form: Its structure and derivation. Cambridge, MA: MIT Press.

McCawley, James D. 1968. Concerning the base component of a transformational grammar. Foundations of Language 4(3). 243–269.

McInnerney, Andrew. 2022. The argument/adjunct distinction and the structure of prepositional phrases. Ann Arbor, MI: University of Michigan dissertation.

Miliorini, Rafaela. 2021. O papel explanatório da distinção argumento–adjunto: adjuntos segregados e adjuntos integrados. Florianopolis: Universidade Federal de Santa Catarina dissertation.

Miller, Philip H. 1992. Clitics and constituents in phrase structure grammar. Santa Cruz, CA: University of California, Santa Cruz dissertation.

Miyagawa, Shigeru. 2011. Optionality. In Cedric Boeckx (ed.), The Oxford handbook of linguistic minimalism, 354–376. Oxford: Oxford University Press.

Miyagawa, Shigeru & Takae Tsujioka. 2004. Argument structure and ditransitive verbs in Japanese. Journal of East Asian Linguistics 13(1). 1–38. https://doi.org/10.1023/b:jeal.0000007345.64336.84.

Mohanan, Karuvannu P. 1983. Grammatical relations and clause structure in Malayalam. In Joan Bresnan (ed.), The mental representation of grammatical relations, 504–589. Cambridge, MA: MIT Press.

Montague, Richard. 1974. Formal philosophy. New Haven: Yale University Press.

Müller, Gereon. 1998. Incomplete category fronting: A derivational approach to remnant movement in German. Dordrecht: Kluwer Academic Publishers.

Müller, Stefan. 2002. Complex predicates: Verbal complexes, resultative constructions, and particle verbs in German. Stanford, CA: CSLI Publications.

Müller, Stefan. 2013. Head-driven phrase structure grammar: Eine Einführung. Tübingen: Stauffenburg Verlag.

Müller, Stefan. 2023. Grammatical theory: From transformational grammar to constraint-based approaches, 5th edn. Berlin: Language Science Press.

Müller, Stefan. to appear. German clause structure: An analysis with special consideration of so-called multiple frontings. Berlin: Language Science Press.

Needle, Jordan. 2022. Embedding HTLCG into LCGϕ. Journal of Logic, Language and Information 31(4). 677–721. https://doi.org/10.1007/s10849-022-09388-5.

Neeleman, Ad & Hans van de Koot. 2008. Dutch scrambling and the nature of discourse templates. The Journal of Comparative Germanic Linguistics 11(2). 137–189. https://doi.org/10.1007/s10828-008-9018-0.

Nerbonne, John. 1994. Partial verb phrases and spurious ambiguities. In John Nerbonne, Klaus Netter & Carl Pollard (eds.), German in head-driven phrase structure grammar, 109–150. Stanford, CA: CSLI.

Ott, Dennis. 2009. Multiple NP split: A distributed deletion analysis. Groninger Arbeiten zur Germanistischen Linguistik 48. 65–80.

Ott, Dennis. 2018. VP-fronting: Movement vs. dislocation. The Linguistic Review 35(2). 243–282. https://doi.org/10.1515/tlr-2017-0024.

Panayidou, Fryni. 2013. (In)flexibility in adjective ordering. London: University of London dissertation.

Parsons, Terence. 1990. Events in the semantics of English: A study in subatomic semantics. Cambridge, MA: MIT Press.

Partee, Barbara Hall, Alice G. B. ter Meulen & Robert Eugene Wall. 1990. Mathematical methods in linguistics. Dordrecht & Boston: Kluwer Academic.

Pesetsky, David. 1995. Zero syntax. Cambridge, MA: MIT Press.

Pollard, Carl. 1996a. The nature of constraint-based grammar. Linguistic Research 15. 1–18.

Pollard, Carl & Ivan A. Sag. 1994. Head-driven phrase structure grammar. Chicago, IL: University of Chicago Press and CSLI Publications.

Pollard, Carl J. 1996b. On head non-movement. In Harry Bunt & Arthur van Horck (eds.), Discontinuous constituency, 279–306. Berlin: Mouton de Gruyter.

Postal, Paul M. 2003. (Virtually) conceptually necessary. Journal of Linguistics 39(3). 599–620. https://doi.org/10.1017/s0022226703002111.

Przepiórkowski, Adam. 1999. Case assignment and the complement-adjunct dichotomy: A non-configurational constraint-based approach. Tübingen: University of Tübingen dissertation.

Pullum, Geoffrey K. 2019. What grammars are, or ought to be. In Stefan Müller & Petya Osenova (eds.), Proceedings of the 26th International Conference on Head-Driven Phrase Structure Grammar, 58–78. Stanford, CA: CSLI Publications.

Pullum, Geoffrey K. 2020. Theorizing about the syntax of human language: A radical alternative to generative formalisms. Cadernos de Linguística 1(1). 1–33. https://doi.org/10.25189/2675-4916.2020.v1.n1.id279.

Pylkkänen, Liina. 2008. Introducing arguments. Cambridge, MA: MIT Press.

Ramchand, Gillian & Peter Svenonius. 2014. Deriving the functional hierarchy. Language Sciences 46. 152–174. https://doi.org/10.1016/j.langsci.2014.06.013.

Reinhart, Tanya. 1983. Anaphora and semantic interpretation. Chicago, IL: University of Chicago Press.

Reinhart, Tanya. 2006. Interface strategies. Cambridge, MA: MIT Press.

Richter, Frank. 2021. Formal background. In Stefan Müller, Anne Abeillé, Robert D. Borsley & Jean-Pierre Koenig (eds.), Head-driven phrase structure grammar: The handbook, 89–124. Berlin: Language Science Press.

Riezler, Stefan. 1995. Binding without hierarchies. In CLAUS-Report 50. Saarbrücken: Universität des Saarlandes.

Rizzi, Luigi. 1978. A restructuring rule in Italian syntax. In Samuel Jay Keyser (ed.), Recent transformational studies in European languages, 113–158. Cambridge, MA: MIT Press.

Rizzi, Luigi. 1990. Relativized minimality. Cambridge, MA: MIT Press.

Rochemont, Michael. 1978. A theory of stylistic rules in English. Amherst, MA: University of Massachusetts dissertation.

Rochemont, Michael & Peter W. Culicover. 1990. English focus constructions and the theory of grammar. Cambridge: Cambridge University Press.

Rochemont, Michael & Peter W. Culicover. 1997. Deriving dependent right adjuncts in English. In Dorothee Beerman, David LeBlanc & Henk van Riemsdijk (eds.), Rightward movement, 277–300. Amsterdam: John Benjamins Publishing Company.

Rochemont, Michael S. 2015. Review of Gert Webelhuth, Manfred Sailer & Heike Walker (eds.), Rightward movement in a comparative perspective (Linguistik aktuell/Linguistics today 200). Language 91(5). 501–503.

Ross, John R. 1967. Constraints on variables in syntax. Cambridge, MA: MIT dissertation.

Safir, Ken. 1984. Multiple variable binding. Linguistic Inquiry 15(4). 603–638.

Sag, Ivan A. 2007. Remarks on locality. In Stefan Müller (ed.), Proceedings of the 14th International Conference on Head-Driven Phrase Structure Grammar, 394–414. Stanford, CA: CSLI Publications.

Sag, Ivan A. 2010. Feature geometry and predictions of locality. In Greville G. Corbett & Anna Kibort (eds.), Features: Perspectives on a key notion in linguistics. Oxford: Clarendon Press.

Sag, Ivan A. 2012. Sign-based construction grammar – a synopsis. In Hans C. Boas & Ivan A. Sag (eds.), Sign-based construction grammar, 61–197. Stanford, CA: CSLI.

Sag, Ivan A., Rui P. Chaves, Anne Abeillé, Bruno Estigarribia, Dan Flickinger, Paul Kay, Laura A. Michaelis, Stefan Müller, Geoffrey K. Pullum, Frank Van Eynde & Thomas Wasow. 2020. Lessons from the English auxiliary system. Journal of Linguistics 56. 1–69. https://doi.org/10.1017/s002222671800052x.

Salzmann, Martin. to appear. Word order in the German middle field – scrambling. In Katharina Hartmann, Johannes Mursell & Susi Wurmbrand (eds.), Handbook of Germanic syntax. Berlin: De Gruyter.

Sauerland, Uli & Artemis Alexiadou. 2020. Generative grammar: A meaning first approach. Frontiers in Psychology 11. 571295. https://doi.org/10.3389/fpsyg.2020.571295.

Schweikert, Walter. 2005. The order of prepositional phrases in the structure of the clause. Amsterdam: John Benjamins.

Sells, Peter. 1987. Backwards anaphora and discourse structure: Some considerations. Stanford, CA: CSLI.

Sheehan, Michelle L. 2013. Some implications of a copy theory of labeling. Syntax 16(4). 362–396. https://doi.org/10.1111/synt.12010.

Sichel, Ivy. 2000. Evidence for DP-internal remnant movement. In Masako Hirotani, Andries Coetzee, Nancy Hall & Ji-yung Kim (eds.), Proceedings of the North East Linguistic Society, 569–582. Rutgers University: Graduate Linguistic Student Association.

Simpson, Jane & Joan Bresnan. 1983. Control and obviation in Warlpiri. Natural Language and Linguistic Theory 1(1). 49–64. https://doi.org/10.1007/bf00210375.

Solan, Lawrence. 1983. Pronominal reference: Child language and the theory of grammar. Dordrecht: Reidel.

Stabler, Edward. 2011. Computational perspectives on minimalism. In Cedric Boeckx (ed.), Oxford handbook of linguistic minimalism, 617–641. Oxford: Oxford University Press.

Starke, Michal. 2010. Nanosyntax: A short primer to a new approach to language. Nordlyd 36(1). 1–6. https://doi.org/10.7557/12.213.

Takamine, Kaori. 2010. The postpositional hierarchy and its mapping to clause structure in Japanese. Tromsø: Universitetet i Tromsø dissertation.

Teodorescu, Alexandra. 2006. Adjective ordering restrictions revisited. In Donald Baumer, David Montero & Michael Scanlon (eds.), Proceedings of the 25th West Coast Conference on Formal Linguistics, 399–407. Somerville, MA: Cascadilla Press.

Thoms, Gary & George Walkden. 2019. vP-fronting with and without remnant movement. Journal of Linguistics 55(1). 161–214. https://doi.org/10.1017/s002222671800004x.

Trnavac, Radoslava & Maite Taboada. 2016. Cataphora, backgrounding and accessibility in discourse. Journal of Pragmatics 93. 68–84. https://doi.org/10.1016/j.pragma.2015.12.008.

Trotzke, Andreas. 2015. Rethinking syntactocentrism: Architectural issues and case studies at the syntax-pragmatics interface. Amsterdam: John Benjamins.

Uszkoreit, Hans. 1986. Constraints on order. Linguistics 24. 883–906. https://doi.org/10.1515/ling.1986.24.5.883.

van Craenenbroeck, Jeroen (ed.). 2009. Alternatives to cartography. Berlin & New York: Mouton de Gruyter.

Varaschin, Giuseppe. 2021. A simpler syntax of anaphora. Florianopolis: Universidade Federal de Santa Catarina dissertation.

Varaschin, Giuseppe, Peter W. Culicover & Susanne Winkler. in press. In pursuit of condition C: (Non-)coreference in grammar, discourse and processing. In Andreas Konietzko & Susanne Winkler (eds.), Information structure and discourse in generative grammar. Berlin: De Gruyter.

Wallenberg, Joel. 2015. Antisymmetry and heavy NP shift across Germanic. In Theresa Biberauer & George Walkden (eds.), Syntax over time: Lexical, morphological and information-structural interactions, 336–349. Oxford: Oxford University Press.

Ward, Gregory L. 1990. The discourse functions of VP preposing. Language 66(4). 742–763. https://doi.org/10.2307/414728.

Wasow, Thomas. 1997. Remarks on grammatical weight. Language Variation and Change 9(1). 81–105. https://doi.org/10.1017/s0954394500001800.

Wasow, Thomas. 2002. Postverbal behavior. Stanford, CA: CSLI.

Wasow, Thomas & Jennifer Arnold. 2003. Post-verbal constituent ordering in English. Topics in English Linguistics 43. 119–154. https://doi.org/10.1515/9783110900019.119.

Webelhuth, Gert, Manfred Sailer & Heike Walker. 2013. Introduction by the editors. In Gert Webelhuth, Manfred Sailer & Heike Walker (eds.), Rightward movement in a comparative perspective, 1–60. Amsterdam & Philadelphia: John Benjamins Publishing Company.

Wells, Rulon S. 1947. Immediate constituents. Language 23(1). 81–117. https://doi.org/10.2307/410382.

Wetta, Andrew Charles. 2015. Construction-based approaches to flexible word order. Buffalo, NY: State University of New York at Buffalo dissertation.

Wexler, Kenneth & Peter W. Culicover. 1980. Formal principles of language acquisition. Cambridge, MA: MIT Press.

Williams, Edwin. 1974. Rule ordering in syntax. Cambridge, MA: MIT dissertation.

Willis, David. 2006. Against N-raising and NP-raising analyses of Welsh noun phrases. Lingua 116(11). 1807–1839. https://doi.org/10.1016/j.lingua.2004.09.004.

Wurmbrand, Susi. 2004. West Germanic verb clusters: The empirical domain. In Katalin É. Kiss & Henk van Riemsdijk (eds.), Verb clusters: A study of Hungarian, German and Dutch, 43–85. Amsterdam & Philadelphia: John Benjamins Publishing Company.

Wurmbrand, Susi. 2006. Verb clusters, verb raising, and restructuring. In Martin Everaert & Henk van Riemsdijk (eds.), The Blackwell companion to syntax, vol. 5, 229–343. Oxford: Blackwell Publishers.

Yashima, Jun. 2015. Antilogophoricity: In conspiracy with the binding theory. Los Angeles, CA: UCLA dissertation.

Zagona, Karen. 1988. Verb phrase syntax: A parametric study of English and Spanish. Dordrecht: Kluwer Academic Publishers.

Zwart, Jan-Wouter. 1995. A note on verb clusters in the Stellingwerf dialect. Linguistics in the Netherlands 12(1). 215–226. https://doi.org/10.1075/avt.12.20zwa.

Published Online: 2024-07-19
Published in Print: 2024-09-25

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
