bradrn wrote: ↑Fri Jul 03, 2020 4:03 am
I suppose I’m talking more about sentences like ‘He studies linguistics at the university’, which seems pretty simple but ends up with a tree like this:
You're right that, just looking at this one sentence "He studies linguistics at the university", there's not really much motivation for positing an I(nflection) node here (you might also see it called T, for "tense"—I'll be using T here because it's more readable in sans-serif). The motivation comes from looking at other cases, both within English and without, where there is stronger evidence for the necessity of a separate T node in the structure. You then construct a set of rules for syntactic structure-building, as well as rules for semantic interpretation for things like tense (because yes, of course, these syntactic structures need to be mapped to semantic interpretations!). So you've "complicated" the theory at this point, but you're satisfied because it seemed necessary at the time.
Then you go back to the original, "simple" sentence. There's no overt T in "he studies linguistics at the university" screaming out at you. So what are your options at this point?
One option is to further add to your syntactic rules, and come up with a principled theory of why certain sentences lack T, but other sentences need T. You'd also need to further add to your semantic rules, to come up with a principled way to map a T node to tense interpretation in one set of structures, and present and past affixes on the lexical verb to tense interpretation in another set of structures.
The other option is to not do all that, and posit a null T node (actually, some people have verbal inflection, like present 3sg -s and past -ed, sitting in T, which "lowers" onto the verb root in these "simple" sentences; they still have a T node in these sentences though). This means you can have a uniform theory of both the more syntactically complex sentences and the "simple" sentences, as well as uniform ways of mapping syntactic structures to semantic interpretations. Whether you think this is worth the "cost" of positing a null node in the syntax (or of positing a "lowering" operation) is an aesthetic choice at this point, from my perspective. There are plenty of syntactic theories on the market that don't make the same choices broadly-Chomskyan generative grammar does.
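To make the lowering idea concrete, here's a minimal sketch in LaTeX (qtree notation; the labels and the decision to omit the PP are my own simplifications, not any particular textbook's analysis):

```latex
% A sketch of the affixal-T ("lowering") analysis of a "simple" sentence.
% T hosts the 3sg present affix -s, which lowers onto the verb root.
% (qtree package; the PP "at the university" is omitted for readability.)
\documentclass{article}
\usepackage{qtree}
\begin{document}
\Tree [.TP [.DP He ]
        [.T$'$ [.T {-s} ]
          [.VP [.V studi- ] [.DP linguistics ] ] ] ]
\end{document}
```

The point is just that T is present in the structure even when nothing pronounced occupies it on the surface.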
(This has all been very abstract and high-level—if you want more detail, there are plenty of introductory textbooks that try to motivate this kind of thing, and I don't want to spend my time rehashing material that's already out there. The motivation might be in a dedicated section on T, or it might be split among sections on finiteness, syntactic selection, head movement, and possibly other topics. I think Andrew Carnie's textbook Syntax: A Generative Introduction is clear and lucid (libgennable), and MITOCW has two resources for undergraduate introductory syntax classes with lecture notes that might also be useful—one from 2003 and one from 2015.)
bradrn wrote: ↑Fri Jul 03, 2020 4:03 am
To use a physics analogy again, look at the Schrödinger equation... that sure as heck isn't noob-friendly. But it's not like physicists just added shit in to make it more difficult.
A counterpoint: I learned about the Schrödinger equation this semester (or at least one form of it), and I was surprised to find that it wasn’t actually all that difficult. (You do need a decent knowledge of calculus to use it, but that’s no different to anything else in physics.)
Similarly, the "(over)complicated trees" (or matrices in HPSG or LFG, or whatever equivalents in other more formal models of syntax) aren't actually all that difficult to understand if you have a decent knowledge of the basics and guiding principles. Undergrads in intro classes seem to end up understanding this stuff (even the "complicated" tree you had an image of, which is the kind of structure I would expect someone to come out of an introductory syntax class able to understand and draw). Of course, you need some buy-in and effort to understand the basics and guiding principles (though certainly much less buy-in than learning calculus requires). I don't think "it looks complicated, and I think I should understand it, but I don't, therefore it must be unnecessary and wrong" is a good argument.
bradrn wrote: ↑Fri Jul 03, 2020 7:39 am
zompist wrote: ↑Fri Jul 03, 2020 4:43 am
The other is that the first tree follows the rules of X-bar syntax. There are justifications for the extra nodes, but since Chomsky got rid of them in Minimalism, it's pretty moot now.
So Minimalism made X-bar theory obsolete? I had thought that Minimalism and X-bar theory are still both widely used today.
The literature is (understandably) confusing about this point. Most people don't believe "X-bar theory proper", in the sense that we don't believe there's some God-given X-bar schema that all phrases must follow, and we don't believe there's some separate "X-bar module" in the grammar that regulates this stuff. For instance, certainly we don't believe that something like "sleep" is a V dominated by V' dominated by VP. It's just one node (unless you wanna do some kind of lexical decomposition), which you can either call V or VP, it doesn't really matter too much.
However, we do use the terminology and tree-drawing conventions from X-bar because they're useful (and, quite frankly, because we're used to them). We use X-bar terms like "specifier", "head", "complement", and "adjunct", for instance, but those are meant just as ways to refer to different nodes in the tree. Similarly, we might draw trees with Xs, X's, and XPs, but that's again just a notational convention, which people don't always follow. X refers to a head (which is still a primitive notion), XP refers to the "maximal projection" of a head (which is still a useful notion to have, but people try to derive it from other principles—there's a whole literature on "labeling" and the "labeling algorithm" that thinks about this kind of stuff; I'm not too familiar with it, though), and X' is just whatever comes in between X and XP. Oftentimes you'll see people just not drawing X' levels (though this can be hard to do nicely in LaTeX sometimes); here's something pulled from Google to illustrate what I mean:
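(In case the image doesn't survive quoting, here's the same point as a minimal LaTeX sketch, with a toy VP of my own: the two trees encode the same analysis, once with the vacuous V' level drawn and once without.)

```latex
% The same VP drawn with and without the intermediate bar level
% (forest package with its linguistics option; toy example).
\documentclass{article}
\usepackage[linguistics]{forest}
\begin{document}
% Full X-bar notation, vacuous V' included:
\begin{forest}
  [VP [V' [V [see]] [DP [Mary]]]]
\end{forest}
\quad
% Bar level omitted; nothing about the analysis changes:
\begin{forest}
  [VP [V [see]] [DP [Mary]]]
\end{forest}
\end{document}
```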
bradrn wrote: ↑Fri Jul 03, 2020 4:03 am
(For instance, one thing that’s puzzled me before: how can you have a VP in a VSO language? Presumably they have some solution, although I wouldn’t know what it is.)
Honestly I would need a lot of convincing myself to accept a VO node in a VSO language. I'd want to see solid syntactic evidence within that language.
Ah, right — I had thought that GG assumes the presence of a VP in any language, so thanks for correcting this!
Most people, at least within broadly-Chomskyan syntax, assume that you have VP constituents in every language—that is, that verb and object form a constituent to the exclusion of the subject. But obviously, VSO can't have a VP constituent on the surface.
However, we independently need to allow for the possibility of displacement/movement dependencies. So, all else being equal, we predict that it should be possible to, for instance, have the verb move out of VP, leaving us with a surface VSO order. And indeed, people have argued that that kind of derivation is exactly how some languages get to be VSO (e.g. Irish, McCloskey 1996 (scihubbable), among others; there's also a lucid introduction to this idea in the Carnie textbook I mentioned earlier). And there are other ways of getting to VSO—Clemens and Polinsky (2017) is a nice overview, I find.
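Schematically, the verb-raising route to VSO looks like this (a sketch in qtree notation; the bare S/O/V labels are placeholders of mine, not data from any particular language):

```latex
% V-to-T head movement deriving VSO: the verb raises out of VP to T,
% stranding the subject (in Spec,VP) and the object, so the terminals
% are pronounced in the order V S O. (qtree package; t_i marks the trace.)
\documentclass{article}
\usepackage{qtree}
\begin{document}
\Tree [.TP [.T V$_i$+T ]
        [.VP [.DP S ]
          [.V$'$ [.V t$_i$ ] [.DP O ] ] ] ]
\end{document}
```

Reading the terminals left to right gives V S O, while a verb-object constituent excluding the subject (here the V' node) is still present underlyingly.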
Of course, just because we can derive VSO orders with VPs doesn't mean we should. And of course, we'd like to look for independent evidence of VPs in the language, using the same kinds of diagnostics we use for other languages. Are there things like VP proforms (e.g. do so), and can they be substituted for V and O to the exclusion of S? Assuming that binding is sensitive to hierarchical structure (and I think there is plenty of evidence that it is), can we show that S is structurally higher than O (as would be predicted by VO being a constituent to the exclusion of S)? Is there evidence of movement dependencies that "break up" the VP? Etc.