Syntax random


Re: Syntax random

Post by zompist »

Tropylium wrote: Thu Jul 02, 2020 5:12 pm
priscianic wrote:The point is that sometimes you might expect certain things to be grammatical (for whatever reason), and they aren't—and that naturally leads you to wonder why.
No, I don't mean that there's anything bad with this if you already have ended up with an expectation… The original observation I made is that this seems to be the only common workflow for bringing syntactic theories in better accord with languages as they exist in reality. And I mean not just the question, but also the answer, which invariably is something roughly "here's a constraint that prevents this from being grammatical".
So now what's wrong with GG is that it posits rules, like every other branch of linguistics and like every science?

Morphology has rules too. When you hear a child saying "I goed", you see them applying a rule— all the more evident since it's 'wrong'; it means that they're applying a rule rather than just repeating what they hear.
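
Schematically, the rule the child is over-applying looks something like this (an informal sketch, not any particular formalism):

Code:

rule: past(V) → V + "-ed"

  walk → walked   ✓ regular
  play → played   ✓ regular
  go   → goed     ✗ the adult form "went" is a memorized
                    exception that blocks the rule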
Tropylium wrote: It's also interesting that you tell me movement is only a "framework" and it does not need to float my boat. Does this mean that its existence is not even in principle falsifiable?
Now, arguments about GG all too often turn into arguments about philosophy of science. Syntacticians do it too, but it's not very edifying.

But really, you're not even trying to understand at this point. You heard something about "unfalsifiable" and you want to apply it because someone told you movement isn't a God-given proof. That's not what "unfalsifiable" means. Yet when people talk about a process where syntacticians actually try to falsify their theories— by talking about ungrammatical sentences— you don't like it.

Syntax is hard to explain in a few posts. It takes a book, which is why I've written one. Why not read it to see what syntax really is and what it really does? (Believe me, I criticize Chomsky plenty, but also explain what he was trying to do.) Hell, e-mail me and I'll PayPal you the $7 to buy the Kindle edition.

Or read another intro. I wouldn't start with Chomskyans, so I'd recommend McCawley's The Syntactic Phenomena of English or Van Valin's An Introduction to Syntax. Goldberg's Constructions outlines a different framework that departs further from Chomskyan GG.

Re: Syntax random

Post by bradrn »

Tropylium wrote: Thu Jul 02, 2020 2:26 pm If then "on cat the sitting see mat I the" is rejected as not being a sentence, this could likewise mean that it just lacks a referent to a particular relation between the words (which I take you'd agree are known English words). Why should it be a priori ruled out even as a possibility that some strings of words just happen to not be meaningful sentences?
Um… because if you go up to any native English speaker and try to say it to them, they won’t be able to interpret it? To me that seems like reasonable grounds for ruling out that sentence as a possible English sentence.
Tropylium wrote: Indeed just like we could try to coin the word "copybat" — to propose a referent that it applies to — we could also try to develop a referent for your currently-nonsense sentence:
– "the sitting" — already an NP.
– "see mat" — already a VP.
– "on cat" — compare "on point"; could be an expression meaning 'to be catlike', to be coming soon to internet cat memes near you.
– "I the" — compare "Beatles, The", i.e. parseable as a library catalog rendering of "the I". Which in turn could be a psychological or philosophy-of-the-mind term coined to mean something akin to 'ego'.

At this point the sentence will be readable, and only a bit of extra punctuation is needed to highlight this:
"On cat: the sitting; see mat, I, the!" ≈ "Is it catlike that sitting is taking place, and I exhort my ego to see a mat."

This could clearly also be further developed, e.g. by coining a meaning for an NP + imperative + NP construction.
You can attempt to suggest a meaning for it as much as you want, but that won’t make it correct! No native English speaker would ever parse that ‘sentence’ in the way that you have described, which makes those contortions of meaning simply an intellectual exercise without any relevance to the argument. In much the same way, I could declare that from now on the word ‘orange’ means ‘sneeze’ — but that has exactly no practical relevance if no-one uses that word in the way I describe.
Tropylium wrote: It is not the only option, though, that any novel sentence was already part of the language in some abstract sense, so that if it was not predicted, our previous theory of the language's grammar was therefore wrong.
You’re going to need a lot of justification to convince me of this. To me, the definition of a language includes its word order and hence all its possible sentences. If you have two languages which are identical in every way except for word order, I would most definitely consider those to be two different languages (or at least different dialects, if they’re mutually intelligible).
Tropylium wrote: At least sometimes novel sentences must be instances of language change, the invention of some new syntactic device(s) that did not previously exist (such as "on cat" just now above). If this is the case, then there is no real problem in the previous synchronic grammar's failure to predict this novel sentence.
Yes, I agree — but why can’t this be represented as a diachronic change in its syntax over time?
Tropylium wrote: Thu Jul 02, 2020 5:12 pm Meanwhile we still do not seek constraints that theoretically exclude e.g. the existence of fiffen or copybat or a zillion other phonologically possible words and are content to think that these are random gaps.
Because — as I said earlier — people do not coin new words every time they speak. Languages only have a finite number of words, which as far as we can tell seem to have no systematic relationship between form and semantics, so it is inevitable that there will be some random gaps. Now, if people did coin new words productively, then there would indeed be a need to explain why they all mysteriously avoid saying fiffen — but they don’t, so we can just accept that a finite lexicon will have some random gaps in it. (And indeed, on occasion we see a syntactician saying that there are only a finite number of languages, and such-and-such a construction is not attested simply due to a random gap where no current language happens to use it yet.)
Tropylium wrote: It's also interesting that you tell me movement is only a "framework" and it does not need to float my boat. Does this mean that its existence is not even in principle falsifiable?
I don’t think that’s what priscianic meant. Rereading his post where he said that, he said: ‘The background assumptions aren't the point here—you can replace it with whatever floats your boat.’ I think he was just trying to say that, if you disagree with the idea of movement, then his argument there will still be valid with whatever assumptions you prefer — not that movement is invalid.
Tropylium wrote: It's also interesting that you tell me movement is only a "framework" and it does not need to float my boat. Does this mean that its existence is not even in principle falsifiable? That it indeed doesn't actually exist in the real, physical world? That, therefore, nothing is actually explained by an analysis that proposes movement, since movement doesn't even exist? That whenever someone answers "because movement", they haven't actually answered the "why", only named the phenomenon? Do phenomena called movement perhaps actually really occur because of wakalixes?
But this is an interesting argument, so let me explain why I find it incorrect. In English, we see lots of pairs of sentences such as:

I see you → Who do I see?
He sits there → Where does he sit?
The quick brown fox hates the lazy dog → Who does the quick brown fox hate?

Now, there are three interesting things to notice about these pairs. First, there is a regular semantic relationship between the two — the first is a proposition, the second questions the object of the proposition. Secondly, there is a regular syntactic relationship between the two — the first is of the form ‘NP₁ V NP₂’, the second is of the form ‘question-word do NP₁ V’. And thirdly, this transformation can apply to any sentence of that form, and has the same semantics. Under these conditions, I think it is reasonable to analyse this relationship — which I believe we call ‘wh-movement’ — as having some sort of underlying meaning.
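
Schematically, the pattern might be sketched like this (the gap marker __ is just informal notation for where the questioned phrase would sit in the declarative):

Code:

Declarative:  NP₁ V NP₂
Question:     Wh do NP₁ V __

I see you                               → Who do I see __?
He sits there                           → Where does he sit __?
The quick brown fox hates the lazy dog  → Who does the quick brown fox hate __?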
zompist wrote: Thu Jul 02, 2020 6:29 pm
Tropylium wrote: Thu Jul 02, 2020 5:12 pm
priscianic wrote:The point is that sometimes you might expect certain things to be grammatical (for whatever reason), and they aren't—and that naturally leads you to wonder why.
No, I don't mean that there's anything bad with this if you already have ended up with an expectation… The original observation I made is that this seems to be the only common workflow for bringing syntactic theories in better accord with languages as they exist in reality. And I mean not just the question, but also the answer, which invariably is something roughly "here's a constraint that prevents this from being grammatical".
So now what's wrong with GG is that it posits rules, like every other branch of linguistics and like every science?
I’m pretty sure that’s not what Tropylium is saying here. I think Tropylium’s complaint here is that, if a counterexample to a syntactic theory is found, then all too often the response is ‘let’s figure out the laziest and least insightful possible way to explain that’. (I expand on this point below.)

(On the other hand, Tropylium certainly does seem to hold the opinion that rules for syntax are bad — that just isn’t what they’re saying here.)
zompist wrote: Yet when people talk about a process where syntacticians actually try to falsify their theories— by talking about ungrammatical sentences— you don't like it.
Sadly, syntacticians seem to do this all too rarely. Whenever I try to read about a syntactic theory, the impression I get is something like: ‘Here’s a theory. But, oops, this sentence doesn’t fit our theory! What, our theory is false? That would be ridiculous! Let’s add another piece to our theory to explain why this happens!’ Which is fine once or twice, but after enough applications you end up with ridiculously convoluted syntactic trees for even the simplest sentences and it becomes impossible to believe that this is really a good theory.

Or, you can use what I call the ‘ergative approach’, so called because I’ve seen it in almost every paper on ergativity I’ve read so far. If you want to apply this approach, you don’t even need to look for exceptions — just cherry-pick the five languages which do fit your theory and make sure that you don’t even mention the fifty-five languages which don’t fit! It gets very frustrating reading these papers after a while…

Re: Syntax random

Post by zompist »

bradrn wrote: Thu Jul 02, 2020 9:07 pm
Tropylium wrote: Thu Jul 02, 2020 2:26 pm
It's also interesting that you tell me movement is only a "framework" and it does not need to float my boat. Does this mean that its existence is not even in principle falsifiable?
I don’t think that’s what zompist meant.
That was priscianic.
bradrn wrote: Sadly, syntacticians seem to do this all too rarely. Whenever I try to read about a syntactic theory, the impression I get is something like: ‘Here’s a theory. But, oops, this sentence doesn’t fit our theory! What, our theory is false? That would be ridiculous! Let’s add another piece to our theory to explain why this happens!’
Does it bother you when physics does it? When something unexpected comes up, should they throw out quantum mechanics? Should biologists throw out evolution?

A syntactic theory is a huge edifice of ideas; one sentence isn't going to overthrow it. However, one sentence can overthrow part of a syntactic theory. The study of syntax is full of such sentences, in fact! Just like every other science ever, the best response is to modify the theory.

Besides, it's a pretty odd complaint about syntax that it doesn't throw theories out enough. Usually the complaint is just the opposite: that it has too many theories. Heck, you can make this complaint about nothing but Chomskyan theories: wait a decade, and Chomsky himself will throw out his previous theory.
bradrn wrote: ridiculously convoluted syntactic trees for even the simplest sentences
First... do you really think syntacticians don't care about this? If anyone totally agrees with you on this, in fact, it's Noam Chomsky. He hated (and helped to tear down) the generative semanticists' over-elaborate trees in the 1970s. In the 2000s, he did the same to his own X-bar theory, producing Minimalism, which in its earlier years really was simple. Ray Jackendoff has a book, Simpler Syntax, which argues for syntactic trees very close to surface structure. In general the cognitive linguistics school prefers alternative models in place of or in addition to trees.

Second, there are reasons trees keep growing. If you simply apply standard syntactic tests to surface structures, you end up with quite vertical trees to start with. There are good reasons why things like null nodes are added.

When the tree starts getting incomprehensible, that's probably a sign that this one tool is getting over-used. E.g. X-bar theory and Minimalism have the idea of defining arguments by tree structure. To its adherents, it's clever and fascinating. But honestly there are far better ways of handling valence and argument structure. Where I personally want to give syntacticians a kick in the pants, it's because many still have the all-you-have-is-a-hammer problem, and don't even seem aware that many things are more easily handled outside the tree.

Re: Syntax random

Post by bradrn »

zompist wrote: Thu Jul 02, 2020 10:08 pm
bradrn wrote: Thu Jul 02, 2020 9:07 pm
Tropylium wrote: Thu Jul 02, 2020 2:26 pm
It's also interesting that you tell me movement is only a "framework" and it does not need to float my boat. Does this mean that its existence is not even in principle falsifiable?
I don’t think that’s what zompist meant.
That was priscianic.
Oh, sorry! Corrected.
bradrn wrote: Sadly, syntacticians seem to do this all too rarely. Whenever I try to read about a syntactic theory, the impression I get is something like: ‘Here’s a theory. But, oops, this sentence doesn’t fit our theory! What, our theory is false? That would be ridiculous! Let’s add another piece to our theory to explain why this happens!’
zompist wrote: Does it bother you when physics does it? When something unexpected comes up, should they throw out quantum mechanics? Should biologists throw out evolution?
No, because the vast majority of disciplines don’t do this with the same depressing regularity with which this happens in syntax. You mentioned physics, so let’s use that as an example. In the realm of classical mechanics, a few principles — Newton’s laws, the conservation laws, etc. — are sufficient to explain the motion of practically every macroscopic object we see in our daily lives. There is no need to continually add new bits and pieces, since the theory as it stands now is already explanatory.

By contrast, many syntactic theories seem to work in almost exactly the opposite way. When first proposed, they can explain a few key parts of language, but tend to be severely lacking in explaining many more complex structures. Over time, bits and pieces get agglomerated to the theory to explain each of those unexplained constructions. Eventually — if we’re lucky — someone realises that the theory has gotten too complicated to be reasonable and tries to throw it out. Of course, plenty of times the theory lingers on for a good long while before this happens.

(Disclaimer: I am in absolutely no way an expert on syntax, so the previous paragraph could well be completely unsupported by the evidence. Still, that’s the impression I generally get from syntax.)
zompist wrote: A syntactic theory is a huge edifice of ideas; one sentence isn't going to overthrow it. However, one sentence can overthrow part of a syntactic theory. The study of syntax is full of such sentences, in fact! Just like every other science ever, the best response is to modify the theory.
And this is another place in which syntax differs from, say, physics. In physics, if you see an observation which contradicts your theory, you usually try to figure out a new theory. (See: development of relativity, development of quantum mechanics etc.) One generally does not try to desperately rescue the current theory by adding bits and pieces; where this has been tried, it generally didn’t work. (See: the aether theory, geosynclines, the current problems with dark matter etc.) On the other hand, syntax seems to work the opposite way: when a syntactic theory doesn’t work, people generally try to desperately rescue it by adding new bits to it, and only throw it out once it’s clearly gotten too complicated.
bradrn wrote: ridiculously convoluted syntactic trees for even the simplest sentences
zompist wrote: First... do you really think syntacticians don't care about this? If anyone totally agrees with you on this, in fact, it's Noam Chomsky. He hated (and helped to tear down) the generative semanticists' over-elaborate trees in the 1970s. In the 2000s, he did the same to his own X-bar theory, producing Minimalism, which in its earlier years really was simple. Ray Jackendoff has a book, Simpler Syntax, which argues for syntactic trees very close to surface structure. In general the cognitive linguistics school prefers alternative models in place of or in addition to trees.



zompist wrote: When the tree starts getting incomprehensible, that's probably a sign that this one tool is getting over-used. E.g. X-bar theory and Minimalism have the idea of defining arguments by tree structure. To its adherents, it's clever and fascinating. But honestly there are far better ways of handling valence and argument structure. Where I personally want to give syntacticians a kick in the pants, it's because many still have the all-you-have-is-a-hammer problem, and don't even seem aware that many things are more easily handled outside the tree.
I’m glad to know I’m not the only one who thinks like this. (Though honestly I’m a bit surprised that Chomsky and I agree on anything!)
zompist wrote: Second, there are reasons trees keep growing. If you simply apply standard syntactic tests to surface structures, you end up with quite vertical trees to start with. There are good reasons why things like null nodes are added.
Could you give an example? I feel that might help me understand this. (Possibly this was covered in your syntax book, but I read it a while ago and don’t remember terribly much from it.)

Re: Syntax random

Post by zompist »

bradrn wrote: Fri Jul 03, 2020 12:20 am No, because the vast majority of disciplines don’t do this with the same depressing regularity with which this happens in syntax. You mentioned physics, so let’s use that as an example. In the realm of classical mechanics, a few principles — Newton’s laws, the conservation laws, etc. — are sufficient to explain the motion of practically every macroscopic object we see in our daily lives. There is no need to continually add new bits and pieces, since the theory as it stands now is already explanatory.
That is not a good summary of physics! For one thing, physics is enormously complex. For another, new things come up all the time. Theories are modified all the time.

Don't compare GG to Newton's laws. GG has not been around for 500 years, but 50.

Do compare it to string theory. String theory doesn't really work well, comes in a bunch of variants, is devilishly hard to test, and some people are trying out entirely different ideas. That's what a new science looks like.
bradrn wrote: And this is another place in which syntax differs from, say, physics. In physics, if you see an observation which contradicts your theory, you usually try to figure out a new theory. (See: development of relativity, development of quantum mechanics etc.)
Um, no. Quantum mechanics has been around for a hundred years. There have been plenty of new observations, and plenty of changes in the theory. And major challenges to it from physicists, too.

No scientist leaps to "figure out a new theory" the moment some new data comes in. The times a whole theory gets thrown out are rare events that get written up in the newspapers. The everyday challenges don't get that sort of publicity.

zompist wrote: Second, there are reasons trees keep growing. If you simply apply standard syntactic tests to surface structures, you end up with quite vertical trees to start with. There are good reasons why things like null nodes are added.
bradrn wrote: Could you give an example? I feel that might help me understand this. (Possibly this was covered in your syntax book, but I read it a while ago and don’t remember terribly much from it.)
I'm thinking of things like the tests for constituency. If you apply them to an NP like "these three old men with their hats askew on their heads", you get a pretty vertical tree even of surface structure.
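
For instance, one plausible surface bracketing (details negotiable, and leaving the internals of the with-phrase unanalysed) is already nested several layers deep:

Code:

[NP these
  [three
    [old
      [men
        [PP with their hats askew on their heads]]]]]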

Now, in some theories there's even more structure than that, and it looks weird and intimidating to an outsider. But that doesn't in itself mean it's wrong. To use a physics analogy again, look at the Schrödinger equation... that sure as heck isn't noob-friendly. But it's not like physicists just added shit in to make it more difficult.

(I would argue about lots of specifics of some syntacticians' trees. But just complaining about the notation isn't some sort of proof that GG is bad.)

Re: Syntax random

Post by bradrn »

zompist wrote: Fri Jul 03, 2020 3:02 am
bradrn wrote: Fri Jul 03, 2020 12:20 am No, because the vast majority of disciplines don’t do this with the same depressing regularity with which this happens in syntax. You mentioned physics, so let’s use that as an example. In the realm of classical mechanics, a few principles — Newton’s laws, the conservation laws, etc. — are sufficient to explain the motion of practically every macroscopic object we see in our daily lives. There is no need to continually add new bits and pieces, since the theory as it stands now is already explanatory.
That is not a good summary of physics! For one thing, physics is enormously complex. For another, new things come up all the time. Theories are modified all the time.
Maybe it isn’t a good summary of physics, but it certainly seems to me to be a good summary of classical mechanics… when I learnt this stuff last year (I’m in my second year of a physics major now), that’s the impression I got.
zompist wrote: Don't compare GG to Newton's laws. GG has not been around for 500 years, but 50.

Do compare it to string theory. String theory doesn't really work well, comes in a bunch of variants, is devilishly hard to test, and some people are trying out entirely different ideas. That's what a new science looks like.
You’re completely right on this — I forgot about the relative times of things, and that invalidates my analogy. String theory does indeed correspond better to GG than classical mechanics does.
bradrn wrote: And this is another place in which syntax differs from, say, physics. In physics, if you see an observation which contradicts your theory, you usually try to figure out a new theory. (See: development of relativity, development of quantum mechanics etc.)
zompist wrote: Um, no. Quantum mechanics has been around for a hundred years. There have been plenty of new observations, and plenty of changes in the theory. And major challenges to it from physicists, too.
Are you sure? It seems to me that ever since the completion of the Standard Model, quantum mechanics has remained essentially static in that very little new confirmed stuff has been added to it. (Of course, it did take quite a while for it to get there, which I suppose gives another analogy to GG!)
zompist wrote: Second, there are reasons trees keep growing. If you simply apply standard syntactic tests to surface structures, you end up with quite vertical trees to start with. There are good reasons why things like null nodes are added.
bradrn wrote: Could you give an example? I feel that might help me understand this. (Possibly this was covered in your syntax book, but I read it a while ago and don’t remember terribly much from it.)
zompist wrote: I'm thinking of things like the tests for constituency. If you apply them to an NP like "these three old men with their hats askew on their heads", you get a pretty vertical tree even of surface structure.
Well, yes, that’s what I would expect for such a complicated NP. I suppose I’m talking more about sentences like ‘He studies linguistics at the university’, which seems pretty simple but ends up with a tree like this:

[image: X-bar tree for ‘He studies linguistics at the university’]

When it seems to me that there is no obvious reason to prefer it to this:
[attachment tree.png: a flatter tree of the same sentence]
zompist wrote: Now, in some theories there's even more structure than that, and it looks weird and intimidating to an outsider. But that doesn't in itself mean it's wrong. … (I would argue about lots of specifics of such trees. But just complaining about the notation isn't some sort of proof that GG is bad.)
And I’m not arguing that it’s wrong — I don’t even mind the notation, really! I’m more concerned that, even just reading quickly through your syntax book, I can already find that syntactic theories appear to have severe problems explaining such basic parts of language as, say, pronouns. (Though admittedly it may only be X-bar theory which has trouble with that.) And, while many of these theories do work surprisingly well for English, it can be fairly amusing to watch syntacticians trying to stop them from falling down in non-English languages. (For instance, one thing that’s puzzled me before: how can you have a VP in a VSO language? Presumably they have some solution, although I wouldn’t know what it is.)
zompist wrote: To use a physics analogy again, look at the Schrödinger equation... that sure as heck isn't noob-friendly. But it's not like physicists just added shit in to make it more difficult.
A counterpoint: I learned about the Schrödinger equation this semester (or at least one form of it), and I was surprised to find that it wasn’t actually all that difficult. (You do need a decent knowledge of calculus to use it, but that’s no different to anything else in physics.)

Re: Syntax random

Post by zompist »

bradrn wrote: Fri Jul 03, 2020 4:03 am Well, yes, that’s what I would expect for such a complicated NP. I suppose I’m talking more about sentences like ‘He studies linguistics at the university’, which seems pretty simple but ends up with a tree like this:

[image: the X-bar tree from the quoted post]
There are only two real differences. One is minor: the PP in the second tree is a sister of the Comp, which I'd argue is just wrong. :)

The other is that the first tree follows the rules of X-bar syntax. There are justifications for the extra nodes, but since Chomsky got rid of them in Minimalism, it's pretty moot now.
bradrn wrote: (For instance, one thing that’s puzzled me before: how can you have a VP in a VSO language? Presumably they have some solution, although I wouldn’t know what it is.)
Honestly I would need a lot of convincing myself to accept a VO node in a VSO language. I'd want to see solid syntactic evidence within that language.
bradrn wrote: A counterpoint: I learned about the Schrödinger equation this semester (or at least one form of it), and I was surprised to find that it wasn’t actually all that difficult. (You do need a decent knowledge of calculus to use it, but that’s no different to anything else in physics.)
It's impenetrable to me, but I don't remember differential equations at all, and my eyes glaze over when I hear "eigenvector."

Re: Syntax random

Post by cedh »

Tropylium wrote: Thu Jul 02, 2020 5:12 pm
priscianic wrote:To me, questions like "why is the world one way, and not the other?" are natural and interesting questions that curious people ask, the kinds of questions that drive scientific inquiry.
"Why" is always a good question, but it is fundamentally a question about diachrony. "Everything is the way it is because it got that way."

I do not mean to go absolutely anti-Saussure; synchrony is separable from diachrony in principle. But my starting point would be that the synchronic state of a system can only be described, and there cannot exist any predictive theories that are "purely synchronic". Causality does not exist without time.

Causally speaking, if some construction like "reflexive pronoun in a transitive subject position to indicate a reflexive action" is never attested, as a logical necessity either (1) for some reason there are no possible or no sufficiently probable diachronic ways for such a construction to develop; or (2) the same holds for all predecessors that could plausibly give rise to such a construction.

Sometimes (1) is because the target is logically incoherent (it is not possible to construct a house with four walls but five corners even if you try). It seems awfully bold to think that this should be always or by default the case though.
I think this is (one of) the main reason(s) for the controversy on the previous page, and it seems to me that it's based on a misunderstanding of other people's starting points.

Like Tropylium, I tend to look at things with the view that the most interesting question to ask is, "How did things get the way they are now?", i.e. with a diachronic focus. The synchronic question "Why are things the way they are?" is valid and interesting too, but for me it is secondary. Part of this is just personal preference, but another part is epistemological: As Tropylium says, only for some cases can the outcome of a diachronic development be argued to have been logically necessary, while in many other cases it is simply based on frequency, chance, and contingent circumstances.

If you are mainly interested in synchrony, then of course you tend to look for logical arguments before (or instead of) considering the historical context. This approach has its merits, but people may end up overapplying it. And many syntacticians do overapply it, or at least often seem to completely forget that diachronic factors have played a major role in shaping today's languages.

(There also seems to be some kind of correlation between preferring "why?" or "how?" questions, and having a worldview based on the natural sciences or on the humanities: If you ask "how?" in physics, a science with timeless rules, you can only describe what happens, but if you ask "why?", you can often find an explanation with a solid logic behind it. On the other hand, if you ask "why?" in the humanities, in a cultural environment shaped by history, you can only speculate about the intentions of an author or a character, but if you ask "how?", you can trace how different diachronic developments or artistic techniques play together to produce the observed outcome, and you can analyze how each of these elements contributes to the result.)

(And in my view, linguistics has more elements of the humanities than of the natural sciences.)

Re: Syntax random

Post by Richard W »

Tropylium wrote: Wed Jul 01, 2020 3:37 pm The difference I'm pointing at is that syntax starts with enormously permissive theories and seeks to falsify parts of them. Lexicology, phonology, semantics etc. generally don't: they start out by assuming nothing unwarranted and only add items to the model when they find evidence that they exist.
Lexicographers tend to assume fully populated inflection tables. When one tries to verify what is in a dictionary, one runs into all sorts of problems at that level. The Latin community at Wiktionary keeps hitting problems such as whether particular case forms actually existed. In the case of nominative plurals of neuter 3rd declension nouns, the issue can be quite serious. There are some striking absences or attestation gaps - did _mensa_ 'table' have a genitive plural in classical Latin or not? Perfect tense forms of Latin compound verbs are another area where dictionary writers seem to have just invented things. Closer to the present day, I get the impression that some Finnish words' case forms do not rest on very sound ground.
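
Schematically, the sort of gap in question looks like this (whether a given form is actually attested in the classical corpus is exactly the problem):

Code:

mensa 'table', 1st declension (partial paradigm)

        sg        pl
nom.    mensa     mensae
gen.    mensae    mensarum?  ← supplied by the dictionary's
                               paradigm, but attested?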

Re: Syntax random

Post by bradrn »

zompist wrote: Fri Jul 03, 2020 4:43 am
bradrn wrote: Fri Jul 03, 2020 4:03 am Well, yes, that’s what I would expect for such a complicated NP. I suppose I’m talking more about sentences like ‘He studies linguistics at the university’, which seems pretty simple but ends up with a tree like this:
There are only two real differences. One is minor: the PP in the second tree is a sister of the Comp, which I'd argue is just wrong. :)
If that’s wrong, I’d be interested to know what you think the right tree is for that sentence.
zompist wrote: The other is that the first tree follows the rules of X-bar syntax. There are justifications for the extra nodes, but since Chomsky got rid of them in Minimalism, it's pretty moot now.
So Minimalism made X-bar theory obsolete? I had thought that Minimalism and X-bar theory are still both widely used today.
bradrn wrote: (For instance, one thing that’s puzzled me before: how can you have a VP in a VSO language? Presumably they have some solution, although I wouldn’t know what it is.)
zompist wrote: Honestly I would need a lot of convincing myself to accept a VO node in a VSO language. I'd want to see solid syntactic evidence within that language.
Ah, right — I had thought that GG assumes the presence of a VP in any language, so thanks for correcting this!
bradrn wrote: A counterpoint: I learned about the Schrödinger equation this semester (or at least one form of it), and I was surprised to find that it wasn’t actually all that difficult. (You do need a decent knowledge of calculus to use it, but that’s no different to anything else in physics.)
zompist wrote: It's impenetrable to me, but I don't remember differential equations at all, and my eyes glaze over when I hear "eigenvector."
Well, that would certainly make it trickier to understand…

Re: Syntax random

Post by Moose-tache »

I've been reading a lot of GG lately, and I've come to an epiphany that even Noam (pbuh) himself couldn't see!

Sure, on the surface sentences in many different languages may appear to be quite diverse, but that's just superficial. The nouns, the verbs, all the stuff people think about when they make sentences, that's a load of crap. I mean, who are you going to trust when parsing language, real human brains, or the voice deep down telling you that you're a genius? Good, we agree.

So I was sitting in my oak paneled study, stroking my brown tabby Mittens when it hit me. Every sentence in every language is actually a brown tabby cat with green eyes named Mittens. Think about it. Did that convince you? Good, I knew it would. Let's explore this idea a little more, with examples from the language that most closely matches default human consciousness: the prestige dialect of English in eastern Massachusetts between 1955 and 1985.

"I hit the ball."

Now, we start with Mittens, because obviously every sentence is deep down a brown tabby named Mittens (I'm glad we all agree about that). Then we simply apply however many transformations we need to in order to get to "I hit the ball." First of all, the sentence is present tense and Mittens, being a cat, has no particular tense marking. So we need a node that says [pres]. Then our next task is to get rid of the feet, because even though Mittens's little paws are adorable, they are not present in the sentence, so we add another node marked [-feet]. And so on, and so on, until we get to "I hit the ball."

Now, I know what you're thinking. You're thinking "Yeah Lady, obviously this is all correct and perfect because it fits your personal intuition, duh. But I have friends who are constructivists, and sometimes they ask me silly questions like "But prove it, though" and "Yeah but what if not that?". One time, one of those fools asked me why I put "not past tense" on a sentence tree for a language that has no tense marking. I mean seriously, what am I supposed to say to these idiots?" Well, I'm glad you asked.

All you have to do is show them this diagram of a sentence. Once they see that everything fits together exactly the way I've described it, they will have no choice but to understand the truth! Long live GG! Hail Satan!

Re: Syntax random

Post by priscianic »

bradrn wrote: Fri Jul 03, 2020 4:03 am I suppose I’m talking more about sentences like ‘He studies linguistics at the university’, which seems pretty simple but ends up with a tree like this:

[image: the X-bar tree from the quoted post]
You're right that, just looking at this one sentence "He studies linguistics at the university", there's not really much motivation for positing an I(nflection) node here (you might also see it called T, for "tense"—I'll be using T here because it's more readable in sans-serif). The motivation comes from looking at other cases, both within English and without, where there is stronger evidence for the necessity of a separate T node in the structure. You then construct a set of rules for syntactic structure-building, as well as rules for semantic interpretation for things like tense (because yes, of course, these syntactic structures need to be mapped to semantic interpretations!). So you've "complicated" the theory at this point, but you're satisfied because it seemed necessary at the time.

Then you go back to the original, "simple" sentence. There's no overt T in "he studies linguistics at the university" screaming out at you. So what are your options at this point?

One option is to further add to your syntactic rules, and come up with a principled theory of why certain sentences lack T, but other sentences need T. You'd also need to further add to your semantic rules, to come up with a principled way to map a T node to tense interpretation in one set of structures, and present and past affixes on the lexical verb to tense interpretation in another set of structures.

The other option is to not do all that, and posit a null T node (actually, some people have verbal inflection, like present 3sg -s and past -ed sitting in T, which "lowers" onto the verb root in these "simple" sentences; they still have a T node in these sentences though). This means you can have a uniform theory of both the more syntactically complex sentences as well as the "simple" sentences, as well as uniform ways of mapping syntactic structures to semantic interpretations. Whether you think this is worth the "cost" of positing a null node in the syntax (or of positing a "lowering" operation) is an aesthetic choice at this point, from my perspective. There are plenty of syntactic theories out there on the market that don't make the same choices broadly-Chomskyan generative grammar does.
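
A crude sketch of that second option (simplifying a lot, and using the "lowering" variant with overt inflection in T):

Code:

      TP
      /\
    he  T'
        /\
       T    VP
      -s    /\
           V      NP
       study      linguistics

(the affix -s in T "lowers" onto the verb root, giving "studies"; on
the null-T variant, T is instead a silent node ∅ in the same position)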

(This has all been very abstract and high-level—if you want more detail, there are plenty of introductory textbooks that try to motivate this kind of thing, and I don't want to spend my time rehashing stuff like that that's already out there. The motivation might be in a dedicated section on T, or it might be split among sections on finiteness, syntactic selection, head movement, potentially maybe some other things. I think Andrew Carnie's textbook Syntax: A Generative Introduction is quite clear and lucid (libgennable), and MITOCW has two resources for undergraduate introductory syntax classes with lecture notes that might also be useful—one from 2003 and one from 2015.)
bradrn wrote: Fri Jul 03, 2020 4:03 am
zompist wrote: To use a physics analogy again, look at the Schrödinger equation... that sure as heck isn't noob-friendly. But it's not like physicists just added shit in to make it more difficult.
A counterpoint: I learned about the Schrödinger equation this semester (or at least one form of it), and I was surprised to find that it wasn’t actually all that difficult. (You do need a decent knowledge of calculus to use it, but that’s no different to anything else in physics.)
Similarly, the "(over)complicated trees" (or matrices in HPSG or LFG, or whatever equivalents in other more formal models of syntax) aren't actually all that difficult to understand if you have a decent knowledge of the basics and guiding principles. Undergrads in intro classes seem to end up understanding this stuff (even the "complicated" tree you had an image of, which is the kind of structure I would expect someone to come out of an introductory syntax class being able to understand and draw). Of course, you need some buy-in and some effort put in to understand the basics and guiding principles (though certainly much less buy-in than learning calculus). I don't think "it looks complicated, and I think I should understand it, but I don't, therefore it must be unnecessary and wrong" is a good argument.
bradrn wrote: Fri Jul 03, 2020 7:39 am
zompist wrote: Fri Jul 03, 2020 4:43 am The other is that the first tree follows the rules of X-bar syntax. There are justifications for the extra nodes, but since Chomsky got rid of them in Minimalism, it's pretty moot now.
So Minimalism made X-bar theory obsolete? I had thought that Minimalism and X-bar theory are still both widely used today.
The literature is (understandably) confusing about this point. Most people don't believe "X-bar theory proper", in the sense that we don't believe there's some God-given X-bar schema that all phrases must follow, and we don't believe there's some separate "X-bar module" in the grammar that regulates this stuff. For instance, certainly we don't believe that something like "sleep" is a V dominated by V' dominated by VP. It's just one node (unless you wanna do some kind of lexical decomposition), which you can either call V or VP, it doesn't really matter too much.

However, we do use the terminology and tree-drawing conventions from X-bar because they're useful (and we're used to them, quite frankly). We use X-bar terms like "specifier", "head", "complement", and "adjunct", for instance, but they're meant just as ways to refer to different nodes in the tree. Similarly, we might draw trees with Xs, X's, and XPs, but that's again just a notational convention, which people don't always follow. X refers to heads (which still is a primitive notion), XP refers to the "maximal projection" of a head (which still is a useful notion to have, but people try to derive it from other principles—there's a whole literature on "labeling" and the "labeling algorithm" that thinks about this kind of stuff; I'm not too familiar with it though), and X' is just whatever comes in between X and XP. Oftentimes you might see people just not drawing X' levels (though this can be hard to do nicely in LaTeX sometimes); here's something pulled from Google to illustrate what I mean:

[image: a tree drawn without X' levels]
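
Roughly, the contrast is between drawing every bar level and drawing only what branches; a sketch (mine, not the original image):

Code:

Full X-bar:

     VP
     |
     V'
     /\
    V   NP
  eat   |
        N'
        |
        N
      apples

The same tree with the unbranching X' levels left out:

    VP
    /\
   V    NP
 eat    apples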
bradrn wrote: Fri Jul 03, 2020 4:03 am
(For instance, one thing that’s puzzled me before: how can you have a VP in a VSO language? Presumably they have some solution, although I wouldn’t know what it is.)
zompist wrote: Honestly I would need a lot of convincing myself to accept a VO node in a VSO language. I'd want to see solid syntactic evidence within that language.
bradrn wrote: Ah, right — I had thought that GG assumes the presence of a VP in any language, so thanks for correcting this!
Most people, at least within broadly-Chomskyan syntax, assume that you have VP constituents in every language—that is, verb and object form a constituent to the exclusion of the subject. But obviously, VSO can't have a VP constituent on the surface.

However, we independently need to allow for the possibility of displacement/movement dependencies. So, all else being equal, we predict that it should be possible to, for instance, have the verb move out of VP, leaving us with a surface VSO order. And indeed, people have argued that that kind of derivation is exactly how some languages get to be VSO (e.g. Irish, McCloskey 1996 (scihubbable), among others; there's also a lucid introduction to this idea in the Carnie textbook I mentioned earlier). And there are other ways of getting to VSO—Clemens and Polinsky (2017) is a nice overview, I find.

Of course, just because we can derive VSO orders with VPs doesn't mean we should. And of course, we'd like to look for independent evidence of VPs in the language, with the same kinds of diagnostics we use for other languages. Are there things like VP proforms (e.g. do so), and can they be substituted for V and O to the exclusion of S? Assuming that binding is sensitive to hierarchical structure (and I think there is plenty of evidence that it is), can we show that S is structurally higher than O (as would be predicted by VO being a constituent to the exclusion of S)? Is there evidence of movement dependencies that "break up" the VP? etc.

Re: Syntax random

Post by akam chinjir »

(I see that I'm cross-posting with priscianic, fingers crossed...)
bradrn wrote: Fri Jul 03, 2020 7:39 am So Minimalism made X-bar theory obsolete? I had thought that Minimalism and X-bar theory are still both widely used today.
One central X-bar idea is still widely adopted, namely that you can do a fair bit of syntax without distinguishing between word classes (so you say XP rather than NP or VP or whatever).

X-bar theory also has the requirement that an X (a terminal node) has to be a daughter of an X' (X-bar), and an X' has to be a daughter of either another X' or of an XP. In cases where, say, a bare noun functions as an NP (like English mass nouns can), you get a bunch of unbranching nodes, like in your first diagram. Minimalists tend not to want unbranching nodes, so in your diagrams they might count "linguistics" as both N and NP at the same time. (For more details about this sort of thing, you could investigate "bare phrase structure.")
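
Schematically (my sketch):

Code:

X-bar, with unbranching nodes:

     NP
     |
     N'
     |
     N
 linguistics

Bare phrase structure: a single node, "linguistics",
which counts as both N and NP at the same time.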
bradrn wrote: Fri Jul 03, 2020 4:03 am (For instance, one thing that’s puzzled me before: how can you have a VP in a VSO language? Presumably they have some solution, although I wouldn’t know what it is.)
Fwiw, it's pretty easy to generate VSO order in broadly Chomskyan terms.

Start by combining the verb with the object:

Code:

 VP
 /\
O  V
Then you add the subject:

Code:

 VP
 /\
S  VP
   /\
  O  V
Then let's say you add some TAM, I'll just write it as T (maybe it represents a past tense):

Code:

 TP
 /\
T VP
  /\
 S  VP
    /\
   O  V
Now look at this:

Code:

   TP
  /  \
V₁+T  VP
      /\
     S  VP
        /\
       O  t₁
That's supposed to represent the result of taking the verb from the VP, and moving it up to combine it with T. This is a very common way for Chomskyans to think that a verb combines with a suffix.

Er, and we're done, VSO.

SVO is actually a bit more complicated on this view. One way to get it is to get the VSO order, and then move the subject:

Code:

  TP
 /  \
S₂  TP
   /  \
 V₁+T  VP
       /\
      t₂ VP
         /\
        O  t₁
(This is roughly how things are usually supposed to go in French, for example.)

Okay, complications.

First, nothing in there is even a little bit an argument that this is the right way to analyse things. For language-internal evidence of a V+O constituent, you'd look for things like light verbs (like English "take") whose meaning varies with the object in a way that it doesn't vary with the subject, or you'd check to see whether it has lots of V+O idioms but few transitive S+V idioms, things like that.

Second, I'm simplifying the structure in a bunch of ways. Here's a slightly less simplified variant:

Code:

   TP
  /  \
V₁+T  vP
      /\
     S  vP
        /\
       v  VP
          /\
         O  t₁
I've added a "v" ("little-v") node. (Sometimes this is instead called Voice, sometimes people include both Voice and v.) This has a bunch of roles, one of which is to introduce an agent argument. It's often thought to be present even when it's not pronounced.

Third, I've been talking about subject and object, but these aren't primitives in Chomskyan syntax---arguably they don't have any theoretical significance at all. It might be fun to talk about this in more detail another time, but for now I'll just say you can get a long way with the assumption that an argument that's combined directly with the verb gets a patient or theme role, whereas an argument that enters the structure higher up gets an agent or experiencer role; and if an argument has to move higher (e.g., to join with T), it'll be the highest argument that moves.

Fourth, I've implicitly assumed that you can read linear order directly off of the tree representation. This is a complicated issue, though partly I've just cheated by putting things together so that I'll get the right linear order. The sneakiest bit was starting with apparent O-V order rather than V-O order, though that doesn't affect the VSO issue at all.

Fifth, movement. Chomskyans talk about movement, and I've talked about movement, but this is pretty clearly a metaphor, and it may not be a very helpful one. What's important here is that, supposedly, certain things are pronounced in a different position from the one in which they first enter the structure. Like, in the VSO derivation, the verb first combines with the object, but it's not pronounced with the object, it's pronounced up with T. (Incidentally, that sort of movement, or whatever, is called head movement, and it's distinct from both A movement and A-bar movement.)

Finally, English. This model has more trouble with English than with either VSO or French. (It's basically incorrect to say that all this stuff is designed to make the analysis of English come out easy.) The subject has to move above T, like in French. But the verb only moves as high as little-v, not all the way up to T; and T has to somehow lower to meet it there. (Or something. The sort of data that's important here is the contrast between "Je mange souvent des pommes" and "I often eat apples," with the different placement of the adverb. Er, I hope I've got the French right.)
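
Schematically (glossing over plenty):

Code:

French:   Je [T mange₁] souvent [VP t₁ des pommes]
          (V raises to T, so the verb ends up left of the adverb)

English:  I often [VP eat+T apples]
          (V stays low and T lowers onto it, so the adverb
           precedes the verb)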

I hope that made some sense!

Edit: another complication that's maybe worth mentioning: this is one way to derive VSO order, not the only way; and even if you like the general picture, that doesn't mean it's obvious which sort of derivation is correct (or that all VSO orders are derived in the same way).

Re: Syntax random

Post by zompist »

bradrn wrote: Fri Jul 03, 2020 7:39 am
zompist wrote: Fri Jul 03, 2020 4:43 am One is minor: the PP in the second tree is a sister of the Comp, which I'd argue is just wrong. :)
If that’s wrong, I’d be interested to know what you think the right tree is for that sentence.
I'd put the PP within the VP. The main evidence is from verbal anaphors:

Max studied linguistics at the university, and Sam did so too.

I understand this as saying that Sam studied linguistics at the university, not just that he studied linguistics.

Sam always wanted to study linguistics at the university, and now he could.

Again, what Sam could do was "study linguistics at the university".

Evidence from conjunctions is ambiguous:

Max studied linguistics at the university and rode bicycles at the gym.
Max studied linguistics and Eunice taught biology at the university.


But the latter can be explained with Gapping.
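
So on that evidence the PP attaches inside the VP; schematically (leaving open exactly how the VP branches internally):

Code:

Max [VP studied [NP linguistics] [PP at the university]]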

Re: Syntax random

Post by bradrn »

While investigating control constructions for my next ergativity post, I found what appears to be a contradiction. In Andrew Radford’s book Minimalist Syntax: Exploring the structure of English, the subordinate clause in sentences such as ‘We would like [you to stay]’ is analysed as an infinitive clause with an overt subject. (Control in sentences such as ‘Weᵢ would like [PROᵢ to stay]’ can then be analysed as having the overt subject replaced with a null subject.) However, Amy Deal’s Ergativity seems to analyse similar sentences as having a null subject, with the apparent subject of the infinitive being an object of the main clause, e.g. ‘Father told motherᵢ PROᵢ to return’. To me, this seems like a contradiction. Which is correct — are these sorts of sentences usually analysed as having an overt subject as Radford says, or a null subject as Deal says? (To me, the latter seems more likely, since the controller gets accusative case, e.g. ‘Father told him to return’, but then that would give the rather puzzling conclusion that English allows S or O but not A to act as a controller.)

Re: Syntax random

Post by priscianic »

bradrn wrote: Sun Jul 12, 2020 11:39 pm While investigating control constructions for my next ergativity post, I found what appears to be a contradiction. In Andrew Radford’s book Minimalist Syntax: Exploring the structure of English, the subordinate clause in sentences such as ‘We would like [you to stay]’ is analysed as an infinitive clause with an overt subject. (Control in sentences such as ‘Weᵢ would like [PROᵢ to stay]’ can then be analysed as having the overt subject replaced with a null subject.) However, Amy Deal’s Ergativity seems to analyse similar sentences as having a null subject, with the apparent subject of the infinitive being an object of the main clause, e.g. ‘Father told motherᵢ PROᵢ to return’. To me, this seems like a contradiction. Which is correct — are these sorts of sentences usually analysed as having an overt subject as Radford says, or a null subject as Deal says? (To me, the latter seems more likely, since the controller gets accusative case, e.g. ‘Father told him to return’, but then that would give the rather puzzling conclusion that English allows S or O but not A to act as a controller.)
Both—it depends on the embedding predicate. You have two different structures here that look different on the surface (hey, another thing about syntax that's not about overt linear word order!).

In we would like [you to stay], you have what's known as "exceptional case marking" (ECM), where the embedded subject (which really does start out its life in the embedded clause) gets "exceptionally" marked as accusative (e.g. we would like him to stay) across a clause boundary. You'll also see this kind of construction called "raising to object", as the embedded subject seems to actually end up in the matrix clause—for instance, you can have matrix adverbials follow the embedded subject, in sentences like we would like youᵢ with all our heart [ tᵢ to stay]. The adverbial phrase with all our heart is modifying the matrix VP, telling us something about the degree of intensity with which the speaker would like the addressee to stay.

The crucial point, in any case, is that the embedded subject originates in the embedded clause. You can show this by the fact that you can preserve idiom meanings, and you also can get expletive subjects:
  • We would like [the shit to hit the fan].
  • We would like [it to rain].
  • We would like [there to be a party tomorrow].
In contrast, with father told him to return, the standard analysis is as Deal has it: you have "object control", where a matrix object is controlling PRO: father told motherᵢ [PROᵢ to return]. The crucial difference here is that mother is not and never was inside the embedded clause. So you shouldn't be able to get idiomatic readings of sentential idioms in the embedded clause, and you shouldn't be able to get expletive subjects. And that's exactly what happens:
  • *Father told the shitᵢ [PROᵢ to hit the fan].
  • *Father told itᵢ [PROᵢ to rain].
  • *Father told thereᵢ [PROᵢ to be a party tomorrow].
In order to account for this contrast, we need two different structures. In one, the matrix object is somehow "directly linked" to the embedded clause subject position; this is the insight that the ECM/raising to object analyses capture, with the matrix object starting out in the embedded clause and then getting assigned accusative across the clause boundary (ECM) or moving into the matrix clause (raising to object). In the other, the matrix object is not directly linked to the embedded clause subject position; this is the insight that the object control analysis captures, with the matrix object and PRO only indirectly linked by binding.
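
If it helps to see the contrast as something mechanically checkable, here's a toy sketch in Python. The lexicon entries and the "ecm"/"control" labels are invented for illustration; this isn't any real parser's API, just the two analyses restated as a test:

    # Toy lexicon: which structure does each embedding verb build?
    # (Illustrative assumptions, not entries from any real grammar.)
    EMBEDDING_VERBS = {
        "like": "ecm",      # we would like [you to stay]
        "tell": "control",  # father told him [PRO to return]
    }

    EXPLETIVES = {"it", "there"}

    def postverbal_np_ok(verb, np):
        """Can np surface after verb when np is an expletive?"""
        if np not in EXPLETIVES:
            return True  # contentful NPs are fine under either structure
        # ECM: the NP is really the embedded subject, so it only has to
        # satisfy the embedded clause. Control: the NP is a matrix object
        # that needs a theta role, which an expletive can't bear.
        return EMBEDDING_VERBS[verb] == "ecm"

    print(postverbal_np_ok("like", "there"))  # True:  we would like there to be a party
    print(postverbal_np_ok("tell", "there"))  # False: *father told there to be a party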

FYI, English does allow A to be a controller, e.g. with promise: Joannaᵢ promised me [PROᵢ to go grocery shopping tomorrow].

(A side note: Amy Rose Deal's first name is Amy Rose—she doesn't go by "Amy Deal". In citations, you'll see "Deal, Amy Rose".)
bradrn
Posts: 6257
Joined: Fri Oct 19, 2018 1:25 am

Re: Syntax random

Post by bradrn »

priscianic wrote: Mon Jul 13, 2020 12:03 am In we would like [you to stay], you have what's known as "exceptional case marking" (ECM), where the embedded subject (which really does start out its life in the embedded clause) gets "exceptionally" marked as accusative (e.g. we would like him to stay) across a clause boundary. You'll also see this kind of construction called "raising to object", as the embedded subject seems to actually end up in the matrix clause—for instance, you can have matrix adverbials follow the embedded subject, in sentences like we would like youᵢ with all our heart [youᵢ to stay], where the second youᵢ is the unpronounced position the subject moved from. The adverbial phrase with all our heart is modifying the matrix VP, telling us something about the degree of intensity with which the speaker would like the addressee to stay.
So this is a raising construction rather than a control construction? That makes more sense — thanks for clarifying! (I was actually looking at the difference between raising and control earlier today; I feel a bit stupid now for not realising this myself, although perhaps I was misled by the fact that Radford used this sentence when he was discussing control.)

However, I’m still a bit confused: Wikipedia lists the use of an expletive there as allowable for a raising-to-object predicate but not a control predicate, yet *we would like there to stay is ungrammatical. Why is this?
In order to account for this contrast, we need two different structures. In one, the matrix object is somehow "directly linked" to the embedded clause subject position; this is the insight that the ECM/raising to object analyses capture, with the matrix object starting out in the embedded clause and then getting assigned accusative across the clause boundary (ECM) or moving into the matrix clause (raising to object). In the other, the matrix object is not directly linked to the embedded clause subject position; this is the insight that the object control analysis captures, with the matrix object and PRO only indirectly linked by binding.
As I mentioned, I was trying to understand the difference between raising and control, and I sort of managed to figure it out myself based on what I could find, but I think this must be the best explanation of it I’ve seen so far! I feel that I understand it a lot better now.
FYI, English does allow A to be a controller, e.g. with promise: Joannaᵢ promised me [PROᵢ to go grocery shopping tomorrow].
Ooh, that’s a good example! May I steal it for my next post here?
(A side note: Amy Rose Deal's first name is Amy Rose—she doesn't go by "Amy Deal". In citations, you'll see "Deal, Amy Rose".)
Oops — thanks for the correction!
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
priscianic
Posts: 37
Joined: Sun Jun 28, 2020 11:10 pm

Re: Syntax random

Post by priscianic »

bradrn wrote: Mon Jul 13, 2020 12:27 am
priscianic wrote: Mon Jul 13, 2020 12:03 am In we would like [you to stay], you have what's known as "exceptional case marking" (ECM), where the embedded subject (which really does start out its life in the embedded clause) gets "exceptionally" marked as accusative (e.g. we would like him to stay) across a clause boundary. You'll also see this kind of construction called "raising to object", as the embedded subject seems to actually end up in the matrix clause—for instance, you can have matrix adverbials follow the embedded subject, in sentences like we would like youᵢ with all our heart [youᵢ to stay], where the second youᵢ is the unpronounced position the subject moved from. The adverbial phrase with all our heart is modifying the matrix VP, telling us something about the degree of intensity with which the speaker would like the addressee to stay.
So this is a raising construction rather than a control construction? That makes more sense — thanks for clarifying! (I was actually looking at the difference between raising and control earlier today; I feel a bit stupid now for not realising this myself, although perhaps I was misled by the fact that Radford used this sentence when he was discussing control.)

However, I’m still a bit confused: Wikipedia lists the use of an expletive there as allowable for a raising-to-object predicate but not a control predicate, yet *we would like there to stay is ungrammatical. Why is this?
*We would like there to stay is ungrammatical because there stays is ungrammatical. Presumably there stays is ungrammatical because the verb stays requires some kind of actual semantically-contentful argument, and expletive there isn't a semantically-contentful argument. (In the Government and Binding framework, this is the result of the "theta criterion"—the requirement that each argument receive one (and only one) theta role, and that each theta role be assigned to an argument. So the idea is that stay assigns some kind of theta role (e.g. perhaps agent or something) to the "stayer", and in this sentence there is no argument to receive that theta role, hence the ungrammaticality.)
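
If you like, the theta criterion can be thought of as simple bookkeeping, which makes it easy to see why *there stays crashes. Here's a minimal sketch in Python with made-up theta grids (the role label "stayer" is just the informal one from above):

    # Minimal sketch of the theta criterion as a counting check.
    # Theta grids are invented for illustration.
    THETA_GRIDS = {
        "stay": ["stayer"],  # stay has one theta role to assign
        "rain": [],          # weather verb: nothing to assign
    }

    EXPLETIVES = {"it", "there"}

    def theta_ok(verb, arguments):
        """One-to-one: every theta role goes to a contentful argument,
        and every contentful argument receives a theta role."""
        contentful = [a for a in arguments if a not in EXPLETIVES]
        return len(THETA_GRIDS[verb]) == len(contentful)

    print(theta_ok("stay", ["you"]))    # True:  you stay
    print(theta_ok("stay", ["there"]))  # False: *there stays (role unassigned)
    print(theta_ok("rain", ["it"]))     # True:  it rains (expletive, no role needed)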
FYI, English does allow A to be a controller, e.g. with promise: Joannaᵢ promised me [PROᵢ to go grocery shopping tomorrow].
Ooh, that’s a good example! May I steal it for my next post here?
Yeah, of course! It's a very standard kind of example.

Glad I could help!
bradrn
Posts: 6257
Joined: Fri Oct 19, 2018 1:25 am

Re: Syntax random

Post by bradrn »

priscianic wrote: Mon Jul 13, 2020 12:35 am
bradrn wrote: Mon Jul 13, 2020 12:27 am
priscianic wrote: Mon Jul 13, 2020 12:03 am In we would like [you to stay], you have what's known as "exceptional case marking" (ECM), where the embedded subject (which really does start out its life in the embedded clause) gets "exceptionally" marked as accusative (e.g. we would like him to stay) across a clause boundary. You'll also see this kind of construction called "raising to object", as the embedded subject seems to actually end up in the matrix clause—for instance, you can have matrix adverbials follow the embedded subject, in sentences like we would like youᵢ with all our heart [youᵢ to stay], where the second youᵢ is the unpronounced position the subject moved from. The adverbial phrase with all our heart is modifying the matrix VP, telling us something about the degree of intensity with which the speaker would like the addressee to stay.
So this is a raising construction rather than a control construction? That makes more sense — thanks for clarifying! (I was actually looking at the difference between raising and control earlier today; I feel a bit stupid now for not realising this myself, although perhaps I was misled by the fact that Radford used this sentence when he was discussing control.)

However, I’m still a bit confused: Wikipedia lists the use of an expletive there as allowable for a raising-to-object predicate but not a control predicate, yet *we would like there to stay is ungrammatical. Why is this?
*We would like there to stay is ungrammatical because there stays is ungrammatical. Presumably there stays is ungrammatical because the verb stays requires some kind of actual semantically-contentful argument, and expletive there isn't a semantically-contentful argument. (In the Government and Binding framework, this is the result of the "theta criterion"—the requirement that each argument receive one (and only one) theta role, and that each theta role be assigned to an argument. So the idea is that stay assigns some kind of theta role (e.g. perhaps agent or something) to the "stayer", and in this sentence there is no argument to receive that theta role, hence the ungrammaticality.)
Ah, right. Thanks for the help!
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
bradrn
Posts: 6257
Joined: Fri Oct 19, 2018 1:25 am

Re: Syntax random

Post by bradrn »

Another question about raising! (You didn’t think you could escape my incessant questioning, did you? :)) I was reading Dixon’s Basic Linguistic Theory — a useful resource, but always a bit odd when it comes to syntax — when I came across the following paragraph:
Dixon wrote: It has been suggested that ‘raising’ is involved for some complement clause constructions in English; however, this is a matter for debate. … Consider:

(50) Mary persuaded John to hit Fred

It is unprofitable to put forward the limited view that an NP can have only one function and then to try to decide which of the two functions to assign to John in (50). A reflexive pronoun ending in -self can only be used when coreferential with another NP in the same clause. Note that one can say Mary forced herself to hit Fred (the herself is in the same clause as Mary) and also Mary forced John to hit himself (the himself is in the same clause as John); this shows that […] John is functioning both as O for the first clause and as A argument for the second one. The underlying structure of (50) is:

(50u) Mary_A persuaded John_O [John_A to hit Fred_O]

One of the two successive occurrences of John is then omitted from surface structure.

When John is replaced by a pronoun, the underlying structure becomes:

(51u) Mary_A persuaded him_O [he_A to hit Fred_O]

And this comes out in surface structure as:

(51) Mary persuaded him to hit Fred.
Now, I realise that I am horribly ignorant of syntactic matters, but to me this seems to contradict what everyone was saying earlier about raising. On the other hand, this approach does have a certain appeal: it avoids the need to postulate exceptional case-marking, and it neatly explains the behaviour of reflexives. So is there any merit to this alternative explanation, or is it simply incorrect — and if so, what makes it incorrect?
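
(To check my own understanding, I tried encoding Dixon's reflexive test as a toy clause-mate check in Python. The flat NP lists below stand in for real clause structure, so take this as purely illustrative:)

    # Toy clause-mate condition: a -self reflexive needs a coreferential
    # NP in its own clause. Flat NP lists stand in for real structure.
    def reflexive_ok(clause_nps, antecedent):
        return antecedent in clause_nps

    # (50u) Mary_A persuaded John_O [John_A to hit Fred_O]
    matrix = ["Mary", "John"]    # NPs of the matrix clause
    embedded = ["John", "Fred"]  # NPs of the embedded clause

    print(reflexive_ok(matrix, "Mary"))    # True:  Mary forced herself to hit Fred
    print(reflexive_ok(embedded, "John"))  # True:  Mary forced John to hit himself
    print(reflexive_ok(embedded, "Mary"))  # False: *Mary forced John to hit herself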
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)