Soshul meedja.

Topics that can go away
chris_notts
Posts: 682
Joined: Tue Oct 09, 2018 5:35 pm

Re: Soshul meedja.

Post by chris_notts »

bradrn wrote: Thu Mar 16, 2023 3:20 am Yep, I definitely see it as an argument for Construction Grammar or similar. But then again I was already leaning strongly towards such theories a year or so ago, so maybe it’s just confirmation bias.
Me too! Probably the more Croftian version of it, but I'm not too picky.
bradrn
Posts: 5740
Joined: Fri Oct 19, 2018 1:25 am

Re: Soshul meedja.

Post by bradrn »

chris_notts wrote: Thu Mar 16, 2023 3:50 am
bradrn wrote: Thu Mar 16, 2023 3:20 am Yep, I definitely see it as an argument for Construction Grammar or similar. But then again I was already leaning strongly towards such theories a year or so ago, so maybe it’s just confirmation bias.
Me too! Probably the more Croftian version of it, but I'm not too picky.
Hmm, what’s that?
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
chris_notts
Posts: 682
Joined: Tue Oct 09, 2018 5:35 pm

Re: Soshul meedja.

Post by chris_notts »

bradrn wrote: Thu Mar 16, 2023 3:55 am
chris_notts wrote: Thu Mar 16, 2023 3:50 am
bradrn wrote: Thu Mar 16, 2023 3:20 am Yep, I definitely see it as an argument for Construction Grammar or similar. But then again I was already leaning strongly towards such theories a year or so ago, so maybe it’s just confirmation bias.
Me too! Probably the more Croftian version of it, but I'm not too picky.
Hmm, what’s that?
I was thinking of this book:

https://books.google.co.uk/books/about/ ... edir_esc=y
zompist
Site Admin
Posts: 2725
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Soshul meedja.

Post by zompist »

chris_notts wrote: Thu Mar 16, 2023 3:03 am
Ares Land wrote: Thu Mar 16, 2023 2:54 am Looking at it another way: artificial intelligence sheds light on how the brain works. Language centers in the human brain probably resemble ML language models somewhat. Without even going into ML, there's probably something of a Markov chain there.
(When I'm very tired, I certainly tend to pick the next statistically likely word instead of what I actually meant to say :))
I want to see linguistics grapple with what it means, if anything, for an LLM to be able to produce fluent, grammatical English just by finding usage patterns, without directly encoding anything as complex as UG at all.
Eh, Markov generators are just as much of a challenge. They're far simpler, are obviously not "intelligent", and yet are almost creepy in their ability to reproduce the basics of language. I agree with Brad that something like Construction Grammar is more able to handle learning from a corpus than UG. But then I think UG is about the worst of Chomsky's ideas.

But, take your LLM to a cocktail party and ask it to describe the room, and analyze the social relationships in it. Uh oh! Maybe a purely linguistic corpus does not fully explain human functioning.

Though we don't have UG as Chomsky envisions it, we do have a hundred million years of evolutionary history as animals functioning in the world. We didn't evolve to randomly generate plausible-sounding text; we did evolve to see and interact with things in the world.

(And yes, I'm aware that you could add visual data, mechanical sensors, motor devices, etc, etc. But then it's not just a language model, is it?)
zompist
Site Admin
Posts: 2725
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Soshul meedja.

Post by zompist »

bradrn wrote: Wed Mar 15, 2023 11:01 pm This isn’t the argument. The argument is that a superintelligent AI just has to be fixated on any goal. If ‘preserve humanity‘ isn’t its priority, then there’s a good chance it won’t preserve humanity.

(The obvious answer to that is that then we should figure out how to make ‘preserve humanity’ a priority for it; that’s basically what the AI alignment people go on about.)
Arguably the machines in the Matrix found a way to do just that. "Preserve humanity" is pretty vague.
chris_notts
Posts: 682
Joined: Tue Oct 09, 2018 5:35 pm

Re: Soshul meedja.

Post by chris_notts »

zompist wrote: Thu Mar 16, 2023 4:06 am Though we don't have UG as Chomsky envisions it, we do have a hundred million years of evolutionary history as animals functioning in the world. We didn't evolve to randomly generate plausible-sounding text; we did evolve to see and interact with things in the world.

(And yes, I'm aware that you could add visual data, mechanical sensors, motor devices, etc, etc. But then it's not just a language model, is it?)
True, but isn't this just a further argument against the Chomskian school? They're the ones trying to separate the core of language itself from the rest of cognition and the embedded nature of the speaker. If you accept that the purpose and use of language is strongly intertwined with every aspect of its design, and that there are no hard boundaries between grammar, lexicon, pragmatics, etc., then surely formalism and the Chomskian programme are dead.
bradrn
Posts: 5740
Joined: Fri Oct 19, 2018 1:25 am

Re: Soshul meedja.

Post by bradrn »

zompist wrote: Thu Mar 16, 2023 4:06 am
chris_notts wrote: Thu Mar 16, 2023 3:03 am
Ares Land wrote: Thu Mar 16, 2023 2:54 am Looking at it another way: artificial intelligence sheds light on how the brain works. Language centers in the human brain probably resemble ML language models somewhat. Without even going into ML, there's probably something of a Markov chain there.
(When I'm very tired, I certainly tend to pick the next statistically likely word instead of what I actually meant to say :))
I want to see linguistics grapple with what it means, if anything, for an LLM to be able to produce fluent, grammatical English just by finding usage patterns, without directly encoding anything as complex as UG at all.
Eh, Markov generators are just as much of a challenge. They're far simpler, are obviously not "intelligent", and yet are almost creepy in their ability to reproduce the basics of language.
Not really, no. Output like this:
viii--the queen's croquet ground near our breath.
this moment, the white rabbit cried alice opened the fall never heard it, i wonder?"
said, quiet thing sat down in another moment she went straight on, to herself.
…simply cannot be compared to what current LLMs can produce. In general, the latter seems indistinguishable (or at least very nearly so) from human utterances — there is a reason universities are panicking about ChatGPT-produced essays but never worried about Markov chains!
But, take your LLM to a cocktail party and ask it to describe the room, and analyze the social relationships in it. Uh oh! Maybe a purely linguistic corpus does not fully explain human functioning.
Of course! But there is a chance it could go some way towards explaining language.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Moose-tache
Posts: 1746
Joined: Fri Aug 24, 2018 2:12 am

Re: Soshul meedja.

Post by Moose-tache »

chris_notts: "True, but isn't this just a further argument against the Chomskian school?"

We don't need more arguments against Chomsky. His school of linguistics is absurd on its face and always has been.

But the fact that AI can create lucid bullshit doesn't do anything to further disprove his ramblings, since an AI generates lucid bullshit by applying superhuman amounts of computing power to the problem. You might as well say the room of infinite monkeys and infinite typewriters disproves this or that linguistic theory.
I did it. I made the world's worst book review blog.
zompist
Site Admin
Posts: 2725
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Soshul meedja.

Post by zompist »

chris_notts wrote: Thu Mar 16, 2023 4:16 am
zompist wrote: Thu Mar 16, 2023 4:06 am Though we don't have UG as Chomsky envisions it, we do have a hundred million years of evolutionary history as animals functioning in the world. We didn't evolve to randomly generate plausible-sounding text; we did evolve to see and interact with things in the world.

(And yes, I'm aware that you could add visual data, mechanical sensors, motor devices, etc, etc. But then it's not just a language model, is it?)
True, but isn't this just a further argument against the Chomskian school? They're the ones trying to separate the core of language itself from the rest of cognition and the embedded nature of the speaker. If you accept that the purpose and use of language is strongly intertwined with every aspect of its design, and that there are no hard boundaries between grammar, lexicon, pragmatics, etc., then surely formalism and the Chomskian programme are dead.
I totally agree that the philosophy behind Chomsky's UG is wrong... I'm for cognitive linguistics too. But when you refute the weakest and silliest part of a theory, you do not disprove the whole theory. There are cognitivist versions of formalism too-- Lakoff's work, for instance.
chris_notts
Posts: 682
Joined: Tue Oct 09, 2018 5:35 pm

Re: Soshul meedja.

Post by chris_notts »

Moose-tache wrote: Thu Mar 16, 2023 4:30 am chris_notts: "True, but isn't this just a further argument against the Chomskian school?"

We don't need more arguments against Chomsky. His school of linguistics is absurd on its face and always has been.
You say this, but it hasn't stopped him and his ideas dominating much of the field for decades, despite the fact that everything we know about how evolution and cognition work would suggest he was talking rubbish. It's depressing how insular and anti-empirical much of linguistics has managed to be.

If you want to debate, you need to argue against what your opponents actually say and believe, not just dismiss it as absurd.
But the fact that AI can create lucid bullshit doesn't do anything to further disprove his ramblings, since an AI generates lucid bullshit by applying superhuman amounts of computing power to the problem. You might as well say the room of infinite monkeys and infinite typewriters disproves this or that linguistic theory.
This is potentially a good counterargument to using LLMs as evidence in the debate about the nature of language.

Although it's not clear to me whether LLMs are the best possible way to do it with computers, or whether their training (the bit that requires by far the most computing power) is only so expensive now because the technology is in its early stages. This may or may not change over time.
bradrn
Posts: 5740
Joined: Fri Oct 19, 2018 1:25 am

Re: Soshul meedja.

Post by bradrn »

zompist wrote: Thu Mar 16, 2023 4:06 am (And yes, I'm aware that you could add visual data, mechanical sensors, motor devices, etc, etc. But then it's not just a language model, is it?)
Didn’t notice this comment earlier. Let me post this in response (and as a plain interesting piece of work in and of itself): PaLM-E: An Embodied Multimodal Language Model.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
zompist
Site Admin
Posts: 2725
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Soshul meedja.

Post by zompist »

bradrn wrote: Thu Mar 16, 2023 4:29 am Not really, no. Output like this:…simply cannot be compared to what current LLMs can produce. In general, the latter seems indistinguishable (or at least very nearly so) from human utterances — there is a reason universities are panicking about ChatGPT-produced essays but never worried about Markov chains!
Did you seriously stop reading before you got to level 3 generators? Yes, amazingly, ChatGPT does better than looking at two-word sequences in a 25,000-word text.

How well would a higher-level Markov generator do if it was trained on 300 billion words?

I do think ChatGPT is a remarkable achievement, but by this time it's also remarkably overhyped.

When you have a mega-mondo database of English, then you can do some absolutely stunning statistical feats. You can do them better with an LLM, but you can do them with Markov too.
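
For concreteness, here is a minimal sketch of the kind of generator under discussion: an order-2 (two-word-context) Markov model, which seems to be what "level 3" refers to above. It is an illustration only, not anyone's actual code, and the filename "alice.txt" is a placeholder for a small corpus such as the 25,000-word text mentioned.

import random
from collections import defaultdict

def build_model(words, order=2):
    # Map each `order`-word context to every word observed right after it.
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, order=2, length=50):
    # Start from a random context, then repeatedly sample a successor.
    out = list(random.choice(list(model.keys())))
    for _ in range(length):
        successors = model.get(tuple(out[-order:]))
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

# "alice.txt" is a placeholder corpus.
words = open("alice.txt", encoding="utf-8").read().lower().split()
print(generate(build_model(words)))

Trained on a short text, this produces exactly the sort of near-English quoted earlier in the thread; the open question raised here is how far it would scale with vastly more data.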
zompist
Site Admin
Posts: 2725
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Soshul meedja.

Post by zompist »

chris_notts wrote: Thu Mar 16, 2023 4:44 am
Moose-tache wrote: Thu Mar 16, 2023 4:30 am chris_notts: "True, but isn't this just a further argument against the Chomskian school?"

We don't need more arguments against Chomsky. His school of linguistics is absurd on its face and always has been.
You say this, but it hasn't stopped him and his ideas dominating much of the field for decades, despite the fact that everything we know about how evolution and cognition work would suggest he was talking rubbish. It's depressing how insular and anti-empirical much of linguistics has managed to be.
Chomsky's ideas on UG are about 1% of his work on linguistics and 0% of the useful part. Don't talk rubbish yourself.
bradrn
Posts: 5740
Joined: Fri Oct 19, 2018 1:25 am

Re: Soshul meedja.

Post by bradrn »

zompist wrote: Thu Mar 16, 2023 4:53 am
bradrn wrote: Thu Mar 16, 2023 4:29 am Not really, no. Output like this:…simply cannot be compared to what current LLMs can produce. In general, the latter seems indistinguishable (or at least very nearly so) from human utterances — there is a reason universities are panicking about ChatGPT-produced essays but never worried about Markov chains!
Did you seriously stop reading before you got to level 3 generators?
Er, yes, it appears I did. Sorry.
How well would a higher-level Markov generator do if it was trained on 300 billion words?

I do think ChatGPT is a remarkable achievement, but by this time it's also remarkably overhyped.

When you have a mega-mondo database of English, then you can do some absolutely stunning statistical feats. You can do them better with an LLM, but you can do them with Markov too.
I’d actually be quite interested in seeing this. Has anyone attempted it? I feel doubtful it would produce anything near the level of ChatGPT, but it’s hard to know until I’ve seen it.
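
One back-of-envelope way to see the difficulty, using assumed round numbers (the 50,000-word vocabulary is a guess, not a figure from the thread): the number of possible contexts grows exponentially with the Markov order, so even 300 billion training words spread very thin.

# Hypothetical figures: 50,000-word vocabulary, 300 billion training tokens.
vocab = 50_000
tokens = 300_000_000_000
for order in range(1, 5):
    contexts = vocab ** order  # possible distinct contexts at this order
    print(f"order {order}: {contexts:.1e} contexts, "
          f"{tokens / contexts:.1e} tokens per context on average")

By order 3 the average context is seen far less than once, so most of the table would be empty. Real text is nowhere near uniformly distributed, so this is only a rough intuition, but it suggests why pure lookup plateaus where an LLM, which generalizes across similar contexts, might not.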
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
chris_notts
Posts: 682
Joined: Tue Oct 09, 2018 5:35 pm

Re: Soshul meedja.

Post by chris_notts »

zompist wrote: Thu Mar 16, 2023 4:56 am
chris_notts wrote: Thu Mar 16, 2023 4:44 am
Moose-tache wrote: Thu Mar 16, 2023 4:30 am chris_notts: "True, but isn't this just a further argument against the Chomskian school?"

We don't need more arguments against Chomsky. His school of linguistics is absurd on its face and always has been.
You say this, but it hasn't stopped him and his ideas dominating much of the field for decades, despite the fact that everything we know about how evolution and cognition work would suggest he was talking rubbish. It's depressing how insular and anti-empirical much of linguistics has managed to be.
Chomsky's ideas on UG are about 1% of his work on linguistics and 0% of the useful part. Don't talk rubbish yourself.
It's a foundational idea! By accepting it, and then focusing most of your area's research on a small number of big, well-known languages (often the ones the author speaks) while making strong universality assumptions, you systematically bias your output as a field. What positive contribution of Chomsky's do you think outweighs the damage?
bradrn
Posts: 5740
Joined: Fri Oct 19, 2018 1:25 am

Re: Soshul meedja.

Post by bradrn »

chris_notts wrote: Thu Mar 16, 2023 5:03 am What positive contribution of Chomsky's do you think outweighs the damage?
To mention but one, as I recall he was one of the first people to popularise syntax as a legitimate and interesting area of study.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Torco
Posts: 663
Joined: Fri Jul 13, 2018 9:11 am

Re: Soshul meedja.

Post by Torco »

Moose-tache wrote: Wed Mar 15, 2023 11:12 pm Multiple pages into the omnidirectional epistemological slap fight, and my original conclusion remains undislodged: If you think the logical functions performed by a computer cannot be a form of consciousness, you do not know what consciousness is made out of.
but that's the point, we indeed *don't* know how the mind works. we do not know what consciousness is made out of. my lil bro works in that field, and he knows we don't know. ask other neuroscientists, they also don't know. the consciousness research community gets excited at things like "look, i made this equation to calculate the complexity of the EEG signals from the brain, and they correlate to how conscious you are as you fall asleep or something". we have no idea what consciousness is made out of. they don't even have a good definition for consciousness. now, ofc, that don't mean we can't have computers get consciousness... but it also doesn't mean we can. what even is consciousness? how would you know if gpt-9 is feeling feelings or just producing the kinds of language that it thinks are consistent with feeling feelings?

if you ask me, they should focus more on more concrete aspects of the functioning of the mind... like thought, memory, recognition, attention and so on... but hey, "i research consciousness" sounds sexy, and thus gets grants, postdocs and whatever else scientists need to survive in our post-capitalist days.

Also, the basilisk is just Pascal's mugging all over again. Look, I bring the message of Grod, which is exactly like God but will torture you forever if you don't give me 6 bucks right now: are you willing to risk it?
bradrn
Posts: 5740
Joined: Fri Oct 19, 2018 1:25 am

Re: Soshul meedja.

Post by bradrn »

Torco wrote: Thu Mar 16, 2023 6:51 am how would you know if gpt-9 is feeling feelings or just producing the kinds of language that it thinks are consistent with feeling feelings?
How can I know if you are feeling feelings or just producing the kinds of language that are consistent with feeling feelings?
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Torco
Posts: 663
Joined: Fri Jul 13, 2018 9:11 am

Re: Soshul meedja.

Post by Torco »

can't speak for you, but the way I know you have feelings is cause I'm very similar to you (made out of meat, same amount of appendages plus minus one or two, same organs, etcetera) and since I know I have emotions that affect my behavior in this and that way, I guess that you do too. I don't know this apodictically, but I don't know anything apodictically, all is guesses.
zompist
Site Admin
Posts: 2725
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Soshul meedja.

Post by zompist »

chris_notts wrote: Thu Mar 16, 2023 5:03 am
zompist wrote: Thu Mar 16, 2023 4:56 am Chomsky's ideas on UG are about 1% of his work on linguistics and 0% of the useful part.
It's a foundational idea!
No, it really isn't. If you read Syntactic Structures or even Aspects, there's nothing about it there. Generative grammar started as an attempt to formalize the rules of syntax, and it was explicitly agnostic about whether those rules were even in the brain or not. Chomsky used to relegate all such details to "performance", insisting he was only studying "competence." A syntactic theory was an attempt to describe how the syntax of a language worked; Chomsky's methods have been applied and extended for 70 years now without anyone having to believe in UG.

Thinking it's foundational is like thinking the Copenhagen interpretation is the foundation of quantum mechanics. It's not, it's just a philosophy about how QM 'works', and there are alternatives.

I'd also note that if you don't stress over the details, the question isn't so easy to dismiss. The brain is a device that learns languages pretty easily; till recently there were no other such devices. Our nearest ape relatives can't really do it; the monkeys certainly can't. It's not unreasonable to say that something genetic underlies this difference.

Now, the cognitivist position is that general cognitive abilities are enough; specialized language organs aren't needed. That's a strong claim in itself, and the best we can say is that it isn't proven. But it's worse than that: damage Broca's area or Wernicke's area, and you damage language ability. People with language deficits due to damage in these areas retain their general cognitive abilities, but those do not seem to be enough to get around the deficits.

The main reason to dismiss Chomsky's ruminations on UG is that he really has no interest in cognitive psychology, evolution, or language acquisition— his ideas are based on theoretical steps or philosophical positions.
By accepting it, and then focusing most of your area's research on a small number of big, well-known languages (often the ones the author speaks)
This is an old charge against Chomsky; it had some validity in 1965, but it's nonsense today. When you throw an army of grad students at the world for 70 years, the field is no longer limited to the languages Chomsky knew (and let's not have any nonsense about "just English"; his dissertation is on Hebrew and he knows French fluently).

I do think Chomskyan syntax retains some anglocentric assumptions... it's (now) designed to accommodate many languages, but the base structures are suspiciously English-like.
What positive contribution of Chomsky's do you think outweighs the damage?
Demolishing Skinner's extremely reductive approach.
A hierarchy of generative tools that is useful enough to be taught in computer science classes.
The basics of generative syntax (base structures + transformations; see the toy sketch after this list).
A clever and innovative analysis of the English verbal complex.
Tools for analyzing and graphically displaying syntactic structure.
Feature analysis in syntax and phonology.
An uncounted number of syntactic phenomena which earlier syntax completely missed.
Proofs that surface structure is not enough to determine semantics.
Attention to issues of scope (for quantifiers, negation, pronouns, etc.).
Using purely syntactic methods, rather than philosophy, to determine syntactic classes.
Keeping the GS school honest. (Any theory benefits from having an intelligent critic.)
Inspiring a generation of AI researchers.*
Giving conlangers something to put in the syntax section (as opposed to having none, the 19th century practice)
Inspiring (and in many cases directly teaching) James McCawley, Paul Postal, John Ross, Barbara Partee, George Lakoff, Robin Lakoff, Ray Jackendoff, Joan Bresnan, Peter Culicover, Andrew Carnie, Charles Fillmore, Richard Hudson, Samuel Keyser, John Lawler, Geoffrey Pullum, Geoffrey Sampson, Robert Van Valin, Susumu Kuno, Hu Matthews, Adrian Akmajian, David Allerton, Judith Levi, David Perlmutter, Mark Baker, Jerrold Katz, Howard Lasnik, and I'm now tired of typing out names.

* There are fads in AI as there are in linguistics, so there may be a tendency to dismiss non-neural-net-based work. But that work had a clear advantage that present work does not: it could be understood by a human. It now seems likely that we could produce general AI without, in fact, learning anything about intelligence.
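
To make the "base structures + transformations" item above concrete, here is a toy sketch in Python. The grammar and the inversion rule are invented for illustration and do not reproduce any particular Chomskyan analysis: a tiny phrase-structure grammar generates base sentences, and a simple transformation derives a yes/no question by fronting the auxiliary.

import random

# A toy phrase-structure grammar, invented for illustration.
GRAMMAR = {
    "S":   [["NP", "Aux", "VP"]],
    "NP":  [["the", "cat"], ["the", "dog"], ["a", "linguist"],
            ["NP", "near", "NP"]],         # recursion: NPs can nest
    "Aux": [["can"], ["will"]],
    "VP":  [["see", "NP"], ["sleep"]],
}
AUXILIARIES = {"can", "will"}

def expand(symbol):
    # Rewrite a symbol with a random rule until only words remain.
    if symbol not in GRAMMAR:
        return [symbol]                    # terminal: an actual word
    rule = random.choice(GRAMMAR[symbol])
    return [word for part in rule for word in expand(part)]

def question(sentence):
    # A toy "transformation": subject-auxiliary inversion. The moved
    # element is found by category, not by counting words, however long
    # the subject NP happens to be.
    i = next(k for k, w in enumerate(sentence) if w in AUXILIARIES)
    return [sentence[i]] + sentence[:i] + sentence[i + 1:]

base = expand("S")
print(" ".join(base))                      # e.g. "the cat near the dog can sleep"
print(" ".join(question(base)) + "?")      # e.g. "can the cat near the dog sleep?"

The recursive NP rule is also what puts even this toy beyond a fixed word-sequence table, which is the generative-hierarchy point in the list above.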