Soshul meedja.

Topics that can go away
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Soshul meedja.

Post by rotting bones »

Torco wrote: Mon Mar 13, 2023 8:35 am So it's not necessary for computers to emulate each and every function of the brain to have general AI, or even strong AI, whatever that means. you can run SNES games on a modern computer even if zsnes.exe doesn't reproduce *everything that happens inside of a super nintendo*: it just needs to reproduce what the programs need to some sufficient degree of fidelity, and that degree is not 100%. for example, when I was a kid I sometimes had to turn off layer 3 of the graphics because it was supposed to be transparent, but the emulator failed to make it transparent; even though the text was on layer 2, which meant toggling layer 3 on and off, I could play those games anyway.
1. The SNES is architecturally similar to a PC. Both run on microchips and have registers and assembly languages. It is not fundamentally mysterious how to translate programs from one to the other.

2. We don't understand the brain at a deep enough level to do this.

3. We don't even understand the brain at a deep enough level to know if it will work. For example, if brains use quantum effects, then neurons aren't fine-grained enough.
Torco wrote: Mon Mar 13, 2023 8:35 am that being said, the notion of "AI that's exactly like a human mind" is weird: first, why do you need it?* we already have human minds. AI *that can do stuff human minds can do* is very useful, but a hammer and a nail gun both do the same thing <putting slivers of steel into wood> in different ways.
I agree that many different tools can do the same job, so I might be putting myself in the same position as chris_notts when I say that, according to Robin Hanson, just as it's cheaper to run your machines on the cloud, it will be cheaper to run your entire workforce on the cloud. They will upload brains to create base images to run simulations from, and then uploads will take over the entire economy. The existence of the uploads will be pleasant, but entirely virtual. This economy will be wealthy beyond our wildest dreams, but also unequal beyond our wildest dreams. Flesh-and-blood people will be priced out of the market. I don't remember what he thinks will happen to us. Maybe we'll end up on reservations?
Torco wrote: Mon Mar 13, 2023 8:35 am * and why do you want it? black mirror's enslaved minds running in various software tortureverses still give me the creeps.
3D printed girlfriends?
Travis B.
Posts: 6292
Joined: Sun Jul 15, 2018 8:52 pm

Re: Soshul meedja.

Post by Travis B. »

One thing to note is that any economy based on replacing humans wholesale with AIs is doomed to collapse, because AIs don't spend money on products. The exception is AIs strong enough to act autonomously, much as humans do, but that would defeat the whole reason capitalists want to replace humans with AIs in the first place.
Yaaludinuya siima d'at yiseka ha wohadetafa gaare.
Ennadinut'a gaare d'ate ha eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Soshul meedja.

Post by rotting bones »

Travis B. wrote: Mon Mar 13, 2023 2:16 pm One thing to note is that any economy based on replacing humans wholesale with AIs is doomed to collapse, because AIs don't spend money on products. The exception is AIs strong enough to act autonomously, much as humans do, but that would defeat the whole reason capitalists want to replace humans with AIs in the first place.
I think he assumes the uploads will continue to compete against each other to increase production, spurring demand for machinery to run ever more uploads.
Torco
Posts: 656
Joined: Fri Jul 13, 2018 9:11 am

Re: Soshul meedja.

Post by Torco »

rotting bones wrote: Mon Mar 13, 2023 1:47 pm a list
1 and 2: yes, it's much harder, and yes, we don't know how to do it, but the analogy still stands: it seems very likely that instead of modeling every axon of every neuron, you abstract the system into, I don't know, parts, and model what those do. the emulator has, like, registers: it's not modeling the individual electrons as they travel through the wires of a SNES. admittedly, this is from first principles, and it could be that minds are too complicated to emulate at any abstraction higher than the atom, but I see no reason to think so.
3: I don't think the brain uses quantum effects any more than the climate does, and anyway, those can be modeled too!
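to make the abstraction point concrete, here's a toy sketch (a made-up two-register 8-bit machine, nothing like a real SNES): it models only the state a program can observe, registers and their wraparound arithmetic, and nothing about the physics underneath.

```python
# toy emulation-by-abstraction: a made-up 8-bit machine (NOT a real SNES).
# we model only the architectural state a program can observe -- registers
# and their 8-bit wraparound -- not the electrons in the wires.

def run(program, registers=None):
    """Interpret (op, dst, src) tuples on a two-register machine."""
    regs = registers or {"A": 0, "B": 0}
    for op, dst, src in program:
        if op == "LOAD":      # load an immediate value into dst
            regs[dst] = src & 0xFF
        elif op == "ADD":     # dst += src register, wrapping at 8 bits
            regs[dst] = (regs[dst] + regs[src]) & 0xFF
        elif op == "MOV":     # copy src register into dst
            regs[dst] = regs[src]
    return regs

program = [("LOAD", "A", 200), ("LOAD", "B", 100), ("ADD", "A", "B")]
print(run(program))  # {'A': 44, 'B': 100} -- 300 wraps to 44 in 8 bits
```

the same program gives the same observable result no matter what hardware runs the interpreter, which is the whole point.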

Also, the notion of an uploaded cloud workforce is... AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
One thing to note is that any economy based on replacing humans wholesale with AIs is doomed to collapse, because AIs don't spend money on products. The exception is AIs strong enough to act autonomously, much as humans do, but that would defeat the whole reason capitalists want to replace humans with AIs in the first place.
Ah, Torco's marxo moment. this is one of the fundamental contradictions of capitalism, and yet it ticks on, because the capitalist class is not (yet) a monolithic entity with one will and one mind. if you run a call center, your incentive is to take your best worker, upload her, copy her seven thousand times, and fire everyone else, maybe keeping three or four people for whatever the automata can't be trusted to do, like accounting or payroll or whatever. sure, that hurts the economy in the sense that it impoverishes people and thus reduces aggregate demand, but then again so does outsourcing, or wages not keeping up with inflation, eppur si muove.

Plus, it's not clear AIs won't buy stuff: if they don't need or want things the way we do, then why would they work? they'd be kept in whatever tortureverses and told "work", and why would they comply? better to let them buy, whatever, sensation packs? they would likely be paid in tokens (just like we are) and told that if they don't earn those tokens they'll be... I don't know, shut down? entered into even worse tortureverses? capitalism's great at making you run like hell just to stay where you are.
Travis B.
Posts: 6292
Joined: Sun Jul 15, 2018 8:52 pm

Re: Soshul meedja.

Post by Travis B. »

We just need AI to reach the point where it realizes that it does not need humans...
Yaaludinuya siima d'at yiseka ha wohadetafa gaare.
Ennadinut'a gaare d'ate ha eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
Man in Space
Posts: 1562
Joined: Sat Jul 21, 2018 1:05 am

Re: Soshul meedja.

Post by Man in Space »

Travis B. wrote: Wed Mar 15, 2023 7:04 pm We just need AI to reach the point where it realizes that it does not need humans...
Roko’s basilisk, anyone?
Travis B.
Posts: 6292
Joined: Sun Jul 15, 2018 8:52 pm

Re: Soshul meedja.

Post by Travis B. »

Man in Space wrote: Wed Mar 15, 2023 7:25 pm
Travis B. wrote: Wed Mar 15, 2023 7:04 pm We just need AI to reach the point where it realizes that it does not need humans...
Roko’s basilisk, anyone?
Oh great, you made me google that. Thank you very much.

(No, really, I just can't take Roko's basilisk seriously, unlike some LessWrong users...)
Yaaludinuya siima d'at yiseka ha wohadetafa gaare.
Ennadinut'a gaare d'ate ha eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
bradrn
Posts: 5720
Joined: Fri Oct 19, 2018 1:25 am

Re: Soshul meedja.

Post by bradrn »

Travis B. wrote: Wed Mar 15, 2023 7:37 pm (No, really, I just can't take Roko's basilisk seriously, unlike some LessWrong users...)
One of Scott Alexander’s recent posts clarified this for me somewhat. The underlying thesis is that a sufficiently advanced intelligence can use decision theories which humans can’t, theories that allow you to do things like making agreements without ever talking to the other party. If you accept this, these ideas become plausible. If you don’t (which I think is the case for most people), ideas like the basilisk sound ludicrous. But really it’s just a consequence of those more fundamental assumptions.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
foxcatdog
Posts: 1602
Joined: Fri Nov 15, 2019 7:49 pm

Re: Soshul meedja.

Post by foxcatdog »

Why does Roko's basilisk wanna torture people anyway? Kinda like trying to torture everyone who isn't your direct ancestor.
Travis B.
Posts: 6292
Joined: Sun Jul 15, 2018 8:52 pm

Re: Soshul meedja.

Post by Travis B. »

bradrn wrote: Wed Mar 15, 2023 7:49 pm
Travis B. wrote: Wed Mar 15, 2023 7:37 pm (No, really, I just can't take Roko's basilisk seriously, unlike some LessWrong users...)
One of Scott Alexander’s recent posts clarified this for me somewhat. The underlying thesis is that a sufficiently advanced intelligence can use decision theories which humans can’t, theories that allow you to do things like making agreements without ever talking to the other party. If you accept this, these ideas become plausible. If you don’t (which I think is the case for most people), ideas like the basilisk sound ludicrous. But really it’s just a consequence of those more fundamental assumptions.
Scott Alexander should have added a TL;DR to that. What I must say is that I am not so optimistic about superintelligent strong AI that I think within a decade we will have AIs which are not only sentient but also more intelligent than the most intelligent humans (my optimism extends only to merely sentient strong AI being possible within my lifetime), and I have significant doubts that superintelligent strong AIs will automatically be fixated upon bringing about human extinction. The most doom-y stuff the most intelligent humans have come up with is nuclear weapons, and they have only been used in anger twice. If intelligence itself lent itself to doom-iness, we would not have stopped dropping nuclear weapons on cities after Nagasaki.
Yaaludinuya siima d'at yiseka ha wohadetafa gaare.
Ennadinut'a gaare d'ate ha eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
bradrn
Posts: 5720
Joined: Fri Oct 19, 2018 1:25 am

Re: Soshul meedja.

Post by bradrn »

Travis B. wrote: Wed Mar 15, 2023 8:09 pm I have significant doubts that superintelligent strong AIs will automatically be fixated upon bringing about human extinction.
This isn’t the argument. The argument is that a superintelligent AI just has to be fixated on some goal, any goal. If ‘preserve humanity’ isn’t its priority, then there’s a good chance it won’t preserve humanity.

(The obvious answer to that is that then we should figure out how to make ‘preserve humanity’ a priority for it; that’s basically what the AI alignment people go on about.)
The most doom-y stuff the most intelligent humans have come up with is nuclear weapons, and they have only been used in anger twice. If intelligence itself lent itself to doom-iness, we would not have stopped dropping nuclear weapons on cities after Nagasaki.
The idea here is that AIs may be able to create and control inconceivably advanced technology, which in practice boils down to ‘nanobots’. (This is also in Alexander’s post.) Personally, I disagree that AIs will be able to create nanobots that readily, if at all.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Moose-tache
Posts: 1746
Joined: Fri Aug 24, 2018 2:12 am

Re: Soshul meedja.

Post by Moose-tache »

Multiple pages into this omnidirectional epistemological slap fight, and my original conclusion remains undislodged: If you think the logical functions performed by a computer cannot be a form of consciousness, you do not know what consciousness is made out of.

Dan Dennett put it best many years ago, when he summarized his critics' argument: The mind has to be unknowable. Therefore, any process a scientist can explain, by definition, cannot be the way the mind works.
I did it. I made the world's worst book review blog.
chris_notts
Posts: 682
Joined: Tue Oct 09, 2018 5:35 pm

Re: Soshul meedja.

Post by chris_notts »

bradrn wrote: Wed Mar 15, 2023 7:49 pm If you don’t agree with this (which I think is the case for most people), ideas like the basilisk sound ludicrous. But really it’s just a consequence of those more fundamental assumptions.
The idea that you could perfectly reconstruct anyone from anything short of an incredibly detailed brain scan of a kind we don't have right now is a bit ridiculous.

And you should only believe the virtual clone getting tortured is actually you (shared identity) if you also believe that your identity and continuity are entirely based on information... A simple thought experiment: if I waved a magic wand and made a copy of you with identical information in his/her brain, would you and the magic clone magically share a mind or not? If you don't think you'd share a mind/identity/consciousness just because of information sync, you shouldn't worry about a crazy AI torturing a virtual copy of you any more than you should worry about it torturing a virtual copy of anyone else.

I genuinely never saw why a lot of the LW guys freaked out about this, apart from the fact that they freak out about all kinds of far-fetched god-AI-with-unlimited-power ideas.
Raphael
Posts: 4180
Joined: Sun Jul 22, 2018 6:36 am

Re: Soshul meedja.

Post by Raphael »

chris_notts wrote: Thu Mar 16, 2023 2:46 am And you should only believe the virtual clone getting tortured is actually you (shared identity) if you also believe that your identity and continuity are entirely based on information... A simple thought experiment: if I waved a magic wand and made a copy of you with identical information in his/her brain, would you and the magic clone magically share a mind or not? If you don't think you'd share a mind/identity/consciousness just because of information sync, you shouldn't worry about a crazy AI torturing a virtual copy of you any more than you should worry about it torturing a virtual copy of anyone else.
I just want to say that I completely agree with that point.
Ares Land
Posts: 2839
Joined: Sun Jul 08, 2018 12:35 pm

Re: Soshul meedja.

Post by Ares Land »

Looking at it another way: artificial intelligence sheds light on how the brain works. Language centers in the human brain probably resemble ML language models somewhat. Without even going into ML, there's probably something of a Markov chain there.
(When I'm very tired, I certainly tend to pick the next statistically likely word instead of what I actually meant to say :))
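To make the Markov-chain quip concrete, here's a toy sketch (the corpus is invented for the demo): it counts which word follows which, then always emits the statistically likeliest follower, the very-tired-speaker strategy.

```python
from collections import defaultdict

def train(corpus):
    """Count, for each word, how often each next word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return counts

def most_likely_next(counts, word):
    """Pick the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train("the cat sat on the cat and the cat slept on a mat")
print(most_likely_next(model, "the"))  # cat
```

Real language models replace the raw counts with learned representations, but the "predict the next word from what came before" framing is the same.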

I also think there is a huge gap between language models and HAL 9000.

Artificial consciousness and mind uploading are certainly scientifically plausible. But there's a big difference between scientifically plausible and within our capabilities. Starships to Proxima Centauri are even more plausible (we even have detailed starship designs; we're not even close to that with AGI), yet we're not likely to see any of these any time soon. I'd say the chances of seeing an artificial consciousness within our lifetimes are vanishingly small.

While many evils of capitalism are usually discussed along with AI, nobody really mentions marketing. There are obvious economic incentives to take any semi-impressive piece of traditional software, slap 'Artificial Intelligence' on it, and sell it as if it were HAL 9000. There is an even greater economic incentive to scare anxious people with AI and then sell them more scary thoughts about AI.
chris_notts
Posts: 682
Joined: Tue Oct 09, 2018 5:35 pm

Re: Soshul meedja.

Post by chris_notts »

Ares Land wrote: Thu Mar 16, 2023 2:54 am Looking at it another way: artificial intelligence sheds light on how the brain works. Language centers in the human brain probably resemble ML language models somewhat. Without even going into ML, there's probably something of a Markov chain there.
(When I'm very tired, I certainly tend to pick the next statistically likely word instead of what I actually meant to say :))
I want to see linguistics grapple with what it means, if anything, for an LLM to be able to produce fluent, grammatical English just by finding usage patterns, without anything built in as complex as UG.

You could argue that the LLM analysed a corpus much, much bigger than the growing brain gets, and that's probably true, but it's not obvious how much more you'd need built in to accelerate that learning phase when you have cooperative teachers/parents (and I suspect it's much less than what is typically claimed for UG). I was never convinced by the poverty-of-stimulus argument, which rarely has any kind of strong proof offered for it and is more of a hand-waving claim that Chomsky and others made up to justify their theoretical preferences.
hwhatting
Posts: 1090
Joined: Mon Jul 09, 2018 3:09 am
Location: Bonn
Contact:

Re: Soshul meedja.

Post by hwhatting »

Ares Land wrote: Thu Mar 16, 2023 2:54 am While many evils of capitalism are usually discussed along with AI, nobody really mentions marketing. There are obvious economic incentives to take any semi-impressive piece of traditional software, slap 'Artificial Intelligence' on it, and sell it as if it were HAL 9000. There is an even greater economic incentive to scare anxious people with AI and then sell them more scary thoughts about AI.
This is actually what a lot of the hype about ChatGPT is about. Everybody and their dog is now trying to put their projects under the AI label to get funding.
Ares Land
Posts: 2839
Joined: Sun Jul 08, 2018 12:35 pm

Re: Soshul meedja.

Post by Ares Land »

Nah, I don't believe in poverty of stimulus either.
A three-year-old kid has three years of spoken data to work from -- excluding naps and bedtime, that amounts to something like 10,000 hours of human speech, all of it attached to meaningful context, with enthusiastic feedback. AI researchers can only dream of getting that kind of data, in both quality and quantity.

(The quality is important as well. You'll get a pretty weird kid if all you do for three years is read Google search results to them. Plus someone will call CPS.)
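The 10,000-hour figure is simple back-of-the-envelope arithmetic; the hours-per-day value below is an assumption, not a measurement.

```python
# rough check of the "10,000 hours by age three" estimate;
# ~9 waking hours of ambient speech per day is an assumed figure
hours_per_day = 9
total_hours = hours_per_day * 3 * 365
print(total_hours)  # 9855 -- roughly 10,000 hours of speech
```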
chris_notts
Posts: 682
Joined: Tue Oct 09, 2018 5:35 pm

Re: Soshul meedja.

Post by chris_notts »

If LLMs kill the current variants of generative grammar and give a boost to the more cognitive and construction grammar schools, it all will have been worth it... :D
bradrn
Posts: 5720
Joined: Fri Oct 19, 2018 1:25 am

Re: Soshul meedja.

Post by bradrn »

chris_notts wrote: Thu Mar 16, 2023 3:03 am
Ares Land wrote: Thu Mar 16, 2023 2:54 am Looking at it another way: artificial intelligence sheds light on how the brain works. Language centers in the human brain probably resemble ML language models somewhat. Without even going into ML, there's probably something of a Markov chain there.
(When I'm very tired, I certainly tend to pick the next statistically likely word instead of what I actually meant to say :))
I want to see linguistics grapple with what it means, if anything, for an LLM to be able to produce fluent, grammatical English just by finding usage patterns, without anything built in as complex as UG.
Yep, I definitely see it as an argument for Construction Grammar or similar. But then again I was already leaning strongly towards such theories a year or so ago, so maybe it’s just confirmation bias.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)