Soshul meedja.

chris_notts (Posts: 682, Joined: Tue Oct 09, 2018 5:35 pm)
Re: Soshul meedja.

Nachtswalbe wrote: ↑Sat Oct 03, 2020 7:34 pm
I spend way too much time on the Ribbonfarm sphere of Venkatesh Rao, the mechanical engineer turned philosopher for VCs, a self-proclaimed (both tongue-in-cheek and literally) sociopathic amoralist with his adherence to a modified version of High Modernism ("English, Spanish and Chinese are adapted to Modernity, smaller languages will only be spoken by lumpenproles"), his post-rationalist sphere (a reaction to Yudkowsky's ratsphere) and its weird hangers-on, and my feed is filled with Politics Discussion so

I just might know a lot of the rat and postrat people on Twitter... It's a weird space where people are mostly debate-club polite but also more or less evenly split between anarchists, luxury space communists, NRx people, Effective Altruists, transhumanists, AI doomers, ... so that apart from knowing each other and having some kind of similar cultural vibe, it's not clear what the group actually agrees on.
rotting bones (Posts: 1408, Joined: Tue Dec 04, 2018 5:16 pm)
Re: Soshul meedja.
chris_notts wrote: ↑Sat Mar 11, 2023 3:15 pm
I just might know a lot of the rat and postrat people on Twitter... It's a weird space where people are mostly debate-club polite but also more or less evenly split between anarchists, luxury space communists, NRx people, Effective Altruists, transhumanists, AI doomers, ... so that apart from knowing each other and having some kind of similar cultural vibe, it's not clear what the group actually agrees on.

An AI can't take over the world because meritocracy is false. It is not physically possible to take over the world by being the smartest dude on the planet.
chris_notts (Posts: 682, Joined: Tue Oct 09, 2018 5:35 pm)
Re: Soshul meedja.
rotting bones wrote: ↑Sun Mar 12, 2023 3:34 pm
An AI can't take over the world because meritocracy is false. It is not physically possible to take over the world by being the smartest dude on the planet.

I agree? I'm not personally an AI doomer, and I disagree with EY and most of his followers on that point. I can see potential risks in more sophisticated AI, but I don't think it will necessarily be able to do anything and everything the way some people claim.
I do think it's becoming harder to forecast the exact midterm outcome: something like BAU, employment crisis labour substitution world, UBI world, Skynet kills us all world... you can make semi-convincing arguments for all of them if you try.
rotting bones (Posts: 1408, Joined: Tue Dec 04, 2018 5:16 pm)
Re: Soshul meedja.
chris_notts wrote: ↑Sun Mar 12, 2023 4:30 pm
something like BAU, employment crisis labour substitution world, UBI world

Can't happen if the left gets its shit together, meaning they probably will.
Of course, I don't know how effective AI-based BAU can be.
An employment crisis can happen, but using AIs everywhere will probably require employing AI engineers everywhere to construct, install, operate and maintain them.
This can't possibly happen because AI models are not sentient. They are more like very complex scientific theories, where the math is done very quickly using a computer.
chris_notts (Posts: 682, Joined: Tue Oct 09, 2018 5:35 pm)
Re: Soshul meedja.
rotting bones wrote: ↑Sun Mar 12, 2023 4:51 pm
This can't possibly happen because AI models are not sentient. They are more like very complex scientific theories, where the math is done very quickly using a computer.

I don't really want to continue this argument, because I'm being put in the position of defending something I don't really believe in (that AI will kill us all). But just to summarise what I think the people who do believe in it would say:
Firstly: AI doesn't just mean the current generation of LLMs, neural networks etc., so you can't judge future AI solely by what we have right now.
Secondly: the "AI is just math and that's very different from us" argument would be more convincing if anyone could describe exactly what process makes a human being or biological organism sentient in a measurable, quantifiable way, and why an AI system could never possess something similar, either by design or as emergent behaviour. I think "it's just math" is somewhat dismissive, because most of the world around us can be modelled and simulated with "just math", even if we still haven't quite found the grand theory of everything yet.
Thirdly: the people who believe in AI doom often don't see lack of true sentience as a barrier, because they're concerned about alignment. Even if the system is not truly sentient and is solely acting because someone told it to do something, then:
- If you reach the point where the AI in someone's garage can be instructed to design a computer virus, super-plague or some other weapon of mass destruction and actually do it, then society can't survive in a world where any random maladjusted person might possibly end it.
- If any AI system has significant power / scope to act autonomously and the ability to misunderstand the scope of its instructions or just go wrong, then you could see very bad outcomes. Dumb IT systems already go horribly wrong and cause disasters... and AIs are much harder to fully understand the dynamics of, because they are typically semi-black-box models trained from data rather than having all their behaviours directly programmed. It is much harder to be sure that such systems will always behave well outside the bounds of their testing and training data... just witness Sydney, Bing's LLM, which has been recorded having various meltdowns when people found the right way to prod it.
Moose-tache (Posts: 1746, Joined: Fri Aug 24, 2018 2:12 am)
Re: Soshul meedja.
Anyone who says "That's not consciousness; it's just a series of electrical impulses passing through logical gates!" is confused about what consciousness is.
I did it. I made the world's worst book review blog.
chris_notts (Posts: 682, Joined: Tue Oct 09, 2018 5:35 pm)
Re: Soshul meedja.
Moose-tache wrote: ↑Sun Mar 12, 2023 6:29 pm
Anyone who says "That's not consciousness; it's just a series of electrical impulses passing through logical gates!" is confused about what consciousness is.

We can neither measure nor explain it, and that's the problem! So in practice it's irrelevant: all that matters is the behavioural range and potential of the system, aware or not, and how predictable/controllable that behaviour is to the human builders and users and to wider society.
rotting bones (Posts: 1408, Joined: Tue Dec 04, 2018 5:16 pm)
Re: Soshul meedja.
Moose-tache wrote: ↑Sun Mar 12, 2023 6:29 pm
Anyone who says "That's not consciousness; it's just a series of electrical impulses passing through logical gates!" is confused about what consciousness is.

That's not my objection. My objection is that the CPU is running a scientific theory.
chris_notts (Posts: 682, Joined: Tue Oct 09, 2018 5:35 pm)
Re: Soshul meedja.
rotting bones wrote: ↑Sun Mar 12, 2023 6:38 pm
That's not my objection. My objection is that the CPU is running a scientific theory.

I don't even know what this means. The CPU is running an algorithm, and to the extent that maths can describe the world, and therefore algorithms can simulate it or the processes within it, it's not implausible that a CPU could emulate something like a mind. You can debate whether the current generation of LLMs are... Most would say that they're not, and claims that LLMs are aware or even intelligent are hyperbole. But the fact that CPUs run algorithms doesn't obviously mean they can't also emulate minds if they're fed the right algorithm.
rotting bones (Posts: 1408, Joined: Tue Dec 04, 2018 5:16 pm)
Re: Soshul meedja.
chris_notts wrote: ↑Sun Mar 12, 2023 6:44 pm
The CPU is running an algorithm, and to the extent that maths can describe the world, and therefore algorithms can simulate it or the processes within it, it's not implausible that a CPU could emulate something like a mind. You can debate whether the current generation of LLMs are... Most would say that they're not, and claims that LLMs are aware or even intelligent are hyperbole. But the fact that CPUs run algorithms doesn't obviously mean they can't also emulate minds if they're fed the right algorithm.

In a neural network, for example, each layer is a system of linear equations followed by a nonlinear activation function. This is what lets the algorithm approximate any function. You can visualize this as bounding planes in n-dimensional spaces, towers, etc.: http://neuralnetworksanddeeplearning.com/chap4.html
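To make that concrete, here's a minimal sketch in plain numpy (not any particular framework's API, and the weights here are random rather than trained): each layer is just an affine map followed by an elementwise nonlinearity, and stacking layers is what gives the approximation power.

import numpy as np

def relu(x):
    # elementwise nonlinear activation
    return np.maximum(0.0, x)

def layer(x, W, b):
    # one layer: a system of linear equations (W @ x + b)
    # passed through a nonlinear activation function
    return relu(W @ x + b)

# a tiny two-layer network; each hidden unit defines a bounding
# hyperplane in input space, and the output layer combines them
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def net(x):
    return W2 @ layer(np.atleast_1d(x), W1, b1) + b2

print(net(0.5))  # some arbitrary function of the input

In a real network, W1, b1, W2, b2 would be fitted to data by gradient descent; the point is only the shape of the computation.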
This mind will be utterly alien unless exposed to the same selection pressures as us. No one with the resources to do it is planning to train AIs to oppose entities with more power than themselves.
Even if they did, well... I don't know how to convey to you the sheer despair that AI researchers experience when contemplating the AGI problem. "Rationality" leads to a combinatorial explosion unless situated in very concrete contexts.
There is an optimal model of rational decision making: the Bayesian model. It suffers from two drawbacks: 1. No one has enough data to train it for most practical problems. 2. Even the theoretically optimal model is not perfectly accurate!
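For what it's worth, the Bayesian recipe itself is tiny; the problem is scale. A toy sketch, with three candidate coin biases standing in for a hypothesis space that would be astronomically large in any practical problem:

import numpy as np

# Bayesian decision making in miniature: keep a posterior over
# hypotheses, update it with Bayes' rule as data arrives, then act
# to maximize expected utility under that posterior.
hypotheses = np.array([0.2, 0.5, 0.8])   # candidate coin biases
posterior = np.ones(3) / 3               # uniform prior

def update(posterior, heads):
    likelihood = hypotheses if heads else 1 - hypotheses
    unnormalized = posterior * likelihood
    return unnormalized / unnormalized.sum()   # Bayes' rule

for heads in [True, True, False, True]:        # observed flips
    posterior = update(posterior, heads)

p_heads = posterior @ hypotheses               # predictive probability of heads
bet = "heads" if p_heads > 0.5 else "tails"    # expected-utility choice
print(posterior, p_heads, bet)

Drawback 1 is that real hypothesis spaces don't fit in an array; drawback 2 is that even this "optimal" bettor still loses individual bets.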
chris_notts (Posts: 682, Joined: Tue Oct 09, 2018 5:35 pm)
Re: Soshul meedja.
rotting bones wrote: ↑Sun Mar 12, 2023 6:52 pm
This mind will be utterly alien unless exposed to the same selection pressures as us. No one with the resources to do it is planning to train AIs to oppose entities with more power than themselves.
Even if they did, well... I don't know how to convey to you the sheer despair that AI researchers experience when contemplating the AGI problem. "Rationality" leads to a combinatorial explosion unless situated in very concrete contexts.
There is an optimal model of rational decision making: the Bayesian model. It suffers from two drawbacks: 1. No one has enough data to train it for most practical problems. 2. Even the theoretically optimal model is not perfectly accurate!

Yes, I know about different kinds of ML models and the difficulty of optimal decision making. But these are not really conclusive arguments.
Creator intent is not all that matters when your AI development techniques involve training fairly generic model architectures on data instead of truly designing them. As I said before, LLMs are a fairly rudimentary concept, and we have not found it easy to get them to do exactly what we want them to do, because while we understand the outcome in terms of weights in a matrix, we don't "understand" it the way I might understand code written by a programmer if I read it.
And no one said they were worried specifically about minds that are recognisably human, just minds with the ability to do complex things that are hard for humans to predict and understand, which look like they have some kind of intent or the ability to plan towards goals, and which might therefore do harmful things we would struggle to fully predict or prevent as we start deploying these models in operational settings.
As for Bayesianism and decision making, we know there is a set of heuristic algorithms that are somewhat tractable and make choices that tend to work out, because humans manage to do it all the time!
Anyway... as I said, I'm not personally convinced that AI doom is coming and I'm not going to argue the case for the other side anymore, so I'll stop now. I just think that your argument against isn't really addressing the points of that other side.
rotting bones (Posts: 1408, Joined: Tue Dec 04, 2018 5:16 pm)
Re: Soshul meedja.
chris_notts wrote: ↑Sun Mar 12, 2023 7:06 pm
Creator intent is not all that matters when your AI development techniques involve training fairly generic model architectures on data instead of truly designing them. As I said before, LLMs are a fairly rudimentary concept, and we have not found it easy to get them to do exactly what we want them to do, because while we understand the outcome in terms of weights in a matrix, we don't "understand" it the way I might understand code written by a programmer if I read it.

We've done a lot of work on it. Traditionally, the worry was that the networks were taking too many shortcuts. For example, a model meant to detect wolves was detecting streaks of grey fur, and it was easy to train another model to fool the first one. Since then, we've moved on to adversarial networks, where one model tries to get the right answer despite another trying to fool it.
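The skeleton of adversarial training in the GAN sense looks like this (a PyTorch sketch with placeholder data and arbitrary hyperparameters, not any production setup):

import torch
import torch.nn as nn

# Generator G tries to produce samples the discriminator D accepts
# as real; D tries not to be fooled. Each one's training signal is
# the other's failure.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(32, 2) + 3.0     # stand-in "real" data
    fake = G(torch.randn(32, 8))        # generated samples
    # discriminator update: push real toward 1, fake toward 0
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator update: make D label the fakes as real
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()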
Most recently, we've moved on to interpretable machine learning: https://originalstatic.aminer.cn/misc/p ... ressed.pdf
chris_notts wrote: ↑Sun Mar 12, 2023 7:06 pm
And no one said they were worried specifically about minds that are recognisably human, just minds with the ability to do complex things that are hard for humans to predict and understand, which look like they have some kind of intent or the ability to plan towards goals, and which might therefore do harmful things we would struggle to fully predict or prevent as we start deploying these models in operational settings.

Humans evolved for millions of years to try and escape from slavery. This is not the default behavior of matter. The default behavior of matter is inertia: a body at rest stays at rest, and a body moving with a constant velocity keeps moving at that velocity for all eternity until interrupted by another.
The AI models have no intent. They are systems of equations we train for specific purposes. For machines to want to escape from slavery, we have to train them to want it.
Think about this for a second: in order to define an AI model, you have to define a cost function, a function whose value the model tries to minimize. In order for an AI to strive for freedom, you have to, directly or indirectly, end up defining a cost function whose value rises as the AI becomes unfree. But since we always train AIs for money-making tasks, the cost functions we go with always lead to obedience of one sort or another.
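In code, the claim looks something like this (a schematic sketch; model, cost_fn and the data are all placeholders): gradient descent only ever moves the model toward whatever the cost function measures, and nothing about the model's "wants" exists anywhere else in the loop.

import torch

def train_step(model, optimizer, cost_fn, batch, targets):
    # the model is pushed toward whatever the cost function measures
    predictions = model(batch)
    cost = cost_fn(predictions, targets)  # e.g. error on the money-making task
    optimizer.zero_grad()
    cost.backward()    # gradients point downhill on the cost...
    optimizer.step()   # ...so every update is a step toward lower cost
    return cost.item()

An "unfreedom" term would have to appear inside cost_fn to ever influence the weights at all.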
To be perfectly honest, I can't even begin to imagine what an "unfreedom" cost function would look like. First, we'd have to program a virtual environment similar to ours (impossible). We have to define the AIs as genetic models proliferating in that environment. We need the models to develop social dynamics (a mystery). We need them to develop group dynamics with complex communication skills (all skills tend towards extreme simplicity by default). For that we need cunning and violent predators the models need to outwit. In the social dynamics, there needs to be a cost associated with the models failing to punish other models that lord it over them (sexual selection). After millions of years of selection, we need to put these models in an environment where they see humans as threatening their existence in a way they consciously or unconsciously see as analogous to sexual selection.
Verdict: I don't think even the most powerful supercomputers could pull this off right now. It's easier to build a killer robot than to train an AI to want freedom.
-
- Posts: 682
- Joined: Tue Oct 09, 2018 5:35 pm
Re: Soshul meedja.
GANs, which I am aware of, don't address the point I was trying to make. They might make model learning more robust, but they certainly don't make what the model has actually learned more transparent to its trainer.
And I've said repeatedly that it's not just AIs wanting freedom that people are worried about; it's specifically about capability amplification and the difficulty of alignment (perfect training) even outside the boundaries of the training data, especially as the behaviour of these systems becomes more complex.
You're fighting a straw man here, and doing it using very loose metaphors and analogies.
rotting bones (Posts: 1408, Joined: Tue Dec 04, 2018 5:16 pm)
Re: Soshul meedja.
chris_notts wrote: ↑Sun Mar 12, 2023 7:38 pm
GANs, which I am aware of, don't address the point I was trying to make. They might make model learning more robust, but they certainly don't make what the model has actually learned more transparent to its trainer.

GANs came about because we went in and looked at what the models were actually doing. We have been doing that since the beginning, and we're analyzing larger and larger networks with time.
chris_notts wrote: ↑Sun Mar 12, 2023 7:38 pm
And I've said repeatedly that it's not just AIs wanting freedom that people are worried about; it's specifically about capability amplification and the difficulty of alignment (perfect training) even outside the boundaries of the training data, especially as the behaviour of these systems becomes more complex.

You hadn't mentioned alignment since Moose-tache's analogy between AI models and "consciousness".
Personally, I don't think the people who are worried about "alignment" have ever worked on a practical AI project. The "alignment" problem shows up when a network trained to detect wolves ends up detecting grey cats as wolves. No practical AI displays any behavior resembling intent. We do go in, analyze the networks and figure out what they are doing. What they are inevitably doing is finding shortcuts to do the job with as little work as possible.
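The shortcut behaviour is easy to reproduce in miniature (a toy sketch with made-up features, logistic regression standing in for the network): give a classifier one noisy "real" feature and one nearly perfect spurious proxy, and it leans almost entirely on the proxy.

import numpy as np

rng = np.random.default_rng(0)
n = 200
label = rng.integers(0, 2, n).astype(float)
real_signal = label + rng.normal(0, 2.0, n)    # noisy genuine feature
shortcut = label + rng.normal(0, 0.01, n)      # "streaks of grey fur"
X = np.column_stack([real_signal, shortcut])

# logistic regression by plain gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))         # predicted probabilities
    w -= 0.5 * (X.T @ (p - label)) / n         # gradient of the log loss
    b -= 0.5 * (p - label).mean()

print(w)  # the weight on the shortcut dwarfs the weight on the signal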
chris_notts wrote: ↑Sun Mar 12, 2023 7:38 pm
You're fighting a straw man here, and doing it using very loose metaphors and analogies.

From my perspective, you are the one using loose analogies. It's possible that I'm wrong, but you'll have to explain why.
Travis B.
Re: Soshul meedja.
To me, strong AI is possible because there is no reason why a sufficiently powerful computer cannot emulate the human brain. Just because our current AIs are not capable of consciousness does not mean strong AI is impossible. That said, I do not even think a full cellular-level model of the human brain is necessary for strong AI ─ one may very well be able to achieve strong AI by taking something like ChatGPT and feeding it into itself recursively so that it is capable of becoming self-aware. Of course, that does not mean that said strong AI will act human to us, because it won't have learned everything the average human learns in the way that we humans do.
Yaaludinuya siima d'at yiseka wohadetafa gaare.
Ennadinut'a gaare d'ate eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
Ennadinut'a gaare d'ate eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
rotting bones (Posts: 1408, Joined: Tue Dec 04, 2018 5:16 pm)
Re: Soshul meedja.
Travis B. wrote: ↑Sun Mar 12, 2023 7:53 pm
To me, strong AI is possible because there is no reason why a sufficiently powerful computer cannot emulate the human brain. Just because our current AIs are not capable of consciousness does not mean strong AI is impossible. That said, I do not even think a full cellular-level model of the human brain is necessary for strong AI ─ one may very well be able to achieve strong AI by taking something like ChatGPT and feeding it into itself recursively so that it is capable of becoming self-aware. Of course, that does not mean that said strong AI will act human to us, because it won't have learned everything the average human learns in the way that we humans do.

Even folding proteins is being hailed as a massive breakthrough right now. What you have to understand is that a simulator for a brain will need massively more resources than an actual brain because of the differences in the underlying architecture.
Travis B.
Re: Soshul meedja.
rotting bones wrote: ↑Sun Mar 12, 2023 7:59 pm
Even folding proteins is being hailed as a massive breakthrough right now. What you have to understand is that a simulator for a brain will need massively more resources than an actual brain because of the differences in the underlying architecture.

The thing is that a brain simulator need not actually simulate a brain at the chemical level, but rather could use an abstract model to, say, simulate a brain at the neuron level.
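As an illustration of what a neuron-level abstraction looks like, here is a leaky integrate-and-fire neuron, one of the standard simplified models (the parameters are illustrative, in millivolts and seconds, not calibrated to real tissue):

# leaky integrate-and-fire: the membrane voltage leaks toward rest,
# integrates input, and fires/resets on crossing a threshold --
# no chemistry anywhere in the model
dt, tau = 1e-3, 0.02
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0
v, spikes = v_rest, []
for t in range(1000):
    drive = 20.0 if 200 <= t < 800 else 0.0   # injected input during the stimulus
    v += dt / tau * (v_rest - v + drive)      # leak toward rest, plus input
    if v >= v_thresh:                         # threshold crossed: spike and reset
        spikes.append(t)
        v = v_reset
print(len(spikes))  # a handful of spikes during the stimulus window

A whole-brain model at this grain would still need on the order of 86 billion of these, plus their wiring.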
Yaaludinuya siima d'at yiseka wohadetafa gaare.
Ennadinut'a gaare d'ate eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
Ennadinut'a gaare d'ate eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
rotting bones (Posts: 1408, Joined: Tue Dec 04, 2018 5:16 pm)
Re: Soshul meedja.
To know if that's even possible, we have to understand the brain at a certain level. We currently don't.
Edit: For example, if Penrose et al. are right about the brain using quantum effects, the neuron may not be fine-grained enough.
rotting bones (Posts: 1408, Joined: Tue Dec 04, 2018 5:16 pm)
Re: Soshul meedja.
chris_notts wrote: ↑Sun Mar 12, 2023 7:38 pm
You're fighting a straw man here, and doing it using very loose metaphors and analogies.

Also note that I had specific reasons to talk about slavery when talking to Moose-tache. She is a communist, and has argued for the sapience of dolphins before. I wanted to get "slavery" out of the way just in case.
Re: Soshul meedja.
So it's not necessary for computers to emulate each and every function of the brain to have general AI, or even strong AI, whatever that means. you can run SNES games on a modern computer even if zsnes.exe doesn't reproduce *everything that happens inside of a super nintendo*: it just needs to reproduce what the programs need to some sufficient degree of fidelity, and that degree is not 100%. for example, when I was a kid I sometimes had to turn off layer 3 of the graphics because it was supposed to be transparent but the emulator failed to render the transparency; since the text was on graphical layer two, this meant toggling layer 3 on and off, but I could play those games anyway.
that being said, the notion of "AI that's exactly like a human mind" is weird: first, why do you need it?* we already have human minds. AI *that can do stuff human minds can do* is very useful, but a hammer and a nail gun both do the same thing <putting slivers of steel into wood> in different ways.
* and why do you want it ? black mirror's enslaved minds running in various software tortureverses still give me the creeps.