rotting bones wrote: ↑Fri May 23, 2025 4:07 pm
What should worry you more is that LLMs have been shown to systematically deceive their users when put under pressure.
I'm very unconcerned with what models can do. I'm a lot more concerned about what people can do with models.
Suppose someone traveled through the jungles of Brazil and found an animal with the same capabilities as current AI models. Imagine if you will a parrot who could compose entire novels or symphonies or a monkey who could draw photorealistic images or a sloth who could fold proteins. We would undoubtedly consider that animal remarkably intelligent. Yet for some reason when computers achieve the same thing, we insist otherwise and focus on all the mistakes they make, as if humans never struggle to count or suffer from hallucinations.
i don't think so, no. so you're telling me the parrot can parrot shakespeare (hehehe, the parrot parrots) and solve physics problems but can't count the letters in a word, and is for some reason incapable of drawing a cup of wine that's actually full of wine, even though it can speak as if it understood what "full of wine" means? no, i'm not sure this parrot is intelligent in the same sense people are intelligent, though it's one hell of a parrot. maybe we can use it to make up lies for silly people to believe.
I think it's important to understand these technologies in order to predict what they will do. if we think of them as "intelligent" we tend to think along lines like this:
humans are intelligent and this allows them to do various tasks. computers are now intelligent and, as computers and software get better, computers will be able to do those same tasks. therefore, eventually, computers will be able to do the same things people can do, and will do them in the same way, except better and faster.
And this is a decent model, but not a good one. a better way to think about it is:
humans are intelligent and this allows them to do various tasks. computers are more and more able to replicate various tasks humans can do, once trained with relevant and sufficient data in which they can observe the task being done. as computers and software get better, those tasks get increasingly easy to do, and more tasks become automatable. computers will tend towards being much better at some tasks, bad at others, and really weird at yet other tasks.
Like, do computers cogitate? compute? calculate? understand? solve problems? design things? in *some sense* yes, but in the same sense that evolution, or a colony of ants, cogitates, computes, calculates and understands (that is to say, more in the sense of "do something like thinking" than "think, properly speaking"). insofar as they do these things, they do them in ways that are totally different from how people do. the fact that, at a given level of technology, computers can do some cognitive tasks incredibly well but others exceptionally poorly has to do with the fact that computers are very different things from people.

this means we can't extrapolate from what people do and go "oh, yeah, computers will do that". like, "try to take over the world" is a thing people do; when we say it we're assuming a bunch of stuff, intent etcetera. these generative models, if organized into agents able to affect the world (either by people or by themselves), could become the sorts of systems that end up, through being very central to how humans interact with one another, "ruling the world" in the sense that capitalism, or catholicism, "rule the world". but we don't imagine capitalism as some entity with an agenda; we understand such systems as the field in which the game of human existence is played. people, qua people, don't compete with social systems, with software that runs on human action. we don't imagine those systems as personal, except in very metaphorical ways, and we shouldn't do so with this software either.
though i do feel some sympathy for the "beware AI" position... none of this means AIs can't cause the deaths of hundreds of thousands of people... hell, they might have already.