malloc wrote: ↑Wed Feb 26, 2025 4:33 pm
zompist wrote: ↑Wed Feb 26, 2025 3:48 pm
For someone terrified of artificial general intelligence, you seem to have no understanding of the "general" part. It precisely means being good at a wide range of physical and mental tasks, not just "advanced mathematics". It precisely includes the sort of disconnected and difficult tasks that Travis lists.
We already have AI models capable of handling a wide range of physical and mental tasks, everything from drawing original artwork to folding proteins.
So your examples of physical tasks are two purely symbolic tasks. Do you know what "physical" means? Or do you think that AlphaFold is physically manipulating proteins?
Sometimes you sound like an AI yourself, in that you are utterly confident of your statements even when you know nothing about the subject. Really, stop talking about protein folding until you can explain in your own words why it's a difficult problem.
People used to consider chess and mathematics examples of highly sophisticated intelligence and only stopped when computers mastered those. Looking at the history of AI, it often feels like people are moving the goalposts to avoid granting computers the mantle of intelligence. If you traveled back to 1950 and told Alan Turing all the things computers have achieved, he would undoubtedly conclude that AI had succeeded beyond his wildest dreams.
And I've been criticizing Turing for thirty years. Your conception of AI seems to come right out of the 1950s. AI researchers like Terry Winograd and Roger Schank were talking about real-world knowledge back in the early 1970s.
Yeah, people got fixated on chess, and that was short-sighted. Vision turned out to be a much harder problem.
Navigating a 3-D environment, like a human house, is in fact a tremendously difficult problem, or rather a set of problems, starting with understanding vision itself. If you think that's easy, why not enlighten us with a general algorithm for how an array of pixels can be turned into a 3-D model of the world?
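Just to show how much is hidden in even the "easy" part, here's a toy sketch (a made-up pinhole-camera example, not anyone's actual vision system): a single pixel only gives you a ray, not a point, so depth is already underdetermined before you even get to segmentation, occlusion, lighting, or recognizing objects.

import numpy as np

# Assumed (invented) camera intrinsics: focal lengths and principal point.
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0

def backproject(u, v, depth):
    """Return the 3-D point that projects to pixel (u, v) at a given depth."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

# The same pixel is consistent with infinitely many scene points:
for depth in (0.5, 2.0, 10.0):
    print(backproject(400, 300, depth))
# Picking the right one, that is, recovering depth, is where the actual
# difficulty starts. A "general algorithm" has to do that for every pixel,
# in cluttered scenes, under changing light, in real time.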
That really is a question for experts in computer science and cognitive psychology.
Name three who've addressed the problem and support your claim that it's extremely simple.
Nonetheless we know that navigating a 3-D environment requires little intelligence simply because even the least intelligent animals like insects achieve this with ease. Are you seriously suggesting that a housefly with mere thousands of neurons is more intelligent than a data center with trillions of logic gates comprising its processing units?
Dude, I spent my career as a programmer; I know far more about how computers work than you do. You can't fool me by equating neurons with "logic gates". Do you know what a logic gate does, or even how complex its "computation" is? The basic processor in the computer on your desk is absolutely brain-dead simple: you can learn to program it at the assembly level in a one-quarter course. Computers are fast, not intelligent.
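To make that concrete, here is the entire "computation" a logic gate performs, as a toy sketch in Python (not how any particular chip is actually built):

def nand(a: int, b: int) -> int:
    """A NAND gate: the whole computation is one row lookup in a truth table."""
    return 0 if (a and b) else 1

# Every other gate, and in principle the whole processor, is just enormous
# numbers of these wired together and clocked very fast.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

print(or_(and_(1, 0), not_(0)))   # prints 1

A neuron, by contrast, integrates thousands of analog inputs with time-dependent, chemically modulated behavior we still don't fully understand. The two are not remotely comparable units.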
A fruit fly has 140,000 neurons with 50,000,000 connections between them. Housefly brains haven't been mapped yet but they're estimated at 250,000 neurons.
That data center can't navigate an arbitrary room as well as a housefly. How do we know? Because "self-driving cars" can't navigate an arbitrary street without killing people, and that's with human observers doing a lot of the work. E.g. the company Cruise was found to use 1.5 workers per vehicle. If your advanced math is as bad as you say it is, that's 0.5 human workers more than are needed for a human to drive a regular car.
(I also have to note that we don't know much about how flies navigate the world. Their needs are certainly simpler than even a simple mammal's. A shrew, which I talked about recently, has 52 million neurons.)