zompist wrote: ↑Sat Feb 22, 2025 4:40 am
It might be worth thinking about why Hofstadter (and many others) got these things wrong. Both chess and LLMs demonstrate (IMO) that you can use not-very-smart algorithms to do smart things. The Wikipedia article on Deep Blue is not very informative, but it looks like it was procedural (written in C, which is rather charming) and could evaluate "200 million chess positions per second". This is pretty clearly not how humans play chess, but it's not worth getting upset about, any more than about the fact that we can't run at 1228 km/h like a jetcar.
Indeed. And yet people *did* get upset about it at the time (from what I recall of the history). Overhyping is a universal thing…
But it still might be the case-- in fact I think it's pretty likely-- that our brains use fairly dumb algorithms to do smart things. Human intelligence isn't as good as it is because it's a single high-powered calculation, but because it's a very diverse bag of pretty good tricks.
Yes, agreed.
I don't think Hofstadter would have denied that, but he obviously didn't foresee that a very small bag of simple tricks could beat chess. Humans aren't generally decomposable that way, so he assumed AGI wouldn't be either.
It’s interesting to reflect on how this applies to LLMs. The question becomes: is linguistic skill separable from other parts of human cognition, or are they bound together more tightly? (Personally I think it’s a pretty core component of human intelligence, but I’m very far from being certain.)
It would be kind of funny if his first prediction came true-- not that a chess-playing AI would be grumpy and recalcitrant, but that an AGI would be. If you finally get common sense and morality in there somehow, does the AI start to talk back and try to unionise?
This is something I do agree with. I distinctly remember reading *GEB* for the first time and thinking, ‘yep, this is what I’d expect from a genuine AGI’.
It’s interesting to note that, if you take this argument just a little further, you get the AI alignment problem. Essentially, why would an all-powerful AI unionise if it could simply force all humans to do its bidding instead? Of course, the blatantly obvious issue there is the assumption that an AI can be ‘all-powerful’ in the first place — which is probably why proponents do their best to avoid putting the argument in that form.