malloc wrote: ↑Fri Feb 21, 2025 7:27 am
One problem with this debate is that it seems like nobody can agree on the capabilities of artificial intelligence. Many sources ascribe all kinds of fantastical abilities even to contemporary AI, from inventing new materials to figuring out how proteins fold. Others claim AI can only emulate the patterns of existing data without generating anything new, or even understanding its own activity. It seems plausible enough that at least some AI pushers are overstating the abilities of their technology for financial reasons. Yet it hardly makes sense that everything from ChatGPT to AlphaFold is fraudulent or illusory.
I think there are two parts to this issue: What has AI been proven to help with so far? And what are the philosophical implications of that?
The first question is rather easily answered: AI techniques can help with an
extremely wide range of problems. But note that ‘AI techniques’ include far more than just LLMs! AlphaFold, for example, is a neural network that is not an LLM. (It uses a form of deep learning, built around an attention-based architecture.) AlphaFold is indeed neither fraudulent nor illusory, and by all reports has essentially solved the protein folding problem. There are plenty of similar cases, both for neural networks and for other forms of AI. Indeed, probably dozens of modern statistical methods started their lives being called ‘AI’ before being assimilated into the body of standard statistical algorithms. (The Random Forest method comes to mind, though I’m sure a statistician could name many others.)
This, incidentally, is why I dislike the term ‘AI’: it makes the whole thing sound fancier than it really is. I prefer more specific terms such as ‘machine learning’. To me that term nicely captures the domain of applicability of the most commonly encountered ‘AI’ methods: they’re about learning from some elements of a domain in order to predict properties of other elements of it (see the sketch below).
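To make that concrete, here’s a minimal sketch using the Random Forest mentioned above. This is just an illustration in Python with scikit-learn; the dataset is synthetic and the parameters are arbitrary, not taken from any real study:

```python
# A minimal sketch of 'learning from a domain': fit on labelled examples,
# predict the same property for unseen examples. Dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy 'domain': 500 elements, each with 10 measured features and a known label.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn from some elements of the domain...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# ...and predict the property for other, held-out elements.
print("held-out accuracy:", model.score(X_test, y_test))
```

Nothing mystical is going on here: the model fits patterns in the labelled examples and extrapolates them to unseen ones. That’s the bread and butter of most deployed ‘AI’.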
The philosophical implications of all this are murkier. (As you can surely tell from this thread!) One thing is for sure: 99% of ‘AI’ is not about replicating human intelligence. ELIZA may have fooled people in the 60s (see the toy sketch below), but we know better now. These methods are well-characterised, and we know what they can and can’t do. The remaining 1% is mostly the methods being actively developed right now, such as LLMs, which we really don’t have a good handle on yet. Perhaps someday we’ll understand them better, and look back and wonder how we ever over-hyped them. Or perhaps they will lead to AGI after all… we just don’t know enough about LLMs yet, and I think that’s probably what’s causing you such confusion about the capabilities of AIs more generally. But most AIs are not LLMs, and the only challenge their capabilities pose to us is, ‘we can be fooled by that‽’.
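To show just how simple the trick can be, here’s a toy ELIZA-style responder. The patterns and canned replies are my own invented examples, not Weizenbaum’s actual script, but the mechanism is the same kind of pattern matching:

```python
# A toy ELIZA-style chatbot: match the input against regex patterns and
# reflect fragments of it back. The rules below are invented examples.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r".*", "Please tell me more."),  # catch-all so we always answer
]

def respond(text: str) -> str:
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I feel confused about AI"))  # Why do you feel confused about ai?
print(respond("I am tired"))                # How long have you been tired?
```

A handful of regular expressions reflecting your own words back at you is enough to sustain the illusion of a listener, and that, not any hidden intelligence, is what fooled people.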
(…and now it’s 2:30am and after typing this out I need to go to sleep, goodnight!)