Sure, but nobody has managed to explain why we couldn't achieve AGI, particularly given the massive and ever-increasing resources spent on the problem. We know that general intelligence is possible, and not even that difficult, given that humans evolved it completely by accident. Furthermore, the tech industry is investing hundreds of billions in AI every year with no signs of slowing down. Despite that, many people deny the possibility of AGI on rather nebulous grounds.

Travis B. wrote: ↑Wed Apr 09, 2025 1:27 pm
malloc, you can claim we have AGI when we have AI's that can of their own will decide to learn chess from first principles and then go on and compete with humans at it -- or decide to play go and then go on and compete with humans at that, or decide that they don't like playing board games and reject the idea altogether. Currently what we have are AI's that are hard-wired to play chess and which have no will of their own. This isn't AGI at all!
It sounds like you consider the real obstacle to AGI to be agency or will rather than cognitive feats as such. We could create machines that can perform pretty much any cognitive task, whether playing chess or diagnosing disease, but those machines would still require humans to decide what they should do. That said, plenty of organisms have agency in that sense even though we don't consider them particularly intelligent (Trump has plenty of agency but little intelligence, after all). Conversely, it seems quite easy to imagine a machine that has mastered every intellectual feat, achieving what looks like general intelligence, while still requiring someone to choose its specific tasks. In other words, intelligence and agency strike me as distinct concepts.