zompist wrote: Mon Jun 24, 2024 3:45 pm
I generally took your position, but only for androids, not computers. Physical reality matters.
Just to make sure we’re clear here, how precisely is an ‘android’ not merely a subvariety of ‘computer’?
Simple example: traveling to London. You can absolutely write a program that has graphics and physics simulation and visualizes a trip to London. Neither the program, nor the simulated character, are in London.
Similarly, a simulation of digestion does not digest anything. Sometimes it's not so clear: if you've correctly simulated making a mathematical proof, you've arguably actually made a proof. So we have to be very careful about claiming that a simulated thing is that thing. Maybe it is, maybe it isn't.
…and thus we arrive at the Chinese Room argument, which you’ve already written about, and well (including this precise point). I’ll just mention my own intuition that ‘simulated intelligence’ is indeed ‘intelligence’, and leave it at that.
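(Though, as an aside, your proof example strikes me as the clearest case where the simulation and the thing itself coincide: a proof formalised in a proof assistant such as Lean and accepted by its checker is, by any reasonable standard, an actual proof. A trivial sketch, purely for illustration:

```lean
-- Whether we call this "simulating" a proof or actually proving,
-- once the checker accepts it, the theorem really has been proved.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- The same holds for something marginally less trivial.
theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

Nothing analogous can be said of simulated digestion or a simulated trip to London.)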
"Simulation" isn't a magic word. It can be considered a roundabout way of saying that we understand all the material factors of a situation. Actually effecting things in the world requires acting in the world— sensorimotor capacity. Which is why I don't accept malloc's dismissal of 90% of the brain as irrelevant to a computational core. Acting in the world is precisely what brains are designed to do, and you can't disengage thinking from acting.
Oh, I’m in complete agreement on this point: embodiment is a key part of intelligence. For something to truly qualify as AGI, I’d suggest that some ability to interact with the surrounding world is important.
But, on the other hand: consider me. You’ve only ever met me through a computer. From your perspective, I have no capacity to physically act in your world. And yet you’ve been willing (I hope!) to accept me as intelligent and sentient, based purely on the text of mine that you’ve seen. Would that opinion change if you were to discover that I’m actually a sophisticated LLM with no sensorimotor capacity whatsoever?
(Yes, I know we’ve now had one video call. Imagine I’d written the last paragraph before that.)
Another point: most current LLMs are already multimodal, supporting both text and images (and perhaps audio too; I’m not sure). That’s not the same as full embodiment in the world, but it’s at least a step up from ‘text only’.
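For concreteness, a text-plus-image request to a multimodal model looks roughly like the sketch below (this uses the OpenAI Python SDK; the model name and image URL are placeholders, and I’m only showing the general shape of such a call):

```python
# Rough sketch of a multimodal (text + image) chat request.
# The model name and image URL below are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any multimodal chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What city is shown in this photo?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The point is simply that the same interface that takes in my text can take in images of the world too, even if that’s still a long way from genuine sensorimotor embodiment.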
[…] I'm very doubtful that consciousness will involve any new physical discoveries […]
Yes, this is all that I was claiming.