Travis B. wrote: ↑Mon May 02, 2022 2:53 pm
Ares Land wrote: ↑Mon May 02, 2022 3:09 am
rotting bones wrote: ↑Sun May 01, 2022 12:57 pm
My conjectures are based on the current state of science, which accepts the validity of the Church-Turing thesis. If you can prove that models of problem-solving exist beyond the Turing machine, you could publish it and immortalize your name. Until then, it's not rational to bet on it.
The Church-Turing thesis is about numerical problem solving. Is the human brain's function really numerical problem solving?
The question is whether there is anything about the human brain that could not be simulated at a physical level by a sufficiently powerful computer (emphasis on sufficiently powerful). If one cannot theoretically build a computer of some sort (whether conventional or quantum) that can do so, then it would imply that there exist physical systems that simply cannot be simulated. If you prove that, you will probably win a Turing Award. Of course, it may be practically impossible, given the physical constraints of computing, to do so in any reasonable amount of memory or time (e.g. it may belong to a complexity class in memory or time that renders it highly intractable), but that is another story. Again, if you prove that, you might win a Turing Award.
Shades of comp.ai.philosophy! I spent so many hours there 30 years ago. Looks like the arguments of today are identical!
Well, except that today, I think, people assume that an AI will be a Deep Learning-type neural network that no one can understand, rather than an arcane LISP algorithm that only the programmer understands, but forgets to comment.
I think Travis is right here, but kind of trivially, meaning no awards for anyone. Could you simulate a human brain using a computer the size of the universe? I don't see why not! Can you do it with one the size of, oh, a human brain? Not guaranteed at all. All these hifalutin references to Turing just come down to that: you can do it with an infinite-sized computer. That hardly addresses Ares Land's point about efficiency. (Y'all do remember that Turing machines are infinite? And that Moore's Law will never get to infinity?)
I don't think anything magic or supernatural is going on inside the brain, but I do think analog and sensorimotor processes are way more important than AI types usually acknowledge. The brain is a machine, yes, but a machine for operating a physical body in the physical world. More like a robot, then. But people think of robots as having programs entirely separate from their machinery, and that just doesn't seem to be the case for the brain. If all you want to do is identify faces for the cops, that probably doesn't matter, but maybe it's a bad way of thinking about the problem if you really want an omnifunctional android.
(A rather minor demonstration of this is that animal brains often solve geometric problems with brain geometry. That is, there's an array of neurons that precisely corresponds to a geometric array in the world. This allows some easy processing of certain problems. E.g., if neuron A represents point A', and point B' is next to that, where is the corresponding neuron B? Right next to A. This makes things like edge detection easier. It also allows insects to easily perceive polarization patterns with their tiny brains. Yes, you can do all this with math. But using physical geometry is a pretty nice hack.)
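To make the neuron-geometry point concrete, here's a toy sketch in Python (the names and sizes are mine, purely for illustration, not any real neural model). If the "neuron" array mirrors the geometry of the visual field, edge detection needs no coordinate lookups at all: each neuron just compares itself with the neuron physically next to it.

import numpy as np

# Toy "retinotopic map": each cell of a 2D array stands in for one neuron,
# and neighboring cells correspond to neighboring points in the visual field.

def detect_vertical_edges(neurons):
    # Purely local operation: every neuron is compared with its right-hand
    # neighbor. Adjacency in the array *is* adjacency in the world, so no
    # coordinate math or lookup table is needed.
    return np.abs(neurons[:, 1:] - neurons[:, :-1])

# A tiny "image" whose right half is bright.
image = np.zeros((4, 8))
image[:, 4:] = 1.0

edges = detect_vertical_edges(image)
print(edges)  # nonzero only at the boundary between columns 3 and 4

Convolutional networks arguably bake the same locality assumption into software; the brain gets it for free from its wiring.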
A big unresolved question is qualia. Before joining your consciousness uploader, I want to know: does it preserve qualia? If the researcher says "Oh yes, we even stored it in a bigint!" then I'm not jumping in. This doesn't mean I think qualia is an unsolvable problem, just that AI has no theory at all of qualia and therefore no solution.