bradrn wrote: ↑Thu Jan 30, 2025 8:17 pm
This is, of course, how artificial neural networks work too. But it’s also how GPUs and FPGAs work. So ‘having many smaller processors’ is not sufficient for intelligence, and I strongly suspect that it’s not necessary either (though it’s awfully convenient).
I suspect that what’s important is not so much the simple fact of having many processors, but rather the way in which they’re connected together. This problem is much less amenable to being simplified into smaller parts. I feel sure there’s some way to do it, but neither neuroscientists nor computer scientists seem to have worked out how. So, for now, I remain agnostic on what the ‘smaller parts’ of intelligence may be.
At some level those are the smaller parts of human intelligence and, as you say, of artificial neural networks. Personally I think a lot of theorizing about intelligence goes astray because it doesn't take this point of view into account. Most people like top-down schemes instead, and maybe that's what you're after, pointing at things like "prediction."

I think it's likely there are processes at a higher level. After all, computer programs have structure that certainly isn't best understood at the level of individual transistors. Still... what if there really isn't much else? Maybe having a hundred billion processors is the key.
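The point that the interesting work lives in the connections rather than the units can be sketched in a few lines of Python. This is a toy illustration only, with hypothetical hand-picked weights, nothing like a real brain: every "neuron" is the same trivial threshold unit, and only the wiring changes what gets computed.

```python
# Toy sketch: identical simple processors, where only the connection
# weights differ. All weights and biases are hypothetical values chosen
# by hand to make the point.

def unit(inputs, weights, bias):
    """One simple processor: a step threshold on a weighted sum."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# The same unit, with different connection weights:
def AND(a, b):
    return unit([a, b], [1.0, 1.0], -1.5)

def OR(a, b):
    return unit([a, b], [1.0, 1.0], -0.5)

# A different wiring *structure* (one hidden layer) computes XOR,
# which no single threshold unit can represent at all:
def XOR(a, b):
    hidden = [OR(a, b), AND(a, b)]
    return unit(hidden, [1.0, -1.0], -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

The XOR case is the classic demonstration (going back to the perceptron debates) that some functions require a particular connection structure, not just more units.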
If you haven't seen it, see my review of How Life Works, or read the book. It's about organisms, not intelligence, but the overall situation is similar: if you look at genes or proteins or other aspects of cell activity, it's an utter mess. Genes rarely do one thing, there are huge amounts of redundancy, and the whole thing is a Rube Goldberg machine we're far from understanding. Maybe one biological system, the brain, is a lot like another biological system, the cell? Throw enough pieces at the puzzle, allow evolution to select what works for 500 million years, and you get something that works, but not in a way that's easy to understand.
(I don't at all mean this is the only way to get intelligence. Before neural networks, most AI people thought procedural code would do it. There are big advantages to programs we can actually understand!)
(And ‘common sense’ as a criterion could well result in myself being considered unintelligent!)
Sounds like you've been internalizing neurotypical propaganda.
I don’t get your point here…
(I didn’t mean to call myself unintelligent, to be clear! I’m saying that this is a bad criterion because it would result in that conclusion.)
Why would you be considered not to have common sense? Certainly you have it, even if you find social interactions (or whatever) hard. There are different ways of being intelligent even among humans.
In such a situation, the actual using of intelligence feels like nothing at all. It’s simply the ground state of being: at every moment our thoughts arise from an intelligent mind. The actual state of being intelligent is not a ‘mental state’ as such; mental states arise from it, but being intelligent is simply what we are.
What all this means is that as humans we are incapable of feeling any more, less or differently intelligent to what we already are. Consequently, I don’t think introspection or empathy are helpful here. The most we can do is to observe the behaviour of other humans, animals and apparently intelligent systems, and thereby infer how intelligent they are. Which, after all, is precisely what we do with our fellow humans.
Er, it's not-- we do use our own introspection and empathy to understand our fellow humans. Again, these skills have been developing in the animal kingdom for millions of years. (Also, our powers of expression have co-evolved with our powers of observing each other. Why do we have facial expressions? It's useful for an organism to actually broadcast some of its internal state.)
I'd also point out that feeling stupid is something that most of us have felt, quite strongly, at one point or another.
But I don't quite get why you're talking about "feeling intelligent". I do think sci fi and some philosophy has given us the notion that feelings are "not rational" and are somehow bad, which I think is a mistake. And there is such a thing as emotional intelligence.
Conversely: if some animal could navigate as well as humans but using an inhuman cognition to do so, would we be able to tell? Would it matter that there are underlying differences?
Think for two minutes about how a dog functions in the world. It should be obvious that yes, we can tell, and yes, it matters. Also note that we are not limited to watching animal behavior; we also have anatomy, and can figure out, e.g., senses or eye architectures that humans don't have.
I’m not sure how dogs are relevant here; elaborate please?
Haven't you watched a dog moving around in the world? You can tell that they use their noses far more than we do. You can do experiments to confirm this: e.g. a dog is not only very sensitive to smells, but can tell things we absolutely cannot, like what direction someone went based on the smell of their footprints. Very occasionally we can navigate by smell (e.g. toward the smell of cookies), but it's obviously a huge part of the mindset of a dog.