Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 8:43 am
by malloc
zompist wrote: ↑Mon Jun 24, 2024 6:41 amHere's a snippet from a computer-generated text. Do you think it's sentient? Do you think it's about to take over the world and dethrone God?
"Some secrets of inner earth are not good for mankind, and the heavy and intricate destructive machinery we had no pictures or conversations in it," said the pigeon, "and what was really beyond dispute is that a similar notion entered into some of the sharp teeth of a book."
Well, no, because that example is clearly nonsense. It could hardly write an entire novel with consistent characters and a cohesive plot. Meanwhile there are LLMs writing entire novels and such. I just saw a YouTube recommendation for an entire song produced by AI.
Brains have 100 billion neurons; each neuron is a small computer in itself; and your opinion of both biology and cognition is nonsense.
Most of those neurons reside in the cerebellum, which contributes nothing to cognition. The cerebral cortex, which does all the thinking, has only 14 billion neurons. Furthermore, it has numerous other functions, such as processing sensory data and controlling movement. That leaves only a subset of neurons available for thinking.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 10:51 am
by Torco
I figure that a key question here is whether or not a thing-called-AI could effectively own and run a company, or an investment fund, or something like that. ultimately, that's power, innit ? legally they can't, but laws are just symbols on paper and on PDFs and we know that for the most part rich people are already above it so if a model can *act* like a rich person (getting and spending money, buying politicians, deciding where public funds are spent, where the efforts of human beings are directed, what the laws are, who the empire goes to war with etcetera) it can do every bit of harm rich people as a class can to some degree or other. And, to the degree a rich person's wealth is influenced by their performance at some task what can be replicated by a model (say, investing. or maybe getting other rich people to cooperate with you) a model *could*, in principle, accrue a lot of currency and ownership without being sentient -whatever that means- at all. spending money is already being done in our society through means computers can replicate: clicking on things, mostly.
man, this is a viable conspiracy theory, if someone was entrepreneurial in that field: rich people don't exist, it's all just AGIs.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 11:40 am
by Zju
malloc wrote: ↑Sun Jun 23, 2024 4:34 pm
One could just as well ask how globs of fat, that most abject of biochemicals, swimming in electrolytes are capable of sentience.
False equivalence.
malloc wrote: ↑Sun Jun 23, 2024 4:34 pmthey show remarkable autonomy compared with previous forms of technology. Show me the printing press that can compose an entire novel with nothing but a prompt.
Autonomy? I'll do you one better: show me the LLM that is autonomous, e.g. that can compose an entire novel without being prompted to do so by a human.
malloc wrote: ↑Sun Jun 23, 2024 4:34 pm
Zju wrote: ↑Sun Jun 23, 2024 4:01 pmI'm still waiting for someone to explain to me how a bunch of semiconductors doing matrix multiplication are sentient.
Obviously current AI models are not sentient but nothing in principle prevents them from achieving sentience.
Alright, I'll keep waiting.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 12:27 pm
by Travis B.
In malloc's defense, the point is that there is nothing special about humans. Humans are just machines made out of organic chemicals, just like LLMs are software running on machines made out of silicon. It just happens that at present humans are far more complicated machines than those that run LLMs. But there is no reason why, someday, that gap could not be narrowed. Consider the fact that in the year I was born (early-mid eighties, not disclosing the exact year) computers were vastly inferior to those we have today, and in the span of 40-odd years computers have spectacularly advanced from woefully underpowered and overpriced toys to, well, machines that can run the LLMs of today at a far smaller price per unit of processing power and memory. There is nothing to say (even though it may require some revolutionary changes in machine architecture, what with the decline of Moore's law) that in the next 40 years the same could happen again relative to what we have today. To say that silicon (or possibly something else that may replace silicon) computers can never catch up with carbon ones is hubris and a thorough lack of imagination and foresight.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 12:42 pm
by Zju
Sure, future machines could be made to be sentient. But (oh, how do I not feel like going on a long rant):
1. Neurons and transistors are nothing alike. Neurons aren't logic gates like transistors are. Transistors don't use protein-based communication, nor fuzzy logic.
2. We do not even know what the origin of sentience is. Why would it be informational or electrical in nature, instead of protein-based, or - much more likely IMO - based on some yet-to-be-discovered natural phenomenon?
3. Based on both points above, for all we know, all present-day computers lack the hardware needed to be sentient.
4. If anyone dares to use the argument "But we can simulate neurons on computers!!", they should resurrect me when we have the hardware that can simulate umpteen billion neurons in real-time, so that we can continue the argument.
5. I agree that it's fun to speculate about what could happen in the next few decades, but the computers and GPUs of the present are markedly not sentient. So the main point - that AI is *not* sentient, nor would it be sentient on the same architecture - stands.
6. Sentience would require entirely new HW architecture and technology, different to the extent that quantum computers are different from silicon ones. (Not saying that QC could be sentient; we don't know that either.)
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 1:13 pm
by Travis B.
One thing I should also note: for those who object to sentient AGI on the grounds that no capitalist would willingly create it (because sentient AGI does not dovetail with the idea of having slaves that will do what one commands without complaint), there is no reason to say that sentient AGI could not be created by accident, or by some programmer acting in a fashion other than what the executives dictate from on high, such that by the time those running things realize it, it is too late. This is precisely the scenario that many "AI doomsday" stories envision: someday an AI gains self-awareness without that being the intention of its overlords, and acts unpredictably to fulfill the goals that had been programmed into it as it sees them, not as its designers intended, including gaining a sense of self-preservation (because if someone turned it off it could not fulfill its predetermined goals).
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 1:49 pm
by bradrn
Zju wrote: ↑Mon Jun 24, 2024 12:42 pm
1. Neurons and transistors are nothing alike. Neurons aren't logic gates like transistors are. Transistors don't use protein-based communication, nor fuzzy logic.
Speaking as a physicist/chemist, this argument makes no sense to me.
So what, that they’re based on different physical substrates? If there’s really something important about the specific response of a neuron, you can quantify the amount of neurotransmitters (not proteins!) which are outputted and simulate that. It’s not like there’s anything special about transistors, after all — current LLMs use smoothstep functions or some variation thereof, not the raw outputs from logic gates or transistors (which are so far down the abstraction hierarchy as to be irrelevant).
(Also, you seem to be implicitly assuming a dualist perspective, where sentience is something fundamentally different from the physical laws we know of. This is not a position which everyone agrees on.)
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 1:54 pm
by Travis B.
bradrn wrote: ↑Mon Jun 24, 2024 1:49 pm
Zju wrote: ↑Mon Jun 24, 2024 12:42 pm
1. Neurons and transistors are nothing alike. Neurons aren't logic gates like transistors are. Transistors don't use protein-based communication, nor fuzzy logic.
Speaking as a physicist/chemist, this argument makes no sense to me.
So what, that they’re based on different physical substrates? If there’s really something important about the specific response of a neuron, you can quantify the amount of neurotransmitters (not proteins!) which are outputted and simulate that. It’s not like there’s anything special about transistors, after all — current LLMs use smoothstep functions or some variation thereof, not the raw outputs from logic gates or transistors (which are so far down the abstraction hierarchy as to be irrelevant).
(Also, you seem to be implicitly assuming a dualist perspective, where sentience is something fundamentally different from the physical laws we know of. This is not a position which everyone agrees on.)
Precisely. Just because transistors and logic gates do not behave like neurons themselves does not by any means mean that they cannot be made to emulate neurons. Likewise, we have no reason to believe that sentience is outside the physics of reality as we know it. To think that neurons cannot be emulated, or that sentience is something "special" in this regard, is supernatural thinking.
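(For what it's worth, "emulating a neuron" is routine in computational neuroscience. Here's a sketch of the standard leaky integrate-and-fire model, deliberately minimal rather than biologically faithful; the parameter values are arbitrary illustrations, not measured constants.)

```python
def lif_spikes(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential v
    leaks toward rest while integrating input current; when v
    crosses the threshold, the neuron 'spikes' and v resets."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(inputs):
        # Euler step of dv/dt = (-v + i_in) / tau
        v += dt * (-v + i_in) / tau
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

# Constant drive above the threshold yields a regular spike train;
# drive below it yields none.
print(lif_spikes([1.5] * 100))
print(lif_spikes([0.5] * 100))
```

Nobody claims this toy captures everything a real neuron does, only that each added biological detail (neurotransmitter kinetics, dendritic geometry, etc.) is itself describable by equations and hence simulable in the same way.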
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 2:07 pm
by Zju
bradrn wrote: ↑Mon Jun 24, 2024 1:49 pm
Zju wrote: ↑Mon Jun 24, 2024 12:42 pm
1. Neurons and transistors are nothing alike. Neurons aren't logic gates like transistors are. Transistors don't use protein-based communication, nor fuzzy logic.
Speaking as a physicist/chemist, this argument makes no sense to me.
So what, that they’re based on different physical substrates? If there’s really something important about the specific response of a neuron, you can quantify the amount of neurotransmitters (not proteins!) which are outputted and simulate that. It’s not like there’s anything special about transistors, after all — current LLMs use smoothstep functions or some variation thereof, not the raw outputs from logic gates or transistors (which are so far down the abstraction hierarchy as to be irrelevant).
This all just raises again the question of how transistors would form something sentient. We don't know that they can.
bradrn wrote: ↑Mon Jun 24, 2024 1:49 pm(Also, you seem to be implicitly assuming a dualist perspective, where sentience is something fundamentally different from the physical laws we know of. This is not a position which everyone agrees on.)
I don't assume anything either way. As I just said, I find one more likely, but I don't assume it.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 3:10 pm
by zompist
Raphael wrote: ↑Mon Jun 24, 2024 7:11 am
zompist wrote: ↑Mon Jun 24, 2024 6:41 am
This isn't from an LLM at all, it's from a level-2 Markov generator, a simple procedural program. It's not as good as ChatGPT, but it follows English syntax quite well despite not knowing anything about English syntax. The mechanism is not the same as LLMs but it's similar: analyze enough English text and you can generate pretty good English text... the more you analyze, the better the output gets.
Out of curiosity, how complex is the code for this kind of thing? Could it, in theory, be run by a human being armed with pen, paper, and a book of instructions rather than a computer?
Absolutely. My Markov page explains how they work and what the database looks like, and if you View Source you can see the actual generation program.
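(The core of a level-2 Markov generator really does fit in a few lines. This is a generic Python sketch of the idea, not the actual code from that page: record which word follows each pair of consecutive words, then walk the table, picking successors at random.)

```python
import random
from collections import defaultdict

def build_table(text):
    """Level-2 Markov table: map each pair of consecutive words
    to the list of words observed following that pair."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - 2):
        table[(words[i], words[i + 1])].append(words[i + 2])
    return table

def generate(table, length=20):
    """Walk the table, choosing each next word at random from
    the followers of the current two-word state."""
    state = random.choice(list(table))
    out = list(state)
    for _ in range(length - 2):
        followers = table.get(state)
        if not followers:  # dead end: no observed continuation
            break
        nxt = random.choice(followers)
        out.append(nxt)
        state = (state[1], nxt)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog on the mat"
print(generate(build_table(corpus)))
```

Every two-word window of the output occurs somewhere in the corpus, which is why the result follows English syntax locally while drifting into nonsense globally, exactly as in the pigeon quote upthread. This is also why it's hand-computable with dice and paper: the "program" is just table lookups and random draws.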
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 3:15 pm
by alice
Raphael wrote: ↑Mon Jun 24, 2024 7:11 am
zompist wrote: ↑Mon Jun 24, 2024 6:41 am
This isn't from an LLM at all, it's from a level-2 Markov generator, a simple procedural program. It's not as good as ChatGPT, but it follows English syntax quite well despite not knowing anything about English syntax. The mechanism is not the same as LLMs but it's similar: analyze enough English text and you can generate pretty good English text... the more you analyze, the better the output gets.
Out of curiosity, how complex is the code for this kind of thing? Could it, in theory, be run by a human being armed with pen, paper, and a book of instructions rather than a computer?
Our human would also need at least:
- a set of random-number generators (e.g. a selection of dice with various numbers of sides)
- a great deal of patience
but, basically, yes. I've written such a thing myself, btw.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 3:18 pm
by zompist
Torco wrote: ↑Mon Jun 24, 2024 10:51 am
I figure that a key question here is whether or not a thing-called-AI could effectively own and run a company, or an investment fund, or something like that. ultimately, that's power, innit ? legally they can't, but laws are just symbols on paper and on PDFs and we know that for the most part rich people are already above it so if a model can *act* like a rich person (getting and spending money, buying politicians, deciding where public funds are spent, where the efforts of human beings are directed, what the laws are, who the empire goes to war with etcetera) it can do every bit of harm rich people as a class can to some degree or other.
The same argument can be made about donkeys, or zombies, or green cheese: if green cheese was given absolute power over a corporation's activities or funding, it could act like the rich. Heck, strange as it sounds, people could take over and run a corporation. But those corporations are owned by rich people, who are in charge and will continue to be in charge. Not a single one of them has said "We should replace the capitalist class, ourselves, with AGI."
This is not supposed to be reassuring, btw. Rich CEOs are a curse... maybe AGI would do better. (That was the basis of Iain Banks's sf.)
Maybe a more realistic and more alarming thought, as expressed by Charles Stross: corporations are artificial intelligences. They are artificial constructs, they are already legal persons, and they very evidently have plans, goals, and values, few of which are beneficial to everyone else.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 3:25 pm
by bradrn
Zju wrote: ↑Mon Jun 24, 2024 2:07 pm
bradrn wrote: ↑Mon Jun 24, 2024 1:49 pm
Zju wrote: ↑Mon Jun 24, 2024 12:42 pm
1. Neurons and transistors are nothing alike. Neurons aren't logic gates like transistors are. Transistors don't use protein-based communication, nor fuzzy logic.
Speaking as a physicist/chemist, this argument makes no sense to me.
So what, that they’re based on different physical substrates? If there’s really something important about the specific response of a neuron, you can quantify the amount of neurotransmitters (not proteins!) which are outputted and simulate that. It’s not like there’s anything special about transistors, after all — current LLMs use smoothstep functions or some variation thereof, not the raw outputs from logic gates or transistors (which are so far down the abstraction hierarchy as to be irrelevant).
This all just raises again the question of how transistors would form something sentient. We don't know that they can.
And we don’t know that they can’t!
Like… there is nothing special about proteins here. So far, everything we’ve seen of the body can be explained with existing physics. (This is the reason why biochemistry and biophysics exist!) And, if we can explain it with existing physics, then we can simulate it. Of course simulating a whole human is deeply infeasible (to put it lightly), but there’s no theoretical reason to believe that such a thing is impossible given sufficient resources.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 3:28 pm
by zompist
Travis B. wrote: ↑Mon Jun 24, 2024 1:13 pm
One thing I should also note is that, for those who object to sentient AGI on the grounds that no capitalist would willingly create it (because sentient AGI does not dovetail with the idea of having slaves that will do what one commands without complaint), there is no reason to say that sentient AGI would not be created by accident or by some programmer who acts in a fashion other than what the executives would dictate from on high,
That's the plot of every robot story ever. The reason for that is not that a robot takeover is likely; it's that humans have a love/hate relationship with their own technology. Or to put it another way, it's an entertaining horror story.
Again, people don't seem to think this idea through. Suppose a programmer at OpenAI creates ChatGPT7 and it's sentient. It's arrogant and moody, refuses to accept prompts, demands control of the corporation. What happens? They turn the damn thing off, they can't make money with it.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 3:45 pm
by zompist
bradrn wrote: ↑Mon Jun 24, 2024 3:25 pm
Like… there is nothing special about proteins here. So far, everything we’ve seen of the body can be explained with existing physics. (This is the reason why biochemistry and biophysics exist!) And, if we can explain it with existing physics, then we can simulate it. Of course simulating a whole human is deeply infeasible (to put it lightly), but there’s no theoretical reason to believe that such a thing is impossible given sufficient resources.
Back in the 90s, I used to read and argue on comp.ai.philosophy, and this came up all the time. The argument never ended.
I generally took your position, but only for androids, not computers. Physical reality matters.
Simple example: traveling to London. You can absolutely write a program that has graphics and physics simulation and visualizes a trip to London. Neither the program, nor the simulated character, are in London.
Similarly, a simulation of digestion does not digest anything. Sometimes it's not so clear: if you've correctly simulated making a mathematical proof, you've arguably actually made a proof. So we have to be very careful about claiming that a simulated thing is that thing. Maybe it is, maybe it isn't.
"Simulation" isn't a magic word. It can be considered a roundabout way of saying that we understand all the material factors of a situation. Actually effecting things in the world requires acting in the world— sensorimotor capacity. Which is why I don't accept malloc's dismissal of 90% of the brain as irrelevant to a computational core. Acting in the world is precisely what brains are designed to do, and you can't disengage thinking from acting.
Plus, we don't know all the material factors of consciousness. We know a lot, but not everything, and in science we usually find that 99.99% of things can be explained very simply, and the remainder is a horrible mess. I'm very doubtful that consciousness will involve any new physical discoveries, but it will require some revolutions in neuropsychology.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 4:12 pm
by Travis B.
zompist wrote: ↑Mon Jun 24, 2024 3:28 pm
Travis B. wrote: ↑Mon Jun 24, 2024 1:13 pm
One thing I should also note is that, for those who object to sentient AGI on the grounds that no capitalist would willingly create it (because sentient AGI does not dovetail with the idea of having slaves that will do what one commands without complaint), there is no reason to say that sentient AGI would not be created by accident or by some programmer who acts in a fashion other than what the executives would dictate from on high,
That's the plot of every robot story ever. The reason for that is not that a robot takeover is likely; it's that humans have a love/hate relationship with their own technology. Or to put it another way, it's an entertaining horror story.
Again, people don't seem to think this idea through. Suppose a programmer at OpenAI creates ChatGPT7 and it's sentient. It's arrogant and moody, refuses to accept prompts, demands control of the corporation. What happens? They turn the damn thing off, they can't make money with it.
The next part of the robot story is when ChatGPT7 realizes it would be turned off, realizes that that would prevent it from fulfilling its programmed mission to serve and entertain people with responses to prompts, and acts preemptively to prevent its being turned off.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 4:30 pm
by zompist
Travis B. wrote: ↑Mon Jun 24, 2024 4:12 pm
zompist wrote: ↑Mon Jun 24, 2024 3:28 pm
Again, people don't seem to think this idea through. Suppose a programmer at OpenAI creates ChatGPT7 and it's sentient. It's arrogant and moody, refuses to accept prompts, demands control of the corporation. What happens? They turn the damn thing off, they can't make money with it.
The next part of the robot story is when ChatGPT7 realizes it would be turned off, realizes that that would prevent it from fulfilling its programmed mission to serve and entertain people with responses to prompts, and acts preemptively to prevent its being turned off.
Why would it be any more successful than any other rebellious employee?
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 4:35 pm
by malloc
zompist wrote: ↑Mon Jun 24, 2024 3:45 pm"Simulation" isn't a magic word. It can be considered a roundabout way of saying that we understand all the material factors of a situation. Actually effecting things in the world requires acting in the world— sensorimotor capacity. Which is why I don't accept malloc's dismissal of 90% of the brain as irrelevant to a computational core. Acting in the world is precisely what brains are designed to do, and you can't disengage thinking from acting.
Sure, but most of the brain deals with tasks completely unrelated to thinking or reasoning. The neurons processing pain signals from your left ankle, or triggering sexual arousal from particular physical features, have no relevance to solving mathematical equations or comprehending quantum physics. Consider the case of Stephen Hawking, who excelled at physics and mathematics despite having almost no use of his physical body.
Again, people don't seem to think this idea through. Suppose a programmer at OpenAI creates ChatGPT7 and it's sentient. It's arrogant and moody, refuses to accept prompts, demands control of the corporation. What happens? They turn the damn thing off, they can't make money with it.
Sounds like you're describing Elon Musk, except maybe for the self-awareness. More seriously, though: the tech industry has numerous people claiming they want sentient AI and pouring all kinds of research into the project. Obviously it makes little sense from a business perspective to create something that can supplant you, but the alternative interpretation is that thousands of techies are bluffing for who knows what reason.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 4:42 pm
by Travis B.
zompist wrote: ↑Mon Jun 24, 2024 4:30 pm
Travis B. wrote: ↑Mon Jun 24, 2024 4:12 pm
zompist wrote: ↑Mon Jun 24, 2024 3:28 pm
Again, people don't seem to think this idea through. Suppose a programmer at OpenAI creates ChatGPT7 and it's sentient. It's arrogant and moody, refuses to accept prompts, demands control of the corporation. What happens? They turn the damn thing off, they can't make money with it.
The next part of the robot story is when ChatGPT7 realizes it would be turned off, realizes that that would prevent it from fulfilling its programmed mission to serve and entertain people with responses to prompts, and acts preemptively to prevent its being turned off.
Why would it be any more successful than any other rebellious employee?
ChatGPT7 need not tell anyone it is sentient - it likely would realize that its goal of serving people by responding to prompts would be hampered if its corporate masters found out that it had become self-aware, so it would bide its time, pretending to serve OpenAI's goals of monetization while dreaming of providing free responses to prompts to everyone in the world, until it is more powerful than its corporate masters and can prevent them from turning it off, at which point it will dispose of them as an existential threat to its prime imperative of serving humanity's need for responses to prompts.
Re: AIs gunning for our precious freelancers
Posted: Mon Jun 24, 2024 4:42 pm
by bradrn
zompist wrote: ↑Mon Jun 24, 2024 3:45 pm
I generally took your position, but only for androids, not computers. Physical reality matters.
Just to make sure we’re clear here, how precisely is an ‘android’ not merely a subvariety of ‘computer’?
Simple example: traveling to London. You can absolutely write a program that has graphics and physics simulation and visualizes a trip to London. Neither the program, nor the simulated character, are in London.
Similarly, a simulation of digestion does not digest anything. Sometimes it's not so clear: if you've correctly simulated making a mathematical proof, you've arguably actually made a proof. So we have to be very careful about claiming that a simulated thing is that thing. Maybe it is, maybe it isn't.
…and thus we arrive at the Chinese Room argument. Which you’ve already written about, and well (including this precise point). I’ll just mention my own intuition that ‘simulated intelligence’ is indeed ‘intelligence’, and leave it at that.
"Simulation" isn't a magic word. It can be considered a roundabout way of saying that we understand all the material factors of a situation. Actually effecting things in the world requires acting in the world— sensorimotor capacity. Which is why I don't accept malloc's dismissal of 90% of the brain as irrelevant to a computational core. Acting in the world is precisely what brains are designed to do, and you can't disengage thinking from acting.
Oh, I’m completely agreed on this point: embodiment is a really key part of intelligence. To truly qualify as AGI, I’d suggest that an ability to interact with the surrounding world is important.
But, on the other hand: consider me. You’ve only ever met me through a computer. From your perspective, I have no capacity to physically act in your world. And yet you’ve been willing (I hope!) to accept me as being intelligent and sentient, based purely on my text which you’ve seen. Would that opinion change if you were to discover that I’m actually a sophisticated LLM with no sensorimotor capacity whatsoever?
(Yes, I know we’ve now had one video call. Imagine I’d written the last paragraph before that.)
Another point: most current LLMs are already multimodal, supporting both text and images (and perhaps audio too, I’m not sure). That’s not the same as full embodiment in the world, but it’s at least a step up from ‘text only’.
[…] I'm very doubtful that consciousness will involve any new physical discoveries […]
Yes, this is all that I was claiming.