Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 5:36 am
by KathTheDragon
bradrn wrote: ↑Thu Oct 05, 2023 4:02 am
Well, I could argue that LLMs can’t feel pain or suffer, as far as we’re aware… but to address your broader argument: yes, I think ethical considerations
will apply at some point, but I’m not going to hazard a guess as to when. We know too little to say anything about that.
I don't think you can argue that LLMs can even
feel since they have no memory of any kind. Even plants have a kind of memory.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 5:47 am
by Raphael
elgis wrote: ↑Wed Oct 04, 2023 10:23 pm
To me it seems a bit pointless to argue whether or not LLM are AI. Depending on who you ask, even simple search algorithms are AI. This
textbook, first published in 1995, has four chapters on search algorithms.
Compare the routine use of the term "AI" for computer-controlled opponents (or allies) in games.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 5:52 am
by Raphael
zompist wrote: ↑Wed Oct 04, 2023 11:00 pm
There's a simple test for anyone who's tempted to call LLMs intelligent: can we arrest the owners of ChatGPT for holding slaves?
I'm not the slightest bit tempted to call LLMs intelligent, but right now, part of the reason
why we can't arrest the owners of ChatGPT on those grounds is that the laws in question were all written with human beings in mind. If the politics of AIs had turned out the way many people had predicted, rather than the way they actually turned out, we might have parts of the activist Left protesting for changes to the laws along those lines now.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 7:03 am
by bradrn
KathTheDragon wrote: ↑Thu Oct 05, 2023 5:36 am
bradrn wrote: ↑Thu Oct 05, 2023 4:02 am
Well, I could argue that LLMs can’t feel pain or suffer, as far as we’re aware… but to address your broader argument: yes, I think ethical considerations
will apply at some point, but I’m not going to hazard a guess as to when. We know too little to say anything about that.
I don't think you can argue that LLMs can even
feel since they have no memory of any kind. Even plants have a kind of memory.
Well, I’d argue that the already-outputted token stream acts as a sort of memory. If you separated that out into a ‘subconscious’ and a ‘conscious’ level, I reckon that would start to approximate the way humans think — we don’t blurt out every thought which comes into our brains, whereas an LLM does.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 8:33 am
by malloc
zompist wrote: ↑Wed Oct 04, 2023 11:53 pm
This is a rather sad and strange move. If humans are so worthless, why are you worried that they could be replaced? By your value system it would be an improvement.
Because humans are my species and I don't have the option of shifting to any other. Granted this is genuinely a tricky problem and one reason why AI worries me so much. Previously we never had to worry about this point because humans were the only game in town. Now they have competition and they are really struggling to keep pace. We can no longer take humans for granted as the obvious subjects and masters of civilization.
Brains are the most amazing three-pound machine we know of. I'm sorry you worship electronic devices instead. Without denying that computers are pretty neat toys, they are nowhere near as perfect, scary, or costless as you think they are. (And seriously, if you aren't aware of how prone LLMs are to misinformation, you are incredibly inattentive.)
Then what are some intellectual tasks where humans beat computers or ones that computers cannot perform? Currently it seems like computers have us beat at pretty much everything, from writing and drawing to playing chess to mathematics. My laptop only cost $300 at most yet it can remember far more than me without error, apply pretty much any arithmetical operation to any two complex numbers, and so forth.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 8:35 am
by bradrn
malloc wrote: ↑Thu Oct 05, 2023 8:33 am
My laptop only cost $300 at most yet it can remember far more than me without error, apply pretty much any arithmetical operation to any two complex numbers, and so forth.
If you think those tasks are the pinnacle of human intellectual accomplishment… well, I’m not even quite sure
what to say to that.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 9:21 am
by Raphael
malloc wrote: ↑Thu Oct 05, 2023 8:33 am
Then what are some intellectual tasks where humans beat computers or ones that computers cannot perform?
In principle, perhaps none.
At the current moment in history, there's stuff like writing code that actually runs, or writing
accurate reports on what a person has done in their life. Writing restaurant tips for cities that don't send tourists to food banks.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 10:42 am
by WeepingElf
Computers have always been vastly overrated. When I was a child, installations that filled entire buildings were popularly called "electronic brains" - and they had much less computing power (let alone intelligence) than a smartphone has today. The history of artificial intelligence is a history of booms and busts, of alternating periods of enthusiasm and massive disillusionment. And as I said here some days ago, people want novels, music, etc. done by real humans - think of the Milli Vanilli scandal back in 1990, which did not even involve machines masquerading as people, just studio musicians masquerading as pop stars.
In theory, everything human beings can do can be done by a machine of sufficient complexity, but in practice, we are not even close. We don't even really understand how the mind works, but it seems to be something very different from the computers we have now, perhaps a vast quantum information system. And once we have succeeded in building a "machine" that can do anything humans can do, we'll probably find out that it has a will of its own, and that the key advantage of machines over people - namely that they don't have a will of their own - has gone.
That, as I have already said here, doesn't mean that the "artificial intelligences" we have now or will have in the near future aren't a challenge to the framework of our society, but there is IMHO no reason to believe in impending doom. Technological revolutions have happened, and will happen, and will change the ways we work and live, but so far, we have managed to live with them each time.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 3:11 pm
by zompist
malloc wrote: ↑Thu Oct 05, 2023 8:33 am
Then what are some intellectual tasks where humans beat computers or ones that computers cannot perform? Currently it seems like computers have us beat at pretty much everything, from writing and drawing to playing chess to mathematics. My laptop only cost $300 at most yet it can remember far more than me without error, apply pretty much any arithmetical operation to any two complex numbers, and so forth.
I can do arithmetic on complex numbers too— it's not that hard. As for memory, are you aware that your brain has 86 billion neurons, each of which is a small computer? That your brain's memory is formed by interconnections between those neurons, to the number of 60 trillion? Human memory did not evolve to solve paper-and-pen problems, but to navigate a very complex natural world, including social interactions. The things that are hard for computers are things you take for granted because you never think about them.
I really suggest you read up on the sorts of errors ChatGPT is prone to: hallucinating data, basic arithmetic errors, being easily misled (e.g., being convinced that 2 + 2 = 5). Iterating on current methods (i.e. more training data, more nodes) will not solve these problems, which are inherent to the way LLMs work.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 3:19 pm
by Travis B.
zompist wrote: ↑Thu Oct 05, 2023 3:11 pm
I really suggest you read up on the sorts of errors ChatGPT is prone to: hallucinating data, basic arithmetic errors, being easily misled (e.g., being convinced that 2 + 2 = 5). Iterating on current methods (i.e. more training data, more nodes) will not solve these problems, which are inherent to the way LLMs work.
Apparently they have created a Wolfram plugin to add to ChatGPT to ameliorate these issues, because ChatGPT just plain sucks at these kinds of things while Wolfram Alpha is essentially a glorified symbolic math system.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 3:31 pm
by KathTheDragon
bradrn wrote: ↑Thu Oct 05, 2023 7:03 am
KathTheDragon wrote: ↑Thu Oct 05, 2023 5:36 am
bradrn wrote: ↑Thu Oct 05, 2023 4:02 am
Well, I could argue that LLMs can’t feel pain or suffer, as far as we’re aware… but to address your broader argument: yes, I think ethical considerations
will apply at some point, but I’m not going to hazard a guess as to when. We know too little to say anything about that.
I don't think you can argue that LLMs can even
feel since they have no memory of any kind. Even plants have a kind of memory.
Well, I’d argue that the already-outputted token stream acts as a sort of memory.
How did I know you'd make this argument? No, attention is not memory. The interfaces provide the illusion that it is by modifying the prompt over time. But that's all it is - the same network, exactly the same, but with its own earlier output and your response as part of the prompt. If you started off the session with exactly that mass of text written yourself, you'd get broadly the same outcome, but can you honestly claim that the network is still "remembering"? No, you're just telling it what the past is for that single interaction, and it processes everything it can fit into its buffer.
If you separated that out into a ‘subconscious’ and a ‘conscious’ level, I reckon that would start to approximate the way humans think — we don’t blurt out every thought which comes into our brains, whereas an LLM does.
This is honestly quite laughable. Language models are just repeatedly guessing what the next token should be, and
only the next token. They cannot jump ahead, ever. Human minds on the other hand can plan out what they're going to say very non-linearly, and backtrack to edit something earlier to fit with what they're just now composing. The contrast is quite stark.
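To make the contrast concrete, here is a minimal sketch of the loop being described, in Python, with a purely hypothetical model and tokenizer standing in for the real thing. The weights are identical on every call; the only thing that persists from step to step is the growing list of tokens that gets fed back in.

    # Hypothetical `model` and `tokenizer` objects, for illustration only.
    def generate(model, tokenizer, prompt, max_new_tokens=50):
        tokens = tokenizer.encode(prompt)       # everything the network will ever "know"
        for _ in range(max_new_tokens):
            logits = model(tokens)              # same fixed weights on every call
            last = logits[-1]                   # scores for the next token only
            next_token = max(range(len(last)), key=lambda i: last[i])
            tokens.append(next_token)           # appended to the prompt; nothing else persists
        return tokenizer.decode(tokens)

Starting a fresh session with the same accumulated text would, as Kath says, put the network in essentially the same position.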
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 4:04 pm
by malloc
zompist wrote: ↑Thu Oct 05, 2023 3:11 pmI can do arithmetic on complex numbers too— it's not that hard.
But can you solve expressions like (3.89+2.5i)^(4.8-2.73i) in your head? My laptop can solve that in less than one second.
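For reference, what the laptop is doing there is a single library call; a minimal sketch in Python, using the built-in complex type (which evaluates the power via the principal branch of the logarithm):

    z = (3.89 + 2.5j) ** (4.8 - 2.73j)  # one complex result, via the principal branch
    print(z)                            # computed in well under a second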
As for memory, are you aware that your brain has 86 billion neurons, each of which is a small computer?
But how powerful are these computers? It seems downright feasible to build one large computer with the processing power of 86 billion pocket calculators. Such a machine would presumably have the same processing power as one human brain.
I really suggest you read up on the sorts of errors ChatGPT is prone to: hallucinating data, basic arithmetic errors, being easily misled (e.g., being convinced that 2 + 2 = 5). Iterating on current methods (i.e. more training data, more nodes) will not solve these problems, which are inherent to the way LLMs work.
Humans hallucinate and make arithmetic errors all the time, though. Right now, half the country believes the Bible is literally true, that Trump won the 2020 election and will soon expose the Democrats as cannibal pedophiles, and so forth. If programs like chatGPT can avoid that nonsense, they're already more intelligent than many humans.
WeepingElf wrote: ↑Thu Oct 05, 2023 10:42 am
And once we have succeeded in building a "machine" that can do anything humans can do, we'll probably find out that it has a will of its own, and that the key advantage of machines over people - namely that they don't have a will of their own - has gone.
My thoughts as well, and yet many powerful figures are gunning really hard for superintelligent AGI and the singularity. The whole concept of Roko's basilisk never made sense to me because it seems obvious that you should simply not build the basilisk. Yet many people talking about it seem to take it for granted that we should build godlike AIs, and the whole concept assumes that not building them is somehow a horrible crime.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 4:09 pm
by Raphael
zompist wrote: ↑Thu Oct 05, 2023 3:11 pm
As for memory, are you aware that your brain has 86 billion neurons, each of which is a small computer? That your brain's memory is formed by interconnections between those neurons, to the number of 60 trillion?
Someone should rephrase those two sentences a bit, and then sing them to the tune of Monty Python's Galaxy Song.
Human memory did not evolve to solve paper-and-pen problems, but to navigate a very complex natural world, including social interactions. The things that are hard for computers are things you take for granted because you never think about them.
What I find a bit interesting is that apparently, precisely those human beings who are unusually
good at those things that computers are good at, and that most people are bad at - like, for instance, certain types of people on the spectrum - are also often unusually (by human standards)
bad at things that computers are bad at, and that humans are
usually good at.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 4:29 pm
by zompist
malloc wrote: ↑Thu Oct 05, 2023 4:04 pm
zompist wrote: ↑Thu Oct 05, 2023 3:11 pm
I can do arithmetic on complex numbers too— it's not that hard.
But can you solve expressions like (3.89+2.5i)^(4.8-2.73i) in your head? My laptop can solve that in less than one second.
My toaster can make toast and I can't do that by breathing on it. Oh noes toasters are taking over the world.
Can your laptop create a conlang as well as I can? And then write correct and fluid text in it?
If programs like chatGPT can avoid that nonsense,
They can't. What the heck do you think Chat-GPT is trained on? Text written by humans. It reproduces all the errors those humans make.
Re: AIs gunning for our precious freelancers
Posted: Thu Oct 05, 2023 4:34 pm
by Ketsuban
malloc wrote: ↑Wed Oct 04, 2023 8:17 pm
You yourself said that with the development of LLMs, they were basically halfway to AI with human level intelligence. Now you claim they aren't even close.
It's entirely possible that AI is "halfway to" human-level intelligence in the "
first half of the chessboard" sense.
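The reference is to the wheat-and-chessboard story: doubling one grain per square, the entire first half of the board amounts to a rounding error next to the second half. A quick illustration in Python:

    first_half  = sum(2**k for k in range(32))      # 4,294,967,295 grains
    second_half = sum(2**k for k in range(32, 64))  # 18,446,744,069,414,584,320 grains
    print(second_half // first_half)                # the second half is 2**32 times larger

So "halfway there" in that sense is compatible with having covered almost none of the distance.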
Re: AIs gunning for our precious freelancers
Posted: Fri Oct 06, 2023 4:52 am
by zompist
Very interesting article by Paris Marx on why ChatGPT is overhyped, expensive, and environmentally harmful. It's on Substack, so I don't know if the link will let you read it. Some highlights:
* Tech requires a succession of bubbles. AI came just at the right time, after blockchains, NFTs, and VR flopped.
* ChatGPT requires a good deal of exploitative human labor to filter out harmful content.
* LLMs require high-end graphics cards, which have a large carbon footprint, require rare materials, and suck up water:
Companies have been trying to hide the full footprint of their data centers because they know the public could turn against them if they knew the reality. In The Dalles, Oregon, Google was found to be using a quarter of the city’s water supply to cool its facilities. Tech companies have been facing pushback elsewhere in the United States, but also across the world in places like Uruguay, Chile, the Netherlands, Ireland, and New Zealand. Now opposition is growing in Spain too, where droughts are wiping out crops and people are wondering why they’d give their limited water resources to Meta for a data center. But adopting generative AI will require a lot more of those data centers to be built around the world.
* And as I was saying, general AI alarmism is a distraction mechanism:
Instead, they’re working hard to distract us from the environmental, labor, and social concerns with fantasies about AI’s potential threat to the human race as a whole. It’s part of a broader longtermist ideology that seeks to shift our resources from present-day crises to the priorities of fabulously wealthy and hopelessly disconnected tech billionaires.
Re: AIs gunning for our precious freelancers
Posted: Sat Oct 07, 2023 7:45 am
by bradrn
In case anyone wants a bit of cheering up, I saw
this in a South African newspaper:
Re: AIs gunning for our precious freelancers
Posted: Sat Oct 07, 2023 9:13 am
by Raphael
bradrn wrote: ↑Sat Oct 07, 2023 7:45 am
In case anyone wants a bit of cheering up, I saw
this in a South African newspaper:
"Open my folders, HAL!"
I guess as soon as the computers can plan stuff like that ahead on their own and then execute it, we're in
real trouble.
Then again, perhaps by writing that, I'm doing exactly the kind of distraction from the real issues that the Paris Marx post linked to by zompist warns about.
Re: AIs gunning for our precious freelancers
Posted: Sat Oct 07, 2023 10:21 am
by malloc
Raphael wrote: ↑Sat Oct 07, 2023 9:13 am
"Open my folders, HAL!"
I guess as soon as the computers can plan stuff like that ahead on their own and then execute it, we're in real trouble.
Then again, perhaps by writing that, I'm doing exactly the kind of distraction from the real issues that the Paris Marx post linked to by zompist warns about.
You can worry about multiple things at once, though. Just because a problem seems remote does not mean we should ignore it. Go back one hundred years and nobody would have believed that fossil fuels could imperil the earth and humanity through global warming. Even critics of capitalism would likely have found such concerns silly. Why worry about the possible harm of air pollution centuries down the line when children are working in coal mines now, they might ask. Now the ecological bill for decades of greenhouse gas emissions is coming due and we face looming catastrophe. Certainly it makes sense to focus on LLMs given the immediate threat they pose to both culture and employment. We need to stand strong against attempts to drive humans out of our own cultural production and reduce art to algorithm. But there is hardly any harm in considering future dangers and planning on how to address them when they arrive.
My personal view is that intelligence is power and perhaps the most significant power of all. Humans did not achieve our commanding position in the world through strength or durability. We are proportionally quite weak compared to most other animals and vulnerable to all manner of disease and injury that other animals shrug off with ease. Despite all that, humans number in the billions and live all over the globe while far stronger animals like tigers and rhinos are consigned to tiny pockets of hinterland where they face looming extinction. What gives humans such an extraordinary advantage over tigers and rhinos is intelligence. Giving our greatest and indeed only strength to machines, subject to none of our biological weaknesses or lapses in judgment, seems incredibly risky to me with no real benefits. Machines can have strength, speed, and invulnerability, but let us retain intelligence and all the power it brings for ourselves. Instead of developing artificial intelligence to think for us, we really ought to focus on cultivating our own intelligence through better education, more respect for the arts, and so forth.
Re: AIs gunning for our precious freelancers
Posted: Sat Oct 07, 2023 6:18 pm
by Richard W
malloc wrote: ↑Thu Oct 05, 2023 4:04 pm
But can you solve expressions like (3.89+2.5i)^(4.8-2.73i) in your head? My laptop can solve that in less than one second.
I'm not sure that you would ever want to evaluate that in your head. There are serious problems with the concept of a^b for arbitrary complex a and b. I recall not appreciating the involved nature of exponentiation when Simon Norton threw us the off-topic question of which fields had it, at a supervision in around 1976 or 1977.
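To spell out the problem being pointed at: for complex a and b, a^b is conventionally defined as exp(b·log a), but log a is only determined up to integer multiples of 2πi, so a^b generally has infinitely many equally legitimate values; a library simply picks the principal branch. A quick sketch in Python with the standard cmath module:

    import cmath

    a, b = 3.89 + 2.5j, 4.8 - 2.73j
    # log(a) is only defined up to 2*pi*i*k; each choice of k gives a
    # different, equally valid value of a**b = exp(b * log(a)).
    for k in (-1, 0, 1):
        print(k, cmath.exp(b * (cmath.log(a) + 2j * cmath.pi * k)))
    # k = 0 is the principal branch, which is what the laptop's (a ** b) reports.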