Soshul meedja.

rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Soshul meedja.

Post by rotting bones »

Torco wrote: Tue Mar 28, 2023 5:58 pm and with great power comes the possibility that misalignments, or bugs, are harmful... which is the opposite of this.
...
and misalignments both look like bugs and cause real harm.
If you're worried about bug-like misalignments, then none of my above arguments apply. Bugs are disruptive already.

In that case, my only question is: Are humans better at the task or AI?
Torco wrote: Tue Mar 28, 2023 5:58 pmI think these are two distinct meanings of the word: in the first case, it means like ability to relatively autonomously fulfill a goal, i.e. to do work, and in the second, it means to be free. In the first sense, capitalism increases the agency of workers: it gives them better tools, better methods and better coordination capabilities: a person can till a vast field in an afternoon these days blabla.
I don't think agency means power per se. I'm using agency in the sense of having autonomous goals. I only mentioned agency in the context of misalignments that resemble autonomous goals rather than bugs.
Torco wrote: Tue Mar 28, 2023 5:58 pm but I think your two positions are hard to reconcile here: if you think an AI can be built such that it manipulates people's behaviours in a more or less autonomous way, in the sense that it puts out output (i.e. enacts behavior), gets input from the world, and tries to maximize an outcome of the "shaping people's behavior" type... that is, almost definitionally, power, no? power in the "hands" of a blind and inscrutable pseudo-mind.
I don't think human minds are infinitely malleable. Under capitalism, I don't think a super-intelligent AI will get anywhere unless it's given a bank account and a significant amount of starting capital, and is allowed to buy and sell. Thing is, humans cause trouble in that way already:
rotting bones wrote: Sun Mar 19, 2023 5:02 pm I have a more fundamental objection. You obviously can't make any headway in the world without intelligence, but I personally think that money makes a much bigger difference under our current economic system. You don't need a super-intelligent AI to wreak havoc if it can find an angel investor to give it tons of cash. In fact, the intelligence need not be artificial. Humans do that already. Conversely, a super-intelligent AI won't be able to do anything unless it gets access to someone's bank account.
Torco wrote: Tue Mar 28, 2023 5:58 pm I have! Nothing fancy: a TensorFlow model that guesses whether a name is a boy's name or a girl's name, and that got like 98% right on test data it had never seen before. (98% sounds like nothing much, but many names *I* couldn't have figured out.) A toy, of course, I'm sure the pros program those for breakfast, but it's given me a general notion. And I agree, most cases of misalignment are trivial and remarkably silly: one gets impressions such as "so intelligent and so deeply stupid at the same time". Like those videogame AIs that are trained to reach the goal and that, when put in a novel environment, just go right, because that's what you do in a videogame.
Now try it with a very complex model. Some kind of super-LSTM or transformer that aggregates streams of context from multiple outlets.
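For anyone curious, the kind of toy Torco describes is only a couple of dozen lines in Keras. Here's a rough sketch, not Torco's actual code: a character-level LSTM, with a made-up names.csv file (columns name,gender) and made-up hyperparameters standing in for whatever he really used.

# Rough sketch of a character-level name -> gender classifier (TensorFlow 2.x / Keras).
# The file names.csv, its columns, and every hyperparameter are invented for illustration.
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers

df = pd.read_csv("names.csv")                        # hypothetical dataset: name,gender
names = df["name"].str.lower().values
labels = (df["gender"] == "f").astype(int).values    # 1 = girl's name, 0 = boy's name

# Encode each name as a fixed-length sequence of character indices (0 = padding).
chars = sorted({c for n in names for c in n})
char_to_idx = {c: i + 1 for i, c in enumerate(chars)}
max_len = max(len(n) for n in names)
X = np.zeros((len(names), max_len), dtype=np.int32)
for i, n in enumerate(names):
    X[i, :len(n)] = [char_to_idx[c] for c in n]

model = tf.keras.Sequential([
    layers.Embedding(input_dim=len(chars) + 1, output_dim=16, mask_zero=True),
    layers.LSTM(32),                                 # reads the name character by character
    layers.Dense(1, activation="sigmoid"),           # probability that the name is a girl's
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, validation_split=0.2, epochs=10)

Presumably most of the 98% comes from the network picking up on typical endings (-a, -ette, -o and so on), which is also why such a model can look "so intelligent and so deeply stupid at the same time" the moment a name breaks the pattern.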
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Soshul meedja.

Post by rotting bones »

Torco wrote: Tue Mar 28, 2023 2:38 pm 2 is, it seems to me, just a matter of time: and anyway the model wouldn't need to be aware of everything in reality. Plus, again, abstraction: the environment for an agent-AI, if it were to do, for example, HR, could be as simple as a company: 100 people, their work, seven catfish LinkedIn accounts, the feeds from those accounts, stock market numbers and maybe a chat window connected to the main shareholders. This is not that big a universe to pipeline into a model, but it is enough for this entity I'm positing to do its job.
I feel like finding the right simplification is the hard problem. Once that is found, anyone's nephew can do the automation.

P.S. Forgive me if I'm being annoying. I'm sleepy, I can't sleep, and I don't know what I'm saying half the time.
Torco
Posts: 796
Joined: Fri Jul 13, 2018 9:11 am

Re: Soshul meedja.

Post by Torco »

rotting bones wrote: I feel like finding the right simplification is the hard problem. Once that is found, anyone's nephew can do the automation.
yeah, that's the fun of it too, the philosophy of it.
hwhatting
Posts: 1093
Joined: Mon Jul 09, 2018 3:09 am
Location: Bonn

Re: Soshul meedja.

Post by hwhatting »

Just throwing this in - I don't know whether any AI would ever "want" to subjugate humanity, but it could certainly be programmed in a way that would feel like that to people. Already now there are cars with systems that override human decisions, e.g. that won't let the driver exceed the speed limit, or that brake when they see an obstacle even if the driver puts their foot on the gas pedal. Create more complex systems like that - say, houses that won't let you leave if they consider conditions outside dangerous, or a fridge that dispenses only healthy food - add legal systems that make such restraints obligatory, for your own good, and at some point there's nobody left who can override these things even when they get buggy or make harmful choices based on faulty assumptions... that looks like a possible scenario to me. Of course, it would have been enabled by human actions, but that would be true of any scenario where AI takes over.
Or assume an automated law-enforcement system going wrong that way, putting everybody in prison or shooting everyone as dangerous criminals. Or automated weaponry coming to the conclusion that it should nuke humanity. Not out of a sense of agency, but because of faulty goals and assumptions programmed into it.
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Soshul meedja.

Post by rotting bones »

hwhatting wrote: Wed Mar 29, 2023 12:35 pm Just throwing this in - I don't know whether any AI would ever "want" to subjugate humanity, but it could certainly be programmed in a way that would feel like that to people. Already now there are cars with systems that override human decisions, e.g. that won't let the driver exceed the speed limit, or that brake when they see an obstacle even if the driver puts their foot on the gas pedal. Create more complex systems like that - say, houses that won't let you leave if they consider conditions outside dangerous, or a fridge that dispenses only healthy food - add legal systems that make such restraints obligatory, for your own good, and at some point there's nobody left who can override these things even when they get buggy or make harmful choices based on faulty assumptions... that looks like a possible scenario to me. Of course, it would have been enabled by human actions, but that would be true of any scenario where AI takes over.
The Internet of Things already exists, and it's super annoying. My only objection is to the idea that a misalignment of goals will lead to a paperclip-level catastrophe.
hwhatting wrote: Wed Mar 29, 2023 12:35 pm Or assume an automated law-enforcement system going wrong that way, putting everybody in prison or shooting everyone as dangerous criminals. Or automated weaponry coming to the conclusion that it should nuke humanity. Not out of a sense of agency, but because of faulty goals and assumptions programmed into it.
This would require someone to put an AI in charge of whether or not to set off nukes. You'll get similar results if you put your cat in charge of that.
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Soshul meedja.

Post by rotting bones »

Clarification: When I say there's a new model that supports autonomous browsing, I'm not talking about GPT-4.
Torco wrote: Tue Mar 28, 2023 2:38 pm 1 is true, but it doesn't have to be: it's mitigable through abstraction, and having a model with continuity and some model of the outside world is a necessary feature for many applications which, trust me, would be profitable as fuck.
BTW, if you actually try to build a large neural network, you'll run into things like the vanishing gradient and exploding gradient problems. There are ways of dealing with these, but I'm not sure those solutions are what you're looking for in this quote. For example, the ResNet architecture deals with the vanishing gradient problem by creating short-circuits (skip connections) that add information from the shallower layers to the deeper ones.
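A minimal sketch of that short-circuit idea in Keras (TensorFlow 2.x). ResNet proper uses convolutional layers; the dense layers, sizes and names below are placeholders just to keep the example tiny.

# Sketch of a residual ("shortcut") block: the block's output is its input plus a
# learned correction, so gradients always have an identity path back to shallow layers.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, units=64):
    shortcut = x
    y = layers.Dense(units, activation="relu")(x)
    y = layers.Dense(units)(y)
    y = layers.Add()([shortcut, y])      # the short-circuit: shallow info added to deep
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(64,))     # placeholder feature size
x = residual_block(inputs)
x = residual_block(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

Stack enough of these and the vanishing gradient mostly stops being the thing that limits depth.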
Torco wrote: Tue Mar 28, 2023 2:38 pm This is my whole point, though: there's no particular reason an AI would "rise against us"... but there are at least conceivable ways in which competent enough AIs, able to set their own instrumental goals and follow through on them, could exist. And, if they existed, they would be just one more thing people would need to worry about: kind of like the Fed, or the Rockefeller Foundation.

And if you think this is, in principle, impossible, let me, again, point out that we've already built agent-like powerful entities that, to some degree or other, exert power over us: they don't run on silicon, they're social systems, but still. They haven't "turned against us", but they... well... exist.
These are the weeds. I'm using "turning against" loosely to refer to the paperclip problem.

1. In that sense, humans can "turn against" humans without thinking that's what they're doing either, e.g. by refusing to address climate change.

2. An AI can't "turn against" humans while thinking it's maximizing profit because, strictly speaking, it doesn't "think" anything. It's just one function that was obtained by minimizing loss on some dataset.
MacAnDàil
Posts: 763
Joined: Thu Aug 09, 2018 4:10 pm

Re: Soshul meedja.

Post by MacAnDàil »

One problem with these pandora box AIs that Italy has thankfully banned is the ecological impact. The resources for the robots need to come from somewhere, and we're already way overusing the resources of the planet we need to survive as it is. We'd be better off making a list of jobs only robots can do rather than one of jobs only humans can do.

If we're going to invent silly dystopian pandora box machines, why not a time machine to prevent the invention of them? But that could easily go wrong in the wrong hands.
Raphael
Posts: 4566
Joined: Sun Jul 22, 2018 6:36 am

Re: Soshul meedja.

Post by Raphael »

MacAnDàil wrote: Sat Apr 01, 2023 1:52 am If we're going to invent silly dystopian pandora box machines, why not a time machine to prevent the invention of them? But that could easily go wrong in the wrong hands.
Because the invention of time machines is probably a lot less likely to be possible than the invention of those pandora box machines.
Raphael
Posts: 4566
Joined: Sun Jul 22, 2018 6:36 am

Re: Soshul meedja.

Post by Raphael »

MacAnDàil wrote: Sat Apr 01, 2023 1:52 am One problem with these pandora box AIs that Italy has thankfully banned is the ecological impact.
What makes you think that laws can stop an out-of-control AI machine once it gets going?
MacAnDàil
Posts: 763
Joined: Thu Aug 09, 2018 4:10 pm

Re: Soshul meedja.

Post by MacAnDàil »

Raphael wrote: Sat Apr 01, 2023 2:42 am
MacAnDàil wrote: Sat Apr 01, 2023 1:52 am If we're going to invent silly dystopian pandora box machines, why not a time machine to prevent the invention of them? But that could easily go wrong in the wrong hands.
Because the invention of time machines is probably a lot less likely to be possible than the invention of those pandora box machines.
That suggestion was not too serious.
MacAnDàil
Posts: 763
Joined: Thu Aug 09, 2018 4:10 pm

Re: Soshul meedja.

Post by MacAnDàil »

Raphael wrote: Sat Apr 01, 2023 2:43 am
MacAnDàil wrote: Sat Apr 01, 2023 1:52 am One problem with these pandora box AIs that Italy has thankfully banned is the ecological impact.
What makes you think that laws can stop an out-of-control AI machine once it gets going?
They can at least prevent it from getting to that stage, and perhaps even provide a framework for dealing with it if it does.
hwhatting
Posts: 1093
Joined: Mon Jul 09, 2018 3:09 am
Location: Bonn

Re: Soshul meedja.

Post by hwhatting »

rotting bones wrote: Wed Mar 29, 2023 12:43 pm
This would require someone to put an AI in charge of whether or not to set off nukes. You'll get similar results if you put your cat in charge of that.
My cat would never do that! Look at him!
[image: cat photo]
But to the point, it's exactly that kind of abdication of responsibility that I'm afraid of - that people will put an AI in charge of systems that can destroy life, and then lose the will or the ability to override it when that leads to catastrophic outcomes, not because the AI has gained consciousness and decided to stick it to mankind, but because of programming that was not thought through well enough or because of bugs.
MacAnDàil
Posts: 763
Joined: Thu Aug 09, 2018 4:10 pm

Re: Soshul meedja.

Post by MacAnDàil »

hwhatting wrote: Tue Apr 04, 2023 8:17 am But to the point, it's exactly that kind of abdication of responsibility that I'm afraid of - that people will put an AI in charge of systems that can destroy life, and then lose the will or the ability to override it when that leads to catastrophic outcomes, not because the AI has gained consciousness and decided to stick it to mankind, but because of programming that was not thought through well enough or because of bugs.
Yes, just like how, if there were a Matrix, it would not be set up by the robots but by a combination of lazy humans who neglect real life and greedy humans who want to make billions off the lazy ones.
Moose-tache
Posts: 1746
Joined: Fri Aug 24, 2018 2:12 am

Re: Soshul meedja.

Post by Moose-tache »

To anyone who thinks politicians wouldn't put an AI in charge of nukes, look at the current situation. Our economy is based on believing in a fictional idea of how markets work, which our leaders defer to whenever they need to make a decision. Instead of using their resources to accomplish a task, they burn the resources on an altar to motivate this fictional force to accomplish the task for them. Our state is organized according to a genteel version of Huitzilopochtli worship. The high priests would definitely put an AI in charge of nukes.
I did it. I made the world's worst book review blog.
Travis B.
Posts: 6858
Joined: Sun Jul 15, 2018 8:52 pm

Re: Soshul meedja.

Post by Travis B. »

I for one welcome our new AI overlords.
Yaaludinuya siima d'at yiseka wohadetafa gaare.
Ennadinut'a gaare d'ate eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
Torco
Posts: 796
Joined: Fri Jul 13, 2018 9:11 am

Re: Soshul meedja.

Post by Torco »

Moose-tache wrote: Mon Apr 10, 2023 4:21 pm To anyone who thinks politicians wouldn't put an AI in charge of nukes, look at the current situation. Our economy is based on believing in a fictional idea of how markets work, which our leaders defer to whenever they need to make a decision. Instead of using their resources to accomplish a task, they burn the resources on an altar to motivate this fictional force to accomplish the task for them. Our state is organized according to a genteel version of Huitzilopochtli worship. The high priests would definitely put an AI in charge of nukes.
a relevant difference is that the decisions made by the god Market benefit the decision-makers, and that's why they foster it: "market" just means "owners of businesses", after all. It's not clear to me that putting things in the hands of this or that AI does the same. Like, sure, if replacing a person with a convolutional neural network means job x is done more cheaply, then they'll do it; but I don't think that putting actual decisions in the hands of a GPT is going to necessarily be beneficial to them, for example in the case of nukes.
Ares Land
Posts: 3021
Joined: Sun Jul 08, 2018 12:35 pm

Re: Soshul meedja.

Post by Ares Land »

Torco wrote: Fri Apr 14, 2023 8:18 am a relevant difference is that the decisions made by the god Market benefit the decision-makers, and that's why they foster it: "market" just means "owners of businesses", after all. It's not clear to me that putting things in the hands of this or that AI does the same. Like, sure, if replacing a person with a convolutional neural network means job x is done more cheaply, then they'll do it; but I don't think that putting actual decisions in the hands of a GPT is going to necessarily be beneficial to them, for example in the case of nukes.
I agree with you on that.
Essentially, political decisions are made in such a way that they benefit a number of key people and lobbies. The Market cultist's job is to explain that this is the way the Invisible Hand wants it. In time they come to believe it sincerely, of course, but that's the way good lies work :)

I don't believe AI will be given control over nukes, for a very cynical reason. Nuclear weapons are scary because of their sheer destructive power, the radioactive fallout, and the fact that most world leaders are a little crazy. There's something deeply scary about the human in charge of the button being more than a little crazy, and nuclear powers milk it for all it's worth. See Russia: of course Russia has every rational reason not to launch nukes; then again, Putin is a crazy and violent motherfucker, so who knows? Of course Trump won't nuke North Korea, but it's Trump we're talking about -- who knows what goes through that nutty orange head of his?

I don't see an AI being as convincing at playing the madman as a real madman.
Raphael
Posts: 4566
Joined: Sun Jul 22, 2018 6:36 am

Re: Soshul meedja.

Post by Raphael »

Sorry to interrupt this fascinating discussion, but I've got a question about actual social media:

I've got a Facebook page which, for quite a long time, I've used for basically nothing, and which I only keep around for two reasons:

1) in case that someone who knows me or used to know me wants to get in contact and doesn't have any other way to contact me, and

2) in case that at some point in the future I end up in some kind of situation where I'm basically required to join a particular Facebook group.

But I'm starting to wonder whether either of those two factors is really worth the hassle - that is, the constant annoying emails from Facebook.

So, what do you think, people: should I keep my Facebook page, or delete it?
Torco
Posts: 796
Joined: Fri Jul 13, 2018 9:11 am

Re: Soshul meedja.

Post by Torco »

I would just put the Facebook emails into a filter and make that filter feed into spam, tbh
zompist
Site Admin
Posts: 2948
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Soshul meedja.

Post by zompist »

Raphael wrote: Fri Apr 14, 2023 12:15 pm I've got a Facebook page which, for quite a long time, I've used for basically nothing, and which I only keep around for two reasons:

1) in case that someone who knows me or used to know me wants to get in contact and doesn't have any other way to contact me, and
2) in case that at some point in the future I end up in some kind of situation where I'm basically required to join a particular Facebook group.
I deleted my Facebook account years ago, after reading one too many articles about their aggressive data tracking. Nothing I've heard since has made it more attractive.

Cory Doctorow recently described the "enshittification" of social media:
Doctorow wrote:Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
He discusses Facebook later on in the article. tl;dr: you can't even count on your point (1) any more. Facebook doesn't want to show you posts from people you care about any more. As for (2), if that hasn't happened yet, why would it now? And if you absolutely had to, you could create a new account.