Soshul meedja.

Post by rotting bones »

I should probably clarify that I believe an AI like a brain upload can perfectly simulate sentience without actually being sentient. When I say you don't need a sentient AI to automate production, I mean you don't need an AI that simulates sentience. When I say that intelligence is orthogonal to sentience, I'm talking about actual sentience.

Post by zompist »

rotting bones wrote: Sun Mar 26, 2023 12:31 pm I should probably clarify that I believe an AI like a brain upload can perfectly simulate sentience without actually being sentient. When I say you don't need a sentient AI to automate production, I mean you don't need an AI that simulates sentience. When I say that intelligence is orthogonal to sentience, I'm talking about actual sentience.
What do people even mean by brain uploading these days?

I'd be kind of maximalist about this: we have little idea what level of neural architecture needs to be copied, so I'd expect all 170 billion or so brain cells (not just neurons) need to be scanned and represented. A neuron is itself more comparable to a computer than to a deep learning weight, and a lot of the information in the brain is encoded in the connections between neurons— and a single neuron can connect to up to 15,000 other neurons.
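Back-of-envelope, just to show the scale (the neuron count is the usual published estimate, the 15,000 is from above, and the bytes-per-connection figure is my own very optimistic assumption):

import math

# Rough scale of a "maximalist" scan: ~86 billion neurons, up to ~15,000
# connections each. Bytes per connection is an optimistic guess.
neurons = 86e9
connections_per_neuron = 15e3
connections = neurons * connections_per_neuron         # ~1.3e15
bytes_per_connection = 8                                # target id + weight, at minimum
print(f"{connections:.1e} connections, ~{connections * bytes_per_connection / 1e15:.0f} petabytes")
# -> 1.3e+15 connections, ~10 petabytes, just for the connectivity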

First, how do you get this information? If you're imagining a destructive scan— take off 1 nm of brain each time, working from the top— it won't work, for the same reason you can't "scan" your house by removing 1 cm at a time. Once you remove the top bolt, the roof collapses. MRI scans can't scan individual neural connections, and higher-energy scans will destroy what you're scanning.

I can (just barely) buy nanobots as the long-term solution, but that's not going to be in the lifetime of today's enthusiasts.

And second, what is supposed to happen when you load up your computer with your data? "You" suddenly awake inside the computer? For the purposes of argument I'll grant that someone wakes up there, and even thinks they are you. But once you have the data you can make an indefinite number of AIs that think they are you. How anyone thinks that they themselves will experience the upload, I don't know.

Maybe you always destroy yourself after the nanobot scan, so no one is around with a different viewpoint? But what if, say, you go to the hospital and they surreptitiously do a scan and load it up. Does the scanned AI get to claim your assets?

I just think people have read too much sf and imagine something that just isn't going to happen.

Post by rotting bones »

zompist wrote: Sun Mar 26, 2023 7:26 pm And second, what is supposed to happen when you load up your computer with your data? "You" suddenly awake inside the computer? For the purposes of argument I'll grant that someone wakes up there, and even thinks they are you. But once you have the data you can make an indefinite number of AIs that think they are you. How anyone thinks that they themselves will experience the upload, I don't know.
Serious brain uploading theorists don't believe in shared identities: https://www.pdfdrive.com/the-age-of-em- ... 83174.html

Post by Moose-tache »

The easiest way to upload your consciousness would be to make a new brain. Luckily, we know how to do that; we've been doing it for a hundred thousand years. Now, it does take time. And it can't happen in a vacuum. Much of what makes your brain you is due to physical arrangements in the brain based on experiences in your past. So you'll need to run this new brain through its paces in a way that most closely matches your history. For example, some sort of standardized initiation process that takes up much of the day and happens according to a centrally determined rubric. Then of course you need this new brain to be able to sustain itself while you wait for your current brain to come to the end of its usable life, so the new brain will need some job training, again at a central facility that operates according to government standards. The new brain will need to absorb your memories, and the fastest way to do this is orally, supplemented by images captured from your perspective as the memories were made by some sort of device that can turn light into static or moving images. Once all this is complete, you don't even need the actual scanning technique. Once the first brain reaches the end of its use period, the new brain is already up and running. Problem solved.

Post by alice »

zompist wrote: Sun Mar 26, 2023 7:26 pmI just think people have read too much sf and imagine something that just isn't going to happen.
It's like a lot of sexy SF ideas certain people get very excited about: make a lot of noise and don't worry about trivial details like scale, cost, what to do if things go wrong, and so on. See also: terraforming Mars, militia enclaves, body augmentation, and cryptocurrencies. Or, going back some years, a Beowulf cluster of cats with buttered toast strapped to their backs, scaling up a perpetual-motion machine to a useful size.

Post by Raphael »

rotting bones wrote: Sun Mar 26, 2023 12:31 pm I should probably clarify that I believe an AI like a brain upload can perfectly simulate sentience without actually being sentient.
Well, at one point in the past - I don't know to which extent he still agrees with it - zompist argued that it's not all that easy to distinguish between "real" and "simulated" things, at least when it comes to non-physical things. To quote one of his examples, "Does ELIZA amuse you, or only simulate amusing you?"

Post by zompist »

Raphael wrote: Mon Mar 27, 2023 6:24 am
rotting bones wrote: Sun Mar 26, 2023 12:31 pm I should probably clarify that I believe an AI like a brain upload can perfectly simulate sentience without actually being sentient.
Well, at one point in the past - I don't know to which extent he still agrees with it - zompist argued that it's not all that easy to distinguish between "real" and "simulated" things, at least when it comes to non-physical things. To quote one of his examples, "Does ELIZA amuse you, or only simulate amusing you?"
Yeah, that's from the Chinese Room page. I do still agree with myself here. :)

I'm not sure I agree with rotting bones, but mostly because it seems way too vague. With humans, we in effect have an ostensive definition of sentience, which famously does not satisfy logicians, but works for most other humans. I don't think we have a good test for computational sentience.

I am a little worried that there is a clear point when a corporation has developed, or is about to develop, a sentient AI: when it fires its ethics team.

Post by Moose-tache »

What are the stages of developing a working AI?

1) Get investors
2) Product is discovered to have pornographic applications
3) Investors pull out
4) NSFW filter added
?)
?)
?)
?) Ethics team fired for sending executives pointed emails containing words like "trolley" and "concerning."
?) Skynet achieved. All humanity dead. Stacey posthumously admits she was wrong to pick Chuck over you.

Post by Torco »

The distinct advantage of the project/fantasy of computer soul uploading, over just making a new brain and uploading yourself into it, is that we're noticeably getting better at making computers and software capable of more and more things, whereas we're, other than by having a kid and killing it, quite incapable of making new brains to upload ourselves onto.

I think sentience is the wrong way to go about thinking about the possibilities of AI: why would we expect computers to think or experience the world as we do just... spontaneously? Sure, you could, in principle, get a computer to simulate whatever the brain does so well that you could copy a person onto it, all of its memories and inclinations and feelings and whatnot... feelings are just one of the things brains do, and there's no reason to think they're the result of some magically impossible-to-emulate soul thing. But vertebrate brains are their own kind of thing, quite different from bunches of transistors (and indeed it seems to me at least mammals all have some degree of experience, feelings, thoughts and the like, birds too, though I'm not convinced about squid, and it seems likely that arthropods categorically don't have much of an inner life). Sentience assumes that there's this one way in which highly competent agents must operate, i.e. on the basis of thoughts and feelings and sense experiences and... well, the way we do; let's face it, sentience just kinda means "a mind like mine". But there's no... okay, I don't believe I have a reason to think Skynet will have any of those. We could try emulating those, but that's not what we're doing in AI; the eggheads are all doing *task* emulation, not feelings emulation, so it seems likely that if an AI becomes a competent agent, at the level of strategizing around us (i.e. being smarter in the way we're smarter than chimps; it wouldn't take that big of a difference tbh), it'll be a thing that does tasks and doesn't do feelings or thoughts.

I suppose my Mankind conworld idea boils down to that thought: all kinds of highly competent agent-like things, which aren't human minds, with the power to change the world around them, perpetuate themselves, maximize some utility function, etcetera, exist already. If we need examples: institutions, companies, the CIA, the Catholic Church... These kinds of things are somewhat aligned with human aims, both because they leverage them and because they create them. But then again, there's no reason why a powerful, extremely clever, agentive AI wouldn't do those kinds of things; indeed it'd be weird if it didn't! Whatever its goals are, they're probably not going to be "destroy humanity", and as an instrumental goal, destroying humanity is quite a risky move! *Can you*, Skynet thing, really make sure to do all the things that would need doing in order to effectively maximize whatever? All of them? Picking up every pebble, every asteroid, every bit of matter that can be turned into a paperclip? If nothing else, humans are pretty useful tools, compared to rocks, interstellar gas and ice. This effect is compounded if we consider that what the eggheads are actually doing with AI, the kinds of tasks they elect to use the tech to accomplish, belong to the general category of *making people do things*, so if ChatGPT goes singularity, it'll probably start out being pretty good at doing that, as opposed to... I don't know, what's next for singularity AI after it kills us all, von Neumann probes? Nah, you start by taming humans. They've done half the work for you already; they've gotten themselves sorta kinda tamed.

Post by rotting bones »

The idea is that if you want to turn everything into paperclips, things might go more smoothly if you nuke the planet first, ridding yourself of billions of pesky antagonists.

My fundamental objections are still:

1. We never put AIs with a sense of agency in charge of anything. If you want an AI to have a sense of agency, you need to write a system of equations that define its sense of agency. Why bother?

2. Whether they are agentive or not, we always train AIs to do things for us. Their "goals" don't align with ours in the same way that the "goals" of buggy programs don't align with ours.

For these reasons, I think the most likely scenario of an unaligned AI destroying the planet is if an eccentric asshole billionaire decides that only a computer is smart enough to inherit his empire.

Post by Torco »

but I'm not talking about mystical or metaphysical agency, I'm talking empirical agency: acting as an agent. it doesn't have to have a 'sense' (?) of being an agent to behave as one, again, look at corporations.

And yeah, sure, antagonists: but that's only if it's unable to manipulate humans very effectively, which is precisely what AIs in the real world do. That'd be, like... its thing. And maybe its paperclip has something to do with humans (most likely it would, wouldn't it?), so... enslaving. Optimize the whole of human society to maximize YouTube watch time or something.

Post by rotting bones »

Torco wrote: Mon Mar 27, 2023 9:16 pm but I'm not talking about mystical or metaphysical agency, I'm talking empirical agency: acting as an agent. it doesn't have to have a 'sense' (?) of being an agent to behave as one, again, look at corporations.
Corporate agency emerges from having to please shareholders. This is the kind of agency that practical AIs don't have: they don't have an internal model that tracks their environment and responds to events that happen to them. In this respect, they resemble patterns that are subject to inertia, like rocks and Hello World programs.

Agency is not a metaphysical matter for AIs. If you take an AI course, there is a chapter in the textbook that explicitly tells you how to program an agent, e.g. an AI that plays Wumpus.
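For a sense of what that textbook chapter means by an agent - a minimal sketch, not the actual Wumpus code, and the environment details here are made up:

# Minimal textbook-style agent: keep an internal model, update it from
# percepts, choose actions from it. Everything here is illustrative only.
def neighbor(pos, action):
    dx, dy = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}[action]
    return (pos[0] + dx, pos[1] + dy)

class SimpleAgent:
    def __init__(self):
        self.model = {"position": (0, 0), "visited": set()}   # internal world model

    def step(self, percept):
        # fold the new percept into the model, then decide what to do
        self.model["position"] = percept["position"]
        self.model["visited"].add(percept["position"])
        for action in ("up", "right", "down", "left"):
            if neighbor(self.model["position"], action) not in self.model["visited"]:
                return action
        return "wait"

agent = SimpleAgent()
print(agent.step({"position": (0, 0)}))   # -> "up"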
Torco wrote: Mon Mar 27, 2023 9:16 pm And yeah, sure, antagonists: but that's only if it's unable to manipulate humans very effectively, which is precisely what AIs in the real world do. That'd be, like... its thing. And maybe its paperclip has something to do with humans (most likely it would, wouldn't it?), so... enslaving. Optimize the whole of human society to maximize YouTube watch time or something.
It is not possible to direct the world by being smart. The way to direct the world is to have a lot of people believe their interests align with yours for decades on end. People are suspicious by default, and an AI starts with a disadvantage in that department.

Post by Torco »

Well, sure, only some AIs can do a singularity, but that kind, the kind that are specifically programmed to act in the world, are also the kind of things IT companies might want! For example, a hiring AI: hiring is a lot of work, and most of it (checking CVs, posting job ads, selecting three guys for boss man to interview) *can*, at least in principle, and likely in practice right now, be automated. Also, reporting AIs could write entire newspapers, influencer AIs could get many views, and algorithmically generate drama or whatever... It's not that niche to have an AI behave agentively, even if it doesn't have like a sense of agency.
rotting bones wrote: Corporate agency emerges from having to please shareholders. This is the kind of agency that practical AIs don't have: they don't have an internal model that tracks their environment and responds to events that happen to them.
Agreed, but that doesn't mean they couldn't get one, for example, from it being profitable to program one into them.

Post by rotting bones »

Torco wrote: Tue Mar 28, 2023 6:42 am Agreed, but that doesn't mean they couldn't get one, for example, from it being profitable to program one into them.
Under most circumstances, it's either unprofitable or physically impossible to give your AI the required sense of agency. I will list my objections under 3 headings:

1. Continuity: ChatGPT already exhibits some memory within each conversation. However, propagating this sense of continuity across sessions will probably lead to a combinatorial explosion because of the way RNNs and LSTMs work: https://colah.github.io/posts/2015-08-U ... ing-LSTMs/ Assuming you could afford it, why invest human effort, chip performance, storage and other resources into implementing such a feature?

2. Complexity: The world humans navigate is complex, and humans adapted to it over a very long time. Even if it's possible to represent such an environment with current technology (to train the AI or inside the internal world model), it's still an unnecessary expense for most practical tasks. Assuming you successfully represent the environment, navigating the combinatorial explosion that will follow is no mean feat. See my response to chris_notts:
rotting bones wrote: Sun Mar 12, 2023 7:30 pm To be perfectly honest, I can't even begin to imagine what an "unfreedom" cost function would look like. First, we'd have to program a virtual environment similar to ours (impossible). We have to define the AIs as genetic models proliferating in that environment. We need the models to develop social dynamics (a mystery). We need them to develop group dynamics with complex communication skills (all skills tend towards extreme simplicity by default). For that we need cunning and violent predators the models need to outwit. In the social dynamics, there needs to be a cost associated with the models failing to punish other models that lord it over them (sexual selection). After millions of years of selection, we need to put these models in an environment where they see humans as threatening their existence in a way they consciously or unconsciously see as analogous to sexual selection.
3. The nature of goal-directed behavior: From the outset, the goals of an AI agent are tailored to the needs of its programmers. It doesn't display random behavior like paperclip generation. For basic examples, see chapters 7-11 in this textbook: https://github.com/yanshengjia/ml-road/ ... ition).pdf
Torco wrote: Tue Mar 28, 2023 6:42 am Well, sure, only some AIs can do a singularity, but that kind, the kind that are specifically programmed to act in the world, are also the kind of things IT companies might want! For example, a hiring AI: hiring is a lot of work, and most of it (checking CVs, posting job ads, selecting three guys for boss man to interview) *can*, at least in principle, and likely in practice right now, be automated. Also, reporting AIs could write entire newspapers, influencer AIs could get many views, and algorithmically generate drama or whatever... It's not that niche to have an AI behave agentively, even if it doesn't have like a sense of agency.
It would be a miracle for agentive behavior without an internal model to direct the world. That's as if the wind carved furrows into the surface of a rock, and those furrows inspired whoever looked at them with ideas that led the rock to world domination. That's not how the world works.

I think people feel that an AI is bound to turn against its creator because the Bible says that Adam disobeyed God, a story later adapted by the Gothic novel Frankenstein. These intuitions are not applicable to physical reality.

Personally, I think humans are much more likely to turn against humans than an AI is. That's the kind of thing humans evolved for, not systems of equations carefully curated from scratch. Humans are basically projecting their own temptations and insecurities onto inanimate objects. Animism rises again.

Post by Torco »

Oh, I don't disagree with you that much, I suppose. "the singularity" is by no means unavoidable, indeed it's not even that likely, depending on what we mean. 1 is true, but it's not necessarily true, it is mitigatable through abstraction, and having a model with continuity and some model of the outside world is a necessary feature for many applications which, trust me, would be profitable as fuck. HR-GPT wouldn't need to be aware of all of its conversations with every employee, it can, I don't know, embed those into a lower dimensionality "impression" of each employee it uses to make decisions... there's plenty of dimensionality reduction algos.
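Something like this, say (the embeddings below are random stand-ins for whatever encoder a hypothetical HR-GPT would use, and PCA is just one of the many algos that would do):

import numpy as np
from sklearn.decomposition import PCA

# Squash a pile of per-conversation embeddings into one small "impression"
# vector per employee. All the sizes and data here are placeholders.
rng = np.random.default_rng(0)
conversation_embeddings = rng.normal(size=(200, 768))   # 200 past conversations

pca = PCA(n_components=16)
reduced = pca.fit_transform(conversation_embeddings)     # shape (200, 16)
impression = reduced.mean(axis=0)                        # one 16-dim summary vector
print(impression.shape)                                  # (16,)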

2 is, it seems to me, just a matter of time: and anyway the model wouldn't need to be aware of everything in reality. plus, again, abstraction: the environment for an agent-AI, if it was to do, for example, HR, could be as simple as a company: 100 people, their work, seven catfish linkedin accounts, the feeds from those accounts, stock market numbers and maybe a chat window connected to the main shareholders: this is not that big a universe to pipeline into a model, but it is enough for this entity I'm positing to do its job.

3 is right, but paperclip is just a proxy for "whatever it ends up trying to maximize". Again, an HR AI could be trying to maximize company profits. or employee well-being, or, more likely, whatever a programmer thought was close enough to company profits and employee wellbeing that they could finish that line and go home.

Though not obviously going to happen, agent AIs are also not impossible, and the point I was making is that *if* they come about, it would probably not be like in Her, a person in your cellphone that's just a very clever human with all the feelings and emotions you'd expect, and it wouldn't have any special motivation to kill us all: it's unlikely that'd be its paperclip. However, you *can* have AI that ends up acting at least as agentively as current social systems do, and where the boundary between software and institution may even get quite blurry. Agent AIs are not yet implemented (as far as we know), but I really don't think they're that far off: an AI that does things in the world, such as invest in the stock market, or hire people, or influence public opinion: this is not intergalactic travel: just look at the emerging genre of "accomplishing a human task with AI" youtube videos: for example you can write a video script with gpt, make a character with midjourney, animate it with I don't remember the name of that other model, turn the script to audio with whatever else, find appropriate images through a similar method, and upload the whole thing to youtube through an API: once you have that pipeline working, such that you can just hit play, the next natural step is to set up a control loop for it, no? one that checks the news and the rest of youtube for what is 'trendy', and decides on the topic of the next video according to some function, such as how many or how few people you get to click the link to join the PLA infantry corps, or how much it looks like you're making people believe in antivax conspiracy theories... these are all things people would pay a fuckton of money to be able to automate.
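And the control loop could be as dumb as this sketch - every helper below is a made-up stand-in for the real services (the script model, the rendering pipeline, the upload API), not actual vendor code:

import random

def fetch_trending_topics():
    return ["topic_a", "topic_b", "topic_c"]       # stand-in for scraping news/youtube

def make_and_upload_video(topic):
    return f"video_about_{topic}"                   # stand-in for the whole script/image/audio/upload pipeline

def measure_engagement(video_id):
    return random.random()                          # stand-in for clicks, watch time, sign-ups...

def control_loop(n_rounds=10):
    # try topics, keep whichever one scores best on the chosen metric
    best_topic, best_score = None, float("-inf")
    for _ in range(n_rounds):
        topic = random.choice(fetch_trending_topics())
        score = measure_engagement(make_and_upload_video(topic))
        if score > best_score:
            best_topic, best_score = topic, score
    return best_topic, best_score

print(control_loop())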
rotting bones wrote: Personally, I think humans are much more likely to turn against humans than an AI is. That's the kind of thing humans evolved for, not systems of equations carefully curated from scratch. Humans are basically projecting their own temptations and insecurities onto inanimate objects. Animism rises again.
This is my whole point, though: there's no particular reason an AI would "rise against us"... but there are at least conceivable ways in which competent-enough AIs that are able to set their own instrumental goals and follow through on them could exist. And, if they existed, they would be just one more thing people would need to worry about: kind of like the Fed, or the Rockefeller Foundation.

And if you think this is, in principle, impossible, let me, again, point out that we've already built agent-like powerful entities that, to some degree or other, exert power over us: they don't run on silicon, they're social systems, but still. They haven't "turned against us", but they... well... exist.

The least convincing part of the singularity idea, for me, is unfettered, unlimited growth up to infinity: it's like that comic.

I will check out that book tho

Post by Travis B. »

Probably a bigger concern, to me at least, than the possibility of Skynet, is humans intentionally or unintentionally instilling their own goals, beliefs, and biases into the AIs they train. It has already been shown that AIs can be racist based on how they are trained, even if there was no specific intent on the part of their creators to make them racist. Now, with this taken to the nth degree, with an AI as powerful as GPT-4, or a future AI more powerful than it, what could happen?

Post by zompist »

rotting bones wrote: Tue Mar 28, 2023 10:34 am
Torco wrote: Tue Mar 28, 2023 6:42 am Agreed, but that doesn't mean they couldn't get one, for example, from it being profitable to program one into them.
Under most circumstances, it's either unprofitable or physically impossible to give your AI the required sense of agency. I will list my objections under 3 headings:

1. Continuity: ChatGPT already exhibits some memory within each conversation. However, propagating this sense of continuity across sessions will probably lead to a combinatorial explosion because of the way RNNs and LSTMs work: https://colah.github.io/posts/2015-08-U ... ing-LSTMs/ Assuming you could afford it, why invest human effort, chip performance, storage and other resources into implementing such a feature?

2. Complexity: The world humans navigate is complex, and humans adapted to it over a very long time. Even if it's possible to represent such an environment with current technology (to train the AI or inside the internal world model), it's still an unnecessary expense for most practical tasks. Assuming you successfully represent the environment, navigating the combinatorial explosion that will follow is no mean feat.
These are not very convincing objections when people have invested at least $50 billion in cryptocurrency, and for that matter when Elmo Musk paid $44 billion to turn Twitter into Gab, and Facebook has spent $36 billion to re-create Second Life. Since when have mere expense and incoherent goals deterred venture capital?
rotting bones wrote: It would be a miracle for agentive behavior without an internal model to direct the world.
I agree with you that present-day deep-learning models are overhyped, and are not "actually agentive". But I think you're overconfident in the human/AI difference.

For one thing, "just agentive enough" will be enough for the venture capitalists, and can be plenty disruptive. Just look at driverless cars. There's enormous pressure to let them on the roads, despite the fact that they're already killing people. Are they agentive? Who cares? They're just agentive enough— they have a goal of traveling from point A to point B.

Supposedly GPT-4 is a big advance on GPT-3. It won't be "agentive", but what does that matter if CEOs think it is? Humans are going to use these things to mess up the lives of other humans.

Oh, and on a purely philosophical level— human brains are a sack of meat isolated from the world in a bone cave, building a model of the world entirely from neural signals. It's hard to make a principled distinction from an AI. Your position here is a little too close to Searle: you are clearly convinced that humans are "agentive" and AIs are not, but don't seem to recognize that it's not a clear-cut binary.

Post by rotting bones »

Travis B. wrote: Tue Mar 28, 2023 2:40 pm Probably a bigger concern, to me at least, than the possibility of Skynet, is humans intentionally or unintentionally instilling their own goals, beliefs, and biases into the AIs they train. It has already been shown that AIs can be racist based on how they are trained, even if there was no specific intent on the part of their creators to make them racist. Now, with this taken to the nth degree, with an AI as powerful as GPT-4, or a future AI more powerful than it, what could happen?
You have to train ethics into your model to avoid this. Musk and friends are already accusing ChatGPT of having a left-wing bias.

Post by rotting bones »

Here's what I think AI won't do: I don't think that the misalignments between its goal and ours will amount to any behavior that doesn't resemble bugs in code.

Remember, the purpose of Machine Learning is to find a function. We're given a set of points in the form (x,y). The goal is to find a function f such that f(x) is approximately equal to y. That function f is the "AI".
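A toy version of the whole exercise (here the "family of functions" is just cubic polynomials; a neural net is the same game with a far more flexible family):

import numpy as np

# Given points (x, y), find an f with f(x) ≈ y.
rng = np.random.default_rng(42)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.shape)    # noisy data

coeffs = np.polyfit(x, y, deg=3)   # "training": pick the best f in the family
f = np.poly1d(coeffs)
print(np.mean((f(x) - y) ** 2))    # mean squared error of f(x) against y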



Here are some things I believe that AI can be used for:
Torco wrote: Tue Mar 28, 2023 2:38 pm once you have that pipeline working, such that you can just hit play, the next natural step is to set up a control loop for it, no? one that checks the news and the rest of youtube for what is 'trendy', and decides on the topic of the next video according to some function, such as how many or how few people you get to click the link to join the PLA infantry corps, or how much it looks like you're making people believe in antivax conspiracy theories... these are all things people would pay a fuckton of money to be able to automate.
AI can automate production.
zompist wrote: Tue Mar 28, 2023 4:23 pm For one thing, "just agentive enough" will be enough for the venture capitalists, and can be plenty disruptive. Just look at driverless cars. There's enormous pressure to let them on the roads, despite the fact that they're already killing people. Are they agentive? Who cares? They're just agentive enough— they have a goal of traveling from point A to point B.
AI can disrupt the market.



Additional clarifications:
Torco wrote: Tue Mar 28, 2023 2:38 pm 1 is true, but it's not necessarily true, it is mitigatable through abstraction, and having a model with continuity and some model of the outside world is a necessary feature for many applications which, trust me, would be profitable as fuck.
It depends. I hear ChatGPT Pro users already have access to a model that autonomously browses the internet.

OTOH, I don't think agency per se is profitable. Capitalism does everything in its power to minimize the agency of workers. This is what it means to "automate production".

It wouldn't do to confuse the appearance of agency with actual agency. Sales clerks are often required to smile at customers. Customers might be fooled into thinking this is a display of agency, but it's the exact opposite.
Torco wrote: Tue Mar 28, 2023 2:38 pm HR-GPT wouldn't need to be aware of all of its conversations with every employee, it can, I don't know, embed those into a lower dimensionality "impression" of each employee it uses to make decisions... there's plenty of dimensionality reduction algos.
zompist wrote: Tue Mar 28, 2023 4:23 pm These are not very convincing objections when people have invested at least $50 billion in cryptocurrency, and for that matter when Elmo Musk paid $44 billion to turn Twitter into Gab, and Facebook has spent $36 billion to re-create Second Life. Since when have mere expense and incoherent goals deterred venture capital?
My impression is that the overfitting, asymptotic slowdown, etc. are brutal. Go to Google Colaboratory, change the runtime type to GPU from the Runtime menu, train a few complex models using the Keras Functional API (there are online tutorials), tweak them a bit, look at the outputs and tell me what you think.
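For anyone who wants a concrete starting point, I mean roughly this kind of thing (MNIST and this particular architecture are just an example, not a recommendation):

import tensorflow as tf

# Small Keras Functional API model; tweak the layers and watch what overfits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

inputs = tf.keras.Input(shape=(28, 28))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test))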

Most of the breakthroughs right now come from finding simpler models that are almost as good as the proper ones. Management hasn't managed to browbeat the theory of algorithms into submission yet.

Post by Torco »

I have! Nothing fancy: a TensorFlow model that guesses whether a name is a boy's name or a girl's name, which got like 98% right on test data it had never seen before (98% sounds like nothing much, but many names *I* couldn't have figured out). A toy, of course, I'm sure the pros program those for breakfast, but it's given me like a general notion. And I agree, most cases of misalignment are trivial and remarkably silly: one gets impressions such as "so intelligent and so deeply stupid at the same time". Like those videogame AIs that are trained to reach the goal and that, when put in a novel environment, just go right, cause that's what you do in a videogame.
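Roughly this sort of thing, for the curious (the two names below are placeholders - the real version needs a long labeled list, which is where the 98% comes from):

import numpy as np
import tensorflow as tf

# Character-level name classifier sketch. Placeholder data only.
names, labels = ["maria", "john"], [1, 0]          # 1 = "girl", 0 = "boy"
max_len = 12

def encode(name):
    ids = [ord(c) - ord("a") + 1 for c in name.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))        # pad to fixed length

x = np.array([encode(n) for n in names])
y = np.array(labels)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=27, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=10, verbose=0)
print(model.predict(x, verbose=0))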

____

But I think your two positions are hard to reconcile here: if you think an AI can be built such that it manipulates people's behaviours in a more or less autonomous way, in the sense that it puts out output (i.e. enacts behavior), gets input from the world, and tries to maximize an outcome of the "shaping people's behavior" type... that is, almost definitionally, power, no? Power in the "hands" of a blind and inscrutable pseudo-mind. And with great power comes the possibility that misalignments, or bugs, are harmful... which is the opposite of this:
rotting bones wrote: I don't think that the misalignments between its goal and ours will amount to any behavior that doesn't resemble bugs in code.
Now, whether it'll "think" - "omg, in order to fulfill my purpose I must modify myself and formulate intermediate instrumental goals, which may or may not involve turning against humans" - seems far-fetched.

also
rotting bones wrote: OTOH, I don't think agency per se is profitable. Capitalism does everything in its power to minimize the agency of workers. This is what it means to "automate production".
I think these are two distinct meanings of the word: in the first case, it means something like the ability to relatively autonomously fulfill a goal, i.e. to do work, and in the second, it means to be free. In the first sense, capitalism increases the agency of workers: it gives them better tools, better methods and better coordination capabilities: a person can till a vast field in an afternoon these days, blabla.
zompist wrote: They're just agentive enough— they have a goal of traveling from point A to point B.
and misalignments both look like bugs and cause real harm.