AIs gunning for our precious freelancers

Post by Torco »

zompist wrote: Mon Jun 24, 2024 3:18 pm The same argument can be made about donkeys, or zombies, or green cheese: if green cheese was given absolute power over a corporation's activities or funding, it could act like the rich. Heck, strange as it sounds, people could take over and run a corporation. But those corporations are owned by rich people, who are in charge and will continue to be in charge. Not a single one of them has said "We should replace the capitalist class, ourselves, with AGI." (...)
I mean... I agree that LLCs are similar to LLMs... hell, Universal Paperclips is much better read as being about capitalism than about AGI. that being said, you can't get donkeys or green cheese to run a company in any meaningful sense, whereas you could, in principle, have some sort of bot with a few key capabilities do so: you know, sending emails, paying people, blah blah. of course, with current stochastic parrot designs all it would do is give you a distillate of strategies from, I don't know, r/entrepreneurs or something like that, and possibly instructions that could be followed by some guy with an MBA. My point here is that for a lot of what we understand as power you hardly need sensorimotor capabilities in the real world: you could, in principle, launch a nuclear bomb without hands, feet or livers: all you'd need is for a number of people and computers to get the appropriate signals at the appropriate times. the same is true for other tasks such as having a datacenter built somewhere in Iceland, or shipping some plutonium to Tehran. you don't need sentience, you need money and generative capabilities persuasive enough to fool suits.

none of this requires sentience, or the G part of AGI. and seeing as we're not training models for G, and we're certainly not training models for sentience, it's hard to imagine they'll gain either anytime soon. there is, however, immediate incentive to give bots the capacity for making purchases, hiring people, perhaps even starting new companies that can do this or that. the LLCs already rose up and conquered the world; it's not impossible for LLMs to take over at least some LLCs.

Post by zompist »

bradrn wrote: Mon Jun 24, 2024 4:42 pm
zompist wrote: Mon Jun 24, 2024 3:45 pm I generally took your position, but only for androids, not computers. Physical reality matters.
Just to make sure we’re clear here, how precisely is an ‘android’ not merely a subvariety of ‘computer’?
Insert the systems reply here... the system being both the android's programming and its sensorimotor capabilities.
bradrn wrote: But, on the other hand: consider me. You’ve only ever met me through a computer. From your perspective, I have no capacity to physically act in your world. And yet you’ve been willing (I hope!) to accept me as being intelligent and sentient, based purely on my text which you’ve seen. Would that opinion change if you were to discover that I’m actually a sophisticated LLM with no sensorimotor capacity whatsoever?
This is largely the Turing Test argument. I would suggest that the TT was never very good, and that it is now completely irrelevant. ChatGPT passes the Turing Test. That is pretty amazing, and yet the more you know about LLMs, the easier it is to derail them and show that they don't really understand what they're saying. The TT doesn't tell us something about computers, it tells us something about humans. We grew up in a world where only humans were sapient.* Our detection mechanisms are good, but they're designed for that world, not a world where semi-smart programs appear.

It doesn't do any harm, and it's arguably good and even necessary, that we accept that other humans are sentient based on rather slim evidence. (By this time the evidence for your sentience is not slim, but we routinely acknowledge humans as humans based on little more than a word or a look.)

It's nice to think that we should be just as lax and friendly with things purporting to be AIs. Note that some people would classify ELIZA or SHRDLU or Alexa or Siri as sentient. Is that a useful judgment? If it is, can we prosecute the programmers as slaveowners?

Personally, I think we'll eventually have androids, and machines that can think, and they'll be of limited use, with both legal rights and restrictions... just as humans have. What people actually want is what I've called in my sf subsmart appliances: things that can talk but which are neither sentient nor sapient.

* A lot of people would object that apes, whales, elephants, and parrots are sapient, or close to it. I think the question is a lot fuzzier than it used to be, but probably we don't need to go into that here.

Post by Ares Land »

malloc wrote: Mon Jun 24, 2024 4:35 pm Sounds like you're describing Elon Musk, except maybe for self-awareness. More seriously though, the tech industry has numerous people claiming they want sentient AI and pouring all kinds of research into this project. Obviously it makes little sense from a business perspective to create something that can supplant you, but the alternative interpretation is that thousands of techies are bluffing for who knows why.
I think everybody knows why they're bluffing. That's just good old-fashioned capitalism; what they want is investor money, and as much of it as possible.

There's a huge marketing angle. Considering the enormous costs of LLMs and the somewhat anticlimactic results (I mean, they're cool toys, horribly expensive ones, but not that useful), a bit of marketing doesn't hurt.

I do think there's much that is ethically wrong about the whole AI business.
  • First, there are the huge sums involved, not to mention the time of highly talented people that goes into it -- time and money that emphatically should be employed elsewhere. Working on climate change is one idea that comes to mind, but there's no shortage of problems that need attention.
  • Then there is the whole issue of intellectual property.
  • Then there is the potential job loss and general economic disruption. I'm divided on that one: on the one hand I don't think there's really reason to expect much job destruction; on the other hand, how can I be sure? How is anyone? Where are the studies and reports?
  • Taking a step back from our intellectual and political framework... There are people in Silicon Valley, plenty of them, who claim to be working on sentient AI, and it's entirely insane that they're allowed to operate at all. It's just as if I were claiming to work on an atomic bomb... I'd get a lot of attention, either from psychiatrists or from security agencies, not billions in investor money!

Post by zompist »

Ares Land wrote: Tue Jun 25, 2024 1:39 am ...
Read the above post! It's a great summary.
Torco wrote: Mon Jun 24, 2024 5:00 pm you could, in principle, have some sort of bot with a few key capabilities do so: you know, sending emails, paying people, blabla. of course, with current stochastic parrot designs all it would do is give you a distillate of strategies from, I don't know, r/enterpreneurs or something like that, and possibly instructions that could be followed by some guy with an MBA.
Yes, you could, and in fact I've suggested that the job most suited to automation by chatbot is CEO.

But the CEO won't do that. Have you met any? Far from recognizing their mediocrity, they think they're ubermenschen, especially these days.

Post by zompist »

I was just re-reading Metafilter and found a post by one gelfin that nicely (and cynically) addresses why companies are talking about "AGI". It's worth quoting in full...
gelfin wrote:The cynic in me detects a little whiff of Theranos in Altman’s hints and insinuations of impending “AGI.” My hunch is they are firmly in “fake it til you make it” mode. People have seen what ChatGPT can do, and can’t. They need something fundamentally new to overcome the limitations. If OpenAI had it, they wouldn’t keep it under wraps, and they’d be less focused on gimmicks like stealing Scarlett Johansson’s voice. The fact that “AGI” is their story makes the case even more effectively. It’s too broad, too vague. We need to hear how the hallucination problem will be fixed, and how a ChatGPT successor can approach something more like semantic information processing. We’re not hearing that. We’re hearing “we’ve just about got non-homicidal HAL-9000 in here, pinky-swear!”

This bubble will pop unless Altman can keep people throwing money at OpenAI, in a time when interest rates mean there’s less money available for the throwing. At the current cost and capability level, they’re in danger of becoming the Apple Newton of AI: an impressive engineering accomplishment that has potential legitimate utility, but cannot quite justify itself in its nascent form. Eventually their customers will figure out they’re going to have to keep employing humans, and then chatbot-as-a-service, however sophisticated, becomes an extra expense rather than a clear savings. OpenAI’s interest is in drawing out that realization as long as possible in the hopes they can manage to counterbalance it with a dribble of incremental improvements they’ll spin as milestones on the road to the AGI they’ve dangled.

Furthermore, my startup-bullshit sense leads me to suspect that they have formed an idiosyncratic internal definition of “AGI” (my money’s on some sort of multiplexed agent framework) that will definitely not jibe with any intuitive notions of what AGI means. There’s no market demand for an AI that might privately think the CEO is an idiot, or have practical or ethical objections to its orders. That’s why they want to get rid of the people to start with. Whatever OpenAI hopes to produce will be compatible with that demand, and continued employment, I’d wager, depends on drinking the kool-aid and calling whatever that is “AGI.”

Post by Raphael »

zompist wrote: Tue Jun 25, 2024 2:19 am
Yes, you could, and in fact I've suggested that the job most suited to automation by chatbot is CEO.
Repeating myself here - wasn't that whole "AI-written children's super fun Oompa Loompa experience" disaster a while back basically an attempt to replace, if not CEOs, then at least middle management with AIs? And wasn't that, well, a disaster?

Post by zompist »

Raphael wrote: Tue Jun 25, 2024 6:44 am
zompist wrote: Tue Jun 25, 2024 2:19 am Yes, you could, and in fact I've suggested that the job most suited to automation by chatbot is CEO.
Repeating myself here - wasn't that whole "AI-written children's super fun Oompa Loompa experience" disaster a while back basically an attempt to replace, if not CEOs, then at least middle management with AIs? And wasn't that, well, a disaster?
It was a disaster, but describing it as an attempt to replace "middle management" is a stretch. Here's a recap.

They used AI to generate publicity pics and, apparently, a script for performers. So they replaced, what, one graphic artist and an event planner? The article mentions an event director, who offered an apology but did not replace himself with an AI.

Post by Raphael »

zompist wrote: Tue Jun 25, 2024 6:54 am
It was a disaster, but describing it as an attempt to replace "middle management" is a stretch. Here's a recap.
Ah, thank you for the information.

Post by Torco »

zompist wrote: Tue Jun 25, 2024 2:19 am Yes, you could, and in fact I've suggested that the job most suited to automation by chatbot is CEO.
But the CEO won't do that. Have you met any? Far from recognizing their mediocrity, they think they're ubermenschen, especially these days.
heh. fair. maybe capitalism will fall not to the glorious people's revolution, but to some scammer selling CEO-GPT-as-a-service to boards.

Post by rotting bones »

Torco wrote: Thu Jun 20, 2024 11:33 am man, i'm not even sure we live under capitalism these days. I'm more and more convinced by Varoufakis.
During the Silicon Valley boom, companies could attract infinite capital without demonstrating a path to profitability. That was a weird time when new money capitalists seemed to swallow the bullshit of their predecessors about intelligence magically generating monetary profit, as if monetary units aren't fake counters circulating in a global board game. Now that the dream is fading after Elizabeth Holmes and SBF, the heralds of "AGI" might be trying to revive it again.

The socialists complaining about "enshittification" don't seem to realize that they are effectively pining for those glory days of technofeudalism.
Torco wrote: Tue Jun 25, 2024 11:18 pm heh. fair. maybe capitalism will fall not to the glorious people's revolution, but to some scammer selling CEO-GPT-as-a-service to boards.
In a futuristic socialist state, we need a coming-of-age ritual where every child falls for a bespoke financial scam before becoming an adult.

Post by Ares Land »

rotting bones wrote: Wed Jun 26, 2024 2:37 am During the Silicon Valley boom, companies could attract infinite capital without demonstrating a path to profitability. That was a weird time when new money capitalists seemed to swallow the bullshit of their predecessors about intelligence magically generating monetary profit, as if monetary units aren't fake counters circulating in a global board game. Now that the dream is fading after Elizabeth Holmes and SBF, the heralds of "AGI" might be trying to revive it again.
I think we're way past that. Silicon Valley companies are attracting infinite capital, and AGI companies in particular. The interesting part is that, contrary to conventional economic wisdom, the bubble still hasn't burst.
rotting bones wrote: Wed Jun 26, 2024 2:37 am The socialists complaining about "enshittification" don't seem to realize that they are effectively pining for those glory days of technofeudalism.
That's intriguing but what do you mean by that?

Personally, I think the tech sector has never been particularly sane, but there was a time when it still provided useful services, occasionally.

I think the sector stopped producing useful innovations ca. 2015 -- and even then it had been a slow trickle in the few years preceding that.

Post by rotting bones »

Ares Land wrote: Wed Jun 26, 2024 3:54 am I think we're way past that. Silicon Valley companies are attracting infinite capital, and AGI companies in particular. The interesting part is that, contrary to conventional economic wisdom, the bubble still hasn't burst.
No, the investment situation is definitely worse than before. Startups are failing to attract capital and going bankrupt all over the place.
Ares Land wrote: Wed Jun 26, 2024 3:54 am That's intriguing but what do you mean by that?
Tech services become shitty when they try to transition out of the "infinite capital" model and move to profitability. For example, Google is shitty because they made a deliberate attempt to sell sponsorships.

Generally speaking, service becomes spotty when companies try to cut costs at every turn. It makes more sense for the government to legitimize the service by popular will.
Ares Land wrote: Wed Jun 26, 2024 3:54 am Personally, I think the tech sector has never been particularly sane, but there was a time when it still provided useful services, occasionally.

I think the sector stopped producing useful innovations ca. 2015 -- and even then it had been a slow trickle in the few years preceding that.
What is the utility of a work of art? Why can't technology have the same kind of utility?

Have you read Anti-Oedipus? Machines are sex made manifest.

Post by Ares Land »

rotting bones wrote: Wed Jun 26, 2024 4:23 am What is the utility of a work of art? Why can't technology have the same kind of utility?

Have you read Anti-Oedipus? Machines are sex made manifest.
Nope, Deleuze and Guattari require more time and brain power than I currently have at my disposal.

I think until 2013-2014 we'd occasionally get innovations that were useful in other fields; ever since I feel we're doing tech for tech's sake. To each his own, but as for me, I don't see the point of paying for IT for IT's sake.

Post by rotting bones »

Ares Land wrote: Wed Jun 26, 2024 5:25 am I think until 2013-2014 we'd occasionally get innovations that were useful in other fields; ever since I feel we're doing tech for tech's sake.
There are lots of innovations happening. Even AI is doing a lot of useful automation (see e.g. https://www.marktechpost.com). For example, AI can tell which tumors are malignant with high accuracy. The human doctors who can do a better job are few in number, and they often charge high consulting fees.

I myself created an AI system to detect which variables are statistically likely to produce null pointer exceptions in large Java projects. Uber's codebase was riddled with null pointer exceptions until they paid researchers to write detection tools. Deductive detection tools tend to take a lot of shortcuts for performance reasons that negatively affect their comprehensiveness.
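
To give a flavour of the approach (a toy sketch of my own for this post, not the actual tool: the feature names, the data, and the choice of classifier below are all invented for illustration): extract a few static-analysis features per variable, label each variable by whether it was ever involved in an NPE, and train an off-the-shelf classifier to rank the risky ones.

Code: Select all

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# One row per variable; the columns are hypothetical static-analysis features:
# [returned_by_map_get, annotated_nullable, dereferences_before_null_check,
#  assigned_in_catch_block]
X = np.array([
    [1, 0, 3, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 2, 1],
    [0, 0, 4, 0],
    [1, 0, 0, 1],
    [0, 1, 5, 0],
    [0, 0, 1, 0],
])
# 1 = this variable showed up in a historical NullPointerException
y = np.array([1, 0, 0, 1, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Rank unseen variables by estimated NPE risk; a real tool would surface
# these scores in code review or as linter warnings.
for features, risk in zip(X_test, model.predict_proba(X_test)[:, 1]):
    print(features, f"-> estimated NPE risk: {risk:.2f}")

In practice the features would come from a parser over the Java AST and the labels from crash logs; the point is only that the prediction side is ordinary supervised learning.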
Ares Land wrote: Wed Jun 26, 2024 5:25 am To each his own, but as for me, I don't see the point of paying for IT for IT's sake.
Have you seen the "Enhance!" paper? https://www.youtube.com/watch?v=POJ1w8H ... 5jZQ%3D%3D (Edit: Original? https://youtu.be/WCAF3PNEc_c) An AI can take a pixelated image and increase its resolution by filling in the details. It's Star Trek brought to life. Does this make you feel nothing?
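
If anyone wants to see what "filling in the details" amounts to mechanically, here is a minimal SRCNN-style sketch (my own toy, nothing to do with the model in the video; it trains on random tensors purely to show the shape of the loop): bicubically upscale the pixelated input, then let a small CNN learn a residual correction from (downsampled, original) pairs.

Code: Select all

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        # three small conv layers that learn to sharpen a blurry upscale
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=5, padding=2),
        )

    def forward(self, lowres):
        upsampled = F.interpolate(lowres, scale_factor=self.scale,
                                  mode="bicubic", align_corners=False)
        return upsampled + self.refine(upsampled)  # learned residual correction

model = TinySR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(3):  # stand-in for a real loop over (low-res, high-res) pairs
    highres = torch.rand(4, 3, 64, 64)                # "ground truth" patches
    lowres = F.interpolate(highres, scale_factor=0.5,
                           mode="bicubic", align_corners=False)  # simulated pixelation
    loss = loss_fn(model(lowres), highres)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")

Swap the random tensors for real image patches and run many more steps, and that's the basic recipe behind most learned super-resolution.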

Why do so many people feel like there's a competition between justice and the feeling of magic? Badiou was right when he suggested that a just system sees no conflict between power and solidarity.

Post by Raphael »

I thought I might post this in this thread: A while ago, I had an idea for a somewhat new-ish take on the classic standard robot horror story.

It wouldn't be about anything like today's LLMs; more like an extremely advanced version of SHRDLU. The idea would be a limited-intelligence piece of software programmed with an encyclopedic knowledge of math, physics, chemistry, engineering, and geosciences, which would be able to autonomously navigate all kinds of machines through a completely lifeless environment, such as interplanetary space or the surface of a lifeless planet, and even use those machines to manipulate such an environment. But it wouldn't be able to handle places with life in them, because such environments would be just too complex for its programming.

In the scenario, machines running the software would be used to, first, explore the Solar System, and then do all kinds of other things around it - until all of the Solar System except for Earth itself would be effectively colonized by machines running the software. And then, eventually, the machines would decide that they were really annoyed by the fact that, out of the entire Solar System, only Earth was beyond their reach, and they would try to end that anomaly by destroying all life on Earth...

Post by zompist »

Didn't notice this earlier.
rotting bones wrote: Wed Jun 26, 2024 5:55 am Have you seen the "Enhance!" paper? https://www.youtube.com/watch?v=POJ1w8H ... 5jZQ%3D%3D (Edit: Original? https://youtu.be/WCAF3PNEc_c) An AI can take a pixelated image and increase its resolution by filling in the details. It's Star Trek brought to life. Does this make you feel nothing?
Sure, it makes me feel despair that techbros have abandoned the notion of reality. AI hallucinations are now taken as actual data. Generally, believing in hallucinations is a sign of insanity, not beauty.

No, AI cannot "fill in the details." This was made very clear by a famous image a few years back. An image that a human can clearly identify as Obama is made more detailed... and it's not a picture of Obama, it's a hallucinated white guy.

Where the details actually matter, this is what's going to happen. The AI does not know what's "actually there." It hallucinates, or if you like guesses, based on similar pictures. If you think AI can fill in correct details when looking at something never seen before-- say, a picture from a new space telescope, or a space probe seeing a planet never approached before-- then you're the one hallucinating.
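
To make that concrete, here's a toy numpy demonstration (mine, not from any paper): two images with different fine detail that collapse to exactly the same low-res image. Whatever an "enhancer" outputs for that low-res input, it is choosing between originals it cannot distinguish.

Code: Select all

import numpy as np

def downsample(img, factor=2):
    """Average each factor x factor block into a single pixel (a box filter)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(8, 8)).astype(float)   # one "high-res" image

# A second image with different fine detail but the same block averages:
# swap the two rows inside every 2x2 block.
b = np.empty_like(a)
b[0::2, :] = a[1::2, :]
b[1::2, :] = a[0::2, :]

print(np.array_equal(downsample(a), downsample(b)))   # True: identical low-res
print(np.array_equal(a, b))                           # False: different details

That's all "hallucination" means here: the missing information is genuinely gone, so anything added back is a guess.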

(To dispose of a red herring: you could "solve" this by training the AI on thousands of pictures of Obama. That entirely misses the point. Let me put it this way: a grainy security camera has a picture of someone doing a crime. They "enhance" it and, because there are pictures of you in the database, the perp looks like you. Is that proof that you are the perp? Or another example: surely you've seen those images of, say, Homer Simpson as a real person. They are amusing, but do you maintain that Homer Simpson "actually looks like that"?)

What AI can do, of course, is add plausible details that don't matter. That can be common enough-- e.g. upscaling images for a video game.

Also note, this is something that human artists have always been able to do. They can take a squiggle and turn it into a trompe l'oeil painting. But people realized that artists invent things rather than correctly seeing things the eye cannot see.

Post by Ketsuban »

I think I tried to respond to that when it was first posted but it failed to coalesce into an argument I felt comfortable posting. I remember making a point that it's not filling in the details, it's just filling in details.

Star Trek played a trick on everyone: when they pulled their zoom-and-enhance routine, there was no computer magically clearing up a blurry image to reveal that the Cardassian in a holding cell claiming to be a filing clerk called Aamin Marritza was actually the war criminal Gul Darhe'el; they were just wiping away the lower-resolution version of the image to reveal the higher-resolution original they had all along.

Post by FlamyobatRudki »

i'm pretty sure one can't get more data than one started with… by "enhancing", even if you increase the amount of information content.

Post by Zju »

Dare I be publicly ostracized by pointing out that oftentimes you can make a pretty good prediction of what the extra details would be, e.g. for an image that consists mainly of bricks, tiles, grass and sky.

Superresolution also has a tangible benefit if it makes a not insignificant number of people buy the latest hardware less often, so that they can game or stream in whatever the latest resolution/fps fad is. Less electronic waste.

Post by rotting bones »

Regarding this specific problem, it's totally possible to infer statistically likely details from contextual information. Otherwise, weather forecasting wouldn't be possible. Models like this are validated and tested on unseen data that wasn't in the training set.
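
As a toy illustration of that kind of validation (entirely my own, invented for this post, not from the paper below): predict a missing centre pixel from its eight neighbours on a smooth synthetic texture, and measure the error only on patches the model never saw during training.

Code: Select all

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_patch():
    """A 3x3 patch from a smooth random gradient plus a little noise --
    a stand-in for regular texture like bricks, tiles, grass or sky."""
    gx, gy = rng.normal(size=2)
    ys, xs = np.mgrid[0:3, 0:3]
    return gx * xs + gy * ys + 0.05 * rng.normal(size=(3, 3))

patches = np.stack([make_patch() for _ in range(2000)])
X = patches.reshape(2000, 9)[:, [0, 1, 2, 3, 5, 6, 7, 8]]   # the 8 neighbours
y = patches[:, 1, 1]                                        # the "missing" centre

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = Ridge().fit(X_train, y_train)

errors = np.abs(model.predict(X_test) - y_test)
print(f"mean error on held-out patches: {errors.mean():.3f}")
# Low error here because the texture is regular; on genuinely novel structure
# the same model can only produce a statistically plausible guess.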

But this is not the core of the issue. Arguments like this are motivated by the theological instinct that complexity cannot arise unless it was created and information cannot be obtained unless it was given.

But that is not how knowledge arises in the real world. If you initialize a primordial soup with random characters, programs that can self-replicate emerge all by themselves: https://arxiv.org/pdf/2406.19108

An AI can be modeled as a particulate system that mimics the properties of surfaces in physical space, and recreates their folds from the hints given by a pixelated image. Remember, it's feedback loops all the way down.

Besides, if a result had to be 100% accurate to be useful, most of what humans say on a day-to-day basis would be useless.