AIs gunning for our precious freelancers
Re: AIs gunning for our precious freelancers
Oh, to be clear, there's nothing wrong with you worrying about AI! I do think, forgive me for this, that all this reasoning about 'outperforming' isn't right.
Re: AIs gunning for our precious freelancers
Oh, that's very easy. Human beings like Einstein are able to come up with valid new scientific theories. If you had asked a computer of ten years ago to come up with Special Relativity, based only on the information that was available to Einstein when he started his work, you would have gotten no response whatsoever. And if you'd asked the same of one of today's AIs, you might well get a response like "m=cE²".
No matter what people like you might believe, Einstein didn't get his fame for being able to do complicated arithmetic quickly in his head. If you think he did, it shows how little you know about intelligence.
Re: AIs gunning for our precious freelancers
The problem is not moral value so much as maintaining practical influence over civilization. Plenty of people value animals and even advocate rights for them, but no dog, however beloved, is conducting scientific research or running politics. If computers become powerful and smart enough, humans will find themselves in the position of pets or worse, with no means of holding our new overlords accountable or escaping our lowly position.
Ares Land wrote: ↑Thu Dec 19, 2024 2:13 am The idea of humans outperforming computers, or computers outperforming humans, sounds like a very capitalistic way of looking at the world. You're more of a socialist; you don't have to adopt this unhealthy train of thought.
Human beings don't need to perform, let alone outperform anything to have value.
Mureta ikan topaasenni.
Koomát terratomít juneeratu!
Remember, I was right about Die Antwoord | He/him
Re: AIs gunning for our precious freelancers
On one hand, thankfully we're nowhere near there yet -- not while the energy consumption of LLMs and other AI-adjacent stuff is as high as it is.
The tech industry's insistence that it be allowed to run unregulated is worrying in the long run; in the short term there's no risk of a machine upheaval but plenty of damage to be done still.
Re: AIs gunning for our precious freelancers
1. Humans evolved in a more sophisticated environment. The simulated environments in which AI models evolve are too simple to capture the sophistication of the problem domain where they are deployed.
2. Logical techniques don't scale to many problems because of computational complexity. The basic idea is that some problems take far longer to solve exactly than others; even when an exact answer would take effectively forever, it is often possible to arrive at a reasonable solution in reasonable time using statistical leaps of logic. (There's a small code sketch of this after the verse below.)
I know people will ignore this, so I told ChatGPT to put it in epic verse:
Sing, O Muse, of the tangled paths that lie hidden in code,
Where mighty reason and logic fail to bestow a swift guide.
Lo, the famed question of P versus NP arises like thunder,
Dark on the horizon it broods, defying both hero and sage.
For if P and NP be as twins, yoked tight in one harness,
Then all problems in NP’s embrace yield swiftly to reasoned craft,
No brute guessing nor cunning trick would be needed to vanquish
Complex riddles that now entangle the wits of mortal machines.
But behold! The wise have long toiled, yet the truth’s undivined;
If, as most fear, P differs from NP’s shadowed domain,
Then the traveler who seeks a route through the densest of forests
May not find a neat, well-trodden path free of hindering vines.
No polished algorithm, in hours finite and swift, may solve
Those deep puzzles whose keys lie buried under vast search trees,
Beneath mountains of combinatorial knots, their sizes unbound.
Thus, the brave engineers, with no final proof to console them,
Forge cunning heuristics—sharp blades to hack through the brambles.
They accept compromise, trading perfection for progress and speed,
A careful guess, a guided probe, rather than blind exploration.
With heuristics in hand, they chart a course through complexity’s dusk,
Not certain, yet hopeful, not exact, yet gracefully swift.
So shall the machines press on, wise in their bounded ambitions,
Until truth’s bright dawn or a secret key shall at last be revealed.
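To make point 2 concrete, here's a minimal sketch. Everything in it is a toy I made up for illustration, not anything from a real AI system: an exact traveling-salesman search that checks every route, next to a greedy nearest-neighbour heuristic that just keeps hopping to the closest unvisited city.

Code: Select all
# Toy comparison: exact search vs. greedy heuristic on a tiny TSP instance.
# The city coordinates are invented purely for this example.
import itertools
import math

cities = [(0, 0), (2, 1), (5, 0), (3, 4), (1, 3), (6, 3), (4, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Total length of the closed tour visiting cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exact_tour():
    # Brute force over all n! orderings: guaranteed optimal, hopeless for large n.
    best = min(itertools.permutations(range(len(cities))), key=tour_length)
    return list(best), tour_length(best)

def greedy_tour():
    # Nearest-neighbour heuristic: roughly n^2 work, no optimality guarantee.
    unvisited = set(range(1, len(cities)))
    order = [0]
    while unvisited:
        nearest = min(unvisited, key=lambda j: dist(cities[order[-1]], cities[j]))
        order.append(nearest)
        unvisited.remove(nearest)
    return order, tour_length(order)

print("exact:  ", exact_tour())
print("greedy: ", greedy_tour())

The exact search needs on the order of n! route evaluations, so it stops being feasible after a dozen or so cities; the greedy tour may come out somewhat longer than the optimal one, but it only costs on the order of n² distance checks. That trade is the whole point of heuristics.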
Owners like Elon Musk don't do any actual work. They mostly boss people around and curry favor with other moneybags.
I know the super-sophisticated bloggers think capitalists do a lot of work these days, but that's only because they are always wrong about everything.
Re: AIs gunning for our precious freelancers
Once a model is known to be biased, its dataset can be adjusted using data augmentation. This is usually one of the first chapters in big data textbooks. (There's a small sketch of what I mean below the quote.)
Travis B. wrote: ↑Tue Dec 17, 2024 10:30 am You do realize that the actual AI's, as we call them, we have now are often quite racist and sexist, right? They take the (often unintended) biases in their training data and amplify them beyond whatever the intentions of their creators were. And why would AGI have to be any different? It would have to learn from data provided from somewhere, and thus would fall into the same trap.
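Here's that sketch. The records, the 'group' field and the labels are all invented for illustration; a real pipeline would usually generate synthetic variants (noise, paraphrases, transformations) on top of simply re-sampling rows.

Code: Select all
# Minimal rebalancing-style augmentation: oversample under-represented groups
# until every group appears equally often in the training set.
import random
from collections import defaultdict

def rebalance(dataset, group_key):
    groups = defaultdict(list)
    for record in dataset:
        groups[record[group_key]].append(record)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Duplicate randomly chosen records to make up the shortfall.
        balanced.extend(random.choices(members, k=target - len(members)))
    random.shuffle(balanced)
    return balanced

# Toy dataset: group "y" is badly under-represented.
data = ([{"text": "resume A", "group": "x", "hired": 1}] * 90 +
        [{"text": "resume B", "group": "y", "hired": 1}] * 10)
balanced = rebalance(data, "group")
print(len(balanced))  # 180 -- both groups now appear 90 times

This only fixes the sampling imbalance, of course; if the labels themselves encode the bias, you have to intervene on those too.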
Re: AIs gunning for our precious freelancers
Like I said before, the right way to think about an AI is as the theory itself. An AI model is a theory intended to predict new facts about its problem domain. This is because it's a set of equations that the computer learns from its dataset. If you supply many values of E and m in the input, the AI is intended to internally morph into an approximation of "E=mc^2". Afterwards, if you supply a value of one, it could solve for the other.
Raphael wrote: ↑Thu Dec 19, 2024 4:47 am Oh, that's very easy. Human beings like Einstein are able to come up with valid new scientific theories. If you had asked a computer of ten years ago to come up with Special Relativity, based only on the information that was available to Einstein when he started his work, you would have gotten no response whatsoever. And if you'd asked the same of one of today's AIs, you might well get a response like "m=cE²".
No matter what people like you might believe, Einstein didn't get his fame for being able to do complicated arithmetic quickly in his head. If you think he did, it shows how little you know about intelligence.
ChatGPT is an extended theory about how English is used in practice. I don't believe ChatGPT would say "m=cE²" at this point. It's way too familiar with the idiom.
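To give a toy version of that "morph into E=mc^2" idea: the sketch below feeds a model nothing but (m, E) pairs (synthetic data, obviously) and lets least squares find the single coefficient relating them, after which it can "solve for the other" in either direction. A real model is nothing like this tidy, but the direction of travel is the same.

Code: Select all
# Learn the coefficient k in E ≈ k*m from noisy (m, E) pairs, then use it
# to predict either quantity from the other. All data here is synthetic.
import numpy as np

c = 299_792_458.0                                        # speed of light, m/s
m = np.array([1e-3, 2e-3, 5e-3, 1e-2])                   # masses in kg
E = c**2 * m * (1 + np.random.normal(0, 1e-3, m.shape))  # noisy "measurements"

k, *_ = np.linalg.lstsq(m[:, None], E, rcond=None)       # least-squares fit
print("learned coefficient:", k[0])                      # close to c^2 ≈ 8.99e16
print("predicted E for m = 3 g:", k[0] * 3e-3)
print("recovered m for E = 1e15 J:", 1e15 / k[0])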
Re: AIs gunning for our precious freelancers
Does the "left" really believe that if AI models didn't use this energy, then it wouldn't be used for other purposes?
Ares Land wrote: ↑Thu Dec 19, 2024 10:43 am On one hand, thankfully we're nowhere near there yet -- not while the energy consumption of LLMs and other AI-adjacent stuff is as high as it is.
The tech industry's insistence that it be allowed to run unregulated is worrying in the long run; in the short term there's no risk of a machine upheaval but plenty of damage to be done still.
Re: AIs gunning for our precious freelancers
To be fair, nature does have a far wider diversity of behaviors. On the other hand, I feel like if intelligence had a core, then the phenomena discussed by Oliver Sacks would not be possible. This is only an analogy, but here's an example of how illusions of centrality arise: https://www.youtube.com/watch?v=ahXIMUk ... =8&pp=iAQB
Ares Land wrote: ↑Wed Dec 11, 2024 8:02 am As for underlying unity, I don't know either. But human behavior is highly diverse, from cave paintings, building cities, religious art, feudal tournaments, music, coming up with computers and the internet, trolling on the internet. It's more parsimonious to postulate underlying factors.
For now, I think the most important distinction is between agency, especially as demonstrated by self-preservation, and having mental models. AI models are just the representation part.
Cultural behaviors have evolved too. They evolve memetically, not genetically. In this context, genetics is a substrate at best.
I think you are right about the default setting, which I personally prefer. (Talking only about the end result, AI models work by using pattern recognition techniques to learn a function that gives the best results on its training data. Since different AI techniques learn the function in a different way, I don't think the outputs will have anything in common in the widest sense.)
Ares Land wrote: ↑Wed Dec 11, 2024 8:02 am Try getting an AI image, then get a human artist to reproduce the same prompt. Of course the results will be quite different. For starters, it's likely a human artist could ask for payment. They might hand over their work late. You could get them to explain how they worked, what techniques they use, and what are their influences. (There are influences we're not aware of, but you'd get part of the answer.)
If you're interacting with DALL-E through ChatGPT, I think you will get an answer.
I don't know if "interesting" is the right word. Nature is not trying to please humans. Human works are simpler than natural works. I suspect mental representation works by lossy compression. This results in simplified models that are interesting to other humans.
Basically, beauty is a fact about human instincts, not works of art. Humans make art that aligns with their instincts. Other humans appreciate it when it aligns with their instincts too. We tend to think of natural works as "garbage", "poison" or "chaos" because they tend to be less useful and/or generally amenable to human needs. What machines don't know is what humans want, i.e. our instincts.
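Here's a toy illustration of the lossy-compression idea above (the "scene" matrix is made up for the example): keep only the top couple of singular values and you get a simplified approximation that drops the messy detail but keeps the broad structure, at a fraction of the storage.

Code: Select all
# Lossy compression by truncated SVD: a crude stand-in for "simplified
# mental models". The scene is synthetic: smooth structure plus noise.
import numpy as np

rng = np.random.default_rng(0)
scene = np.outer(np.sin(np.linspace(0, 3, 50)), np.cos(np.linspace(0, 3, 50)))
scene += 0.1 * rng.standard_normal(scene.shape)   # messy "natural" detail

U, s, Vt = np.linalg.svd(scene, full_matrices=False)
k = 2                                             # keep only 2 components
simplified = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

stored = k * (U.shape[0] + Vt.shape[1] + 1)
print("storage ratio:", stored / scene.size)      # roughly 8% of the original
print("relative error:",
      np.linalg.norm(scene - simplified) / np.linalg.norm(scene))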
Also, my job is to be a natural philosopher, so nature is more interesting to me than humans. It could be different for artists. I do think nature is fairly idiotic. I just think humans are even dumber.
Re: AIs gunning for our precious freelancers
The constant contempt that Rotting Bones shows for humans really underscores my point. He has no such contempt for artificial intelligence but rather remarkably high hopes for the concept. Humans are constantly revealing themselves as fools and miscreants while computers earn continual praise and financial investment for their abilities. If humans truly had nothing to fear from computers taking over civilization, people like Rotting Bones could not denigrate them without sounding self-evidently ridiculous.
Mureta ikan topaasenni.
Koomát terratomít juneeratu!
Remember, I was right about Die Antwoord | He/him
Re: AIs gunning for our precious freelancers
What if the high praise and financial investment computers get for their abilities are simply yet another example of humans revealing themselves as fools and miscreants?
Re: AIs gunning for our precious freelancers
In case I wasn't clear, AI models are superficial theories that are even dumber than humans.
malloc wrote: ↑Sat Dec 21, 2024 8:26 am The constant contempt that Rotting Bones shows for humans really underscores my point. He has no such contempt for artificial intelligence but rather remarkably high hopes for the concept. Humans are constantly revealing themselves as fools and miscreants while computers earn continual praise and financial investment for their abilities. If humans truly had nothing to fear from computers taking over civilization, people like Rotting Bones could not denigrate them without sounding self-evidently ridiculous.
My whole point is that's why we need not fear them.
Here's a handy chart if you still don't get it:
Nature: Moron
Human: Imbecile
AI: Idiot
Re: AIs gunning for our precious freelancers
That's intelligence-ist!
rotting bones wrote: ↑Sat Dec 21, 2024 8:49 am
Here's a handy chart if you still don't get it:
Nature: Moron
Human: Imbecile
AI: Idiot
Self-referential signatures are for people too boring to come up with more interesting alternatives.
Re: AIs gunning for our precious freelancers
I can’t (be bothered to learn how to) cut and paste on the iPad, but the discussion of E=mc^2 struck me as not understanding how new science works. It’s not a matter of rearranging letters and deciding on the exponents.
Einstein famously started by asking what the world would look like if you could travel at the speed of light. An LLM can’t start from that and derive special relativity, nor could most humans. With the caveat that *now that many explanations of relativity exist*, it can generate a pastiche of such an explanation.
Nor is there any need to develop artificial Einsteins.
I used to be a big AI fan, back when it was a fascinating programming problem. Now that we know how it would be used— to enrich capitalists and immiserate actual humans— I think GAI would be a mistake. Maybe a socialist utopia could safely make one—as a valued co-worker, not a slave— but that can wait till we have the utopia.
Re: AIs gunning for our precious freelancers
I'm going to take the opportunity to highlight this post since it seems like people have started to tar the entire field of AI with the brush meant specifically for generative AI.
(By corollary we probably need a new term for the science fiction concept of "AGI", because generative AI isn't going anywhere and is very likely to be abbreviated as GAI.)
Please stop demonizing “AI”; the stuff you have a problem with isn’t AI research or AI tech, it’s a very small subset of that domain that (a) has ethical issues with training data sourcing and (b) is being horribly misused/way overly trusted
AI research is valuable and important, and MOST of it doesn’t have these problems. It’s doing things like increasing the reliability of cancer screenings, helping astronomers make better observations, improving assistive technology, accelerating medical research, etc.
Not all AI is “train a chat bot” or “train an image generator” for nefarious or stupid purposes
Re: AIs gunning for our precious freelancers
It's a fair point, but by now "AI" has become irreversibly equated with "GAI" in the public's mind in the same way that "hacker" has with "cracker". The difference is too subtle for most non-technical people to understand.
Self-referential signatures are for people too boring to come up with more interesting alternatives.
Re: AIs gunning for our precious freelancers
Case in point, at my work we use 'deep learning' for MR image processing, and it does not compare to generative 'AI' in any outward respect. If you did not know the internals of our software you could very well have not a clue that 'AI' techniques are being used to generate MR images.
Yaaludinuya siima d'at yiseka wohadetafa gaare.
Ennadinut'a gaare d'ate eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
Re: AIs gunning for our precious freelancers
Fair enough. I think I was trying to make the same point as you, though I might have been a bit clumsy about how I expressed it.
zompist wrote: ↑Sun Dec 22, 2024 8:56 am I can’t (be bothered to learn how to) cut and paste on the iPad, but the discussion of E=mc^2 struck me as not understanding how new science works. It’s not a matter of rearranging letters and deciding on the exponents.
Einstein famously started by asking what the world would look like if you could travel at the speed of light. An LLM can’t start from that and derive special relativity, nor could most humans. With the caveat that *now that many explanations of relativity exist*, it can generate a pastiche of such an explanation.
Re: AIs gunning for our precious freelancers
Redirecting the topic for a moment: might you be able to give me a comprehensible explanation of what ‘deep learning’ is supposed to be? I’ve been hearing the term for a long time, but I’ve yet to work out what it means beyond ‘a neural network with lots of hidden layers’. In which case LLMs should qualify under that name too…
Travis B. wrote: ↑Sun Dec 22, 2024 2:44 pm Case in point, at my work we use 'deep learning' for MR image processing, and it does not compare to generative 'AI' in any outward respect. If you did not know the internals of our software you could very well have not a clue that 'AI' techniques are being used to generate MR images.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices
(Why does phpBB not let me add >5 links here?)