Re: AIs gunning for our precious freelancers
Posted: Thu Dec 19, 2024 2:17 am
Oh, to be clear, there's nothing wrong with you worrying about AI! I do think, forgive me for this, that all this reasoning about 'outperforming' isn't right.
Crossing our fingers
https://verduria.org/
Oh, that's very easy. Human beings like Einstein are able to come up with valid new scientific theories. If you had asked a computer of ten years ago to come up with Special Relativity, based only on the information that was available to Einstein when he started his work, you would have gotten no response whatsoever. And if you'd asked the same of one of today's AIs, you might well get a response like "m=cE²".
The problem is not moral value so much as maintaining practical influence over civilization. Plenty of people value animals and even advocate rights for them, but no dog, however beloved, is conducting scientific research or running politics. If computers become powerful and smart enough, humans will find themselves in the position of pets or worse, with no means of holding our new overlords accountable or escaping our lowly position.
Ares Land wrote: ↑Thu Dec 19, 2024 2:13 am
The idea of humans outperforming computers, or computers outperforming humans, is a very capitalistic way of looking at the world. You're more of a socialist; you don't have to adopt this unhealthy train of thought.
Human beings don't need to perform, let alone outperform anything to have value.
1. Humans evolved in a more sophisticated environment. The simulated environments in which AI models evolve are too simple to capture the sophistication of the problem domain where they are deployed.
Owners like Elon Musk don't do any actual work. They mostly boss people around and curry favor with other moneybags.
Once a model is known to be biased, its dataset can be adjusted using data augmentation; this is usually one of the first chapters in big data textbooks. (A rough sketch of what I mean follows below the quote.)
Travis B. wrote: ↑Tue Dec 17, 2024 10:30 am
You do realize that the actual AIs, as we call them, that we have now are often quite racist and sexist, right? They take the (often unintended) biases in their training data and amplify them beyond whatever the intentions of their creators were. And why would AGI have to be any different? It would have to learn from data provided from somewhere, and thus would fall into the same trap.
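Here's the sketch I mentioned. It's purely illustrative: it assumes pandas, and the column names "group" and "label" plus the naive oversampling strategy are made up for the example, not taken from any particular textbook.
[code]
# Illustrative only: rebalance a tabular training set by oversampling
# under-represented (group, label) combinations before training a model.
# Assumes a pandas DataFrame with hypothetical "group" and "label" columns.
import pandas as pd

def oversample_to_balance(df, group_col="group", label_col="label", seed=0):
    """Return a copy of df in which every (group, label) cell has as many
    rows as the largest cell, by sampling existing rows with replacement."""
    cell_sizes = df.groupby([group_col, label_col]).size()
    target = cell_sizes.max()
    parts = []
    for _, cell in df.groupby([group_col, label_col]):
        parts.append(cell)
        extra = target - len(cell)
        if extra > 0:
            parts.append(cell.sample(n=extra, replace=True, random_state=seed))
    return pd.concat(parts, ignore_index=True)

# Toy, deliberately skewed example:
toy = pd.DataFrame({
    "group": ["a"] * 8 + ["b"] * 2,
    "label": [1, 1, 1, 1, 0, 0, 0, 0, 1, 0],
    "feature": range(10),
})
balanced = oversample_to_balance(toy)
print(balanced.groupby(["group", "label"]).size())  # every cell now the same size
[/code]
This only evens out representation on paper, of course; whether the trained model is actually less biased afterwards still has to be measured.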
Like I said before, the right way to think about an AI is as the theory itself. An AI model is a theory intended to predict new facts about its problem domain, because it is a set of equations that the computer learns from its dataset. If you supply many paired values of E and m as input, the AI is supposed to internally morph into an approximation of "E=mc^2". Afterwards, if you supply a value of one quantity, it can estimate the other. (A toy sketch of this curve-fitting view follows below the quote.)
Raphael wrote: ↑Thu Dec 19, 2024 4:47 am
Oh, that's very easy. Human beings like Einstein are able to come up with valid new scientific theories. If you had asked a computer of ten years ago to come up with Special Relativity, based only on the information that was available to Einstein when he started his work, you would have gotten no response whatsoever. And if you'd asked the same of one of today's AIs, you might well get a response like "m=cE²".
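Here's that toy sketch in Python. The data are synthetic, and note that in this sketch the functional form E = k·m is supplied by the programmer; the fit only recovers the coefficient, it doesn't come up with the form on its own.
[code]
# Toy illustration of the curve-fitting view: given many (m, E) pairs,
# a least-squares fit recovers the constant relating them (here, c^2).
# The data are synthetic; nothing here is a real experiment.
import numpy as np

rng = np.random.default_rng(0)
c = 299_792_458.0                                        # speed of light in m/s
m = rng.uniform(1e-3, 1.0, size=1000)                    # made-up masses in kg
E = c**2 * m * (1 + rng.normal(0, 1e-6, size=m.shape))   # E = m c^2 plus tiny noise

# Fit E ≈ k * m and compare the learned k with c^2.
k, *_ = np.linalg.lstsq(m[:, None], E, rcond=None)
print(f"learned k = {k[0]:.6e}, true c^2 = {c**2:.6e}")

# "Solving for the other": given a new E, estimate m with the learned k.
E_new = 9.0e16
print(f"estimated m for E = {E_new:.2e} J: {E_new / k[0]:.4f} kg")
[/code]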
No matter what people like you might believe, Einstein didn't get his fame for being able to do complicated arithmetic quickly in his head. If you think he did, it shows how little you know about intelligence.
Does the "left" really believe that if AI models didn't use this energy, then it wouldn't be used for other purposes?
Ares Land wrote: ↑Thu Dec 19, 2024 10:43 am
On one hand, thankfully we're nowhere near there yet -- not while the energy consumption of LLMs and other AI-adjacent stuff is as high as it is.
The tech industry's insistence that it be allowed to run unregulated is worrying in the long run; in the short term there's no risk of a machine upheaval, but plenty of damage to be done still.
To be fair, nature does have a far wider diversity of behaviors. On the other hand, I feel like if intelligence had a core, then the phenomena discussed by Oliver Sacks would not be possible. This is only an analogy, but here's an example of how illusions of centrality arise: https://www.youtube.com/watch?v=ahXIMUk ... =8&pp=iAQB
Ares Land wrote: ↑Wed Dec 11, 2024 8:02 am
As for underlying unity, I don't know either. But human behavior is highly diverse, from cave paintings, building cities, religious art, feudal tournaments, music, and coming up with computers and the internet, to trolling on the internet. It's more parsimonious to postulate underlying factors.
Cultural behaviors have evolved too. They evolve memetically, not genetically. In this context, genetics is a substrate at best.
I think you are right about the default setting, which I personally prefer. (Talking only about the end result: AI models work by using pattern-recognition techniques to learn a function that gives the best results on their training data. Since different AI techniques learn that function in different ways, I don't think the outputs will have anything in common in the widest sense. A toy sketch of this follows below the quote.)
Ares Land wrote: ↑Wed Dec 11, 2024 8:02 am
Try getting an AI image, then get a human artist to reproduce the same prompt. Of course the results will be quite different. For starters, it's likely a human artist could ask for payment. They might hand over their work late. You could get them to explain how they worked, what techniques they use, and what their influences are. (There are influences we're not aware of, but you'd get part of the answer.)
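Here's that toy sketch: two different model families are each fitted as well as possible to the same eight synthetic points, then asked about an input outside that range. The polynomial degrees are arbitrary stand-ins for "different AI techniques".
[code]
# Two model families fitted to the same training data can each be the best
# fit within their own family and still disagree away from that data.
# Synthetic toy data only; the degrees are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, size=x_train.shape)

coeffs_a = np.polyfit(x_train, y_train, deg=3)   # "model family A"
coeffs_b = np.polyfit(x_train, y_train, deg=5)   # "model family B"

# Both are judged by how well they reproduce the training points...
for name, coeffs in [("deg-3", coeffs_a), ("deg-5", coeffs_b)]:
    mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    print(f"{name}: training MSE = {mse:.4f}")

# ...but they diverge on an input outside the training range.
x_new = 1.5
print("deg-3 prediction at x = 1.5:", np.polyval(coeffs_a, x_new))
print("deg-5 prediction at x = 1.5:", np.polyval(coeffs_b, x_new))
[/code]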
If you're interacting with DALL-E through ChatGPT, I think you will get an answer.
I don't know if "interesting" is the right word. Nature is not trying to please humans. Human works are simpler than natural works. I suspect mental representation works by lossy compression. This results in simplified models that are interesting to other humans.
What if the high praise and financial investment computers get for their abilities are simply yet another example of humans revealing themselves as fools and miscreants?
In case I wasn't clear, AI models are superficial theories that are even dumber than humans.
malloc wrote: ↑Sat Dec 21, 2024 8:26 am
The constant contempt that Rotting Bones shows for humans really underscores my point. He has no such contempt for artificial intelligence but rather remarkably high hopes for the concept. Humans are constantly revealing themselves as fools and miscreants while computers earn continual praise and financial investment for their abilities. If humans truly had nothing to fear from computers taking over civilization, people like Rotting Bones could not denigrate them without sounding self-evidently ridiculous.
That's intelligence-ist!
rotting bones wrote: ↑Sat Dec 21, 2024 8:49 am
Here's a handy chart if you still don't get it:
Nature: Moron
Human: Imbecile
AI: Idiot
(By corollary we probably need a new term for the science fiction concept of "AGI", because generative AI isn't going anywhere and is very likely to be abbreviated as GAI.)
Please stop demonizing “AI”; the stuff you have a problem with isn’t AI research or AI tech, it’s a very small subset of that domain that (a) has ethical issues with training data sourcing and (b) is being horribly misused/way overly trusted.
AI research is valuable and important, and MOST of it doesn’t have these problems. It’s doing things like increasing the reliability of cancer screenings, helping astronomers make better observations, improving assistive technology, accelerating medical research, etc.
Not all AI is “train a chat bot” or “train an image generator” for nefarious or stupid purposes
Fair enough. I think I was trying to make the same point as you, though I might have been a bit clumsy about how I expressed it.
zompist wrote: ↑Sun Dec 22, 2024 8:56 am
I can’t (be bothered to learn how to) cut and paste on the iPad, but the discussion of E=mc^2 struck me as not understanding how new science works. It’s not a matter of rearranging letters and deciding on the exponents.
Einstein famously started by asking what the world would look like if you could travel at the speed of light. An LLM can’t start from that and derive special relativity, nor could most humans. With the caveat that *now that many explanations of relativity exist*, it can generate a pastiche of such an explanation.
Redirecting the topic for a moment: might you be able to give me a comprehensible explanation of what ‘deep learning’ is supposed to be? I’ve been hearing the term for a long time, but I’ve yet to work out what it means beyond ‘a neural network with lots of hidden layers’, in which case LLMs should qualify under that name too… (My rough mental picture is sketched below the quote.)
Travis B. wrote: ↑Sun Dec 22, 2024 2:44 pm
Case in point, at my work we use 'deep learning' for MR image processing, and it does not compare to generative 'AI' in any outward respect. If you did not know the internals of our software, you could very well not have a clue that 'AI' techniques are being used to generate MR images.
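Here's that rough mental picture as a minimal Python sketch: layer sizes are arbitrary and training is omitted entirely, so this is only the shape of the thing, not a working system. I'm genuinely unsure whether 'deep learning' means much more than this.
[code]
# "A neural network with lots of hidden layers": each hidden layer is a
# matrix multiply followed by a nonlinearity, and "deep" just means
# stacking several of them. Sizes below are arbitrary; no training is done.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

layer_sizes = [16, 64, 64, 64, 64, 1]        # four hidden layers of width 64
weights = [rng.normal(0, 0.1, size=(a, b))
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(b) for b in layer_sizes[1:]]

def forward(x):
    """Forward pass only; in actual deep learning the weights would be
    trained by gradient descent on a loss, which is omitted here."""
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:             # nonlinearity on hidden layers only
            h = relu(h)
    return h

x = rng.normal(size=(5, 16))                 # a batch of five made-up inputs
print(forward(x).shape)                      # -> (5, 1)
[/code]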