Re: Random Thread
Posted: Sun Feb 01, 2026 3:09 pm
by zompist
malloc wrote: ↑Sun Feb 01, 2026 3:00 pm
Travis B. wrote: ↑Sun Feb 01, 2026 12:06 pmDo you realize how much energy and water and hardware generative AI takes? Do you realize what is entailed in training a generative AI model to do anything? These things cost significant quantities of money.
The only reason the AI companies haven't gone bust yet is that money to burn keeps getting shoveled in by investors caught up in the AI hype.
Sure but one AI data center can produce as many images or lines of text as thousands of humans working around the clock. Does each data center really cost more than raising thousands of humans to adulthood and paying them to write or draw full-time?
If you run a pipe from a fermenting dumpster, you can output as much organic product as a chef can. Should we fire the chef?
Re: Random Thread
Posted: Sun Feb 01, 2026 3:38 pm
by Travis B.
malloc wrote: ↑Sun Feb 01, 2026 3:00 pm
Travis B. wrote: ↑Sun Feb 01, 2026 12:06 pmDo you realize how much energy and water and hardware generative AI takes? Do you realize what is entailed in training a generative AI model to do anything? These things cost significant quantities of money.
The only reason the AI companies haven't gone bust yet is that money to burn keeps getting shoveled in by investors caught up in the AI hype.
Sure but one AI data center can produce as many images or lines of text as thousands of humans working around the clock. Does each data center really cost more than raising thousands of humans to adulthood and paying them to write or draw full-time?
There is a reason what is produced by generative AI is known as "slop": it is fundamentally garbage lacking creativity or skill that people Do Not Want. People do not want to consume media generated by generative AI, people do not want to interact with AIs instead of humans in the area of customer service, the code generated by coding AIs is trash that simply cannot be trusted (and requires more work to get working than it would take for a human to simply write it themselves), and so on.
The only reason we are dealing so much with AI right now is that the executives have bought into the hype and are attempting to ram it down our throats.
Once they realize that it has not actually added to their bottom lines and the hype evaporates, many of these AI companies are going to go belly up, and are likely to take our investments with them.
I expect things like Microsoft's stated goal of rewriting all their code in Rust by 2030 (!) using AI to achieve a goal of each programmer producing one million lines of code a month (an utterly insane goal translating to 0.5 LoC/second) to fail miserably. Things like this are what you get when upper management is completely divorced from reality, as is the case with these people attempting to push AI.
Re: Random Thread
Posted: Sun Feb 01, 2026 8:10 pm
by jcb
Travis B. wrote: ↑Sun Feb 01, 2026 3:38 pm
malloc wrote: ↑Sun Feb 01, 2026 3:00 pm
Travis B. wrote: ↑Sun Feb 01, 2026 12:06 pmDo you realize how much energy and water and hardware generative AI takes? Do you realize what is entailed in training a generative AI model to do anything? These things cost significant quantities of money.
The only reason the AI companies haven't gone bust yet is that money to burn keeps getting shoveled in by investors caught up in the AI hype.
Sure but one AI data center can produce as many images or lines of text as thousands of humans working around the clock. Does each data center really cost more than raising thousands of humans to adulthood and paying them to write or draw full-time?
There is a reason what is produced by generative AI is known as "slop": it is fundamentally garbage lacking creativity or skill that people Do Not Want. People do not want to consume media generated by generative AI, people do not want to interact with AIs instead of humans in the area of customer service, the code generated by coding AIs is trash that simply cannot be trusted (and requires more work to get working than it would take for a human to simply write it themselves), and so on.
The only reason we are dealing so much with AI right now is that the executives have bought into the hype and are attempting to ram it down our throats.
Once they realize that it has not actually added to their bottom lines and the hype evaporates, many of these AI companies are going to go belly up, and are likely to take our investments with them.
I expect things like Microsoft's stated goal of rewriting all their code in Rust by 2030 (!) using AI to achieve a goal of each programmer producing one million lines of code a month (an utterly insane goal translating to 0.5 LoC/second) to fail miserably. Things like this are what you get when upper management is completely divorced from reality, as is the case with these people attempting to push AI.
(1) Even if AI hype and investment eventually dies out, it could take a decade (or more) for that to happen. After all, it took Uber a decade before it finally turned a profit for a year, and they're still around.
(2) I think there will still be a place for "slop", because there are many things that don't *need* to be deep, high-quality art (focusing on AI art for this example), like clip art or stock footage. Compare how society doesn't hire an artist to finely paint the middle line of a highway with a paint brush, but instead uses a mechanical spray painter that crudely does it.
Re: Random Thread
Posted: Sun Feb 01, 2026 10:19 pm
by zompist
jcb wrote: ↑Sun Feb 01, 2026 8:10 pm
(1) Even if AI hype and investment eventually dies out, it could take a decade (or more) for that to happen. After all, it took Uber a decade before it finally turned a profit for a year, and they're still around.
OpenAI started 9 years ago and is still burning through hundreds of billions of dollars. AI companies are losing money.
Corporate America isn't blithely unaware of AI, needing ten more years to hear of it and use it. They've all heard of it, tried it, been trying to force it down our throats. Still not enough for AI to make money.
"Oh, but Amazon Web Services didn't make money at first." AWS broke even within 3 years.
(2) I think there will still be a place for "slop", because there's many things that don't *need* to be deep high quality art (focusing on AI art for this example), like clip art or stock footage. Compare how society doesn't hire an artist to paint the middle line of a highway with a paint brush.
I can't find a size for the clip art industry, but Getty Images + Shutterstock (which are merging) make about $1.8 billion a year. That's 1/3 the size of Wrigley, makers of bubble gum and candy.
People get unduly excited because AI can make pictures. Getty + Shutterstock already have a quarter of a billion pictures. There are thousands of free pictures available. Or you could pay a human somewhere between $5 and $500 depending on the quality you need.
To be profitable, with its current spending plans, OpenAI needs to sell ten times the amount of AI it does now within the next few years.
Re: Random Thread
Posted: Sun Feb 01, 2026 10:37 pm
by Man in Space
> ku
ERROR! THIS IS A PATSAK PLANET, YOU ARE A CHATLIAN
> sudo ku
_ _ _ _ _ _ _ _ _ _ _ _ _
| 1 _ _ | _ _ _ | _ _ _ |
| _ _ _ | _ 3 _ | _ _ _ |
| _ _ _ | _ _ _ | 6 _ _ |
_ _ _ _ _ _ _ _ _ _ _ _ _
| _ _ _ | _ _ 8 | _ _ _ |
| _ 2 _ | _ _ _ | _ _ _ |
| _ _ _ | _ _ _ | _ 4 _ |
_ _ _ _ _ _ _ _ _ _ _ _ _
| _ _ _ | 9 _ _ | _ _ _ |
| _ _ _ | _ _ _ | _ _ 5 |
| _ _ 7 | _ _ _ | _ _ _ |
_ _ _ _ _ _ _ _ _ _ _ _ _
Re: Random Thread
Posted: Mon Feb 02, 2026 1:40 am
by Raphael
Man in Space wrote: ↑Sun Feb 01, 2026 10:37 pm
> ku
ERROR! THIS IS A PATSAK PLANET, YOU ARE A CHATLIAN
> sudo ku
_ _ _ _ _ _ _ _ _ _ _ _ _
| 1 _ _ | _ _ _ | _ _ _ |
| _ _ _ | _ 3 _ | _ _ _ |
| _ _ _ | _ _ _ | 6 _ _ |
_ _ _ _ _ _ _ _ _ _ _ _ _
| _ _ _ | _ _ 8 | _ _ _ |
| _ 2 _ | _ _ _ | _ _ _ |
| _ _ _ | _ _ _ | _ 4 _ |
_ _ _ _ _ _ _ _ _ _ _ _ _
| _ _ _ | 9 _ _ | _ _ _ |
| _ _ _ | _ _ _ | _ _ 5 |
| _ _ 7 | _ _ _ | _ _ _ |
_ _ _ _ _ _ _ _ _ _ _ _ _
*chuckle*
Re: Random Thread
Posted: Mon Feb 02, 2026 5:53 am
by bradrn
???
Re: Random Thread
Posted: Mon Feb 02, 2026 5:59 am
by Raphael
I understand the "sudo ku" part. I don't understand the "ku" reference itself. Wikipedia doesn't seem to help.
Re: Random Thread
Posted: Mon Feb 02, 2026 6:18 am
by zompist
Raphael wrote: ↑Mon Feb 02, 2026 5:59 am
I understand the "sudo ku" part. I don't understand the "ku" reference itself. Wikipedia doesn't seem to help.
https://en.wikipedia.org/wiki/Kin-dza-dza!
Re: Random Thread
Posted: Mon Feb 02, 2026 6:24 am
by Raphael
Re: Random Thread
Posted: Mon Feb 02, 2026 1:57 pm
by naz
Re: Random Thread
Posted: Mon Feb 02, 2026 2:22 pm
by alice
Travis B. wrote: ↑Sun Feb 01, 2026 3:38 pm
I expect things like Microsoft's stated goal of rewriting all their code in Rust by 2030 (!) using AI to achieve a goal of each programmer producing one million lines of code a month (an utterly insane goal translating to 0.5 LoC/second) to fail miserably. Things like this are what you get when upper management is completely divorced from reality, as is the case with these people attempting to push AI.
"(name of proprietary code-generating software), write a simple loop in C to print the numbers from one to ten in as many lines of code as possible".
int
i
;
i
=
0
;
while
(
i
<
10
)
{
printf
(
"%d"
,
i
)
;
i
=
i
+
1
;
}
Well... it's a start, but there's plenty of room for improvement; you could use atoi instead of printf for a start, and I'm sure you can get a few calls to malloc in there somewhere. (Not to mention an obvious bug which an AI could plausibly hallucinate!)
Re: Random Thread
Posted: Mon Feb 02, 2026 2:37 pm
by linguistcat
Generative AI doesn't even make usable "art" without human intention behind it. And I don't mean the prompt written to make an image. I mean like, I saw a piece that was supposed to "Celebrate the American heroes of WW2" or something like that. Only the AI had the soldiers running into the water toward the ships, not toward the enemy. Whoever had prompted it either hadn't noticed or was being sarcastic about their intentions.
Now, I'm far from anti-Allies, and WW2 was probably the last truly necessary war to fight in my mind (I'd be willing to hear evidence otherwise if folks have it, either that WW2 was not necessary or that a later war was). But you have to be an extremely poor "artist" by any definition to portray the exact opposite of your intention. It would be better to draw it poorly yourself.
I'm finding that I would not use AI for something I can't already do myself, because if I can't do it, I cannot fix its mistakes. Anyone who relies on AI to do things they can't is a fool whom no one should rely on. That's not the AI's fault. It can be an incredible tool for things like finding cancer and whatnot, but there it is being used by trained professionals to augment, not replace, their abilities. And it is a TOOL among other tools, not the only thing the person is using.
Re: Random Thread
Posted: Mon Feb 02, 2026 3:05 pm
by malloc
Sure but I could easily find numerous AI images that are impossible to distinguish from human-made ones. You can reasonably argue that AI often misses the mark but AI advocates could say the same about human artists. It may well be that AI cannot produce fine art or literature but it definitely seems capable of simple things like manga-styled pinup or short news articles.
Part of the problem is that experts in computing and experts outside it give very different answers. Linguists, psychologists, and anthropologists all insist that AI is not replicating human cognition in any meaningful way. Meanwhile computer scientists and software engineers are convinced that LLMs are rapidly closing in on human intelligence. Who in this debate should I trust?
Re: Random Thread
Posted: Mon Feb 02, 2026 3:41 pm
by linguistcat
malloc wrote: ↑Mon Feb 02, 2026 3:05 pm
Sure but I could easily find numerous AI images that are impossible to distinguish from human-made ones. You can reasonably argue that AI often misses the mark but AI advocates could say the same about human artists. It may well be that AI cannot produce fine art or literature but it definitely seems capable of simple things like manga-styled pinup or short news articles.
Part of the problem is that experts in computing and experts outside it give very different answers. Linguists, psychologists, and anthropologists all insist that AI is not replicating human cognition in any meaningful way. Meanwhile computer scientists and software engineers are convinced that LLMs are rapidly closing in on human intelligence. Who in this debate should I trust?
Well, for one, if the people making images through AI get pictures that ACTUALLY align with their intent, then good for them. I will judge them by the same standards I judge other art by, though I'd still like to know a piece was made with AI, so I don't (for example) pay for a handmade good when it's not handmade.
Second, as smart as computer scientists and software engineers are about computers, I don't trust what they have to say about human cognition. I would trust the people who understand that, especially if for example, they have also studied non-human intelligence.
I think the comp sci folks are hyping up what they're doing. As someone who has only baseline knowledge in either realm, I default to the people who understand what human cognition is as much as anyone does to tell me what is or isn't similar to it. But that's my own decision on how to weigh things. I will point out that people will pay computer people more if they say computers are so friggin' smart you guys! Whereas no one will pay the folks studying us more unless they find evidence of something really crazy. Saying computers aren't as smart as humans doesn't scratch that itch, but I really do think people assuming computers are definitely intelligent have something like pareidolia going on, tbh.
Re: Random Thread
Posted: Mon Feb 02, 2026 4:10 pm
by zompist
linguistcat wrote: ↑Mon Feb 02, 2026 3:41 pm
Second, as smart as computer scientists and software engineers are about computers, I don't trust what they have to say about human cognition. I would trust the people who understand that, especially if for example, they have also studied non-human intelligence.
Please don't accept malloc's blithe statements about "computer scientists and software engineers" as if he knows what he's talking about.
Here on the board, Travis and I are both pretty skeptical about AI, and we're both career programmers. (Were, in my case.) Plenty of developers, though often forced to use AI by their bosses, are skeptical about the results. Daniel Dennett, one of the foremost writers on AI in general, mostly on the pro-AI side, didn't live to see ChatGPT, but warned, "The real danger, I think, is not that machines more intelligent than we are will usurp us as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence."
I really do think people assuming computers are definitely intelligent have something like pareidolia going on, tbh.
For sure.
The people who get super-excited about AI are CEOs. And, to be frank, people whose standards are low. As a Mefite noted, "The output of gen AI is more convincing to people with lower standards and less knowledge of the domain."
Some humans do very well, far above chance, in detecting AI art. Artists are pretty good at this. It's no longer just a matter of counting fingers, but there are tells.
Re: Random Thread
Posted: Mon Feb 02, 2026 7:53 pm
by Travis B.
AI content in general has an uninspired, derivative air about it, which is not surprising given that generative AI fundamentally lacks creativity and understanding and merely distills and summarizes preexisting works.
Re: Random Thread
Posted: Mon Feb 02, 2026 9:35 pm
by jcb
zompist wrote: ↑Sun Feb 01, 2026 10:19 pm
jcb wrote: ↑Sun Feb 01, 2026 8:10 pm
(1) Even if AI hype and investment eventually dies out, it could take a decade (or more) for that to happen. After all, it took Uber a decade before it finally turned a profit for a year, and they're still around.
OpenAI started 9 years ago and is still burning through hundreds of billions of dollars. AI companies are losing money.
Corporate America isn't blithely unaware of AI, needing ten more years to hear of it and use it. They've all heard of it, tried it, been trying to force it down our throats. Still not enough for AI to make money.
"Oh, but Amazon Web Services didn't make money at first." AWS broke even within 3 years.
(2) I think there will still be a place for "slop", because there's many things that don't *need* to be deep high quality art (focusing on AI art for this example), like clip art or stock footage. Compare how society doesn't hire an artist to paint the middle line of a highway with a paint brush.
I can't find a size for the clip art industry, but Getty Images + Shutterstock (which are merging) make about $1.8 billion a year. That's 1/3 the size of Wrigley, makers of bubble gum and candy.
People get unduly excited because AI can make pictures. Getty + Shutterstock already have a quarter of a billion pictures. There are thousands of free pictures available. Or you could pay a human somewhere between $5 and $500 depending on the quality you need.
To be profitable, with its current spending plans, OpenAI needs to sell ten times the amount of AI it does now within the next few years.
I don't disagree with you. There is a giant bubble, and it will (eventually) pop.
Re: Random Thread
Posted: Tue Feb 03, 2026 10:25 am
by linguistcat
zompist wrote: ↑Mon Feb 02, 2026 4:10 pm
linguistcat wrote: ↑Mon Feb 02, 2026 3:41 pm
Second, as smart as computer scientists and software engineers are about computers, I don't trust what they have to say about human cognition. I would trust the people who understand that, especially if for example, they have also studied non-human intelligence.
Please don't accept malloc's blithe statements about "computer scientists and software engineers" as if he knows what he's talking about.
Sorry for that. I guess my point is, even if his conception of the situation were true, I'd trust the people who know about cognition, since it's a question about cognition and not the architecture supporting it. I do remember both of you being very skeptical. I also remember someone a couple of years back who worked for (I think) Google who said he believed AI was already sentient, and quit his position because of that. But I seem to remember he was kind of seen as a fringe weirdo, so probably best not to count that.
Re: Random Thread
Posted: Tue Feb 03, 2026 2:39 pm
by alice
zompist wrote: ↑Mon Feb 02, 2026 4:10 pm
Here on the board, Travis and I are both pretty skeptical about AI, and we're both career programmers. (Were, in my case.)
You can add me to that list, as a "was" (or a "were"? That's for another thread.)
zompist wrote: ↑Mon Feb 02, 2026 4:10 pm
Plenty of developers, though often forced to use AI by their bosses, are skeptical about the results. Daniel Dennett, one of the foremost writers on AI in general, mostly on the pro-AI side, didn't live to see ChatGPT, but warned, "The real danger, I think, is not that machines more intelligent than we are will usurp us as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence."
A pertinent saying, possibly slightly misquoted: "If you put garbage into a computer, nothing will come out but garbage. But this garbage, having passed through a very expensive and complicated process, is somehow ennobled and no one dares criticise it".
Alternatively: "Recycled and recirculated shit is still shit, in whatever quantity". Just as the Internet is a part of life for increasingly many people, so too will AI-generated slop be. And that's the main problem, *not* that people with more money than sense are trying to automate and replace human creativity (which won't succeed, I think we're pretty much agreed).