WeepingElf wrote: ↑Wed Jun 11, 2025 3:45 pm
Autocorrect often gets things
dead wrong, especially if technical or otherwise rare vocabulary is involved, and therefore doesn't render proofreaders redundant - because it
has no idea what the text means. It doesn't know that "Queen" and "Sensation" aren't names of languages, and "Quenya" and "Sindarin" are correctly spelled language names, for instance. And LLMs may be better, but only gradually, and they still don't know what the texts mean, and often produce nonsense. On a tangent, I know a guy whose job is proofreading machine-translated texts, and he finds
lots of mistakes in the texts he proofreads.
for sure. i don't think there are many jobs that generative models can fully replace, but you don't need to automate a whole job for a person to lose theirs. and it's not like people don't screw up from time to time as well.
zompist wrote: ↑Wed Jun 11, 2025 4:00 pm
If your point is "LLMs can be used by evil CEOs to replace human jobs but do an even shittier job", I agree and
have all along. Malloc's problem is grossly overestimating how close we are to "real AI", and assigning the evil agency to the AIs rather than the CEOs.
oh, totally, and I think my position is basically yours as well. I'm merely saying that some of the corrections to his obviously mistaken position went a bit too far. like, there's really been no significant progress towards the goal of humans on mars, but there has been significant progress towards delegating relatively complex cognitive tasks to a machine, notably carrying out instructions given by humans
which, from an employer's point of view, is all a worker is: a thing that, if you tell it to do X, does X. bosses often use their workers exactly like most people at work are now using LLMs: say "write me a proposal for that thing they're asking about in that email (even though i'm gonna read it, and probably rewrite half of it, cause it's me who really knows how to do this, but i'm busy / would rather do this other thing)". increasingly, they're building agents, "self-prompting" ensembles of models, and that opens the door to automating more tasks that were thought to be inherently in the human domain; the AGI doomsayers are not wrong about that bit. of course, all technologies plateau: the reasoning of the AGI doomsayer [the steelmanned version of malloc's position here]
reminds me of that comic ME AM PLAY GODS. the mistake is reasoning along the lines of "fewer back then, more now, eventually all". if we learn to make fire, we'll burn the entire cosmos. if computers become able to do some things, they'll be able to do all the things. but i mean, i don't think it has plateaued (?) yet, with NLP. then again
zompist wrote: ↑Wed Jun 11, 2025 5:17 pm
Torco wrote: ↑Wed Jun 11, 2025 10:19 am
generative models *are* actually becoming useful in a way that they weren't 15, 30 or 45 years ago. [not exactly replace, but close enough] (things which all are true)
Chat programs are a huge advance, sure. But the tempo is important to get right, since malloc is convinced human-level AI is coming next week.
absolutely yes. i'm perhaps even more radical: malloc thinks human-level AI is a possible thing. conceivably, it'll either be better-at-some-things-than-people-and-worse-at-others, or better-at-all-things. in between, tho, there's a lot of sets of things it could be better than us at which are honestly scary, so being scared of a nebulous "AGI" [what does that even mean? it's immensely unlikely that it'll be just-as-good-as-people-at-all-things] is a regrettably ignorant version of a genuinely reasonable concern, not an altogether silliness.
lmao malloc, honestly, learn about these things if they concern you that much. dall-e was 2020. we used to have a saying here, RAFBA, but it was a bit crass, so i'll say "please read up on this subject before so vehemently discussing it". PRUOTSBSVDI just doesn't have as good a ring to it, alas. maybe prutos?