
Re: AIs gunning for our precious freelancers

Posted: Tue Jan 28, 2025 9:46 am
by Ketsuban
malloc wrote: Mon Jan 27, 2025 8:21 pm But logically if something is physically possible and has trillions of dollars dedicated to its realization, it will eventually happen. Zompist himself already admitted that they were half-way to AGI and that was several years ago when the technology first debuted. Sure the current forms of AI fall short of human intelligence, but we are talking about technology that has only existed for several years. Airplanes in their first few years could barely get off the ground but eventually even the crappiest plane could outfly even the fastest and most agile bird.
For one, this assumes the money is going into a method that can lead to AGI; I contend that LLMs and latent diffusion models cannot. (Zomp is not an expert in the field.) For two, the field of artificial intelligence has been around for decades and seen a new hype bubble whenever they came up with something that people found interesting. Which is more likely: that this time unlike before they've cracked it and just need a few trillion dollars more to get there, or that this is another hype cycle which will go away the same way it did before, leaving behind a couple neat gadgets and poisoning the term "AI" for another generation of researchers who have to come up with another word for what they do ("robotics", "machine learning", "computer vision") to get funding?
malloc wrote: Mon Jan 27, 2025 8:21 pm Meanwhile the internet is overflowing with AI generated images and artists are struggling to find work because AI can replicate their abilities except faster and more cheaply. Perhaps the finest and most innovative artists have nothing to fear currently, but those just getting started or lacking superlative talent cannot easily keep pace with machines that can produce decent if not necessarily inspired images.
Is it? Or are you defining "the internet" as the likes of Google and Facebook, which have a vested interest in making AI look big and relevant because they're currently funnelling money into AI projects? As soon as you try to do anything with Stable Diffusion beyond anodyne pablum like generic landscape shots, it shows sharp limits—it doesn't actually know what a human looks like so all the poses have a samey quality (or it includes too many fingers or an extra arm), it doesn't know how light works so its shading is incoherent, it has no sense of symmetry or shape in three-dimensional space so it produces impossible objects and random curlicues with no regularity, etc.

Re: AIs gunning for our precious freelancers

Posted: Tue Jan 28, 2025 2:22 pm
by alice
@malloc: what is going to happen to all those redundant people when AI eventually takes over? Are they going to be recycled into integrated circuits or power sources, or something? Do you genuinely believe that I'm the first person to think about this? That's a *lot* of living breathing humans to take care of.

Re: AIs gunning for our precious freelancers

Posted: Tue Jan 28, 2025 9:44 pm
by malloc
Ketsuban wrote: Tue Jan 28, 2025 9:46 amIs it? Or are you defining "the internet" as the likes of Google and Facebook, which have a vested interest in making AI look big and relevant because they're currently funnelling money into AI projects? As soon as you try to do anything with Stable Diffusion beyond anodyne pablum like generic landscape shots, it shows sharp limits—it doesn't actually know what a human looks like so all the poses have a samey quality (or it includes too many fingers or an extra arm), it doesn't know how light works so its shading is incoherent, it has no sense of symmetry or shape in three-dimensional space so it produces impossible objects and random curlicues with no regularity, etc.
That really depends on the specific AI and how it was trained. Certainly there are plenty of terrible AI images that one can easily identify as rubbish even as thumbnails. But there are also plenty of impressive AI images that pass for real art or even photorealism. Many times I have seen what looked like an impressive drawing, only to find a note explaining it was AI-generated. Then there are the remarkable advances of LLMs which can write entire novels in mere fractions of a second. One can criticize LLMs for their tendency toward factual inaccuracies, but one must also concede that they routinely get things right about an enormous range of topics, certainly as much as any human would.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 1:05 am
by zompist
malloc wrote: Tue Jan 28, 2025 9:44 pmThen there are the remarkable advances of LLMs which can write entire novels in mere fractions of a second.
Can you name two or three of these that you've read? Also, [citation needed] on that "fractions of a second" claim.
One can criticize LLMs for their tendency toward factual inaccuracies, but one must also concede that they routinely get things right about an enormous range of topics, certainly as much as any human would.
This is called begging the question-- that is, you're trying to support your claim by assuming it.

At this point LLMs make hallucinations that no human would make (e.g. putting glue on pizza) and fail on elementary tasks, like counting letters.

They are remarkable, but they are at root prediction machines, which are good at producing things that look like their training materials. This turns out to be very useful if, say, you want a lot of things that look like their training materials. There is no knob you can pull to keep them from hallucinating; it's inherent in what they are and how they do it.
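
To see what I mean by "prediction machine", strip everything else away: generation is just scoring every word in the vocabulary, turning the scores into probabilities, and sampling one-- over and over. Here's a toy sketch in Python; the "model" is a random stub standing in for the trained network, not anything real.

Code: Select all

import math, random

def stub_model(context):
    # Stand-in for the trained network: assign a score (logit) to every
    # vocabulary word given the text so far. A real LLM computes these
    # from billions of learned weights; this stub just makes them up.
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return {w: random.uniform(-1, 1) for w in vocab}

def next_word(context):
    logits = stub_model(context)
    # softmax: turn raw scores into a probability distribution
    biggest = max(logits.values())
    exps = {w: math.exp(v - biggest) for w, v in logits.items()}
    total = sum(exps.values())
    # sample one word according to those probabilities
    r, cumulative = random.random(), 0.0
    for w, e in exps.items():
        cumulative += e / total
        if r < cumulative:
            return w
    return w

words = ["the"]
for _ in range(8):
    words.append(next_word(words))
print(" ".join(words))

Everything the machine outputs comes from that loop; all the sophistication is in how the scores get computed.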

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 1:37 am
by bradrn
On the last point: my mental model of LLMs is that they’re somewhat like a human — a rather stupid one with poor memory, that is — who’s shut inside a small room with no access to external resources. They can do a fair number of tasks which humans are generally good at, and maybe can remember some general knowledge if you’re lucky, but otherwise don’t expect them to do well at anything specialised.

Oh, and they can’t learn from their mistakes either. To me that may be the most significant way in which current LLMs are lacking.
zompist wrote: Wed Jan 29, 2025 1:05 am At this point LLMs make hallucinations that no human would make (e.g. putting glue on pizza) and fail on elementary tasks, like counting letters.
To my understanding that second one is more an artefact of the tokenisation process: current LLMs don’t have any access to individual letters. (Though they’re bad at quantitative tasks in general, to be sure.)
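
To make that concrete, here's a toy illustration in Python. The vocabulary is invented for the example; real tokenisers (BPE and the like) learn theirs from data. The point is that the model receives only token IDs, never letters.

Code: Select all

# Toy greedy tokeniser with an invented vocabulary, for illustration only.
VOCAB = {"straw": 0, "berry": 1, "s": 2, "t": 3, "r": 4,
         "a": 5, "w": 6, "b": 7, "e": 8, "y": 9}

def tokenise(text):
    pieces, i = [], 0
    while i < len(text):
        # greedily take the longest vocabulary entry matching at position i
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                pieces.append(text[i:j])
                i = j
                break
    return pieces

pieces = tokenise("strawberry")
print(pieces)                      # ['straw', 'berry']
print([VOCAB[p] for p in pieces])  # [0, 1]: the IDs are all the model sees
# Counting the r's would mean looking inside the pieces, which the model
# cannot do: from its point of view "strawberry" is just (0, 1).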

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 4:24 am
by bradrn
zompist wrote: Wed Jan 29, 2025 1:05 am They are remarkable, but they are at root prediction machines, which are good at producing things that look like their training materials.
On reflection, I have a further comment on this. You’ve made this ‘prediction machine’ comment several times now, and I think I’ve finally managed to nail down what bothers me so much about it.

To summarise: I think that, to be truly general, a prediction engine must have intelligence of some form. The two are very strongly correlated, if not actually two ways of viewing the same thing.

Let me illustrate by means of my personal experience. As I may have mentioned here before, I have autism. (Formally undiagnosed but obvious to any specialist.) There are a lot of varieties of autism, but the most basic defining feature is difficulty with understanding social situations — what people have called ‘emotional intelligence’ (which is a characterisation I agree with). How does this manifest in my life? Well, a significant consequence is that it is extremely difficult for me to predict how people will react to things. I never quite know what to say in social situations, because I don’t know what would make people react well vs what would make them react badly. I can make educated guesses, but they’re often quite poor, and I do regularly say the wrong thing.

(Online I often hypercorrect, and usually take pains to make sure I don’t say anything impolite. Besides, communication through text is different.)

By contrast, I thoroughly enjoy fields like physics and programming and so on. We don’t normally conceptualise the process of problem-solving in these fields as ‘prediction’, but thinking about how I approach them, that’s exactly what I’m doing. For instance, a critical part of science is hypothesising; what is a hypothesis but a prediction of how some system will react in a certain situation? In programming, I know that what I’m writing is correct by running the program in my mind — i.e. predicting its result when run on the computer.

(I should probably clarify how I’m using the word ‘prediction’ here. Given some kind of initial state, I say that ‘prediction’ involves inferring the result after using some process to transform that state. That is, successful prediction depends on having some kind of model of the process over which it’s defined. For LLMs, the state is ‘human text’, and the process is ‘humans continuing that text’. It’s a limited domain but one which can represent a surprisingly large number of problems, because given a question humans will naturally continue it with an answer.)

Compare this to someone who, say, is poor at mathematics. What happens when they attempt to solve a mathematical problem? Well… they get stuck. They can’t work out the first step to solving the problem. In other words, they can’t predict which kind of operations would get them closer to the solution and which would get them farther.

Of course, this is not to say that all aspects of intelligence can be reduced to prediction. (The best counterexample I can think of is understanding notation — although then again, how much intelligence this requires is arguable.) But hopefully these examples are enough to make my point, which is that a sufficiently good ‘prediction machine’ must be capable of doing things which we consider signs of intelligence in humans; and that conversely, a human who is intelligent in some domain becomes good at prediction tasks in that domain.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 4:53 am
by Ares Land
@malloc: I think what's happening is that you see AI as a very deep existential risk; something like the machines from The Matrix, or Frankenstein or Cylons or replicants.
Not to single you out; this seems to be a widely shared worry (or hope; some people want a super-intelligent AI god).

The most common view though, I think, on this board and elsewhere is that AI (to put it broadly) as it exists now is another technological product. Which doesn't mean we can't fear or worry about it, but only in the way we can be suspicious of any other piece of technology.

In any case, these are two different and incompatible views and I believe that's where most of the frustration is from.

Personally, I feel AGI (or replicants, or Cylons, if you prefer) are possible, in the sense that there's no physical law that would prevent it from happening, but exceedingly unlikely. The debate is surprisingly hard to settle; I don't know if there's any definitive argument on the matter.
The debate feels almost religious -- some people feel the Singularity (to reuse an outmoded word) is something to worry about, perhaps even the most serious thing to worry about; others just can't take it seriously.

I have a mixture of fascination and annoyance towards AGI. I'd love to understand more about how, say, ChatGPT works, how it models human language (if it does), and how it seems to know things, despite (as far as I know) merely producing human language. This could teach us a lot about the way language works, in fact even about what language is (evidently much more than a simple means of communication).

The annoyance part is that, well, generative AI doesn't feel useful. I find AI-generated pictures ugly, and I still haven't found a single practical use for ChatGPT. Other applications of AI are another matter, but they don't get as much publicity.


TLDR: It's not like I can dismiss your worries out of hand; but I have trouble taking these seriously myself because to me what you describe just isn't happening.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 5:02 am
by bradrn
Ares Land wrote: Wed Jan 29, 2025 4:53 am @malloc: I think what's happening is that you see AI as a very deep existential risk; something like the machines from The Matrix, or Frankenstein or Cylons or replicants.
Not to single you out; this seems to be a widely shared worry (or hope; some people want a super-intelligent AI god).

The most common view though, I think, on this board and elsewhere is that AI (to put it broadly) as it exists now is another technological product. Which doesn't mean we can't fear or worry about it, but only in the way we can be suspicious of any other piece of technology.
Agreed with this (and everything else in your post). I take on the latter view.

Also worth pointing out: The Matrix, Frankenstein and Cylons are science fiction. Not fact! There are good reasons to expect that AGI, if it does happen, will look very different to anything that any sci-fi author has yet predicted. (Simply because that’s the case with every technology ever. Most authors are lousy futurists.)
Personally, I feel AGI (or replicants, or Cylons, if you prefer) are possible, in the sense that there's no physical law that would prevent it from happening, but exceedingly unlikely. The debate is surprisingly hard to settle; I don't know if there's any definitive argument on the matter.
The debate feels almost religious -- some people feel the Singularity (to reuse an outmoded word) is something to worry about, perhaps even the most serious thing to worry about; others just can't take it seriously.
On this, I really like Scott Alexander’s article on Why I Am Not (As Much Of) A Doomer (As Some People). It nicely lays out the argument for doomerism and its premises, and how those premises may or may not be satisfied. (The short version is that there are a lot of unknowns, and people’s intuitions on them vary wildly.)

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 5:14 am
by zompist
bradrn wrote: Wed Jan 29, 2025 4:24 am
zompist wrote: Wed Jan 29, 2025 1:05 am They are remarkable, but they are at root prediction machines, which are good at producing things that look like their training materials.
On reflection, I have a further comment on this. You’ve made this ‘prediction machine’ comment several times now, and I think I’ve finally managed to nail down what bothers me so much about it.

To summarise: I think that, to be truly general, a prediction engine must have intelligence of some form. The two are very strongly correlated, if not actually two ways of viewing the same thing. [...]

(I should probably clarify how I’m using the word ‘prediction’ here. Given some kind of initial state, I say that ‘prediction’ involves inferring the result after using some process to transform that state. That is, successful prediction depends on having some kind of model of the process over which it’s defined. For LLMs, the state is ‘human text’, and the process is ‘humans continuing that text’. It’s a limited domain but one which can represent a surprisingly large number of problems, because given a question humans will naturally continue it with an answer.)
I think you're just redefining "prediction" here to make it sound like intelligence. It's not, nor does it imply a model.

Flip a coin; I predict that 50% of the time it'll come up heads. This is statistics at its most basic, which in this case is a single number. I hope you're not maintaining that one number is intelligent.

Roll three 6-sided dice. Now you can do fancier statistics, creating something that approximates a bell curve. You can predict that rolling 3d6 will give you 10 or 11 more than anything else. You can do this by enumeration, simply working out every possible dice roll. Note that you don't need an intelligent system at all to make these predictions. You can in fact make predictions, with the precise statistical distribution, by using another 3d6-- three dead pieces of plastic.
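
If you want to see the enumeration done, it's a few lines in any language; a quick Python sketch:

Code: Select all

from collections import Counter
from itertools import product

# Enumerate all 6**3 = 216 possible rolls of three six-sided dice
totals = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))

for total in sorted(totals):
    count = totals[total]
    print(f"{total:2d}: {count:2d}/216 = {count / 216:.3f} {'#' * count}")
# 10 and 11 each come up 27/216 times-- more often than any other total.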

Now consider a Markov generator. Here the dataset can be very large, larger than a human can easily memorize, certainly impossible to directly predict yourself. It can approximate natlang texts pretty well, at least at the phrase level. Yet at root it isn't more than a record of "what happened" in the training data, not much more than records at a weather station. It can produce (= predict) natlang-like behavior (e.g. "the" is followed by an adjective or noun) without having any model of language at all. Humans show these patterns for very different reasons, but those patterns do exist in mindless text and can be imitated by a very simple program.
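
And the generator itself is hardly any longer. A minimal sketch (the corpus file name is just a placeholder):

Code: Select all

import random
from collections import defaultdict

def train(words, n=3):
    # Record which words followed each n-word sequence in the training text.
    table = defaultdict(list)
    for i in range(len(words) - n):
        table[tuple(words[i:i + n])].append(words[i + n])
    return table

def generate(table, length=40):
    state = random.choice(list(table))
    out = list(state)
    for _ in range(length):
        followers = table.get(state)
        if not followers:
            break  # dead end: this sequence only appeared at the very end
        out.append(random.choice(followers))  # replay "what happened"
        state = tuple(out[-len(state):])
    return " ".join(out)

# Any plain-text corpus will do; "corpus.txt" is a placeholder.
words = open("corpus.txt").read().split()
print(generate(train(words)))

That's the whole program: a record of what followed what, plus a die to pick among the records.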

The question is, are LLMs more like brains, or more like Markov generators? We can't assume the answer we want. An LLM doesn't have a model of the world just because you kind of think it does, or can't imagine how it works without one. Nor do I assert that it doesn't have a model. We'd have to investigate its actual weights and their meanings very closely in order to judge that... something that is theoretically possible, and has been done for simpler neural nets. (In theory you could do it with DeepSeek since they actually open-sourced their weights, the actual prediction engine.)

To my mind LLMs are more like Markov generators than anything else-- with of course the proviso that Markov generators only look at n words in a row, while LLMs in effect 'look at' your entire prompt (including your previous conversation with it). It's smarter than a Markov generator because it has unimaginably more training data, and more operating data.

People, especially AI boosters, love to compare them to people. But we shouldn't be throwing out science just because we found a cool toy. Science needs to make the minimum number of assumptions possible to explain a phenomenon. You can't just say "It does human-like things, so it's just like a human."

Till a century ago, we had no experience with things that can sound human but aren't. As a rule of thumb, something that can generate human language could only have been human-- which is basically what the Turing Test asserts. Our confidence should have been shaken by Markov generators and early AI (like Schank's story understanders), but those were not very convincing. LLMs are very convincing, enough to show that Turing was wrong. Humans are very easy to fool in this area. To put it as neutrally as possible, it's an open question what they are. I don't think they're human; as I've said before, if you really believe they are you should be demanding that Sam Altman be arrested as a slaveowner. But-- just as if you were investigating grey parrots-- you can't just assume they work like human brains, or assume that "intelligence" is a unitary concept that applies to humans, birds, octopuses, and collections of node weights.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 5:17 am
by sasasha
malloc wrote: Mon Jan 27, 2025 8:21 pm decent if not necessarily inspired
I have very little fear of, or excitement about, a tool that can produce decent, but not inspired, artistic outputs.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 5:47 am
by zompist
Ares Land wrote: Wed Jan 29, 2025 4:53 am I have a mixture of fascination and annoyance towards AGI. I'd love to understand more about how, say, ChatGPT works, how it models human language (if it does), and how it seems to know things, despite (as far as I know) merely producing human language. This could teach us a lot about the way language works, in fact even about what language is (evidently much more than a simple means of communication).
I always like a philosophical challenge. :)

As I mentioned in the SCK and as I've hinted here, I find Markov generators a little troubling. They are so simple that we can easily see how they function. Yet before you see one operate, I doubt you'd expect how well they can generate texts. An example from my page:
for example, the queen of hearts, carrying the king's crown on a certain rainy afternoon when this illusion seemed phenomenally strong, and i recall many futile struggles and attempts to scream.
This is pretty much correct grammar (I could show you examples from recorded conversations that are far sloppier), and it's quite understandable. It's hard to believe that all the generator knows is what words most often follow any given 3-word sequence. There's no model of the world here, and not even a model of language!

What we should learn from Markov is not that the program is smart, but that statistical imitation is far more powerful than intuition suggests. Get enough language data, and you can do a lot of language-like activity. To put it more strongly: you don't need a model to do a lot of language-like tasks. Get enough data and you can do things like translation, which no one would have expected in, say, 1990.

Does this teach us something about human cognition? Maybe, but probably not. You don't learn a foreign language by ingesting every text in that language ever posted on the Internet. The same problem can be solved in multiple ways. Raw statistics turns out to be a pretty good tool, but we have other, better tools.

At the same time, we can ask of any linguistic or philosophical model, do we really need this? Can we do it with stupid statistics instead? When we make models of the brain, we usually make them too rationalistic, e.g. based on rules and manipulation of strings or graphs, as if we were a single computer running procedural code. We can follow algorithms, but a lot of neural functioning probably looks very unlike an algorithm.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 6:47 am
by bradrn
zompist wrote: Wed Jan 29, 2025 5:14 am I think you're just redefining "prediction" here to make it sound like intelligence. It's not, nor does it imply a model.
Then I invite you to provide your own definition.
Flip a coin; I predict that 50% of the time it'll come up heads. […] Note that you don't need an intelligent system at all to make these predictions. You can in fact make predictions, with the precise statistical distribution, by using another 3d6-- three dead pieces of plastic.
I’ll note that the coin and dice examples are all Markov processes, so whatever applies to Markov generators applies to these situations as well.

On these ‘predictions’: I shouldn’t really use scare quotes because after all they are predictions, but they’re very limited ones. For me it’s not just about the accuracy of the predictions, but the fact that an intelligence can act as a predictor for many different systems. (More on this below.)
People, especially AI boosters, love to compare them to people. But we shouldn't be throwing out science just because we found a cool toy. Science needs to make the minimum number of assumptions possible to explain a phenomenon. You can't just say "It does human-like things, so it's just like a human."
The basic premise I work on is that ‘simulating intelligence’ and ‘being intelligent’ are one and the same thing. Something which can simulate intelligence sufficiently well must itself be intelligent. Trying to argue otherwise leads to incoherencies like the Chinese Room.

For this reason, I basically disagree with the last sentence here. I would say that something has human-level intelligence if it can do human-like things.

Mind you, I do agree that merely doing ‘human-like things’ isn’t enough to be considered ‘just like a human’. There are degrees of human-like-ness, and a text generator should only be considered ‘just like a human’ when it does indeed communicate ‘just like a human’. You seem to think this is a low bar; it really is not.

Do Markov chains write in a human-like way? Well… no. They get basic syntax right (which probably says something important about syntax), but not much more. Consider the example sentence you quote in the last post: it can barely stay on a single topic from the beginning to the end! Rather it drifts gradually from queens to crowns to rain to illusion to screaming, in a way which is not surprising at all considering that it can only look at the last three words. And if you make the context any longer, they just start repeating sentences verbatim from the source.

Do LLMs write in a human-like way? They’re much closer, yes. But they still make mistakes that no human would ever make, like the infamous glue pizza. There are subtler things too: for instance, it’s well-reported that they regularly veer between being weirdly agreeable and weirdly disagreeable, again in a non-human-like way. And, most damning of all, they are unable to truly learn after the training period: as soon as something goes out of the context window it’s forgotten. Current LLMs compensate by making the context window enormous, but that’s just working around the problem, not solving it.
To put it as neutrally as possible, it's an open question what they are. I don't think they're human; as I've said before, if you really believe they are you should be demanding that Sam Altman be arrested as a slaveowner. But-- just as if you were investigating grey parrots-- you can't just assume they work like human brains, or assume that "intelligence" is a unitary concept that applies to humans, birds, octopuses, and collections of node weights.
I don’t assume this at all. As described above, ‘intelligence’ for me is a function of observable behaviour; it says nothing about how they work internally.

What kind of observable behaviour, then, is ‘intelligence’? I do very strongly think it comes down to the ability of one system to predict the behaviour of another system. For LLMs (and also animals!), the other system of choice is invariably humans, and the prediction task is ‘behave like a human would if confronted by the same problem’. And this is a strong requirement, because humans can in turn do many different kinds of prediction task! We can do things all the way from catching a thrown ball to modelling other humans to solving Fermat’s Last Theorem. Not every human can do every thing, of course, but the range is clearly broader than anything else we currently know of.

It’s interesting to contemplate how these criteria could be applied to an alien species, where we have no expectation of human-like-ness in the first place. But, well, you’ve read Stanisław Lem; such a question may well be impossible to answer. We are forced to take humanity as our reference point because we have no other.

(As for Sam Altman being a slave-holder, I think that depends on consciousness and suffering more than intelligence. Those two properties do indeed depend on internals, which makes them much more difficult to talk about. For the record, I do not believe for a second that LLMs are conscious; whatever machinery yields their output, I don’t think that it’s the right sort for consciousness.)

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 7:18 am
by Ares Land
bradrn wrote: Wed Jan 29, 2025 5:02 am

On this, I really like Scott Alexander’s article on Why I Am Not (As Much Of) A Doomer (As Some People). It nicely lays out the argument for doomerism and its premises, and how those premises may or may not be satisfied. (The short version is that there are a lot of unknowns, and people’s intuitions on them vary wildly.)
Thanks, that was very interesting. I think Scott Alexander's optimism is too pessimistic for me!
bradrn wrote: Wed Jan 29, 2025 6:47 am
The basic premise I work on is that ‘simulating intelligence’ and ‘being intelligent’ are one and the same thing. Something which can simulate intelligence sufficiently well must itself be intelligent. Trying to argue otherwise leads to incoherencies like the Chinese Room.
Ehhh... I don't know about that. Or maybe we should be extra careful in determining definitions and acceptable values for 'simulate', 'intelligence', 'well' and 'sufficient'.
But ChatGPT did break the Turing test in an unexpected way. It wouldn't be too hard to cook up a situation where I'd mistake ChatGPT for a human being; and yet, it's not quite intelligent in the way Turing meant. OK, I know that's basically your point!
zompist wrote:Does this teach us something about human cognition? Maybe, but probably not. You don't learn a foreign language by ingesting every text in that language ever posted on the Internet. The same problem can be solved in multiple ways. Raw statistics turns out to be a pretty good tool, but we have other, better tools.

At the same time, we can ask of any linguistic or philosophical model, do we really need this? Can we do it with stupid statistics instead? When we make models of the brain, we usually make them too rationalistic, e.g. based on rules and manipulation of strings or graphs, as if we were a single computer running procedural code. We can follow algorithms, but a lot of neural functioning probably looks very unlike an algorithm.
A very fair point.

Additionally, my belief is that the brain is not like a computer at all. That's another view that is difficult to prove... but I'd argue the burden of proof is on the opposite side!
But maybe it's relevant that while we can follow an algorithm, we're typically not very good at it.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 8:24 am
by sasasha
bradrn wrote: Wed Jan 29, 2025 6:47 am I would say that something has human-level intelligence if it can do human-like things.

Mind you, I do agree that merely doing ‘human-like things’ isn’t enough to be considered ‘just like a human’. There are degrees of human-like-ness, and a text generator should only be considered ‘just like a human’ when it does indeed communicate ‘just like a human’. You seem to think this is a low bar; it really is not.
I just want to take this to basics and question the terms and premises here.

My main questions this prompts (generally, not just to you bradrn):
  • Can intelligence be measured in levels? That assumes it is quantifiable and stable enough to be compared. Can there be a meaningful comparison of the 'level of intelligence' of any two things?
  • Where are the cut-offs around the concept of 'human-like things'? Can this be a meaningful category?
  • When does something transition from 'like x' to 'just like x'?
  • Does it matter if people perceive a/differing degree(s) of likeness between y and x when one is considering the question 'is y the same as x', or in other words 'is it true that y=x?'?
Ultimately, wrt my last point, I am sure we can agree that y does not equal x if x and y are not the same. Imagine we list human and AI capabilities and produce a Venn diagram. It is obvious that there would be some capabilities of each that would not be shared by the other, and thus they are clearly not the same thing.

I know that this is entirely obvious, but this discussion seems to me to keep getting lost in woolly semantics which, frankly, obscure rather than illuminate. Perhaps the point is simply that AI is quite a lot like humans in some ways, and not very much like them in others, depending on your perspective.

The ideological romance of the Turing test has cast a long shadow. Perhaps in evaluating AI's usefulness and risks we ought simply to discuss what it can and can't do as a tool, rather than constantly interrogate its capacity to appear human?

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 8:40 am
by malloc
Ares Land wrote: Wed Jan 29, 2025 7:18 amAdditionally, my belief is that the brain is not like a computer at all. That's another view that is difficult to prove... but I'd argue the burden of proof is on the opposite side!
But maybe it's relevant that while we can follow an algorithm, we're typically not very good at it.
Perhaps the human brain is a computer, just a really shitty one. All the more reason to worry about competition from AI and computers which follow algorithms so much better than us.
sasasha wrote: Wed Jan 29, 2025 5:17 amI have very little fear of, or excitement about, a tool that can produce decent, but not inspired, artistic outputs.
That's good enough to put millions of writers and artists out of work. It may well be that the greatest artists can weather this development but what about neophytes and people outside the hallowed halls of avant-garde art? Should people who draw furry OCs on commission lose their jobs? There are plenty of artists whose work I enjoy and want to see prosper even though they aren't exactly revolutionizing the world of art with groundbreaking innovation.
zompist wrote: Wed Jan 29, 2025 1:05 amCan you name two or three of these that you've read? Also, [citation needed] on that "fractions of a second" claim.
I avoid AI content out of principle and only know about it because it has become omnipresent. Regarding the second point, it's well known that LLMs, like most computer programs, work nearly instantaneously. Certainly they aren't taking years to write one novel the way George R. R. Martin does. Perhaps not fractions of a second, then, but fast enough that human writers cannot realistically keep pace.
This is called begging the question-- that is, you're trying to support your claim by assuming it.

At this point LLMs make hallucinations that no human would make (e.g. putting glue on pizza) and fail on elementary tasks, like counting letters.

They are remarkable, but they are at root prediction machines, which are good at producing things that look like their training materials. This turns out to be very useful if, say, you want a lot of things that look like their training materials. There is no knob you can pull to keep them from hallucinating; it's inherent in what they are and how they do it.
Sure, but one could easily fix these problems by adding more components to correct for them. One could incorporate modules into the AI that take care of arithmetic or check facts. From what I understand, the oft-cited mistake of glue on pizza came from a reddit shitpost which the AI took at face value rather than hallucinating it. The problem is not hallucination in that case but rather inaccurate source data.
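
To sketch the idea (my own illustration, not any vendor's actual architecture): a wrapper could recognize arithmetic in a question and hand it to ordinary code instead of trusting the model's guess.

Code: Select all

import ast
import operator as op

# Tiny calculator module: safely evaluates +, -, *, / expressions.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(node):
    if isinstance(node, ast.Expression):
        return calc(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](calc(node.left), calc(node.right))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -calc(node.operand)
    raise ValueError("not simple arithmetic")

def answer(question, llm):
    # `llm` is a placeholder callable standing in for the text generator.
    try:
        tree = ast.parse(question.strip().rstrip("?"), mode="eval")
        return str(calc(tree))   # arithmetic goes to the reliable module
    except (SyntaxError, ValueError):
        return llm(question)     # everything else falls through to the model

print(answer("3 * 47 + 12", llm=lambda q: "(model output)"))  # prints 153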

Also it must be acknowledged that most humans suck at arithmetic and get facts wrong all the time. A plurality of Americans just voted a lunatic into the presidency based on delusions of cat-eating Haitians and tariffs lowering prices. If you gave any random question to ChatGPT and the average American, who would you expect to give a more accurate answer?

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 2:22 pm
by alice
bradrn wrote: Wed Jan 29, 2025 6:47 amDo Markov chains write in a human-like way? Well… no. They get basic syntax right (which probably says something important about syntax), but not much more. Consider the example sentence you quote in the last post: it can barely stay on a single topic from the beginning to the end! Rather it drifts gradually from queens to crowns to rain to illusion to screaming, in a way which is not surprising at all considering that it can only look at the last three words. And if you make the context any longer, they just start repeating sentences verbatim from the source.
None of which will prevent some academics somewhere from hailing it as "The New Literature", or something.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 3:26 pm
by sasasha
malloc wrote: Wed Jan 29, 2025 8:40 am
sasasha wrote: Wed Jan 29, 2025 5:17 amI have very little fear of, or excitement about, a tool that can produce decent, but not inspired, artistic outputs.
That's good enough to put millions of writers and artists out of work. It may well be that the greatest artists can weather this development but what about neophytes and people outside the hallowed halls of avant-garde art? Should people who draw furry OCs on commission lose their jobs? There are plenty of artists whose work I enjoy and want to see prosper even though they aren't exactly revolutionizing the world of art with groundbreaking innovation.
I may sound callous here, but many tools have obsoleted previous tools, or rather, made them curios rather than the core of a common profession. Yes, AI is a big tool (possibly in more ways than one). But human artistry will adapt to accommodate its possibilities, even if/as traditional opportunities erode. I'm afraid it's what happened to calligraphers, stonemasons, woodblock printers, etc., and it's just as complex: all of those crafts still exist.
malloc wrote:From what I understand, the oft-cited mistake of glue on pizza came from a reddit shitpost which the AI took at face value rather than hallucinating it. The problem is not hallucination in that case but rather inaccurate source data.
I recommend you read about AI, and use it yourself. Zompist is absolutely correct that AI hallucinates in this way.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 3:46 pm
by malloc
sasasha wrote: Wed Jan 29, 2025 3:26 pmI may sound callous here, but many tools have obsoleted previous tools, or rather, made them curios rather than the core of a common profession. Yes, AI is a big tool (possibly in more ways than one). But human artistry will adapt to accommodate its possibilities, even if/as traditional opportunities erode. I'm afraid it's what happened to calligraphers, stonemasons, woodblock printers, etc., and it's just as complex: all of those crafts still exist.
But why should humans have to make way for machines and give up something so important to our culture? We were here first and now we face demands to cede control of our culture to machines bankrolled by ultra-wealthy (and now openly fascist) plutocrats. Employment opportunities for humans, particularly those with dignity and good pay, are rapidly shrinking thanks to your machines. Years ago when I raised the problem of automation threatening humanity, the techies on this board assured me that only boring and undignified work would disappear. The rise of AI with its increasing dominance of art and literature has proven them utterly wrong and yet none of them has ever acknowledged I was right all along.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 4:55 pm
by sasasha
I wasn't going to deal with this, as I see it as trolling, and/or representative of a despair so deep I can't hope to alleviate it.

But I'm lacking a filter today, I'm afraid.
malloc wrote: Wed Jan 29, 2025 3:46 pm But why should humans have to make way for machines and give up something so important to our culture?

Machines are part of human culture. They do not exist outside of human culture. What you are afraid will supersede human behaviour is human behaviour itself: merely another type of human behaviour.
We were here first
Yes; we made the machines on purpose to help us survive and thrive, and you are using one to communicate your distress, alongside many other, no doubt, important uses. You sound like the UK government trying to send the Windrush generation back.
and now we face demands
Oh god, not demands! Please tell me where they are so I can protect myself appropriately from them!
to cede control of our culture to machines bankrolled by ultra-wealthy (and now openly fascist) plutocrats.
I am uncomfortable with capitalism too, but it's nothing new, and its tools aren't inherently evil just because capitalists use them.
Employment opportunities for humans, particularly those with dignity and good pay, are rapidly shrinking thanks to your machines.
I shall chastise my oven, boiler and coffee maker at the earliest opportunity. Their disenfranchisement of the sadly imaginary servants whom I would be able to employ in your utopia has gone too long unnoticed; said servants would no doubt love working mind-numbingly dignified shifts in my kitchen rather than doing something else meaningful over which they have autonomy and creative freedom.

I know you are upset about competition with creative artistic work, but why, for one thing, can't we cope with more art?
Years ago when I raised the problem of automation threatening humanity,
Standing ovation for bringing this to everyone's attention for the first time.
the techies on this board assured me that only boring and undignified work would disappear.
How mean of them to use their crystal ball so dishonestly.
The rise of AI with its increasing dominance of art and literature has proven them utterly wrong and yet none of them has ever acknowledged I was right all along.
Dominance? Sorry to drop the sass and go for headpounding instead, but WTAF? Can you point me to a single bookshop, art gallery or theatre in which AI-made artworks dominate over works created with traditional methods?

Good luck finding a more healthy way to think about this. You're going to need it, as AI isn't going anywhere (unless we're off to the Stone Age again soon). I'm sorry it frightens you but maybe you need to embrace it somehow?

Economies change. Jobs and opportunities come and go. I would have loved to have been a cartographer and make maps by hand, but that opportunity belongs to a bygone age, and yet modern maps are not evil, and I still enjoy making maps, and still have a livelihood.

There are still and will always be people out there interested in what human beings have to say to each other in art – real living ones, not just the dead voices of artists, or the AI outputs that can be extemporised from them.

Re: AIs gunning for our precious freelancers

Posted: Wed Jan 29, 2025 5:24 pm
by zompist
sasasha wrote: Wed Jan 29, 2025 8:24 am
bradrn wrote: Wed Jan 29, 2025 6:47 am I would say that something has human-level intelligence if it can do human-like things.
  • Can intelligence be measured in levels? That assumes it is quantifiable and stable enough to be compared. Can there be a meaningful comparison of the 'level of intelligence' of any two things?
  • Where are the cut-offs around the concept of 'human-like things'? Can this be a meaningful category?
  • When does something transition from 'like x' to 'just like x'?
  • Does it matter if people perceive a/differing degree(s) of likeness between y and x when one is considering the question 'is y the same as x', or in other words 'is it true that y=x?'?
These are all good questions, and they don't have obvious answers. For fun my answers would be:

1. No, intelligence is not a unitary concept, and it would be useless in philosophy— or neuroscience— if it were. To understand intelligence means to break it into smaller pieces and look for continua, not binary states.
2. All "human-like" can ever be is an impressionistic judgment that humans have of other things; it's not a scientific concept.
3. When the resemblance isn't restricted to a single viewpoint. E.g. Brad already admitted that the programs he wants to call "intelligent" are not "conscious". If we're talking human-like things, the rubber hits the road when we give them legal or moral rights. You can be prosecuted for killing someone's dog. You cannot (yet?) be prosecuted for turning off their LLM. There's something philosophically suspicious about an "intelligence" that excludes common sense, learning, accuracy, and morality.
4. Sure, and that leads to the question, who benefits from a particular set of perceptions? Unfortunately, declaring that deep learning models are "intelligent" seems to me to play into the hands of AI hypesters— it's a multibillion-dollar industry that doesn't need our cheerleading. (I don't think Brad is intending to do that, I think he's just fascinated by what AI can do. But AI looks like a huge oversold bubble, and I think we shouldn't be rushing to add to the hype.)

Two big points I'm trying to get across:

One, humans are easy to fool. An average person is very ready to think that ordinary computers are already intelligent. There are evolutionary reasons for this: human intelligence largely evolved to model... human intelligence. Animals have always had to evaluate their peers (to say nothing of their predators or prey), and as humans got smarter, our understanding of other humans had to keep up. In the ancestral environment, our human-understanding toolset was mostly used on other humans. But we easily apply it to natural forces, animals, deities and demons, and our own machines. Our simplest user model of everything is that it's a thinking being— "my car is being really stubborn today." Mostly harmless, but in science— and law, and even business— we're supposed to be more careful. I hope we're all agreed here that CEOs thinking that LLMs are "just like humans" is a foolish mistake: chatbots are not what they think they are, and they're degrading search, customer service, health services, and other things because of that mistake.

Two, there are multiple solutions to every problem. And that applies to problems that human brains solve with "intelligence", whatever that is. Going by external behavior alone, as Brad suggests, is just fooling ourselves.

This has been extensively discussed in animal ethology. As just one behavior, consider an animal navigating quickly from point A to point B. Higher mammals, such as most of us on the board, do this by direct observation. Shrews do it by memorization. They can stumble along in exploration, but this is slow and inefficient for them. Eventually they memorize the route and can speed along it. On a memorized route, they literally do not see what's in front of them: they will jump over nonexistent rocks (that have been moved) or fall to their deaths jumping into a pond that is no longer there.

From the human point of view that's kind of stupid. An ethologist's view though is that it's rather marvelous: the problem of quick navigation can be solved in at least two ways, by smart direct observation or by rote memorization. One of these is human-like, but biology doesn't make value judgments. Very likely the shrew's behavior is adaptive because it can save on brain size and sense perception. (They can see, but their methods don't require extremely fine resolution.)

Now we have computer programs doing stuff. I think "is this human-like" is the wrong question. I want to know, how do they do the task, and is that interestingly different from a human? E.g. chess programs are very good at chess— but there we know their methods, and they simply aren't the way humans do it. Rushing to call them "intelligent" means not bothering to understand what they do and what different methods can be used for the same task.