
Re: AIs gunning for our precious freelancers

Posted: Mon Aug 05, 2024 3:56 pm
by zompist
rotting bones wrote: Mon Aug 05, 2024 3:23 pm Regarding this specific problem, it's totally possible to infer statistically likely details from contextual information.
"Statistically likely", yes. If you flip ten coins, it's statistically likely that they're half heads. What they actually are can only be determined by actually flipping them.
rotting bones wrote: But this is not the core of the issue. Arguments like this are motivated by the theological instinct that complexity cannot arise unless it was created, and information cannot be obtained unless it was given.
No, arguments like these are motivated by being interested in what is true. You don't even seem to understand the problem, or what your machines are actually doing.

Re: AIs gunning for our precious freelancers

Posted: Mon Aug 05, 2024 4:21 pm
by rotting bones
So weather forecasting is impossible now? How is that different from guessing the folds that led to a set of pixels?

---

A neural network is a technique that lets you approximate the constants in a system of equations that model your training data.

The details differ from system to system, and several interpretations are possible. Layers of linear equations with nonlinear activations are often interpreted as carving n-dimensional space into regions bounded by hyperplanes. Convolutions are typically applied to images to aggregate information from neighboring pixels. Transformers specifically have been interpreted as interacting systems of particles (https://arxiv.org/abs/2312.10794). And so on.
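If that sounds abstract, here is about the smallest possible sketch in plain numpy (layer sizes, learning rate and the target function are arbitrary choices for illustration): a one-hidden-layer network whose "constants" are two weight matrices and two bias vectors, nudged by gradient descent until the equations fit the training data.

Code: Select all

import numpy as np

rng = np.random.default_rng(0)

# Toy training data: x in [-1, 1], target is some smooth function of x.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X)                      # the "truth" the network should model

# The "constants": two weight matrices and two bias vectors.
W1 = rng.normal(0, 1, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 1, size=(16, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(2000):
    # Forward pass: linear map, nonlinear activation, linear map.
    h = np.tanh(X @ W1 + b1)           # hidden layer
    pred = h @ W2 + b2                 # network output
    err = pred - y                     # how far off we are

    # Backward pass: gradients of the squared-error loss w.r.t. each constant
    # (constant factors folded into the learning rate).
    n = len(X)
    dW2 = h.T @ err / n
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h**2)       # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh / n
    db1 = dh.mean(axis=0)

    # Nudge the constants to fit the training data a bit better.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean squared error:", float((err**2).mean()))

Everything bigger -- convolutions, transformers, billions of parameters -- is, at this level of description, an elaboration of the same loop.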

I don't understand why non-specialists think they have special insight into this field.

Re: AIs gunning for our precious freelancers

Posted: Mon Aug 05, 2024 4:43 pm
by zompist
rotting bones wrote: Mon Aug 05, 2024 4:21 pm So weather forecasting is impossible now?
Are you unable to read? Is it too hard to respond to the things I actually say, so you make up shit instead?
rotting bones wrote: How is that different from guessing the folds that led to a set of pixels?
I gave you several examples where statistical methods don't work. You ignore the facts you don't like and just pretend everything is weather reports. That's not rationalism, it's rationalization.

Re: AIs gunning for our precious freelancers

Posted: Mon Aug 05, 2024 4:57 pm
by rotting bones
I was responding to the last critical post:
FlamyobatRudki wrote: Mon Aug 05, 2024 9:00 am i'm pretty sure one can't get more data than one started with… by "enhancing" even if you increase the amount of information content.
Regarding your examples, it's not that statistical models "don't work". All models have biases that have to be gradually corrected. Even the Standard Model of physics is understood to be probably wrong in an absolute sense. On the other hand, newer models wouldn't turn Obama white, provided they were trained not to.

What you should be arguing is that before AI models are rushed into practical applications, democratic processes should be in place to ensure that society is comfortable living with the consequences, given how the models actually perform. For example, an AI model's prediction cannot be used in a situation that requires the verdict to be "beyond reasonable doubt". There is always some doubt in any statistical technique. On the other hand, AI models are good at being image filters for artistic purposes, rewriting text to be in specific formats, etc.

I don't understand why you keep accusing me of not reading your post. I listed this use case under "amazing applications". Under "useful applications", I mentioned cancer diagnosis. Doctors admit that those models are better at identifying malignant tumors than they themselves are.

Re: AIs gunning for our precious freelancers

Posted: Mon Aug 05, 2024 10:43 pm
by zompist
rotting bones wrote: Mon Aug 05, 2024 4:57 pm I don't understand why you keep accusing me of not reading your post.
Because you skip over the parts that refute your claims and make up shit I didn't say. Case in point:
rotting bones wrote: All models have biases that have to be gradually corrected. [...] On the other hand, newer models wouldn't turn Obama white, provided they were trained not to.
I explicitly addressed this in my original post; I said it wasn't a matter of adding more Obama to the training data. You skipped over the clear examples I gave of AI filling in details wrong, and you missed the point of the Obama example in exactly the way I warned against.
rotting bones wrote: On the other hand, AI models are good at image filters for artistic purposes [...]
Sure, and what does that have to do with your claim that hallucinated data is real? Again, I said in my post that AIs have uses. What magic process do I need to invoke to have a statistical chance of you reading my words?
zompist wrote: What AI can do, of course, is add plausible details that don't matter. That can be common enough -- e.g. upscaling images for a video game.
rotting bones wrote: For example, an AI model's prediction cannot be used in a situation that requires the verdict to be "beyond reasonable doubt".
Not putting people in jail because of hallucinated data would be a start, yes. There is an immense middle ground of cases where the AI is not putting someone in prison, but it's creating false data where we expect real data. E.g. should image processors be used to enhance historical photos? If it's for a stage backdrop or your conworld or desktop wallpaper or something, sure, go ahead. If it's for a museum that is supposed to depict what is actually there, no, because you do not have a Star Trek enhance button. You have a tool that makes photorealistic fakes based on large input sets. If all we have as actual data is a small photo, then too bad, all we have is a small photo.
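Here's a toy illustration of the information loss, for anyone who'd rather see it than take my word for it (plain numpy, made-up pixel values): two different "originals" that collapse to the exact same small image under simple 2x2 averaging. Whatever an enhance button hands back is one pick out of the countless originals consistent with that small photo, and nothing in the photo tells it which one is real.

Code: Select all

import numpy as np

def downscale(img):
    # 2x2 average pooling: the kind of information loss a small photo represents.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Two different "high-res originals" (4x4 grayscale, toy values)...
a = np.array([[10, 20, 30, 40],
              [20, 10, 40, 30],
              [50, 60, 70, 80],
              [60, 50, 80, 70]], dtype=float)

b = np.array([[ 0, 30, 35, 35],
              [30,  0, 35, 35],
              [55, 55, 75, 75],
              [55, 55, 75, 75]], dtype=float)

# ...that collapse to the exact same small image.
print(downscale(a))                                  # 15, 35 / 55, 75
print(downscale(b))                                  # same 2x2 values
print(np.array_equal(downscale(a), downscale(b)))    # True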

Re: AIs gunning for our precious freelancers

Posted: Tue Aug 06, 2024 10:28 am
by Travis B.
The matter here is that on one hand AI can't add information that isn't there -- that's physically impossible -- but on the other hand AI can analyze data in such a fashion as to perceive things that normal humans cannot -- like the example of cancer diagnoses. In the former case, things like AI turning Obama White are an effect of AI not being able to do anything beyond what it is trained for and with; if your images of US presidents are all White, garbage in, garbage out. In the latter case, though, if you train your AI on extensive training sets including data tagged as non-cancerous and data tagged as cancerous, you can potentially get an AI that diagnoses cancer better than a human can, as your training sets can be far larger than the number of patients an oncologist might see in a career.
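To make the "tagged training data" point concrete (this is nowhere near a real radiology model, just the shape of the workflow), here's a minimal sketch using the small Wisconsin breast-cancer dataset that ships with scikit-learn:

Code: Select all

# Supervised training on tagged data: labeled examples in, a classifier out.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # measurements + benign/malignant tags

# Hold some cases out so we can check the model on data it never saw.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A simple stand-in for a real diagnostic model.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)                  # "training" = fitting to the tagged cases

print("accuracy on held-out cases:", model.score(X_test, y_test))

The held-out cases stand in for patients the model has never seen; the tags in the training set are where all of its "knowledge" comes from.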