rotting bones wrote: ↑Mon Aug 05, 2024 4:57 pm
I don't understand why you keep accusing me of not reading your post.
Because you skip over the parts that refute your claims and make up shit I didn't say. Case in point:
All models have biases that have to be gradually corrected. [...] On the other hand, newer models wouldn't turn Obama white if that is how the model was trained.
I explicitly addressed this in my original post; I said it wasn't a matter of adding more Obama to the training data. You skipped over the clear examples I provided of AI filling in details wrong, and you missed the point of the Obama example in exactly the way I warned against.
On the other hand, AI models are good at image filters for artistic purposes [...]
Sure, and what does that have to do with your claim that hallucinated data is real? Again, I said in my post that AIs have uses. What magic process do I need to invoke to have a statistical chance of you reading my words?
zompist wrote:What AI can do, of course, is add plausible details that don't matter. That can be common enough-- e.g. upscaling images for a video game.
rotting bones wrote:For example, an AI model's prediction cannot be used in a situation that requires the verdict to be "beyond reasonable doubt".
Not putting people in jail because of hallucinated data would be a start, yes. There is an immense middle ground of cases where the AI is not putting someone in prison, but it's creating false data where we expect real data. E.g. should image processors be used to enhance historical photos? If it's for a stage backdrop or your conworld or desktop wallpaper or something, sure, go ahead. If it's for a museum that is supposed to depict what is actually there, no, because
you do not have a Star Trek enhance button. You have a tool that makes photorealistic fakes based on large input sets. If all we have as actual data is a small photo, then too bad, all we have is a small photo.
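The point that the small photo is all you have can be sketched in a few lines of plain Python (a hypothetical toy example, not any real upscaler): downsampling is many-to-one, so the small image no longer determines the original, and anything an "enhancer" adds back is a guess.

```python
# Toy illustration (hypothetical, plain Python): two different 4x4
# "originals" shrink to the same 2x2 image, so no upscaler can tell
# them apart from the small image alone.

def downsample(img):
    """Average each 2x2 block of a 4x4 grayscale image into one pixel."""
    return [
        [(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) // 4
         for c in (0, 2)]
        for r in (0, 2)
    ]

a = [[0, 40, 0, 0],
     [40, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]

b = [[20, 20, 0, 0],
     [20, 20, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]

assert a != b                              # genuinely different originals
assert downsample(a) == downsample(b)      # identical small photos
# Any model shown only the small image must guess which original it came
# from; a trained model guesses the statistically plausible one -- a
# photorealistic fake, not recovered data.
```

That's all the "enhance" button is doing at scale: picking a plausible preimage, not recovering the real one.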