Surely the fact that "It says low-fat milk and whole milk; no one would do that" is followed by "oh, the real props do that" should cause everyone who thought the former to down-rank their own ability to tell what's AI and what's not. If they don't, then they aren't incorporating evidence about the world and about their own skill.
To have a separate conversation from everyone else (who is talking about whether it's real or AI), I think it's interesting to see people's epistemology. If you thought something was X because of A (i.e. you held P(X|A) > P(X), maybe much greater), then your estimate of P(X|A) should change in response to the evidence "it was X, but non-X things also have A", and I think the direction of that change should be obvious.
For those who don't do that, I should update the adjustment factor I apply to their claims of fact, and not in their favour.
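To make the direction of that update concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration, and the mapping (X = "the image is AI", A = the whole-milk/low-fat discordance) is my own gloss on the thread, not anything stated above.

```python
# Bayes' rule: P(X|A) = P(A|X) P(X) / (P(A|X) P(X) + P(A|not X) P(not X))
def p_x_given_a(prior_x, p_a_given_x, p_a_given_not_x):
    numerator = p_a_given_x * prior_x
    denominator = numerator + p_a_given_not_x * (1 - prior_x)
    return numerator / denominator

prior = 0.5  # assumed prior that a poster like this is AI-generated

# Before: you believe almost no real prop would mix whole milk and low-fat milk,
# so A looks like a strong tell.
print(p_x_given_a(prior, p_a_given_x=0.3, p_a_given_not_x=0.01))  # ~0.97

# After learning that real props do this too, P(A|not X) is no longer tiny,
# so the likelihood ratio shrinks and P(X|A) falls back toward the prior.
print(p_x_given_a(prior, p_a_given_x=0.3, p_a_given_not_x=0.2))   # ~0.60
```

In other words, evidence that non-X things also exhibit A weakens A as a discriminator, which is the direction the update should go.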
It's funny that this comment is also an example of a confidently wrong judgment. The props are 100% accurate to what's going on in the show. Not really possible to explain why without spoiling it, but the milk cartons are supposed to be suspicious.
His response is confusing to me as well. I didn't edit it that drastically. I moved one or two of the clauses around, but the fundamental thrust was: if you think it's an AI image because of the whole-milk/low-fat discordance, but real-world designs for the show also have that discordance, then you should consider that the discordance does not mean it's AI.
That would still be true if:
* it was indeed an AI-generated poster
* it was an AI-generated poster intentionally made to look that way
* it was a human-made poster accidentally made that way
* it was a human-made poster intentionally made to look that way
The truth of the show itself has no bearing on what I was saying. The only thing my point relies on is whether the real-world designs actually correspond to the poster image.
The reality is that humans suck at telling AI from human work. Sure, there are obvious tells for certain things, but if someone really tries, they can make AI-generated content indistinguishable from human-made content. You even see this on Twitter, where actual human artists are sometimes subjected to a modern-day witch hunt by others claiming their art is AI, and the artists literally have to prove that they made it themselves, sometimes by pulling up various stages of the drawing in progress (and what's even funnier is that Google's Nano Banana Pro can now generate that sort of progress-compilation image too).
My point was that the mistake didn't happen during prop creation. Those aren't milk cartons, those are HDP cartons, so the props are correct.
As to the content of your post: it doesn't make sense. Thinking something wasn't human-created, and then learning that the real explanation is that it wasn't created by a human within the show's fiction, is not a valid reason to stop using that cue as a discriminator between AI and human art. It's a Gettier case, but the J in JTB (justified true belief) still stands, and there's a reason grappling with the Gettier problem is so gnarly in epistemology.
How much evidence do we actually have that AI wasn't used for these "real props"?
(Personally I don't care about my ability to tell the difference between what's AI and what's not; I care about my ability to tell the difference between well-crafted and not, and that seems to be functioning fine)