People posting random cute candids of their family and pets is about the most commonplace type of social media post there is. You should be getting angry at the weird pervs sexualizing the images (and the giant AI company enabling it).
Twitter isn't just generating the images; it's also posting them. Hey, now in the replies below your child's Twitter post there's a photo of them in a skimpy swimsuit. They see it. Their friends see it.
This isn't just somebody beating off in private. This is a public image that humiliates people.
Speaking in the abstract: There are arguments that fictional / drawn CSAM (such as lolicon) lowers the rates of child sex abuse by giving pedophiles an outlet. There are also arguments that consuming fictional / drawn CSAM is the start of an escalating pattern that leads to real sex abuse, as well as contributing to a culture that is more permissive of pedophilia.
Anecdotally speaking, especially as someone who was groomed online as a child, I am more inclined toward the latter argument. I believe fictional CSAM harms people and generated CSAM will too.
With generated images being more realistic, and with AI 'girlfriends' advertised as a woman who "can't say no" or as "her body, your choice", I am inclined to believe that the harms from this will be novel and possibly greater than existing drawn CSAM.
Speaking concretely: Grok is being used to generate revenge porn by editing real images of real children. These children are direct, unambiguous victims. There is no grey area where this can be interpreted as a victimless crime. Further, these models are universally trained with real CSAM in the training data.
I understand where you're coming from, and I'll play devil's advocate to the devil's advocate: If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on? And if those algorithms are modifying images of actual children, wouldn't you consider those victims?
I strongly sympathize with the idea that crimes should by definition have identifiable victims. But sometimes the devil doesn't really need an advocate.
Consider that every image generation model out there censors your prompts and outputs even though its makers try their best not to train on CSAM: you don't need CSAM in the training data for the model to be capable of generating it.
Not saying the models don't get trained on CSAM. But I don't think it's a foregone conclusion that AI models capable of generating CSAM necessarily victimize anyone.
It would be nice if someone could research this, but the current climate makes it impossible.
When you indiscriminately scrape literally billions of images, and excuse yourself from rigorously reviewing them because it would be too hard or expensive, horrible and illegal material is bound to end up in there.
That's probably incidental, horrible as it is. Models don't need training data for everything imaginable, just enough concepts to combine, and there's enough imagery of children's bodies (including non-sexual nudity) and enough porn to generate a combination of the two, the same way a model can produce a hybrid giraffe-shark-clown on a tricycle despite never having seen one in its training data.
The biggest issue here is not that models can generate this imagery, but that Musk's Twitter is enabling it at scale with no guardrails, including spamming the results onto other people's photos.
Yep, when my kid was taking selfies with my phone and playing with Google Photos, I appreciated that Google didn't let any Gemini AI manipulation of any kind occur, even if whatever they were trying to do was harmless. Seemed very strict when it detected a child. Grok should probably do that.
>If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on?
Pretty sure these models can generate images that do not exist in their training data. If I generate a picture of a surfing dachshund, did it have to train on canine surfers?
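For what it's worth, this is easy to check for yourself. Below is a minimal sketch using the Hugging Face diffusers library (the checkpoint name is only an illustrative assumption; any general-purpose text-to-image model would do): the model renders a composite prompt it has almost certainly never seen as a whole, by combining concepts it learned separately.

    # Minimal sketch, assuming the `diffusers` library and an
    # off-the-shelf Stable Diffusion checkpoint (the checkpoint
    # name here is illustrative, not a recommendation).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # A composite concept that almost certainly never appears
    # verbatim in the training set: the model combines what it
    # learned about dachshunds with what it learned about surfing.
    image = pipe("a dachshund surfing a wave, photo").images[0]
    image.save("surfing_dachshund.png")

The point isn't the dog; it's that composition of separately learned concepts, not memorised examples, is what produces an image like this.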
I'm not sure if there's been much discussion of it, but it does make you wonder: would this AI-generated CSAM sate an abuser's urges, or would it spread the idea that it isn't bad and possibly create more abusers who then go on to abuse real children? Would those individuals have done it without the AI? I believe there's still debate over whether abuse is a result of nature or nurture, but that gets into theory and philosophy. To answer your question about who the victim is: I would say the children those images are based on, as well as any future children harmed by exposure to these images or by abusers seeking out real content. I think for the most part AI-generated porn hurts everyone involved.
There are definitely at least some people who will be influenced by repeated exposure to such images; we know the usual conditioning mechanisms work (e.g. one type of image mixed in with other sexual content). On the other hand, I remember someone on HN claiming their own images are out there in CSAM collections, and that they'd prefer people use those if it stops anyone from hurting others.
The need to fight CSAM also provides a pretext for broader censorship. Look at all the people in this thread salivating over the prospect of using Grok generations to take down Musk, whom they hate for allowing people to express wrongthink on X. If they ever regain broad censorship powers over AI or people, they definitely won't stop at blocking CSAM.
Lots of research has been done on this topic. You say "let some science happen", and then two paragraphs later say "according to the research": so has research taken place or hasn't it? (Last time I looked into this, I came away with the impression that most people considered to be pædophiles are not exclusively attracted to children: I reject your claim that the "no choice" claim is evidenced, and encourage you to show us the research you claim to have.)
I don't think you're engaging with this topic in good faith.
> Whether it is exclusive or not is not really relevant to the point.
Whether it's exclusive or not is very relevant to the point, because sexual fetishes and paraphilias are largely mutable. In much the same way that a bi woman can swear off men after a few bad experiences, or a monogamous person in a committed relationship can avoid lusting after other people they'd otherwise find attractive, someone with non-child sexual interests can avoid centring children in their sexuality, and thereby avoid developing further sexual interests related to children. (Note that operant conditioning, sometimes called "conversion therapy" in this context, does not achieve these outcomes.) I imagine it's not quite so easy for people exclusively sexually-attracted to children (though note that one's belief about their sexuality is not necessarily the same as one's actual sexuality – to the extent that "actual sexuality" is a meaningful notion).
> Can you link me to research on how AI generated CSAM consumption affects offending rates?
No, because "AI-generated" hasn't been a thing for long enough that I'd expect good research on the topic. However, there's no particular reason to believe it'd be different to consumption of similar material of other provenance.
It's a while since I researched this, but I've found you a student paper on this subject: https://openjournals.maastrichtuniversity.nl/Marble/article/.... This student has put more work into performing a literature review for their coursework than I'm willing to do for a HN comment. However, skimming the citations, I recognise some of these names as cranks (e.g. Ray Blanchard), and some papers seem to describe research based on the pseudoscientific theories of Sigmund Freud (another crank). Take this all with a large pinch of salt.
> For instance, virtual child pornography can cause a general decline in sexual child abuse, but the possibility still remains that in some cases it could lead to practicing behavior.
I remember reading research about the circumstances under which there is a positive relationship, which obviously didn't turn up in this student's literature review. My recent searches have been using the same sorts of keywords as this student, so I don't expect to find that research again any time soon.
None of the services that deal with actual research-paper discovery and distribution block this topic. Don't expect AI to make up answers; start digging through https://www.connectedpapers.com/ or something similar.
I think this primarily victimizes all those already victimized by the CSAM in the training material, and it also generally offends our society's collective sense of morality.
Simplistically and ignorantly speaking, if a diffusion model knows what a child looks like and also knows what an adult woman in a bikini looks like, couldn't it just merge the two together to create a child in a bikini? It seems to do that with other things (e.g. a pelican riding a bicycle).
In principle yes, but in practice no: the models don't just learn the abstract space, but also memorise individual people's likenesses. The "child" concept contains little clusters for each actual child who appeared enough times in the dataset. If you tried to do this, the model would produce sexualised imagery of those specific children with distressing regularity.
There are ways to select a specific point or region in latent space for a diffusion model to work towards. If properly chosen, this can have it avoid specific people's likenesses, and even generate likenesses outside the domain of the latent space (which tend to have severe artefacts). However, text prompting doesn't do that, even if the prompt explicitly instructs it to: text-to-image prompts aren't instructions. A system like Grok will always exhibit the behaviour I described in my previous (GP) comment.
As I mentioned in another comment (https://news.ycombinator.com/item?id=46503866), there are other reasons not to produce synthetic sexualised imagery of children, which I'm not qualified to talk about: and I feel this topic is too sensitive for my usual disclaimered uninformed pontificating.
It's been reported that Grok has generated CSAM by editing photos of real children, so there's your real victim, though you shouldn't need one to find this situation abominable.
This is a big, sensitive topic. Last time I researched it, I was surprised at how many things I had assumed were just moralistic hand-wringing are actually well-evidenced interventions. Considering my ignorance, I will not write a lengthy response, as I am wont to.
I will, instead, speak to what I know. Many models are heavily overfit on actual people's likenesses. Human artists can select non-existent people from the space of possible visages. These kinds of generative models have a latent space, many points of which do not correspond to real people. However, diffusion models working from text prompts are heavily biased towards reproducing examples resembling their training set, in a way that no prompting can counteract. Real people will end up depicted in AI-generated CSAE imagery, in a way that human artists can avoid.
There are problems with entirely-fictional human-made depictions of child sexual exploitation (which I'm not discussing here), and AI-generated CSAE imagery is at least as bad as that.