A new report warns that the proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos.
I would say one of the main dangers is that it makes actual child abuse content more difficult to track, as some AI imagery is nearly indistinguishable from real photos. So in a sea of AI-generated CSAM it becomes much easier for abusers to blend in their very real abuse content.
It’s gross either way, and the social acceptance of sexualizing minors should be met with strong resistance; even if the tools used to sexualize children don’t necessarily involve direct abuse themselves, they facilitate the culture of abuse.
How often does tracking child abuse imagery lead to preventing actual child abuse? Out of all the children who are abused each year, what percentage of their abusers are tracked via online imagery? Aren’t a lot of these cases IRL/situationally based? That’s what I’m trying to determine here. Is this even a good use of public resources and/or focus?
As for how you personally feel about the imagery: I believe a lot of things humans do are gross, but I don’t believe we should arbitrarily create laws to restrict things others do that I find appalling… unless there’s a very good reason to. It’s extremely dangerous to go flying too fast down that road; with anything related to “terror/security” or “for the children” we need to be especially careful. We don’t need another case of “Well, in hindsight, that [war on whatever] was a terrible idea and hurt lots and lots of people.”
And let’s be absolutely clear here: I 100% believe that people abusing children is fucked up, and the fact that I even need to add this disclaimer here should be a red flag about the dangers of how this issue is structured.
I believe that images are important to investigation because they help identify the children being abused. When that’s mixed in with a bunch of AI pedophile stuff, it obfuscates that avenue of investigation and hampers those efforts, which are 100% more important than anyone’s need to get off to pedophilic AI imagery.
Online investigation in general has been a successful avenue in the recent past.
If there was a chance of saving even one child but it meant that no one could see AI images of sexualized children then those would be completely acceptable terms to me.
I would hold there’s zero downside to outlawing the production of AI CSAM. There’s no indication that letting pedophiles indulge in “safe” forms of pedophilic activity stops them from abusing children. It’s not a form of speech or expression with any value. If we as a society are going to say we’re against abuse of children then that needs to include being against the cultivation and networking of abusive culture and people. I see no real slippery slope in this regard.
Okay… So correct me if I’m wrong, but being abused as a child is like… one of the biggest predictors of becoming a pedophile. So like… Should we preemptively go after these people? You know… To protect the kids?
How about single parents who expose their kids to strangers when dating? That’s a massive vector for child abuse.
What on earth? Just don’t sexualize children or normalize sexualizing children. Denying pedophiles access to pedophilic imagery is not some complex moral quandary.
Why on earth am I getting so much pushback on this point, on Beehaw of all places…
Wondering the same thing.
Because they’re computer-generated images, not children.
It already is outlawed in the US. The US bans all depictions precisely because of this. The courts anticipated that there would come a time when people could create images indistinguishable from reality, so they declined to permit any such content to be produced.
I appreciate you posting the link to my question, but that’s an article written from the perspective of law enforcement. They’re an authority, so they’re incentivized to manipulate facts and deceive to gain more authority. Sorry if I don’t trust law enforcement, but they’ve proven themselves untrustworthy at this point.
Of all the problems and challenges with this idea, this is probably the easiest to solve technologically. If we assume that AI-generated material is given the ok to be produced, the AI generators would need to (and easily can, and arguably already should) embed a watermark (visible or not) or digital signature. This would prevent actual photos from being presented as AI. It may be possible to remove these markers, but the reasons to do so are very limited in this scenario.
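To make the watermark idea concrete, here is a minimal toy sketch of how an invisible marker could be hidden in an image’s least-significant pixel bits. Everything here is illustrative: the `TAG` value is hypothetical, and real provenance schemes (robust watermarks, signed metadata standards like C2PA) are far more sophisticated and harder to strip than this.

```python
# Toy invisible watermark: hide a short tag in the least-significant
# bit of each raw pixel byte. Illustrative only; real watermarking
# schemes are designed to survive re-encoding and editing.

TAG = b"AI"  # hypothetical marker identifying AI-generated output

def embed(pixels: bytes, tag: bytes = TAG) -> bytes:
    # Unpack the tag into individual bits, LSB-first per byte.
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit only
    return bytes(out)

def extract(pixels: bytes, n: int = len(TAG)) -> bytes:
    # Reassemble n bytes from the lowest bit of each pixel byte.
    out = bytearray()
    for b in range(n):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)
```

Since only the lowest bit of each byte changes, the marked image is visually identical to the original, which is exactly why naive schemes like this are also easy to erase.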
This wouldn’t disrupt the pattern of pedophiles forming communities though, which is where a lot of the abuse begins to happen; as pedophiles network with one another and affirm and normalize each other’s compulsion towards abuse, it emboldens them to act on those desires. It doesn’t matter if a site is full of AI imagery; it has the same effect of allowing these communities to form.
There is no value in AI CSAM. And yes, AI content should be watermarked, but there’s no justifiable reason to allow the sexualization of children, whether through real photos or AI ones.
I was actually specifically avoiding all of those concerns in my reply. They’re valid, and others are discussing them on this thread, just not what my reply was about.
I was exclusively talking about how to identify if an image was generated by AI or was a real photo.
These images are being created with open source / free models. Whatever watermark feature the open source code has will simply be removed by the criminal.
Watermarking is like a lock on a door. Keeps honest people honest… which is useful, but it’s not going to stop any real criminals.
In this specific scenario, you wouldn’t want to remove the watermark.
The watermark would be the only thing that defines the content as “harmless” AI-generated content, which for the sake of discussion is being presented as legal. Remove the watermark, and as far as the law knows, you’re in possession of real CSAM and you’re on the way to prison.
The real concern would be adding the watermark to the real thing, to let it slip through the cracks. However, not only would this be computationally expensive if it was properly implemented, but I would assume the goal in marketing the real thing could only be to sell it to the worst of the worst, people who get off on the fact that children were abused to create it. And in that case, if AI is indistinguishable from the real thing, how do you sell criminal content if everyone thinks it’s fake?
Anyways, I agree with other commenters that this entire can of worms should be left tightly shut. We don’t need to encourage pedophilia in any way. “Regular” porn has experienced selection pressure to the point where taboo is now mainstream. We don’t need to create a new market for bored porn viewers looking for something shocking.
The real concern would be adding the watermark to the real thing, to let it slip through the cracks. However, not only would this be computationally expensive if it was properly implemented,
It wouldn’t be expensive, you could do it on a laptop in a few seconds.
Unless, of course, we decide only large corporations should be allowed to generate images and completely outlaw all of the open source / free image generation software - that’s not going to happen.
Most images are created with a “diffusion” model where you take an image, and run an algorithm that slightly modifies it. Over and over and over until you get what you want. You don’t have to (and commonly don’t - for the best results) start with a blank image. And you can run just a single pass, with the output being almost indistinguishable from the input.
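The many-small-passes structure described above can be illustrated with a toy loop, with no ML involved: each pass nudges an “image” (here just a list of numbers) slightly, so a single pass barely changes the input while many passes transform it. This is only an analogy; actual diffusion models run a neural network that denoises at each step, and `strength` here is an invented toy parameter.

```python
# Toy stand-in for img2img-style iterative generation. One pass makes
# a small change; repeated passes accumulate into a large one. Real
# diffusion models denoise with a neural network at each step.

def one_pass(image, target, strength=0.05):
    # Nudge every value a small fraction of the way toward the target.
    return [p + strength * (t - p) for p, t in zip(image, target)]

def generate(image, target, steps):
    for _ in range(steps):
        image = one_pass(image, target)
    return image
```

With `steps=1` the output stays almost identical to the input, mirroring the point that a single pass is nearly indistinguishable from the source image; with `steps=100` the output has essentially become the target.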
This is a hard problem to solve and I think catching abuse after it happens is increasingly going to be more difficult. Better to focus on stopping the abuse from happening in the first place. E.g. by flagging and investigating questionable behaviour by kids in schools. That approach is proven and works well.
The image generation can be cheap, but I was imagining this sort of watermark wouldn’t be so much a visible part of the image as an embedded signature over a hash of the image.
Require enough PoW to generate the signature, and this would at least cut down the volumes of images created, and possibly limit them to groups or businesses with clusters that could be monitored, without clamping down on image generation in general.
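A rough sketch of that scheme: hash the image bytes, require a hashcash-style proof-of-work search before a signature can be minted, then sign the result. Assumptions are labeled in the code: the `DIFFICULTY` and `KEY` values are arbitrary placeholders, and an HMAC with a shared key stands in for the asymmetric (public-key) signature a real deployment would use.

```python
import hashlib
import hmac
import itertools

DIFFICULTY = 12  # leading zero bits required; arbitrary toy value
KEY = b"generator-secret"  # stand-in for a real asymmetric signing key

def leading_zero_bits(digest: bytes) -> int:
    # Count how many bits of the digest are zero before the first 1 bit.
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        return bits + (8 - byte.bit_length())
    return bits

def mint_signature(image: bytes):
    """Search for a nonce whose hash clears the difficulty, then sign.
    The search cost is what throttles bulk image generation."""
    h = hashlib.sha256(image).digest()
    for nonce in itertools.count():
        msg = h + nonce.to_bytes(8, "big")
        if leading_zero_bits(hashlib.sha256(msg).digest()) >= DIFFICULTY:
            return nonce, hmac.new(KEY, msg, hashlib.sha256).digest()

def verify(image: bytes, nonce: int, sig: bytes) -> bool:
    # Recompute the hash, check the proof-of-work, then check the signature.
    msg = hashlib.sha256(image).digest() + nonce.to_bytes(8, "big")
    return (leading_zero_bits(hashlib.sha256(msg).digest()) >= DIFFICULTY
            and hmac.compare_digest(sig, hmac.new(KEY, msg, hashlib.sha256).digest()))
```

Because the signature covers the image hash, altering even one pixel invalidates it, and raising `DIFFICULTY` exponentially increases the compute needed per signed image.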
A modified version of what you mentioned could work too, but where just these specific images have to be vetted and signed by a central authority using a private key. Image generation software wouldn’t be restricted for general purposes, but no signature on suspicious content and it’s off to jail.