NBC News

Possession of AI-generated child sexual abuse imagery may be protected by First Amendment in some cases, judge rules



Federal prosecutors are appealing a judge’s ruling in Wisconsin that possessing child sexual abuse material created by artificial intelligence is, in some situations, protected by the Constitution.

The order and the subsequent appeal could have major implications for the future legal treatment of AI-generated child sexual abuse material, or CSAM, which has been a top concern among child safety advocates and has become a subject of at least two prosecutions in the last year. If higher courts uphold the decision, it could cut prosecutors off from successfully charging some people with the private possession of AI-generated CSAM.

The case centers on Steven Anderegg, 42, of Holmen, Wisconsin, whom the Justice Department charged in May with “producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16.”

Prosecutors alleged that he used an AI image generator called Stable Diffusion to create over 13,000 images depicting child sexual abuse, entering text prompts that produced fake images of children who do not exist. (Some AI systems are also used to create explicit images of real, identifiable people, but prosecutors do not claim that is what Anderegg was doing.)

In February, in response to Anderegg’s motion to dismiss the charges, U.S. District Judge James D. Peterson allowed three of the charges to move forward but threw one out, saying the First Amendment protects the possession of “virtual child pornography” in one’s home. On March 3, prosecutors appealed. 

In the decision, Peterson denied Anderegg’s request to dismiss charges of distribution of an obscene image of a minor, transfer of obscene matter to a person under 16 and production of an image of a minor engaging in sexually explicit conduct. 

Anderegg’s lawyer did not respond to a request for comment. The Justice Department declined to comment.

Many AI platforms have tried to prevent their tools from being used in creating such content, but some safety guardrails can easily be modified or removed, and a July study from the Internet Watch Foundation found that the amount of AI-generated CSAM posted online is increasing.

The Justice Department alleged in a news release in May that Anderegg described his image-making process in a chat with a 15-year-old boy and sent the images to the teenager. Law enforcement was alerted to Anderegg after Instagram reported his account to the National Center for Missing & Exploited Children, the release said.

The Justice Department has argued that the 2003 Protect Act, though it does not mention AI specifically, criminalizes AI-generated CSAM by banning “obscene visual representations of the sexual abuse of children.”

Peterson referred to a 1969 Supreme Court ruling, Stanley v. Georgia, which said private possession of obscene material in one’s own home cannot be made a crime.

That ruling has not traditionally been applied to cases involving CSAM depicting real children, which have typically been tried under a different set of laws addressing the sexual exploitation of minors, such as those banning the transport or sale of CSAM.


