The AI Revolution Raises Terrifying Questions about Virtual Child Pornography

Today’s post by BC Law professor and associate dean for academic affairs Daniel Lyons originally appeared on the American Enterprise Institute (AEI) AEIdeas blog. You can view the post here.


By Daniel Lyons

It was probably inevitable that the artificial intelligence (AI) discourse would eventually turn to virtual pornography. Earlier this week, CBS News noted that increasingly sophisticated AI editing programs can exacerbate the problem of “deepfake” porn: images and videos digitally altered to appear to depict someone else. This article came on the heels of a Twitter discussion prompted by Matt Yglesias about whether AI-generated pornography could disrupt the adult industry by removing the need for real people to be involved.

But underlying this discussion is an even more frightening concern: the prospect of virtual child sexual abuse material (CSAM). (Hat tip to Kate Klonick.) It may surprise you that Congress was way ahead of the curve on this issue: A quarter-century ago, it banned so-called virtual child pornography, computer-generated imagery designed to look like CSAM. It may further surprise you that the Supreme Court struck down this law as unconstitutional. But the evolution of technology in the decades since suggests that it is time to revisit this problematic decision.

Virtual CSAM exists at the outer boundary of the First Amendment. The Court has explained that the First Amendment does not protect obscenity. To determine whether particular content is obscene, the Court uses the three-part Miller test:

  1. Whether the average person, applying contemporary adult community standards, finds that the matter, taken as a whole, appeals to the prurient interest;
  2. Whether the average person, applying contemporary adult community standards, finds that the matter depicts or describes sexual conduct in a patently offensive way; and
  3. Whether a reasonable person finds that the matter, taken as a whole, lacks serious literary, artistic, political, or scientific value.

Because obscenity is not protected speech, Congress can prohibit—and has prohibited—trafficking in obscene materials. But the obscenity standards are not self-defining: Not all pornography is obscene, and the definition can vary from community to community. To protect children from this uncertainty, Congress separately prohibited making or possessing “any visual depiction of sexually explicit conduct involving a minor,” whether or not the material meets the Miller test. In New York v. Ferber, the Court held that CSAM falls similarly outside the First Amendment, even if it is not obscene, because its production necessarily involves harm to a minor.

In 1996, Congress expanded the definition of child pornography to include sexually explicit images that appear to depict minors but were produced without children, including computer-generated images that appear to be CSAM, which the Court described as “virtual pornography.” Among other rationales, Congress explained that pedophiles might use virtual CSAM to groom child victims and virtual CSAM could help drive demand for real CSAM and increase sexual abuse of minors. But in Ashcroft v. Free Speech Coalition, the Court rejected these arguments. It explained that Ferber upheld the ban only on non-obscene CSAM because of the harm to children depicted in the materials, harm that does not exist in virtual CSAM. And the prospect of future crime does not alone justify laws suppressing free speech. The Court declined to create a new First Amendment exception for non-obscene virtual CSAM.

But Congress’s final rationale explains why this case is worth re-evaluating today. The government argued that legalizing virtual CSAM makes it harder for prosecutors to try CSAM cases, because the defendant could argue that the images in question were computer-generated and thus create reasonable doubt in the minds of the jury. The three-justice dissent found this persuasive. And while Justice Clarence Thomas voted with the majority, he noted that

while this speculative interest cannot support the broad reach of the [Act], technology may evolve to the point where it becomes impossible to enforce actual child pornography laws because the Government cannot prove that certain pornographic images are of real children. In the event this occurs, the Government should not be foreclosed from enacting a regulation of virtual child pornography. [emphasis added]

In the 1990s, digital imagery was poor, and the government could not cite a case where this defense was successful. With the AI revolution in full swing, we are rapidly approaching a time when software can produce sophisticated virtual CSAM that is indistinguishable from the real thing, which means that Justice Thomas’s prediction will likely soon come true—if it has not already.

It may seem odd that technological change can move the constitutional boundary. But those who work in internet law know this is not as uncommon as one might expect. My sense is that Ashcroft was wrong when it was decided. But even if defensible at the time, it has aged poorly in the decades since. Congress should consider reenacting the ban on virtual CSAM. Allowing trafficking in imagery virtually indistinguishable from CSAM harms society and jeopardizes the law’s ability to protect children from sexual abuse.


Daniel Lyons is a BC Law professor and currently serves as the school’s associate dean for academic affairs. He posts frequently on the AEIdeas blog. Contact him at daniel.lyons.2@bc.edu.

Featured image via Reuters/AEI
