Today’s post by BC Law professor and associate dean for academic affairs Daniel Lyons originally appeared on the American Enterprise Institute (AEI) AEIdeas blog. You can view the post here.
By Daniel Lyons
It was probably inevitable that the artificial intelligence (AI) discourse would eventually turn to virtual pornography. Earlier this week, CBS News noted that increasingly sophisticated AI editing programs can exacerbate the problem of “deepfake” porn: images and videos digitally altered so that the people depicted appear to be someone else. This article came on the heels of a Twitter discussion, prompted by Matty Yglesias, about whether AI-generated pornography could disrupt the adult industry by removing the need for real people to be involved.
But underlying this discussion is an even more frightening concern: the prospect of virtual child sexual abuse material (CSAM). (Hat tip to Kate Klonick.) It may surprise you that Congress was way ahead of the curve on this issue: A quarter-century ago, it banned so-called virtual child pornography, computer-generated imagery designed to look like CSAM. It may further surprise you that the Supreme Court struck down this law as unconstitutional. But the evolution of technology in the decades since suggests that it is time to revisit this problematic decision.