This guest post by BC Law Professor and Associate Dean of Academic Affairs Daniel Lyons first appeared on the AEIdeas blog.
The most important free speech question of the decade may not be about social media. It may be about chatbots. As generative AI reshapes how people communicate, courts and legislators must confront whether and how the First Amendment protects AI outputs. Last year, the first court to face this question punted, explaining at the motion to dismiss stage that it was “not prepared” yet to hold that a large language model’s output is speech. That case settled without a definitive answer. But the question won’t stay dormant, and First Amendment principles compel a clear conclusion: many chatbot outputs are protected speech, which should shape how courts handle AI-related litigation.
Chatbot outputs easily satisfy existing constitutional tests for what counts as protected speech. In Brown v. Entertainment Merchants Association, the Supreme Court reaffirmed that the First Amendment protects works that “communicate ideas—and even social messages,” whether through “familiar literary devices” or “features distinctive to the medium.” The threshold for protection is low. In Universal City Studios v. Corley, the Second Circuit held that computer code qualifies as speech because it “will often convey information capable of comprehension and assessment by a human being,” even though its primary purpose is functional rather than expressive. Similarly, anodyne messages such as recipes, instruction manuals, and even prescription drug price information have been recognized as protected speech. The Spence test has long extended First Amendment protection even to conduct such as flag burning or the wearing of armbands, so long as the conduct conveys a particular message and is likely to be understood as such. If wordless physical conduct can constitute protected speech when it communicates a message, it would be a strange constitutional inversion to hold that actual words conveying information in response to a user’s prompt fall outside the First Amendment’s protection.

In the chatbot space, this straightforward inquiry can get waylaid by the misguided side quest to find a human speaker to whom the speech can be attributed. Of course, we can identify a wide range of humans whose creative or expressive choices shape the chatbot’s output: the user, who crafts a prompt and engages in iterative interaction to guide the system’s responses; the developer, whose decisions about training materials, design, and guardrails affect outputs; the platform as publisher of the output; or some combination of these. This search frames generative AI as a tool of human speech, like a word processor or a camera.
But this framing is ultimately unnecessary, as courts have never required human authorship as a condition of protection. The First Amendment protects corporate speech, anonymous speech, fictional speech, and other expression not readily attributable to a particular, identifiable human being. In Brown, which extended First Amendment protection to violent video games, the Court protected speech generated by a player’s interactions with non-player characters in a virtual world. The fact that the game is interactive, that “the player participates in the violent action on screen and determines its outcome,” did not remove the constitutional protection accorded to the game’s outputs. This is because the doctrine has never required that expression be human-generated at the point of utterance. It has required that expression be communicative in character. Chatbot outputs meet that standard without difficulty.
This conclusion is strengthened by looking beyond the speaker to the listener. As my AEI colleague Clay Calvert explained, the First Amendment protects not only the right to speak but also the right to receive information and ideas. The Virginia Pharmacy Court protected the advertising of prescription drug prices because the public had a right to know accurate pricing information. And Lamont v. Postmaster General protected the right to receive communist propaganda sent from abroad. These cases reinforce the foundational values underlying free speech doctrine: individual autonomy in forming beliefs and making decisions, the collective interest in a well-informed citizenry, and the danger of allowing governments to control what people may think by determining what they may hear. All three values are implicated when governments restrict access to AI-generated communication. A patient who asks a chatbot about a medical diagnosis, a tenant who asks about her legal rights, or a student working through a difficult philosophical question is exercising precisely the kind of autonomous information-seeking that the First Amendment exists to protect. The constitutional injury in restricting that access isn’t located only in the AI system or its developers. It is also located in the user who is denied the information she sought.
This does not mean chatbot outputs cannot be regulated. But it does mean that when considering liability, legislators and courts must apply familiar First Amendment frameworks to weigh enforcement against the potential chilling effect on speech. Governments have often sought to regulate the flow of information through new communications technologies. The First Amendment has always pushed back.
Daniel Lyons is a BC Law Professor and Associate Dean of Academic Affairs. He posts regularly at AEIdeas.