Much like kids on a long drive playing “I spy” with passing cars, the modern news cycle is often ephemeral in its nervous fixations. Cases in point: the week spent earlier this year in national discord over banning gas stoves; the tizzy over Chinese spy balloons; classified documents in a garage with Joe Biden’s Corvette; or any of the other made-for-breaking-news idées fixes that pass in and out of the national consciousness with mind-numbing regularity.
But ChatGPT, the new “generative AI” technology that uses artificial intelligence to produce strikingly well-written prose on nearly any topic, has stuck around. Much like last year’s fanfare over generative art AI such as DALL-E (which does something similar with visual images), people seem both curious about and fascinated by this amusing new tool.
One particular worry with ChatGPT is that it might enable students to produce essays written by AI instead of themselves. In one of my first classes this semester, my own professor expressed concern, saying that the final exam may have to be given in person instead of as a take-home examination to keep students from using it to cheat.
That seemed a little far-fetched to me. ChatGPT might produce long sections of text on broad and well-traveled topics that would be useful in high school or college essays. According to some recent tests, ChatGPT might even be able to eke out poor but passing grades on law school exams. But this is hardly the stuff of the Varsity Blues scandal or the intrepid forgery of Catch Me If You Can. It might even put its users at a disadvantage. Law school exams involve reading a long fact pattern, identifying which facts give rise to legal issues, knowing what rules and laws apply, summoning them up from your outline, and applying those rules and laws to the issues at hand. Pasting a long fact pattern into ChatGPT and having it perform that analysis is simply not something the tool can do well (yet).
There have been plenty of opinions on the topic:
“[It] has ‘too many temptations’ to be useful in schools and libraries.”
“[It] reinforce[s] [a] fascination with gadgetry, as opposed to intellect, that is endemic in American popular culture.”
“I believe that [it] is destined to revolutionize our educational system and that in a few years it will supplant largely, if not entirely, the [role] of textbooks.”
“Teachers tend to adopt a new technology when that technology helps them do what they are currently doing better; thus, they may be seen as reinforcing the status quo. In addition to a cautious attitude engendered by teaching and the historical and cultural resistance to change, the influx of [technology] and the perception of video and film as entertainment illustrate how [tech] can be suspect as a legitimate educational tool.”
Surprise! The first quote above was not written recently about ChatGPT; it was written by a sitting US Senator in the Harvard Crimson in 1998, about computers. The second quote is from an academic journal in 1996. The third is from Thomas Edison in 1922. The fourth is from a position paper presented at an academic conference in 1997.
There’s both a pattern here and a problem. ChatGPT is merely the latest chapter in a long line of skepticism and hand-wringing in education about new technologies.
I was speaking recently to a new associate at a large law firm about how to improve law school and the legal profession, and he brought up law school exams. His 3L year, at the height of the COVID-19 pandemic, saw the debut of, and experiment with, take-home, self-scheduled exams.
“Gone are the days when 50 students have to sit and take an exam together,” he told me. “We don’t need all the stress and anxiety of that. If we’re going to be lawyers and members of the bar we need to believe we can trust each other [to take tests remotely].”
It’s that last piece (trust, not ChatGPT) that lies at the center of all of this.
In late 2021, well before the advent of ChatGPT, I wrote about the post-COVID fall offensive some university and law professors were waging to have their exams in person once again. In that post, I concluded that the anti-take-home-exam regime was built largely atop the “this is how we always did it” mindset, the same mindset at the heart of the nation’s return-to-the-office vs. work-from-home cold war playing out between corporations and their employees.
Do we really need to require in-person exams out of the oblique concern that students, left to their own devices, may lay waste to academic integrity by using a tool that, when tested, barely received passing grades on the most open-ended of exams thrown its way?
ChatGPT is not the end of traditional teaching as we know it, nor should it cause our professors to worry more than they already do about students cheating. It’s not some new weapon of academic mass destruction that we must run from in fear of disaster, as when Condoleezza Rice once infamously said, “we don’t want the smoking gun to be a mushroom cloud.” ChatGPT is the latest in a progression of advancing technologies that may be useful for some tasks and play some useful role in our work and our lives, as long as we keep an eye on its disadvantages and risks.
Tom Blakely is a third-year student at BC Law, and co-host of the BC Law Just Law Podcast. Contact him at firstname.lastname@example.org.