The AI Arms Race: A Modern Prisoner’s Dilemma

This week was “AI” week in LP.

Westlaw introduced CoCounsel. Lexis rolled out Protégé.

And my professor made a comment that felt bigger than she probably intended: It makes her depressed that as soon as we learn something about AI, ten minutes later AI has already evolved.

Some people seemed excited; I felt an impending sense of doom. Call me prehistoric (I’m not), but I can’t stand AI. And not in the dramatic “robots are taking over” way. (Though that Will Smith movie did scare the crap out of me as a kid.)

It’s smaller than that.

I’ve already watched what social media did to my attention span. It’s worse. Shorter. Less patient. And I’ve had to be extremely intentional and self-aware to combat it. So when people say AI is just a tool, I keep thinking: what lasting effect will this have? Worst case scenario: it diminishes our ability to think critically? Because right now, at least in law school, critical thinking is kind of the whole point.

Advocates for it in my life equate it to a calculator. My argument is that a calculator is only as good as the mathematician using it. A tool cannot—and should not—replace the brain behind it.

So clearly, you see where my head is at.

I got into a debate about this at a bar review during one of my first months here. A fellow member of my cohort said AI was the future, the most incredible form of technology. I said AI was lazy, selfish, and destructive to the planet.

What about people in rural areas? they asked. People without access to good teachers? AI can teach them things they wouldn’t otherwise learn. I don’t think I responded, because at the time that seemed like an outlandish take.

But the more I sat with it… the more I could see their point. I had learned a lot from YouTube in high school (thanks, Green brothers). So maybe it’s not that simple.

That was the first crack in my “AI is bad” stance. And then the semester started to pick up, anxieties started to rise, work overflowed, the whispers got louder—and I found myself in a new position.

1L is notorious for feeling like you’re never doing enough. Add to that the stress of your first year, with first-semester grades determining your summer and post-grad jobs, and you feel the weight of the world on your shoulders.

Meanwhile, X states proudly that they’re using AI to help with their outlines, Y says it explains their notes to them in a digestible way, and I start to notice a dilemma. I too am drowning in work. I own a business, attend law school full time, I have applications flooding my laptop folders, I am trying to hold onto my mental health in any small capacity—and there seems to be something that can help me.

That’s when something from Professor Kamm’s Negotiations class clicked for me.

The prisoner’s dilemma.

Two prisoners. If both cooperate, they both get off with a mild sentence: say, one year each. But if one folds and the other doesn’t, the one who pointed the finger is off the hook and the other gets five years. If they both point the finger, they each get a medium sentence of three years.

So even though cooperation is better for both, they both point the finger: out of fear that if they don’t, they’ll get screwed over, or out of ambitious hope that they’ll win big.
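The logic above can be sketched in a few lines of Python (the names and structure here are mine, purely for illustration; the sentence numbers match the post):

```python
# A toy model of the prisoner's dilemma described above.
# Sentences are in years, so lower is better.
# Both stay quiet: 1 year each. Both point the finger: 3 years each.
# One points while the other stays quiet: 0 years vs. 5 years.
SENTENCES = {
    # (my_choice, their_choice): (my_years, their_years)
    ("quiet", "quiet"): (1, 1),
    ("point", "quiet"): (0, 5),
    ("quiet", "point"): (5, 0),
    ("point", "point"): (3, 3),
}

def best_response(their_choice):
    """Return the choice that minimizes my sentence, given theirs."""
    return min(["quiet", "point"],
               key=lambda mine: SENTENCES[(mine, their_choice)][0])

# Whatever the other prisoner does, pointing the finger is better for me:
print(best_response("quiet"))  # point (0 years beats 1)
print(best_response("point"))  # point (3 years beats 5)
```

That’s the whole trap: pointing the finger is the best response to either choice, so both prisoners point, and both end up worse off than if they’d cooperated.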

That’s exactly what this feels like.

If none of us used AI, we’d probably all be better at thinking, problem-solving, and conceptualizing. If all of us use it, we’d all produce MORE work, but we’d likely be thinking less. If I don’t use it and everyone else does, I’m the one who falls behind. So even if I don’t like it, it seems the rational move is to use it.

Us 1Ls, we’ve already seen how this plays out. Big Law recruiting used to happen later. Then one firm moved earlier. Then another. And suddenly firms are interviewing before students even know how to make an outline. I had screeners during reading period and callbacks during finals. No one actually thinks that’s better. I would have preferred to spend more time studying. But no one can afford to be the one who opts out.

If I push back my interview, they may choose someone else. If they delay their interviews, they may miss out on top candidates.

AI feels like that, just bigger. All it takes is one, and then the system changes for everyone. So now it’s not “do I agree with AI?” It’s “can I afford not to use it?”

Can AI explain this to me in simpler terms? Because I sit in class, take my notes, and still don’t understand. Maybe it can help. Can AI tell me what my professor actually wants? Do they seem more focused on policy, or rules and doctrine? Can AI summarize my notes and tell me the themes?

None of that is cheating, just to be clear. Heck, we had a lunch seminar on it from the peer coaches last semester (also, please read the handbook, guys). These prompts all seem harmless. They aren’t doing the assignment for me; they’re just translating, condensing…

But they are thinking for me.

And that is the problem.

The hard part of law school isn’t getting the answer. It’s sitting there not knowing where to start. It’s being so embarrassingly confused, for a while. It’s connecting with and learning from upperclassmen with your tail tucked. It’s forcing yourself to make something out of seemingly nothing.

Being the guy with the red string during finals and finally seeing how it all comes together.

And I can feel AI slowly removing that part.

Which sounds great. And would probably help us look less insane come the end of the semester. Until you realize that’s the skill you’re here to build. And—conveniently—the exact skill the new bar exam is testing us on. 

Farewell to the days of mass memorization; hello to thought process and “showing your work.” And all of this at a time when technology is enticing us to skip that part.

I don’t think the answer is to just not use AI. That doesn’t feel realistic, especially when our future jobs require both excellence and efficiency. Admittedly, during recruitment nearly every email was checked by ChatGPT before I hit send. My job prospects were on the line and I was not willing to take any risk. But I also don’t think we’re being honest about what it could end up replacing if we’re not careful.

I don’t think AI is going to create bad lawyers; it may even create better ones. It’s definitely going to create lawyers who can produce answers quickly. But the ripple effect is that it may create lawyers who don’t always know how to start from nothing.

And maybe that’s fine. Or maybe that’s a problem we won’t notice until later.

I don’t hate AI because it doesn’t work. I hate it because it works so well that not using it starts to feel like a disadvantage. And once it feels like that, it’s not really a choice anymore.

Do with all of this as you will. 

And you’ll come to learn, quotes—and clearly reflections—are my thing, so I leave you with these:

“Power reveals.”
–Robert Caro

“Liberty lies in the rights of that person whose views you find most odious.”
–H. L. Mencken

And my favorite for this week, in light of the new Spider-Man trailer:

“With great power comes great responsibility.”
–Uncle Ben


Zoe Arvanitis-Dalpe is a first-year law student at BC Law. Contact her at arvanit@bc.edu.
