You don’t have to be a fan of the TV series Black Mirror to realize that our world is becoming more computationally driven. Yet, being a fan may help you recognize the dangerous ways that technology can expand to affect how society operates. Ever since I began law school just a few months ago, I’ve been led to consider the role that courts will play in organizing and controlling new scientific frontiers. An increasingly important feature of future courts will be mathematical literacy. Unfortunately, based on empirical data, our court system has not been very effective at analyzing empirical data.
The Supreme Court recently heard arguments grounded in statistics in Gill v. Whitford, a case about partisan gerrymandering. Several justices seemed dismissive of a mathematical tool, called the efficiency gap, that aims to measure the extent of partisan gerrymandering. The computation simply involves taking the difference between the two parties’ “wasted” votes and dividing it by the total number of votes cast. Some justices suggested that the lack of public understanding would make such a standard seem arbitrary and erode the legitimacy of the court. Meanwhile, I’ve spent the last two months of law school rigorously attempting to internalize foundational legal concepts that I’m certain are puzzling to most lay people.
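The arithmetic is easy to make concrete. Below is a minimal sketch in Python of how an efficiency gap could be computed for a two-party map; the district vote counts and the simple “bare majority” definition of a winner’s wasted votes are my own illustrative assumptions, not the exact formulation briefed in Gill.

```python
def wasted_votes(winner_votes: int, loser_votes: int) -> tuple[int, int]:
    """Wasted votes in a single two-party district.

    Every vote for the loser is wasted; for the winner, every vote
    beyond the bare majority needed to win is wasted.
    """
    needed_to_win = (winner_votes + loser_votes) // 2 + 1
    return winner_votes - needed_to_win, loser_votes


def efficiency_gap(districts: list[tuple[int, int]]) -> float:
    """Efficiency gap for a plan of (party_a_votes, party_b_votes) districts.

    A positive value means Party A wasted more votes than Party B,
    i.e. the map works against Party A; a negative value works against Party B.
    """
    wasted_a = wasted_b = total_votes = 0
    for a_votes, b_votes in districts:
        total_votes += a_votes + b_votes
        if a_votes > b_votes:
            w_winner, w_loser = wasted_votes(a_votes, b_votes)
            wasted_a += w_winner
            wasted_b += w_loser
        else:
            w_winner, w_loser = wasted_votes(b_votes, a_votes)
            wasted_b += w_winner
            wasted_a += w_loser
    return (wasted_a - wasted_b) / total_votes


# Hypothetical four-district plan, purely for illustration.
plan = [(60, 40), (60, 40), (60, 40), (20, 80)]
print(f"Efficiency gap: {efficiency_gap(plan):+.1%}")
```

In this hypothetical plan, Party A wins three of four districts with only half of the statewide vote, and the gap comes out to roughly 25 percent, to Party B’s disadvantage.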
Chief Justice John Roberts, an accomplished graduate of Harvard, remarked that it “may simply be (his) educational background but (he) can only describe it as sociological gobbledygook” (never mind that the efficiency gap has nothing to do with sociology), and that “statistics is a hazardous enterprise.” The real danger here is that the more mathematically complicated an unjust process is, the more mathematical the explanation needed to identify that process as unconstitutional will be. If the court system is unwilling to give these explanations any consideration, there will be a perverse incentive to use obscure math as a way of bypassing the application of justice.
Mathematical illiteracy also leads to the misapplication of statistical concepts in the court system. The “negative effect fallacy” is one recurring example. A recent case study found that courts often dismiss empirical evidence on the grounds that it cannot “prove a negative.” Whatever the merits of that maxim for philosophically negative statements, it does not apply to the arithmetic sense of the word: a decrease in some quantity. Social science research has produced vast amounts of empirical data that serve as evidence of a decrease in something (illegal searches and seizures under a given rule, for instance) by comparing numbers against similar states, against previous conditions in the same state, or by using other statistical controls. Yet courts have continually dismissed much of this evidence by simply stating that “you can’t prove a negative.”
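To see how ordinary this arithmetic is, here is a purely hypothetical sketch (the numbers are invented for illustration) of the kind of comparison social scientists use to show a decrease: the change in a state that adopted a rule, measured against the change in comparable states that did not.

```python
# "Proving a negative" in the arithmetic sense: estimating a decrease with a
# simple before/after comparison against similar states (a bare-bones
# difference-in-differences). All figures are hypothetical.

# Recorded incidents per year (e.g., contested searches), before and after the rule.
adopting_state = {"before": 120, "after": 80}
comparison_states = {"before": 115, "after": 110}

change_adopting = adopting_state["after"] - adopting_state["before"]          # -40
change_comparison = comparison_states["after"] - comparison_states["before"]  # -5

# The adopting state fell 35 incidents more than the comparison states:
# evidence of a decrease attributable to the rule.
estimated_effect = change_adopting - change_comparison
print(f"Estimated effect of the rule: {estimated_effect} incidents per year")
```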
Greater mathematical literacy in the legal profession could also help establish clearer thresholds for certain issues. Our professors constantly highlight the importance of identifying the “squishy” words in the law, including “reasonable,” “sufficient,” and “significant,” that indicate when the facts of a case are enough to trigger some element of the law. While I do not suggest that math can clarify all of these terms, I am confident that the ability to properly examine numerical measures would give the court system an additional method of establishing bright-line tests. If used appropriately, this would likely decrease ambiguity and strengthen the legitimacy of the court. In the gerrymandering example above, it would mean establishing a numerical ceiling on the legally acceptable efficiency gap.
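As a sketch of what such a bright-line test might look like, the check below flags any plan whose efficiency gap exceeds a fixed ceiling; the 7 percent figure is a hypothetical value chosen for illustration, not a threshold any court has adopted.

```python
# Hypothetical bright-line test: the ceiling value is illustrative only.
EFFICIENCY_GAP_CEILING = 0.07

def exceeds_ceiling(efficiency_gap: float) -> bool:
    """Flag a districting plan whose efficiency gap magnitude exceeds the ceiling."""
    return abs(efficiency_gap) > EFFICIENCY_GAP_CEILING

print(exceeds_ceiling(0.12))  # True: presumptively suspect
print(exceeds_ceiling(0.03))  # False: within the acceptable range
```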
Legal analysis should involve weighing empiricism against traditionalism. In some cases, our laws would benefit from evolving whenever data reveal better ways of applying justice (even if that means departing from precedent). But this cannot happen without the ability to interpret and understand the math.
Now, what can be done? Some academics have suggested incorporating empirical research into the legal curriculum. Tool-oriented courses (such as statistics, research design, methods, and game theory) help students grasp foundational principles that make it easier to understand empirical studies later in their careers. Still, the conversation should not end with simply making overburdened law students take a few more classes. It is equally important for graduate students in other disciplines to get exposure to diverse fields of study.
One final example that demonstrates this need lies in the new frontier of self-driving cars. The engineers who code the decision-making systems for these vehicles will have to decide how a car acts in various situations, including the choice its computer must make when it is heading toward a group of people and moving too fast to stop. If the car is on an enclosed road, it can either crash into the side wall and risk injuring the driver, or continue straight while braking and risk injuring the group of people. A utilitarian calculus would resolve this dilemma by risking injury only to the driver. Others would say the car should principally aim to protect its owner, especially since the situation might have been the result of negligence by the group. The government will eventually have to establish legal rules for these types of decisions to ensure consistency among carmakers. And those legal rules will be enforced by the courts.
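The competing rules can be stated precisely. The toy sketch below contrasts a utilitarian rule, which minimizes total expected injuries, with an owner-protective rule, which minimizes risk to the occupant first; the actions and risk estimates are entirely hypothetical.

```python
def choose_action_utilitarian(occupant_risk: dict, pedestrian_risk: dict) -> str:
    """Pick the maneuver with the lowest total expected injuries."""
    return min(
        occupant_risk,
        key=lambda action: occupant_risk[action] + pedestrian_risk[action],
    )

def choose_action_owner_first(occupant_risk: dict, pedestrian_risk: dict) -> str:
    """Pick the maneuver safest for the occupant, breaking ties by pedestrian risk."""
    return min(
        occupant_risk,
        key=lambda action: (occupant_risk[action], pedestrian_risk[action]),
    )

# Expected injuries for each available maneuver (illustrative numbers only).
occupant_risk = {"swerve_into_wall": 0.9, "brake_straight": 0.1}
pedestrian_risk = {"swerve_into_wall": 0.0, "brake_straight": 2.5}

print(choose_action_utilitarian(occupant_risk, pedestrian_risk))  # swerve_into_wall
print(choose_action_owner_first(occupant_risk, pedestrian_risk))  # brake_straight
```

The point is not which rule is right, but that whichever rule the law settles on will be encoded as exactly this kind of logic, and courts will have to read it.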
As technology permeates ever more of our society, avoiding technological discourse will become increasingly difficult. Just as the engineers developing these cars should be exposed to philosophy and ethics so that their new tools can benefit from academic progress in those fields, legal scholars should gain a basic understanding of scientific theories in order to properly interpret and adjudicate the new scientific world. Until distinct disciplines begin interacting with one another, each will suffer from a lack of whatever insight the others can offer.
If you are interested in further reading on this issue, check out:
— “Building An Infrastructure for Empirical Research in the Law” by Lee Epstein and Gary King
— “The Negative Effect Fallacy: A Case Study of Incorrect Statistical Reasoning by Federal Courts” by Ryan D. Enos, Anthony Fowler, and Christopher S. Havasy
— “The Supreme Court Is Allergic To Math” by Oliver Roeder