When I was in college I ran a media business that I started in high school. Sponsorships, 8 figure view counts, verified checkmarks, even a Play Button award and a congratulatory letter from the CEO of YouTube—I had all of it.
But as time went by, I started to realize that something was very wrong. When I would scroll through analytics, trying to catch on to the latest trend or find what factors were making certain online content do well, I started noticing a different type of trend—a disturbing one.
2016 was perhaps the best year to be in the media business, because it was the last year before 2017, when everything changed. Not many people are familiar with this story. It is a story of politics, psychology, economics, and law, and about how the plumbing of the internet has profoundly and tragically damaged the fabric of our nation.
When we saw jarring images of chaos in the nation’s Capitol building last week, there was no shortage of shock, no shortage of reaction, and no shortage of pain felt by all Americans.
In the immediate aftermath, despite the deepening chasm of our partisan divide, most of us felt a palpable and funereal sense that despite our differences, things had gone too far. No matter how much we may disagree—no matter how strenuously the dividing lines of class, status, ZIP code, skin color, educational attainment, gender, politics, or otherwise may keep us apart—the shared revulsion to such events ought to unite us.
Add in the worst of the COVID-19 pandemic, and we are experiencing one of the most challenging and solemn chapters in the life of our nation. In a vacuum, such profound loss, the direction our country and our world are headed, and the fragile health of the principles upon which our republic is based should be a wake-up call that marks the start of a new era and a path forward, a fresh page on which to write the next chapter in the book of the American experiment.
The Founders designed a system of government that rejects ochlocracy, or mob rule. “In all very numerous assemblies, of whatever characters composed, passion never fails to wrest the sceptre from reason,” James Madison argued in The Federalist Papers, the essays he co-wrote to advocate the ratification of the Constitution. In Federalist No. 10, Madison defined mobs, and what we have come to know today as modern political parties or factions, as groups “united and actuated by some common impulse of passion, or of interest, adverse to the rights of other citizens, or to the permanent and aggregate interests of the community.”
In a vacuum, the electoral college and the Framers’ penchant for representative democracy over direct democracy define a system built to resist the rule of the vulgus (the Latin for “mob” from which we get the word vulgar). For centuries, this system has worked remarkably well, and has been remarkably resilient.
But we do not live in a vacuum. We do not live in an environment where we may heal, or one where we may espouse the values necessary to overcome the social forces that keep us apart. We live in a politico-media environment that is designed not by voters in the heartland, august Framers in Philadelphia, or working class folk in industrial cities and towns. It is designed algorithmically by unelected technocrats and corporatists in Silicon Valley, and it forces us into the extremes on both sides.
James Madison, Alexander Hamilton, John Jay, and the others, as inspired and profound as they were, could not account for the implications of the violent modern marriage of high technology, partisan passion, and the iniquitous thrashing of elitism and populism that has reached a fever pitch in recent years.
As the chaos played out last week, I was sitting in Miami International Airport watching two different televisions. One was playing CNN, the other Fox News. In a country that receives nearly all of its news through television and smartphones, with the former increasingly designed to reflect the tendencies and rancor of the latter, the issue suddenly becomes quite clear.
A cold war of terminology and characterizations was playing out between the two TV screens. A battle between the labels of “insurrectionists” and “protesters,” “terrorists” and “demonstrators,” and talking heads editorializing about the similarities and differences between last week’s events and the events of last summer.
Given the way our media works, under the hood, on the screen, in our hands, in our pockets, and in the streets, and after a decade of unabashed tribalism and growing resentment for one another, as shocking as these events are, they’re not inexplicable. As former Rolling Stone journalist Matt Taibbi recently put it, “If you sell [the] culture war all day, don’t be surprised by the real-world consequences.”
“The moment a group of people stormed the Capitol building last Wednesday, news companies began the process of sorting and commoditizing information that long ago became standard in American media.
Media firms work backward. They first ask, “How does our target demographic want to understand what’s just unfolded?” Then they pick both the words and the facts they want to emphasize.
It’s why Fox News uses the term, “Pro-Trump protesters,” while New York and The Atlantic use “Insurrectionists.” It’s why conservative media today is stressing how Apple, Google, and Amazon shut down the “Free Speech” platform Parler over the weekend, while mainstream outlets are emphasizing a new round of potentially armed protests reportedly planned for January 19th or 20th.
What happened last Wednesday was the apotheosis of the Hate Inc. era, when this audience-first model became the primary means of communicating facts to the population. For a hundred reasons dating back to the mid-eighties, from the advent of the Internet to the development of the 24-hour news cycle to the end of the Fairness Doctrine and the Fox-led discovery that news can be sold as character-driven, episodic TV in the manner of soap operas, the concept of a “Just the facts” newscast designed to be consumed by everyone died out.
News companies now clean world events like whalers, using every part of the animal, funneling different facts to different consumers based upon calculations about what will bring back the biggest engagement kick. The Migrant Caravan? Fox slices off comments from a Homeland Security official describing most of the border-crossers as single adults coming for “economic reasons.” The New York Times counters by running a story about how the caravan was deployed as a political issue by a Trump White House staring at poor results in midterm elections.
Repeat this info-sifting process a few billion times and this is how we became, as none other than Mitch McConnell put it last week, a country:
‘Drifting apart into two separate tribes, with a separate set of facts and separate realities, with nothing in common except our hostility towards each other and mistrust for the few national institutions that we all still share.’”
This is what I learned several years ago, and what one of the world’s largest technology platforms also learned when they tried to fix it.
During my time as an “influencer” (a career to which one third of kids now aspire), I would analyze the analytics, view the viewership, and review the reviews in my constant pursuit of profit. I wanted to emulate popular content and follow the audience wherever it was going.
Over time I began to notice that the content that did the best was not necessarily the content people wanted the most. The content that was doing the best shared this odd aesthetic that I could not quite put my finger on at first. Thumbnails (the little picture accompanying a YouTube video) with the highest saturation, the most exuberant faces and reactions, or the most preposterous scenarios, rated well—not the best content, but videos that provided the pretense of the best content (suggestive scenarios, leading titles, or otherwise curiosity-inducing spectacle). It seemed that only the most extreme content could rise to the surface.
“Does anyone truly like this stuff?” I wondered.
Think about what you see on YouTube. Go ahead. Log on. Make sure you sign out and clear your cookies and history from your browser first, or you will be recommended things an algorithm already knows your psychology is interested in.
I guarantee what you will see is some bizarre array of extreme facial expressions, highly saturated color schemes, and shocking scenarios depicted in thumbnails that leave you curious and wanting to know more.
What is going on here is that YouTube’s algorithms, meticulously tracking the preferences and behavior of billions of people, have figured out how to exploit basic human curiosity: they show you new and increasingly extreme options, learn your preferences, and keep feeding you information and media that holds you on the site, exposed to ads (in turn targeted to those same preferences), for as long as possible.
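To make the dynamic concrete, here is a deliberately simplified sketch in Python. It is not YouTube’s actual system; the “intensity” scale, the watch-time model, and every number in it are invented for illustration. The point is structural: when the only objective is predicted engagement, and a small escalation is what engages most, the feed ratchets toward the extreme on its own.

```python
# Toy model (illustrative assumptions throughout, not YouTube's code).
# Each video has an integer "intensity" from 0 (mild) to 10 (extreme).

def predict_watch_time(video_intensity: int, last_intensity: int) -> float:
    """Assumed behavior: viewers watch longest when content is slightly
    more intense than what they just saw."""
    step_up = video_intensity - last_intensity
    if 0 <= step_up <= 2:            # a small escalation is most engaging
        return 1.0 + video_intensity
    return max(0.0, 0.5 - abs(step_up))

def recommend(candidates, last_intensity):
    """Rank purely by predicted engagement -- the only objective."""
    return max(candidates, key=lambda v: predict_watch_time(v, last_intensity))

# Simulate one session: the feed drifts toward ever more intense content.
history = [1]                         # user starts on mild content
catalog = list(range(11))             # intensities 0 .. 10 all available
for _ in range(5):
    history.append(recommend(catalog, history[-1]))

print(history)  # prints [1, 3, 5, 7, 9, 10]
```

Nothing in the loop ever steps back down: each recommendation becomes the baseline the next one escalates from, which is the ratchet described above.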
In a New York Times column dated March 10th, 2018, Zeynep Tufekci, a professor in the School of Information and Library Science at the University of North Carolina, a faculty associate at the Berkman Klein Center for Internet and Society at Harvard, and a former fellow at the Center for Information Technology Policy at Princeton, wrote about this effect:
“At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.
Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content. Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon.
So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would. Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11.
As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with. Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism.
Videos about jogging led to videos about running ultramarathons. It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.”
Ditto for Twitter. And Instagram. And Facebook. And Google. They’re all run the same way. The plumbing is the same. And there’s no way to fix this. The very essence of Big Tech is to push the public towards the extreme and the visceral for screen time and profit. We live in a politico-media environment that is designed to drive us to these extremes and aestheticize the sort of unrest we saw at the Capitol.
We are talking about software that has profoundly altered the course of human history. At one point, in the 1990s and early aughts, the fledgling “dot com” age of American technology companies, the “World Wide Web” was exciting and interesting, a way to connect the world. To protect these companies and allow the web to grow, Congress passed the Communications Decency Act of 1996, which, among other things, contained the now-notorious Section 230, which stipulates:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Described as “the 26 words that created the Internet,” Section 230 in effect eliminates nearly any liability for Internet companies arising from the conduct of users on their sites. It allows YouTube to avoid liability for copyright infringement, Twitter to avoid liability for disseminating falsehoods and hate, Facebook to avoid liability for genocides, Airbnb to avoid the liabilities associated with the business of renting property, and so on.
It is in essence a loophole that allows the most powerful companies in the world to enjoy the monopoly profits only made possible by availing themselves of the benefits and activities of a free society, while avoiding all of its norms and protections.
Within the law, there is little recourse for the excesses of Big Tech and corporate America. It seems the only place accountability can be found is from within.
The only two times in the modern era I can think of in which Big Tech has truly faced something resembling a reckoning were last week and four years ago.
In early 2017, the Wall Street Journal published a series of articles detailing the manner in which “Google’s automated system placed ads for some of the world’s biggest brands—including Coca-Cola Co., Procter & Gamble, Amazon.com Inc. and Microsoft Corp.—on five YouTube videos peddling racist and anti-Semitic content.”
This revelation unleashed an unprecedented firestorm from corporate America. Pressured to appear outraged, in short order many of the world’s largest corporate advertisers pulled their ads from the world’s largest corporate ad platform in protest of Google’s inability (and unwillingness) to protect their brands from hateful content.
Within weeks, I watched my revenue collapse as the ad inventory on the platform dried up. Answers were hard to come by. YouTube (owned by Google) staff told me there was “an ad inventory issue” and nothing more.
Emails and written statements from the company were bland and patronizing, and nobody I knew, even in Hollywood, was able to get any answers out of Google.
In the spring and summer of 2017, YouTube effectively shut monetization down. The site still worked; videos could still be played and watched. To the public, nothing was different. But hardly anyone, even the major corporate publishers, was making any money.
As I would later piece together through my own number crunching, analyzing which videos YouTube’s algorithms would, seemingly at random, serve ads on, an unprecedented and top secret experiment was underway inside the San Bruno, CA company, one that, as much as anything, served as a harbinger of the events at the Capitol last week.
What I suspected (and which was later confirmed in a press release) was that in response to the advertiser boycott, Google was trying to find a way to train artificial intelligence to detect hateful or extremist content and prevent a brand’s advertisements from showing on them.
But this is extremely difficult (if not impossible) to do, and the effort was an early indicator not only of how easily hate and extremism of all stripes spread on social media, but of how much they have become its very essence, and how hard they are to stop.
A computer can tell the difference between ISIS and an episode of Peppa Pig. But can it tell the difference between ISIS and a news report about ISIS? Can it tell the difference between a video of a debate about immigration and a video of violent threats toward immigrants? What if the President is the one saying these things? What if it is the President’s detractors saying them?
As you can imagine, the situation becomes extremely complex very quickly.
In most cases, even the best artificial intelligence cannot figure it out. These are questions only humans can resolve, and even then, after decades of debate and conflict over so many issues, we do not seem to be very good at resolving them either.
But the bigger problem is not any of that. The bigger problem is that for a website that has over 500 hours of video uploaded to it every minute, human review is not possible, and Section 230 does not require it, as YouTube is not liable in any event.
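The arithmetic behind that impossibility is worth spelling out. Using the 500-hours-per-minute figure above (the eight-hour shift is my own illustrative assumption):

```python
# Back-of-the-envelope: how many full-time reviewers would it take just
# to watch everything uploaded to YouTube once, at normal speed?
# Only the 500 hours/minute figure comes from the text above; the
# shift length is an illustrative assumption.

upload_rate_hours_per_min = 500
minutes_per_day = 24 * 60
hours_uploaded_per_day = upload_rate_hours_per_min * minutes_per_day

reviewer_hours_per_day = 8           # one full-time shift, watching at 1x speed
reviewers_needed = hours_uploaded_per_day / reviewer_hours_per_day

print(f"{hours_uploaded_per_day:,} hours/day -> {reviewers_needed:,.0f} reviewers")
# prints: 720,000 hours/day -> 90,000 reviewers
```

Ninety thousand people working every single day, merely to watch each video once, before a single judgment call is made. The number only grows when reviews are contested or content needs a second opinion.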
So what happens when AI can’t figure it out, human review is not possible, and the world’s most valuable advertisers will not spend any money until somebody figures something out?
What happens is what YouTube did. YouTube developed an AI system that groups videos into three categories: videos with qualities an algorithm deems safe, which show ads (“monetized”); videos found to share qualities with controversial content, which do not (“demonetized”); and videos the system cannot figure out (“under review”).
That third category, the gray area, is as revealing as it is critical. For the AI to develop a basis on which to detect unsafe videos, some human review is required, and Google built just such a workforce. The job involves sifting through the filth of the internet all day, training a system to detect the fine and increasingly blurred distinctions between offensive and acceptable, defined not by the preferences or norms of the public, FCC regulations, or any agreed-upon community standards, but by the whims of the world’s largest advertisers.
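The mechanics of that three-bucket system can be sketched in a few lines. This is an illustration, not YouTube’s code: the thresholds, the scores, and the idea of a single “brand safety” number are my own simplifying assumptions; only the monetized/demonetized/under-review split comes from the account above.

```python
# Minimal sketch of confidence-thresholded routing (assumed numbers).
# A model emits a "brand safety" score in [0, 1]; only the confident
# extremes are automated, and the gray zone goes to a human.

SAFE_THRESHOLD = 0.8        # confidently brand-safe -> show ads
UNSAFE_THRESHOLD = 0.2      # confidently controversial -> pull ads

def route(brand_safety_score: float) -> str:
    if brand_safety_score >= SAFE_THRESHOLD:
        return "monetized"
    if brand_safety_score <= UNSAFE_THRESHOLD:
        return "demonetized"
    return "under review"   # gray area: a human labels it, and that
                            # label becomes training data for the model

# Hypothetical scores for three hypothetical videos:
scores = {"cooking tutorial": 0.95,
          "news report on ISIS": 0.50,
          "extremist rant": 0.05}
print({title: route(s) for title, s in scores.items()})
```

Notice where the hard cases land: a news report *about* ISIS sits in the middle, which is exactly the ambiguity described above that no algorithm resolves on its own.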
In fact, the task of combing the depravity of the internet for the sake of safeguarding ad revenue is so odious that, as The Verge reported, YouTube requires its content moderators to sign a statement acknowledging the job can give them post-traumatic stress disorder.
According to The Verge and other outlets, Facebook and Twitter maintain similar workforces doing similar tasks for the same purpose.
Gradually, ad revenue was restored. But at what cost? Once I knew how the sausage was made, I wanted out of the sausage factory. No amount of money was worth pumping out videos onto a platform I now knew was designed to get people addicted to it, and to elevate whatever content some equation predicts corporate advertisers will like.
The bottom line is, as long as Section 230 is the law, and these companies have no true liability, only something as unprecedented and catastrophic as a boycott from the world’s most powerful advertisers will compel any action. Nothing else will change anything.
The more addicted people are, the more outraged they become, the more engaged they are on the site, the more ads they see, the more time they spend, and the more money these platforms make.
There is no incentive for social media platforms—Big Tech, to fix anything or change anything or heal anything. The outrage, the reaction, the outrage to the reaction—this whole cycle that has consumed our politics and the soul of our nation—is the business strategy.
When the Democrats eked out the smallest of margins in the Senate last week, on top of the insurrection at the Capitol, I was not surprised to see President Trump’s account suspended. It is exactly the playbook of these companies, a playbook I am intimately familiar with.
That was a calculated move that solved nothing. By taking down a prominent and problematic voice on the right, Big Tech for the time being pacified the newly powerful left, which until that point had shared a bipartisan interest in doing real damage to the unchecked power of Big Tech by repealing Section 230, a step supported by President Trump, President-Elect Biden, and many members of Congress on both sides.
But suspending President Trump for inciting violence merely heads off a true reckoning for an industry that has inflicted more carnage than any other upon the basic freedoms and values of our republic.
The problem isn’t the President (well, not completely). It’s that billions of people are hardwired to a system designed to arc toward the extreme, the spiteful, the vengeful, and the chaotic. It is a system that scientifically detects our flaws, our tastes, our fears, and whatever keeps us apart. It is a system that amplifies and broadcasts the very passions and defects of human character that the Framers designed a system of government to restrain, leaving us with a nation constantly at war with itself and its identity.
The story of what happened at the Capitol last week is not about President Trump. It’s not even about politics. It’s about economics, and a country living in a politico-media environment that glorifies unrest, exalts its elite, and leaves many people feeling left behind and resigned to their devices, buried in the despair of a nation that cannot understand itself, insofar as we cannot understand one another.
Through the crooked lens of social media, we see each other not as fellow Americans, or human beings, but as partisan combatants—baskets of deplorables, coastal elites, limousine liberals, folks “clinging to their guns and their religion,” and terms far more in number and profanity than I care to repeat.
This is not who we are. It is who we have been made to become.
I did not mention President Trump, or anyone in particular, until over 3,000 words into this article, because the individuals don’t matter. The game doesn’t change. Only the players do.
As harsh as the images from the Capitol were, they are the symptom, not the cause.
So long as we live in a country that sees itself through a hyperpartisan and algorithm-ridden media that exploits our divisions, and is designed to divide us, we will be divided.
As odious as many may find President Trump, the fact that the President of the United States can be “suspended” from the public square by unelected, unaccountable corporatists in Silicon Valley, in the same way that any number of unremarkable individuals are, should register as tyrannical censorship, regardless of one’s politics.
On the other hand, this doesn’t mean the president cannot reach anyone. He can do what the president has done for a very long time—sit down behind the Resolute Desk, look into the camera, or the microphone, and deliver a well-reasoned speech to the American people, and the world.
Perhaps we would be better off if Twitter, YouTube, Facebook and all of their ilk just suspended all of us too.
This way we can go back to doing what we once did, instead of screaming partisan refrains at one another’s anonymous online alter-egos, handling our business in person. Talking to one another. Getting to know one another. Trying to understand one another.
Walking down the street in your city or town, I’d be willing to bet you never encounter the type of chaos and cacophony highlighted on social media. For 99% of us, the only time we see or hear or learn of any of these things is on social media or on the news. We do not typically just happen upon it.
This is because the warped aperture of social media leads us to think there is far more division, far more hatred, and far more bloodshed than there actually is. If we resign our view of the world to the world as depicted on the rectangle in our pockets, we should not be surprised when reality comes to resemble it.
Perhaps if things were different, and as they once were not too long ago, the electoral college map would not be composed of precisely titrated red counties and blue cities—red states and blue states—but a United States.
Perhaps in our elections many among us would not sit and try to game out electoral college maps with the aim of squeezing out enough votes from a county they only care about for the sake of arithmetic.
As long as we live as two peoples, experiencing our lives through technology platforms that profit from our divisions, we will remain divided.
When you think of when things first began to get this way, what year comes to mind? Of course, some might feel things ran aground a long time ago. But in terms of things being as bad as they are now? This trajectory. When did that start?
I would say around 2013. In a way, this is when the brinkmanship and hyperpartisan tribalism we find ourselves in today truly began to take root. It is no surprise that this is around the time social media truly took over our lives.
It is easy to forget that for most of the history of our republic, we had no social media. We did not even have the internet until relatively recently. And in many ways, our democracy was far healthier for it.
We are living in James Madison’s nightmare. Mob rule, made possible by 26 words in a 25-year-old law, has seized the very soul of this nation.
Some say that eliminating Section 230 would be catastrophic, that the technology industry needs it to survive. Some point to how well the economy has held up during the pandemic, and how much of that success is owed to the technology sector. I say no. I say that any company whose business model depends on a 26-word loophole in a 25-year-old law, and involves pitting Americans against one another for profit, driving everyone to the extremes, leaving kids addicted to their phones and obsessed with their image, and inciting genocide abroad and insurrection at home, is not a business model that was designed to last.
In the final analysis, we must ask ourselves at this vital crossroads, who are we? Are we a nation of corporatists, Wall Street bros, and Big Tech enthusiasts? Or are we truly one nation under God, that in this dark hour, will be able to see the light, and find our way?
We cannot simply decide it is sufficient to just move on. We cannot seek partisan vengeance, and continue the tug of war between left and right. We will not find peace in the hegemony of technocrats and coastal elites, nor will we find peace in the rule of those living in the red around the cities on the maps on TV.
Because then we have learned nothing, and we have nothing.
This is our moment. This is our Einstein warning FDR about the Atomic Bomb. It is our Space Race. It is our Cold War. It is our Independence Hall.
Our nation’s adversaries are all too familiar with, and all too willing to exploit, these flaws. We know Russia uses social media to fan the flames of divisive social issues. We know other countries do this as well. Meanwhile, we face an aggressive China rapidly expanding its global reach, a resurgent Iran, a climate crisis, threats at home and abroad, and a raging pandemic, all while we are hamstrung by our divisions and culture war.
If we are wise, this somber and solemn moment in the life of our nation is not a nadir, but merely the launching point of a new tomorrow, one that learns the lessons of that which brought us to this place, and allows the country, and its people to emerge as one.
Perhaps it took such tragedy to awaken us to how far we have fallen.
We cannot expect anything to change so long as we will not do what is necessary to make these changes. Years of living in a media environment designed to drive us apart have done just that.
This does not mean social media must go away. Connecting with friends and family is good. It is useful. But unleashing upon the world perversely algorithm-ridden platforms, with influencers and brands and megaphones thrown in alongside a news media that can say anything, is something else entirely, something that has gone too far, has not worked, and cannot work.
Have we not seen enough?
It’s time for change.
Enough is enough.
Tom Blakely is a first-year student and co-host of the new ‘Just Law’ podcast from BC Law. Contact him at firstname.lastname@example.org.