Banning Trump isn't Enough
Social media platforms helped radicalize the crowd that stormed the Capitol; banning Trump won't solve the bigger problem
After years of sitting on its hands while Trump used its platforms for hate speech, conspiracy theories, and dangerous misinformation, Silicon Valley finally took action this week.
Facebook and Instagram suspended Trump at least through the remainder of his Presidency. Twitter permanently banned Trump from its platform. Others quickly followed suit.
Banning Trump from Twitter is a huge deal. But Trump hasn’t been silenced. He can still give speeches, appear on TV, and otherwise make his voice heard. Twitter, however, is where political figures make news, and being absent from that arena is a tough blow to his attempts to remain politically relevant.
Given the violence on Wednesday, Trump’s previous behavior, and the real potential for additional violence at the Inauguration, I believe banning Trump was the right choice. However, we shouldn’t pretend that huge tech companies making unilateral and seemingly arbitrary decisions about whose voice can be heard on their platforms doesn’t raise a bunch of very complicated questions. New York Magazine’s Eric Levitz has some thoughts on the matter that are worth reading.
My response to these decisions by Silicon Valley is the same as the response to the Republicans who have spoken out about Trump since the assault on the Capitol — “Thank you. What took so damn long?”
While banning Trump is a positive step, we need to be crystal clear that it neither absolves the tech companies of their culpability for what happened nor does it do very much to deal with the broader problem of misinformation and radicalization that led to the armed insurrection.
What Took So Long?
Twitter’s announcement of their decision to ban Trump was evidence of the absurdity of their inaction over the last several years. Twitter said they had done a “close review” of Trump’s tweets and suspended him due to “risk of further incitement of violence.” First, this didn’t require a close review. Second, while the assault on the Capitol is a particularly symbolic and scary example of Trump using social media to incite violence, it is far from the first example.
In 2018, Caesar Sayoc, a Trump supporter, mailed bombs to CNN, prominent Democrats including Barack Obama, Hillary Clinton, Joe Biden and Kamala Harris, former Obama Administration officials John Brennan, James Clapper, and Eric Holder, as well as liberal billionaires George Soros and Tom Steyer.
Sayoc was an intense consumer of social media. All of his targets were prominent critics of President Trump and figures in the right-wing conspiracy theories that Trump frequently promoted on social media.
Instead of expressing contrition or concern about the targets, Trump used social media to blame the media for the bombs that were sent to CNN and others. He tweeted:
A very big part of the Anger we see today in our society is caused by the purposely false and inaccurate reporting of the Mainstream Media that I refer to as Fake News. It has gotten so bad and hateful that it is beyond description. Mainstream Media must clean up its act, FAST!
The social media companies did nothing.
Trump posted “when the looting starts, the shooting starts” on Facebook and Twitter during the George Floyd protests. This was a clear violation of the rules prohibiting the incitement of violence. Twitter left the tweet up, but with a warning label. Facebook reportedly urged the White House to take the post down, but ultimately did nothing.
While these content moderation decisions are far from easy, especially when it comes to prominent public figures, the tech companies tend to find ways to avoid enforcing their rules against conservatives. In October, former Trump Senior Advisor Steve Bannon called for the beheading of Anthony Fauci in a video posted on Facebook. At the time, Fauci had already been the target of so many threats that he had 24-hour security. The video itself was removed after 10 hours and 200,000 views, but Mark Zuckerberg told Facebook employees that calling for beheading was not a violation of rules sufficient to merit a permanent ban from the platform.
For the last several years, most of the big social media platforms have been incredibly sensitive to accusations of anti-conservative bias. Mark Zuckerberg has courted President Trump through dinners, meetings, and phone calls. According to reporting from Judd Legum of Popular Information, Facebook has repeatedly turned a blind eye to obvious violations of its rules by conservative commentator Ben Shapiro. Twitter CEO Jack Dorsey had a secret dinner with conservative commentators including Mercedes Schlapp, a Trump advisor with a habit of spreading misinformation on Dorsey’s platform.
In all of these previous incidents, the platforms took little to no action. So why now?
The most generous interpretation would be that the events of Wednesday were a wake-up call to the executives of these companies that sparked some real self-reflection about the tremendous damage they are doing at home and abroad. Perhaps I am too cynical, but as my friend and former colleague Jennifer Palmieri pointed out on Twitter, the timing is more likely related to the fact that after the Georgia elections, Republicans no longer have any regulatory or oversight authority.
The test is what comes next. Kicking Trump off their platforms is an important step, but it is only a small step towards cleaning up the mess that Facebook, Twitter and YouTube have created.
Don’t Let Deplatforming be a Distraction
Much of the debate about what the social media platforms should do has centered on access for prominent people — who should be banned, which tweets should be flagged, what constitutes a violation of their terms and conditions. These are important and often complicated questions. But they are a distraction from the much bigger problem of radicalization and misinformation that occurs on those platforms every single day. Yes, Trump used Twitter and Facebook to incite a seditious riot. He is responsible for his lies and his actions, but Facebook, Twitter, and YouTube are responsible for the large swaths of the population who have been so indoctrinated with conspiracy theories and misinformation that some of them took up arms against their own government over an obvious lie.
As Roger McNamee, a former Facebook advisor turned critic, wrote in Wired:
In their relentless pursuit of engagement and profits, these platforms created algorithms that amplify hate speech, disinformation, and conspiracy theories. This harmful content is particularly engaging and serves as the lubricant for businesses as profitable as they are influential. These platforms also enforce their terms of service in ways that favor extreme speech and behavior, predominantly right-wing extremism.
Many of the people who showed up at the Capitol had been radicalized online. The crowd was filled with people who were adherents to a variety of conspiracy theories including QAnon. Most of them got this information from Facebook, YouTube, and Twitter.
Of course, Americans have been drawn to conspiracy theories since the founding of the country. White supremacy and anti-government sentiment have been around since long before the Internet, but there is no question that the social media platforms have made the problem exponentially worse.
According to a report in the Wall Street Journal, Facebook’s own research found that more than 60 percent of the users who joined extremist groups on the platform did so because Facebook’s algorithm recommended the group. Yes, you read that correctly. Facebook is pushing people to join extremist groups online. Facebook is not alone. Kevin Roose of the New York Times wrote a must-read article about radicalization on YouTube. In the piece, Tristan Harris, a former Google employee who has become a leading critic, laid out why the platforms continually push people toward more extreme content.
There’s a spectrum on YouTube between the calm section — the Walter Cronkite, Carl Sagan part — and Crazytown, where the extreme stuff is. If I’m YouTube and I want you to watch more, I’m always going to steer you toward Crazytown.
There are no easy solutions to these problems. The behavior that needs to be curbed is endemic to the platforms themselves and intrinsically tied to their business plans. However, we know they can do better, because they have.
In the run-up to the election, Facebook made a change to their algorithm to reduce the prevalence of promoted partisan posts in order to reduce the amount of political misinformation. While far from perfect, this change had an impact. It improved the quality of the information that users saw — more posts from credible sources, fewer conspiracy theories. Unfortunately, Facebook turned off this feature after the election and things returned to their pre-election terribleness.
The Markup, a non-profit news organization focused on Big Tech, conducted an experiment in which they monitored the Facebook feeds of Georgia voters.
While Facebook’s controls were in place, we found that links to traditional news sites were present in almost all election-related posts that appeared on our Georgia panelists’ feeds. After Dec. 16, however, when Facebook flipped the switch to turn on political advertising for the Georgia election, we noticed that partisan content quickly elbowed out news sites, replacing a significant proportion of mentions of the election in our users’ feeds … Of the election-related posts that appeared on our panelists’ feeds in the first half of December, a Breitbart article and video in which Alabama Republican congressman Mo Brooks called Biden’s victory “illegitimate” and called on his fellow members of Congress to reject the Electoral College’s submission of votes far and away garnered the most “like,” “love,” angry faces, and other reactions from Facebook users.
Facebook claims the change was always supposed to be temporary, but why? They found a way to improve the quality of information being shown to their users, yet chose not to continue it. The company could not be more secretive about its decision-making process, so we are forced to guess at its motivations. Based on its previous behavior, there is only one plausible reason: it was bad for business. And therein lies the ultimate problem — some of the biggest companies in the world put profits first, and they profit from sowing division and spreading conspiracy theories.
What Comes Next
One way to think about what happened is that the liar has been kicked off, but the lies are still present. These companies only respond to the most intense political pressure. Their hope is that banning Trump will take the wind out of the sails of the grassroots campaigns and shift the spotlight elsewhere. We cannot let that happen. These companies have to make fundamental changes to their business practices, or the problems that led to the Capitol riots will get worse. The forces that allowed Trump and Trumpism to flourish will continue to gain strength. The incentives that push political figures toward more extreme rhetoric will become even more enticing.
Here are some thoughts on how to keep up that pressure:
Regulatory Scrutiny: With Democrats in control of Washington, there needs to be an intense, substantive look at how these companies should be regulated. Breaking them up should be on the table. For too long, they have operated unfettered and benefitted from the fact that Congress was too broken to update our laws to account for the role these companies play in American life. Democrats have the opportunity to change that. If they don’t, we must call them out.
Support Grassroots Pressure Campaigns:
Accountable Tech is an organization that was set up to pressure tech companies to make changes to their platforms. It is currently running a campaign to push Facebook to stop recommending groups to its users, which would go a long way toward reducing radicalization and misinformation.
Stop Hate for Profit is a campaign started by a number of civil rights groups pushing companies to stop advertising on Facebook until it changes its policies on hate speech. These advertising bans affect the only thing the companies care about — the bottom line.
The social media platforms holding the worst offenders accountable is a first step, but it’s only a small step. If we stop there, the circumstances that led to last week’s tragedy will remain and all the next demagogue will need to do is light a match and watch things burn.