By Wendy Via and Heidi Beirich
For years on social media, Trump spewed hate and racism, pitted Americans against each other, spread lies, and made white supremacists feel that they had a leader in the White House. He was finally suspended or banned by many platforms in January, but only after a violent insurrection at the Capitol.
After January 6th, Facebook temporarily banned Trump, and says it will abide by its Oversight Board’s decision (coming out May 5th) on whether he regains his Facebook page. Twitter permanently banned Trump, and is now asking for public input about its approach to world leaders. YouTube temporarily banned Trump, and CEO Susan Wojcicki announced that he will regain his YouTube channel when the “risk of violence” has passed.
We can’t let social media companies fool us. They knew all along that Trump was violating their community standards (rules about what is and isn’t allowed on their platforms). In fact, tech leaders made the affirmative decision to allow exceptions for the politically powerful, usually with the excuse of “newsworthiness” or under the guise of “political commentary” that the public supposedly needs to see.
This decision made the companies complicit in Trump’s spread of hate, racism, and misinformation.
This week, Facebook must make the decision to ban Trump permanently. He is a dangerous threat to our democracy. If that’s not enough to ban him, what is?
This decision will also have implications well beyond Trump. Thanks to Trump and the misguided decisions of tech leaders, there’s a cascade of loopholes that help far-right and extremist politicians gain followers and spread hate and disinformation through social media. The result: real-world harm and weaker democracies.
And here’s the thing: these loopholes, or community standards exceptions, aren’t necessary for the vast majority of politicians across the U.S. and the globe who engage in civil debate and advocate for human rights and strong democracies. So why the loopholes? Because the incendiary rhetoric that comes out of extreme politicians’ mouths drives online engagement, and therefore ad buys. It’s what sells.
And that’s what it’s all about, isn’t it? The algorithms are designed to increase engagement, which evidence shows is best done with divisive, racist, and harmful content. And engagement increases ad buys, adding to the billions that these companies already have.
While Facebook is moralizing about its commitment to humanity, the company has been allowing explicit exceptions to its community standards for some of the worst human rights abusers in the world, powerful politicians who harness social media for malignant ends. Those with the most power and the biggest megaphone need to be held to the strictest account because their hateful rhetoric and lies have the most devastating impact on public safety.
Facebook has an obligation to keep Trump off its platform. It must put democracy and human rights above profit. And it doesn’t end there: to stop the spread of hate, Facebook and other social media companies must enforce their community standards equally in the U.S. and abroad, and refuse to monetize harmful content. They should remove the “newsworthy” exceptions for politicians and fact-check political ads globally so that extreme candidates and hateful leaders don’t gain an unfair advantage and cause irreversible harm.
Facebook has the opportunity to do something right. Let’s hope they do.
Heidi Beirich and Wendy Via are co-founders of the Global Project Against Hate and Extremism. The Global Project Against Hate and Extremism (GPAHE) is a nonprofit whose mission is to strengthen and educate a diverse global community committed to exposing and countering racism, bigotry and prejudice; and to promote the human rights values that support flourishing, inclusive societies and democracies.