In January 2021, then-President Donald Trump was indefinitely suspended from Facebook, Instagram and YouTube and expelled from Twitter and Snapchat, with other social media platforms also taking steps to address his abuse of their services. The mass actions against Trump content came after it became clear that he had used his online megaphone to incite the white supremacists, neo-Nazis, militia members, QAnon adherents and others who stormed the U.S. Capitol on January 6, in an effort to disrupt the certification of the election.
As the riot unfolded, Trump defended the rioters in a tweet: “These are the things and events that happen when a sacred landslide election victory is so unceremoniously & viciously stripped away.” Hours earlier, Trump had posted a tweet attacking Vice President Mike Pence even as rioters, some of them chanting “Hang Mike Pence,” came within striking distance of him as he was evacuated from the Senate chamber.
That finally appeared to be a step too far for social media. But the fact that it took an actual insurrection, planned and encouraged on the companies’ own services, to get Facebook, Twitter, et al., to move is unbelievably discouraging. And even then, Facebook COO Sheryl Sandberg tried to deflect responsibility, telling Reuters, “I think these events were largely organized on platforms that don’t have our abilities to stop hate and don’t have our standards and don’t have our transparency.”
The damage Trump did with his online activity became much clearer when he was gone. According to Zignal Labs, online misinformation about election fraud plunged 73 percent in the weeks following Twitter’s decision to ban Trump on January 8. Other forms of online misinformation also plummeted. Zignal found, “Mentions of the hashtag #FightforTrump, which was widely deployed across Facebook, Instagram, Twitter and other social media services in the week before the rally, dropped 95 percent. #HoldTheLine and the term ‘March for Trump’ also fell more than 95 percent.” It helped that some of Trump’s enablers were also deplatformed.
How Did We Get Here?
For years, Trump used social media platforms to spread hate, disinformation and conspiracy theories. Facebook is a particularly bad actor in this mess. About a quarter of Trump’s 6,081 Facebook posts in 2020 contained extremist rhetoric or misinformation about the coronavirus, the election or his critics, according to an analysis by Media Matters for America. The research determined that Facebook largely failed to limit the reach of, or block, Trump’s propaganda, which was shared and liked more than 927 million times.
Like most major social media platforms, Facebook is an American company, and it chose to abandon its own policies to give Trump unfettered access to its platform. Its “newsworthiness” exception for political figures was created specifically in the lead-up to the 2016 election to allow violative hate material posted by Trump to stay up.
An early case was candidate Trump’s announcement of the Muslim Ban in a video posted in 2015. Many Facebook employees found the video to violate the company’s community standards, but its executives made a contorted decision to allow the video to stay up. Monika Bickert, Facebook’s vice president for policy, said the company kept the video up because executives interpreted Trump’s comment to mean that he was not speaking about all Muslims, but rather advocating for a policy position on immigration as part of a newsworthy political debate. Over time, this loophole for Trump became big enough to drive a truck through.
Facing a barrage of complaints about Trump’s violations, Facebook stuck to its guns. In 2019, Nick Clegg, then the newly hired head of global affairs and communications and a former British deputy prime minister, repeated Facebook’s position that politicians would not be held to account on the platform. Clegg claimed that, aside from speech that causes violence or real-world harm (which seemingly no longer included hate speech), Facebook would allow politicians to express themselves virtually unchecked on social media. Facebook’s network of independent fact-checkers, which had been established as a key part of the company’s response to disinformation, would not evaluate politicians’ claims, and the community guidelines would largely not apply to them. Facebook did not want to be an “arbiter of truth.”
One former executive, Yael Eisenstat, who worked to improve the political ads process, wrote in 2019 that the controversy over allowing lies in political advertising was “the biggest test of whether [Facebook] will ever truly put society and democracy ahead of profit and ideology.” Trump’s ads were notable for disparaging comments about his opponents, calling Senator Elizabeth Warren “Pocahontas” and House Speaker Nancy Pelosi “a liar and a fraud.” Ads about immigration used especially dark rhetoric and imagery, stoking fears of “caravan after caravan” of migrants coming to the U.S. or urging voters to vote yes or no on whether to “deport illegals.” His video ads featured Trump warning that Democrats were “openly encouraging millions of illegal aliens” to “destroy our nation.”
Facebook’s cover for Trump has had multiple negative effects. It has stymied the company’s efforts against disinformation and misleading news and allowed conspiracy theories to proliferate. The company even altered its news feed algorithm to neutralize false, but insistent, claims that it was biased against conservatives. That decision warped the platform fundamentally, pushing Facebook into more deferential behavior toward its growing number of right-leaning and extreme users and tilting the balance of news people see on the network.
It got worse in 2020 as Trump ramped up his election rhetoric. In late April, he tweeted a series of posts against the COVID lockdowns, reading “LIBERATE MINNESOTA,” along with similar messages about other states. This began a right-wing backlash that led extremists into the streets to protest the pandemic measures. For extremists in militias and white supremacist groups, Trump’s tweets were a license to riot. Just weeks later, after protests erupted in Minneapolis in the wake of George Floyd’s murder at the hands of police on May 25, 2020, Trump used his social media megaphone to post, “Any difficulty and we will assume control but, when the looting starts, the shooting starts.” This phrase was used by a racist Miami police chief in the 1960s and has been widely interpreted as a violent threat against protesters. Twitter quickly hid the post for glorifying violence, as it had done for many of Trump’s election-related lies by that point.
Facebook chose a different path, ignoring its rules that bar speech that inspires or incites violence. The company decided to allow Trump’s tweet, which was cross-posted to Facebook, to remain on the platform. Within days, it had been shared over 71,000 times and reacted to over 253,000 times. The message was also overlaid onto a photo shared on Trump’s Instagram account, which quickly received over half a million likes.
Why did Trump’s clearly violative post stay up? Facebook CEO Mark Zuckerberg, in a decision criticized by more than 5,000 of his employees, made the call against the advice of staff. And it reportedly came after a personal phone call from Trump. “I disagree strongly with how the President spoke about this, but I believe people should be able to see this for themselves, because ultimately accountability for those in positions of power can only happen when their speech is scrutinized out in the open,” was Zuckerberg’s explanation. Facebook also decided to leave up Trump posts that spread misinformation about voting by mail.
By the time Facebook’s own civil rights auditors issued their final report in July 2020, the damage of the Trump loopholes was clear. The auditors found that Facebook’s moderation of Trump’s use of the platform had undermined its broader civil rights efforts, singling out posts that lied about mail-in ballots or incited violence, all of which Facebook allowed to stand. “These decisions exposed a major hole in Facebook’s understanding and application of civil rights,” the auditors wrote. “While these decisions were made ultimately at the highest level, we believe civil rights expertise was not sought and applied to the degree it should have been and the resulting decisions were devastating. Our fear was (and continues to be) that these decisions establish terrible precedent for others to emulate.” It was later disclosed that Zuckerberg was involved in the decision to leave the posts up.
In 2020, Trump’s abuse of the platforms exploded. He used Facebook and other platforms to tout misleading information about coronavirus cures, election fraud and the motives of protesters, frequently and falsely targeting antifa as a cause of violence (in fact, most violence, up to and including murder, during the social justice protests was conducted by far-right extremists). His claims that the election was stolen, which spread like wildfire across Facebook’s “Stop the Steal” groups, came to an explosive conclusion in the January 6 insurrection.
Trump has lost much of his online presence, but YouTube has said that his channel will be restored when the “risk of violence passes,” and he may regain his Facebook account. On January 21, the company referred its decision to its new Oversight Board, which is expected to rule within 90 days.
Trump and Twitter
For many years, Twitter was just as lax as Facebook in letting Trump post whatever he wanted. The list of lies, conspiracies, threats and hate Trump put up on his Twitter account is long, but there is a difference. As Twitter began to enforce its policies against everyone starting in mid-2020, the company did not create loopholes for Trump, repeatedly labeling his posts about voting and the election as untrue. And when it suspended his account on January 8, the company made clear the suspension would be permanent.
But there was still considerable damage in terms of Trump spreading hate and misinformation starting right from when he was a candidate. In 2015, he tweeted out a false chart that claimed that 81 percent of white murder victims are killed by black people, a white supremacist talking point. The fake statistics were first posted by a neo-Nazi Twitter account. In November 2017, Trump retweeted three inflammatory and unverified anti-Muslim videos from Britain First, a racist group that was banned by the U.K. government. One of the videos purported to show an assault by a Muslim immigrant, but the assailant was neither Muslim nor an immigrant. Trump’s promoting inflammatory content from an extremist group was without precedent among modern American presidents. Trump’s sharing of the tweets was praised across far-right circles, increased anti-Muslim content on social media and elevated the profile of Britain First.
On July 2, 2017, Trump tweeted a video of himself attacking Vince McMahon during a WrestleMania event, but altered the video to place the CNN logo over McMahon’s face. News reporters rightly took Trump to task, including CNN’s Brian Stelter, who said Trump was “encouraging violence against reporters” and “involved in juvenile behavior far below the dignity of his office.” Trump subsequently said that CNN took the post too seriously, adding that CNN has “hurt themselves very badly.”
In August 2018, Trump tweeted that he had asked his secretary of state to “closely study the South African land and farm seizures and expropriations and the large scale killing of farmers,” another white supremacist talking point. South Africa’s Minister for International Relations and Cooperation rebuked Trump, saying he was expressing “right-wing ideology” and added that the South African government had requested an explanation from the U.S. embassy, which did not defend Trump’s tweet. There are no reliable figures that suggest that white farmers are at greater risk of being killed than the average South African.
In July and August 2019, Trump retweeted anti-Muslim British bigot Katie Hopkins. Among other things, Trump retweeted Hopkins’ attack on London mayor Sadiq Khan in which she blamed him for the city’s violent crime rate. Twitter permanently suspended Hopkins’ account in June 2020 for violating its “Hateful Conduct” policy. In 2020, the violence in Trump’s tweets became more overt. That May, Trump retweeted a video in which one of his supporters, Couy Griffin, a New Mexico county commissioner and founder of “Cowboys for Trump,” said, “The only good Democrat is a dead Democrat.” A day later, Trump tweeted the infamous “When the looting starts, the shooting starts,” which Twitter flagged for “glorifying violence.”
Twitter has now rid itself of the Trump problem, and most companies are rethinking political exceptions. But not Facebook. The company insists the use of incendiary populist language predates social media, so its spread is unrelated to Facebook. This position completely ignores how Facebook has manipulated the online space in favor of extremism and how political abuse of social media has altered the American political landscape.