In response to the armed Trump supporter who tried to breach the FBI building in Cincinnati, Ohio, Global Project Against Hate and Extremism co-founder and president Wendy Via issued the following statement.
“The attempted attack on the Cincinnati FBI office is unfortunately not shocking. It was only a matter of time before one of the people that Trump repeatedly lied to about the election and his own perceived mistreatment acted with violent intentions. Trump has relentlessly and knowingly been instilling fear and paranoia in his supporters with false and dangerous speech. He knows very well that something like this could happen, just as he knew January 6 would happen.”
image credit: Spencer Means, licensed under CC BY-SA 2.0.
Frances Haugen, a former data scientist at Facebook, is the latest employee of the tech giant to courageously come forward to share her experience. She provided documents that allowed the Wall Street Journal to publish multiple exposés on the company, she spoke with 60 Minutes, and she testified in front of Congress.
Importantly, she validated concerns that GPAHE and other groups have documented for years. Facebook has caused harm, up to and including genocide, around the globe. In Haugen’s own words, “Facebook, over and over again, has shown it chooses profit over safety.” Here are a few of our observations on her revelations that relate to hate and extremism.
Hate Speech
We know Facebook isn’t catching nearly all — or even most — hate-related content, something that has been documented for years. Still, we were surprised to hear Haugen say during her 60 Minutes interview with Scott Pelley that because Facebook picks metrics that are to its own benefit, “they can say [they] get 94 percent of hate speech and then their internal documents say [they] get 3 percent to 5 percent of hate speech.” In short, we knew it was bad, but it’s way worse than we thought. Facebook has long misrepresented its actual ability to remove hate speech from its platform.
Content Moderation
Haugen has confirmed that the spread of misinformation on Facebook, particularly in non-English-speaking countries, is rampant. Why? Because the tech giant isn’t investing in content moderation generally, and in many languages isn’t investing at all. There are dozens of countries and many, many languages where Facebook is simply AWOL. So for many users, the site is a free-for-all of hate and misinformation. It’s a guess, but if a business model is making you billions, why change it?
Special Treatment for Politicians and Influencers
Facebook recently announced that it no longer gives special privileges to the politically powerful. The Wall Street Journal called Bull$%&. After documents provided by Haugen, it’s now clear to the public that Facebook gave millions of celebrities, politicians, and other high-profile users special latitude to spread hate and disinformation.
We already knew that to be true. Facebook and other social media companies have helped the growth of far-right authoritarian movements around the world, causing harm and threatening democracies. We have documented numerous examples. We have also written about Facebook being used to orchestrate the Rohingya genocide in Myanmar, mass murders of Muslims in India, and riots and killings targeting Muslims in Sri Lanka. Let’s reiterate: Facebook has been used to orchestrate genocide and mass murders and has not made protecting its users a priority. The fact that Facebook continues to lie about these facts instead of doing something to change them is morally bankrupt.
We’re grateful Haugen has come forward to substantiate with Facebook’s own internal documents what civil society researchers have been documenting for years. Thanks to all the advocacy groups and other Facebook whistleblowers who came before Haugen, including Yael Eisenstat and Sophie Zhang. There is absolutely no denying that Facebook puts profits above democracy and safety. There’s no need to wonder any longer if Facebook profits from hateful and polarizing content. They do, and they do it intentionally, and they’ve done very little to stop it.
And, still we continue to hear lies from leadership at Facebook. On Monday, Neil Potts, Facebook’s vice president of trust and safety policy, told NPR’s All Things Considered that he “would categorically deny” that “polarization leads to engagement, which translates to money for Facebook.” Really, Mr. Potts?
There is a lot that must be done to fix this serious problem, some of it at the government level and some at the tech company level.
Right now, there are three easy things Facebook (and frankly all tech companies) could do to mitigate the harm they cause: (1) treat politically powerful people exactly the same as other users, (2) fact check political ads, and (3) invest in content moderation, especially in languages other than English, and in a way that protects all users from hate and disinformation.