By Wendy Via and Farhana Khera
Elon Musk’s imminent takeover of Twitter has sparked profound distress and apprehension among those who value safe spaces online. Musk’s response has been, mostly, to bully, harass and deliver a barrage of juvenile humor that flippantly disregards users’ legitimate concerns. The fear for many — particularly for those of us who have spent years advocating with Twitter to do better — is: Will Musk reverse all of the progress we made?
Especially with Musk’s announcement this week that he’d let Trump back on the platform, this is not an abstract discussion. Real-life harms to ordinary people, as well as the future of our democracy, are at stake. This is also not a partisan, “left” or “right” issue — it’s a matter of respect for basic human dignity and life.
Musk talks as if his promise of restraint-free speech on Twitter were something new. In fact, that was how Twitter operated for many years. It was long unique among social media platforms in refusing to remove hateful, abusive and harassing content. It operated relatively restraint-free, with limited exceptions for “direct, specific threats of violence against others” (a policy it narrowly interpreted to mean specific, imminent threats of harm against a specific individual that could actually be carried out by the person making the threat), child pornography, and other content that would violate U.S. law. Women, LGBTQ people, Black people, Latinos, Jews and Muslims — to name some of the groups frequently targeted with violent and hateful vitriol and threats — had no recourse.
White nationalists, neo-Nazis, and even public officials regularly stoked this hate and violence on the platform. According to a 2019 study published in Internet Research, after then-President Trump tweeted that four members of Congress who are women of color — Representatives Alexandria Ocasio-Cortez, Ilhan Omar, Ayanna Pressley and Rashida Tlaib — were “traitor[s]” and should “go back” to their countries, even though they are U.S. citizens, online threats targeting these women doubled, including threats of violence and sexual assault. “We cannot ignore the implications of this study because incivility online doesn’t just stay online. It has many consequences in the real world,” said Porismita Borah, lead author of the study.
In another disturbing study, researchers found that Trump’s anti-Muslim tweets after the start of his presidential campaign correlated with a 38% increase in anti-Muslim hate crimes. The study also found that his anti-Muslim tweets led to increased anti-Muslim Twitter activity by his followers.
Due to our advocacy and that of our colleagues, Twitter slowly began to address the unfettered onslaught of hate on its platform. While there is still work to be done and white supremacists remain on the platform, its leadership began to understand that online speech can inspire offline harms, including hate crimes, among other tragic consequences.
Twitter also began to understand that it couldn’t simply remove ISIS videos at the behest of the U.S. government but allow violent neo-Nazi and white nationalist videos to continue. Over time, Twitter developed and expanded its terms of service, putting in place more robust guidelines, including banning speech that dehumanizes people based on race, religion, ethnicity and national origin. It created a Trust & Safety Council, on which groups like ours participated, to seek continuing input from external stakeholders.
For example, after being alerted by the Global Project Against Hate and Extremism (GPAHE), Twitter removed about 8,000 handles that contained the n-word and similar slurs, including accounts such as @killn******14, which called for violence against African Americans. A transnational white supremacist network, Generation Identity, whose racist propaganda inspired several terrorist attacks, including the Pittsburgh synagogue shooting in 2018 and the Christchurch, New Zealand, mosque attacks in 2019, was rampant on the platform. In July 2020, after a GPAHE report, Twitter finally de-platformed the network. That same month, after lobbying by Change the Terms, a coalition of more than 60 civil society organizations, Twitter also finally banned David Duke, the former Klansman who helped organize the 2017 white supremacist rally in Charlottesville. And when a Muslim public official and a Muslim community activist were repeatedly harassed and physically threatened by two hate actors, Twitter acted again, banning the perpetrators from the platform.
To be clear, advocates didn’t always get the action we sought when we raised concerns with Twitter staff about the real-world harms of content on the platform, but we consistently had a responsive ear. Based on Musk’s public statements in recent days, however, he seems eager to dismantle hard-fought guidelines and progress, leaving communities vulnerable to attack. It’s particularly disappointing that this change in direction appears to have the support of Jack Dorsey, who made a commitment to address hate on the platform to a few of us who met with him in 2020.
Concerned Americans — public officials, users, advertisers and the broader public — should make their voices heard and urge Musk to maintain and improve upon the existing content moderation policies rather than roll them back, and to commit to refrain from reinstating individuals and groups whose accounts were suspended or banned for violating those policies. Only then will Twitter play a role in supporting democracy and helping to create a safer world, both on and offline.
Wendy Via is co-founder and president of the Global Project Against Hate and Extremism. Farhana Khera is the founding president of Muslim Advocates.