SPECIAL REPORT

Complicit

The Human Cost of Facebook’s Disregard
for Muslim Life

Executive Summary

In 2015, Facebook CEO Mark Zuckerberg vowed to make his social media platform a welcoming home for Muslims after members of the community faced a brutal backlash in the wake of horrific, violent attacks in Paris. One 17-year-old in France reported, “There was a flood of violent language on Facebook to kill Muslims.” Zuckerberg posted on Facebook, “After the Paris attacks and hate this week, I can only imagine the fear Muslims feel that they will be persecuted for the actions of others. As the leader of Facebook… we will fight to protect your rights and create a peaceful and safe environment for you.” Citing the lessons of his Jewish background, Zuckerberg wrote that he wanted to add his voice in support of Muslims “in our community and around the world.”

It’s hard now to imagine more hollow words being spoken. Since Zuckerberg made that commitment, Facebook has been used to orchestrate the Rohingya genocide in Myanmar, mass murders of Muslims in India, and riots and murders in Sri Lanka that targeted Muslims for death. Anti-Muslim hate groups and hate speech run rampant on Facebook with anti-Muslim posts, ads, private groups, and other content. Armed, anti-Muslim protests in the U.S. have been coordinated from Facebook event pages. The Christchurch, New Zealand, mosque massacres were live-streamed on the site and the videos shared by untold numbers worldwide.

This report details Facebook’s damaging impact on Muslim communities in nine countries and demonstrates how Facebook has willfully ignored the dangers posed by anti-Muslim content to the welfare of Muslims across the globe. Facebook’s own civil rights audit, completed in 2020 after a nearly two-year process that came only after sustained pressure from human rights groups, singled out anti-Muslim hate on the platform as a longstanding problem. The auditors pointed to “the organization of events designed to intimidate members of the Muslim community at gathering places, to the prevalence of content demonizing Islam and Muslims, and the use of Facebook Live during the Christchurch massacre,” all of which created an atmosphere where “Muslims feel under siege on Facebook.”

Despite this damning finding, Facebook has yet to address the anti-Muslim problems found by its own audit or those identified by advocates around the world in the preceding years. The company’s actions, or lack thereof, indicate that its decision to engage in the audit was not necessarily out of concern for lives lost because of its inaction, but rather for political expediency. Incredibly, the same day that the civil rights audit was announced in May 2018, the company also announced that one of the United States Senate’s most notorious anti-Muslim and anti-immigrant members, former Senator Jon Kyl, would conduct his own audit into anti-conservative bias on the platform. Kyl has a history of demonizing Muslims and working with anti-Muslim hate groups. This is illustrative of Facebook’s penchant for failing to simply do what’s right and what it has promised; instead, the company falsely equated the protection of its users’ safety, civil rights, and human rights with satisfying partisan political interests. Investigating human and civil rights abuses and global anti-Muslim hate content that leads to the loss of life should not have been presented as needing to be balanced with an investigation of alleged anti-conservative bias.

For years, civil rights organizations and policymakers have raised concerns both privately and publicly, begging the company to take action. It has not. Out of desperation to be heard, many organized boycotts, called for the board and company leadership to be replaced, and asked governments to step in. Still, Facebook’s strategy was to wait and do almost nothing.

Outside of the U.S., where vulnerable communities have fewer options for recourse, the picture is even bleaker. In one of the most horrifying cases, Facebook was cited by the U.N. for playing a “determining” role in the genocide perpetrated against the Muslim Rohingya community in Myanmar. And in India, the Delhi State Assembly’s Peace and Harmony Committee recently found that Facebook was complicit in the Delhi riots of February 2020, and should be investigated for every riot since 2014. Evidence shows that Facebook has at times seemingly collaborated with anti-Muslim regimes, such as the current ruling party in India, to protect hate speech by its leadership in contravention of its own anti-hate policies.

Facebook’s leadership has said repeatedly that the company’s policies against hate apply to everyone regardless of who they are or where they are, and yet the company continues to allow anti-Muslim material to stay on the platform, using a variety of excuses including “newsworthiness.” For example, Facebook has said that exceptions to its hate speech policy are sometimes made if content is “newsworthy, significant or important to the public interest.”

At the same time, the company contends that hate speech does not help its bottom line. If so, why has Facebook repeatedly chosen to leave up dangerous hate content, often generated by public figures with large audiences, such as President Donald Trump? Another case in point is India, where Facebook now has its largest user base and is investing in expansion and growth to dominate that very large market. The company is working closely with the current Indian government, in particular Prime Minister Narendra Modi, who himself allegedly stoked anti-Muslim violence when he was chief minister of the Indian state of Gujarat, violence that led to the deaths of 1,000 Muslims. There appears to be a clear financial incentive to pander to Modi, who has one of the largest followings of any political leader on Facebook, and his political party. It is, of course, hate speech by such figures that is the most dangerous because of their reach and influence.

Facebook is seeding and cultivating anti-Muslim bigotry amongst its users, leading to real-world violence against the 270 million Muslims living in the nine countries covered in this report. Facebook is indisputably the world’s engine for anti-Muslim violence. The time for discussion of this issue is over. Facebook must act now and end anti-Muslim hate on its platform, no matter who, or what entity, is proliferating it.

Recommendations

To bring an end to anti-Muslim hate on Facebook, we recommend the company immediately:

  • Rigorously, and without regard to political or economic implications, enforce Facebook’s community standards and Dangerous Individuals and Organizations designations worldwide to address anti-Muslim hate.

  • Ban the use of event pages for organizing harassment and violence targeting the Muslim community.

  • Create a senior staff working group responsible for the reduction of hate speech on the platform, and require regular updates through Facebook’s Transparency Report on the company’s progress in removing offending content, including anti-Muslim hate content.



    Introduction

    Facebook CEO Mark Zuckerberg’s 2015 promise to make his platform a welcoming home for Muslims has proven to be an illusion. In country after country, the Facebook platform has been used as a driver for anti-Muslim violence, the result of the company repeatedly and willfully ignoring the dangers of anti-Muslim content. None of this has happened without Facebook’s knowledge or without public outcry from its users, civil and human rights groups, and lawmakers. Facebook is simply looking the other way while its platform becomes the global engine of anti-Muslim bigotry.

    Many people may not be aware that Facebook’s actions have led to staggering levels of anti-Muslim violence in multiple countries, and that even when warned that material posted to the platform was likely to end in violence, the company repeatedly chose not to act. Unfortunately, decisions to ignore the company’s own community standards, which should protect Muslim communities, have instead been influenced by anti-Muslim ruling governments, militias, and political influencers, the very people who can reach and influence the largest audiences.

    In 2020, dehumanizing content about Muslims remains widespread on Facebook. A recent analysis showed that the United States and Australia “lead in the number of active Facebook pages and groups dedicated to referring this dehumanizing [anti-Muslim] content.”

    This content is significantly driven by the white supremacist hate speech and hate groups that thrive on Facebook and routinely demonize Muslims. In May 2020, the Tech Transparency Project (TTP) found more than 100 U.S. white supremacist groups active on Facebook, both on their own group pages and in auto-generated content. Among the many vulnerable communities these groups attack, Muslims were demonized through circulated footage of, and comments about, the Christchurch mosque shootings. In the wake of TTP’s report, Facebook did alter some of the auto-generated content and groups, but many remained on the platform.

    Anti-Muslim Hate On Facebook Is Well-Documented

    The problem with anti-Muslim hate on Facebook has been documented for some years. Two years ago, computer scientist Megan Squire analyzed right-wing extremist groups on Facebook and found an extraordinary overlap with anti-Muslim hate. Squire found that anti-Muslim attitudes are not only flourishing on the platform but also acting as a “common denominator” for a range of other extremist ideologies, including anti-immigrant groups, pro-Confederate groups, militant anti-government conspiracy theorists, and white nationalists.

    Among thousands of Facebook users who were members of multiple extremist Facebook groups, Squire found that 61 percent of “multi-issue” users who were in anti-immigrant groups had also joined anti-Muslim groups; the same was true for 44 percent of anti-government groups, 37 percent of white nationalist groups, and 35 percent of neo-Confederate groups. “Anti-Muslim groups are way worse, in every way, than what I would have guessed coming in,” Squire said at the time. “Some of the anti-Muslim groups are central players in the hate network as a whole. And the anti-Muslim groups show more membership crossover with other ideologies than I expected.”

    Civil rights organizations have repeatedly warned Facebook that anti-Muslim posts, ads, private groups, and other content are rampant on a global scale. As early as 2015, Muslim Advocates informed Facebook that its event pages were being used to organize activities by anti-Muslim militias and hate groups, including armed anti-Muslim protests in the U.S. The Southern Poverty Law Center also reached out to Facebook privately starting in 2014 to warn the company about hate groups on its platform, including dozens of anti-Muslim hate groups.



    The Identitarian Movement

    The modern Identitarian movement was launched in France in 2012 with the founding of Generation Identitaire, an offshoot of the white nationalist Bloc Identitaire. There are now GI chapters in several countries.

    Known for provocative anti-Muslim and anti-immigrant actions and a youthful, hipster style, the group’s first major public act was to occupy the largest mosque in Poitiers to denounce “the Islamization of France.” Also in 2012, the French branch of GI posted a slickly produced video on YouTube titled “A Declaration of War from The Generation of National Identity.” Featuring a series of young white people decrying multiculturalism and democracy, the video made clear the group’s racist and anti-immigrant views.

    “We are the generation of ethnic fracture, total failure of coexistence and forced mixing of the races… our heritage is our land, our blood, our identity,” read the video’s subtitles. It ends with “don’t think this is a manifesto, this is a declaration of war.” Across its many postings, the video has garnered tens of thousands of views worldwide.

    Identitarians are best known for their anti-Muslim and anti-immigrant publicity stunts, similar to the Poitiers mosque occupation. A classic example was staged by Italian members of GI who, in 2016, scaled a monument to the Italian opera composer Donizetti in Bergamo, stuck a poster on it with the inscription “Islam in Europe—2050,” then covered the statue with a niqab. That same year in Austria, adherents of GI’s Austrian branch covered a statue of the Habsburg Empress Maria Theresa with a giant burqa.

    The movement is strongly opposed to Muslims and Islam and argues for a new Reconquest of Europe, a reference to the Reconquista, the 700-year period in which Christians violently expelled Muslims from the Iberian Peninsula. Followers complain of an “Islamization” of Europe through mass immigration, which they view as a threat to European culture and society. Among other things, Identitarians in France have called for a freeze on legal immigration, the reestablishment of borders, a halt to the construction of new mosques, and the outlawing of Islamist organizations. Identitarians also accuse their governments of importing terror by allowing Muslims to immigrate.

    Identitarian thinking has metastasized across broad swaths of the West, becoming a worldwide movement that advances the idea that a global elite is conspiring to cause white people to be wiped out in their historic homelands, and that immigrants and refugees need to be repatriated to their countries of origin to stop the so-called Great Replacement. Pierre Vial, an intellectual prized by many Identitarians who runs an Identitarian-like cultural group called Land and People, says the Great Replacement is a “racial invasion of Europe” leading to “all the conditions of a racial war, which will be the war of the XXI century. A war that is under way.” He adds this will be a war “that we will wage in the name of a very simple reminder addressed at the invaders: the suitcase or the coffin.” A key German activist, Markus Willinger, has crudely said, “We don’t want Mehmed and Mustapha to become Europeans.”

    In this context, violence shouldn’t be a surprise. Not only has there been terrorism connected to these ideas, but hate crimes against mosques and Muslims in European countries have also risen in recent years. As researchers at Hope Not Hate wrote in 2019, “Despite often preaching non-violence, there is nothing stopping Identitarianism’s followers believing violence is the only feasible response to this alarmist rhetoric.”

    The Problem Is International In Scope

    Outside of the U.S., the picture is even bleaker. Tragically, there are many other cases where the hatred and poison against Muslims originating on Facebook has turned lethal, leading to riots, deaths, and mass killings. Evidence shows that Facebook has at times seemingly collaborated with anti-Muslim regimes, such as the current ruling party of India, to protect hate speech by its leadership in contravention of its own anti-hate policies.

    Facebook was cited by the U.N. for playing a “determining” role in the genocide perpetrated against the Muslim Rohingya community in Myanmar that began in 2016. In September 2018, Facebook COO Sheryl Sandberg admitted in a Senate committee hearing that the company had failed the Rohingya, calling the situation “devastating” and even admitting that Facebook may have a legal obligation to remove accounts that lead to mass violence. Yet, anti-Muslim material remained rife on the platform. One year later, in March 2019, the Christchurch, New Zealand, mosque massacres were broadcast on Facebook Live and shared countless times worldwide. In September 2020, the Delhi State Assembly’s Peace and Harmony Committee found that Facebook was complicit in the Delhi riots of February 2020, and should be investigated for every riot since 2014.

    In the wake of recent disclosures that Facebook allowed anti-Muslim hate speech to run rampant in India, its own employees have spoken out about this long-term and repeated failure to deal with anti-Muslim hate content. In an August 2020 open letter addressed to the company’s leadership, 11 employees demanded that the platform denounce “anti-Muslim bigotry” and ensure Facebook’s policies are applied equally across the platform. “It is hard not to feel frustrated and saddened by the incidents reported. We know we’re not alone in this. Employees across the company are expressing similar sentiment,” the letter read. “The Muslim community at Facebook would like to hear from Facebook leadership on our asks.”

    Facebook Grows The White Supremacist And Anti-Muslim Movement

    By ignoring this issue, Facebook for years allowed the most dangerous white supremacist propaganda to fester and grow, recruiting untold numbers of people into the ranks of a movement that has inspired violence and genocide targeting Muslims, immigrants, and Jews. The Christchurch shooter had been radicalized into Identitarianism, an ideology spread on multiple Facebook accounts, and was an adherent of the racist Great Replacement theory – an international white supremacist conspiracy movement that promotes the idea that white people are slowly experiencing a genocide in their own home countries due to a plot by elites to displace white people with rising numbers of non-white immigrants. The Great Replacement theory argues that immigrants, especially Muslims, are destroying Western countries and turning them into foreign places. These thinkers often argue disparagingly that Europe is becoming “Eurabia.”

    The New Zealand shooter was clear about his motive. In his manifesto, Brenton Tarrant wrote that he wanted to stop the Great Replacement, and he targeted Muslims, including their children, for extermination. As Peter Lentini has written, “Tarrant’s solution to the crisis [posed by a Muslim “invasion”] – indeed one on which he felt compelled to enact – was to annihilate his enemies (read Muslim immigrants). This included targeting non-combatants. In one point, he indicates that [immigrants] constitute a much greater threat to the future of Western societies than terrorists and combatants. Thus, he argues that it is also necessary to kill children to ensure that the enemy line will not continue.”

    Generation Identity (GI), the sprawling, international organization that pushes the Great Replacement idea, was rampant on Facebook for years. In June 2018, Facebook finally took action against GI, but only after several members of the Austrian chapter were investigated for potentially running a criminal organization (the investigation ended without charges). Facebook deplatformed the entire network, citing violations of policy. But this was too little, too late. Before being blocked, more than 120,000 people followed Generation Identity on Facebook. Facebook is complicit in the worldwide transmission of Identitarian thinking, which is anti-Muslim at its heart. Since October 2018, there have been at least six mass attacks motivated by Great Replacement ideology.

    In addition to Christchurch, attacks were staged at two American synagogues, an El Paso Walmart, a synagogue in Halle, Germany, and two shisha bars in Hanau, Germany. Two of these mass shootings specifically targeted Muslims, in Hanau and Christchurch, but all were directed against immigrants or Jews who were seen as abetting non-white immigration. Anders Breivik, who in 2011 murdered dozens of young people in Norway because he believed they would grow up to become adult supporters of Muslim immigrants to the country, actively promoted his ideas on Facebook before the attack.

    Muslim Users Of The Platform

    By refusing to take anti-Muslim material on its platform seriously, Facebook is in effect poisoning its Muslim users, and its users in general. The nine countries covered in this report have a total of 566 million Facebook users, about 20 percent of the company’s users worldwide. Furthermore, the top three markets for Facebook users are India, the United States, and Indonesia, which is a Muslim-majority country. India is home to an astonishing 200 million Muslims, and Indonesia, the country with the largest Muslim population in the world, to 225 million. Of Facebook’s top 20 markets by country, six are Muslim-majority countries: Indonesia, Egypt, Bangladesh, Pakistan, Turkey, and Nigeria. Two of Facebook’s top 20 markets—India and Myanmar—have been the site of massacres of Muslims orchestrated on Facebook.

    Facebook’s ongoing refusal to enforce its policies and protect Muslims is occurring as Facebook usage rapidly grows in Muslim-majority markets. In 2019, according to data compiled by the University of Oregon, more than seven out of 10 citizens in seven Muslim-majority countries use Facebook and WhatsApp: Egypt, Jordan, Lebanon, Qatar, Saudi Arabia (KSA), Tunisia, and the United Arab Emirates. And the use of these networks far outpaced the use of other social channels. As one example, Facebook had 187 million active users in the Middle East, a region made up mostly of Muslim-majority countries, in 2019. The University of Oregon’s data also showed that half of young people say they get their daily news on Facebook instead of from newspapers, TV, or even online news portals.

    All of these people, Muslim and non-Muslim, are being assaulted by the anti-Muslim, bigoted content that Facebook refuses to address. Facebook must act now and end anti-Muslim hate on the platform, no matter who, or what entity, is proliferating the hate. The only reason to allow dangerous hate content from anyone to stay up, especially from political figures with reach and influence, is that it’s profitable.

    Country Summaries: Anti-Muslim Organizing and Violence on Facebook

    Unfortunately, there is little public awareness regarding Facebook’s role as a chief driver of anti-Muslim hate and violence throughout the world. Even more disturbing, the company chose repeatedly over years not to act in the face of overwhelming evidence that material posted to Facebook was likely to end in violence. The following summary describes Facebook’s role in the deaths of thousands of Muslims in multiple countries. These examples not only illustrate Facebook’s complicity but also demonstrate how profoundly hate content can impact people’s lives and cause violence offline.

    What’s documented here is illustrative of the problem with anti-Muslim hate on the platform, but it likely represents just the tip of the iceberg and is by no means exhaustive, as information about Facebook’s role in dozens of countries and anti-Muslim incidents remains patchy, unexamined, or nonexistent. Given the widespread nature of anti-Muslim hate on the platform, it is very likely that millions of Muslims around the world have been negatively impacted by Facebook’s failure to act on anti-Muslim hate speech, jeopardizing their safety and their freedom.

    Facebook, and all of us, cannot forget that, every day, Muslims across the world are the targets of bigotry and hate crimes while simply going about their lives, and that the anti-Muslim rhetoric and organizing that thrive online inspire and enable this abuse. These country reports are tragic, cautionary tales of how easily everyday hate speech turns into violence and murder, and of how easily Facebook’s platform can be used to inspire and organize mass violence and genocide.

    China

    Facebook does not even operate in China, but the Chinese government still uses the platform to amplify anti-Muslim and specifically anti-Uighur sentiment. More than a million Muslim Uighurs have been imprisoned and brutalized by the Chinese government in concentration camps across the Xinjiang region. The Uighurs and other Muslim ethnic minorities have been separated from their families, forced to labor for their captors, beaten, tortured, and raped.

    In 2019, there were reports that “Chinese state-owned media is running ads on Facebook seemingly designed to cast doubt on human rights violations” against the Uighurs. There were three ads — two active and one inactive — in Facebook’s ad library touting the alleged successes of happy detainees in the camps and falsely claiming that the detention centers do not interfere with religious beliefs and practices. Two of the ads were targeted at audiences in the U.S. and other countries. The paid ads aimed to convince Westerners that the camps in Xinjiang are not sites of human rights abuses, contrary to the findings of several governments, human rights organizations, experts on China, the U.N., and other international bodies.

    On Facebook, the state-controlled tabloid Global Times posted a sponsored video titled, “Xinjiang center trainees graduate with hope for future.” It purports to show former detainees baking bread, as an example of the “vocational skills” Uighurs supposedly learn in the camps. Additional ads were posted to Twitter.

    After the ads were reported to Twitter, the company removed them immediately. Given the same information, Facebook, in stark contrast, decided to keep accepting such ads and said it would take a “close look at ads that have been raised to us to determine if they violate our policies.” Instead of simply refusing paid ads from Chinese state-controlled media, Facebook chose to passively rely on outside experts to flag problematic posts, which it may or may not then remove, at a pace that may or may not be quick enough to avert harm. In effect, Facebook was enabling China to use the platform to cover up widespread human rights abuses and violence in Xinjiang.

    Germany

    Sparked by online rumors that a man was killed defending a woman from rape by a Muslim refugee, riots targeting Muslim refugees and immigrants broke out in August of 2018, starting in the state of Saxony. The Facebook account of the municipal political party, Pro-Chemnitz, pushed the misinformation and organized the protest that ended in mob violence. In calling for the protest, it claimed the victim in the rumored stabbing was “a brave helper who lost his life trying to protect a woman.”

    Chancellor Angela Merkel suggested the violence was a threat to Germany’s post-war constitution, saying, “We have video footage of the fact that there was [hunting people down], there were riots, there was hatred on the streets, and that has nothing to do with our constitutional state.”

    In another incident, a teenaged Syrian refugee, Anas Modamani, took a selfie with Chancellor Angela Merkel that he posted on Facebook. After terrorist attacks in Brussels and Berlin in 2016, Modamani’s selfie began appearing on Facebook, this time doctored to falsely label him as one of the perpetrators of the attacks. Fearing he would be recognized, Modamani became afraid to leave his home. Chan-jo Jun, a lawyer, brought a landmark lawsuit against Facebook on Modamani’s behalf over the online smears. The litigation was unsuccessful, but it helped lead Germany to pass one of the world’s most aggressive laws targeting online hate speech, the so-called NetzDG law.

    But the damage was already done. The rumors helped support the growth of the anti-Muslim Alternative for Germany (AfD) party, which would eventually be elected to seats in several German state parliaments. The AfD now has over 500,000 followers on Facebook. Research in 2018 by academics at the University of Warwick showed that thousands of hate-filled Facebook posts were linked to an increase in racially motivated attacks on refugees in Germany, who are predominantly Muslim. The research specifically cited material from the AfD as fueling this awful trend. Now, anti-Muslim and anti-refugee sentiment is widespread among sectors of the German population. In 2019, there were more than 800 attacks on Muslims in Germany.

    Hungary

    Hungary has one of the most anti-Muslim governments in Europe. During the European refugee crisis, which began in 2015, Prime Minister Viktor Orbán and his government refused to allow refugees fleeing the Middle East into the country, and Orbán specifically said in 2018 that the term “refugees” is a misnomer and that those coming to Europe were “Muslim invaders.” For Orbán, Muslims cannot be a part of Europe and should be kept out of it to “keep Europe Christian.” Orbán’s Facebook page, where he pushes his views, has more than a million followers.

    There is little information on Facebook usage to spread hate in Hungary, but one incident is instructive. In March 2018, Facebook reversed a decision to remove an anti-immigrant video targeting Muslims posted by János Lázár, then chief of staff to Hungarian Prime Minister Viktor Orbán (Lázár was called a racist by the U.N. human rights chief in 2018). The video featured Lázár saying, “If we let them in and they are going to live in our towns, the result will be crime, poverty, dirt, and impossible conditions in our cities.” Lázár accused Facebook of censorship after the social network removed his post. Facebook, in reposting the racist video, said that it was making an exception to its ban on hate speech: “Exceptions are sometimes made if content is newsworthy, significant or important to the public interest,” Facebook said.

    The anti-refugee and anti-Muslim sentiment is so severe in Hungary that even those who work at organizations devoted to helping these populations have been giving up. Hungary’s Muslim population has faced beatings, vigilante attacks, and abuse.

    India

    The situation regarding the anti-Muslim bias of Facebook India is so dire that the company’s senior executives were summoned before a parliamentary committee for a closed-door hearing on September 2, 2020. The committee hearing followed allegations that the company’s top policy official in India, Ankhi Das, had prevented the removal of hate speech and anti-Muslim posts by ruling Bharatiya Janata Party (BJP) politicians in order to protect and promote the Hindu nationalist party and its Prime Minister Narendra Modi, who have advanced anti-Muslim policies.

    Ties between the company and the BJP are deep. Both Zuckerberg and Sandberg have met personally with Modi, who is the most popular world leader on Facebook. Before Modi became prime minister, Zuckerberg even introduced his parents to him, a strange choice considering Modi’s horrific track record of stoking violence against Muslims. In February 2002, while head of the Gujarat government, Modi allegedly encouraged massive anti-Muslim riots. As the state was overcome with violence and over a thousand Muslims were murdered, leaders of the BJP and its even more nationalist ally, the Vishwa Hindu Parishad, gave speeches provoking Hindus to teach Muslims a lesson. Modi himself gave an incendiary speech, mocking riot victims and calling relief camps for Muslims “child-producing factories.” The intensity and brutality of the violence unleashed against Muslims in 2002 led the Supreme Court of India to describe the Modi government in Gujarat as “modern day Neros who looked the other way while young women and children were burnt alive.”

    Das’ failure to carry out her responsibilities in an objective manner has manifested itself in many situations. When T. Raja Singh, another member of the BJP, called for the slaughter of Rohingya Muslim refugees, threatened to demolish mosques, and labeled Indian Muslim citizens as traitors, Facebook’s online security staff determined that his account should be banned not only for violating its community standards but also for falling under the category of “Dangerous Individuals and Organizations.” Das stepped in to protect Singh from punitive action because “punishing violations by politicians from Mr. Modi’s party would damage the company’s business prospects in the country,” according to Facebook employees. Outrage in response to these disclosures forced Facebook to finally ban Singh from the platform in early September 2020. Separately, lynchings spurred by bigoted and propagandistic WhatsApp posts led the company to consider banning mass messaging on the system.

    India is Facebook’s largest and most lucrative market, with nearly 350 million users and another 400 million on WhatsApp. The BJP, which has more than 16 million followers on its page, has been Facebook India’s biggest advertising spender in recent months. Facebook has multiple commercial ties with the Indian government, including partnerships with the Ministry of Tribal Affairs, the Ministry of Women, and the Board of Education.

    There are many more connections between anti-Muslim content on Facebook and violence in India. In May 2020, a BJP member of parliament in West Bengal, Arjun Singh, posted an image on Facebook that he wrongly claimed was a depiction of a Hindu who had been brutalized by Muslim mobs. It was captioned: “How long will the blood of Hindus flow on in Bengal…we will not stay quiet if they [Muslims] attack ordinary people.” Four hours later, an angry mob of about 100 Hindus descended on a town in West Bengal and a Muslim shrine was vandalized. Facebook failed to remove the posts until after the company experienced backlash as a result of the violent attacks, which local Muslims alleged had been incited by Singh’s post. Overall, dozens of Muslims have been lynched since 2012 by vigilantes, with many of the incidents triggered by fake news regarding cow slaughter or smuggling shared on WhatsApp.

    New Delhi Riots

    Facebook appeared to play a pivotal role in the February 2020 New Delhi riots, in which more than 50 people died and thousands of homes and several mosques were destroyed. While both Hindus and Muslims were affected in the riots, Muslims were targeted in far greater numbers by mobs of young men, many of whom had traveled into the city to harass Muslims after seeing fake news, shared widely on Facebook, that Muslim religious leaders were calling for Hindus to be kicked out of Delhi. One post by a BJP member who is also a member of the right-wing militant Hindu organization Bajrang Dal prompted hundreds to comment that they and their Hindu “brothers” would join the fight to defend Delhi from the Muslims. And two days before the anti-Muslim riots began in Delhi, a member of Modi’s cabinet said Muslims should have been sent out of India to Pakistan in 1947 during the partition of India. Ultimately, the Delhi State Assembly’s Peace and Harmony Committee said it had prima facie found Facebook guilty of aggravating the Delhi riots, and posited that it should be investigated for every riot since 2014.

    During the riots, Facebook was also used by members of the mobs to glorify their violence. In early February, a Bajrang Dal activist posted a video claiming to have “killed a Mulle [derogatory term for Muslims]” and the next day wrote in a public Facebook post that he had just sent a “jihadi to heaven.” It took about three days for his Facebook account to be deactivated. In the wake of the violence, hundreds of Muslim families fled New Delhi.

    Human Rights Advocates Cry Out

    Human rights organizations, including the Indian American Muslim Council, South Asians Building Accountability & Healing, and the Coalition to Stop Genocide in India (made up of dozens of organizations in the U.S. and other countries) assert that Facebook simply refuses to remove anti-Muslim hate content in India and have requested an investigation by the United States Congress. In September of 2020, a letter signed by 41 civil rights organizations from around the world called on Facebook to put an end to anti-Muslim hate on its platform and immediately suspend Das, among other requests, to protect the safety and security of Muslims.

    Facebook’s anti-Muslim actions in India have been repeatedly called out by civil society actors. In October 2019, a report by the nonprofit organization Avaaz accused Facebook of having become a “megaphone for hate” against Muslims in the northeastern Indian state of Assam — where nearly two million people, many of them Muslims, have been stripped of their citizenship by the BJP government. Another report, by the South Asian human rights group Equality Labs, found “Islamophobic [anti-Muslim] content was the biggest source of hate speech on Facebook in India, accounting for 37 percent of the content,” and that 93 percent of the hate speech they reported to Facebook was not removed. They also reported on how Facebook is being used to spread hate speech and misinformation accusing Muslims of deliberately infecting non-Muslims and Hindus with COVID-19, again contributing to potential violence against Muslims. Meanwhile, as the Modi government was stripping Muslims of their rights, Facebook was taking WhatsApp accounts away from Muslims in Kashmir.

    The government had suspended Internet access in the region to prevent communication, and Facebook’s policy automatically discontinues WhatsApp accounts after 120 days of inactivity. As a result, the government prevented Muslims in the region from organizing, and Facebook contributed by further reducing their opportunities to communicate.

    In late August, a group of 54 former Indian bureaucrats wrote to Zuckerberg asking the company to perform an audit of how Facebook’s hate speech policy is applied and to do so without Das’ involvement. The letter pointed to the financial aspects of Facebook’s situation in India causing a conflict of interest. “That this (not censoring hate speech by members of the BJP) seems to have been done to protect Facebook’s commercial interests is even more reprehensible… We note that such behavior on Facebook’s part has become a subject of debate in other countries as well. Commercial interests at the cost of human lives? If these are the crass calculations Facebook indulges in, it is no surprise that the calculus of hate is spreading like a virus in many parts of the world,” the letter read.

    Many, however, question the utility of continuing to urge Facebook to address hate on the platform driven by the BJP and other Hindu nationalist organizations in India. Malay Tewari, a Kolkata-based activist, argued Facebook “rarely” responded to his complaints about BJP-linked posts and “quite strangely, Facebook posts which expose the propaganda or hate campaign of the BJP, which do not violate community standards, are often removed.” Indian journalist Rana Ayyub agreed, saying, “For years now, verified Facebook pages of BJP leaders such as Kapil Mishra have routinely published hate speeches against Muslims and dissenting voices. The hate then translates into deadly violence, such as the February anti-Muslim attacks in Delhi that left many people dead in some of the worst communal violence India’s capital has seen in decades… It’s clear that Facebook has no intention of holding hatemongers accountable and that the safety of users is not a priority.”

    In late August, it was reported that Facebook, in an effort to evaluate its role in spreading hate speech and incitements to violence, had commissioned an independent report from the U.S. law firm Foley Hoag LLP on the platform’s impact on human rights in India. Work on the India audit, previously unknown, began before political bias in Facebook’s India operations was documented by journalists in August and September of 2020. Additionally, the new Facebook Oversight Board has indicated that it may step into the India situation. The board has said that it has the authority to decide “how Facebook treats posts from public figures that may violate community standards,” including standards against hate speech, and that it “won’t shy away from the tough cases and holding Facebook accountable.” One can only hope that the Oversight Board will honor this commitment.

    Myanmar

    Beginning in 2012, activists, businessmen, and tech experts in Myanmar began warning Facebook that members of the military and ultra-nationalist Buddhists were directing hate speech and violence against the Rohingya, long a targeted and vulnerable Muslim community in that country. In a country of more than 53 million people, only four percent are Muslim Rohingya.

    By 2013, Facebook was ubiquitous in the country, serving basically as the Internet. The state-run newspaper said in 2013 that in Myanmar, “a person without a Facebook identity is like a person without a home address” (by 2020, nearly 90 percent of the population uses Facebook).

    Aela Callan, a foreign correspondent on a fellowship from Stanford University, met Facebook’s then-vice president of global communications to discuss hate speech and fake user pages that were pervasive in Myanmar in 2013. Callan visited Facebook’s California headquarters again in March 2014 with a staffer from a Myanmar tech organization to raise these issues with the company. Callan wanted to show Facebook “how serious it [hate speech and disinformation] was.” Her pleas were rebuffed. “It was seen as a connectivity opportunity rather than a big pressing problem,” Callan said. “I think they were more excited about the connectivity opportunity because so many people were using it, rather than the core issues.” Callan said hate speech seemed like a “low priority” at the time.

    Vicious, false rumors that a Muslim Mandalay teashop owner raped a Buddhist employee started spreading across Facebook in 2014. Soon, armed men were marauding through the streets of Mandalay on motorbikes and on foot, wielding machetes and sticks. Rioters torched cars and ransacked shops. During the multi-day melee, two men—one Muslim and one Buddhist—were killed and 20 others were injured. Similar violence would follow, spurred by hate-driven rumors on Facebook.

    That violence was serious, but the use of Facebook to coordinate the Rohingya genocide was yet to come. Facebook ignored repeated warnings that the military and Buddhist militants were using the platform to spread anti-Rohingya hate and propaganda. By the end of 2017, about 700,000 Rohingya had fled the country after Myanmar’s military launched operations against what it called “insurgents” in the state of Rakhine. Though the Rohingya have been in Myanmar for generations, the government denied citizenship to most Rohingya, arguing that they are illegal immigrants from neighboring Bangladesh. A U.N. fact-finding mission in 2018 reported that, “People died from gunshot wounds, often due to indiscriminate shooting at fleeing villagers. Some were burned alive in their homes – often the elderly, disabled and young children. Others were hacked to death.”

    Facebook Does Too Little, Much Too Late

    It took until August 2018 – a year after 25,000 Rohingya were killed and 700,000 fled Myanmar – for Facebook to ban Tatmadaw (military) leaders from its platform (only for them to return in June of 2020). Facing growing public pressure, the company also commissioned and published an independent Human Rights Impact Assessment on the role its services were playing in the country and committed to hiring 100 native Burmese speakers as content moderators. Prior to deplatforming the Tatmadaw, Facebook banned four armed ethnic groups in the country, but not the Tatmadaw, presumably because it is a state actor. Facebook did not respond to repeated requests for comment on why some ethnic groups were banned while a state actor engaged in ethnic violence was not.

    This, even though the 2018 U.N. fact-finding report had noted that “actions of the Tatmadaw in both Kachin and Shan States since 2011 amount to war crimes and crimes against humanity.” Facebook has yet to impose across-the-board bans on military-run accounts of the kind applied to the four minority rebel groups. Instead, it seems to be taking a deeply statist approach toward these groups, thereby helping an army that stands accused of genocide.

    A U.N. fact-finding mission was shocked by Facebook’s lack of action and found the company played a “determining role” in stirring up hatred against Rohingya Muslims in Myanmar. The chairman of the U.N. mission, Marzuki Darusman, said that social media had “substantively contributed to the level of acrimony amongst the wider public against Rohingya Muslims… Hate speech is certainly, of course, a part of that.” Yanghee Lee, Special Rapporteur for human rights violations in Myanmar, said, “I’m afraid that Facebook has now turned into a beast, and not what it originally intended.”

    Facebook leadership has admitted its complicity in these events. Zuckerberg said in 2018 that Facebook needed to improve in Myanmar, though that admission came much too late for critics who said he failed to adequately take responsibility for what had been a long-term issue. The company announced in July of that year that it would expand its efforts to remove material worldwide that could incite violence. In a surprising concession before the U.S. Senate Intelligence Committee in September 2018, Sandberg called the events in Myanmar “devastating” and acknowledged the company had to do more, highlighting that Facebook had put increased resources behind being able to review content in Burmese. She also accepted that Facebook had a moral and legal obligation to take down accounts that incite violence in countries like Myanmar (even so, two years later, Facebook continues to leave these kinds of accounts up, including in countries like India where violence against Muslims has been serious).

    Failure To Cooperate With The International Court Of Justice

    Given this catastrophic failure, one would think Facebook leadership would want to do absolutely everything in their power to support the current genocide case against Myanmar in the International Court of Justice (ICJ). However, that hasn’t been the case. In August 2020, Facebook balked at providing the ICJ with posts made by Myanmar’s military and other leadership in the genocide case against the regime being pursued by The Gambia and the Organisation of Islamic Cooperation. The head of a U.N. investigative body on Myanmar said Facebook had not shared evidence of “serious international crimes,” despite vowing to work with investigators looking into abuses in the country. Nicholas Koumjian, head of the Independent Investigative Mechanism for Myanmar (IIMM), stated that the social media giant was holding material “highly relevant and probative of serious international crimes” but had not shared any of it during year-long talks. After an uproar in the press in August 2020, Facebook finally turned over some documents to the court that “partially complied” with the request. By not cooperating with the Gambian legal team and creating a roadblock in the ongoing case, Facebook is not being a “force for good in Myanmar,” as it has repeatedly promised. It is failing to aid an important international effort to establish accountability in the country.

    In anticipation of the upcoming November Myanmar elections, Facebook released information on the small changes it has made, committing to remove “verifiable misinformation and unverifiable rumors” that are assessed as having the potential to suppress the vote or damage the “integrity” of the electoral process between September and November 22, 2020. It also introduced a new feature that limits to five the number of times a message can be forwarded on WhatsApp. Additionally, Facebook claims to now have three fact-checking partners in Myanmar and is working with two partners to verify the official Facebook pages of political parties, all of which will purportedly help detect hate speech that could lead to violence.

    New Zealand

    The last words of Haji-Daoud Nabi before he and 50 fellow worshipers were gunned down in Christchurch, New Zealand, were the welcoming phrase, “Hello, brother.” On March 15, 2019, in an attack staged for Facebook Live, Brenton Tarrant slaughtered worshipers at two different mosques in one of the deadliest white supremacist attacks in recent history. The 17-minute Facebook live-stream broadcast went viral; in the 24 hours following the attack, attempts were made to re-upload the footage 1.5 million times while Facebook scrambled to stop its spread. The video of the attack was cross-posted across various social networks, and links to the live-stream and Tarrant’s manifesto were posted on the unregulated message board 8chan.

    The New Zealand shooter was clear about his anti-Muslim motive. In his manifesto, Tarrant, who was in touch with members of the anti-Muslim, white supremacist and transnational Generation Identity movement, wrote that he wanted to stop the Great Replacement, and he targeted Muslims, including their children, for extermination. As Peter Lentini has written, “Tarrant’s solution to the crisis [posed by a Muslim “invasion”] – indeed one on which he felt compelled to enact – was to annihilate his enemies (read Muslim immigrants). This included targeting non-combatants. In one point, he indicates that [immigrants] constitute a much greater threat to the future of Western societies than terrorists and combatants. Thus, he argues that it is also necessary to kill children to ensure that the enemy line will not continue.”

    Facebook’s Non-Response

    The abuse of Facebook Live was foreseeable. Facebook had already been criticized for the use of Facebook Live in broadcasting suicides and violent attacks, and yet the company took little action. In 2017, Facebook came under harsh criticism after a raft of suicides were live-streamed, including one of a 12-year-old girl in Georgia that was left live on the site for two weeks before being removed. The company responded then that it was adding more resources to monitoring, but offered the excuse that the sheer volume of content broadcast live on the platform made it impossible to monitor it all. Shockingly, after the Christchurch massacre was broadcast on its own platform, it took the company’s senior leadership more than 10 days to speak publicly about the tragedy, and that only happened after a tremendous amount of public criticism. Facebook said it would explore restrictions on live-streaming from the platform. In response to a tragedy of this scale, one would expect significant changes to Facebook Live and restrictions that would prevent abuses like this from ever happening again.

    But when change came, it was minor, and announced six weeks after the attacks. In late May of 2019, Facebook announced only a small and relatively insignificant change: anyone breaking certain rules while broadcasting on Facebook Live would be temporarily barred from using the service, with the possibility of a 30-day ban on a first offense. (Previously, the company did not typically bar users until they had broken these rules multiple times.) Multiple offenders, or people who posted particularly egregious content, could be permanently barred from Facebook.

    Inspiration For The Attack Leads Back To Facebook

    Tarrant may have first been exposed to the ideology that propelled his attack on Facebook, where it circulated widely for many years. The shooter had been radicalized into the white supremacist Identitarian movement and was an adherent of the racist Great Replacement theory – an international white supremacist conspiracy that promotes the idea that white people are slowly experiencing a genocide in their own home countries due to a plot by elites to displace them with rising numbers of non-white immigrants. This racist thinking is most prominently recognized in the form of the sprawling, multinational organization Generation Identity, to which Tarrant donated. For years, GI material flourished on Facebook and garnered many thousands of followers until the movement was deplatformed in 2018. Even after the deplatforming, the Institute for Strategic Dialogue identified some 11,000 members of Facebook groups devoted to the Great Replacement idea.

    Sri Lanka

    Hate speech and rumors targeting Muslims contributed to an outbreak of anti-Muslim violence in Sri Lanka in 2018. One viral video on Facebook falsely depicted a Muslim restaurateur seemingly admitting to mixing “sterilization pills” into the food of Sinhala-Buddhist men; other heinous material included a post advocating to “kill all Muslims, do not spare even an infant, they are dogs.” At least three people were killed and 20 injured in the 2018 unrest, during which mosques and Muslim businesses were burned, mainly in the central part of the Buddhist-majority nation. Facebook left most of the incendiary material up for days, even after it had been reported. Ultimately, the Sri Lankan government had to shut Facebook down to stem the violence. Sri Lankan officials determined that mobs used Facebook to coordinate attacks, and that the platform had “only two resource persons” to review content in Sinhala, the language of Sri Lanka’s ethnic majority, whose members were behind the violence.

    A 2018 audit, commissioned by Facebook from the human rights organization Article One, found that hate speech and rumors spread on Facebook “may have led to ‘offline’ violence.” Facebook, more so than any other social media platform, was used by Buddhist nationalists to spread propaganda against Sri Lanka’s Muslims, who make up 10 percent of the country’s population. Article One suggested that before the unrest, Facebook had failed to take down such hate content, which “resulted in hate speech and other forms of harassment remaining and even spreading” on the platform.

    “We deplore this misuse of our platform,” the company said in a response to the Sri Lanka report. “We recognize, and apologize for, the very real human rights impacts that resulted.” Facebook also highlighted actions it had taken to address the problems, including hiring content moderators with local language skills, implementing technology that automatically detects signs of hate speech and keeps abusive content from spreading, and trying to deepen relationships with local civil society groups. However, again, it was too little, too late. Significant damage had been done: strained relations between communities and a rise in anti-Muslim sentiment had already fueled tensions.

    Sweden

    In late August 2020, far-right Danish politician Rasmus Paludan, head of the Stram Kurs (Hard Line) anti-immigrant party, tried to cross the bridge from Denmark into Malmo, Sweden, for a Quran-burning event with his anti-Muslim allies. He was turned away by Swedish authorities on the bridge and barred from Sweden for two years. Paludan responded with an angry, anti-Muslim message on Facebook: “Sent back and banned from Sweden for two years. However, rapists and murderers are always welcome!”

    Despite Paludan’s absence, his supporters moved forward with their protest and burned a Quran near a Malmo mosque. Riots broke out, resulting in significant property destruction, all of it propelled by videos of the desecration that circulated widely on social media. Paludan, who has an active Facebook page full of anti-Muslim videos, was sentenced in early 2020 to a month in jail in Denmark for a string of offences, including racism. His conviction included charges related to posting anti-Muslim videos on social media channels. Paludan’s Facebook page remained active as of September 2020.

    United States

    Facebook is an American company, and as such, its actions and policies reflect on the U.S. and its influence in other countries. And its decisions to allow American-produced, anti-Muslim hate content to flourish on its platform no doubt serve as an example of what the company finds acceptable worldwide.

    Facebook Knows That It Allows Anti-Muslim Content

    The problem with anti-Muslim hate on Facebook has been widely documented for years. An analysis of Facebook data in early 2020 showed that the United States and Australia “lead in the number of active Facebook pages and groups dedicated to referring this dehumanizing [anti-Muslim] content.” In May 2020, the Tech Transparency Project found more than 100 American white supremacist groups, many of them explicitly anti-Muslim, active on the platform both on their own group pages as well as on auto-generated content. In the wake of TTP’s report, Facebook did nominally alter some of the auto-generated content, but the hate groups largely remained.

    An analysis of far-right groups on Facebook by computer scientist Megan Squire found significant crossover with anti-Muslim hate. Squire found that anti-Muslim attitudes are not only flourishing on the platform but also acting as a “common denominator” for a range of other extremist ideologies, including xenophobic anti-immigrant groups, pro-Confederate groups, militant anti-government conspiracy theorists, and white nationalists. Squire said of her research, “Some of the anti-Muslim groups are central players in the hate network as a whole. And the anti-Muslim groups show more membership crossover with other ideologies than I expected.”

    A 2018 study by the Southern Poverty Law Center (SPLC) identified 33 anti-Muslim Facebook groups that used violent imagery, including weapons, in their main photos. According to the SPLC, 20 of these groups promoted the stereotype that all Muslims are violent. One group, “Islam Unveiled,” featured an image of an ISIS fighter executing prisoners lying prone in a shallow grave, accompanied by a quote credited to “Muhammad, Prophet of Islam”: “Killing Unbelievers is a SMALL MATTER to us.” The SPLC also found a group, “PRO-ISLAMOPHOBIA SAVES LIVES!!,” whose cover photo was an image of a charred, blackened body, content that should have been banned under Facebook’s own rule prohibiting posts depicting “charred or burning people.” After an academic researcher flagged the group, an automated response was generated and provided to the SPLC: “We looked over the group you reported and though it doesn’t go against one of our specific Community Standards, we understand that the group or something shared in it may still be offensive to you.”

    There are other ways anti-Muslim material spreads on Facebook. In 2019, a major study found that dozens of current and former American law enforcement officers were members of Facebook groups dedicated to anti-Muslim bigotry. Many were private groups, where this hatred was allowed to flourish outside of any oversight. With names such as “Veterans Against islamic Filth,” “PURGE WORLDWIDE (The Cure for the Islamic disease in your country),” and “Americans Against Mosques,” these groups serve as private forums for sharing bigoted messages about Muslims, and they have proven attractive to police officers.

    Facebook’s problem with anti-Muslim content was also documented by its own civil rights audit, which came after years of pressure from human rights advocates to address hate content online. Released in July 2020, the audit singled out anti-Muslim hate as a problem needing to be addressed; the auditors’ conclusions were based in part on information provided by human rights groups that had been lobbying the company for years to take the problem seriously. Facebook has, thus far, not addressed any of the anti-Muslim problems found by its own audit. The company’s actions indicate that its decision to engage in the audit at all was not out of concern for lives lost because of its inaction, but for political expediency.

    Perhaps the most notable case of Facebook allowing anti-Muslim hate speech, despite it violating the site’s rules, involves President Donald Trump. In 2016, Zuckerberg decided not to remove a post by Trump calling for a ban on all Muslims entering the U.S. Zuckerberg acknowledged in a meeting with his staff that Trump’s call for a ban did qualify as hate speech, but said the implications of removing it were too drastic. Once again, Facebook refused to apply its hate speech policies to a politically powerful individual.

    Double Standard For Targeting Muslim Public Officials

    While President Trump’s anti-Muslim Facebook post was protected, Muslim public figures have been threatened and attacked on the platform. On numerous occasions, Muslim public officials in Congress and around the country have been targeted with hateful content – even death threats – on Facebook and Instagram. Two Muslim congresswomen, Ilhan Omar and Rashida Tlaib, were targeted by an international fake news operation that spread anti-Muslim propaganda on Facebook. Facebook also allowed the Trump campaign to run multiple false ads against these congresswomen and took no action against the ads.

    Perhaps most shockingly, Facebook allowed a man charged with threatening to kill Congresswoman Ilhan Omar to post violent and racist content for years, and took no action to remove his posts when he was arrested in 2019. Patrick Carlineo, of upstate New York, posted several entries to his Facebook page alluding to violence against Muslims and U.S. officials, including former president Barack Obama. Carlineo frequently posted anti-Muslim material, using racist slurs and saying he wished he could confront a group of Muslim politicians with “a bucket of pig blood.” Facebook did not remove Carlineo’s profile until a reporter contacted the company two weeks after he was arrested for threatening to kill Omar (he pleaded guilty and was sentenced to one year in prison).

    Abuse Of Event Organizing Pages

    Facebook’s event pages have been particularly problematic. For years, white nationalists, militias, and anti-Muslim hate groups have used Facebook event pages to organize armed hate rallies targeting mosques and Muslim community centers across the country. Shockingly, the company permitted white nationalist militias to directly intimidate worshippers and threaten mosques. In 2016, two Russian-controlled Facebook pages organized dueling rallies in front of the Islamic Da’wah Center of Houston. Heart of Texas, a Russian-controlled Facebook group that promoted Texas secession, played into the stereotype of the state as a land of guns and barbecue and amassed hundreds of thousands of followers. One of its Facebook ads announced a noon rally on May 21, 2016, to “Stop Islamification of Texas.” A separate Russian-sponsored group, United Muslims of America (which stole the identity of a legitimate California-based Muslim organization), advertised a “Save Islamic Knowledge” rally for the same place and time. The armed protest did not turn violent, but it terrorized those inside the religious center.

    Facebook’s civil rights auditors highlighted the company’s failure to enforce its policies prohibiting calls to arms during an anti-Muslim protest organized on its event pages in August 2019. The auditors described an event page used to intimidate attendees of the Islamic Society of North America’s annual convention in Houston, Texas. Despite the fact that this was the second year in a row in which the same hate group had threatened the conference, it took Facebook more than 24 hours to remove the event page. Facebook later acknowledged that the Houston incident represented an enforcement misstep, and the auditors used this example to conclude that Facebook’s “events policy provides another illustration of the need for focused study and analysis on particular manifestations of hate.”

    Live-Streaming Hate

    In August 2020, a group of anti-Muslim activists used Facebook to live-stream a hate rally outside a mosque in Milwaukee, Wisconsin. Holding a sign that read “Halt Islam,” anti-Muslim street preacher Ruben Israel yelled hateful, threatening slurs through a megaphone outside the Islamic Society of Milwaukee, the largest mosque in the city. During the protest, Israel used Facebook to broadcast multiple false, offensive slurs and conspiracies about Muslims, shouting about “wicked, perverted Islam” and asking Muslims at the mosque whether they had “anything ticking” on them and whether they had a pilot’s license. He also told a Muslim couple near the mosque, “don’t tell me you’re here getting government assistance while you hate our country.” Despite these clear violations of Facebook’s hate speech and live-streaming policies, it took outside groups alerting the company before the content was removed.

    Conclusion

    In its August 2020 update to its hate speech policies, which banned conspiracy theories and stereotypes targeting vulnerable communities, Facebook yet again left Muslims off the list. The changes banning blackface and anti-Semitic conspiracy theories and stereotypes are welcome and important expansions of the company’s hate speech policies. It is baffling, however, that the company chose not to include implicit stereotypes and tropes about Muslims, such as the false ideas that they are inherently violent, foreign, or criminal – stereotypes that are dangerous and have resulted in discrimination, hate crimes, and the mass murder of Muslims around the world. This omission was just another in a long line of disappointing decisions by the company, particularly given the conclusions of Facebook’s own audit, released the month prior, which highlighted the significant problem of anti-Muslim hate on the platform. The documentation in this report of international anti-Muslim violence stoked on Facebook makes clear that the company has disregarded Muslim lives. Despite many opportunities and repeated warnings over many years to protect Muslims from hate content on its platform, Facebook has refused to act. Ultimately, one thing is clear: for Facebook, jeopardizing the safety and security of Muslims is just the cost of doing business.
