By Wendy Via and Heidi Beirich
Learn more about the Identitarian movement in Generation Identity: International White Nationalist Movement Spreading on Twitter and YouTube.
In 2017, YouTube and Google for the first time lost millions in advertising dollars because they allowed major brand ads to appear with extremist and hateful content; both apologized and went to work reviewing their policies, pledging to provide a safer environment for their advertisers. In 2018, another investigative report, this one by CNN, revealed that more than 300 advertisers’ ads ran on channels that were still promoting extremist and hateful ideas, despite YouTube’s renewed commitment to limit the monetizing of hate and extremist content.
Finally, in June 2019, YouTube changed its community guidelines to ban videos that “allege that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” And in 2020, major brands went back to advertising on YouTube after collaborating on its brand safety protections.
The YouTube Partner Program, which allows content creators to monetize their channels and videos, has a specific approval process: YouTube must explicitly approve a creator’s channel and videos before they can be monetized. To make that decision, YouTube uses both algorithms and human review, and it requires that all creators follow its Community Guidelines in order to be eligible for advertisements on their content.
YouTube’s advertiser guidelines assure advertisers that videos that do not meet its content policies, including its hateful content policies, will have limited or no ad access.
Why then, in June 2020, can Donald Trump and Joe Biden campaign ads be found on extremist videos?
In research conducted by the Global Project Against Hate and Extremism (GPAHE), numerous campaign ads were found on European and American Identitarian movement content, including multiple ads from each presidential candidate.
To be clear, this is neither campaign’s fault. The campaigns are paying YouTube for suitable ad placement; YouTube is then paying the extremist content creators when the ads run.
GPAHE viewed dozens of examples of ads on Identitarian content. The Identitarian movement, represented by Identity Evropa/American Identity Movement in the US and Generation Identity in Europe, adheres to the belief that white people are suffering a genocide in their own countries caused by non-white immigrants. This theory, known as the Great Replacement, has specifically been identified by the Department of Homeland Security as a driver of white supremacist violence, a threat of mass terrorism as serious as other forms of extremism. Violence inspired by this idea led to murders in Charlottesville, Virginia, and Christchurch, NZ. YouTube itself places a context disclaimer on Great Replacement content.
In one particularly egregious case, Trump and Biden ads appeared on the same video on the same channel. From a purely advertising standpoint, this indicates that YouTube’s ad placement algorithm needs work.
From a content standpoint, the monetization of the video, titled “Generation Identitaire — A Declaration of War from the Youth of France (English subs),” violates YouTube’s own hateful content policy and its advertiser content guidelines, and fails to protect its advertisers from brand risk. One of the comments on the video reads, “Africans and Arabs, ESPECIALLY MUSLIMS, must be banned from Europe. Their behavior and cultures are destructive to Western Society…deport them post haste.”
Another example of YouTube failing to support its advertisers is the appearance of Trump and Biden campaign ads on Ruptly videos about extremist activity. While the content might be debated as news since Ruptly, a subsidiary of RT News, provides video content to mainstream news agencies across the globe, it is doubtful that either candidate would choose to advertise on these videos given their disclaimer reading “RT is funded in whole or in part by the Russian government.”
Again, GPAHE does not believe that the Trump or Biden campaigns requested that their ads appear on this controversial content. This is just another example of YouTube not following its own guidance. This time, it puts candidates for the American presidency at risk by making it look as though they endorse hateful ideas.
YouTube says it will limit the extremist content on its site and follow its own hate speech rules. It also says it will work to prevent advertisers’ ads from appearing on objectionable and brand-threatening content. It’s well accepted that YouTube has much more work to do in this regard. The question is, will it? Will it protect the average viewer from hateful content and the average advertiser from losing revenue or harming its brand?
And will it pay attention to the fact that there are no borders on the internet? Hate and extremism thrive across the globe, and it’s the responsibility of companies to prevent their platforms from being used for the proliferation of hate and its progeny, violence.
As a final note, YouTube is not alone in monetizing hate. There are many examples of social media and technology platforms profiting from abhorrent views, but in light of PayPal CEO Dan Schulman’s recent and generous public commitment to anti-racism, it should be noted that both Brittany Pettibone Sellner and Ein Prozent/Laut Gedacht, which has more than 54,000 YouTube subscribers, have active PayPal accounts linked from their YouTube channels.