By Heidi Beirich
Tommy Robinson, aka Stephen Christopher Yaxley-Lennon, is best known as a rabble-rousing, anti-Muslim UK activist. Robinson has led violent protests against immigrants and Muslims and been in and out of jail for a variety of offenses. Notorious for founding the English Defence League, Robinson used the organization to become one of the best-known anti-Muslim activists in the world.
You can find all of that in Robinson’s Wikipedia profile, but none of it appears in the Google Knowledge Graph’s automated answer to the question, “What does Tommy Robinson do?” The Graph’s answer: businessperson, politician, political activist.
Nothing in Google’s answer links to anything that describes Robinson for what he is: the world’s best-known Islamophobe. In Robinson’s Knowledge Panel, also produced automatically by the Graph, he is described as a “British-English Political Activist.” Only by digging into the Panel’s Wikipedia link, a small excerpt of which is posted in the Panel, does the real nature of Robinson’s activism reveal itself.
That’s pretty limited “knowledge” for Google to be putting out into the world about Robinson, but the problem with the information Google’s Panel provides is even worse. The Knowledge Panel for Robinson, which takes up much of the right-hand side of the first page of Google’s search results, also gives links to his books, and in the “people also searched for” feature, it displays photographs with embedded links to the many anti-Muslim activists Robinson cavorts with. Talk about making radicalization into these ideas a simple process.
This problematic display of “information” on Robinson is just one example of the deficiencies found in the material autogenerated by Google’s relatively new search mechanism, the Knowledge Graph, which is where Panel information is drawn from.
If you search on Google for one of America’s most prolific anti-Semites, Kevin MacDonald, his Knowledge Panel tells you he is an “American Professor,” gives you a handy link to his Twitter feed, where his diatribes against Jews are just a click away, and provides links to his books, with previews that explain why white people should fear “Jewish power.”

And if you look up Brenton Tarrant, who killed more than 50 people in the Christchurch, N.Z., mosque shootings in 2019 because he wrongly believed white people were being subjected to a genocide carried out by immigrants and Muslims, you get links to other white supremacist killers like Dylann Roof.
This kind of limited information, misinformation, and contextless or unsourced information about major figures and groups in hate movements from around the world was found in dozens of profiles examined by the Global Project Against Hate and Extremism (GPAHE). Worse, the Panels often provide links to the extremists’ social media accounts or their organizations. This leads to readers being exposed directly to the harmful propaganda without any context or counter-narratives to rebut their ideas.
Google says its mission is “to organize the world’s information and make it universally accessible and useful.” But there’s nothing “useful,” and much that is harmful, about misinforming the public about who people like Robinson or MacDonald are, and it is particularly dangerous to provide a shortcut to online radicalization through an easy-to-follow set of hate links.
What is Google’s Knowledge Graph?
In 2012, Google introduced a new type of search, the Knowledge Graph, which it claimed would help searchers “discover new information quickly and easily.” The Graph amalgamates information found on the web into Google’s own database and then displays it, thereby supposedly taking the legwork out of wading through organic search results of sources outside of Google. In a 2012 blog post, Google announced, “This is a critical first step towards building the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.”
Google claims the Graph is “like a giant virtual encyclopedia of facts.” This new search system has evolved into several features including Google’s Knowledge Panels, the big boxes of information that appear on the right side of a Google search results page, and other features, such as suggested questions with answers from the Graph which are generated from a user’s search terms.
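One way to see how thin the Graph’s machine “knowledge” of an entity really is: query it directly. The sketch below uses Google’s public Knowledge Graph Search API, which returns the same kind of machine-readable entity records that feed these features. The API key is a placeholder you would need to obtain from the Google Cloud Console, and the query term is an arbitrary, neutral example.

```python
# Minimal sketch: fetch an entity's Knowledge Graph record via Google's
# public Knowledge Graph Search API. The key below is a placeholder.
import json
import urllib.parse
import urllib.request

GOOGLE_API_KEY = "YOUR_API_KEY"  # placeholder credential

def lookup_entity(query, limit=1):
    """Return the top Knowledge Graph entity records matching a text query."""
    params = urllib.parse.urlencode({
        "query": query,
        "key": GOOGLE_API_KEY,
        "limit": limit,
    })
    url = "https://kgsearch.googleapis.com/v1/entities:search?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [item["result"] for item in data.get("itemListElement", [])]

for entity in lookup_entity("Marie Curie"):
    # "description" is the short label a Knowledge Panel surfaces
    # (e.g., "Physicist"); "detailedDescription" holds a Wikipedia excerpt.
    print(entity.get("name"), "|", entity.get("description"))
    print(entity.get("detailedDescription", {}).get("articleBody", ""))
```

Note how the record reduces a person to a name, a one- or two-word label, and a short excerpt, which is precisely the thinness in the Panels that this report documents.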
On May 20, 2020, Google announced that the Graph “has amassed over 500 billion facts about five billion entities — people, places and things,” although there is apparently no official documentation of how the Knowledge Graph is implemented. There are now millions of Knowledge Panels and automated questions and answers on everything one can imagine. The process that creates this ecosystem appears to be fully automated. No human hands touch the content or make judgments about what will be prominently displayed in the Panel or in answer to the questions before it appears (one can complain later). Ninety percent of searchers don’t go past the first page of search results, and about two-thirds don’t go past the first five results. Since the Knowledge Panels and questions sit at the top of the first page, or are placed prominently in mobile searches, they are the results most likely to be read, both for convenience and because their prominent placement implies veracity.
The result of all this is that people increasingly look only at this automated material. In 2019, the research firm Jumpshot reported that more than half of desktop searches and 62 percent of mobile searches were no-click, meaning many people look only at the information produced by the Graph. Another survey that same year found that people ages 13 to 21 were twice as likely as respondents over 50 to consider their search complete once they’d viewed a Knowledge Panel. All of this is causing people to believe that Google itself is the source of the information, as opposed to the actual sources linked in organic search results. The Markup reported that a 2017 study found people were more than five times more likely to “mistakenly attribute information to Google itself after reading it in a Google direct answer than when no direct answer appeared in results.” Additionally, this same information is used to answer spoken questions to Google Assistant and Google Home.
How Google’s Graph Misinforms
Since Google has nearly 90 percent of the global search business, what we all learn about the world increasingly comes from whatever is in Google’s automated Graph, as opposed to information provided by academic, journalistic, government or other legitimate sources that have processes to verify the accuracy of their information and to correct it when it is wrong. In fact, we seem to be losing our sense of sourcing completely. As Dario Taraborelli, former head of research at the Wikimedia Foundation, told The Markup, “It’s become really difficult to understand where information comes from. What is the provenance of what we’re learning?” Since much of the Graph is lifted from Wikipedia, Taraborelli added, “It’s going to become much harder for a new contributor to understand… that Wikipedia exists as a separate project, that it’s not a snippet on Google.” Taraborelli expressed concerns about what all this will do to media literacy skills.
This is a huge change to how Google search has worked and yet it has received very little attention or analysis. A 2016 piece by Caitlin Dewey for The Washington Post was titled, “You probably haven’t even noticed Google’s sketchy quest to control the world’s knowledge.” Dewey also quoted Taraborelli about how this new search process may negatively affect us all.
“It undermines people’s ability to verify information and, ultimately, to develop well-informed opinions…And that is something I think we really need to study and process as a society,” Taraborelli explained. He also noted that the main issue with the Knowledge Panels is their lack of knowledge. They provide information without the context of where it came from, which makes verifying its accuracy difficult. And since the Graph doesn’t cite its sources, which you would find if you clicked through linked search results or a Wikipedia entry, there’s no way for users to double-check the Graph’s answers for bias or error, both of which have been demonstrated to exist.
In 2019, The Atlantic published a piece showing how the Graph had “surfaced bad information.” The Atlantic found some errors that were incidental and not harmful, like actress Betty White’s age, but they also found far more serious errors. For example: “In 2018, after The Atlantic identified and reported to Google that the knowledge panel for Emmanuel Macron included the anti-Semitic nickname ‘candidate of Rothschild,’ the search giant removed the phrase… a Google search for the Charlottesville, Virginia, ‘Unite the Right’ rally rendered a knowledge panel reading, ‘Unite the Right is an equal rights movement that CNN and other fascist outlets have tried to ban.’”
In reality, Unite the Right was a racist rally in support of Confederate monuments put on by white supremacists, one that led to riots and the death of an anti-racist protester. At the time The Atlantic did its research, white supremacists were trying to manipulate the Wikipedia entry for “Unite the Right” and had inserted this false material into it. That was evident on Wikipedia, where edits and challenges to entries are clearly marked, but no such information was provided by Google’s Panel.
In 2019, Google searches for the term “self-hating Jew” led to a Knowledge Panel, according to The Atlantic, with a photo of comedian Sarah Silverman. It took outside reviewers, including The Atlantic, to find these errors and get Google to fix them.
In May 2020, Danny Sullivan, the public liaison for Google search, admitted that the Graph “may sometimes also come up with inaccurate information” and suggested that people let Google know using the feedback option. This information is then used to “enhance the Knowledge Graph” by looking at “how Google’s automated systems were not able to detect these inaccuracies.” Apparently fixing errors in the Graph is a burden that falls on the public, similar to trusted flagger programs created by social media companies for the public to report abuse and hate speech, rather than on Google, which made nearly seven billion dollars in the first quarter of 2020. In effect, Google is asking the public to provide it with free research labor.
Misinformation and Lack of Context
Getting Betty White’s age wrong is unfortunate, but misrepresenting the details about a racist activist can be dangerous. In many Knowledge Panels, prominent white supremacists and other extremists are described in innocuous terms, as writers or activists. Their heinous ideas are downplayed while their social media accounts are highlighted. Only by clicking through to the Wikipedia entry, usually linked and briefly excerpted in the Panel, does one find a full biography with sources.
Here are some examples of troubling information related to prominent extremists presented without context or attribution in Knowledge Panels examined by GPAHE in July 2020:
Andrew Auernheimer, better known as Weev, is a prominent webmaster for viciously racist and antisemitic websites like The Daily Stormer. But in his Knowledge Panel he is described as an “American media person.” His Panel links to other prominent neo-Nazis such as Christopher Cantwell. And his mother’s name is given in the Panel, even though Weev has attacked her in the past for pointing to his family’s Jewish heritage. This opens a possible avenue for abuse by Weev’s network of racist trolls.
Frazier Glenn Miller, Jr., is described by Google as an “American advocate.” Nothing in his Knowledge Panel, unless you click into Wikipedia, mentions that this committed anti-Semite was responsible for the 2014 mass shooting at two Jewish centers in Kansas that killed three people, or what his motivations were. His wife and children, some of whom do not support his views, are gratuitously listed in the Panel. And there are links to other white supremacists such as neo-Nazi Alex Linder.
Gavin McInnes, founder of the rabidly anti-Muslim and misogynistic Proud Boys, is described in his Panel as a “Canadian writer.” Links to his Twitter and YouTube accounts are provided, as well as links to his TV appearances and books.


The Knowledge Panels for racist and antisemitic mass shootings have additional problems. If one searches for John Timothy Earnest, who attacked a synagogue in Poway, Calif., in 2019, killing one person, his Knowledge Panel gives links to several other similar attacks, such as Christchurch, the 2019 El Paso Walmart mass shooting, and three others. The same pattern of leading people to more racist or antisemitic mass shootings is found in the Knowledge Panels for El Paso and the 2018 attack on the Tree of Life synagogue in Pittsburgh, Pa.

Because the Knowledge Panels and the top-bar Graph results display pictures, while ordinary linked search results do not, they are much more eye-catching and more likely to be clicked on. In some cases, the Knowledge Panels include pictures of the victims, without any context, above descriptions of the attacks. The shooters are named, but the victims are not. And the Panels expose the identity of the shooters’ family members by naming them.
A Tool for Online Radicalization
Since the boxes are automated and lack context or editorial input, they present themselves as authoritative material. They give a veneer of “truth” to some of the most dangerous ideas produced by humankind and promote their most prominent evangelists. And there are no counter-narratives revealing the horrors of racial extremism, just links to more similar material.
And it’s not just the “facts” presented by the Graph that are problematic. Because the Panels bring together several points of information, including links to social media, links to similar extremist groups and individuals, links to books, etc., just reading the Panel could be a fast track to radicalizing someone who may be open to, or simply not understand the nature of, these extremist ideas.
Let’s take a deeper look at one such box, for William Luther Pierce, the founder of the neo-Nazi National Alliance and perhaps the most dangerous American neo-Nazi of recent years. Pierce authored The Turner Diaries, a race war novel that inspired Timothy McVeigh’s 1995 bombing of the Oklahoma City Federal Building.
Pierce’s Google Panel describes him as an “American author.” Below his picture is a snippet from Wikipedia that includes a sentence that does describe his neo-Nazi beliefs. That is the only piece of information in the box that has any context. Then the Panel lists his birth and death dates, spouses, children and education. It should be noted that his sons and some of his wives do not share his views and may well not want to be named. But given Google’s policies, they would have a very difficult time getting themselves removed, even though they are unconnected to Pierce’s politics. That’s if they even know they are listed there and that removal is a possibility. Family members likely have no idea how their information came to be in the Panel, or how to remove themselves from it, simply for having the misfortune of being related to a white supremacist or to a person who has committed an atrocity.
Below his family’s names are links to Pierce’s writings, including The Turner Diaries; Hunter, another hate novel; and Who We Are, a tract that laid out his belief that Jews should be exterminated. There is also a link to his publication attacking the Anti-Defamation League, a civil rights organization that has long battled antisemitism, hate and bigotry. In essence, Google is providing a shortcut to all of Pierce’s extremist beliefs in one convenient and contextless place.

The Panel is, in effect, a perfect tool for online radicalization into Pierce’s ideas, lacking the context and information that might sway readers away from his legacy. In contrast, if someone reads the entry on Pierce in the Encyclopedia of White Power: A Sourcebook on the Radical Racist Right, the experience is significantly different. His heinous ideas and violent legacy are detailed, and there are no links to local libraries or booksellers to make the purchase of his materials all the easier. Nor is there an easy-to-find list of material similar to Pierce’s novels. In other words, there is context for Pierce’s ideas and material.
In other cases, the Panel gets its primary descriptor right. James Mason is called a “Neo-Nazi,” which would have been the fair description of Pierce, and is listed as part of the very violent Atomwaffen Division, whose members have been responsible for several murders. Even so, Mason’s profile links to his Soundcloud account, now deleted, his book Siege, and to several other prominent neo-Nazi figures.
But click on the Google link in Mason’s Panel to Siege, and a particularly problematic Panel pops up. Siege is described simply as a “book by James Mason,” nothing more. The next line says “78% liked this book” based on “Google users.” It then offers a place to enter a ZIP code to find a local library carrying the book. Then it asks the searcher to rate the book. And then it includes three audience reviews, all positive. One calls the book “brilliant” and another says it “helped me stop being a white liberal.” This material is a dream for Mason and his fellow neo-Nazis trying to recruit members into their orbit. Finally, at the very bottom of the Panel, Google runs ads for bookstores selling Siege.
Pierce and Mason are well-known figures in the white supremacy movement, and the amount of academic and authoritative material published on their views is vast, yet their Panels still have major problems. There is really no excuse not to have better information on these major hate figures, especially since Google’s Graph copies others’ research and content and dumps it into its own system.
For groups or individuals that are lesser known, the egregious lack of context and information is even worse. Take Brittany Pettibone Sellner, an American who since 2017 has promoted the white supremacist Generation Identity movement, which inspired the Christchurch shooter and whose accounts were removed from Twitter in July. Her Panel calls her an “American author.” It says not one word about her work promoting anti-immigrant propaganda, leading an effort in the Mediterranean to intercept refugees fleeing to Europe, or collaborating with other white supremacists, including her husband Martin Sellner, head of the Austrian chapter of Generation Identity. Rather, there are several pictures of her, links to her website and social media accounts, and her book, What Makes Us Girls.
History of Search and Online Radicalization
Concerns about Google search making online radicalization easy are not new. In 2015, Dylann Roof killed nine people at a Black church in Charleston, S.C. He came to view Black people as the enemy of white people after using Google search to look up “black on white crime.” When Roof hit enter on that term, the search engine returned a list of virulently racist sites, and he headed down a rabbit hole of hatred.
“The first website I came to was the Council of Conservative Citizens,” Roof wrote in a manifesto that was found after his attack. The Council of Conservative Citizens is a white supremacist organization that has long produced false reports claiming that Black people have been waging a hidden war against white people. Each click in his search led to more and more hate content, a mass of racist propaganda that ultimately convinced Roof he needed to buy a gun and kill Black people.
In his trial, Roof’s defense lawyer David Bruck told the jury, “Every bit of motivation came from things he saw on the internet. That’s it. … He is simply regurgitating, in whole paragraphs, slogans and facts — bits and pieces of facts that he downloaded from the internet directly into his brain.” Although nothing in any way excuses Roof’s acts of atrocity, Bruck was referring to Roof’s assertion in his confession and in a manifesto that Google searches shaped his beliefs.
After the Roof attack, Google search came under intense criticism for functioning as a radicalizing mechanism, drawing people deeper and deeper into false information because search affirmatively sends people to similar content. The situation became serious enough that in 2017 Google’s then-Vice President of Search, Ben Gomes, met with civil rights organizations to explain that Google would be changing its search algorithm to privilege more authoritative material over links to hate and other forms of propaganda. In the months after, links to hate site propaganda seemed to be pushed off the front page of search in favor of material from educational institutions, non-profits and government agencies, at least in the case of searches for “black on white crime” and other prominent white supremacist themes, leaders, and groups.
The Graph now seems to be undoing that progress, bringing misinformation about these issues back to the top of page one of search and providing easy links to additional hate material. By taking the legwork out of search, the Graph is also taking out the context and the more authoritative results. The path toward radicalization into these ideas is easier now than when Roof started his searches in 2013.
There have been many other problems over the years with Search and its role in helping spread hate and bigotry around the world. In 2014, this author and others then working at the Southern Poverty Law Center found that searches for the civil rights giant Martin Luther King, Jr., returned hate sites filled with lies and misinformation across the majority of the first page of results. During a meeting with Google, we were told that the search results could not be altered because they simply reflected searchers’ preference for the false, hateful content. Days after our meeting, Google quietly altered the results, replacing the hate sites with more legitimate biographical sources.
The autocomplete system for search queries was also cleaned up starting in 2016, after many antisemitic, anti-Muslim and other hateful suggestions were auto-filled when searchers typed questions about religious groups or other marginalized populations. “We do our best to prevent offensive terms, like porn and hate speech, from appearing, but we don’t always get it right,” a spokesperson from Google said at the time, pointing to a June 2016 blog post by the search engine’s product management director claiming Google had changed its algorithm to “avoid completing a search for a person’s name with terms that are offensive or disparaging.”
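For illustration only, the fragment below sketches the general technique such a change implies: screening candidate completions for person-name queries against a blocklist of disparaging terms. The term list, function name, and logic here are hypothetical assumptions about the approach, not Google’s actual system.

```python
# Toy illustration (not Google's implementation): suppress autocomplete
# candidates that pair a person-name query with blocklisted terms.
DISPARAGING_TERMS = {"is evil", "is a criminal"}  # hypothetical blocklist

def filter_suggestions(query, candidates, query_is_person_name):
    """Drop completions that attach a blocklisted term to a person's name."""
    if not query_is_person_name:
        return candidates
    return [
        c for c in candidates
        if not any(term in c.lower() for term in DISPARAGING_TERMS)
    ]

# Example: only the neutral completion survives the filter.
print(filter_suggestions(
    "john doe",
    ["john doe is evil", "john doe biography"],
    query_is_person_name=True,
))  # -> ['john doe biography']
```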
The Graph & the Future of Search
Google’s move to the Knowledge Graph is actually a seismic shift in how search functions. Because nearly 90 percent of all internet searches happen on Google, these changes literally affect the “knowledge” base of the whole world.
This new system has received frighteningly little attention given that it is affecting all online research, not just how material on white supremacy and extremist groups is presented. A July study by The Markup found that 41 percent of the first page of search results on a mobile phone was taken up by “what [Google] calls ‘direct answers,’ which are populated with information copied from other sources, sometimes without their knowledge or consent.” The Markup found that a search for myocardial infarction returned several unsourced results from Google’s Graph before pointing to material published by real medical experts such as WebMD, Harvard University and Medscape.
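The Markup published the methodology behind that figure. As a rough sketch of that kind of audit, assuming a results page saved locally as serp.html (a placeholder filename), the code below classifies every link on the page as pointing to a Google-owned property or to an external source. A real audit, like The Markup’s, also measures on-screen area rather than simply counting links.

```python
# Rough audit sketch: classify links on a saved search results page as
# Google-owned vs. external. 'serp.html' is a placeholder filename.
from collections import Counter
from urllib.parse import urlparse

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

GOOGLE_DOMAINS = ("google.com", "youtube.com")  # properties Google controls

def classify_links(html):
    """Count page links by whether they lead to Google properties or elsewhere."""
    counts = Counter()
    soup = BeautifulSoup(html, "html.parser")
    for anchor in soup.find_all("a", href=True):
        host = urlparse(anchor["href"]).netloc.lower()
        if not host:  # relative or fragment link that stays on the page
            counts["internal"] += 1
        elif any(host == d or host.endswith("." + d) for d in GOOGLE_DOMAINS):
            counts["google"] += 1
        else:
            counts["external"] += 1
    return counts

with open("serp.html", encoding="utf-8") as f:
    print(classify_links(f.read()))
```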
The antitrust issues raised by these changes to search are considerable, as both The Markup story and a recent report by the Omidyar Network explain in depth. But when it comes to extremist content, it is unclear how providing a road map to white supremacist radicalization through Knowledge Panels and automated questions accords with the Graph’s own policies against hateful content, dangerous content and terrorist content.
According to Google, “Because of this prominent treatment [of information created by the Graph and its placement at the top of page one of a search result], information in Knowledge Graph displays should not violate these specific policies.” Surely some of the material described here would violate Google’s hateful content policy against material “that promotes or condones violence, or has the primary purpose of inciting hatred, against an individual or group, including but not limited to, the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalization.” In effect, the Graph is highlighting terrorist-inspiring novels, violent manuals, hateful propaganda, and white supremacist groups and leaders by giving them the top slot on Google in its Panels.
But there are even bigger issues than just how extremist content is handled. By gobbling up information and dropping it into the Graph and then using that to answer searches without any sourcing, Google is effectively pushing other sources down its first page to where a researcher is less likely to click on them. The Markup found, “the majority of links to and results for non-Google sites were pushed down to the bottom-middle of the page, where data shows users are less likely to click.”
The ham-fisted manner in which Google has handled the issue of racist extremists is a case in point. Google promises authoritative search results; what it returns is quite different. Most of the top of the first page of results is increasingly from the Graph, which means sources that actually have expertise in an issue are being downgraded.
That’s a problem for us all.