Racist and Misogynistic AI Spreading From 4chan to Mainstream Platforms

A bigoted AI campaign originating on 4chan and hosted on GitHub is maliciously altering people’s bodies and appearances and receiving endorsements from extremists.

Warning: This analysis contains highly offensive and potentially triggering language and imagery.

Over the past week, “dignifAI” and “purifAI,” Artificial Intelligence (AI) tools originating from the /pol/ forum of 4chan, a website notorious for promoting extremist campaigns and hate speech, have spawned violent speech against women, people of color, and the LGBTQ+ community across social media. This activity should not be ignored or dismissed as merely online talk. It is dangerous speech of a kind known to lead to physical violence against women and other targets, and it will likely lead to more.

Generative AI, which is used to create text, images, or speech, has recently come under scrutiny for its role in spreading harmful disinformation, such as creating fake images of politicians during electoral campaigns, falsely implicating politicians in rigging elections, and opening the door for foreign interference in domestic politics. Recently, TikTok users were found to be generating conspiracy theories with AI, many of which went viral, such as claims that the United States government has captured mythical creatures and conducts secret research in cave systems; one such video drew almost two million views. But the potential harms of Generative AI don’t stop with politics.

dignifAI is fueled by misogyny
On January 30, 2024, a 4chan user on /pol/, the “politically incorrect” forum, created a post titled “With the power of AI we can put clothes on women.” Mobilized by their sexist desire to show “nasty whores…what they could have been,” other users began to coordinate a larger harassment campaign to target women on the internet and alter their bodies and clothing using generative AI tools. DignifAI takes pictures of women wearing revealing clothing and “dresses” them, which users of the AI claim can “show [models] how much better they can be” through “brute force psychological warfare.” The far right, including incels, commonly view women who exercise ownership over their own bodies as living a “degenerate lifestyle.” As a result, they’ve created a list of women across Twitter, Instagram, and OnlyFans to target for harassment.

A 4chan user posts an introduction to dignifAI purporting that models live a “degenerate lifestyle.” (Source: 4chan)

One user, who claimed to have coined the name, said that dignifAI “understands female psychology” in that it demonstrates “the damage these whores did to themselves,” which they attribute to women’s failure to “find a strong masculine figure to force them to act with dignity.” Those taking part in this targeted harassment believe that men must police women’s bodies and behavior in order to deter them from “degeneracy.” AI has become a driving force to put this narrative into action.

The AI itself is hosted on GitHub, a platform that allows users to create, store, and share code. Microsoft acquired the platform in 2018. GitHub takes an “open source” approach, encouraging developers to post their code and collaborate. The company also claims to “provide a safe and inclusive space that transparently respects rights to free expression, assembly, and association.” GitHub’s Acceptable Use Policy prohibits content that is “sexually obscene or relates to sexual exploitation or abuse,” “discriminatory or abusive toward any individual or group,” “harasses or abuses another individual or group,” and “violates the privacy of any third party, such as posting another person’s personal information without consent.”

In 2021, GitHub was home to a fake online auction in which Muslim women were “put up for sale.” GitHub reported that it suspended the users’ accounts, saying they had violated its policies on harassment, discrimination, and inciting violence. The next year, a website with the same intent was hosted on GitHub, this time including over 100 Muslim women in an “auction.” GitHub removed the page and terminated the account, but only after public outcry against the bigoted campaign. GitHub’s policies contain only a single line on enforcement: the platform “retains full discretion to take action in response to a violation of [their] policies, including account suspension, account termination, or removal of content.”

After acquiring the code from GitHub, 4chan users apply Low-Rank Adaptation (LoRA), a method for fine-tuning Generative AI tools like Stable Diffusion on specific concepts. Typically, LoRA lets users steer image generation toward a particular art style. In this case, people on 4chan are training the tool to alter the bodies and clothing of women to their own liking. Some of the “negative prompts,” meaning what users don’t want included in the new image, include: “big nose, ugly, morbid, asymmetrical, mutated malformed, airbrushed, bad body, bad face, bad teeth, bad arms, bad legs, [and] deformities.”

An example of a woman’s image changed by dignifAI, replacing her dog with a child and adding clothes. 4chan users do this as a method of policing women’s bodies and behavior. (Source: 4chan)

The GitHub link to the program, and instructions on how to use it, spread quickly on 4chan through dozens of threads, allowing any user who comes across it to take part in this hateful and potentially violent process. 4chan’s rapid mobilization and the inaction of both GitHub and social media companies have allowed users to spread the AI as far as possible, including through multiple accounts on Twitter, Instagram, YouTube, and TikTok, the loosely moderated platform Telegram, and a dedicated dignifAI website.

The recent popularity of this campaign is spurring hatred toward women and delight in harassing them. But some on the video-hosting platform Odysee believe dignifAI doesn’t go far enough, arguing it should “airbrush women out of social media altogether.” Even dignifAI’s results often failed to satisfy its users. An image of a woman holding her dog, shown above, was altered by the AI into her holding a baby; 4chan users still responded with misogyny, saying “that’s just a clothed whore with a baby in her hands.”

(Source: 4chan)

dignifAI users continued to expand their scope of harassment by attacking the LGBTQ+ community. One post spread by a far-right Telegram account with over 30,000 subscribers altered a picture of Anderson Cooper and his child by removing Cooper’s head.

(Source: Telegram)

Another image altered Elliot Page, a transgender man, to be a woman by adding breasts and longer hair to his body.

A dignifAI Instagram account shares the altered image of actor Elliot Page. Some commenters were still upset, calling the new image “trashy.” (Source: Instagram)

Far-right extremist influencers and groups have rallied around dignifAI. Trump ally and far-right commentator Jack Posobiec tweeted images created by the hateful AI, garnering over 15 million views on the platform. French Identitarian YouTuber Thaïs d’Escufon, who frequently posts about her disdain for feminism and formerly identified as part of the transnational white nationalist network Génération Identitaire (Generation Identity), called it her “new favorite AI,” one that can “transform a whore (“une tchoin”) into an elegant woman.” Ian Miles Cheong, who works for far-right media outlet Rebel News and is known for praising Hitler, described the AI as showing models “what could’ve been if they’d been raised by strong fathers.” Alex Vriend, a Holocaust denier and member of the Canadian white supremacist movement Diagolon, called the AI “unfathomably based.”

Larger extremist networks, including the Columbus, Ohio chapter of the white nationalist Proud Boys, the neo-Nazi media platform Amerikaner, the neo-Nazi group White Lives Matter Russia, and the white nationalist online network Red Ice TV, have all praised the AI.

The Proud Boys of Columbus, Ohio Telegram channel shares Jack Posobiec’s tweet celebrating 4chan’s misogynistic campaign to “de-thot skanks.” “Thot” is a colloquial online term for “promiscuous woman.” (Source: Telegram)

Inspired by dignifAI, racists create purifAI to erase people of color

After seeing the widespread popularity of dignifAI pandering to misogyny and homophobia, users on 4chan suggested a race-swapping AI. Shortly after, images replacing mixed-race children and POC parents with white people started making the rounds online, primarily on 4chan and Twitter. 

On Twitter, misogynistic and far-right accounts are using the AI to engage in a process called “memetic warfare,” which attempts to mask bigotry with image- and text-based humor, a strategy employed by extremists across social media and video games.

A Twitter user created a “joking meme” out of dignifAI and purifAI. The post featured a woman who had been “dignifAI-ed,” two images changing the skin color of babies, and one replacing a Black man with a white man. (Source: Twitter)

PurifAI continues to gain momentum among racists and extremist groups on social media, albeit sometimes still labeled as dignifAI by its users. Santa Rosa County (Florida) Commission Chairman and former Utah Senate candidate Sam Parker, who drew criticism for antisemitic comments in March 2023, tweeted examples of both dignifAI and purifAI, saying “We will bring decency back into the world, one seething J at a time.” Parker has over 100,000 followers on Twitter.

White Lives Matter Russia, in a Telegram post, shared an example of how purifAI erases the existence of mixed-race babies.

One image making the rounds amongst racists edited a baby’s skin color to match the mother’s (Source: Telegram, Twitter)

Tech companies sacrifice user safety for engagement
Unfortunately, the use of AI in targeted campaigns against women isn’t anything new. Deepfakes, which use AI to digitally alter someone’s voice and/or body, are commonly used by misogynists and extremists. In 2023, more than 140,000 deepfakes were posted online, eclipsing every other year combined; some targeted girls as young as 14 and spread among their peers in the classroom. AI has also enabled targeted harassment and abuse in the form of “revenge porn,” such as the recently shared fake explicit images of Taylor Swift. Social media platforms have been slow to react.

On February 6, 2024, Meta, which owns Instagram, announced a label informing users when an image was generated by its own Generative AI model, but it is so far unable to identify images created elsewhere. Meta plans to “label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata (i.e., embedded data within media) to images created by their tools.” There is no indication that Meta will target AI-generated images from malicious sites and independent developers, such as the tool posted to GitHub, which are not bound by the content policies of more popular image generators. For AI-generated content without metadata signaling its origins, Meta is only “adding a feature for people to disclose when they share AI-generated video or audio so [they] can add a label to it.” In short, outside its own Generative AI model, Meta won’t even try to detect malicious and abusive AI-generated content until that data is spoon-fed to it.

TikTok’s Community Guidelines on AI-generated content forbid users from posting AI images depicting the “likeness of any real figure, including 1) a young person, 2) an adult private figure, and 3) an adult public figure.” TikTok reserves the right to take down any AI-generated content that is not clearly labeled by the user. Even so, TikTok has struggled to keep up with the massive influx of AI-generated videos and speech posted on its platform, including organized campaigns to spread disinformation through networks of accounts, which is the exact playbook of the 4chan users spreading dignifAI.

YouTube announced in November 2023 a plan to integrate labels informing users of AI-generated content, but it has made no changes to its current Community Guidelines to directly address harmful AI content, stating only that “all content uploaded to YouTube is subject to our Community Guidelines—regardless of how it’s generated.” And while companies are starting to embed signals in their image generators, they have not yet done so at the same scale for AI tools that generate audio and video, making that content difficult to detect and label.

Harmful Generative AI tools and websites have been around since 2008. It is shocking that platforms like Facebook, Instagram, TikTok, and YouTube remain reluctant to take sufficient action against the spread of harmful content on their platforms. Neither lawmakers nor tech companies have responded adequately to regulate harassment campaigns spreading like wildfire through AI on online platforms, leading to the deterioration of trusted information and the safety of communities online.
