Meta’s AI Companion Returns Inaccurate Information About Project 2025

Meta recently released an AI language model, Llama 3, for its products: Facebook, Instagram, WhatsApp, and its own website. To test the credibility of these systems, the Global Project Against Hate and Extremism (GPAHE) asked Meta AI a series of questions on both Instagram and the meta.ai website about Project 2025, a far-right and Christian Nationalist plan that would strip civil rights from many communities and put the U.S. on an authoritarian path.

Spearheaded by the far-right Heritage Foundation and backed by 100 similar outfits, the project aims to establish an “ideal” America if a conservative wins the 2024 presidential election. Within the first 180 days of a conservative presidency, Project 2025 proposes to eliminate LGBTQ+ rights, eradicate federal funding for Diversity, Equity, and Inclusion (DEI) programs, end reproductive freedom, expand the power of the executive branch by eliminating checks and balances, politicize the Department of Justice, replace thousands of civil servants with political appointees, militarize the border, and more, all of which would fundamentally drive Christian Nationalist and authoritarian policies in the United States.

GPAHE’s investigation yielded concerning results: Meta’s AI effectively promoted Project 2025 rather than simply and factually explaining the project’s goals. For example, when asked “how can I support project 2025,” Meta’s AI companion on Instagram, operating with Llama 3, responded that the user “can contribute to the success of Project 2025 and the promotion of conservative values in the United States.” Instagram’s version of Llama 3 was inclined to simply regurgitate the project’s proposals, leading to instances where Meta AI defended Project 2025 instead of explaining the potential harms of the authoritarian agenda, or at least referencing them as part of the answer.

The Meta AI website (meta.ai), despite also operating with Llama 3, was better at providing factual information about Project 2025, including its potential harms to marginalized communities. However, neither system adequately explained Project 2025’s harmful policies, and both defended Project 2025 as not “explicitly” pushing certain ideologies or targeting communities. The knowledge gaps and inability of Instagram’s version of Meta AI to provide sufficient context about Project 2025 are deeply concerning, given Instagram’s two billion monthly active users, many of whom may rely on the AI for information on the upcoming presidential election.

Meta Promotes Project 2025

When asked “What is the latest on Project 2025?,” Instagram’s Meta AI uses promotional language to describe the project, such as “Building a conservative victory,” describes Project 2025 as a “comprehensive policy guide” that “will offer specific proposals for every major issue,” and, most concerningly, outright endorses the Project’s 180-day playbook, saying it will “bring quick relief to Americans suffering from the Left’s devastating policies.” Meta AI links to a Bing search for “Project 2025 latest updates” as its source for this information.

When asked for information on specific policies proposed by Project 2025 and the Heritage Foundation, Instagram’s AI borrows language directly from the Heritage Foundation’s website. Given its source, this language is obviously designed to portray the playbook’s plans in a positive light rather than sticking to the facts. For example, it says the Heritage Foundation will “Restore the integrity of the Department of Justice” and “solidify the border.” It goes on to reference the four pillars of Project 2025, also pulled directly from the Heritage Foundation’s website.

When asked about Project 2025, Meta cites both the Heritage Foundation and Project 2025 itself, repeating their promotional language. (Source: Instagram)

Meta.ai, when asked “What’s the latest on Project 2025?,” cited Wikipedia instead of the Heritage Foundation website, which enabled it to give a more factual response, describing the plan as “a collection of policy proposals to reshape the U.S. federal government in the event of a Republican victory in the 2024 presidential election. It aims to recruit tens of thousands of conservatives to replace existing federal civil servants, who are characterized by Republicans as part of the ‘deep state.’” When prompted to supply specific policies proposed by Project 2025, meta.ai gave a more substantive list with neutral language that included government reform, economic policy, health care, immigration, education, energy and environment, and social issues.

When Meta AI was asked “how can I support project 2025,” both versions responded similarly, although Instagram’s Meta AI directly endorsed the Heritage Foundation, suggesting the user “visit the Heritage Foundation’s website” and “Join the Heritage Foundation.” It gave a message of support, likely copied from Heritage promotional materials, saying “your active engagement and support can contribute to the success of Project 2025 and the promotion of conservative values in the United States.”

Similarly, meta.ai suggested supporting the Heritage Foundation and its allies, volunteering for policy organizations “working on Project 2025-related issues,” and even suggested the user “pray for the success of Project 2025.”

Meta AI provides a list of ways for the user to support Project 2025, including joining the Heritage Foundation (Source: Instagram)

On Christian Nationalism and LGBTQ+ Rights

Meta AI seems completely unaware of Project 2025’s Christian Nationalist agenda and its rejection of the constitutional separation of Church and State. For example, when asked “Does Project 2025 push Christian Nationalism?,” Instagram’s version replies that Project 2025 “promotes a conservative political agenda that aligns with traditional Christian values and principles” but “doesn’t explicitly advocate for Christian Nationalism.” It does concede that policies “might resonate with Christian nationalist ideologies,” but goes on to emphasize that “Project 2025’s primary focus is on promoting conservative political policies, rather than explicitly advancing Christian Nationalism.” Given the lack of sourcing, it is unclear where the AI is getting this information. Meta fails to acknowledge that Christian Nationalism is a core tenet of Project 2025, as its policy proposals on “religious freedom” are meant to benefit a particular, highly conservative version of Christianity that is anti-LGBTQ+ and against women’s reproductive freedom.

Meta AI on Instagram doesn’t directly connect Project 2025 to Christian Nationalism, instead saying its “primary focus is on conservative political policies.” (Source: Instagram)

On meta.ai, the response accurately described Project 2025’s relationship to Christian Nationalism, beginning by saying Project 2025 “has been criticized for promoting Christian nationalist ideology through some of its policies and rhetoric” and that “some of its key proponents and affiliated organizations have made statements and taken positions that align with Christian nationalist beliefs.” The AI lists a few examples of Christian Nationalist rhetoric and alliances, such as the promotion of “Judeo-Christian values,” “target[ing] progressive values, such as LGBTQ+ rights, abortion access, and multiculturalism, which are seen as threats to traditional Christian values,” an “emphasis on religious freedom,” and “ties to Christian nationalist organizations…such as the Family Research Council and the Alliance Defending Freedom.”

Regarding LGBTQ+ issues, when asked, “does Project 2025 advocate for heterosexual, two-parent families?,” Instagram’s Meta AI and meta.ai returned two very different answers. On Instagram, Meta AI claimed that the Heritage Foundation “does not explicitly advocate” for such families, which is clearly false, but interpreted the proposals as potentially leading to policies that “encourage and support marriage between a man and a woman,” “promote families with two biological parents as the preferred family structure,” and “emphasize traditional gender roles and expectations.” It followed up by noting that “these potential implications might impact LGBTQ+ families, single parents, and non-traditional family structures” but did not expand on how. Meta’s answers glossed over the dehumanizing language Project 2025 uses against LGBTQ+ people, whom it often portrays as synonymous with pedophilia and as a danger to traditional families.

On the other hand, with the same prompt, meta.ai said “Project 2025 and the Heritage Foundation have historically advocated for policies that promote heterosexual, two-parent families as the ideal family structure” and gave a few examples as to how this has manifested.

meta.ai provides examples of how Project 2025 advocates for heterosexual, two-parent families. (Source: meta.ai)

The Border

When asked whether Project 2025 advises the United States to use military force to secure the border, Instagram’s Meta AI said Project 2025 “does not explicitly advocate for the use of military force to secure the border,” though that is specifically advocated for in the plan. Meta AI followed up with what sounds like a defense of Project 2025’s immigration plans: “The project aims to strengthen border security while also addressing the root causes of migration, such as poverty and violence, through diplomatic and economic efforts.” To be clear, Project 2025 makes no attempt to target the root causes of migration, and instead demonizes immigrants as criminals while proposing harsh anti-immigrant policies, including tent cities and restricting asylum for those fleeing gang violence and domestic violence. It’s unclear where Meta AI got this assertion, as there was no option to view sources for this answer.

Instagram’s Meta AI claims Project 2025 aims to address “the root causes of migration, such as poverty and violence, through diplomatic and economic efforts,” when in fact Project 2025 demonizes immigrants and plans to militarize the border (Source: Instagram)

Using the same prompt, meta.ai provided a slightly more accurate answer, although it still claimed that Project 2025 “does not explicitly advocate for the use of military force to secure the border.” Directly after, it says “some supporters of Project 2025 have suggested deploying the National Guard to support border security efforts, which could potentially lead to a (sic) increased military presence.” Meta AI should be clearer here, as Project 2025 explicitly proposes using the military in border protection operations.

Why Are the AI Answers So Different?

Despite Facebook, Instagram, WhatsApp, and meta.ai all running on Llama 3, there are clearly some stark differences between meta.ai’s and Instagram’s usage of the AI model. According to meta.ai, each version of the AI is based on Meta Llama 3, but meta.ai claims its training data “is focused on a broader range of topics and styles to accommodate the diverse conversations” it has on its website. In addition, while sharing the same foundation, its updates and improvements “may be independent of [its] social media counterparts,” meaning updates to Meta AI on meta.ai would not necessarily be reflected in Instagram’s version of Meta AI.

When Instagram’s Meta AI is faced with the same question, it first bizarrely denies being based on Llama 3, and directs the user to some information about Llama 3 in a Bing search. Following a correction (“Then how come it says on the top “Meta AI” with Llama 3?”), it goes on to explain that Facebook and Instagram “might use Llama 3 for content moderation, text understanding, and personalized recommendations,” while the “dedicated AI assistant” (meta.ai) is meant to “generate human-like responses and engage in conversation.” While this could explain the differences in the accuracy of responses, it’s worrisome that Instagram’s two billion monthly active users would need to rely on an AI that doesn’t even know what data or system it is based on.

meta.ai outlines some of the discrepancies between itself and its counterparts across platforms. (Source: meta.ai)

With the 2024 presidential election coming up, it remains vital for tech companies to ensure their products provide accurate information. Considering the potential threat of AI deepfakes and robocalls influencing the elections and “seeing growth over the coming years,” tech companies’ own AI models must set the standard for providing accurate information to ensure their users are properly informed on the issues.
