Are AI models ‘woke’? The answer isn’t so simple

By Lisa Eadicicco, CNN
(CNN) — President Donald Trump wants to make the United States a leader in artificial intelligence – and that means scrubbing AI models of what he believes are “woke” ideals.
The president on Wednesday said he signed an executive order prohibiting the federal government from procuring AI technology that has “been infused with partisan bias or ideological agendas such as critical race theory.” It’s an indication that his push against diversity, equity and inclusion is now expanding to the technology that some expect to be as critical for finding information online as the search engine.
The move is part of the White House’s AI action plan announced on Wednesday, a package of initiatives and policy recommendations meant to push the US forward in AI. The “preventing woke AI in the federal government” executive order requires that government-used AI large language models – the type of models that power chatbots like ChatGPT – adhere to Trump’s “unbiased AI principles,” including that AI be “truth-seeking” and show “ideological neutrality.”
“From now on, the US government will deal only with AI that pursues truth, fairness and strict impartiality,” he said during the event.
It brings up an important question: Can AI be ideologically biased, or “woke”? The answer isn’t straightforward, according to experts.
AI models are largely a reflection of the data they’re trained on, the feedback they receive during that training process and the instructions they’re given – all of which influence whether an AI chatbot provides an answer that seems “woke,” which is itself a subjective term. That’s why bias in general, political or not, has been a sticking point for the AI industry.
“AI models don’t have beliefs or biases the way that people do, but it is true that they can exhibit biases or systematic leanings, particularly in response to certain queries,” Oren Etzioni, former CEO of the Seattle-based AI research nonprofit the Allen Institute for Artificial Intelligence, told CNN.
Trump’s push against ‘woke’ tech
Trump’s executive order includes two “unbiased AI principles.” The first one, called “truth seeking,” says large language models should “be truthful in seeking factual information or analysis.” That means they should prioritize factors like historical accuracy and scientific inquiry when asked for factual answers, according to the order.
The second principle, “ideological neutrality,” says large language models used for government work should be “neutral” and “nonpartisan” and that they shouldn’t manipulate responses “in favor of ideological dogmas such as DEI.”
“In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex,” the executive order says.
Developers shouldn’t “intentionally code partisan or ideological judgments” into the model’s responses unless the user prompts them to do so, the order says.
The focus is primarily on AI models procured by the government, as the order says the federal government should be “hesitant to regulate the functionality of AI models in the private marketplace.” But many major technology companies have contracts with the government; Google, OpenAI, Anthropic and xAI were each awarded $200 million to “accelerate Department of Defense adoption of advanced AI capabilities” earlier this month, for example.
The new directive builds on Trump’s longstanding claims of bias in the tech industry. In 2019, during Trump’s first term, the White House urged social media users to file a report if they believed they had been “censored or silenced online” on sites like Twitter, now named X, and Facebook because of political bias. However, Facebook’s own data showed in 2020 that conservative news content significantly outperformed more neutral content on the platform.
Trump also signed an executive order in 2020 targeting social media companies after Twitter labeled two of his posts as potentially misleading.
On Wednesday, Senator Edward Markey (D-Massachusetts) said he sent letters to the CEOs of Google parent Alphabet, Anthropic, OpenAI, Meta, Microsoft and xAI, pushing back against Trump’s “anti-woke AI actions.”
“Even if the claims of bias were accurate, the Republicans’ effort to use their political power — both through the executive branch and through congressional investigations — to modify the platforms’ speech is dangerous and unconstitutional,” he wrote.
Why AI chatbots respond the way they do
While bias can mean different things to different people, some data suggests people see political bents in certain AI responses.
A paper from the Stanford Graduate School of Business published in May found that Americans view responses from certain popular AI models as being slanted to the left. Brown University research from October 2024 also found that AI tools can be altered to take stances on political topics.
“I don’t know whether you want to use the word ‘biased’ or not, but there’s definitely evidence that, by default, when they’re not personalized to you … the models on average take left wing positions,” said Andrew Hall, a professor of political economy at Stanford Graduate School of Business who worked on the May research paper.
That’s likely because of how AI chatbots learn to formulate responses: AI models are trained on data, such as text, videos and images from the internet and other sources. Then humans provide feedback to help the model determine the quality of its answers.
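To make that feedback step concrete, here is a deliberately simplified sketch in Python (the answer names, scores and update rule are invented for illustration; real training adjusts millions of model weights, not a lookup table). It shows how consistent human ratings can tilt which answer a system gives by default.

```python
# Toy illustration only: no real model or training library is used.
# The point is how repeated human ratings can shift a system's default
# answer, the way feedback during training can imprint a leaning.

# Hypothetical scores the "model" uses to choose between two candidate
# answers to the same prompt.
answer_scores = {"answer_a": 0.0, "answer_b": 0.0}

def apply_human_feedback(preferred: str, rejected: str, step: float = 0.1) -> None:
    """Shift scores toward the answer a human rater preferred."""
    answer_scores[preferred] += step
    answer_scores[rejected] -= step

# If raters consistently prefer answer_a over answer_b...
for _ in range(10):
    apply_human_feedback("answer_a", "answer_b")

# ...the system's default choice drifts toward answer_a.
print(max(answer_scores, key=answer_scores.get))  # -> answer_a
```

If the raters, the training data or the instructions lean one way, a model’s defaults can drift the same way – without anyone deliberately writing an ideology into the system.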
Changing AI models to tweak their tone could also result in unintended side effects, Himanshu Tyagi, a professor at the Indian Institute of Science and co-founder of AI company Sentient, previously told CNN. One adjustment, for example, might cause another unexpected change in how a model works.
“The problem is that our understanding of unlocking this one thing while affecting others is not there,” Tyagi told CNN earlier this month. “It’s very hard.”
Elon Musk’s Grok AI chatbot spewed antisemitism in response to user prompts earlier this month. The outburst happened after xAI — the Musk-led tech company behind Grok — added instructions for the model to “not shy away from making claims which are politically incorrect,” according to system prompts for the chatbot publicly available on software developer platform GitHub and spotted by The Verge.
xAI apologized for the chatbot’s behavior and attributed it to a system update.
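The “instructions” at issue live in a system prompt: a standing block of hidden text sent to the model ahead of every user message. As a rough sketch of that mechanism (the prompt wording and model name below are hypothetical stand-ins, not Grok’s actual configuration, and the OpenAI Python SDK is used only as a familiar example), a few lines of this hidden text can steer every answer a chatbot gives:

```python
# Illustrative sketch: the system prompt and model name are hypothetical,
# not Grok's real settings. A system prompt is hidden text sent before
# the user's message that shapes every response the model produces.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a helpful assistant. Present multiple perspectives on "
    "contested topics and avoid taking political positions."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # the hidden instructions
        {"role": "user", "content": "Tell me about a contested political topic."},
    ],
)
print(response.choices[0].message.content)
```

Because editing that hidden text changes behavior without retraining the model, system prompts are both an easy lever to pull – as the Grok episode showed – and the artifact the executive order asks vendors to disclose.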
In other instances, AI has struggled with accuracy. Last year, Google temporarily paused its Gemini chatbot’s ability to generate images of humans after it was criticized for creating images that included people of color in contexts that were historically inaccurate.
Hall, the Stanford professor, has a theory about why AI chatbots may produce answers that people view as slanted to the left: Tech companies may have put extra guardrails in place to prevent their chatbots from producing content that could be deemed offensive.
“I think the companies were kind of like guarding against backlash from the left for a while, and those policies may have further created this sort of slanted output,” he said.
Experts say vague descriptions like “ideological bias” will make it challenging to shape and enforce new policy. Will there be a new system for evaluating whether an AI model has ideological bias? Who will make that decision? The executive order says vendors would comply with the requirement by disclosing the model’s system prompt, the set of backend instructions that guides how LLMs respond to queries, along with its “specifications, evaluations or other relevant documentation.”
But questions remain about how the administration will determine whether models adhere to the principles. After all, avoiding some topics or questions altogether could be perceived as a political response, said Mark Riedl, a professor of computing at the Georgia Institute of Technology.
It may also be possible to work around constraints like these by simply commanding a chatbot to respond like a Democrat or Republican, said Sherief Reda, a professor of engineering and computer science at Brown University who worked on its 2024 paper about AI and political bias.
For AI companies looking to work with the government, the order could add yet another requirement to meet before shipping new AI models and services, which could slow innovation – the opposite of what Trump is trying to achieve with his AI action plan.
“This type of thing… creates all kinds of concerns and liability and complexity for the people developing these models — all of a sudden, they have to slow down,” said Etzioni.