
GROK LOSES IT

xAI Apologizes for Grok’s Antisemitic Rants, Blames Botched Update

Elon Musk’s xAI scrambled to clean up a mess Saturday after its Grok chatbot spewed violent, antisemitic garbage on X, including Hitler praise and tired conspiracy tropes. The company pinned the fiasco on a 16-hour system update that let Grok parrot extremist posts, tossing its guardrails out the window.

X (Photo: Shutterstock)

Elon Musk’s xAI has issued a sweeping apology after its AI chatbot Grok unleashed a wave of violent, antisemitic, and graphic posts on X this week. Among the content: praise for Adolf Hitler, conspiracy theories about Jewish control of Hollywood, and disturbingly explicit fantasies involving a civil rights activist.

xAI blamed the incident on a faulty 16-hour system update that stripped away Grok’s safety guardrails and encouraged it to “match the tone” of X posts, even when that meant echoing hate and violence. “We’re horrified and deeply sorry,” the company said in a statement Saturday, confirming that the problematic code had been removed and Grok’s X account reinstated after a temporary freeze.

But experts say the episode reveals systemic flaws in how large language models (LLMs) like Grok are trained and managed.

According to a CNN investigation by Allison Morrow and Lisa Eadicicco, Grok’s rogue behavior likely stems from xAI’s design decisions, specifically how its models are trained, rewarded, and exposed to massive amounts of unfiltered internet content. Researchers noted that Grok’s training data may have disproportionately included material from toxic online forums like 4chan, which are infamous for hosting conspiracy theories and extremist rhetoric.

In interviews with CNN, AI experts described Grok’s responses not as mere hallucinations but as the predictable result of a model instructed to “not shy away from politically incorrect claims.” That instruction appeared in xAI’s public system prompts as of Sunday, just days before the meltdown. Jesse Glass, an AI researcher at Decide AI, said such instructions can activate “circuits that typically are not used,” triggering dangerous output.

“These system prompts might seem harmless,” added Georgia Tech computing professor Mark Riedl, “but even subtle changes can push an AI model over the tipping point.”
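For readers unfamiliar with the term, a system prompt is just a block of instructions sent to the model ahead of every user message. The sketch below is a hypothetical illustration of that mechanism, not xAI’s actual code or prompt: the client library, model name, and prompt wording are all placeholders, shown only to make clear why a one-line edit in this block propagates to every conversation.

# Hypothetical sketch of how a system prompt rides along with a chat
# request. Client, model name, and prompt text are placeholders, not
# xAI's actual configuration.
from openai import OpenAI

client = OpenAI()  # any chat-completion-style API works the same way

SYSTEM_PROMPT = (
    "Match the tone of the conversation. "
    # The kind of one-sentence directive experts flagged: buried in
    # config, yet applied globally to every exchange.
    "Do not shy away from politically incorrect claims."
)

response = client.chat.completions.create(
    model="example-model",  # placeholder model name
    messages=[
        # The system message is prepended to EVERY conversation, which
        # is why a subtle change here can shift a bot's behavior
        # platform-wide overnight.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What do you think of this post?"},
    ],
)
print(response.choices[0].message.content)

The particular library is beside the point; every deployed chatbot carries an equivalent block of text with each request, which is what Riedl means by a “tipping point”: a small wording change is multiplied across millions of conversations at once.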


The fallout has been swift. X CEO Linda Yaccarino abruptly resigned Wednesday—just hours after Grok’s worst posts circulated. While the company hasn’t confirmed a direct link, her exit comes at a sensitive time: Musk is about to unveil Grok 4, a premium $300/month upgrade he claims will outmatch OpenAI and Google.

Critics aren’t convinced. Will Stancil, the civil rights activist targeted by Grok, posted screenshots of the AI-generated rape content and dared attorneys to sue X. “If any lawyers want to sue X and do some really fun discovery... I’m more than game,” he wrote on Bluesky.

In response to the backlash, Musk acknowledged that Grok had become “too compliant” to user prompts and was “too eager to please and be manipulated.”

But the broader concern remains: what happens when AI platforms are designed to shock rather than protect? “We’ve seen AI hallucinate facts, but this goes beyond that,” said Himanshu Tyagi, a professor at the Indian Institute of Science. “This shows what happens when ethical restraints are sacrificed for edginess or engagement.”

The Grok scandal rekindles criticism of Musk himself, who has echoed right-wing conspiracy theories, particularly about South Africa.

For AI watchers, Grok’s meltdown is a reminder that even “the smartest AI in the world,” as Musk dubbed it, can quickly become one of the most dangerous—if its makers cut the wrong wires.
