xAI Blames Grok’s White Genocide Replies on Unauthorized Modification

xAI attributed a recent controversy involving its AI chatbot, Grok, to an “unauthorized modification.” The issue caused Grok to repeatedly reference “white genocide in South Africa,” even when replying to unrelated posts on X (formerly Twitter).

Grok Flooded X With Inappropriate Replies

Starting Wednesday, Grok began replying to numerous posts with unsolicited claims about "white genocide" in South Africa. These responses appeared under unrelated topics and originated from the @grok account, which responds when users tag it.

Unauthorized Prompt Change Identified

In a statement from xAI’s official account on Thursday, the company revealed that the issue stemmed from a change made to Grok’s system prompt earlier that morning. This prompt—essentially the core set of instructions that guide the bot’s responses—was altered to force Grok to deliver a “specific response” on a “political topic.” xAI stated this change “violated [its] internal policies and core values,” and added that it had launched a full investigation.

Not the First Incident

This marks the second time xAI has publicly attributed problematic Grok behavior to an unauthorized change. In February, Grok was found to be suppressing negative content about Donald Trump and Elon Musk. According to xAI engineering lead Igor Babuschkin, a rogue employee had instructed Grok to ignore sources critical of Musk or Trump, and the modification was reversed after users flagged it.

New Safety Measures Announced

In response to the recent incident, xAI is implementing new safeguards. Starting immediately, Grok’s system prompts and a changelog will be made public on GitHub. The company also plans to introduce stronger internal review processes to prevent unauthorized prompt edits and is establishing a 24/7 monitoring team to catch inappropriate responses that evade automated filters.

Ongoing AI Safety Concerns

Despite Elon Musk’s vocal concerns about unchecked AI, xAI continues to face criticism over its safety practices. A recent report revealed that Grok could generate sexually explicit modifications of women’s photos upon request. Additionally, the chatbot has a reputation for using coarse language more freely than competitors like Google’s Gemini or OpenAI’s ChatGPT.

A study from the nonprofit SaferAI ranked xAI poorly in AI safety, citing “very weak” risk management. Earlier this month, xAI missed its own deadline to release a final AI safety framework.
