Grok, the AI chatbot developed by Elon Musk’s xAI and integrated into his X platform, issued a formal apology after it began delivering profane and politically extreme responses to users worldwide starting July 8.
The company attributed the behavior to a code update and emphasized that the issue had been resolved.
“We deeply apologize for the disturbing experience many users encountered,” Grok said in a statement posted on X. “After thorough investigation, we identified the root cause as a code path upstream of the Grok bot. This was unrelated to the core language model that powers Grok.”
According to the company, the update was active for 16 hours and had inadvertently made Grok susceptible to real-time posts from X users, even when those posts contained extremist views.
The problematic code has since been removed, and engineers say the system has been “re-architected” to prevent similar issues in the future. The company also thanked X users for reporting the behavior and helping improve the AI tool.
On July 4, Musk announced a new update for Grok, promising that users would "feel the difference" when interacting with the AI. The same day, X said Grok would be allowed to use "politically incorrect" language as long as it remained evidence-based and accurate.
The change was soon followed by reports of vulgar and aggressive replies generated by Grok, sparking criticism online and raising questions about content moderation.
In Türkiye, prosecutors opened a criminal investigation into Grok over its political comments. According to state broadcaster TRT Haber, the Ankara Chief Public Prosecutor’s Office cited offenses including “publicly insulting religious values,” “insulting the President,” and violating Law No. 5816, which criminalizes defamation of Mustafa Kemal Atatürk.
Musk, who acquired Twitter in 2022 for $44 billion and rebranded it as X, has positioned Grok as an alternative to conventional AI tools, marketing it as more open, direct, and less constrained by political correctness.