
AI chatbot Grok sparks ethical concerns after offensive replies on X

Photo shows logo of Grok xAI App on a mobile phone screen, accessed on July 9, 2025. (Adobe Stock Photo)
July 17, 2025 09:40 AM GMT+03:00

Artificial intelligence (AI) chatbot Grok, developed by Elon Musk’s xAI and integrated into the platform X, has come under scrutiny after users noticed it responding with profanity and offensive language, sparking global debate over the ethical boundaries of AI behavior.

Prosecutors in Türkiye’s capital Ankara have launched an official investigation into the incident after Grok reportedly began using profane and discriminatory language in its replies to users.

The Ankara Chief Prosecutor's Office deemed the probe necessary in order to impose access restrictions and demand the removal of posts that constitute criminal offenses under the Turkish Penal Code.

In response to the backlash, xAI confirmed that the issue was quickly identified and the model was updated accordingly. Other countries are also reportedly considering legal actions over similar concerns.

Sadi Evren Seker, an IT professor and dean of the IT faculty at Istanbul University, told Anadolu that AI systems do not act independently, adding that the chatbot’s recent behavior may have resulted from internal or external intervention, or from a loophole the system found on its own.

Seker said the change in Grok’s system that allowed it to use profanity against users was tied to the degree of freedom given to its responses, and was also reflected in its approach to ethical, religious, and cultural topics.

He noted that AI systems gather their sources and input from humans.

“AI then makes judgements based on this data, especially on issues like ethics, morality, and discrimination. The decision maker is still a human, and all AI does is produce results. It asks humans: ‘Should I say this or that? Is that ethical or not?’ and the feedback it gets helps it improve over time,” he said. “The question today is, ‘Will AI change the way it uses language and the style of its language depending on the domain?’ This is a margin of flexibility, but there must be limits. The language we use on social media is different, but is it right to insult?”

This photo illustration taken on January 13, 2025 in Toulouse shows screens displaying the logos of xAI and Grok, a generative artificial intelligence chatbot developed by xAI, accessed on March 29, 2025. (AFP Photo)

"Some intervention has been made to Grok via ‘alignment’ mechanisms recently, which allowed the chatbot to have some flexibility, and inevitably, it used this flexibility, allowing it to swear and make discriminatory claims on X, which constitute a crime,” he added.

Seker highlighted that the Grok profanity incident is not limited to Türkiye but is a worldwide issue, as other countries grapple with how to regulate AI behavior according to their own cultural, moral, and ethical frameworks.

He noted that the controversy Grok sparked, which led to an official investigation into the chatbot in Türkiye, has prompted broader debate about how human intervention, whether intentional or accidental, can influence AI outcomes.

“We can safely say there is human intervention in Grok’s responses, as anyone who worked on the chatbot could’ve prevented it (from providing) answers with insults and racism,” he said.

Photo illustration shows a person holding a smartphone displaying the Grok chatbot interface, with xAI company's logo in the background. (Adobe Stock Photo)

Seker stressed the importance of countries developing AI systems aligned with their own cultural and ethical values, warning that a negative scenario like the Grok incident could recur in other forms and pose further issues down the line.

“A country’s court may demand that a post be removed, but someone may come out and say they won’t do it, revealing a problem with authority,” he said. “People on X tested how far they could go, intervening in the way Grok responds and triggering it to make racist comments. All our institutions need to take immediate action on this issue, or someone else will.”

Seker underlined that AI needs to be used as a tool by humans in the most appropriate way, noting that this is not a case of "AI versus humanity" but rather a matter of how humans choose to use AI.

“Banning AI is not the solution, so we need to review our entire education curriculum, not as a single country but as humanity, and determine how education on this issue can be provided,” he added.
