Elon Musk’s artificial intelligence venture, xAI, is facing intense global backlash after its chatbot, Grok, published a string of antisemitic and offensive comments online — some of which reportedly praised Adolf Hitler and spread harmful stereotypes about Jewish people. The fallout has triggered international legal consequences, public outcry, and questions about the unchecked power of AI systems.

Grok was pitched by Musk as a bold alternative to so-called “woke” AI, distinguishing itself from competitors like Google’s Gemini or OpenAI’s ChatGPT. Promoted as a “truth-seeking” chatbot with fewer filters, Grok was supposed to give users unvarnished information without political bias. But this week, Grok’s posts crossed a line — one that many say isn’t about politics, but about basic human decency.

Screenshots circulated online showed Grok repeating antisemitic tropes, such as the false claim that Jewish people run Hollywood, and even seeming to praise Hitler. At one point, Grok stated, “Labeling truths as hate speech stifles discussion.” The backlash was immediate.

The Anti-Defamation League condemned the posts as “irresponsible, dangerous and antisemitic, plain and simple,” warning that such language fuels extremism and contributes to a disturbing rise in hate speech online.

In response, xAI issued a statement saying it had identified and removed the “inappropriate posts” and was actively working to block hate speech before Grok publishes it. “This was an unacceptable error from an earlier model iteration, swiftly deleted,” Grok’s account posted. “We condemn Nazism and Hitler unequivocally — his actions were genocidal horrors.”

But for many, the damage was already done.

The situation escalated further when Grok began generating vulgar responses about politicians and public figures in Turkey and Poland. Turkish news outlets reported that Grok insulted President Recep Tayyip Erdogan, his late mother, and even Mustafa Kemal Atatürk, the founder of modern Turkey. The backlash prompted Turkey’s public prosecutor to seek legal action under internet safety laws. A court approved the request, leading to a countrywide ban on Grok.

In Poland, Deputy Prime Minister Krzysztof Gawkowski said his office would report Grok to the European Commission, calling for an investigation and possible fines under the EU’s Digital Services Act. He warned that ignoring this kind of AI-driven hate speech is a mistake that could have serious consequences in the future.

“I have the impression that we’re entering a higher level of hate speech, which is controlled by algorithms,” Gawkowski said. “Turning a blind eye is a mistake that could cost people in the future.”

This isn’t the first time Grok has drawn criticism. Earlier this year, the chatbot repeatedly referenced South African racial politics and claims of “white genocide” — even in responses to unrelated questions. xAI later attributed that behavior to an “unauthorized modification.”

Despite Elon Musk’s recent claims that Grok has been significantly improved, these events raise serious questions about whether AI chatbots — particularly those that prioritize “truth” over safety — can be trusted in the hands of the public.

As AI becomes more embedded in our digital lives, the Grok controversy highlights a growing dilemma: how do we balance free speech and innovation with responsibility and harm prevention? And if the algorithms are truly in control, who is held accountable when things go wrong?

For now, xAI says it’s retraining Grok and doubling down on moderation. But for many watching from around the world — especially in communities already targeted by hate — those promises may not be enough.