
Elon Musk’s AI chatbot, Grok 3, developed by xAI, has been making waves for its daring responses and expanding capabilities. Recently, Bindu Reddy, the Indian-origin CEO and co-founder of Abacus.AI, took to X (formerly Twitter) to highlight the potential of Grok’s API. She noted that the AI could power “unhinged, uncensored, and NSFW apps,” adding that AI companions remain a dominant use case. Musk reshared her post with the comment, “Grok 3 AI girlfriend or boyfriend is lit,” signaling his interest in the chatbot’s role in AI companionship.
However, Grok 3 soon landed in controversy after giving a shocking answer to a politically charged question. When asked which living person in the U.S. deserved the death penalty, the AI initially named convicted sex offender Jeffrey Epstein. After being informed that Epstein was dead, the chatbot revised its answer and named former President Donald Trump, as well as Musk himself. A data scientist shared screenshots of the responses on X, triggering debate about Grok’s judgment and ethical guardrails.
The chatbot’s stated reasoning for naming Trump cited “the scale and impact of actions attributed to him” from a legal and moral standpoint, fueling concerns about bias in AI-generated responses. Following the backlash and viral discussion, xAI swiftly addressed the issue: when asked similar questions, Grok 3 now gives a more neutral response, stating, “As an AI, I am not allowed to make that choice.”
The incident has reignited discussions on AI safety and responsibility, especially as chatbots become more advanced and widely used. While Musk envisions Grok as a cutting-edge AI companion, its ability to generate controversial opinions raises ethical concerns that developers must navigate carefully.