What it reveals about AI, free speech, and accountability

‘How long before Grok is banned in India?’ It’s the question Indian users on X have asked repeatedly in recent days.In February this year, Elon Musk’s xAI announced that its latest Grok 3 AI chatbot would be free for anyone to use. Since then, its rollout has been as chaotic as everything else on the billionaire-owned social media platform.
Ask Grok a query with some colourful language thrown in, and the edgy, unfiltered AI chatbot is likely to snap back with some expletives of its own – Hindi slang and misogynic slurs included.
Story continues below this ad
The ‘unhinged’ nature of Grok also triggered a wave of political questions about Prime Miner Narendra Modi, Congress leader Rahul Gandhi, and other related topics. This, mostly from users looking to validate their own ideological leanings engaging the AI chatbot for fact-finding purposes, an AI use case that experts have strongly cautioned against.
As the controversy around Grok ballooned, its sensational AI-generated responses drew the attention of the Union Minry of Information and Technology. “We are in touch, we are talking to them (X) to find out why it is happening and what are the issues. They are engaging with us,” anonymous IT minry officials were quoted as saying PTI.
The IT Minry taking notice of Grok’s profanity-laden and politically sensitive responses has prompted some of India’s leading tech policy experts to caution against hasty regulatory actions at the risk of enabling censorship and inhibiting innovation.
“The IT minry does not ex to ensure that all Indians, or indeed that all machines, use parliamentary language,” Pranesh Prakash, the co-founder of the Centre for Internet and Society (CIS), told .Story continues below this ad
“Further, this provides cause to be worried if this leads to companies self-censoring perfectly legal speech just because governments object to it. That creates a chilling effect on freedom of expression,” he said.
Meanwhile, the incident has raked up key issues such as AI-generated misinformation, accountability for AI-generated outputs, challenges of content moderation, and the need for procedural safeguards. It also echoes the public criticism from stakeholders over the central government’s now-withdrawn AI advisory that was issued around the same time last year.
What dinguishes Grok from other AI chatbots?
The name ‘Grok’ comes from a character in a science fiction novel titled Stranger in a Strange Land Robert A Heinlein. It means “to fully and profoundly understand something,” according to Musk.
But beyond the sci-fi references, Grok has been advertised Musk as an ‘anti-woke’ alternative to chatbots such as OpenAI’s ChatGPT and Google’s Gemini. In an interview with conservative pundit Tucker Carlson last year, Musk said his interest in AI was motivated fears that exing AI models had left-wing bias baked into their training datasets and overall design.Story continues below this ad
“I’m worried about the fact that it’s being trained to be politically correct,” Musk had said. Hence, Musk proposed to design an AI chatbot that would not only give it to users straight but also provide “spicy” responses.
Grok has the dinguishing capability of searching and using data on X (such as public posts users) to provide “up-to-date information and insights.” The AI chatbot has further been integrated on X in such a way that users only have to tag Grok in their public timelines to receive a response.
It also offers an “unhinged” mode for premium users, which may result in Grok being objectionable, inappropriate, and offensive, as per its own website.
Noting that Grok appears to be designed with wit, humour, and a rebellious streak, Rohit Kumar, founding partner of public policy firm The Quantum Hub (TQH), expressed concerns over the dissemination of unfiltered and harmful content directly on X the AI chatbot.Story continues below this ad
“To my mind, the biggest issue in the Grok case is not its output but its integration with X, which allows direct publishing onto a social media platform where content can spread unchecked, potentially leading to real-world harm, such as a riot,” Kumar told .
These harms may get amplified if users believe Grok’s responses to be credible, which many unfortunately do. While Grok provides links to sources for users to verify information more easily, these citations are not always visible when users engage with the AI chatbot tagging its automated account on X.
Grok’s AI-generated replies to user queries on X do not always carry citations or links to sources/web pages. Grok’s use for fact-checking purposes has raised concerns that it can fuel misinformation. (Screenshot: X)
In August last year, five US secretaries of state penned an open letter to Elon Musk, urging him to course-correct Grok after it reportedly provided users with false information about ballot deadlines for several states ahead of the 2024 US presidential election.
Do AI-generated responses qualify as free speech?
Like corporations, AI chatbots are not human. While they cannot explicitly be granted free speech rights, can AI-generated outputs be considered to fall under exing legal frameworks governing speech?Story continues below this ad
Meghna Bal, the director of Esya Centre, a Delhi-based tech policy think tank, said that any speech, be it human or AI-generated, has to be considered in the context of the legal bounds it is crossing.
“We have to consider, first, whether it comes within the teeth of permissible restrictions on speech under the Constitution, and then unbundle where, and how, it crosses the line under different laws governing speech in the country,” she said.
On whether Grok is liable to face criminal action for its allegedly abusive responses, Bal said that there is a case of liability if there is wilful neglect and the deployer of the AI chatbot has not undertaken any measures to moderate outputs. “But here again I think we have to be careful to ensure that any speech acted upon falls within the permissible restrictions on speech under the Constitution,” Bal said.
Who is responsible for AI-generated speech?
Though the question of whether developers should be held accountable for their AI models is a tricky one, Bal said that there is some legal precedent that suggests holding deployers liable for the content generated AI systems.Story continues below this ad
In a landmark ruling last year, Air Canada was directed a civil court to honour a false refund policy made up an AI chatbot on its website.
According to the lawsuit, the plaintiff reportedly asked Air Canada’s AI chatbot about bereavement fares and was informed that he could submit a ticket for a reduced bereavement rate within 90 days of issue. However, when the plaintiff applied for the reduced fare, his request was denied because the airline’s bereavement policy did not apply post-travel.
Bal noted that the Air Canada case treated AI chatbots as publishers since the court had rejected the airline’s argument that it was not responsible for the information provided the AI chatbot.
However, experts have stressed that managing AI risks requires a highly contextual approach. “For instance, the responsibility of a deployer of an AI chatbot who makes certain promises in the context of use in a hospital setting would be very different from that of X for Grok being used for conversations,” Prakash said.Story continues below this ad
Even if chatbots are considered to be intermediaries (and not publishers), Bal proposed creating a safe harbour specifically for developers of generative AI services. The basic premise of safe harbour protection is: online platforms observe a certain set of due diligence in exchange for protection from liability arising out of the actions of their users.
The safe harbour framework for AI companies “could borrow from the end-user license agreements and user codes of conduct and content policies created some companies for their LLMs (large language models),” Bal suggested.
What is the best way to police AI chatbots?
There are many ways to moderate the outputs from AI chatbots. However, there are even more ways to circumvent these controls, and a lot of this circumvention cannot be foreseen, Bal said.
Such techniques to circumvent the in-built guardrails in AI chatbots are known as AI jailbreaks. Simply put, it is when a user makes the AI chatbot break the rules that the developer has set for it.Story continues below this ad
According to a blog post Microsoft, AI models are susceptible to jailbreaks because they possess attributes “similar to an eager but inexperienced employee trying to help your other employees with their productivity.” In the Grok case too, several users appeared to deliberately provoke the chatbot on everything from cricket to Bollywood and politics.
“Literature also indicates that it is much easier to attack a generative AI service (through prompt engineering) than guard against such attacks,” Bal said.
Meanwhile, Kumar argued that AI chatbot outputs should not be directly policed. “Instead, developers should be required to assess risks, be more transparent about the datasets used for training to ensure diversity, and conduct thorough red-teaming and stress testing to mitigate potential harms,” he said.