Analysis

Controversy over Grok’s images: Uncertainty surrounds intermediary liability

Can a platform that enables an AI system invoke the legal protections designed for neutral digital intermediaries?

On 2 January 2026, the Ministry of Electronics and Information Technology (MeitY) issued a notice to X (formerly Twitter) after the platform was flooded with explicit, non-consensual AI-generated images attributed to Grok, the generative AI chatbot integrated into the platform. The notice followed widespread public criticism.

The notice stated that the platform failed to “observe statutory due diligence obligations” mandated under the Information Technology Act, 2000 (IT Act) and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. MeitY demanded a report detailing the corrective actions undertaken by X and stated that non-compliance would result in the platform losing the immunity guaranteed under Section 79 of the IT Act.

The episode has brought to the forefront important questions about platform responsibility in the wake of the Supreme Court’s Shreya Singhal judgment.

X’s response

X acknowledged its mistake and the deficiencies in its oversight. It outlined its moderation mechanisms and the immediate actions taken, including removal of content and suspension of accounts. This did not satisfy the IT ministry, which described the platform’s reply as “inadequate”.

The ministry sought details on post-facto enforcement and clarity on preventive mechanisms, particularly those embedded in Grok’s architecture, data inputs and filtering mechanisms. This reflects a deeper regulatory concern: holding platforms accountable not only for removing unlawful content after the fact but also for building systems that do not generate it in the first place. It also raises a core legal question: did the platform take reasonable care to prevent such misuse to begin with? While reactive moderation may signal cooperation, it does not answer questions about structural safeguards.

Shrinking safe harbour protection? 

Section 79 of the IT Act provides a form of legal immunity to intermediaries for third-party content transmitted or hosted on their platforms. However, this protection is conditional: it presumes that intermediaries neither initiate, modify, nor select the recipients of such content, i.e. that the platforms remain functionally passive.

Judicial interpretation has reinforced this understanding. In MySpace Inc. v Super Cassettes Industries (2016) and Christian Louboutin SAS v Nakul Bajaj (2018), the Delhi High Court distinguished passive intermediaries from proactive ones, holding that immunity may be forfeited in the latter case.

Generative AI systems challenge this framework. While users, in this case X users, supply the prompts, the output is shaped by internal model configurations, training data and policy filters, each of which is controlled by the platform.

The government’s request for technical details about Grok’s internal mechanisms indicates a regulatory shift. In Indian intermediary law, the standard is not merely about intent; it is about responsibility.

This regulatory strategy appears to favour administrative oversight over judicial escalation. It also allows regulators to retain flexibility, given the legal ambiguity surrounding AI-generated content. A judicial intervention would set a precedent applicable not just to Grok but to other generative AI tools as well.

In X Corp. v Union of India, the Karnataka High Court held that failure to comply with requirements under Section 79(3) could lead to loss of intermediary protection. The Court further observed that algorithmic systems should not be allowed to infringe upon constitutional rights under the pretext of technological neutrality. The matter is currently under appeal before the Division Bench.

Global regulatory concern

India is not alone in its concern over the implications of generative AI. Authorities in the European Union, the United Kingdom and Australia have similarly flagged risks arising from AI-generated content. Malaysia and Indonesia have blocked Grok after X’s responses to their notices proved unsatisfactory.

India’s regulatory framework, however, relies on pre-existing laws rather than developing AI-specific legislation. While this allows for quicker regulatory intervention, it also places increasing pressure on a framework designed for simpler, user-generated content ecosystems.

An uncertain legal landscape

At present, X retains its legal immunity under Section 79. However, AI platforms like Grok exist in a regulatory grey area. Each new regulatory demand for transparency and every instance of AI misuse strains the legal doctrine underpinning that protection, and will eventually force Parliament to confront new questions about intermediary liability.

The government’s strategy of administrative engagement, rather than litigation, leaves key doctrinal questions unresolved—including whether synthetic expression generated by autonomous systems falls within the scope of intermediary immunity.

Experts at the Internet Freedom Foundation (IFF) argue that this controversy demonstrates the need for appropriate, targeted legislation to address failures of platform design, technical and consent-related safeguards, and enforcement. They contend that relying on content removal strategies and intermediary liability stretches the existing legal framework. Notably, they describe the ministry’s response as ad hoc and fragmented, since it does not have the power to issue general compliance directions or advisories under the IT Act or the IT Rules. They have appealed to regulators and platforms to address abuse in a victim-centred manner while respecting constitutional free speech protections.

Instead, the IFF has suggested a “Safety by Design” approach: mandatory adversarial testing (red teaming); adoption of content provenance standards so that AI-generated images are clearly labelled and can be detected, traced and removed; and empowering regulatory bodies to order algorithmic disgorgement when large language models (LLMs) are found to be fundamentally unsafe and cause serious harm by generating Non-Consensual Intimate Imagery (NCII) or Child Sexual Abuse Material (CSAM).
