Centre Gives X 72 Hours Over Grok AI Abuse

GG News Bureau
New Delhi, 3rd Jan: 
The Indian government, specifically the Ministry of Electronics and Information Technology (MeitY), issued a 72-hour ultimatum to X (formerly Twitter) on January 2, 2026.

The primary reason for this ultimatum is the misuse of X’s AI chatbot, Grok, to generate and distribute obscene, sexually explicit, and non-consensual deepfake content, specifically targeting women and children.

Key Reasons for the Ultimatum
Misuse of Grok AI: Reports and complaints (including those from MP Priyanka Chaturvedi) highlighted that users were prompting Grok to manipulate images of women, often to sexualize or "undress" them.

Protection of Minors: There were specific instances of Grok generating sexualized images of minors, which the government flagged as a violation of the Protection of Children from Sexual Offences (POCSO) Act.

Violation of IT Rules: The government stated that X failed in its “statutory due diligence” under the IT Rules, 2021. These rules require platforms to remove prohibited content within strict timelines.

Legal Compliance: The notice cited violations of the Bharatiya Nyaya Sanhita (BNS) and the Indecent Representation of Women (Prohibition) Act.

Potential Consequences for X
If X fails to comply with the directive within the 72-hour window, it faces several severe legal risks:

Loss of “Safe Harbour” Protection: Under Section 79 of the IT Act, platforms are generally not held liable for what users post. If this protection is revoked, X could be held legally responsible for every piece of illegal content on its platform.

Criminal Prosecution: The government warned of potential criminal action against the company’s responsible officers, including the Chief Compliance Officer in India.

Strict Penalties: Failure to follow the order could lead to significant fines or even blocking orders under Section 69A of the IT Act.

Current Status
X has acknowledged "lapses in safeguards" regarding Grok and stated that it is urgently fixing them. The government has demanded a detailed Action Taken Report (ATR) explaining the technical and governance changes made to prevent such AI misuse in the future.