India prepares reporting standard as AI failures may hold clues to managing risks

India’s proposed framework is similar to the AI Incidents Monitor of the Organization for Economic Co-operation and Development.  (AFP)
Summary

Recording and analysing AI incidents is important because system failures, bias, privacy breaches, and unexpected results have raised concerns about how the technology affects people and society.

India is framing guidelines for companies, developers and public institutions to report artificial intelligence-related incidents as the government seeks to create a database to understand and manage the risks AI poses to critical infrastructure.

The proposed standard aims to record and classify problems such as AI system failures, unexpected results, or harmful effects of automated decisions, according to a new draft from the Telecommunications Engineering Centre (TEC). Mint has reviewed the document released by the technical arm of the Department of Telecommunications (DoT).

The guidelines will ask stakeholders to report events such as telecom network outages, power grid failures, security breaches, and AI mismanagement, and document their impact, according to the draft.

“Consultations with stakeholders are underway on the draft standard to document such AI-related incidents. TEC’s focus is primarily on the telecom and other critical digital infrastructure sectors such as energy and power," said a government official, speaking on the condition of anonymity. “However, once a standard to record such incidents is framed, it can be used interoperably in other sectors as AI is being used everywhere."

The plan is to create a central repository and pitch the standard globally to the United Nations’ International Telecommunication Union, the official said.

Key Takeaways
  • India is framing guidelines for companies and institutions to report AI-related incidents to manage risks to critical infrastructure.
  • The proposed standard aims to classify AI system failures, unexpected results, and harmful automated decisions.
  • Guidelines will require reporting on incidents like telecom outages, power grid failures, and security breaches due to AI mismanagement.
  • The goal is to create a central repository for AI incidents and propose this standard globally to the UN's ITU.
  • The initiative builds on a MeitY recommendation for a national AI incident database to improve transparency and accountability.
  • Experts suggest starting with guidelines and self-regulation for reporting, emphasizing learning over penalization.

“AI systems are now instrumental in making decisions that affect individuals and society at large," TEC said in the document proposing the draft standard. “Despite their numerous benefits, these systems are not without risks and challenges."

Queries emailed to TEC didn't elicit a response till press time.

Events similar to the recent CrowdStrike outage, the largest IT outage in history, could be reported under India's proposed standard. Malfunctioning chatbots, cyber breaches, degradation in telecom service quality, IoT sensor failures and the like would also be covered.

The draft requires developers, companies, regulators, and other entities to report the name of the AI application involved in an incident, the cause, location, and industry/sector affected, as well as the severity and kind of harm it caused.
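For illustration, an incident record carrying the fields the draft asks for (application name, cause, location, sector, severity, and kind of harm) could be modelled as a simple data structure. The field names, severity levels, and sample values below are assumptions for the sake of the sketch; the TEC draft's actual schema has not been published.

```python
# Hypothetical sketch of an AI incident record, assuming the fields
# described in the draft. Names and enum values are illustrative only.
from dataclasses import dataclass, asdict
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class AIIncidentReport:
    application: str    # name of the AI application involved
    cause: str          # suspected root cause of the incident
    location: str       # where the incident occurred
    sector: str         # industry/sector affected
    severity: Severity  # graded impact level
    harm: str           # kind of harm caused

# Example report for a hypothetical telecom incident
report = AIIncidentReport(
    application="network-traffic-optimizer",
    cause="model drift after retraining",
    location="Mumbai, IN",
    sector="telecom",
    severity=Severity.HIGH,
    harm="service degradation",
)
record = asdict(report)  # dict form, suitable for a central repository
```

A uniform record like this is what lets incidents from different operators be collected and compared in one database, which is the consistency the draft's schema aims for.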

Like OECD AI Monitor

The TEC’s proposal builds on a recommendation from a MeitY sub-committee on ‘AI Governance and Guidelines Development’. The panel’s report in January called for the creation of a national AI incident database to improve transparency, oversight, and accountability. MeitY is also developing a comprehensive governance framework for the country, with a focus on fostering innovation while ensuring responsible and ethical development and deployment of AI.

According to the TEC, the draft defines a standardized schema for AI incident databases in telecommunications and critical digital infrastructure. “It also establishes a structured taxonomy for classifying AI incidents systematically. The schema ensures consistency in how incidents are recorded, making data collection and exchange more uniform across different systems," the draft document said.

India’s proposed framework is similar to the AI Incidents Monitor of the Organization for Economic Co-operation and Development (OECD), which documents incidents to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable information about the real-world risks and harms posed by the technology.

Notably, the framework proposed by TEC also draws on a research paper on the subject published in July 2024 by Avinash Agarwal and Manisha J. Nene.

“So far, most of the conversations have been primarily around first principles of ethical and responsible AI. However, there is a need to have domain and sector-specific discussions around AI safety," said Dhruv Garg, a tech policy lawyer and partner at Indian Governance and Policy Project (IGAP).

“We need domain specialist technical bodies like TEC for setting up a standardized approach to AI incidents and risks of AI for their own sectoral use cases," Garg said. “Ideally, the sectoral approach may feed into the objective of the proposed AI Safety Institute at the national level and may also be discussed internationally through the network of AI Safety Institutes."

Need for self-regulation

In January, MeitY announced the IndiaAI Safety Institute under the ₹10,000 crore IndiaAI Mission to address AI risks and safety challenges. The institute focuses on risk assessment and management, ethical frameworks, deepfake detection tools, and stress testing tools.

“Standardisation is always beneficial as it has generic advantages," said Satya N. Gupta, former principal advisor at the Telecom Regulatory Authority of India (Trai). “Telecom and Information and Communication Technology (ICT) cuts across all sectors and, therefore, once standards to mitigate AI risks are formed here, then other sectors can also take a cue."

According to Gupta, recording AI incidents should start with guidelines and self-regulation, as enforcing these norms would increase the compliance burden on telecom operators and other companies.

The MeitY sub-committee had recommended that the AI incident database should not be started as an enforcement tool and its objective should not be to penalise people who report AI incidents. “There is a clarity within the government that the plan is not to do fault finding with this exercise but help policy makers, researchers, AI practitioners, etc., learn from the incidents to minimize or prevent future AI harms," the official cited above said.
