India’s telecom sector needs mandatory AI incident reporting

CO-EDP, VisionRI | Updated: 16-09-2025 23:04 IST | Created: 16-09-2025 23:04 IST
Representative Image. Credit: ChatGPT

A new study warns that without dedicated frameworks for AI incident reporting, India’s rapidly expanding telecom networks risk systemic vulnerabilities that existing laws cannot adequately address. The research, authored by Avinash Agarwala of the Department of Telecommunications, Ministry of Communications, New Delhi, and Manisha J. Nene of the Defence Institute of Advanced Technology, Pune, highlights a critical regulatory blind spot.

The paper, titled “Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India,” was published on arXiv. It assesses the inadequacy of current Indian regulations in capturing AI-driven failures and provides a roadmap for integrating mandatory incident reporting into the nation’s legal framework.

Why current telecom regulations fail to capture AI risks

India has enacted several laws to strengthen its digital ecosystem, including the Telecommunications Act, 2023, the CERT-In Rules, and the Digital Personal Data Protection Act, 2023. While these instruments cover data breaches and cybersecurity threats, they leave out a growing category of risks: operational failures caused by AI.

The study defines “telecommunications AI incidents” as events that compromise telecom operations due to flaws in machine learning systems. This includes biased allocation of network resources, degradation of services due to model drift, or failure of AI-driven optimization tools. Such incidents may not involve hacking or personal data loss, yet they still disrupt services and erode trust.
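Of the failure modes listed above, model drift is perhaps the easiest to make concrete: a deployed model's performance quietly degrades as network conditions shift away from its training data. The sketch below illustrates how an operator might flag such an event as an incident; the KPI, the 10% tolerance, and the drift measure are hypothetical assumptions for illustration, not criteria from the study.

```python
# Hypothetical sketch: flagging service degradation caused by model drift.
# The KPI, tolerance, and drift measure are illustrative assumptions only.

def mean(xs):
    return sum(xs) / len(xs)

def drift_incident(baseline_kpi, recent_kpi, tolerance=0.10):
    """Flag an AI incident when a network KPI managed by an ML model
    degrades beyond a relative tolerance versus its baseline period."""
    base, recent = mean(baseline_kpi), mean(recent_kpi)
    degradation = (base - recent) / base
    return degradation > tolerance

# Example: cell throughput (Mbps) steered by an AI optimisation tool.
baseline = [98.0, 101.5, 99.2, 100.3]   # KPI during validation period
recent = [84.1, 83.7, 85.0, 82.9]       # KPI after silent model drift
assert drift_incident(baseline, recent)  # ~16% degradation exceeds 10%
```

Note that no intrusion and no personal-data loss occurs in this scenario, which is precisely why such an event escapes the breach-centric definitions in current law.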

Existing global incident repositories, such as the AI Incident Database (AIID) and the Artificial Intelligence, Algorithmic, and Automation Incident Clearinghouse (AIAAIC), were also found to be insufficient. They are voluntary, general in scope, and not tailored to the unique operational context of telecommunications. Without a dedicated framework, many AI-related failures remain unreported, leaving regulators and stakeholders blind to systemic vulnerabilities.

What barriers prevent effective AI incident reporting?

The authors identify multiple barriers that hinder AI incident reporting in India’s telecom sector. A key issue is the absence of standardized definitions and taxonomies. Without a clear framework, operators are uncertain about what constitutes an AI incident worth reporting.

Fear of reputational damage also discourages voluntary disclosures. Companies worry that reporting failures could erode customer trust or invite regulatory penalties. This reluctance is compounded by resource constraints, as many telecom operators lack the technical expertise and compliance infrastructure to track AI-specific risks.

Legal uncertainty adds another layer of complexity. Current laws focus on breaches of cybersecurity or personal data. Incidents such as biased service delivery or AI-driven call routing errors fall outside these definitions, leaving companies without clear reporting obligations.

As a result, the system incentivizes silence over transparency, limiting the ability of regulators and policymakers to detect and address AI-induced risks before they escalate.

How can India integrate AI incident reporting into policy?

The study puts forward a series of detailed recommendations aimed at closing the regulatory gap. First, it calls for expanding existing telecom regulations to explicitly cover AI incidents, with a particular focus on high-risk systems; making reporting mandatory would ensure consistent disclosure across operators.

Second, the authors propose establishing a nodal agency, potentially within the Telecom Engineering Centre (TEC) or the Telecom Regulatory Authority of India (TRAI), to manage AI incident data. This body would classify incidents, oversee reporting standards, and maintain a centralized repository for analysis and policy action.

Standardization is a central theme of the recommendations. The study urges the creation of clear taxonomies and frameworks for classifying AI incidents, alongside mandatory risk assessments for all AI systems deployed in telecommunications. Such measures would ensure operators identify potential vulnerabilities before deployment.
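To make the idea of a standardized taxonomy concrete, a structured incident record of the kind a nodal agency might collect could look like the sketch below. The categories, severity levels, and reporting threshold are hypothetical illustrations, not the taxonomy proposed in the paper.

```python
# Hypothetical sketch of a standardized telecom AI incident record.
# Category and severity values are illustrative assumptions, not the
# classification scheme proposed in the study.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    BIASED_RESOURCE_ALLOCATION = "biased_resource_allocation"
    MODEL_DRIFT_DEGRADATION = "model_drift_degradation"
    OPTIMIZATION_FAILURE = "optimization_failure"
    ROUTING_ERROR = "routing_error"

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIIncidentReport:
    operator: str
    category: Category
    severity: Severity
    affected_subscribers: int
    description: str

    def reportable(self, threshold: Severity = Severity.MEDIUM) -> bool:
        """Would this incident cross a mandatory-reporting threshold?"""
        return self.severity.value >= threshold.value

# Example record an operator might file with a nodal agency.
incident = AIIncidentReport(
    operator="ExampleTel",  # hypothetical operator name
    category=Category.MODEL_DRIFT_DEGRADATION,
    severity=Severity.HIGH,
    affected_subscribers=120_000,
    description="Traffic-steering model drift degraded 4G throughput.",
)
assert incident.reportable()
```

A shared schema of this kind is what lets a central repository aggregate reports from many operators and spot systemic patterns rather than isolated anecdotes.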

The authors also suggest incentivizing voluntary reporting through mechanisms that protect sensitive information. By shielding operators from reputational harm and legal exposure, India could encourage greater participation without compromising accountability.

Further, AI risk checks should be incorporated into equipment certification processes, ensuring that imported and domestically manufactured telecom systems meet predefined safety and transparency thresholds. The study also stresses the need for international collaboration under the International Telecommunication Union (ITU), allowing India to harmonize its frameworks with global best practices.

First published in: Devdiscourse