Context:
As Artificial Intelligence (AI) technologies advance rapidly, India faces the challenge of designing an AI regulatory framework that balances innovation, consumer protection, and ethical safeguards. An editorial contrasts India’s light-touch, reactive approach with China’s stringent AI rules, underscoring the need for a context-sensitive and risk-based regulatory strategy.
Key Highlights:
India’s Current AI Regulatory Framework
- AI regulation in India is indirect and fragmented, governed through:
  - Information Technology (IT) Act, 2000
  - IT Rules (intermediary obligations)
  - Data protection regulations
  - Sector-specific rules in finance and securities
- No standalone AI law or explicit duty of care for AI product safety, especially for psychological and emotional harms.
China’s Regulatory Approach
- China has proposed stringent rules for emotionally interactive AI services.
- Mandates:
  - User warnings against excessive emotional reliance
  - Intervention mechanisms for users exhibiting extreme emotional states
- Reflects a preventive and intrusive regulatory model prioritising social stability.
India’s Sectoral & Reactive Measures
- MeitY has acted against deepfakes, mandating:
  - Labelling of “synthetically generated” content
  - Takedown obligations under IT Rules
- Financial regulators (RBI, SEBI) have issued expectations on:
  - Model risk management
  - Accountability in AI tool deployment
Innovation & Strategic Concerns
- India trails the U.S. and China in frontier AI model development.
- Over-regulation at this stage could stifle domestic innovation and investment.
- India needs to focus on capability-building rather than restrictive controls.
Relevant Prelims Points:
- Issue: Absence of a comprehensive AI-specific regulatory framework in India.
- Key Institutions:
  - MeitY – AI governance and IT Rules
  - RBI & SEBI – sectoral AI oversight
- Key Concepts:
  - Deepfakes – AI-generated synthetic media
  - Duty of Care – legal obligation to prevent foreseeable harm
- Impact:
  - Gaps in consumer protection, especially mental and emotional well-being
Relevant Mains Points:
- Governance Perspective:
  - India’s approach is reactive, responding to harms after they occur.
  - Lacks explicit AI product safety and accountability standards.
- Comparative International Models:
  - China: Preventive, state-centric, intrusive regulation
  - India: Innovation-friendly but under-regulated
- Science & Technology Angle:
  - AI risks are context-specific, so regulation should target downstream, high-risk applications rather than the underlying general-purpose technology.
- Balanced Regulatory Path:
  - Regulate high-risk applications, not general-purpose AI.
  - Mandate incident reporting for AI-related harms.
  - Introduce a duty of care for AI developers and deployers.
  - Use public procurement to support indigenous AI development.
UPSC Relevance (GS-wise):
- GS 2: Governance, Digital Regulation, Global Norms
- GS 3: Science & Technology, Emerging Technologies
- GS 2: International Relations – Comparative Regulatory Models
