Context:
At the G20 Summit in Johannesburg, Prime Minister Narendra Modi called for a global compact to prevent the misuse of Artificial Intelligence (AI) and urged world leaders to adopt a “human-centric” approach to emerging technologies instead of a finance-centric outlook.
Key Highlights:
- Global call for responsible AI
  - PM Modi emphasised that AI applications must be governed by global principles, not fragmented national standards.
  - Technology should be “global rather than national” and based on open-source rather than exclusive proprietary models.
- Proposed core principles for AI governance
  - Human oversight and accountability
  - Safety-by-design architecture
  - Transparency of AI systems
  - Restrictions on AI misuse → particularly in:
    - Deepfakes
    - Crime and cyber fraud
    - Terror activities
    - War and misinformation campaigns
  - Digital trust must be a universal value for AI deployment.
- Protecting livelihoods and promoting equitable growth
  - PM highlighted that rapid AI expansion should not shift the workforce abruptly from “jobs of today” to “capabilities of tomorrow” without an adequate transition.
  - AI should enhance human capacities, not replace human decision-making.
  - India advocated global talent mobility as essential for the future workforce.
- India’s leadership in global AI diplomacy
  - India to host the Global AI Impact Summit in February 2026.
  - Linked the AI roadmap to the principles of Vasudhaiva Kutumbakam; Sabka Saath, Sabka Vikas; One Earth, One Family, One Future; and Sahajivan Sukha for inclusive technological progress.
Relevant Prelims Points:
- G20 Presidency themes: Digital transformation & responsible technology governance.
- Ethics in AI Framework:
  - Human-in-the-loop
  - Accountability
  - Safe & secure AI
- Risks of unregulated AI: deepfakes, autonomous weapons, cyber manipulation, discrimination due to biased datasets.
- India’s AI roadmap:
  - IndiaAI Mission
  - Digital India Programme
  - National Strategy for AI (NITI Aayog)
  - Responsible AI for Youth
Relevant Mains Points:
- Global AI governance challenge
  - AI is expanding faster than regulatory frameworks → risk of weaponisation, surveillance misuse, algorithmic discrimination, and cyber-economic warfare.
  - A global framework is necessary because AI transcends borders, similar to climate governance and cyber security.
- Need for human-centric AI
  - Avoid reducing humans to mere inputs in data-capital systems.
  - Promote models that:
    - Amplify human skills
    - Protect dignity and rights
    - Prevent a widening digital divide
  - Aligns with SDG 8 (Decent Work and Economic Growth), SDG 9 (Industry, Innovation & Infrastructure), and SDG 16 (Peace, Justice & Strong Institutions).
- Ethical and geopolitical stakes
  - Nations may use AI for military edge, surveillance, and propaganda, raising tensions in the global security order.
  - Balance needed between innovation and safeguards → avoid excessive regulation that hampers innovation while ensuring responsible use of technology.
Way Forward:
- Create a UN-style global regulatory platform for AI, similar to the Paris Agreement for climate change.
- Establish an international AI risk-rating system and audit mechanisms.
- Define norms on cross-border data sharing, AI supply chains, and computational sovereignty.
- Promote AI-for-development projects in the Global South.
UPSC Relevance (GS-wise):
- GS-2: International relations; multilateral diplomacy; global governance of technology.
- GS-3: Science & Technology → AI, cybersecurity, ethical algorithms, deepfake regulation.
- GS-4: AI ethics; human-centric technology; trust and accountability in emerging technologies.
