Context:
The Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the IT Rules, 2021 to mandate disclosure and labelling of AI-generated (synthetic) content on social media platforms. This move aims to tackle rising concerns related to deepfakes, misinformation, impersonation, and online fraud.
Key Highlights / Details:
- The draft amendment requires users to label AI-generated or manipulated content before uploading it to platforms.
- It applies to all formats – text, images, audio, and video – including deepfakes, cloned voices, edited images, and AI-written posts.
- Social media intermediaries must proactively detect and label untagged AI content using automated tools.
- The draft defines "synthetically generated information" as content that is artificially created, modified, or altered using a computer resource in a manner that makes it appear authentic or real.
- The government stated the move is essential to combat misuse of AI for political propaganda, defamation, identity theft, and cybercrimes.
- MeitY has sought public feedback on the draft rules until November 8.
- Platforms that fail to comply may lose safe-harbour protection under Section 79 of the IT Act, making them legally liable.
- The initiative is part of India's broader effort to regulate AI responsibly without stifling innovation.
Relevant Prelims Points:
- IT Rules, 2021 – Prescribe due diligence obligations for intermediaries.
- Intermediary – Defined under the IT Act, 2000 as any entity that receives, stores, or transmits data on behalf of another person (e.g., YouTube, Facebook, WhatsApp).
- Deepfake – AI-generated or AI-manipulated media that convincingly mimics a real person's face, voice, or actions.
- Section 79, IT Act (Safe Harbour) – Shields intermediaries from liability for third-party content, provided they observe due diligence.
- Grievance Appellate Committees (GACs) – Set up under the IT Rules, 2021 to hear appeals against intermediaries' decisions on content grievances.
Relevant Mains Points:
- Impact of AI on information integrity, elections, national security, and society.
- Ethical and legal concerns: privacy violation, consent, digital manipulation.
- Need for regulation without hindering digital innovation.
- Importance of algorithmic accountability and responsible AI governance.
- Challenges – limitations of detection technology, enforcement burden on platforms, and the risk of over-censorship.
