India’s proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 have drawn a mix of optimism and caution from legal and industry experts.
The changes mandate that social media platforms label AI-generated or synthetic content prominently, as part of the government’s efforts to counter the rising menace of deepfakes.
Under the draft amendments notified by the Ministry of Electronics and Information Technology (MeitY), social media platforms such as Meta, X and Google would be required to label AI-generated content with markers that cover at least 10% of the display area or the first 10% of the audio duration.
They must also seek user declarations on AI-generated content and implement “reasonable technical measures” to ensure compliance.
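As an illustration of the quantitative thresholds described above, the 10% requirements can be expressed as simple arithmetic checks. The sketch below is purely hypothetical; the function names and structure are not drawn from the draft rules, which specify only the thresholds themselves.

```python
# Hypothetical sketch: checking whether an AI-content label meets the
# draft rules' 10% thresholds (illustrative only, not official logic).

def label_meets_visual_threshold(label_w: int, label_h: int,
                                 frame_w: int, frame_h: int) -> bool:
    """Label must cover at least 10% of the display area."""
    return (label_w * label_h) >= 0.10 * (frame_w * frame_h)

def disclosure_meets_audio_threshold(disclosure_secs: float,
                                     total_secs: float) -> bool:
    """Audible disclosure must span at least the first 10% of duration."""
    return disclosure_secs >= 0.10 * total_secs

# Example: a 1920x1080 frame has 2,073,600 px, so a label must cover
# at least 207,360 px; a 640x360 label (230,400 px) would qualify.
print(label_meets_visual_threshold(640, 360, 1920, 1080))
```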
Raja Lahiri, partner at professional services firm Grant Thornton Bharat, said the amendments are both timely and necessary, given the increasing frequency of deepfake attacks that misuse individuals’ voices and likenesses.
“Given the one billion internet users in India, this is concerning and could have widespread ramifications,” he said. “The proposed amendments provide trust and safety for internet users and ensure that platforms exercise due diligence to protect Indian citizens.”
Deepfakes are “worryingly convincing” and can quickly distort facts or impersonate individuals, said Akshay Garkel, also partner at Grant Thornton Bharat. Underlining the need for public vigilance alongside regulatory action, he urged users to verify sensational content and secure their devices to reduce exposure to AI-driven manipulation.
Legal experts broadly welcomed the government’s intent but warned of practical and interpretational challenges. Ankit Sahni, partner at Ajay Sahni & Associates, described the amendments as an effort to inject “traceability and accountability” into the digital ecosystem.
“By mandating clear labelling, these amendments empower users to distinguish truth from fabrication, while providing intermediaries a safe harbour that balances innovation with trust,” he said.
Sahni cautioned that enforcement could prove tricky if users mislabel or fail to declare AI-generated content. “The remedy in such a case may again be to approach a commercial court or high court to seek injunctive relief,” he added.
Arun Prabhu, partner and co-head of Digital & TMT (Technology, Media, Telecom) at Cyril Amarchand Mangaldas, commended the intention to promote transparency but warned of “overcorrection”.
“By defining synthetic content without any intent to deceive, and requiring proactive detection, the rules may have a chilling effect on genuine content that’s simply been enhanced or cleaned up,” he said.
Shreya Suri, partner at Indus Law, called the proposed changes a “forward-looking effort” to promote transparency and accountability, but flagged ambiguity in technical compliance requirements.
“There is uncertainty in defining what constitutes ‘reasonable and appropriate technical measures’ for large platforms,” she said, adding that inconsistent interpretations could lead to uneven enforcement and place “disproportionate burdens” on some intermediaries.
Experts agreed that the success of the government’s initiative would depend on how effectively rules are implemented. The future of such regulation, Suri said, hinges on a “technology-aware framework that respects user rights and fosters an accountable yet open digital ecosystem”.








