IT Ministry cuts takedown timelines for online intermediaries to hours | Tech News


The Ministry of Electronics and Information Technology said on Tuesday that social media and internet intermediaries will, starting February 20, have to remove problematic content within three hours instead of the 36 hours provided so far.

Apart from this, intermediaries must also remove non-consensual intimate images from their respective platforms within two hours instead of the 24-hour period provided so far.

These changes have been notified by the IT Ministry as amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

The deadlines were compressed following stakeholder feedback to the IT Ministry that the earlier limits of 24 hours and 36 hours were too long, especially for such sensitive content, and failed to stop it from going viral.


“Technology companies now have an obligation to remove illegal content much faster than before. They certainly have the technical means to do so,” said a senior IT ministry official.

In the latest amendments notified on Tuesday, the IT Ministry required platforms that allow users to generate content using artificial intelligence (AI) to clearly identify such synthetically generated or modified content through visible disclosures or labels.

In addition to placing visible disclaimers on such AI-generated content, intermediaries must also, where possible, embed permanent metadata or similar identifiers to help trace the content's origin.

While defining synthetically generated information (SGI) as any audio, visual, or audio-visual information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that appears real, authentic, or truthful, the ministry has exempted “good faith” editing of content using AI tools from the definition of SGI.

The newly notified amendments also clarify that once an intermediary becomes aware of misuse of its tools to create, host or distribute SGI, it must deploy “reasonable” and “appropriate” technical measures to prevent the presence of such content on the platform.

The new amendments are significantly broader than what was circulated in the draft for consultation, said Aman Taneja, partner at Delhi-based law firm Ikigai Law.

“While the government has refined the definition of synthetic content and moved away from prescriptive requirements such as the mandatory 10 percent visual watermark, it has simultaneously reduced takedown deadlines across all content categories to just a few hours. This significantly raises the compliance bar. For large platforms, meeting these deadlines at scale will be an operational challenge and could push companies towards excessive takedown,” Taneja said.

Other experts, however, believe the amendments mark a more calibrated approach to regulating AI-generated deepfakes.

“By narrowing the definition of synthetically generated information, relaxing overly prescriptive labeling requirements, and exempting legitimate uses like accessibility, the government has addressed key industry concerns, while signaling a clear intention to strengthen platform accountability,” said Rohit Kumar, founding partner at public policy firm The Quantum Hub.
