New Delhi: With the rise of AI, it has become difficult to tell genuine content from fake. In response, the central government has issued strict instructions to social media platforms: all AI-generated deepfake content must be removed, and AI-generated content on social media must be clearly identified as such. (AI Content Rules)
The Union IT Ministry released the draft guidelines on Tuesday. The rules cover deepfake videos, synthetic audio and manipulated visuals alike. Such content must also carry metadata and unique identifiers so that its origin can be traced. The revised rules, framed under the Information Technology Act, take effect from February 20. (New Social Media Rules)
The revised guidelines, issued for social media platforms such as Facebook, Instagram and YouTube, state that all content created using AI must be labelled so that it can be identified, and that synthetic content must carry embedded identifiers.
The Centre's revised guidelines, issued today, cut the removal deadline for social media platforms to three hours, down from the earlier 36 hours: deepfake content identified as AI-generated must be taken down within three hours of a direction from the government or a court. Once an AI label or 'marking message' has been applied to a piece of content, it cannot be removed or hidden.
The Centre said social media companies should deploy automated systems to identify illegal, sexual or fraudulent content created using AI and block its dissemination. At least once every three months, platforms must inform their users of the consequences of violating the rules or misusing AI; this may be done through their terms of use, privacy policies, contracts or any other means.
If the rules are violated in the creation, promotion, uploading, publishing, transmission, storage, updating, sharing, generation, modification or alteration of content, the platform concerned must take appropriate action. Under the Centre's new guidelines, social media platforms must also provide appropriate technology and take reasonable measures so that no one can create, promote, post or share such synthetic content, since doing so would violate the Information Technology Act, Section 45 of the Indian Penal Code, Section 32 of the POCSO Act and Section 2 of the Explosive Substances Act.

In other words, when an individual posts AI-generated content, they must declare it publicly, and social media platforms must verify that declaration. Several platforms already have such controls, which can identify whether a piece of content was created by AI or altered using AI.
