
MeitY issues comprehensive advisory to regulate AI platforms in India


India’s Ministry of Electronics and Information Technology (MeitY) recently issued a far-reaching advisory aimed at regulating the development and deployment of artificial intelligence (AI) technology in the country. The advisory mandates that platforms must obtain explicit government permission before publicly releasing AI technology that is still under development.

The issuance of this advisory follows recent incidents where users discovered that Google’s Gemini AI chatbot was providing inaccurate and misleading information about the country’s Prime Minister.

Here are the 5 key points highlighted in the advisory:

  1. Content Regulation: Intermediaries and platforms are mandated to ensure that the use of Artificial Intelligence models, Generative AI, software, or algorithms on their computer resources does not allow users to host, display, upload, modify, publish, transmit, store, update, or share any unlawful content, as outlined in Rule 3(1)(b) of the IT Rules or any other provision of the IT Act.
  2. Electoral Process Integrity: Intermediaries and platforms must ensure that their computer resources do not permit any bias, discrimination, or threat to the integrity of the electoral process through the use of Artificial Intelligence models, Generative AI, software, or algorithms.
  3. Explicit Permission for Under-Testing AI: The deployment of under-testing or unreliable Artificial Intelligence models on the Indian Internet requires explicit permission from the Government of India. Additionally, such models must be labeled to inform users about their potential fallibility or unreliability. A ‘consent popup’ mechanism should be used to explicitly convey this information to users.
  4. User Awareness: Users must be clearly informed, through terms of services and user agreements, about the consequences of engaging with unlawful information on the platform. This includes the possibility of disabling access to or removal of non-compliant information, suspension or termination of access or usage rights, and punishment under applicable law.
  5. Labeling for Deepfakes and Misinformation: Intermediaries facilitating the synthetic creation, generation, or modification of text, audio, visual, or audio-visual information, which could potentially be used as misinformation or a deepfake, are advised to label such information or embed it with permanent unique metadata or an identifier. This identifier should allow the identification of the source, intermediary, and creator or first originator of such misinformation or deepfake (a minimal illustration follows this list).
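
The advisory does not prescribe a specific labelling format. Purely as an illustrative sketch, the snippet below embeds a unique identifier and originator details into a generated PNG’s metadata using Pillow; the field names (ai_content_id, ai_intermediary, ai_creator, ai_synthetic) and the platform/creator values are assumptions for illustration, not anything specified by MeitY.

```python
# Illustrative only: embed a hypothetical provenance identifier into a
# generated PNG's metadata using Pillow text chunks. Field names are
# assumptions; the MeitY advisory does not prescribe a format.
import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(image: Image.Image, intermediary: str, creator: str) -> None:
    """Attach a unique identifier and originator details as PNG text chunks."""
    meta = PngInfo()
    meta.add_text("ai_content_id", str(uuid.uuid4()))  # unique identifier for this content
    meta.add_text("ai_intermediary", intermediary)     # platform that facilitated generation
    meta.add_text("ai_creator", creator)               # creator / first originator
    meta.add_text("ai_synthetic", "true")              # flag marking the content as synthetic
    image.save("labelled_output.png", pnginfo=meta)

if __name__ == "__main__":
    # Stand-in for the output of a (hypothetical) generative model.
    generated = Image.new("RGB", (256, 256), color="gray")
    label_synthetic_image(generated, intermediary="example-platform", creator="user-123")
```

Note that PNG text chunks are easily stripped when content is re-encoded or screenshotted, so real provenance schemes tend to rely on more robust approaches such as C2PA content credentials or watermarking.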

In a post on X (formerly Twitter), Rajeev Chandrasekhar, Union Minister of State for IT, said that this rule applies only to “significant” platforms and not to startups in India.

This latest advisory builds on the one released in December 2023, which addressed concerns around AI-powered misinformation in the form of deepfakes.

Non-compliance with the IT Act and IT Rules could lead to serious consequences for platforms, intermediaries, and their users, including legal action under the IT Act and other criminal laws. All platforms are therefore asked to comply with these rules immediately and to submit a report to the Ministry within 15 days detailing the actions they have taken in response to this advisory.
