The Indian government has taken a decisive step towards regulating the development and deployment of artificial intelligence (AI) tools within its jurisdiction. An advisory issued by the Indian IT ministry on March 1 requires tech companies to obtain government approval before publicly releasing any AI tools deemed “unreliable” or still in the testing phase. These tools must also be clearly labeled to indicate their potential for providing inaccurate responses.
The advisory states that AI tools can only be made available to users on the Indian internet with the government’s explicit permission. This move is aimed at ensuring that AI technologies do not compromise the integrity of the electoral process, especially with general elections looming this summer.
This policy announcement follows recent criticisms directed at Google and its AI tool, Gemini, for producing biased or inaccurate responses, including controversial descriptions of Indian Prime Minister Narendra Modi. Google has since apologized, acknowledging Gemini’s unreliability, especially concerning current social topics.
Rajeev Chandrasekhar, India’s deputy IT minister, emphasized on X (formerly Twitter) that merely acknowledging the unreliability of these AI tools does not exempt platforms from their legal obligation to ensure safety and trust. The government had previously announced in November plans to introduce regulations to combat the spread of AI-generated deepfakes ahead of the elections, mirroring similar regulatory efforts in the United States.
However, the government’s latest advisory has sparked concerns within the tech community, with critics arguing that such regulations could hinder India’s leadership in the tech industry. Chandrasekhar responded to these concerns by asserting that legal consequences should apply to platforms facilitating or generating unlawful content. He reaffirmed India’s commitment to AI development and the goal of providing a safe, trustworthy digital ecosystem.
Chandrasekhar clarified that the advisory aims to guide companies deploying experimental AI platforms on the public internet, ensuring they are aware of their legal obligations and the potential consequences under Indian law.
In a move to expand AI innovation and accessibility, Microsoft partnered with Indian AI startup Sarvam on February 8, integrating an Indic-voice large language model (LLM) into its Azure AI infrastructure, targeting a broader user base in the Indian subcontinent. This partnership highlights the ongoing efforts to advance AI technology in India while balancing innovation with legal and ethical considerations.