India walks back AI regulations

Fu sheng
2 min read · Mar 16, 2024


Source: perplexity.ai

India has revised its approach to regulating artificial intelligence (AI), walking back a plan that would have required tech firms to obtain government approval before launching or deploying AI models in the market. The reversal came after the initial advisory, issued on March 1, drew significant backlash from local and international entrepreneurs and investors. In response, the Ministry of Electronics and IT updated the advisory: firms are now asked to label under-tested and unreliable AI models to inform users of their potential fallibility, rather than to seek prior government approval.

The initial advisory marked a significant shift from India’s previously hands-off approach to AI regulation. Less than a year earlier, the country had declined to regulate AI growth, recognizing the sector as vital to India’s strategic interests. The March 1 advisory, however, signaled a move toward more stringent regulation, asking firms to comply even though it was not legally binding. It emphasized that AI models should not be used to share unlawful content, permit bias or discrimination, or threaten the integrity of the electoral process. It also advised the use of “consent popups” to inform users about the unreliability of AI-generated output and stressed that deepfakes and misinformation must remain easily identifiable, points the revised advisory retains.

This regulatory pivot was part of a broader global trend of countries racing to establish rules for the rapidly evolving field of AI. India’s initial move to tighten regulation, especially for social media companies, reflected its efforts to address the challenges posed by generative AI and its potential impacts on society, including risks to the integrity of the electoral process ahead of the country’s general elections.

The revised advisory reflects a balance between fostering innovation in the AI sector and addressing the potential risks associated with the deployment of under-tested or unreliable AI technologies. By advising rather than mandating the labeling of such AI models, the Indian government appears to be seeking a middle ground that encourages responsible AI development while maintaining a regulatory environment that does not unduly hinder technological progress.


Written by Fu sheng

A BOY WHO WANTS TO EARN MONEY IN HIS 20s AND CHILL IN HIS 40s.
