Generative AI: Global Regulation

Generative AI should not be allowed to proliferate without any restrictions. Most people agree with this proposition; even Sam Altman of OpenAI and Geoffrey Hinton, the 'godfather' of deep learning, are on the same page. Hinton has alerted the world to the existential threat generative AI could pose, and Altman has urged US lawmakers to set guardrails for AI so that it does not cause harm.

The US president’s recent executive order expects AI companies to establish new standards for AI safety and security. The European Union, too, wants to frame laws to regulate AI, but is unsure how foundational models should be regulated. Should they be regulated only during testing and release, or should they be monitored even after release?

India has lagged behind in passing digital laws. The Digital Personal Data Protection (DPDP) Act appears inadequate to regulate either AI or generative AI.

Big Tech is expected to practise ‘responsible and ethical AI’. Governments across the globe expect the companies to ensure AI safety and to prevent rogue models from being developed. It is naive to expect the companies to do this on their own; the onus lies with governments.

Lawmakers are not clear about what exactly should be regulated and controlled in generative AI. Mostly, they trust Big Tech to self-regulate.

However, lawmakers cannot ignore open-source generative AI models, which give developers the tools to build models of their own.

Are copyright and privacy laws enough to deal with generative AI? And can local laws regulate a worldwide technology? There needs to be a global consensus on this issue.

To work out regulations, it is necessary to understand the fundamentals of these models. Generative AI models are called foundational models: neural networks pre-trained on a massive corpus of data scraped from the internet and from other sources such as books, periodicals and research papers. These models remain hungry for more data in order to stay effective.
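As a rough illustration of what ‘pre-training’ means, the sketch below trains a deliberately tiny next-token predictor on a toy text string. The corpus, model size and architecture here are illustrative stand-ins only; real foundational models are transformer networks with billions of parameters trained on terabytes of scraped text, not anything resembling this toy setup.

```python
# A minimal sketch of the self-supervised pre-training objective behind
# foundational models: predict the next token in raw text.
# Everything here (corpus, architecture, sizes) is a toy stand-in.
import torch
import torch.nn as nn

corpus = "regulation of generative ai is a global question " * 100  # toy stand-in for scraped text
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}            # character-level "tokeniser"
data = torch.tensor([stoi[c] for c in corpus])

class TinyLM(nn.Module):
    """Deliberately tiny next-token predictor (a GRU, not a transformer)."""
    def __init__(self, vocab, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinyLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

block = 32
for step in range(200):
    i = torch.randint(0, len(data) - block - 1, (16,))
    x = torch.stack([data[j:j + block] for j in i])          # input tokens
    y = torch.stack([data[j + 1:j + block + 1] for j in i])  # targets: the same text shifted by one
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

The only supervision signal is the text itself, which is why the quantity and provenance of the scraped corpus matter so much to regulators.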

To keep these models current, companies use web crawlers or data scrapers: computer programs that go through websites and extract their content. Search engines have used web crawlers for a long time, but in generative AI the crawlers gather data continuously to feed model training. There are no laws specifically governing data scraping. Older laws on copyright and privacy put some curbs on what can be gathered, but they are ineffective against scraping at this scale, which also sweeps up vast amounts of personal data from the web. The crux of the matter is how to regulate this data collection for foundational models.
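For readers unfamiliar with scraping, here is a minimal, hypothetical sketch of what such a crawler does: fetch a page, strip out its text for the training corpus, and follow its links. The start URL is a placeholder, and a real pipeline runs at a vastly larger scale. Note that the robots.txt check in the sketch is a voluntary convention, not a legal obligation, which is precisely the regulatory gap described above.

```python
# Minimal sketch of a data-scraping crawler (illustrative only).
import urllib.robotparser
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

def crawl(start_url, max_pages=10):
    # robots.txt is only a politeness convention; nothing legally enforces it
    rp = urllib.robotparser.RobotFileParser(urljoin(start_url, "/robots.txt"))
    rp.read()
    seen, queue, texts = set(), [start_url], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen or not rp.can_fetch("*", url):
            continue  # skip pages the site asks crawlers to avoid
        seen.add(url)
        page = requests.get(url, timeout=10)
        soup = BeautifulSoup(page.text, "html.parser")
        texts.append(soup.get_text(" ", strip=True))  # raw text destined for a training corpus
        queue += [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
    return texts

# corpus = crawl("https://example.com")  # placeholder domain
```

Copyright and privacy laws say little about whether text collected this way may be used to train a commercial model, which is why the question keeps returning to new, purpose-built regulation.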
