Flexible AI Norms

Countries across the world are working on regulatory frameworks for AI. Google advocates a risk-based approach (instead of uniform rules) for AI applications. There should not be a 'one-size-fits-all' approach, which would hinder innovation.

Different AI models pose different risks, and regulation should be framed in proportion to those risks. Regulation should be directed at the application level rather than at the technology level.

The application layer for generative AI refers to the stage at which the technology is deployed for specific use cases.

Google is conducting continuous research on bias: what it means and how to address it. In essence, bias can be mitigated by training models on good data; models should not be trained on unsafe data.
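To illustrate the idea of keeping unsafe data out of training, here is a minimal sketch of corpus filtering. The record format, the `UNSAFE_MARKERS` list, and the `is_unsafe` check are hypothetical illustrations, not Google's actual pipeline.

```python
# Sketch: exclude records flagged as unsafe before they reach training.
# The policy labels below are placeholders for whatever a real
# moderation system would produce.

UNSAFE_MARKERS = {"hate_speech", "personal_data", "graphic_violence"}

def is_unsafe(record: dict) -> bool:
    """Flag a record whose moderation labels intersect the blocklist."""
    return bool(set(record.get("labels", [])) & UNSAFE_MARKERS)

def filter_training_data(records: list[dict]) -> list[dict]:
    """Keep only records that pass the safety check."""
    return [r for r in records if not is_unsafe(r)]

if __name__ == "__main__":
    corpus = [
        {"text": "How photosynthesis works", "labels": []},
        {"text": "(flagged content)", "labels": ["hate_speech"]},
    ]
    clean = filter_training_data(corpus)
    print(f"kept {len(clean)} of {len(corpus)} records")
```

In practice such filters are one layer among many (curation, deduplication, post-training alignment), but the principle is the same: the quality of the training data bounds the behaviour of the model.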

Recently, the government released an advisory stating that if there is bias in content generated by algorithms, search engines or AI models (such as ChatGPT and Bard), there will be no protection under the safe harbour clause of Section 79 of the IT Act.

To reduce bias, there should be cross-border flows of trusted data. Such flows would make diverse demographic data available for training, which helps address bias.

The Indian government will share the public data available with it only with firms that have a proven track record and can be considered trusted sources. Google supports this stand.
