Regulation of AI

The European Union has passed the Artificial Intelligence Act for the oversight and regulation of AI. India, too, is hosting a Global Partnership on Artificial Intelligence (GPAI) Summit to work toward a global consensus on AI regulation.

AI has transformed the manufacturing sector. It accelerates drug discovery and materials science research, reshapes healthcare and diagnostics, enables autonomous transport and small but efficient smart power grids, supports financial systems and telecom networks, and enhances the delivery of a host of public and private services.

AI also has its demerits. It can enable criminal activity, and it consolidates power in authoritarian regimes through facial recognition, surveillance and discriminatory systems. At present, human beings remain in charge of 'pulling the trigger' of dangerous military weapons; there is a risk that this power could be transferred to AI, and that a self-aware AI might develop traits such as inquisitiveness and an instinct for self-preservation. These issues must be tackled holistically. As AI spreads across economies, there should be a consensus on regulation. The ideal oversight exercises control and mitigates the possibility of harm without crippling research or the rollout of useful AI.

The European regulation attempts a technology-neutral, uniform definition of AI applicable to all future systems. AI systems are classified according to the risk they pose: the higher the risk, the greater the oversight and the more obligations imposed on providers and users.

According to the AI Act, limited-risk systems must comply with transparency requirements. Users should be made aware that they are interacting with AI; to illustrate, systems that generate images should warn against deepfakes and image manipulation, and content must be disclosed as AI-generated. The Act also curbs the generation of illegal content and requires a public summary of the copyrighted data used in training.

High-risk AI systems affect safety and fundamental rights. They fall into two categories: AI embedded in products such as toys, aviation, cars, medical devices and lifts, and AI used in specific areas such as biometric identification, critical infrastructure, education and vocational training, and AI-managed access to essential private and public services. The latter are registered in an EU database. Both categories of high-risk AI systems must be assessed before roll-out and reviewed throughout their life cycles.

Some systems pose unacceptable risks under the AI Act, for example, the behavioral manipulation of people or vulnerable groups. Biometric identification systems also fall in this category; they may be used only with court approval, to identify and apprehend criminals after a serious crime has been committed.

This basic framework can be tweaked to suit the needs of each country, but it is a reasonable template for global regulation. The framework excludes military research and development.

The GPAI summit in India could adopt some version of this Act.
