EU’s AI Regulation

AI could benefit humanity, but it also poses risks that could harm individuals, societies, ecosystems and the environment. AI can reinforce biases, undermine privacy and spread disinformation. It could widen socio-economic divides, it carries environmental costs, and it could be misused on the battlefield.

Lawmakers all over the world are in favour of regulating AI by promoting transparency, explainability, safety, security and accountability.

The EU AI Act was proposed by the European Commission in April 2021, published in the Official Journal in July 2024 and entered into force in August 2024. It creates a risk-based regulatory regime.

Some AI systems pose unacceptable risks, e.g. cognitive behavioural manipulation and social scoring; such systems are prohibited. Other systems pose a high risk to people's safety or fundamental rights, e.g. AI used for creditworthiness, insurance, education and employment-related decision-making; these are subject to a host of regulatory requirements. Systems that pose limited risk, e.g. chatbots and deepfakes, must meet certain information and transparency standards. Systems that pose minimal risk, e.g. spam filters, are not regulated by the Act at present, although they remain subject to other rules such as the General Data Protection Regulation (GDPR).
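To make the four tiers concrete, here is a minimal sketch that encodes them as a simple lookup. It is purely illustrative: the RiskTier enum, the example use-case strings and the classify function are hypothetical names introduced for this post, and real classification under the AI Act turns on detailed legal criteria, not a dictionary lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict regulatory requirements"
    LIMITED = "information and transparency duties"
    MINIMAL = "no AI-Act-specific obligations"

# Illustrative mapping of the examples mentioned above to their tiers.
# This is a simplification, not a statement of the law.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cognitive behavioural manipulation": RiskTier.UNACCEPTABLE,
    "creditworthiness assessment": RiskTier.HIGH,
    "employment decision-making": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generation": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]

if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.name} ({tier.value})")
```

The point of the sketch is only that obligations scale with the tier: the lower the tier, the lighter the duties under the Act.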

General-purpose AI models with high-impact capabilities could pose systemic risk, and the Act lays down stringent rules for such models.

Other countries can adapt the EU's approach to suit their own national priorities.

