WEB DESK: European Union (EU) ministers have signed off on a groundbreaking law that establishes regulations for the use of artificial intelligence (AI) in sensitive sectors and situations.
Under the law, AI systems utilised in areas such as law enforcement and employment must demonstrate transparency, accuracy, and compliance with cybersecurity standards. Additionally, they must meet criteria regarding the quality of the data used for training.
The vote by EU countries follows the European Parliament’s backing of the AI legislation two months ago. The legislation, which surpasses the voluntary compliance approach to AI in the US, could have a significant impact worldwide.
Key points of the law
- AI systems intended for “high-risk” situations must obtain certification from approved bodies before entering the EU market. These situations include those where AI use could potentially harm health, safety, fundamental rights, the environment, democracy, elections, and the rule of law.
- Systems like the social credit scoring used in China and biometric categorization systems based on personal characteristics such as religious beliefs, sexual orientation, and race will be outright banned.
- While the law generally prohibits real-time facial recognition in CCTV, exceptions are made for law enforcement purposes, such as finding missing persons, preventing human trafficking, or identifying suspects in serious criminal cases.
- AI systems posing limited risks must meet lighter transparency standards, including disclosing that their content is AI-generated.
- A new “AI Office” within the European Commission will ensure enforcement of the law at the EU level.
The law now awaits signature by the presidents of the EU legislature and publication in the EU’s statute book. It formally comes into force 20 days after publication, but most provisions will not take effect until two years after that.
Global commitments on AI safety
At a global summit in Seoul, more than a dozen leading AI firms, including Alphabet’s Google, Meta, Microsoft, and OpenAI, made fresh safety commitments. These commitments aim to provide transparency and accountability in the development of safe AI.
UK Prime Minister Rishi Sunak hailed these commitments as crucial steps toward ensuring AI safety. The summit, hosted by South Korea and Britain, underscores widespread concerns regarding the potential risks associated with AI development.