Negotiators from the European Parliament and Council have reached a provisional agreement on the Artificial Intelligence Act, a regulatory framework designed to ensure the safe deployment of AI in Europe.
According to information provided by the European Parliament, the objective of the regulation is to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability, while supporting innovation to position Europe as a leader in the field of AI. In essence, the established rules impose obligations on AI systems based on their potential risks and impact levels. The agreement includes prohibitions on certain applications of AI deemed to pose threats to citizens' rights.
Banned applications comprise biometric categorisation systems utilising sensitive characteristics, untargeted scraping of facial images for facial recognition databases, emotion recognition in workplaces and educational institutions, social scoring based on personal characteristics, and AI systems manipulating human behaviour to undermine free will. Additionally, AI applications exploiting vulnerabilities based on age, disability, or social and economic status are prohibited. However, some law enforcement exemptions were agreed upon, permitting the use of biometric identification systems in publicly accessible spaces for law enforcement purposes under strict conditions, subject to judicial authorisation, and limited to defined lists of crimes.
'Post-remote' biometric identification would be restricted to targeted searches for persons convicted or suspected of serious crimes, while 'real-time' biometric identification would be allowed under specific conditions for targeted searches of victims, prevention of specific terrorist threats, or the identification of individuals suspected of specific crimes.

Dealing with high-risk AI systems

For high-risk AI systems, which may cause significant harm to health, safety, fundamental rights, the environment, democracy, or the rule of law, obligations include a mandatory fundamental rights impact assessment. This requirement extends to the insurance and banking sectors.
Systems used to influence election outcomes and voter behaviour also fall under the high-risk category. Citizens have the right to file complaints about high-risk AI systems and receive explanations about decisions affecting their rights. General-purpose AI (GPAI) systems, and the models they are based on, must adhere to transparency requirements.
This includes technical documentation, compliance with EU copyright law, and dissemination of detailed summaries about the content used for training. Stricter obligations apply to high-impact GPAI models with systemic risk, requiring model evaluations, assessment and mitigation of systemic risks, adversarial testing, reporting to the Commission on serious incidents, ensuring cybersecurity, and reporting on energy efficiency. To support innovation and small and medium-sized enterprises (SMEs), the agreement promotes regulatory sandboxes and real-world testing established by national authorities to develop and train innovative AI before market placement.
Non-compliance with the rules can result in fines ranging from EUR 7.5 million or 1.5% of turnover to EUR 35 million or 7% of global turnover, depending on the infringement and company size. Following formal adoption by both Parliament and Council, the agreed text will become EU law. The Internal Market and Civil Liberties committees of the European Parliament will vote on the agreement in an upcoming meeting.
Dec 11, 2023 12:07