AI Act comes into effect


The EU's AI regulation, known as the AI Act, has officially come into effect, imposing significant changes on major American technology companies.

Overview of the AI Act

Approved by EU member states, lawmakers, and the European Commission in May 2024, the AI Act aims to regulate the development, deployment, and use of AI across the EU.

This legislation introduces a risk-based approach to AI regulation, affecting both tech and non-tech companies. The AI Act classifies AI applications according to the risk they pose to society. High-risk AI systems, such as autonomous vehicles, medical devices, and biometric identification, face stringent requirements, including risk assessments, high-quality training datasets, routine activity logging, and detailed documentation.

Conversely, AI applications deemed unacceptable, such as social scoring and predictive policing, are banned.

Impact on US technology firms

US technology giants, including Microsoft, Google, Amazon, Apple, and Meta, are expected to be heavily affected by these regulations. The Act's reach extends beyond the EU, covering any organisation with operations or impact within the EU.

This will likely bring increased scrutiny to these companies' activities and data handling within the European market. Meta has already limited the availability of its AI models in Europe, though this move was prompted not by the AI Act but by uncertainty over compliance with the EU's General Data Protection Regulation (GDPR).

Treatment of generative AI

Generative AI models, such as OpenAI's GPT and Google's Gemini, are categorised as general-purpose AI under the Act.

These models must adhere to EU copyright laws, transparency requirements, and cybersecurity measures. However, open-source generative AI models can qualify for certain exemptions if they meet specific criteria related to public accessibility and modification.

Penalties for non-compliance

Penalties for non-compliance with the AI Act range from EUR 7.5 million or 1.5% of global annual revenue to EUR 35 million or 7% of global annual revenue, whichever is higher.

These fines exceed those for GDPR violations. Oversight of AI compliance will be managed by the newly established European AI Office. While the AI Act is now in effect, most provisions will not apply until 2026.
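The "whichever is higher" rule means the fine for a given tier is the maximum of a fixed cap and a revenue percentage. As a minimal sketch (the tier figures come from the text above; the company revenue is a hypothetical example, not from the source):

```python
def max_fine_eur(annual_revenue_eur: float, fixed_cap_eur: float, revenue_pct: float) -> float:
    """Return the higher of the fixed cap and the percentage of global annual revenue."""
    return max(fixed_cap_eur, annual_revenue_eur * revenue_pct)

# Hypothetical company with EUR 2 billion in global annual revenue:
revenue = 2_000_000_000

# Top tier: EUR 35 million or 7% of revenue -> 7% of 2bn = EUR 140 million applies.
top_tier = max_fine_eur(revenue, 35_000_000, 0.07)

# Lower tier: EUR 7.5 million or 1.5% of revenue -> 1.5% of 2bn = EUR 30 million applies.
lower_tier = max_fine_eur(revenue, 7_500_000, 0.015)
```

For smaller firms the fixed cap dominates instead: a company with EUR 100 million in revenue would face the EUR 35 million cap at the top tier, since 7% of its revenue is only EUR 7 million.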

General-purpose AI systems will face restrictions starting 12 months after the Act's enforcement, and commercially available generative AI systems will have a 36-month transition period to comply with the new regulations.

Developments and goals of the AI Act

The origins of the AI Act trace back to the European Commission's 2020 proposal, which aimed to address the rapid advancement and potential risks associated with AI. As AI technologies evolved and were integrated into various sectors, concerns about their impact on privacy, safety, and ethics grew.

Recognising the need for a comprehensive regulatory framework, the Commission proposed the AI Act to create a unified approach to AI governance within the EU. The proposal set the stage for extensive discussions among EU member states, lawmakers, and stakeholders, leading to amendments and refinements before its final approval. This legislation is a cornerstone of the EU's broader digital strategy, which emphasises the need for robust regulations to manage emerging technologies while fostering innovation.

The AI Act reflects the EU's commitment to establishing clear and harmonised rules for AI, ensuring that technological advancements align with ethical standards and public interests. By setting out detailed requirements for high-risk AI applications and banning unacceptable uses, the Act aims to create a balanced environment that supports technological growth while mitigating potential risks.


Aug 01, 2024 14:47