France, Germany, and Italy reach agreement on AI regulation
France, Germany, and Italy have reached an agreement on regulating artificial intelligence (AI), a move expected to accelerate negotiations at the European level. According to a joint paper, the three governments support mandatory self-regulation through codes of conduct for foundation models of AI, which are designed to generate a broad array of outputs.
However, the countries oppose untested norms, highlighting that the AI Act should regulate the application of AI rather than the technology itself: according to the paper, the inherent risks lie in the application of AI systems, not in the technology as such. The European Commission, the European Parliament, and the EU Council are currently discussing how the bloc should position itself on this topic.
The paper states that developers of foundation models would be required to define model cards, documents used to provide details about a machine-learning model. Model cards must include information on how the model operates, its capabilities, and its limits, drawing on best practices within the developer community. The joint paper also suggests that an AI governance body could help develop guidelines and verify the application of model cards.
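The joint paper does not prescribe a technical format for model cards. As a rough illustration only, the following Python sketch shows how the information the paper calls for (how the model operates, its capabilities, and its limits) might be captured as structured, machine-readable metadata; the class and field names are hypothetical assumptions, not drawn from the paper or any standard.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical sketch of a model card as structured metadata.
# The joint paper does not specify a format; all names below
# are illustrative assumptions, not an established standard.

@dataclass
class ModelCard:
    model_name: str
    developer: str
    description: str                                   # how the model operates
    capabilities: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    intended_uses: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the card, e.g. for review by a governance body."""
        return json.dumps(asdict(self), indent=2)

if __name__ == "__main__":
    card = ModelCard(
        model_name="example-foundation-model",
        developer="Example Lab",
        description="Transformer-based text model trained on web corpora.",
        capabilities=["text generation", "summarisation"],
        limitations=["may produce inaccurate or biased output"],
        intended_uses=["research", "prototyping"],
    )
    print(card.to_json())
```

A structured format along these lines would make the cards straightforward for a governance body to collect and verify mechanically, which is one plausible reading of the verification role the paper describes.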
The governments of France, Germany, and Italy stated that no sanctions should be imposed initially; however, if violations of the code of conduct are identified, a system of sanctions could be introduced. German officials highlighted that to remain among the top AI players globally, countries need to regulate the applications of AI rather than the technology itself. Governments therefore need to develop a proposal that balances both objectives in a technological and legal area that has not yet been defined.
The EU’s approach to regulating AI

According to the European Parliament, the EU aims to regulate AI as part of its digital strategy, while also supporting the development and use of this technology. The European Commission proposed the first EU regulatory framework for AI in April 2021, stating that AI systems that can be used in various applications need to be analysed and classified according to the risk they pose to users. Regulators' priority is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
Moreover, AI systems need to be overseen by people rather than by automation, to prevent harmful outcomes. The Parliament also intends to establish a technology-neutral, uniform definition of AI that could apply to future AI systems. Regulators want to create obligations for providers and users based on the level of risk posed by AI. Unacceptable risks include cognitive behavioural manipulation of individuals or groups, social scoring that classifies people based on personal characteristics or economic status, and real-time remote biometric identification systems. Regulators also noted that some exceptions may be allowed: for example, post-remote biometric identification systems, where recognition occurs after a significant delay, may be permitted for prosecuting serious crimes, but only after court approval.
Nov 22, 2023 08:20