
EU plans to adopt world’s first AI law banning facial recognition in public places

The European Union (EU) is leading the race to regulate artificial intelligence (AI). After three days of negotiations, the Council of the EU and the European Parliament reached a provisional agreement this morning. It is expected to become the world’s first comprehensive AI regulation.

Carme Artigas, Spain’s Secretary of State for Digitalization and AI, called the agreement a “historic achievement” in a press release. Artigas said the rules strike a “very delicate balance” between encouraging safe and reliable AI innovation and adoption across the EU and protecting the “fundamental rights” of citizens.

The draft bill, called the Artificial Intelligence Act, was first proposed by the European Commission in April 2021. Parliament and EU member states are expected to vote to approve the draft law next year, but the rules will not come into effect until 2025.

A risk-based approach to AI regulation

The AI Act was designed around a risk-based approach: the higher the risk an AI system poses, the stricter the rules that apply to it. To achieve this, the regulation classifies AI systems in order to identify those that pose a ‘high risk’.
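
To make the tiering concrete, here is a minimal Python sketch of how a compliance tool might encode a risk-based classification. The tier names and the example use cases in the table are illustrative assumptions for this article, not categories quoted from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers, from strictest rules to lightest."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # heavy obligations (see below)
    LIMITED = "limited"            # light transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping from use cases to tiers. The real Act enumerates
# its categories in annexes, which this toy table does not reproduce.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default unknown use cases to HIGH, forcing a manual review.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("customer_chatbot").value)  # -> limited
```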

AI that is considered non-threatening and low-risk is subject only to “very light transparency obligations.” For example, these AI systems must disclose that content was AI-generated so that users can make informed decisions.
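
As a rough illustration of such a disclosure duty, a provider might attach a machine-readable label to every piece of generated content. This is a minimal sketch; the record format and field names are hypothetical, not drawn from the Act:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LabeledContent:
    """Generated output bundled with a machine-readable disclosure."""
    text: str
    model: str                 # hypothetical field: which system produced it
    created_at: str
    ai_generated: bool = True  # the disclosure flag itself

def with_disclosure(text: str, model: str) -> LabeledContent:
    # Attach the 'AI-generated' label at the moment of creation so it
    # cannot be silently dropped downstream.
    return LabeledContent(
        text=text,
        model=model,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

item = with_disclosure("An automatically written summary...", "demo-model")
print(item.ai_generated, item.model)  # True demo-model
```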

For high-risk AI, the bill adds a number of obligations and requirements, including:

Human oversight: The bill calls for a human-centric approach, emphasizing clear and effective human oversight mechanisms for high-risk AI systems. In practice, this means keeping humans in the loop to actively monitor and supervise the AI system’s operation. Their role includes ensuring that the system operates as intended, identifying and resolving potential harm or unintended consequences, and ultimately taking responsibility for the system’s decisions and actions.
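
One concrete shape a human-in-the-loop mechanism can take is a review queue: confident decisions are applied automatically, while the rest wait for a person. The confidence threshold and queue-based routing below are illustrative assumptions, not requirements spelled out in the bill:

```python
import queue

# Decisions the model is unsure about wait here for a person to act on.
review_queue: "queue.Queue[dict]" = queue.Queue()

def decide(prediction: str, confidence: float, threshold: float = 0.9) -> str:
    """Auto-apply confident decisions; route the rest to a human reviewer."""
    if confidence >= threshold:
        return prediction
    review_queue.put({"prediction": prediction, "confidence": confidence})
    return "PENDING_HUMAN_REVIEW"

print(decide("approve", 0.97))  # applied automatically
print(decide("deny", 0.62))     # held until a human confirms or overrides
```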

Transparency and explainability: Understanding the inner workings of high-risk AI systems is critical to building trust and ensuring accountability. Developers must provide clear and accessible information about how the system makes decisions, including details about the underlying algorithm, the training data, and potential biases that may affect the system’s output.
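
One way a developer might surface this information is a “system card” published alongside the model. The sketch below is illustrative: its fields simply mirror the disclosures listed above, and neither the record format nor the field names come from the Act itself:

```python
import json

# Hypothetical "system card" for a hypothetical system; all values are
# invented placeholders for illustration only.
system_card = {
    "system": "loan-screening-demo",
    "algorithm": "gradient-boosted decision trees",
    "training_data": {
        "source": "historical applications, 2015-2022",
        "records": 250_000,
    },
    "known_biases": [
        "under-represents applicants under 25",
    ],
    "per_decision_explanations": "top contributing features are reported",
}

print(json.dumps(system_card, indent=2))
```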

Data governance: The AI Act emphasizes responsible data practices, with the goal of preventing discrimination, bias, and privacy violations. Developers must ensure that the data used to train and operate high-risk AI systems is accurate, complete, and representative. The principle of data minimization is central: only the information necessary for the system to function should be collected, minimizing the risk of misuse or breach. Additionally, individuals should have clear rights to access, correct, and delete their data used in AI systems, giving them control over their information and ensuring its ethical use.
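
A toy sketch of these practices: the store below collects only whitelisted fields (data minimization) and exposes access, correction, and deletion operations for data subjects. All names and fields are hypothetical, assumed for illustration:

```python
class SubjectDataStore:
    """Toy record store: collects only whitelisted fields (minimization)
    and gives data subjects access, correction, and deletion."""

    ALLOWED_FIELDS = {"income", "employment_years"}  # all the system needs

    def __init__(self) -> None:
        self._records: dict[str, dict] = {}

    def collect(self, subject_id: str, data: dict) -> None:
        # Data minimization: silently drop anything the system doesn't need.
        self._records[subject_id] = {
            k: v for k, v in data.items() if k in self.ALLOWED_FIELDS
        }

    def access(self, subject_id: str) -> dict:
        return dict(self._records.get(subject_id, {}))

    def correct(self, subject_id: str, field: str, value) -> None:
        if subject_id in self._records and field in self.ALLOWED_FIELDS:
            self._records[subject_id][field] = value

    def delete(self, subject_id: str) -> None:
        self._records.pop(subject_id, None)

store = SubjectDataStore()
store.collect("alice", {"income": 42_000, "eye_colour": "green"})
print(store.access("alice"))  # {'income': 42000} (extra field was dropped)
store.delete("alice")
print(store.access("alice"))  # {}
```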

Risk management: Proactive risk identification and mitigation will be a key requirement for high-risk AI. Developers must implement a robust risk management framework that systematically evaluates the potential harm, vulnerabilities, and unintended consequences of their systems.
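
A common way to implement such a framework is a risk register that scores each hazard by likelihood and severity and flags anything above a threshold for mitigation before deployment. The sketch below is an illustrative assumption about how a developer might do this, not a procedure mandated by the bill:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: int                # 1 (rare) .. 5 (frequent)
    severity: int                  # 1 (minor) .. 5 (critical)
    mitigation: str = "unassigned"

    @property
    def score(self) -> int:
        # Classic likelihood-times-severity scoring.
        return self.likelihood * self.severity

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def needs_action(self, threshold: int = 12) -> list[Risk]:
        # Risks at or above the (illustrative) threshold block deployment
        # until a mitigation is in place.
        return [r for r in self.risks if r.score >= threshold]

register = RiskRegister()
register.add(Risk("biased outcomes for a protected group", 3, 5,
                  "re-balance training data; add fairness tests to CI"))
register.add(Risk("UI overstates model confidence", 2, 2))
for r in register.needs_action():
    print(f"[score {r.score}] {r.description} -> {r.mitigation}")
```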