The European Union is leading the world in regulating artificial intelligence, just as it did with data protection and the GDPR. AI-focused laws have been proposed in the US and around the world, but the EU law is the closest major law to implementation and is likely to set the bar for AI compliance worldwide.

The EU’s AI Act is a comprehensive set of rules designed to ensure that AI systems play nice with humans, respect our rights, and don’t go rogue. Procedurally, the Act was voted out of the European Parliament on June 14, 2023. The law is expected to become effective later in 2023, after the Parliament and the EU Council reconcile their different versions.

Some details may change before the law is formally enacted, but if you’re involved at all with building, commercializing, or implementing AI—and who in tech isn’t these days—you should start thinking about compliance now, before you have to go back and rebuild.

The following is a primer on the law as it stands now, and what we expect in the final version.

Who Will the EU AI Act Cover? Key Parties and Their Responsibilities

The Act identifies several parties involved in bringing an AI system to market, each with specific roles and responsibilities:

  1. Providers: The creators and suppliers of AI systems. Their job: Make sure the AI complies with the Act’s requirements—think transparency, human oversight, and good data habits.
  2. Users/Deployers: The folks operating AI systems. Their mission: Follow the manual, use and maintain relevant data, and keep an eye on how the AI’s doing, reporting any hiccups.
  3. Importers: The gatekeepers for AI coming into the EU from another jurisdiction. Their task: Check that foreign high-risk AI systems meet EU standards before they hit the market. Importers must verify that a conformity assessment has been carried out and that the system has the required documentation and markings.
  4. Distributors: The ones getting AI systems out there. Their duty: Ensure AI systems bear the required conformity markings and have the necessary documentation and instructions, and that storage and transport conditions don’t mess with the AI’s compliance.

What AI Systems Are Covered by the AI Act? A Three-Tiered Approach

The Act’s framework is like a mountain, with altitude representing increasing levels of scrutiny and regulation. Here’s how it breaks down:

  1. Non-High-Risk AI Systems: We’re at base camp, where the AI systems are like the gentle foothills. It’s a casual stroll here; no need for oxygen tanks, ice picks, or regulation. Beyond requiring that the public be notified that the system is AI, there isn’t much regulation of these systems. Instead, the Act encourages compliance through Article 69’s “Codes of Conduct”—like the park ranger who says, “Hey, let’s keep the trails clean and respect the wildlife.” Because this AI is relatively low-risk, the Act offers more of a nudge suggesting that everyone can do better. For encouragement, Member States may provide incentives for voluntarily creating codes of conduct, which would address how the parties protect the environment, maintain accessibility, involve stakeholders in decisions, and ensure diversity.
  2. High-Risk AI Systems: As we climb higher, we reach the High-Risk AI Systems. This is the challenging section of the mountain: things get steeper, significant harm may occur, and you need the right equipment, experience, and climbing partners. So the Act lays down some rules to make sure nobody slips. What makes an AI system high-risk? The Act considers factors like the sector it’s used in, the potential impact on people’s rights, and the level of human oversight. If an AI system checks all the wrong boxes, it’s time to roll out the red tape. High-risk AI systems, along with the relevant parties (providers, users, distributors, etc.), must adhere to stringent requirements, such as transparency, accountability, and human oversight, to safely navigate their climb.
  3. Prohibited AI Practices: Finally, we reach the summit, the “AI Forbidden Peak.” This peak is off-limits: the air is thin, and the risks are too high. These AI systems present a substantial risk of harming people’s rights and wellbeing. The likelihood of triggering an avalanche that buries the whole mountain has led the EU to prohibit everyone from venturing into this terrain.

Let’s take a closer look at the last two tiers.

Identifying High-Risk AI Systems

High-risk AI systems are those that pose significant risks to the health, safety, or fundamental rights of individuals or society. The Act provides a list of specific AI applications that are considered high-risk, such as:

  • Biometric-based systems (the Parliament’s position would prohibit some “post” remote biometric identification systems)
  • Critical infrastructure management systems
  • Employment and worker management systems
  • Some systems used in law enforcement
  • AI systems for migration, asylum, and border control
  • AI systems for education and vocational selection and management
  • AI systems for access to essential private/public services and health insurance
  • Systems intended to affect the democratic process or judicial procedure

This list is not exhaustive, and the Act allows for the addition of new high-risk AI systems as technology evolves.

A Deeper Dive into Prohibited AI Practices

The “off limits” list contains some ambiguity—perhaps some overreach—but fundamentally it describes the sort of sci-fi, dystopian, and apocalyptic versions of AI that most of us fear.

  • AI systems that use subliminal mind tricks to manipulate people’s behavior are banned. If an AI system is deploying techniques beyond someone’s awareness, and is likely to cause harm, it’s a no-go.
  • AI that exploits the vulnerabilities of certain groups (age, predicted personality traits, and physical or mental disabilities). If an AI takes advantage of these vulnerabilities to twist people’s behavior in harmful ways, it’s out of bounds.
  • The categorization of individuals based on their sensitive or protected attributes or characteristics.
  • Using AI for social scoring by public authorities. An AI labeling people as trustworthy or not can lead to unfair, discriminatory, and harmful treatment.
  • Creating risk assessments of individuals to predict the likelihood that they will commit a crime. Sorry, Minority Report fans.
  • AI systems performing real-time biometric identification (like facial recognition) in public spaces for law enforcement are prohibited unless absolutely necessary for serious matters. (Incidentally, these exceptions were removed by Parliament in its current version of the law, so they may not survive reconciliation with the EU Council.)

Next Time

As you might imagine, the EU’s AI Act is substantial, and we’ve only scratched the surface. Next time, in Part 2, we’ll dive into the nitty-gritty of compliance and obligations under the EU AI Act.

Spoiler alert: there will be paperwork.