
Our last issue invited you into the world of the EU’s new AI Act, exploring its intricate scope and definitions. We now illuminate the labyrinth of compliance requirements in the Act, pointing a spotlight at high-risk AI systems—the principal focus of the Act’s regulatory purview. (Recall that the Act’s regulatory framework divides AI into non-high-risk systems, high-risk systems, and prohibited AI practices.)
Let’s take a look at the Act’s cornerstone principles of risk management, data governance, transparency, and validation of compliance.
Prepare yourself as we delve deeper into the stirring currents of AI regulation.
Navigating the High-Risk Seas
The high-risk waters of AI (“high-risk systems”, if we’re being precise) require a rock-solid Risk Management System. That’s not just compliance jargon. It’s a real, documented process that identifies, evaluates, and manages the risks an AI system poses.
Identifying risks is only half the puzzle. The other half? Tackling those risks head-on, taking into account the state of the art and the potential fallout. If remediation isn’t perfect and some risks endure, the system’s users need to know about them: how to avoid them and, where they’re completely unavoidable, how to mitigate them.
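The Act doesn’t tell you what this looks like in practice, so here’s a minimal, purely illustrative Python sketch of how a team might track identified risks, their mitigations, and the residual risks that users must be told about. Every field and name here is our own invention, not something prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskEntry:
    """One identified risk, its mitigation, and what remains for users to know."""
    description: str        # e.g. "model under-performs on rare input categories"
    severity: Severity
    mitigation: str         # measure adopted to eliminate or reduce the risk
    residual_risk: str = ""   # whatever endures after mitigation
    user_guidance: str = ""   # how users should avoid or mitigate the residual risk


# A tiny, purely illustrative risk register.
register = [
    RiskEntry(
        description="Misclassification of rare input categories",
        severity=Severity.HIGH,
        mitigation="Augment training data and add a confidence threshold",
        residual_risk="Occasional low-confidence outputs slip through",
        user_guidance="Route low-confidence cases to a human reviewer",
    ),
]

# Anything with a residual risk should be surfaced to users, e.g. in the instructions for use.
for entry in register:
    if entry.residual_risk:
        print(f"Disclose to users: {entry.residual_risk} -> {entry.user_guidance}")
```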
Before an AI system is placed on the market, it must be thoroughly tested. Then once it’s launched on the market—after popping some champagne—it must continue to be tested thoroughly. Akin to an annual health check-up, but for your AI system (and done more frequently).
Data is AI’s lifeblood, and Article 10 demands it meets specific standards. Training, validation, and testing data used by a high-risk AI system must be relevant, representative, free of errors, and complete. It must have appropriate statistical properties and take into account the specific geographical, behavioral, or functional setting where the system will be used.
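To make those data requirements a little less abstract, here’s a minimal sketch of the kind of sanity checks a team might run on a training set before relying on it. It assumes a simple tabular dataset and uses pandas; the column names (“region”, “label”) are hypothetical, and none of this comes from the Act itself.

```python
import pandas as pd

# Purely illustrative training set; the columns are invented for this example.
df = pd.DataFrame({
    "feature": [0.2, 0.7, 0.1, 0.9, 0.5, None],
    "region": ["DE", "DE", "FR", "FR", "ES", "ES"],
    "label": [0, 1, 0, 1, 0, 1],
})

# "Free of errors / complete": flag missing or obviously broken values.
print("Missing values per column:\n", df.isna().sum())

# "Representative" with appropriate statistical properties: check how the data
# is distributed across the geographical setting the system will be used in.
print("Share of examples per region:\n", df["region"].value_counts(normalize=True))

# "Relevant" to the intended purpose: make sure every outcome class actually appears.
print("Class balance:\n", df["label"].value_counts(normalize=True))
```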
Starting a system’s journey in the market requires “technical documentation,” which serves as proof of compliance with the risk management, data governance, record-keeping, and accuracy requirements.
Article 12 requires a “captain’s log” of record-keeping for your system: events must be logged automatically while it runs, so its operation can be traced and monitored. Transparency toward users is crucial too (Article 13). And don’t forget: the system needs a human overseer (Article 14) and must ace accuracy tests while staying resilient to errors and third-party tampering (Article 15).
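The Act doesn’t prescribe a log format, but here’s a rough, illustrative sketch of what an automatically generated, traceable record of a system’s decisions might look like, using only Python’s standard library. The log_prediction helper and its fields are our own assumptions, not anything mandated by the Act.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Write one structured, machine-readable line per decision the system makes.
logging.basicConfig(filename="ai_system.log", level=logging.INFO, format="%(message)s")
logger = logging.getLogger("high_risk_ai")


def log_prediction(input_summary: str, output: str, model_version: str) -> str:
    """Record one traceable system decision and return its ID."""
    record_id = str(uuid.uuid4())
    logger.info(json.dumps({
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,  # keep this minimal: log what you need, not raw personal data
        "output": output,
    }))
    return record_id


# Example: one logged decision that can later be traced end to end.
log_prediction("loan application #123 (summary only)", "declined", "v1.4.2")
```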
Responsibilities of Stakeholders
Properly handling high-risk AI systems requires every stakeholder on the boat to row in the same direction. Providers are the captains: for the high-risk systems listed in Annex III, they ensure compliance with the requirements of the Act’s Chapter 2. They handle quality management (Article 17), technical documentation (Article 11), and automatically generated logs (Article 20), among other tasks. Their AI system must pass the conformity assessment before making a market splash and comply with the registration requirements.
Quality Management System:
Think of the quality management system as your ship’s crew, ensuring smooth operations. The QMS ensures adherence to regulatory norms with a meticulously documented strategy that includes design control and verification techniques, quality assurance measures, testing protocols, and data management procedures. This system accounts for everything from risk management (Article 9) to post-market monitoring (Article 61), serious incident reporting (Article 62), and communication with all stakeholders. It also provides an accountability framework detailing the responsibilities of your staff. The scope of this system should match your organization’s size.
Technical Documentation:
This is your ship’s blueprint. Created before the system makes its debut or is activated, and regularly updated, it’s a clear indicator of your system’s compliance. It furnishes national authorities and notified bodies with all the necessary information to validate your system’s conformity with applicable rules. With technology’s relentless evolution, the Commission might employ delegated acts as per Article 73 to revise the requirements, ensuring your system’s blueprint stays relevant and up-to-date.
Conformity Assessment:
Think of this as the final inspection before setting sail. The AI system must clear this test, be covered by an EU declaration of conformity, and bear the CE marking before stepping into the market. Compliance comes via a conformity assessment based either on internal control (Annex VI) or on an assessment involving a notified body (Annex VII). The system must undergo a fresh conformity assessment whenever it is substantially modified.
Automatic Logs & Corrective Actions:
Providers of high-risk systems must maintain automatically generated logs for traceability. If the AI system doesn’t conform, corrective actions, including withdrawals or recalls, may be necessary.
Other Stakeholders:
Importers and Distributors verify the system’s conformity and take corrective actions if necessary. Users adhere to system instructions, monitor operation, and inform Providers about any risks or incidents. All parties could be considered a Provider if they modify a high-risk AI system substantially.
The Road to Conformity
Certificates and Appeals:
Certificates issued by notified bodies confirm that a system meets the requirements, which helps maintain a consistent standard of quality and compliance. They are valid for five years and can be extended after a reassessment. If an AI system falls short of the requirements, the certificate can be suspended, withdrawn, or have restrictions imposed on it unless corrective action is taken. And if a Provider disagrees with a notified body’s decision, the Act provides for an appeal procedure.
Declaration of Conformity and CE Marking:
Providers must submit an EU declaration of conformity for each AI system and affix the CE marking of conformity visibly on high-risk AI systems.
Document Retention and Registration:
Providers need to retain the documentation for 10 years and make it available to national authorities on request. Before a high-risk AI system launches, it must be registered in the EU database, which is publicly accessible and contains only the personal data that is strictly necessary.
Illuminating the AI
Just as a ship’s captain uses a lighthouse to navigate the seas, transparency is a guiding principle in the realm of AI. Firstly, when AI systems are intended to interact with humans, Providers must design and develop them so that it’s obvious to people that they’re dealing with an AI, unless this is already evident from the context and usage. Secondly, Users of emotion recognition or biometric categorization systems must inform the individuals exposed to them that the system is in operation. Lastly, Users of AI systems that create or manipulate image, audio, or video content that convincingly resembles existing entities or events (like deep fakes…probably generative AI too) have an obligation to disclose that the content was artificially created or manipulated. By the way, the artwork above was made in Midjourney.
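Here’s a minimal, purely illustrative sketch of what those disclosure duties might look like in code: a chatbot wrapper that tells people up front they’re talking to an AI, and a helper that labels generated or manipulated media as artificial. The function names and fields are invented for illustration; the Act doesn’t mandate any particular mechanism.

```python
def respond(user_message: str, disclosed: bool) -> tuple[str, bool]:
    """Wrap a (pretend) chatbot reply with a one-time AI disclosure."""
    reply = f"Echo: {user_message}"  # stand-in for a real model call
    if not disclosed:
        reply = "Heads up: you're chatting with an AI system.\n" + reply
        disclosed = True
    return reply, disclosed


def label_generated_media(metadata: dict) -> dict:
    """Mark generated or manipulated content (e.g. a deep fake) as artificial."""
    labeled = dict(metadata)
    labeled["ai_generated"] = True
    labeled["disclosure"] = "This content was artificially generated or manipulated."
    return labeled


reply, disclosed = respond("What's my repayment schedule?", disclosed=False)
print(reply)
print(label_generated_media({"title": "Promotional image"}))
```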
Wrapping it up
If you’re still reading this, you deserve major props! This isn’t the most exhilarating read. But if your business is building or implementing AI, you NEED to be aware of compliance in the EU. Especially because the costs of falling short of compliance are STEEP!
More on that next time.
