Does Your AI System Comply with the EU AI Act?
According to a 2023 McKinsey survey, more than 75% of organizations had not taken steps to ensure the safety and trustworthiness of their AI systems.
What is the EU AI Act?
Following the rapid development of artificial intelligence (AI), from general-purpose technologies like ChatGPT to specific applications like deepfakes and self-driving cars, there have been increasing calls for regulatory oversight. In response, the EU AI Act was approved in May 2024, making it the first comprehensive AI regulation in the European Union. Similar to the GDPR, which aims to protect personal data, the EU AI Act aims to safeguard fundamental rights by promoting human-centric and trustworthy AI systems.
Who Will Be Affected?
The EU AI Act affects all entities involved in the AI lifecycle, from ideation and development to production, distribution, and deployment. This includes companies within the EU as well as providers outside the EU whose AI systems are placed on the EU market or used within the EU. The responsibility for compliance lies with those overseeing the system’s technical design, data collection, development, testing, and approval processes.
Top Priority Systems
The Act’s obligations take effect in stages, starting with the ban on prohibited AI practices in early 2025 and followed by the requirements for high-risk systems in later phases. Companies and AI stakeholders must evaluate whether their systems fall under any of the risk categories defined by the Act. For each category, it is essential to understand the specific obligations, assess current compliance status, and plan the necessary actions.
What Companies Need to Do
To adapt to the new regulations, companies should take the following actions:
- Understand the Regulation: Thoroughly comprehend the EU AI Act and its implications for your business. Ensure that all relevant employees are trained on compliance requirements.
- Policy and Classification: Develop a policy to classify AI systems and components by risk category. Identify whether any systems fall under prohibited categories and redesign them where needed to avoid those classifications (see the sketch after this list).
- Compliance Plan for High-Risk Systems: Implement a comprehensive compliance plan for high-risk AI systems.
- Human Oversight: Design AI systems to allow for meaningful human oversight, ensuring that users can intervene if necessary.
- Transparency and User Interaction: Ensure that AI systems interacting with end-users are transparent about their nature and capabilities, and that user data is processed according to the regulation.
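To make the classification and oversight steps more concrete, below is a minimal sketch of how an internal inventory might tag each AI system with one of the Act’s risk tiers and track open compliance actions. The tier names mirror the Act’s risk categories; everything else (the `AISystemRecord` structure, its field names, and the example entry) is hypothetical and only illustrates the kind of record a classification policy could maintain, not an official or legally sufficient tool.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Risk categories defined by the EU AI Act."""
    PROHIBITED = "unacceptable risk (banned practices)"
    HIGH_RISK = "high risk (strict obligations)"
    LIMITED_RISK = "limited risk (transparency obligations)"
    MINIMAL_RISK = "minimal risk (no specific obligations)"


@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system or component."""
    name: str
    purpose: str
    tier: RiskTier
    human_oversight_documented: bool = False
    transparency_notice_shown: bool = False
    open_actions: list[str] = field(default_factory=list)


def review(record: AISystemRecord) -> list[str]:
    """Return outstanding compliance actions for one system (illustrative only)."""
    actions = list(record.open_actions)
    if record.tier is RiskTier.PROHIBITED:
        actions.append("Redesign or retire: the intended practice is banned under the Act.")
    if record.tier is RiskTier.HIGH_RISK and not record.human_oversight_documented:
        actions.append("Document how meaningful human oversight and intervention are possible.")
    if record.tier in (RiskTier.HIGH_RISK, RiskTier.LIMITED_RISK) and not record.transparency_notice_shown:
        actions.append("Inform end-users that they are interacting with an AI system.")
    return actions


# Example: a customer-support chatbot flagged for a missing transparency notice.
chatbot = AISystemRecord(
    name="support-chatbot",
    purpose="Answers customer questions on the company website",
    tier=RiskTier.LIMITED_RISK,
)
for action in review(chatbot):
    print(f"[{chatbot.name}] TODO: {action}")
```

In practice, the classification of a given system and the obligations that follow from it should be determined with legal counsel; a record like this only helps track which systems have been reviewed and which actions remain open.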
Penalties
Non-compliance with the EU AI Act can result in severe penalties, including fines of up to €35 million or 7% of global annual turnover, whichever is higher, which can significantly affect a company’s market presence and viability.
By taking these proactive steps, companies can not only ensure compliance with the EU AI Act but also foster trust and innovation in their AI technologies.