On Wednesday, 13 March 2024, the European Parliament approved the Artificial Intelligence Act, a regulation that ensures safety and compliance with fundamental rights while boosting innovation and establishing Europe as a leader and global standard-setter in AI. The regulation is expected to enter into force at the end of the legislature in May, after passing final legal checks and receiving endorsement from the Council of the EU. Implementation will then be staggered from 2025 onward.
First proposed in 2021, the EU AI Act divides the technology into categories of risk, ranging from “unacceptable” (which would see the technology banned) down to high, limited, and minimal risk. Under the AI Act, systems will be assigned to these four main categories according to the potential risk they pose to society, and systems considered high-risk will be subject to stringent rules that apply before they enter the EU market. High-risk AI systems will require transparency, Fundamental Rights Impact Assessments (FRIA, a tool to assess the ethical and legal compliance of AI), data governance, registration in an EU database, risk management, quality management systems, human oversight, accuracy, robustness, and cybersecurity measures. Specific rules for general-purpose AI systems will ensure transparency along the value chain, including technical documentation requirements and compliance with EU copyright law.
The Artificial Intelligence Act positions Europe to play a leading role globally by establishing the world's first-ever comprehensive legal framework on AI and addressing the risks associated with its use. Building on its risk-based categorisation, the Act imposes strict obligations on high-risk systems to protect democracy, fundamental rights, and the rule of law, while encouraging investment and innovation. By setting clear rules for AI developers, deployers, and users, the EU aims to lead in setting global standards for AI governance. The AI Act will not only affect the EU's nearly 450 million residents but is also expected to influence regulation worldwide, as Europe's comprehensive rules could serve as a blueprint for other countries. The EU's proactive approach to regulating AI is seen as a significant step towards trustworthy and responsible AI development, positioning Europe as a normative power in shaping the future of AI governance worldwide.
If you are a standardisation expert or a standards practitioner in the key technologies (AI, Cybersecurity, Digital ID, Quantum, IoT, 5G, 6G, and Data) in Europe, find out how you can contribute to the next generation of technology standards by joining our community today!