The European Union’s Artificial Intelligence Act (AI Act) officially came into force this month, marking the beginning of a new regulatory landscape for artificial intelligence. The first-of-its-kind legislation aims to ensure the responsible development and deployment of AI, setting a global precedent for how technology can be aligned with fundamental rights and societal values.
What is the EU AI Act?
Proposed by the European Commission in April 2021, with political agreement reached by the European Parliament and the Council in December 2023, the AI Act introduces a comprehensive framework to govern AI technologies. It addresses the potential risks AI poses to health, safety, and fundamental rights, giving developers and deployers clear guidelines while reducing administrative and financial burdens for businesses.
The Act categorises AI systems into four risk levels, each with specific requirements and obligations (a simplified illustration in code follows the list):
- Minimal risk: Includes systems like spam filters and AI-enabled video games, which face no mandatory obligations. Companies may voluntarily adopt additional codes of conduct to enhance their AI practices.
- Specific transparency risk: Systems such as chatbots must disclose to users that they are not human, and AI-generated content, such as deepfakes, must be clearly labelled.
- High risk: This category includes AI systems used in sensitive areas like medical software and recruitment. These systems must meet strict requirements, including robust risk mitigation, high-quality datasets, comprehensive documentation, and human oversight.
- Unacceptable risk: AI systems that pose a clear threat to fundamental rights, such as those allowing social scoring by governments or companies, are banned.
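To make the four tiers concrete, the sketch below models the classification in Python. It is a minimal illustration only: the `RiskTier` enum, the example system names, and the summarised obligations are assumptions drawn from the descriptions above, not definitions from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the AI Act's four risk levels."""
    MINIMAL = "minimal"                     # e.g. spam filters, AI-enabled games
    TRANSPARENCY = "specific transparency"  # e.g. chatbots, deepfake labelling
    HIGH = "high"                           # e.g. medical software, recruitment
    UNACCEPTABLE = "unacceptable"           # e.g. social scoring (banned)

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_SYSTEMS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.TRANSPARENCY,
    "cv_screening_tool": RiskTier.HIGH,
    "social_scoring_system": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> list[str]:
    """Very rough summary of obligations per tier (not legal advice)."""
    return {
        RiskTier.MINIMAL: ["none mandatory; voluntary codes of conduct"],
        RiskTier.TRANSPARENCY: ["disclose non-human nature",
                                "label AI-generated content"],
        RiskTier.HIGH: ["risk mitigation", "high-quality datasets",
                        "documentation", "human oversight"],
        RiskTier.UNACCEPTABLE: ["prohibited"],
    }[tier]

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} -> {obligations(tier)}")
```

Running the snippet simply prints each example system with its assumed tier and a shorthand of the obligations described above.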
“The regulation adopts a risk-based framework, an approach that aims to balance innovation with safeguarding fundamental rights and safety,” said Ramprakash Ramamoorthy, director of AI research at ManageEngine.
“As the world’s first comprehensive AI law, the EU AI Act establishes a blueprint for regulating AI that prioritises human-centric values and ethical considerations, potentially influencing future legislation worldwide. It emphasises the need for trustworthy AI by mandating robust risk assessments, data governance, and technical documentation for high-risk systems. This focus on transparency and accountability aims to foster public trust in AI technologies and ensure their responsible development and deployment.”
Implications for tech firms
For leading tech giants, especially those in the US like Microsoft, Google, Amazon, Apple, and Meta, the AI Act represents a significant change. These companies, deeply invested in AI advancements, now face a more rigorous regulatory landscape in the EU.
Meta, for instance, has already restricted the availability of its AI models in Europe due to regulatory concerns, a move indicative of the broader industry’s need to adjust to the new rules.
“One of the most notable aspects of the EU AI Act is its extra-territorial effect. In other words, the act not only applies to AI systems developed within the EU but also to those offered to EU customers or affecting EU citizens, regardless of where the providers are located. This means that AI developers and providers outside of the EU must also adhere to these regulations if they wish to operate within the European market,” explained Matt Cloke, CTO, Endava.
Pastora Valero, Senior Vice President of International Government Affairs at Cisco, emphasised that the Act places significant responsibilities on both AI developers and deployers.
“Many of the obligations of the Act are similar to best practices that many companies have been enacting and enforcing through voluntary frameworks. For example, Cisco has established a Responsible AI Framework based on principles of transparency, fairness, accountability, privacy, security, and reliability. With this framework, we have very solid foundations to comply with the EU AI Act. This signals a good cooperation between legislators and practitioners,” she said.
Valero further explained that many aspects of the Act are left to secondary legislation, making it too early to assess its full impact. However, it sets a strong expectation for trustworthy and responsible AI in Europe and worldwide. “The Act will surely constitute a template, which will sit alongside the many other AI governance efforts, either national or international, that point to some clear trends in AI regulation: a risk-based approach, an agreed definition of AI, an ethical approach, and support for innovation paired with protection of fundamental/constitutional rights.”
Impact on Middle East businesses
The AI Act’s influence is expected to extend beyond Europe, potentially impacting businesses in the Middle East that engage with AI technologies or operate in the EU market. Similar to the General Data Protection Regulation (GDPR), companies in the region will need to ensure their AI systems comply with the EU’s stringent requirements if they interact with the European market or European data subjects.
Middle Eastern tech firms, especially those developing AI for healthcare, finance, or public services, might face increased compliance costs and the necessity to adopt new risk management frameworks. However, the Act also presents an opportunity for these firms to enhance their AI systems’ transparency and reliability, potentially boosting their competitiveness in global markets.
By imposing stringent requirements on high-risk AI systems and emphasising transparency and accountability, the Act ensures that technology aligns with fundamental rights and safety standards, according to Ezzeldin Hussein, Regional Senior Director, Solution Engineering, META, SentinelOne.
“This regulation will undoubtedly impact regional AI advancements, pushing companies in the GCC and MENA regions to enhance their compliance practices and adapt their innovations. Moreover, as the EU’s approach becomes a global benchmark, it will likely influence local legislation, prompting countries to adopt similar frameworks to foster trust and ensure ethical AI deployment,” he said.
More broadly, the Middle East’s emerging AI ecosystem could benefit from adopting similar regulatory standards, fostering a more responsible and ethical AI development environment. Governments and businesses in the region might look to the EU’s AI Act as a model for their own regulatory frameworks, ensuring they stay ahead in the rapidly evolving AI landscape.
“The EU approach is similar to what we see in the Middle East where governments look at responsible and secure AI practices across all industries, not just tech. There are huge expectations for positive impact of AI in education, health and industrial segments, and government policy is crucial to support AI innovation. It is important to find the right balance between a supportive policy environment for the AI ecosystem and regulatory intervention to address legitimate concerns,” said Valero from Cisco.
Next steps and implementation
EU Member States have until 2 August 2025 to designate national authorities to oversee AI system regulations and conduct market surveillance. The Commission’s AI Office will implement and enforce the AI Act at the EU level, particularly for general-purpose AI models.
The Commission has also launched a consultation on a Code of Practice for general-purpose AI models, addressing areas such as transparency, copyright-related rules, and risk management. Feedback from this consultation will inform the final Code of Practice, expected by April 2025.
Jacob Beswick, Director of AI Governance at Dataiku, offered practical advice on how organisations should prepare for the EU AI Act. He said: “With the countdown now starting until the regulation fully applies, there are several steps businesses should take over the next 18 months to ensure they are ready. First, businesses need to inventory their AI assets and review which AI systems are operational in Europe.”
He added: “After gaining a complete overview, they should qualify these assets by understanding the purpose of each AI system, the technologies used (such as generative AI), and their classification under the EU AI Act’s risk tiers. Determining exposure to future compliance obligations will enable businesses to begin taking action to mitigate the risk of non-compliance and avoid disruptions to business operations, whether through fines or pulling operational systems from the market.”
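As a rough illustration of Beswick’s inventory-and-qualify step, the sketch below models a simple AI asset register in Python. The `AIAsset` fields and the exposure rule are assumptions made for demonstration; a real assessment would follow the Act’s actual criteria and legal advice.

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    """One entry in a hypothetical AI asset register."""
    name: str
    purpose: str          # what the system is used for
    technology: str       # e.g. "generative", "classical ML"
    operates_in_eu: bool  # offered to EU customers or affecting EU citizens
    risk_tier: str        # "minimal", "transparency", "high", or "unacceptable"

def compliance_exposure(assets: list[AIAsset]) -> list[AIAsset]:
    """Flag assets most likely to face obligations: systems touching the EU
    market that fall into the high or unacceptable tiers (a simplified,
    assumed rule, not the Act's legal test)."""
    return [a for a in assets
            if a.operates_in_eu and a.risk_tier in ("high", "unacceptable")]

# Example register; every entry is invented for illustration.
register = [
    AIAsset("spam-filter", "email filtering", "classical ML", True, "minimal"),
    AIAsset("cv-screener", "recruitment", "classical ML", True, "high"),
    AIAsset("support-bot", "customer chat", "generative", False, "transparency"),
]

for asset in compliance_exposure(register):
    print(f"Review before the deadline: {asset.name} ({asset.risk_tier})")
```

Run as written, only the invented “cv-screener” entry is flagged, mirroring the idea of qualifying assets first and then prioritising the high-risk ones.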
The EU AI Act aims to create a safer, more transparent AI ecosystem, promoting responsible AI use that benefits all sectors of society. The impact of this law will be felt not only in Europe but across the globe, shaping the future of AI development and deployment.