EU AI Act: Navigating the Changes and Safeguarding Your Data Projects

Following the European Parliament's vote of 523 to 46 on 13 March 2024, the EU AI Act will enter into force 20 days after its publication in the Official Journal (expected in May or June 2024).

If your business uses artificial intelligence, you need to get up to speed. Non-compliance may expose you to legal risks, significant fines, or loss of customer trust. If you take steps to prepare, you can avoid these issues.

Proper preparation includes:

  • Setting up a risk management system

  • Improving data quality and integrity

  • Increasing cybersecurity and privacy protection

  • Keeping up to date with any changes as the law develops

In this blog post, you'll learn what the EU AI Act requires, how to prepare for it, and how to stay compliant. Let's start from the basics.

What is the EU AI Act?

The EU AI Act is the world's first standalone law governing AI and a landmark piece of legislation for the EU. It sets out the framework for regulating AI systems, taking a risk-based approach to classifying them and applying stricter or lighter restrictions depending on the risk.
If you are using AI in your business or organization, this law can apply to you and may have compliance requirements that you need to follow.

Parts of the Act will come into force as soon as 6 months after the final text is published. This is a relatively short lead-in time.

The Act establishes a risk-based approach, meaning that AI systems are classified as posing unacceptable, high, limited, or minimal risk. This approach is intended to allow as much innovation as possible while preventing the most harm.

The compliance requirements can be stringent, especially for high-risk AI systems. This will particularly affect enterprises in heavily regulated sectors like financial services, healthcare, and telecommunications. If you work in one of these sectors, getting up to speed with the new law is a crucial first step.

One way businesses can start preparing is to familiarize themselves with the Act and its functions.

To Whom Does the EU AI Act Apply?

The Act applies to those developing, deploying, and using AI systems within the European Union. This includes companies, organizations, and individuals that develop or deploy AI systems within the EU, regardless of their size or sector.
An AI system, as defined in the EU AI Act, is:

  • a machine-based system

  • designed to operate with varying levels of autonomy

  • that may exhibit adaptiveness after deployment

  • that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments

Examples of AI systems include robot-assisted surgery with AI guidance, chatbots, credit scoring systems, recruiting software that uses AI to analyze CVs, spam filters, and automated visa decision-making programs.

In addition, the Act may also apply to companies or organizations based outside of the EU. This could be the case if they offer AI products or services to EU residents or if their AI systems are used within the EU market.

Why Was the EU AI Act Created?

The Act was proposed because the development of AI over the past few years has been explosive. Several general-purpose AI models have been developed recently, the most well-known of which is ChatGPT.

These technologies have huge potential benefits, but technologists, lawyers, human rights experts, and business owners are concerned about their misuse.

This is because AI systems can also create systemic risk, particularly to fundamental rights such as privacy.

Until now, there has been no standalone law dealing with AI, which means many AI tools are being developed and used without oversight. Regulation is one way to ensure that AI projects are carried out under a clear set of guidelines, putting everyone on a level playing field.

If you bring your projects into line with the regulation comprehensively and efficiently, you can focus on profits and innovation without falling afoul of the law.

Lawyers focused on business, cybersecurity, and data protection, Thomas Prete and Yuri Rodrigues, note that the way AI is developing, and the regulatory efforts around it, “will require multidisciplinary skills from managers, lawyers, software developers, engineers, and cybersecurity experts to simultaneously address legal requirements, security vulnerabilities and system robustness.”

Many organizations are concerned that the regulation will restrict them to the point where they cannot develop AI systems. Whether you work in healthcare, finance, or another industry dealing with sensitive data, it's understandable to worry that the regulation will prevent you from using AI in your unique use case.

Oksana Rasskazova, an AI and tech leader with Gestalt Robotics, says, "We need to be really careful that the law doesn't stop innovation," and notes that there is a wide variety of business cases that use AI.

She also explains that because the Act is in a state of development without defined guidelines, businesses are experiencing "a sense of anxiety" and looking for clarity on how they should approach things.

Fortunately, the Act does not intend to stop innovation or prevent business use cases from happening. Instead, it sets out a number of measures and safeguards that you need to take if you want to do these things. Enter the risk-based approach.

What is the Risk-based Approach of the EU AI Act?

The Act's risk-based approach categorizes AI systems as unacceptable risk, high risk, limited risk, and minimal risk.

AI systems that pose unacceptable risks are essentially banned, with narrow exceptions subject to strict safeguards. This category includes systems that implement social scoring, real-time biometric identification, and certain uses of emotion recognition, among other things.

High-risk systems are also tightly regulated. Systems classified as high risk are grouped largely by sector or area of use. They include AI systems used in government or political systems, healthcare, or migration contexts.

Organizations should self-assess what level of risk their AI system poses.

For example, if you were placing on the EU market an AI model used as a medical device that generates synthetic audio and interacts with patients, it would likely be classified as a high-risk AI system.

Another example: an AI system that performs emotion recognition in the workplace or in education would likely be prohibited. The same system used for medical or safety reasons, however, would likely be permitted by the Act.

The Future of Life Institute (FLI), an independent non-profit working to reduce the risk of harm from AI, has developed a risk checker for informational purposes. By selecting what type of AI system you are developing or deploying, FLI's risk checker can give you a basic assessment of whether your system is likely to be prohibited, high risk, or minimal risk.

While FLI’s checker is useful, this risk assessment should still be carried out internally with the help of experts and significant professional analysis. The risks of non-compliance are very high.
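To make self-assessment concrete, you could start with a rough pre-screening script that encodes the Act's broad tiers before handing the analysis to experts. The sketch below is a hypothetical illustration: the practice and sector labels are simplified assumptions, not the Act's legal tests, and the output is only a triage signal.

```python
# Rough pre-screen of an AI system against the Act's broad risk tiers.
# The labels and rules below are simplified assumptions for illustration;
# they are not the Act's legal tests and do not replace expert analysis.

PROHIBITED_PRACTICES = {
    "social_scoring",
    "subliminal_manipulation",
    "untargeted_facial_scraping",
    "workplace_emotion_recognition",
}

HIGH_RISK_AREAS = {
    "healthcare",
    "financial_services",
    "law_enforcement",
    "migration",
    "critical_infrastructure",
}

def prescreen_risk_tier(practice: str, area: str) -> str:
    """Return a rough, non-authoritative risk tier for internal triage."""
    if practice in PROHIBITED_PRACTICES:
        return "unacceptable: likely prohibited, seek legal advice"
    if area in HIGH_RISK_AREAS:
        return "high: strict obligations likely apply"
    if practice == "chatbot":
        return "limited: transparency duties likely apply"
    return "minimal: document and review periodically"

print(prescreen_risk_tier("chatbot", "e_commerce"))
print(prescreen_risk_tier("patient_triage", "healthcare"))
```

A script like this is only a way of organizing your first pass over an AI inventory; the legal classification itself belongs with your experts.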

Pavel Čech, a lawyer focused on the intersection between law and technology, explains that the risk-based approach in the AI Act is the EU's attempt to deal with the “challenge of fostering innovation while ensuring these technologies are developed and deployed ethically, safely, and without compromising privacy or security.

This balance is crucial to enable the benefits of AI, such as improved healthcare diagnostics, more efficient energy use, and enhanced educational tools while mitigating risks like algorithmic bias, job displacement, and potential misuse.”


In addition, the Act introduces penalties for non-compliance with its provisions. Organizations that breach the Act's requirements may face significant fines.

Prete and Rodrigues say, in particular, that “in our view, a wave of heavy scrutiny is coming together with the AI Act, and organizations must be prepared for extra steps of accountability for their systems and operations.”

What are the Levels of Risk for the EU AI Act?

The risk categories are as follows:

Unacceptable risk


Some AI systems behave in ways that are biased or discriminatory, or that invade people's privacy in severe ways.

For example, COMPAS, a tool made by Northpointe Inc., was intended to determine people's risk of reoffending after they had committed a crime. The tool racially discriminated against black defendants, rating them at a much higher risk of both violent and non-violent reoffending than they actually were.

AI systems classified as unacceptable risk under the Act are those that can cause significant potential harm to people. They are also known as "Prohibited Artificial Intelligence Practices."

The EU AI Act intends to either completely prevent the use of tools like COMPAS or ensure that they work in appropriate, high-quality ways without bias, discrimination, or unfair effects on people.

Under the Act, AI systems classified as unacceptable risk are those that:

  • Manipulate human behavior, particularly through subliminal or purposefully deceptive techniques

  • Exploit people's vulnerabilities to manipulate their behavior in a way that is likely to cause significant harm

  • Use biometric systems to categorize people by their race, political opinions, religious beliefs, or sexual orientation, among other things

  • Implement social scoring by evaluating people's social behavior or personal characteristics

  • Use real-time remote biometric identification in public places

  • Create risk assessments of people to predict whether they might commit a criminal offense

  • Create facial recognition databases, or expand them through untargeted scraping of facial images from the internet or CCTV footage

  • Conduct emotion recognition of people in the workplace and educational institutions, except if the AI system is used for medical or safety reasons

There are some exceptions to these prohibitions, such as for law enforcement. These exceptions are also subject to safeguards, which could include registering the system and completing a fundamental rights impact assessment.

High risk


AI systems that are high risk can be placed on the market but with several safeguards in place. High-risk AI systems include those used in areas or sectors such as:

  • Transport

  • Toys

  • Financial services

  • Healthcare

  • Telecommunications

  • Government

  • Public services

  • Law enforcement

  • Migration contexts

The legislation strictly scrutinizes these systems. If you are using AI in these sectors, you should take extra precautions to ensure compliance.

Even in these sectors, AI systems are not considered high risk if they perform only a minimal function.

For example, the system might not influence the outcome of decision-making or only complete a narrow procedural task. In other cases, the system only supplements the decisions of a human decision-maker or carries out preparatory tasks rather than anything substantial.

Limited risk


Limited-risk AI systems are those that pose fewer potential risks to safety, fundamental rights, or societal welfare.

This includes certain types of chatbots, recommendation systems, and content moderation tools used in non-critical applications such as e-commerce platforms or social media.

For example, Siri, Alexa, and Google Assistant would all most likely be classified as limited-risk AI systems.

On the other hand, chatbots involved in healthcare could be classified as high-risk. This could include AI systems that make triage decisions for patients and recommend whether they see a doctor.

Minimal or No Risk


"Minimal-risk" or "no-risk" systems have the lowest potential for harm. They are subject to minimal regulatory requirements.

Minimal or no-risk AI systems include basic rule-based decision-making algorithms, simple chatbots with limited functionality, or tools used for spell-checking or basic data analysis. One example of a minimal or no-risk AI system could be the use of a spam filter on email systems.

Given these risk levels, you're probably wondering what regulations apply to the high-risk category. Let's explore this now.

What Regulations Apply to High-Risk AI Systems?

In particular, high-risk AI systems are subject to specific requirements that enterprises need to be aware of. Some key requirements include:

  • Data quality and governance: High-risk AI systems must adhere to stringent data quality standards. You must ensure that data used for training and operation is accurate, reliable, and representative. You must also put data governance practices in place to prevent bias, discrimination, and other adverse effects (a minimal sketch of such a check appears after this list).

  • Transparency and explainability: If you are developing high-risk AI systems, you must write clear and comprehensive documentation. This must explain the system's functionality, capabilities, and limitations, and describe the data used for training, the algorithms employed, and how the system makes decisions.

  • Human oversight and control: High-risk AI systems must have the ability for humans to have oversight and control. This may include the ability to override automated decisions, monitor system performance, and take corrective actions in case of errors or unexpected outcomes. For instance, in cases where an AI system has produced a biased result, a human could take over and review the system, make changes, or use different data.

  • Accuracy, robustness, and safety: You should design and test high-risk AI systems to ensure their accuracy, robustness, and safety. This should be done across a range of possible scenarios and conditions. In addition, you should carry out thorough risk assessments and testing procedures for all of your AI systems. This can identify and mitigate potential risks, including biases, vulnerabilities, and unintended consequences.

  • Documentation and conformity assessment: If you deploy high-risk AI systems, you must maintain comprehensive documentation throughout the development lifecycle. Independent third-party assessment bodies may also check your compliance with regulatory requirements and standards.
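As promised above, here is a minimal sketch of the kind of automated data-quality check that could support the first requirement. The column names and the 10% imbalance threshold are assumptions made for this example; a real governance process would be far broader.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Flag missing values and class imbalance, two common data-quality issues."""
    report = {
        "rows": len(df),
        "missing_share_per_column": df.isna().mean().round(3).to_dict(),
        "label_distribution": df[label_col].value_counts(normalize=True).round(3).to_dict(),
    }
    # Crude representativeness signal: warn if any label class falls below 10%.
    report["imbalance_warning"] = any(
        share < 0.10 for share in report["label_distribution"].values()
    )
    return report

# Hypothetical training extract for a credit-scoring model.
df = pd.DataFrame({
    "age": [34, 51, None, 29, 43, 38],
    "outcome": ["approve", "approve", "approve", "deny", "approve", "approve"],
})
print(basic_quality_report(df, label_col="outcome"))
```

Automated checks like this make data-quality claims in your documentation verifiable rather than aspirational, which is the spirit of the Act's governance requirements.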

Why Should Your Business Care About the EU AI Act?

Tanya Chib, Founder and Privacy Lawyer at Privacy Rules, states that businesses should care about the EU AI Act because “We are amid an AI regulatory storm.”

Regulations are changing fast, and organizations that do not keep pace can be left behind. She says, “The AI legislative and regulatory landscape is evolving exponentially. This is a time for companies to closely monitor these developments, stay flexible, and start investing in an AI governance program.”

If your business operates in a sector with high-risk AI systems and lacks proper compliance measures, you could be exposed to legal and financial risk. In addition, following the Act's principles of transparency, accountability, and ethical use of AI can enhance trust and confidence among customers, investors, and other stakeholders.

The law is also not changing evenly around the globe.

Pavel Čech explains that differing “regulatory approaches can lead to a fragmented global AI landscape, where cross-border cooperation and standardization become a challenge.”

Organizations that have kept up to date with changes, both in the EU and at a global level, are better placed to manage these discrepancies.

Getting up to speed quickly can be the key.

Chib says, “I recommend diving into the provisions of the EU AI Act and how it affects your company ASAP, and not leaving that to the last months.”

She also highlights early ways in which businesses can prepare: “The EU AI Act will have different implications for different companies. Start with understanding the use cases within your organization and making an inventory of your AI assets.”
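As a starting point for that inventory, you could record each AI system in a simple structured format. The fields below are illustrative assumptions rather than a schema prescribed by the Act.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AIAsset:
    """One entry in a hypothetical AI asset inventory."""
    name: str
    vendor: str                   # "internal" for in-house systems
    purpose: str
    deployed_in_eu: bool
    processes_personal_data: bool
    provisional_risk_tier: str = "unassessed"
    notes: list[str] = field(default_factory=list)

inventory = [
    AIAsset("cv-screener", "internal", "rank job applications", True, True,
            provisional_risk_tier="high (employment context)"),
    AIAsset("spam-filter", "vendor-x", "filter inbound email", True, False,
            provisional_risk_tier="minimal"),
]

for asset in inventory:
    print(asdict(asset))
```

Even a lightweight record like this gives you something concrete to review as guidance on the Act firms up.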

It is also important to understand that the AI Act does not replace the GDPR but adds to it; GDPR obligations must still be satisfied.

Enterprises that prioritize understanding and compliance are better positioned to navigate regulatory challenges, foster innovation, and build competitive advantages.

For example, if you are already working with a system that would be classified as high risk, you can begin adjusting your systems now to stay compliant. This will keep you ahead of the game and allow you to innovate and expand your business cases, knowing that you are already prepared for the new law.

In addition, setting up a strategy for preparation now can help you to avoid penalties later.

What are the Penalties in the EU AI Act?

The fines set out in the Act are high. For example, an organization that places an unacceptable-risk AI system on the market may be fined up to €35,000,000 or up to 7% of annual worldwide turnover, whichever is higher. Non-compliance with the provisions for high-risk AI systems can lead to fines of up to €15,000,000 or up to 3% of annual worldwide turnover.
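To see how these ceilings scale with company size, here is a small illustrative calculation. It assumes the Act's "whichever is higher" rule; actual penalties are set case by case by the authorities, and the turnover figure is hypothetical.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Ceiling on a fine: the fixed cap or the turnover percentage, whichever is higher."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

turnover = 2_000_000_000  # hypothetical €2B annual worldwide turnover

# Prohibited practices: up to €35M or 7% of turnover.
print(f"Prohibited practices: up to €{max_fine(turnover, 35_000_000, 0.07):,.0f}")
# High-risk obligations: up to €15M or 3% of turnover.
print(f"High-risk obligations: up to €{max_fine(turnover, 15_000_000, 0.03):,.0f}")
```

For a large enterprise, the percentage cap quickly dwarfs the fixed amount, which is why turnover-based exposure deserves a place in your risk planning.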

If there are severe or repeated violations, authorities can impose other penalties. This includes bans on specific AI systems or the suspension of data processing activities.

How to Prepare for the EU AI Act

Taking steps now to prepare for the AI Act ensures that your systems will already be in place and working well when the law comes into force.

Here are the strategies you should develop to stay compliant:

Understand the Regulations


First, review and clarify the definitions of the EU AI Act to understand your business's responsibilities.

Figure out whether you have any AI systems in use already or are in the process of procuring these from third parties. Also, consider how likely it is that your business will procure AI in the coming months and years. If there is a high likelihood that you will be using an AI system in the near future, you can begin taking steps now to be prepared.

Once you have understood the regulations and determined that you have AI systems in use or will do so in the near future, assess them as explained below.

Establish a Comprehensive Risk Management System


Start by assessing the AI systems you develop or deploy to determine their risk level. This involves evaluating the potential impact of AI systems on safety, fundamental rights, and societal welfare, as well as identifying any biases, vulnerabilities, or ethical concerns.

For example, consider what AI systems you have currently in use, and what data they use. Think about what industry you are in, what interactions your AI system has with people, and whether there are any safety components to the system.

Use a risk checker such as FLI's to assess your system's risk level. Then, use a solution such as Anonos Data Embassy (explained below) to analyze the system's training data. This can help you identify whether the model uses any sensitive or personal data, which could raise the risk assessment.
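As a rough first pass before a deeper review, you could scan training-data column names for likely personal or sensitive fields. The keyword list below is an illustrative assumption, not an exhaustive taxonomy, and is no substitute for a dedicated platform or a proper data-protection review.

```python
import re

# Keywords that often signal personal or sensitive data; illustrative only.
LIKELY_PERSONAL = re.compile(
    r"name|email|phone|address|birth|passport|health|religion|ethnic",
    re.IGNORECASE,
)

def flag_personal_columns(columns: list[str]) -> list[str]:
    """Return column names that look like they may carry personal data."""
    return [col for col in columns if LIKELY_PERSONAL.search(col)]

training_columns = ["customer_email", "purchase_total", "date_of_birth", "region"]
print(flag_personal_columns(training_columns))  # ['customer_email', 'date_of_birth']
```

A name-based scan will miss personal data hidden in free text or oddly named fields, which is exactly why the heavier analysis belongs with specialized tooling.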

Enhance Data Accuracy and Robustness


Accurate data is essential for ensuring transparency and explainability, as it enables businesses to provide reliable insights into how their AI systems operate.

By demonstrating a commitment to data accuracy, businesses can enhance trust in their AI systems, foster transparency and accountability, and strengthen relationships with stakeholders, ultimately enhancing their reputation and competitive advantage in the market.

Fortify Cybersecurity Posture


The EU AI Act places a strong emphasis on protecting personal data and ensuring privacy rights are upheld. By implementing robust privacy measures, businesses can safeguard sensitive personal information.

Privacy and cybersecurity measures also play a crucial role in ensuring the integrity and accuracy of data used in AI systems. Businesses operating in multiple jurisdictions must also ensure compliance with data protection laws and regulations, including those governing cross-border data transfers.

Invest in Training and Awareness


Provide training programs to employees involved in the development, deployment, and use of AI systems to raise awareness of the regulatory requirements and promote best practices. This helps ensure that all stakeholders know their roles and responsibilities under the Act.

Keep Up to Date


Businesses should stay abreast of updates, amendments, and interpretations of the Act to ensure ongoing compliance with evolving regulatory requirements.

The EU AI Act emphasizes not just the functionality but also the security and privacy of data.

Prete and Rodrigues also explain that “AI system creators, in particular those of high-risk systems, are obviously subject to broader obligations and will have to integrate ‘by design and by default’ a sophisticated knowledge of legal and regulatory frameworks into their systems.”

By taking these proactive measures, you can effectively prepare for the implementation of the EU AI Act, mitigate compliance risks, and demonstrate your commitment to responsible AI innovation within the EU.

By using technologies that embed protections in the data itself, you can expand your business cases with AI systems without running into compliance issues. Global organizations have already used Anonos to get the greatest benefits from their AI systems while staying in line with regulations and compliance obligations. Let's look at how.

How Anonos Data Embassy Can Help

It is crucial to use technical solutions to help you meet the requirements of the EU AI Act, including meeting any relevant data protection requirements.

Anonos Data Embassy is a data-centric security platform that offers a robust framework for navigating the challenges of this new law, ensuring compliance while fostering innovation.

Data Embassy can help you prepare for the EU AI Act in a number of ways. For example:

  • Assistance with Risk Analysis: With Anonos Data Embassy, you can categorize and manage the data that will be processed by AI systems, so you can ensure that all potential risks are identified. By generating Variant Twins – non-identifiable yet accurate data replicas – Anonos provides a detailed overview of data attributes without exposing sensitive information. This allows you to make a precise risk assessment and ensure that the risk classification you select is the right one.

  • Documentation: Organizations can use Variant Twins to simulate various risk scenarios and devise mitigation strategies without endangering actual data. Anonos' system facilitates thorough documentation of these processes, ensuring that all actions are traceable and transparent, thereby meeting the AI Act's requirements for meticulous risk management.

  • Improving Data Quality: Accuracy in AI is heavily dependent on data quality. Anonos' Zero Trust approach ensures that data is not only protected but also retains its richness and utility. This enhances the training and performance of AI models, contributing to their robustness and reliability.

  • Adaptability: The dynamic nature of threats in the digital world necessitates AI systems that can adapt and evolve. Anonos' technology, with its emphasis on data security and adaptability, ensures AI systems can be updated promptly to counter new threats.

  • Protection from Breach: In an environment where data breaches can have catastrophic consequences, Anonos minimizes the potential impact of a breach, as the data in its protected form is of minimal value to unauthorized entities.

Using Anonos Data Embassy for each of these capabilities ensures that you are well-prepared for the changes that are to come.

Next Steps for the EU AI Act

The EU AI Act marks a pivotal step towards regulating artificial intelligence technology within the EU.

As you navigate this evolving regulatory landscape, proactive steps toward compliance are essential. You and your business have the opportunity to demonstrate your commitment to responsible AI innovation, mitigate potential risks, and capitalize on the opportunities presented.

A proactive approach demands innovative solutions that address the core challenges of risk management, data accuracy, and cybersecurity.

Anonos' Data Embassy offers a state-of-the-art approach to these requirements, enabling you to leverage AI's potential while upholding the highest security and compliance standards in your business.

By starting compliance efforts now, your business can position itself for success in the AI market.