May 16, 2025
The EU AI Act: Key Implications for Using Data in the Modern Enterprise
AI, Data Management, Security & Compliance
The EU AI Act is a new law changing how organisations develop and deploy AI-powered solutions worldwide. Complying with it is a chance for organisations to stand out and build trust with customers through responsible AI use — all while continuing to innovate.
As predicted by McKinsey and others back in 2023, AI (specifically generative AI) has become a key part of daily business operations across many industries. Today, AI is powering customer-facing chatbots like NatWest's "Cora" and Wells Fargo's "Fargo," as well as back-office solutions for claims processing, fraud detection, and research analysis at Allianz, HSBC, and JP Morgan, respectively.
But challenges have also increased in sensitive areas where accuracy, data privacy, and security matter most. This is particularly evident in sectors such as financial services and healthcare where errors and unexpected behaviour can have huge consequences. For example, many fear that AI could amplify the sorts of losses typified by the Knight Capital Group incident of 2012, where a faulty algorithm lost the business approximately $440 million in just 45 minutes.
The stakes are incredibly high. Organisations that fail to secure and manage their AI systems do not just face hefty fines. They also risk privacy breaches, security problems, and harm to their reputation.
Let’s explore the EU AI Act and how it affects data privacy and security. Then, we will discuss practical steps data security leaders can take to stay compliant.
What Does the EU AI Act Entail?
The EU AI Act is a comprehensive regulatory framework that aims to protect fundamental rights and promote trustworthy AI development, within a clearly defined scope covering the ethical use of AI systems in the European Union.
The EU AI Act focuses on:
- The creation of a risk-based classification system for monitoring and regulating the development and deployment of AI solutions.
- Identifying AI practices that are banned because they pose a significant threat to safety, fundamental rights, or ethical standards.
- Requirements and obligations that mitigate the risk of undesired behaviour in deployed AI systems, particularly high-risk solutions. These include, but are not limited to:
  - Establishing a risk management system for the AI system to maintain safety and reliability.
  - Setting standards for data and data governance to ensure quality and integrity.
  - Producing documentation covering the development and operation of the AI system to promote transparency and accountability.
  - Delivering logging and audit capabilities.
  - Creating processes for corrective actions and reporting to regulators.
  - Making sure users know when they are interacting with AI systems and understand the capabilities and limitations of the solution.
The Risk-Based Framework
Previous regulations around AI usage focused mainly on data protection. The EU AI Act takes a risk-based approach by grouping AI systems based on how much harm they could cause.
| Risk Level | Description | Why It's Considered Risky |
|------------|-------------|---------------------------|
| Unacceptable | Prohibited AI applications that pose an unacceptable threat to fundamental rights. | These include systems that manipulate behaviour, enable social scoring by governments, or use real-time biometric surveillance in public spaces. They are banned because they can seriously violate privacy, safety, and basic freedoms. |
| High | AI used in critical areas like healthcare, law enforcement, education, and employment. | High-risk systems can affect people's lives directly, such as making decisions about hiring, loans, or medical treatment. Errors, biases, or lack of oversight can therefore lead to unfair or harmful outcomes. |
| Limited | AI systems with limited impact on users' rights or safety. | These systems generally pose little to no harm, but still require basic transparency (e.g., disclosing when users interact with AI). |
| Minimal | Most AI applications. | These systems pose negligible risk and are generally unregulated. |
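To make the tiers concrete, here is a minimal sketch of how an internal triage tool might represent them in code. It is an illustration only: the domain list and helper function are hypothetical, and real classification requires legal review against Article 6 and Annex III, not a keyword lookup.

```python
from enum import Enum

class RiskLevel(Enum):
    """Risk tiers defined by the EU AI Act, from most to least restricted."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # heavily regulated (Article 6, Annex III)
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping only -- a real assessment needs legal review,
# not a domain lookup.
HIGH_RISK_DOMAINS = {"healthcare", "law_enforcement", "education", "employment"}

def classify(use_case_domain: str, is_prohibited_practice: bool) -> RiskLevel:
    """Rough first-pass triage of an AI use case into a risk tier."""
    if is_prohibited_practice:
        return RiskLevel.UNACCEPTABLE
    if use_case_domain in HIGH_RISK_DOMAINS:
        return RiskLevel.HIGH
    return RiskLevel.MINIMAL

print(classify("employment", is_prohibited_practice=False))  # RiskLevel.HIGH
```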
The Act requires a complete rethinking of traditional approaches to security and compliance: security must be built into every stage of AI development.
A Note on Agentic AI
It's important to note that agentic AI is considered higher risk under the EU AI Act. Unlike basic AI, which follows specific rules, agentic AI has reasoning capabilities that allow it to work independently.
Take booking an airline ticket, for example. An agentic AI system can not only search for flights and select the best price on your behalf; it could also use your banking details to pay for the ticket and fill in boarding forms with your passport information.
This independence creates many points where things could go wrong. The AI could accidentally share your passport details with the wrong systems. It might access your bank account and make unauthorized transactions. It may even expose your sensitive information to hackers. The EU AI Act places strict controls on such systems to protect users from potential harm.
When Did the EU AI Act Go into Effect?
The EU AI Act entered into force in August 2024, following its publication in the Official Journal of the European Union. Most obligations are subject to a transition period to give organisations time to adjust, but some rules apply sooner: the prohibitions on unacceptable-risk practices have applied since February 2025, and most remaining obligations phase in through 2026 and 2027.
Although these grace periods provide some extra time, you should not delay your preparations. Making changes to technical systems, compliance frameworks, and organisational processes requires time. Thus, it is advisable to start your compliance efforts now before deadlines approach.
Which Organisations Must Comply?
The EU AI Act reaches far beyond Europe's borders. The regulation applies to:
- Any organisation offering AI services or products within the EU market. This applies no matter where their headquarters are located.
- Organisations that use sensitive data for business insights that directly affect people in the EU.
- Providers and deployers of AI systems. This includes those who develop, import, or distribute AI solutions.
Do not assume that your organisation is exempt from regulations if AI is only used for internal processes. Your organisation will still need to comply if those activities affect EU residents.
For example, suppose your company uses AI to evaluate job applications. The system could reflect biases inherited from past human decisions, leading to discrimination against EU residents. The company would then be subject to the EU AI Act's requirements.
Whether your organisation uses a custom or a pre-built AI model, you will still need to follow the regulatory requirements.
Additionally, industries handling large amounts of sensitive data or using advanced AI solutions face tougher rules under the EU AI Act. These include the healthcare, financial services, and telecommunications sectors.
What Are the Penalties for Non-Compliance?
The EU AI Act sets hefty penalties:
- Up to €35 million or 7% of global annual revenue (whichever is higher) for violations involving prohibited AI practices.
- Up to €15 million or 3% of global annual revenue for most other violations.
- Up to €7.5 million or 1% of global annual revenue for providing incorrect information to authorities.
Beyond financial penalties, non-compliance can result in:
- Mandatory removal of non-compliant AI systems from the market.
- Operational disruption as systems must be changed or discontinued.
- Potential legal liability from affected individuals or companies.
- Significant damage to reputation.
Many organisations are cautious about advancing their AI work because of these penalties. Some are waiting to see how enforcement plays out before implementing ambitious AI projects.
Adopt AI Without Compromising Compliance, Quality, or Speed
Many organisations remain unaware of the full scope of AI’s impacts on data security and compliance. They risk regulatory non-compliance and so much more.
Learn the top three challenges that leaders need to understand as they navigate data privacy and compliance in AI and analytics environments. And get a comprehensive guide to balancing AI innovation with compliance. Read the free eBook by Steve Karam, Perforce Delphix’s resident AI and data security expert: AI Without Compromise: Balancing Innovation, Speed, and Data Privacy in AI & Analytics.
Most Relevant EU AI Act Articles for Data Compliance Leaders
The EU AI Act is long — hundreds of pages. Data compliance leaders will need to pay special attention to several key articles.
Below are the key articles to understand, plus some that directly impact security operations, with a brief summary of what each one covers.
Article 5: Prohibited AI Practices
Read Article 5 thoroughly to understand what you should definitely not be developing or deploying.
Article 6: Classification Rules for High-Risk AI Systems
Review Article 6 to familiarise yourself with how to identify high-risk AI systems and establish a framework that everyone inside your organisation can understand and use.
If your organisation is planning to develop or deploy one or more high-risk AI systems, you will need to familiarise yourself with the following articles:
Article 9: Risk Management Systems
Use Article 9 to extend your existing governance, risk, and compliance frameworks, making them fit for purpose in the AI-powered world.
Article 10: Data Quality and Compliance
Article 10 addresses data quality requirements for training, validation, and testing data sets. It requires systems to use high-quality data that is relevant, representative, free of errors, and complete.
In practice, this means (a minimal quality-check sketch follows this list):
- Implementing strong data compliance frameworks.
- Ensuring data origin and integrity.
- Developing appropriate data access controls.
- Creating audit trails for data usage in AI systems.
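As a rough illustration of what a quality gate on training data might look like, the sketch below runs basic completeness and duplicate checks before data is approved for model training. The thresholds and field names are assumptions for illustration; the Act does not prescribe numeric limits.

```python
from dataclasses import dataclass

@dataclass
class QualityReport:
    total_rows: int
    rows_with_missing_values: int
    duplicate_rows: int

    @property
    def passes(self) -> bool:
        # Illustrative thresholds -- acceptable error rates are a
        # governance decision, not something the Act sets numerically.
        return (self.rows_with_missing_values / self.total_rows < 0.01
                and self.duplicate_rows == 0)

def check_training_data(rows: list[dict]) -> QualityReport:
    """Basic completeness and duplicate checks on a training dataset."""
    missing = sum(1 for r in rows if any(v is None for v in r.values()))
    seen, dupes = set(), 0
    for r in rows:
        key = tuple(sorted(r.items()))
        dupes += key in seen
        seen.add(key)
    return QualityReport(len(rows), missing, dupes)

report = check_training_data([
    {"age": 34, "income": 52_000},
    {"age": None, "income": 48_000},  # incomplete record
])
print(report.passes)  # False -- missing-value rate exceeds the threshold
```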
Article 11: Technical Documentation
Read Article 11 to make sure you are documenting which data and algorithms are being used so that you can demonstrate model provenance.
Article 12: Record Keeping
Related to Article 10, read Article 12 to make sure you have a strong audit trail around the data used in the development of your models. Track who has access, when the model was used, and so on; a minimal logging sketch follows.
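This sketch assumes an append-only record of every model interaction. The field names are illustrative; a production system would write to tamper-evident, centrally retained storage rather than a local file.

```python
import json
import time

def log_model_access(log_path: str, user: str, model_id: str, action: str) -> None:
    """Append one audit record per model interaction (illustrative fields)."""
    record = {
        "timestamp": time.time(),
        "user": user,
        "model_id": model_id,
        "action": action,  # e.g. "inference", "retrain", "data_export"
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_model_access("audit.jsonl", user="analyst@example.com",
                 model_id="credit-scoring-v3", action="inference")
```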
Article 15: Technical Robustness and Resilience
Article 15 mandates that high-risk AI systems must be designed and developed with appropriate levels of accuracy, robustness, and cybersecurity. Security teams must have measures in place to prevent and control attacks that manipulate training data or inputs, or that exploit model flaws.
Key requirements include:
- Regular vulnerability assessments and penetration testing.
- Implementation of up-to-date security measures.
- Resilience against attempts to alter system behaviour through harmful inputs.
- Fail-safe mechanisms for when security is compromised (a minimal wrapper sketch follows this list).
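As a rough sketch of the fail-safe idea, the wrapper below validates inputs against expected bounds and degrades to a human-referral decision on any failure, rather than letting an internal error produce an unchecked answer. The function and bounds are hypothetical examples, not a prescribed pattern from the Act.

```python
def guarded_predict(model_fn, features: dict, bounds: dict):
    """Wrap model inference with input validation and a safe fallback."""
    for name, (lo, hi) in bounds.items():
        value = features.get(name)
        if value is None or not lo <= value <= hi:
            # Reject out-of-range or missing inputs before they reach the model.
            return {"decision": "refer_to_human", "reason": f"invalid input: {name}"}
    try:
        return {"decision": model_fn(features)}
    except Exception:
        # Fail closed: never let a runtime error produce an unchecked decision.
        return {"decision": "refer_to_human", "reason": "model error"}

# Hypothetical usage with a stand-in model:
result = guarded_predict(lambda f: "approve" if f["income"] > 30_000 else "review",
                         {"income": 52_000}, {"income": (0, 10_000_000)})
print(result)  # {'decision': 'approve'}
```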
How the EU AI Act Impacts Data Privacy & Compliance
The EU AI Act does not exist by itself — it sits on top of other legislation that either already exists or is in the process of becoming law.
For example, the financial sector already has the Digital Operational Resilience Act (DORA). Meanwhile, broader technology regulation includes the Network and Information Security Directive (NIS2).
Both existing regulations emphasize security by design. That is the idea that security should be a fundamental part of system development from the start. The AI Act extends this same principle to AI systems.
If you are already following General Data Protection Regulation (GDPR) and security best practices, the AI Act isn't a huge change. It's more like an update you need to include in your current AI compliance efforts.
Key connections the EU AI Act has with existing regulations include:
- Data minimization: Like GDPR, the AI Act emphasizes using only necessary data for specific purposes.
- Transparency requirements: Frameworks require clear communication about how systems process personal data.
- Privacy by design: Security and privacy safeguards must be built into systems.
- Impact assessments: High-risk AI systems need assessments like GDPR's Data Protection Impact Assessments.
Meeting these overlapping regulations will require coordination between security, privacy, and compliance teams. Organisations must develop compliance frameworks that protect data while also covering the specific requirements that AI introduces.
Data quality is especially important in this context. Poor quality data can lead to inaccurate AI outputs and biased decisions that violate fundamental rights. This puts a strong focus on data compliance practices. These include data anonymization, generating synthetic data, and more.
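To illustrate the simplest form of this idea, the sketch below pseudonymises a direct identifier with a salted hash. This is a teaching example only: it shows irreversibility, but unlike purpose-built masking tools it does not preserve format or statistical properties.

```python
import hashlib

def pseudonymise(value: str, salt: str) -> str:
    """Replace a direct identifier with a stable, irreversible token.

    A sketch of the idea only: production masking also preserves format
    and statistical properties, which a bare hash does not.
    """
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

record = {"customer_email": "jane@example.com", "balance": 1_204.50}
masked = {**record, "customer_email": pseudonymise(record["customer_email"], salt="s3cr3t")}
print(masked)
```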
Steps to Ensure Compliance & Reduce Security Vulnerabilities
Leaders should take the following practical steps to prepare for EU AI Act compliance:

Step 1: Establish AI Literacy Programs
Before implementing any technical controls or conducting audits, there is a critical step: improving AI literacy across your organisation.
Organisations need comprehensive training programs to help everyone understand:
- What different types of AI are and how they function.
- The potential pitfalls of using AI systems.
- The implications of data used to train models.
- How AI systems can be abused or misused.
- What security vulnerabilities exist in AI applications.
Without AI literacy, teams will not buy into why controls need to be put in place, making them more likely to overlook compliance requirements and fall foul of security issues.
Step 2: Conduct Comprehensive AI Audits
AI solutions are rapidly becoming the next-generation shadow IT problem. The most forward-thinking organisations are already providing guidance on the appropriate use of AI solutions within the work environment.
Outright bans on AI rarely work. Instead, be open and inquisitive about identifying all AI systems currently in use or in development within your organisation. You can then assess them against the Act's requirements (a minimal inventory sketch follows this list):
- Catalog AI applications and classify them according to risk levels.
- Document data sources, model architectures, and decision-making processes.
- Identify compliance gaps and develop remediation plans.
- Ensure proper documentation exists for high-risk systems.
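Here is a minimal sketch of what one entry in such an inventory might look like, with a check that flags high-risk systems missing documentation. All field names and the remediation rule are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory -- field names are illustrative."""
    name: str
    owner: str
    risk_level: str                 # e.g. "high", "limited", "minimal"
    data_sources: list[str]
    documentation_complete: bool
    compliance_gaps: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("cv-screening", "HR", "high",
                   ["applicant_db"], documentation_complete=False),
]

# Surface high-risk systems missing the documentation Articles 11-12 expect.
for system in inventory:
    if system.risk_level == "high" and not system.documentation_complete:
        system.compliance_gaps.append("missing technical documentation")
        print(f"Remediation needed: {system.name} -> {system.compliance_gaps}")
```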
Step 3: Enhance Data Compliance Frameworks
Put in place robust data compliance practices tailored for AI applications:
- Establish a data management team — you need a centre of excellence.
- Establish clear data lineage and origin tracking.
- Put in place quality controls for training data.
- Develop processes for bias detection and mitigation.
- Create mechanisms for ongoing data quality monitoring.
Step 4: Implement Security-by-Design Principles
Integrate security throughout the AI development lifecycle:
- Establish a platform engineering team to build standards and blueprints that can be deployed consistently across your organisation.
- Secure the machine learning pipeline from development through deployment.
- Apply access controls and employ data masking tools to replace sensitive data as early as possible in the data pipeline.
- Conduct regular security testing of AI systems.
- Develop incident response plans for AI-specific vulnerabilities.
- Track systems for unusual behaviour or potential attacks.
Step 5: Automate Compliance Processes
Use technology to streamline compliance activities and remove friction from the software development lifecycle:
- Build self-service data pipelines so development teams have easy access to approved and compliant data sources.
- Develop monitoring systems for continuous compliance validation. You should be scanning both infrastructure and datasets for non-compliance (a minimal scanning sketch follows this list).
- Create dashboards for compliance visibility across the organisation.
- Establish automated testing for bias, explainability, and performance.
- Reduce the cognitive load by employing tools for automated documentation generation.
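As a minimal illustration of dataset scanning, the sketch below counts suspected PII matches using two regular expressions. A real scanner would rely on a maintained data classification tool and cover far more identifier types; the patterns here are simplified assumptions.

```python
import re

# Illustrative patterns only -- production scanners cover many more
# identifier types and use context, not just regex matching.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_dataset(rows: list[dict]) -> dict[str, int]:
    """Count suspected PII hits per pattern across all string values."""
    hits = {name: 0 for name in PII_PATTERNS}
    for row in rows:
        for value in row.values():
            if isinstance(value, str):
                for name, pattern in PII_PATTERNS.items():
                    hits[name] += len(pattern.findall(value))
    return hits

print(scan_dataset([{"note": "contact jane@example.com"}]))  # {'email': 1, 'iban': 0}
```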
Step 6: Foster Cross-Functional Collaboration
Break down barriers between teams responsible for different aspects of AI compliance:
- Create regular meeting points between security, privacy, and AI development teams.
- Establish clear roles and responsibilities for AI compliance.
- Develop shared objectives and success metrics.
- Set up communication channels for emerging risks and compliance issues.
Preparing for a New Era of AI Regulations
The EU AI Act represents more than just another regulation. It signals a shift in how organisations approach AI development and use, and it is an opportunity to build more trustworthy AI systems that deliver value while reducing risk.
The global regulatory landscape will continue evolving. Countries like the UK and Saudi Arabia have already developed their own AI governance frameworks. Forward-thinking organisations will recognize these regulations as catalysts for necessary security evolution.
By building strong compliance processes now, you will have an advantage. Your organisation will be better positioned to adjust to future requirements.
Comply with AI Laws like the EU AI Act with Perforce Delphix
AI-driven analytics unlock powerful insights, and contrary to what you may think, compliance does not have to slow you down.
Perforce Delphix AI & analytics solutions automate secure, irreversible data masking and delivery across 170+ sources (including Snowflake and Azure).
With Delphix, your teams can quickly access production-like datasets while minimizing sensitive data exposure. Delphix data masking preserves critical behavioural and statistical properties, supporting quality AI outcomes. And it masks terabytes of data in hours, so you can use entire datasets — without introducing bias or compromising security.
Meet regulatory requirements like the EU AI Act efficiently, without sacrificing development speed or data quality.
Talk to Perforce Delphix AI and Data Compliance Experts
Contact us to learn more about how Perforce Delphix can help your organisation prepare for EU AI Act compliance while maintaining your competitive edge in AI development.