GenAI and Privacy

The EU AI Act is coming: What do security teams actually need to do?

22 Apr 2025

The conversation around artificial intelligence is rapidly shifting from purely 'potential' to 'practical reality,' and with that comes the inevitable regulatory spotlight. The European Union's AI Act is poised to become a landmark piece of legislation, setting global precedents for how AI systems are developed, deployed, and governed.

While headlines often focus on the high-level goals of safety, transparency, and fundamental rights, the critical question for those on the ground remains: what does this actually mean for security teams? How do we translate the legal requirements into tangible security tasks and controls? It's time to move beyond awareness and into action.

Why regulation now? The shift to proactive AI governance

The EU AI Act isn't just about imposing rules; it reflects a growing understanding that the unique risks posed by AI require a dedicated governance framework. Unlike traditional software, AI systems learn, evolve, and can exhibit unexpected behaviours. Their potential impact – from influencing loan applications and medical diagnoses to controlling critical infrastructure – necessitates a proactive approach to safety and security, baked in from the start. Waiting for something to go wrong is no longer a viable strategy.

Decoding the AI Act: actionable tasks for security practitioners

Translating regulatory principles into security actions is key. Based on the known requirements and risk-based approach of the EU AI Act (particularly for high-risk systems), here’s what security teams need to start focusing on now, keeping in mind the phased implementation:

  • Operationalising risk management: The Act categorises AI systems based on risk (unacceptable, high, limited, minimal). The ban on 'unacceptable risk' AI systems is already in effect (since February 2025). For high-risk systems (common in finance, healthcare, critical infrastructure, etc.), a robust risk management system is mandatory throughout the AI lifecycle, with these obligations applying from August 2026.

    • Security task: Implement and document AI-specific threat modelling and risk assessments early in development. Integrate security risk management into the MLOps pipeline, ensuring continuous evaluation as models and data change.

  • Securing the data pipeline (data governance): The Act mandates high-quality, relevant, and representative training, validation, and testing data, along with appropriate data governance practices.

    • Security task: Implement strong data integrity checks, provenance tracking, and access controls throughout the data lifecycle (a minimal integrity-check sketch follows this list). Develop processes to assess and mitigate bias in datasets. Ensure data handling complies with privacy regulations (like GDPR) and the specific requirements for AI training data under the Act. This aligns with the focus of specialised areas like our DATA AI Security Lab.

  • Ensuring technical robustness & safety: High-risk AI systems must be resilient against errors, failures, and attempts to alter their use or behaviour (adversarial attacks). They need accuracy, fallback mechanisms, and cybersecurity appropriate to the risks.

    • Security task: Implement rigorous testing, including adversarial robustness testing (a simple stability-check sketch follows this list), performance testing under various conditions, and security code reviews. Enforce secure coding practices for AI components. Design and test fail-safe mechanisms and secure shutdown procedures. Expertise from areas like our AI Models Security Lab becomes critical here.

  • Enabling transparency & traceability: The Act requires systems to be designed to allow for traceability and logging of their functioning to ensure compliance. Transparency requirements for general-purpose AI systems will apply from August 2025.

    • Security task: Implement comprehensive, secure, and immutable logging capabilities for AI system operations, decisions, and data inputs (a hash-chaining sketch for tamper-evident logs follows this list). Ensure audit trails are protected and accessible for investigation and compliance verification.

  • Facilitating human oversight: Meaningful human oversight must be possible for high-risk systems. While the specifics are still being defined in technical standards, the principle is clear.

    • Security task: Design and secure the interfaces and mechanisms that allow for effective human monitoring, intervention, and control over the AI system. Ensure that oversight capabilities cannot be easily bypassed or tampered with.

  • Applying appropriate cybersecurity measures: This is explicitly called out. High-risk systems need security measures that ensure resilience against attempts to compromise confidentiality, integrity, or availability by unauthorised third parties.

    • Security task: Apply fundamental cybersecurity best practices tailored to the AI environment – secure infrastructure configuration (cloud or on-prem), robust API security, vulnerability management for AI components and dependencies, strong access controls, and data encryption. This is where expertise in AI Deployment Security is essential.
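
To make the data governance task concrete, here is a minimal sketch of a dataset integrity check: record a manifest of file hashes when the training data is frozen, then verify it before any later training or evaluation run. The directory layout, manifest name, and CSV extension are illustrative assumptions; a real pipeline would plug this into its dataset-versioning and access-control tooling.

```python
# Minimal sketch: dataset integrity check against a recorded manifest.
# File names, paths, and the manifest format are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record provenance: file name -> hash for every file in the dataset."""
    manifest = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return the names of files whose hashes no longer match the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [
        name for name, expected in manifest.items()
        if sha256_of(data_dir / name) != expected
    ]

if __name__ == "__main__":
    data_dir = Path("training_data")                 # assumed layout
    manifest = Path("training_data.manifest.json")   # assumed manifest name
    if not manifest.exists():
        record_manifest(data_dir, manifest)
    else:
        tampered = verify_manifest(data_dir, manifest)
        if tampered:
            raise SystemExit(f"Integrity check failed for: {tampered}")
```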
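
For the robustness and safety task, a lightweight stability check can serve as a first smoke test before investing in dedicated adversarial tooling. The sketch below perturbs inputs with small random noise and measures how often predictions stay unchanged; the `predict` function, perturbation budget, and acceptance threshold are placeholders, and random noise is only a weak proxy for genuine adversarial testing with purpose-built attack libraries.

```python
# Minimal sketch: a noise-robustness smoke test for a classifier.
# `predict` is a stand-in for your own model's inference call; the
# perturbation budget and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)

def predict(x: np.ndarray) -> np.ndarray:
    """Placeholder model: replace with a call to the real classifier."""
    weights = np.linspace(-1.0, 1.0, x.shape[-1])
    return (x @ weights > 0).astype(int)

def robustness_rate(x: np.ndarray, epsilon: float, trials: int = 20) -> float:
    """Fraction of inputs whose prediction is unchanged under small
    random perturbations within an L-infinity budget of epsilon."""
    baseline = predict(x)
    stable = np.ones(len(x), dtype=bool)
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        stable &= predict(x + noise) == baseline
    return float(stable.mean())

if __name__ == "__main__":
    samples = rng.normal(size=(200, 16))   # stand-in validation batch
    rate = robustness_rate(samples, epsilon=0.05)
    print(f"Prediction stability under perturbation: {rate:.1%}")
    assert rate >= 0.9, "Robustness below the agreed acceptance threshold"
```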
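
For the traceability task, one common pattern for tamper-evident audit trails is hash chaining: each log entry commits to the hash of the previous one, so editing any past record breaks the chain on verification. The field names and in-memory storage below are illustrative; a production system would persist entries to append-only or write-once storage and tightly control access to it.

```python
# Minimal sketch: hash-chained audit log entries so that tampering with
# any past record breaks the chain. Field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log where each entry commits to the previous one."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past entry is detected."""
        prev_hash = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "entry_hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if record["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(payload).hexdigest() != record["entry_hash"]:
                return False
            prev_hash = record["entry_hash"]
        return True

if __name__ == "__main__":
    log = AuditLog()
    log.append({"model": "credit-scoring-v3", "decision": "reject", "input_id": "A-1042"})
    log.append({"model": "credit-scoring-v3", "decision": "approve", "input_id": "A-1043"})
    print("Chain intact:", log.verify())
    log.entries[0]["event"]["decision"] = "approve"   # simulate tampering
    print("Chain intact after tampering:", log.verify())
```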

Beyond checkboxes: embedding compliance through DevSecAI

Meeting these requirements effectively isn't about a last-minute compliance scramble. It demands integrating security and regulatory considerations throughout the AI lifecycle – the core philosophy of DevSecAI. It means:

  • Thinking compliance early: Incorporating regulatory requirements during the design and planning phases.

  • Automating security checks: Building security and compliance validation into the CI/CD and MLOps pipelines (a minimal pipeline-gate sketch follows this list).

  • Continuous assessment: Regularly evaluating the AI system's compliance posture, not just its functional performance. Understanding where you stand with an AI security maturity assessment can provide a crucial baseline.
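
As a sketch of what building compliance validation into the pipeline can look like, the script below acts as a simple gate: it runs a set of checks and exits non-zero so the CI job fails when any of them do. The individual checks and file paths are assumptions for illustration; in practice they would call your real integrity, robustness, and documentation tests.

```python
# Minimal sketch: a pipeline gate script that fails the build when any
# compliance check fails. The checks and file paths are placeholders;
# wire them to your own data-integrity, robustness, and documentation tests.
import sys
from pathlib import Path

def check_model_card_exists() -> bool:
    """Example documentation check: a model card must ship with the model."""
    return Path("MODEL_CARD.md").exists()            # assumed file name

def check_dependency_audit_passed() -> bool:
    """Placeholder for a dependency-scan result exported by an earlier
    pipeline stage into an agreed location."""
    return Path("reports/dependency_audit.ok").exists()

CHECKS = {
    "model card present": check_model_card_exists,
    "dependency audit passed": check_dependency_audit_passed,
}

def main() -> int:
    failures = [name for name, check in CHECKS.items() if not check()]
    for name in failures:
        print(f"FAIL: {name}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```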

Preparing for a regulated AI future

The EU AI Act and similar regulations likely to follow globally signal a new era for AI development. For security teams, this means upskilling, adapting processes, and collaborating closely with data science, legal, and development teams. While the main obligations for high-risk systems are set for August 2026, and the detailed technical standards are still being finalised, proactive preparation is key. Viewing these regulations not just as constraints but as frameworks for building trustworthy, reliable, and ultimately more valuable AI systems is essential. It’s an opportunity to demonstrate that innovation and robust security governance can, and must, go hand-in-hand.

How is your security team preparing for the practical implications of AI regulations like the EU AI Act? What are your biggest challenges?