EU AI Act compliance explained
According to the ServiceNow Enterprise AI Maturity Index 2025, 82% of AI Pacesetters—organisations leading AI implementation—expect to increase AI spending in the next year. More than two-thirds (67%) of the companies surveyed say AI has increased their organisation’s gross margin.
Despite the benefits of AI, respondents also expressed concerns about data security, underscoring why responsible AI is vital.
AI governance frameworks can help organisations safely and securely implement AI. Global efforts are growing: according to the Organisation for Economic Co-operation and Development (OECD), there are more than 1,000 AI policy initiatives across 69 countries, territories, and the EU.
One notable example is the EU AI Act. Organisations deploying AI agents and other AI systems may be responsible for complying with this pivotal legislation. Let’s unpack key parts of the EU AI Act and how ServiceNow helps organisations put AI to work responsibly.
What is the EU AI Act?
The EU AI Act is EU legislation, proposed by the European Commission, that governs how AI is developed and used. In effect, it acts as a rulebook for AI in the EU.
Implementation is phased. While the act entered into force in August 2024, key provisions roll out progressively until August 2027, giving organisations time to adapt.
The act’s prohibitions on AI systems that pose the highest level of risk took effect in February 2025. These include technologies for predictive policing based on profiling, real-time facial recognition in public spaces, and untargeted scraping of facial images from social media. The rules on general-purpose AI (GPAI) models will take effect in August 2025.
The EU is preparing more than 20 pieces of supporting legislation to supplement the act. ServiceNow is working with the Business Software Alliance (BSA) to help guide EU decision-making, sharing our expertise in AI governance.
How can businesses put AI to work responsibly?
Robust governance is crucial to effective AI in the enterprise. Leaders must ensure their implementations adhere to the global compliance frameworks to which they are subject.
For enterprises operating in the EU, that includes complying with the EU AI Act.
Organisations should consider conducting risk assessments to classify AI applications. Those identified as high-risk may require risk mitigation strategies, such as regulatory reporting and transparency in decision-making processes.
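As a toy illustration of what the output of such an assessment might look like, here is a minimal sketch in Python. It is not legal guidance, and every name in it is a hypothetical assumption rather than anything prescribed by the act.

```python
from dataclasses import dataclass, field

# Toy record of a completed AI risk assessment. All names are
# hypothetical; real classification requires legal and compliance review.
@dataclass
class RiskAssessment:
    system_name: str
    risk_tier: str                        # e.g. "high", "limited", "minimal"
    mitigations: list[str] = field(default_factory=list)

    def requires_mitigation(self) -> bool:
        # Under the act, high-risk systems carry the strictest obligations.
        return self.risk_tier == "high"


# A hiring tool would typically land in the high-risk category:
assessment = RiskAssessment(
    system_name="candidate-screening-model",
    risk_tier="high",
    mitigations=["regulatory reporting", "transparent decision logs"],
)
print(assessment.requires_mitigation())  # True
```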
An AI orchestration platform, such as the ServiceNow AI Platform, can support organisation-wide oversight of AI systems, providing a centralised command centre to manage AI models. This can help enforce consistent governance policies across all AI systems through features such as access management, real-time reporting, and embedded privacy capabilities.
It’s also important to cultivate AI literacy among employees. AI training can aid workers in understanding AI ethics—including fairness, non-discrimination, and protection of fundamental rights—to help ensure AI systems are safely implemented.
How does the EU AI Act classify risk?
The EU AI Act provides a risk-based classification system, grouping AI systems into four categories, summarised below:
- Unacceptable risk: AI systems that threaten safety or fundamental human rights are banned. Any organisation using them risks severe financial penalties of up to €35 million or 7% of global annual turnover, whichever is higher (see the sketch after this list).
- High risk: AI systems used in sensitive areas such as hiring, critical infrastructure, and law enforcement are permitted under strict rules. They must meet stringent requirements, including establishing risk management practices, maintaining adequate documentation, and ensuring human oversight.
- Limited risk: Fewer requirements apply to AI systems with lower risk profiles, such as chatbots or deepfake generators. They must meet certain transparency obligations, such as informing users when they’re interacting with an AI system.
- Minimal risk: AI systems supporting internal processes, such as writing assistants or email filtering, carry minimal risk and have no additional obligations tied to them.
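To make the penalty arithmetic for the unacceptable-risk category concrete, here is a minimal sketch in Python. It is illustrative only, not legal guidance; the function name is a hypothetical assumption, and the figures come from the list above.

```python
# Illustrative only, not legal guidance. The fine ceiling for using a
# banned (unacceptable-risk) AI system is the higher of EUR 35 million
# or 7% of global annual turnover.

def prohibited_practice_fine_ceiling(global_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)


# A company with EUR 200 million turnover hits the flat EUR 35 million floor,
# because 7% of its turnover (EUR 14 million) is below it:
print(prohibited_practice_fine_ceiling(200_000_000))    # 35000000.0

# A company with EUR 1 billion turnover faces 7% = EUR 70 million instead:
print(prohibited_practice_fine_ceiling(1_000_000_000))  # 70000000.0
```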
The EU AI Act also classifies GPAI models into two categories, summarised below:
- Providers of GPAI models, such as large language models, need to meet certain requirements to allow downstream providers to incorporate them into their AI systems.
- GPAI models posing “systemic risk” (the most advanced models, trained with very large amounts of compute) have additional obligations. For example, providers must document and report serious incidents and mitigate systemic risks.
Penalties may also apply for other instances of non-compliance, such as neglecting transparency requirements or providing incorrect information to regulatory bodies.
How can ServiceNow help?
Regulatory obligations can make it challenging for organisations to implement AI with agility while remaining fully compliant.
To help organisations meet their compliance needs, the ServiceNow AI Platform enables them to govern, manage, and optimise all AI initiatives, including internal builds, third-party models, agents, and AI embedded in software as a service (SaaS).
The ServiceNow AI Control Tower enables businesses to responsibly deploy and scale AI operations across the organisation and stay compliant with the EU AI Act and other regulatory requirements. Leaders can take full control of AI initiatives, helping them mitigate risk, track impact, and coordinate AI agents.
Find out more about how ServiceNow can support your organisation with governance, risk, and compliance.