WHITE PAPER

The Four Essential Elements to Responsible AI Governance

Table of Contents
Executive Summary
1. Define and commit to "Responsible AI"
2. Establish governance by design to enable speed and scale
3. Focus on key AI risks and regulatory requirements
4. Anticipate and adapt to a rapidly evolving regulatory landscape
Practical steps to set up a successful AI Governance program
Conclusion
References

Executive Summary

At ServiceNow, AI is changing the game — fast. From automating everyday tasks to helping us make better decisions, AI is already proving to be a powerful force in how we work. But with this power comes responsibility. We believe AI should be innovative and accountable, so we've evolved Responsible AI (RAI) from a concept to a program and built it into the way we operate — from how we build to how we govern. This paper walks you through the four essential elements we have identified to blend innovation with strong guardrails, ensure AI works for all ServiceNow employees, and stay aligned with evolving rules and expectations around the world.

"At ServiceNow we're all about freedom within a framework, because innovative AI and Responsible AI don't have to be mutually exclusive. That's why we've built Responsible AI into not only our platform, but also our processes and software development lifecycle."
— Kellie Romack, Chief Digital Technology Officer, ServiceNow

1. Define and commit to "Responsible AI"

At ServiceNow, Responsible AI is a philosophy, a management approach, an operating principle, and a technical direction. That makes it the first and most crucial of our four essential elements. Our approach is grounded in four simple but powerful principles. Every piece of AI we develop is:

• Human-centered
Our AI solutions are built on a foundation of human-centered principles that put humans in control of AI-based decisions.

• Inclusive
Our solutions are constantly tested to promote fairness for all and to minimize bias.

• Transparent
We communicate with our customers transparently on the topic of AI, using clear, understandable terms. We also share detailed information on the governance foundations of our AI, like the type of data used for training and our approach to privacy and security.

• Accountable
Trust is the cornerstone of our AI initiatives. We have adopted an oversight structure to provide accountability and governance.

These ideas are inspired by global frameworks like NIST's AI Risk Management Framework, the EU AI Act, and ISO 42001, and they shape how we design, build, and deliver AI every day.

2. Establish governance by design to enable speed and scale

Many companies claim that effective AI governance requires executive sponsorship to develop standards and a focused effort to gain stakeholder support. At ServiceNow, we have found a better way: we developed our governance standards, framework, and processes together. That's our second essential element, because when you have participation, you don't need to build consensus.

First, we brought together the right voices — developers, architects, project managers, compliance pros, and executives — into a central Digital Technology AI Council to shape how we roll out AI safely and consistently. That council developed our AI Governance program, which focuses on five key goals:
1. Uniform Cross-Functional Execution
We're making sure everyone — across departments and functions — is rowing in the same direction when it comes to AI. That means clear protocols, reusable components, shared architecture, shared goals, and one enterprise-wide governance body to keep us all aligned (see Figure 1).

2. Regulatory Compliance
Regulations around AI are evolving fast. We keep a close eye on them — from privacy to security to legal — and make sure our teams know what's required and how to stay compliant every step of the way.

3. Policies & Procedures
It's not just about writing the rules; it's about making them usable. We create practical policies and standards that help teams build, deploy, and manage AI the right way — responsibly and consistently.

4. Risk Management
We look at every stage of the AI lifecycle — from development to deployment and beyond — to identify potential risks. Then we put guardrails in place to keep those risks in check before they become issues.

5. Controls Assurance
We regularly assess how well our controls are working. If something's missing or not up to standard, we fix it fast — making sure our AI systems stay on track and aligned with our governance goals.

To deliver on this program, we created an AI governance structure using workstreams aligned with our corporate operating model (see Figure 1). These workstreams function together in the Enterprise AI Governance Structure, created by the Digital Technology AI Council.

"Bringing everyone onboard (the village), getting the right sponsorship from our leaders, and aligning early on clear roles & responsibilities, scope, and priorities was essential to getting this governance structure off to a successful start."
— Walid Sleiman, Senior Director, Internal AI Governance and Digital Technology Governance, Risk & Compliance leader

Figure 1: Cross-functional workstreams work together in our AI governance structure
3. Focus on key AI risks and regulatory requirements

Even with a cross-functional governing committee, it's imperative that governance becomes part of how teams work, not an afterthought. To that end, we are prioritizing two emerging AI risks as part of this third essential element:

AI System Development Lifecycle (AI-SDLC): The Digital Technology AI Council created a new AI delivery process with AI governance and compliance requirements built in (see Figure 2). This ensures that everything we build, or purchase through our partners and vendors, goes through our established SDLC processes and is vetted in line with our policies and procedures. We then mapped leading AI regulatory standards such as the NIST AI RMF, the EU AI Act, and ISO 42001 to create our own AI SDLC requirements with 44 clear control objectives (see Figure 3). We used these requirements to embed controls directly into our SDLC processes and tools, which were designed with compliance and controls in mind from the outset (see Figure 4).

Figure 2: Our AI System Development Lifecycle
Figure 3: Mapping of AI SDLC control requirements
Figure 4: Embedding our control requirements within the AI SDLC to deliver a "Compliant by Design" end-to-end process

Agentic AI Access & Identity Management: The second risk priority we are focusing on is access to AI systems and, more specifically, access by agentic AI. Agentic AI is reshaping enterprise operations by enabling virtual agents to make decisions, take initiative, and learn autonomously. While these capabilities unlock speed and scale, they also introduce new security and ethical risks. For example, what role should be assigned to an AI agent? Administrative access, to enable ease of processing, executing tasks, and accessing data? Should we assign it an end-user role? Or should we create a new role specific to AI agents?
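To make the trade-off concrete, one common answer to these questions is a dedicated, least-privilege agent role with a full audit trail. The sketch below is purely illustrative — the role names, permission sets, and log format are our own assumptions for this example, not ServiceNow's implementation or a recommendation from this paper:

```python
# Hypothetical sketch: a scoped role for an AI agent instead of reusing
# "admin" or "end_user". All names and permissions here are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Role:
    name: str
    permissions: frozenset  # actions this role may perform

ADMIN = Role("admin", frozenset({"read", "write", "configure", "delete"}))
END_USER = Role("end_user", frozenset({"read", "write"}))
AI_AGENT = Role("ai_agent", frozenset({"read", "summarize"}))  # least privilege

@dataclass
class AccessDecision:
    allowed: bool
    reason: str

# Traceability: every agent action, allowed or denied, is recorded.
audit_log: list[dict] = []

def check_access(role: Role, action: str, resource: str) -> AccessDecision:
    """Allow the action only if the role's permissions include it,
    and log the decision so agent activity is fully traceable."""
    allowed = action in role.permissions
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": role.name,
        "action": action,
        "resource": resource,
        "allowed": allowed,
    })
    reason = "permitted" if allowed else f"role '{role.name}' lacks '{action}'"
    return AccessDecision(allowed, reason)

# The agent can read a record, but cannot delete it -- both attempts are logged.
print(check_access(AI_AGENT, "read", "record/001").allowed)    # True
print(check_access(AI_AGENT, "delete", "record/001").allowed)  # False
```

The point of the sketch is the shape of the answer, not the specifics: a purpose-built agent role keeps the blast radius small, and the audit trail gives security teams the traceability they need when an autonomous agent misbehaves.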
Organizations must treat AI security as a strategic foundation — defining strict parameters for agent autonomy, ensuring real-time oversight, securing the entire AI stack, and embedding ethical practices into every stage of deployment. By combining technical safeguards with accountability and transparency, businesses can harness AI's power without sacrificing trust or safety.

• AI agents need strict role-based access controls, authentication, and traceability to operate securely.
• Continuous monitoring and real-time defenses are critical to detect anomalies and mitigate prompt injection or misuse.
• Securing the entire AI ecosystem — including models, APIs, and compute environments — is essential to reduce risk.
• Ethical deployment requires transparency, accountability, and bias mitigation to build trust and fairness.

4. Anticipate and adapt to a rapidly evolving regulatory landscape

The AI regulatory environment is changing quickly — not just in the U.S., but across Europe, Asia, and other key markets. Rather than react to new requirements, we are committed to anticipating them. Staying ahead of emerging standards means building strong governance frameworks today that can flex and scale with the regulations of tomorrow. We are actively pursuing alignment with ISO 42001, the emerging global standard for AI management systems. This isn't just a milestone — it's a signal to our customers, partners, and regulators that we take responsible AI seriously. And as a founding member of the AI Alliance (alongside IBM, Meta, and others), we're not only preparing for what's next — we're helping define it. By engaging directly with policymakers, industry groups, and the broader AI community, we aim to shape a future where innovation and accountability go hand in hand.

Integrating AI into the workplace offers significant benefits but requires a proactive approach to security.
Access controls and monitoring will need to evolve to address the non-deterministic and unpredictable behaviors of AI agents, alongside implementing ethical considerations. This will allow organizations to harness AI's power while safeguarding their systems and data from security and operational challenges.
— Vinay Pillai, VP & Chief Enterprise Architect, Digital Technology Enterprise, Integration, and Services

Practical steps to set up a successful AI Governance program

Building a robust AI governance program can seem daunting, but it doesn't have to be.