- Post History
- Subscribe to RSS Feed
- Mark as New
- Mark as Read
- Bookmark
- Subscribe
- Printer Friendly Page
- Report Inappropriate Content
Tuesday
In the enterprise, AI is no longer confined to analytics or prediction it now generates content, reasons over complex problems, and autonomously executes actions across systems. These capabilities deliver unprecedented speed, efficiency, and insight. At the same time there is a very real risk of things going terribly wrong.
Poorly managed AI can amplify bias, violate security policy, expose sensitive data, hallucinate authoritative sounding but incorrect outcomes, and in the case of agentic AI take actions that have cascading operational and financial consequences. As AI systems move closer to decision-making authority, the cost of unmanaged risk grows exponentially.
This is why frameworks matter. One of the most credible and practical foundations for AI risk governance is the AI Risk Management Framework (AI RMF) from the National Institute of Standards and Technology (NIST). From a ServiceNow perspective, the AI RMF is not just a policy reference it is an executable operating model for governing AI across its full lifecycle using platform-native services such as AI Control Tower, IRM, Security Operations, and Data Governance.
Factors to be considered for AI Trustworthiness
If organizations are going to trust AI, certain characteristics must be present for it to be truly trustworthy. NIST defines these characteristics clearly, and they become especially critical in the context of generative and agentic systems.
AI must be valid and reliable. It needs to be accurate, consistent, and aligned to its intended use. In Generative AI, this risk manifests as hallucinations outputs that appear coherent but are factually incorrect. In Agentic AI, reliability failures can propagate across workflows as agents trigger downstream actions based on faulty reasoning.
AI must be safe. Safety is no longer limited to physical harm; it includes operational safety. An autonomous agent that incorrectly closes incidents, modifies access, or triggers financial transactions can cause real damage. Safety must therefore be enforced through controlled execution paths and human-in-the-loop checkpoints.
AI must be secure and resilient. Generative and agentic systems dramatically expand the attack surface. Threats include prompt injection, data poisoning, model inversion, unauthorized agent execution, and supply-chain risk from third-party models. Resilience means the system can detect, withstand, and recover from these attacks without losing trustworthiness.
AI must be explainable and interpretable. As AI systems reason and act, organizations must be able to explain why a decision or action occurred. This is especially important for agentic AI, where reasoning chains and decision paths must be traceable—not opaque.
AI must preserve privacy. Generative AI systems are trained on and interact with vast amounts of data. Without strong controls, sensitive information can leak through prompts, responses, or training feedback loops.
AI must be fair. Bias in training data or reasoning logic can lead to discriminatory outcomes, which in turn undermine validity, trust, and regulatory compliance.
Finally, AI must be accountable and transparent. Organizations cannot accept black-box autonomy. Ownership, traceability, and auditability are essential, especially when AI systems initiate actions rather than simply providing recommendations.
Core of NIST AI RMF
The NIST AI RMF core consists of Govern, Map, Measure, and Manage.
Govern
Governance is the foundation. It sets the culture, intent, and boundaries for AI usage across the enterprise. In the context of generative and agentic AI, governance answers critical questions:
- Where is AI allowed to act autonomously?
- What decisions require human oversight?
- Which data sources are approved for AI consumption?
- What is the organization’s risk tolerance by AI use case?
ServiceNow operationalizes governance through AI Control Tower by centralizing AI governance, policies, roles, and approvals in one system of record, where AI use cases are registered, classified, assessed, approved, and monitored. Governance becomes a living control system, not a static policy document.
Map:
The Map function establishes context. This becomes especially important with AI Agentic Fabric, where multiple agents collaborate across domains such as IT, HR, Finance, and Security.
Mapping includes:
- Identifying all actors (developers, agents, users, approvers)
- Understanding data flows across prompts, tools, memory, and execution
- Defining goals and intended outcomes
- Establishing risk tolerance per service and domain
There is no point in deploying AI without clearly understanding what it is supposed to accomplish. It also involves defining all stakeholders and roles, understanding how they interact with the system, and identifying where risk may be introduced or reduced. Risk tolerance is also defined here, recognizing that tolerance varies by organization and by use case. ServiceNow enables this through structured intake, stakeholder mapping, and lifecycle visibility.
Measure
Measurement is where AI risk becomes tangible. A combination of qualitative and quantitative techniques is essential.
Key Risk Categories for Generative AI
- Hallucination Risk: AI produces plausible but incorrect information, leading to bad decisions.
- Prompt Injection Risk: Malicious or unintended prompts manipulate AI behavior.
- Data Leakage Risk: Sensitive data is exposed through responses, logs, or training feedback.
- Bias Amplification Risk: Generative outputs reinforce existing biases at scale.
Key Risk Categories for Agentic AI
- Autonomous Action Risk: Agents execute actions without sufficient validation or authorization.
- Cascading Failure Risk: One incorrect decision trigger multiple downstream failure.
- Role and Access Risk: Agents operate with excessive privileges.
- Explainability: Decision chains become too complex to interpret.
Measurement includes analysis, understanding whether the system is meeting its stated goals and robust testing, evaluation, verification, and validation across the AI lifecycle. ServiceNow supports this by embedding assessment workflows, evidence collection, and continuous monitoring directly into operations.
Manage
Managing AI risk is not about eliminating risk, it is about making informed decisions.
Organizations may:
- Mitigate risk through controls, guardrails, and approvals
- Accept risk where benefits outweigh exposure
- Transfer risk through contractual or insurance mechanisms
- Avoid risk by restricting or disabling certain AI capabilities
Organizations revisit their goals, determine whether they have been met, prioritize identified risks, and decide how to respond. Some risks are mitigated with controls. Some are accepted. Others are transferred or insured against. This function is about continuous management, not one-time decisions. ServiceNow operationalizes this by linking AI risks to actions, controls, owners, and remediation workflows, creating a closed loop from identification to continuous monitoring of Risk and compliance.
AI Control Tower: Where Framework Meets Execution
This understanding of AI risk feeds directly into AI Control Tower, which acts as the operational backbone for AI governance on ServiceNow.
Using the NIST AI RMF footing:
- Govern defines approval policies and accountability models
- Map structures AI use cases, data sources, and agent interactions
- Measure enables ongoing risk assessments and performance monitoring
- Manage drives remediation, escalation, and continuous improvement
AI Control Tower becomes the system of record for AI trust.
From Trust Principles to Trust Operations
In a world where AI is everywhere, trust is everything. The NIST AI Risk Management Framework provides the structural foundation for trustworthy AI, defining what must be governed, understood, measured, and managed. ServiceNow provides operational muscle, translating those principles into executable services, workflows, and controls embedded directly into how the enterprise runs. Together, they enable organizations to move beyond abstract discussions of AI ethics and intent, toward real, defensible, service-centric AI risk management.
