AI Glossary: GRC and Enterprise AI Glossary
Popular AI terms relevant to GRC and Enterprise AI
Updated May 2026
Term - Definition
— A —
Agent - Autonomous entity with agency and decision-making capabilities that perceives its environment, processes information, and takes actions to achieve defined goals. In enterprise AI, agents can execute multi-step tasks, use tools, and operate with minimal human oversight.
Agent-to-Agent Protocol (A2A) - An open interoperability protocol, originally launched by Google in April 2025 and donated to the Linux Foundation in June 2025, that enables AI agents built on different platforms to discover one another, exchange information, and coordinate actions securely. ServiceNow is a founding partner of the Linux Foundation A2A Project, and implements A2A through its AI Agent Fabric to allow agents from different vendors to collaborate across enterprise environments.
Agentic AI - Type of artificial intelligence designed to act autonomously, with a high degree of agency and self-determination. Agentic AI systems can plan, reason, use tools, and execute multi-step workflows to complete complex tasks with minimal human intervention.
AI Agent Fabric - ServiceNow's multi-agent communication layer that connects ServiceNow, customer, and partner-built agents. Built on open standards including A2A and MCP, it enables AI agents to share context, coordinate actions, and achieve outcomes across enterprise environments.
AI Asset Inventory - Centralized repository of all AI-related assets, including models, datasets, and algorithms, within an organization, enabling the organization to track, manage, and govern its AI assets.
AI Asset Lifecycle - Process of managing the entire lifecycle of an AI asset, from creation to retirement.
AI Bias - Systematic errors or unfair outcomes in AI outputs resulting from flawed training data, algorithmic design, or model development processes. AI bias can result in discriminatory outcomes and is a key focus of responsible AI governance frameworks, including the NIST AI RMF.
AI Control Deduplication - Identifies and removes redundant control objectives and proposes a new common control objective (Now Assist).
AI Control Tower - Centralized dashboard for monitoring and managing AI-related activities, including model performance, data quality, and user adoption. Includes agent autodiscovery to provide a live, continuously updated governance registry showing which AI agents and MCP servers are active, what models they use, and their risk and compliance status.
AI Gateway - Supports AI agent interoperability and governance for workflows that cross multiple platforms using Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocol. Acts as a unified control point for managing, routing, and securing AI model interactions across the enterprise.
AI Governance - The framework of policies, processes, standards, and accountability structures that guide the responsible development, deployment, and monitoring of AI systems. AI governance addresses risks such as bias, hallucination, data privacy, explainability, and regulatory compliance, and is formalized in frameworks such as the NIST AI RMF and the EU AI Act.
AI Issue Submission Agent - AI-powered, guided conversational experience for employees during the issue submission process (agentic AI).
AI Issue Summarization - Generates a clear, structured summary of the issue for the issue record, helping teams resolve concerns faster (Now Assist).
AI Observability - The capability to continuously monitor, audit, and explain the behavior, performance, and outputs of AI models in production. AI observability includes tracking metrics such as accuracy, drift, hallucination rates, and policy compliance, enabling teams to detect issues and maintain accountability across the AI lifecycle.
AI Red Teaming - Adversarial testing process in which teams attempt to identify vulnerabilities, biases, safety failures, and unintended behaviors in AI systems before and after deployment. AI red teaming is recommended by NIST and is a core component of responsible AI development and enterprise AI governance programs.
AI Response Assist - Identifies similar questions from past assessments, then uses this data to auto-fill an assessment, including sources (agentic AI).
AI Risk Identification - Conversational AI agent that auto-pulls entity context, guides users through risk domain selection, then surfaces relevant risks from internal, industry, and external sources into one consolidated list (agentic AI).
AI Risk Management Framework (AI RMF) - Voluntary framework published by the National Institute of Standards and Technology (NIST) to help organizations identify, assess, and manage risks associated with AI systems throughout their lifecycle. The framework defines four core functions—Govern, Map, Measure, and Manage—and was complemented in July 2024 by a Generative AI Profile (NIST AI 600-1) that adapts the RMF to risks specific to generative AI.
AI Search - A feature that uses natural language processing (NLP) to provide more accurate, context-aware search results by understanding user intent rather than relying solely on keyword matching.
AI Smart Documents - Conversational experience for generating document summaries, insights, FAQs, and more (Now Assist).
AI Voice Assist for Documents - AI-generated voice summary and Q&A using voice commands (Now Assist).
AI-Powered Virtual Agent - A virtual agent that uses AI to provide automated customer service and support, capable of handling complex, multi-turn conversations and escalating to human agents when required.
Anomaly Detection - A feature that uses machine learning to detect unusual patterns in data that may indicate fraud, errors, security threats, or operational issues.
Artificial Intelligence (AI) - The capability of a machine to simulate or replicate aspects of human intelligence, including learning, reasoning, problem-solving, perception, and language understanding. AI encompasses a broad range of techniques, including machine learning, deep learning, natural language processing, and computer vision.
Attention Mechanisms - A component of deep learning architectures that allows a model to focus selectively on the most relevant parts of its input when generating output. Attention mechanisms are the core building block of the Transformer architecture that underpins modern large language models.
— C —
Chain-of-Thought Prompting - A prompt engineering technique that instructs an AI language model to generate intermediate reasoning steps before arriving at a final answer, improving accuracy on complex or multi-step tasks. Chain-of-thought prompting is widely used to improve the reliability of LLM outputs in enterprise applications.
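As a minimal illustration of the technique, a chain-of-thought prompt can be assembled by prepending a worked reasoning example and asking the model to reason before answering. This is a hypothetical sketch: the example question and all wording are invented, not taken from any product or framework named in this glossary.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought prompt that asks the model
    to produce intermediate reasoning before its final answer."""
    # One worked example demonstrating the reasoning-then-answer pattern.
    example = (
        "Q: A policy review takes 3 days per vendor. How long for 4 vendors?\n"
        "Reasoning: 3 days per vendor times 4 vendors is 12 days.\n"
        "Answer: 12 days\n\n"
    )
    return (
        example
        + f"Q: {question}\n"
        + "Reasoning: think step by step, then state the answer.\n"
        + "Answer:"
    )

prompt = build_cot_prompt("If 2 controls are tested per hour, how many in 8 hours?")
```

The prompt is then sent to a language model; the demonstrated reasoning pattern nudges the model to emit its own intermediate steps, which tends to improve accuracy on multi-step tasks.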
Chatbots - Software applications that use NLP and AI to simulate human conversation through text or voice interfaces. Modern chatbots powered by large language models can handle complex, context-aware interactions across customer service, IT support, and HR functions.
Classification - A supervised machine learning technique that categorizes input data into predefined classes or labels based on patterns learned from training data.
Clustering - An unsupervised machine learning technique that groups similar data points together based on shared characteristics, without using predefined labels.
Computer Vision - A subset of AI that enables computers to interpret and understand visual information from images and video, enabling applications such as object detection, facial recognition, and document processing.
Context Window - The maximum amount of text or tokens that an AI language model can consider at one time when processing input or generating output. Larger context windows allow models to handle longer documents, maintain conversation history, and reason over more complex information in a single interaction.
Control Rationalization - Identifies and removes redundant control objectives, ensuring a cleaner, more efficient compliance framework in GRC scenarios (Now Assist).
Conversational AI - A type of AI that uses natural language processing and machine learning to enable human-like dialogue between users and computer systems, supporting multi-turn conversations across voice and text channels.
Convolutional Neural Networks (CNNs) - A class of deep learning models primarily used for processing structured grid data such as images and video. CNNs use convolutional layers to automatically learn spatial hierarchies of features.
— D —
Decision Intelligence - A feature that uses machine learning to provide insights and recommendations to support decision-making by combining data analysis, AI modeling, and decision frameworks.
Deep Learning (DL) - A subset of machine learning that uses neural networks with multiple layers (deep neural networks) to learn representations of data with multiple levels of abstraction. Deep learning is the foundation of modern AI capabilities including LLMs, computer vision, and speech recognition.
Diffusion Models - A class of generative AI models that learn to generate new data—such as images, audio, or video—by learning to reverse a gradual noising process. Diffusion models power many leading image-generation tools and are increasingly used alongside or as an alternative to Generative Adversarial Networks (GANs).
— E —
Embeddings - Dense numerical vector representations of data—such as text, images, or code—that capture semantic meaning and relationships in a format AI models can process. Embeddings are used in search, recommendation engines, retrieval-augmented generation (RAG), and similarity analysis.
Enterprise AI Discovery - Uses machine learning and natural language processing to automatically discover, identify, and catalog AI-related assets, such as models, datasets, and algorithms, within an organization.
Entity Extraction - A natural language processing technique that identifies and classifies specific entities—such as names, locations, organizations, dates, and monetary values—from unstructured text data.
EU AI Act - A comprehensive regulation by the European Union (Regulation (EU) 2024/1689) that governs the development, deployment, and use of AI systems. The EU AI Act classifies AI systems by risk level (unacceptable, high, limited, and minimal risk) and imposes obligations including transparency, conformity assessments, and human oversight. It entered into force on 1 August 2024, with staged obligations applying from February 2025 through 2027.
Expert Systems - AI systems that use a knowledge base of domain-specific rules and facts, combined with an inference engine, to simulate expert-level decision-making in a specific field. Expert systems were foundational to early AI in compliance and risk management.
Explainable AI (XAI) - AI methods and techniques designed to make the decisions, predictions, and recommendations of AI models understandable and interpretable to humans. XAI is critical for regulatory compliance, auditability, and building trust in AI systems, particularly in high-stakes domains such as risk management and finance.
— F —
Few-Shot Learning - A type of machine learning in which a model is trained or prompted to learn from a small number of examples, enabling rapid adaptation to new tasks without extensive retraining.
Fine-Tuning - The process of further training a pre-trained foundation model or large language model on a specific, smaller dataset to adapt its behavior for a particular domain, task, or use case. Fine-tuning allows organizations to customize AI models while preserving the general capabilities learned during pre-training.
Foundation Model - A large AI model trained on broad datasets at scale that can be adapted to a wide range of downstream tasks through fine-tuning or prompting. Foundation models include large language models (LLMs) and multimodal models, and serve as the base for many enterprise AI applications. The term was introduced in 2021 by the Stanford Institute for Human-Centered AI (HAI).
— G —
Generative AI (Gen AI) - Artificial intelligence that generates new content—such as text, images, code, audio, or video—based on patterns learned from training data. Generative AI is powered by foundation models and large language models (LLMs) and underpins capabilities such as Now Assist, chatbots, document summarization, and code generation.
Generative Adversarial Networks (GANs) - A class of deep learning models consisting of two neural networks—a generator and a discriminator—that compete against each other to produce realistic synthetic data. While historically important for image generation, GANs have been largely superseded by diffusion models in many generative AI applications.
Guardrails - Policies, constraints, and technical mechanisms applied to AI systems to enforce safe, compliant, and predictable behavior. Guardrails prevent AI models from generating harmful, inaccurate, or out-of-scope outputs and are a key component of enterprise AI governance. They can include input/output filters, confidence thresholds, and human escalation triggers.
— H —
Hallucination - A phenomenon in which an AI model—particularly a large language model—generates content that sounds plausible but is factually incorrect, fabricated, or unsupported by its training data or retrieved context. NIST AI 600-1 uses the formal term “confabulation,” defined as “the production of confidently stated but erroneous or false content,” and notes that this is colloquially called “hallucination” or “fabrication.” Hallucination is a key risk category in the NIST Generative AI Profile and requires mitigation through techniques such as RAG, guardrails, and human-in-the-loop review.
Human-in-the-Loop (HITL) - An AI system design approach that incorporates human judgment, validation, or approval at critical points in an automated workflow. HITL is used to maintain accountability, catch errors, and ensure compliance in high-stakes decisions, and is recommended by governance frameworks such as the NIST AI RMF.
— I —
Intelligent Automation - A combination of AI, machine learning, and robotic process automation (RPA) that automates complex, judgment-intensive business processes beyond the capability of traditional rule-based automation.
— K —
Knowledge Graphs - A data structure that represents knowledge as a network of entities and their relationships, enabling AI systems to reason about connections, infer new knowledge, and ground model outputs in structured facts.
— L —
Large Language Model (LLM) - A deep learning model trained on massive text datasets using transformer architecture that can understand and generate human language with high fluency. LLMs are the foundation of generative AI capabilities including conversational AI, document summarization, code generation, and question answering. Examples include the models powering Now Assist on the ServiceNow AI Platform.
— M —
Machine Learning (ML) - A subset of AI that develops algorithms that enable computers to learn from data and improve their performance on tasks without being explicitly programmed for each scenario.
Meta-Learning - A machine learning paradigm focused on designing models that learn how to learn, enabling rapid adaptation to new tasks with minimal training data.
Model Context Protocol (MCP) - An open protocol introduced by Anthropic in November 2024 that standardizes how AI models and applications exchange context information, access tools, and connect to external data sources. MCP enables AI agents to securely interact with enterprise systems and databases, and is a key interoperability standard supported by the ServiceNow AI Gateway and AI Control Tower. In December 2025, Anthropic donated MCP to the Linux Foundation's Agentic AI Foundation.
Multi-Agent System - An AI architecture composed of multiple autonomous agents that collaborate, communicate, and coordinate to accomplish complex tasks that exceed the capability of a single agent. Multi-agent systems are central to enterprise agentic AI deployments and are governed through platforms such as ServiceNow's AI Control Tower and AI Agent Fabric.
Multimodal AI - AI systems capable of processing, understanding, and generating multiple types of data—such as text, images, audio, video, and structured data—within a single model or integrated pipeline. Multimodal AI enables richer interactions and broader enterprise use cases.
— N —
Natural Language Processing (NLP) - A subset of AI that enables computers to understand, interpret, and generate human language. NLP underpins capabilities including conversational AI, sentiment analysis, entity extraction, and document summarization.
Neural Turing Machines (NTMs) - A type of deep learning model that augments a neural network with an external memory bank, allowing it to perform algorithmic tasks. NTMs are largely a research-stage architecture with limited current relevance to enterprise AI and GRC applications.
Now Assist - AI-powered features and capabilities on the ServiceNow AI Platform that use generative AI and large language models to provide personalized, automated support and guidance. Now Assist capabilities include text generation, summarization, search, document creation, and conversational assistance across IT, HR, customer service, and GRC workflows.
— O —
One-Shot Learning - A machine learning approach in which a model is trained or prompted to learn from a single example, useful for classification or recognition tasks with very limited labeled data.
— P —
Policy Mapping - Uses AI to recommend policies to be mapped to regulatory alerts, reducing the time and effort of the manual task (Now Assist).
Predictive Intelligence - A ServiceNow feature that uses machine learning to predict future outcomes, classify records, and provide recommendations to improve process efficiency and decision-making.
Process Mining - A feature that uses event log data and machine learning to discover, monitor, and optimize real business processes, identifying bottlenecks and deviations from intended workflows.
Prompt Engineering - The practice of designing and structuring inputs (prompts) to AI language models to elicit accurate, relevant, and useful outputs. Effective prompt engineering includes techniques such as chain-of-thought prompting, few-shot examples, and role-based instructions, and is a key skill for enterprise AI deployment.
— R —
Recommendation Engine - A feature that uses machine learning to analyze user behavior, preferences, and contextual data to provide personalized recommendations.
Recurrent Neural Networks (RNNs) - A class of deep learning models designed to process sequential data such as text or time-series by maintaining a hidden state across time steps. RNNs have largely been superseded by Transformer-based models for most NLP tasks but remain relevant for certain sequence modeling applications.
Regression - A supervised machine learning technique used to predict continuous numerical values based on input features.
Regulatory Action Plan Generator - Analyzes regulatory alert context and impacted areas to generate AI-driven regulatory action plans based on historical alerts and prior implementations (agentic AI).
Regulatory Alert Analysis - Automatically analyzes regulatory alerts and augments each alert with AI-curated information from trusted web and regulatory sources for context (agentic AI).
Reinforcement Learning - A type of machine learning in which an agent learns to make sequential decisions by taking actions in an environment to maximize a cumulative reward signal. Reinforcement learning is used to train AI agents and improve model alignment with desired behaviors.
Responsible AI - An ethical framework and set of principles for developing and deploying AI systems in a manner that is safe, transparent, fair, accountable, and aligned with human values. Responsible AI encompasses bias mitigation, explainability, privacy protection, human oversight, and adherence to governance frameworks such as the NIST AI RMF and EU AI Act.
Retrieval-Augmented Generation (RAG) - An AI architecture that enhances the accuracy and reliability of generative AI models by connecting them to external knowledge bases at inference time. Rather than relying solely on training data, a RAG system retrieves relevant documents from an approved data source and provides them as context to the language model, grounding its responses in authoritative, up-to-date information. RAG is a foundational pattern for enterprise AI applications in GRC, knowledge management, and regulatory compliance.
Risk Assessment Summarization - Automatically summarizes risk assessments so teams can quickly grasp the nature, impact, and context of risks without manual deep dives (Now Assist).
Robotic Process Automation (RPA) - A technology that uses software robots to automate repetitive, rule-based digital tasks by mimicking human interactions with applications and data. When combined with AI, RPA enables intelligent automation of more complex, judgment-intensive processes.
Robotics - A field of engineering and AI that involves designing, building, and programming physical machines (robots) that can perceive their environment and perform tasks autonomously or semi-autonomously.
— S —
Sentiment Analysis - A natural language processing technique used to identify and quantify the emotional tone or opinion expressed in text data, commonly used in customer feedback, social media monitoring, and survey analysis.
ServiceNow AI Platform - A cloud-based platform that unifies AI capabilities, workflow automation, and data connectivity to enable business transformation. The platform combines Now Assist generative AI, agentic AI through AI Agent Fabric and AI Agent Orchestrator, enterprise governance through AI Control Tower, and data connectivity through Workflow Data Fabric—all governed by open interoperability standards including MCP and A2A.
Skill - A specific, reusable capability or tool that an AI agent can invoke to perform a defined task within a broader workflow or agentic system. In ServiceNow's Now Assist, skills are discrete generative AI capabilities (such as summarization or resolution-note generation) that can be composed by AI agents.
— T —
Text Analytics - The process of deriving structured insights and meaningful information from unstructured text data using NLP and machine learning techniques.
Topic Modeling - An unsupervised NLP technique used to discover latent themes or topics across a collection of documents without prior labeling.
Transfer Learning - A machine learning technique in which a model trained on one task is reused or adapted as the starting point for a different but related task. Transfer learning is foundational to the development of large language models and enables fine-tuning of foundation models for enterprise-specific applications.
Transformer - A deep learning architecture introduced in 2017 by Vaswani et al. in the paper “Attention Is All You Need” (NeurIPS 2017), which relies on attention mechanisms rather than sequential processing to model relationships within data. Transformers are the foundational architecture of virtually all modern large language models (LLMs) and multimodal AI systems, including those powering the ServiceNow AI Platform.
— V —
Vector Database - A specialized database designed to store, index, and retrieve high-dimensional vector embeddings with high efficiency. Vector databases are a critical infrastructure component for retrieval-augmented generation (RAG), semantic search, and similarity matching in enterprise AI applications.
— W —
Workflow - A sequence of automated processes and tasks that an AI system performs to achieve a specific goal or objective, often involving data ingestion, processing, analysis, and output generation. In agentic AI, workflows may be executed autonomously by AI agents across multiple systems.
— Z —
Zero-Shot Learning - A machine learning approach in which a model generalizes to new tasks or categories it has never explicitly seen during training, relying on semantic descriptions or embeddings to recognize novel concepts.
Sources: NIST, European Commission, Stanford HAI, Linux Foundation, Anthropic, ServiceNow
