Introduction
Hello! We are a group of dedicated AI Strategists and Architects committed to advancing the implementation and adoption of AI solutions for our customers. Through countless advisory and hands-on engagements, we’ve gathered valuable insights and practical guidance that we’re excited to share with the broader ServiceNow community.
This article is especially intended for those building AI Agents and delves into technical topics relevant to development. If you're ready to explore the full AI Agent lifecycle, you're in the right place.
We’ll cover the whole AI Agent creation lifecycle:
1. Best practices to consider before creating an agent
2. Prompting
3. Orchestration
4. Tool selection: how to choose the right ones
5. Memory utilization
6. Triggering: considerations for setting up your AI Agent and Agentic Workflow triggers
7. Required skills for successful AI agent implementation
8. Common pitfalls to avoid and where to debug
9. Other useful links
1. Before You Begin: Foundational Best Practices
Start with a process chart. Map out the entire use case on paper before jumping into implementation. This article offers relevant guidance on how to identify the correct use case: https://www.servicenow.com/community/now-assist-articles/finding-the-right-jobs-for-ai-agents-a-stra.... Mapping the process first helps clarify the flow, identify dependencies, and align stakeholders early.
Understand the distinction between an agentic workflow and an agent:
- An agentic workflow defines the end-to-end business scenario you're solving.
- An agent is a modular component that performs a specific task within that scenario.
For more information on how to design a use case, please have a look at an earlier article in this series: Rationalizing when to use a Workflow, GenAI Skills & AI Agents
Technical considerations:
- Ensure your instance is running on Xanadu Patch 7 or later, or Yokohama Patch 1 or later.
- Install the necessary plugins and any relevant AI Agent Collections.
- Confirm that plugin versions match your instance and patch level.
Regularly update your environment to take advantage of new agentic capabilities and enhancements.
Best Practices for Agentic Workflow Creation
- Limit complexity: Keep the number of agents per use case reasonably low to ensure orchestration remains manageable. While there’s no strict rule, using a smaller set of agents (e.g., around 10 or fewer) often leads to better clarity and coordination.
- Understand context limitations: AI Agents use OpenAI's GPT-4o for the orchestration layer, running on ServiceNow-managed Azure servers. AI Agent tools may use Now LLM. The context window for the AI agent is 128K tokens (exceeding this may lead to unpredictable behavior). As of Yokohama Patch 3, the orchestrator LLM can also be configured to use Now LLM, and as of Yokohama Patch 6, new LLMs can be selected as part of the agentic workflow.
- Leverage AI-powered authoring:
- Use Now Assist to generate agent roles and instructions from your agentic workflow description.
- Use the tool recommendation feature to help select the most relevant tools for your agent.
- Keep a human-in-the-loop: For sensitive, high-impact, or ambiguous decisions, consider integrating human oversight into the agentic workflow. This adds an additional layer of safety, accountability, and quality control.
2. Prompting
If you're new to LLM prompting, we recommend exploring https://www.servicenow.com/docs/bundle/xanadu-intelligent-experiences/page/administer/now-assist-pla..., which offers valuable insights into crafting effective, LLM-tailored instructions.
Prompting When Creating an AI Agent
A helpful way to think about prompting for AI Agents is to imagine onboarding a highly capable new team member—someone intelligent but unfamiliar with your organization and its processes. The “AI Agent role” defines the agent’s identity: How should it behave? Should it emulate your human support agents, or respond from a neutral, system-like perspective?
This role definition is crucial. It determines the agent’s tone, behavior, and domain knowledge. Without a clearly defined role, the agent is more likely to produce hallucinations or deliver responses that don’t align with your expectations.
The “Instructions” field is equally important. It guides the agent step by step through its tasks. Be explicit about which tools the agent should use and under what conditions. This ensures consistent, reliable behavior across interactions.
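To make this concrete, here is a minimal, hypothetical example of a role and instructions for an IT support agent. The wording, tool names, and conditions are illustrative assumptions, not an out-of-the-box configuration:

```
AI Agent role:
You are a friendly IT service desk agent. You help employees resolve laptop and
access issues. Respond concisely and professionally, and never state information
you cannot verify through the tools available to you.

Instructions:
1. Greet the user and ask for a short description of the issue if none was provided.
2. Use the "Look up incident" tool to check whether the user already has an open incident.
3. If a relevant knowledge article is found, summarize the resolution steps for the user.
4. Otherwise, confirm the short description with the user and then use the
   "Create incident" tool to log a new incident.
```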
Prompting When Creating an Agentic Workflow
Prompting for agentic workflows requires a different mindset. Think of it as onboarding a new manager: someone who needs to understand how to delegate tasks effectively to their team. In the “Instructions” section of your use case, clearly specify when the orchestrator should trigger a particular agent—just as a manager would assign tasks to the right team member at the right time.
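As a sketch, the "Instructions" for an agentic workflow might read like the hypothetical example below; the agent names are placeholders for the agents you define in your use case:

```
Use case instructions (orchestrator):
1. When the user reports a hardware issue, trigger the "Triage Agent" to gather details
   and classify the request.
2. Once the request is classified, trigger the "Fulfillment Agent" to create the incident
   or catalog request.
3. If the Fulfillment Agent indicates that an approval is needed, trigger the
   "Approval Agent" and wait for its outcome before replying to the user.
```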
3. Orchestration
ServiceNow offers three orchestration modes, configured under sn_aia_agent → strategy:

| Strategy | Behavior | Ideal Use Case |
|---|---|---|
| Base Planner | One-time static plan, no dynamic replanning | Simple, linear tasks with no branching or retries |
| ReActive Planner | Iterative execution, handles conversations & agent overwork | Chat-driven use cases with unknown flow |
| Batch Planner | Groups tasks efficiently, supports mid-execution replanning | Complex, parallel subtasks with dependencies |
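If you want to audit which strategy your existing agents use, a background script along the lines of the sketch below can help. It assumes the value is stored in a field named strategy on sn_aia_agent, as referenced above; verify the exact field name on your instance and patch level:

```javascript
// Sketch: list AI agents and their configured orchestration strategy.
// Assumes a field named 'strategy' on sn_aia_agent (verify on your instance).
var agent = new GlideRecord('sn_aia_agent');
agent.query();
while (agent.next()) {
    gs.info(agent.getDisplayValue() + ' -> strategy: ' + agent.getValue('strategy'));
}
```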
4. Tool selection:
Please see these articles, which explain AI Agent tools in detail:
AI Agent Tools: Getting the most out of your agentic workflows
How to set up the Web Search using Google Gemini for AI Agents and Virtual Agent
Tool strategy: Ensure each agent uses at least one tool, but limit the number of tools to avoid bloating the context window with excessive results.
When selecting a tool, make sure to follow ServiceNow best practices and, where possible, use Flow Designer instead of scripting.
When configuring a tool, take the following design considerations into account:
- Execution mode: should the AI Agent execute the tool autonomously, or ask the human who triggered the use case for permission first?
- Will the tool output be displayed to the person executing the use case, or not?
- Select the output transformation strategy (concise, paraphrase, or verbose) based on your organizational needs; this instructs the LLM how to rephrase the tool output.
5. Memory utilization:
An essential aspect of designing effective AI Agent use cases in ServiceNow is understanding and correctly implementing memory utilization. Memory determines how agents retain, access, and share information—crucial for delivering contextual, intelligent, and human-like interactions.
Currently, ServiceNow supports two memory types in AI Agent Studio:
- Short-Term Memory (STM) – enabled out of the box
- Long-Term Memory (LTM) – requires explicit configuration
Short-Term Memory (STM)
Definition & Purpose: Short-term memory provides temporary storage for information during an active conversation. It functions like a working memory, enabling agents to maintain context and share data within the boundaries of a single user interaction or use case execution.
Key Implementation Characteristics:
- Scope: Memory is limited to the current conversation. It does not persist across sessions.
- Duration: Active for the entire lifespan of a conversation, even if it spans multiple days.
Implementation Scenarios:
- Multi-turn conversation support: Agents can reference earlier user inputs without re-asking.
- Inter-agent data sharing: Multiple agents (e.g., handling different steps in a change process) can share context like planning parameters, eliminating redundant queries or data fetches.
Example: In a multi-agent change management flow, three AI agents can share the same short-term memory, so only one needs to retrieve planning data while the others reuse it for validation or approvals. To use STM, include explicit instructions for it in the agent prompt, such as: “Recall the user’s previous selection within this session. If available, use it to inform your next action.”
Long-Term Memory (LTM)
Definition & Purpose: Long-term memory allows AI agents to persist and recall information over time and across sessions. This enables personalization, contextual continuity, and a more intelligent user experience.
Key Implementation Characteristics:
- Scope: Memory is associated with the user, not the session.
- Duration: Can span from days to several years, depending on configuration and retention policies.
- Storage Mechanism: Data is written to the sn_aia_memory table, often in vector format to enable efficient semantic retrieval.
Implementation Scenarios:
- Personalization: Remembering user preferences like language or preferred systems/devices.
- Reduced repetition: Avoiding re-asking questions already answered in past interactions.
- Contextual search enhancement: Reusing past queries and interactions to refine AI-powered search results.
Example: After a conversation concludes, the complete session is sent to the LLM (Large Language Model). The LLM selects relevant information to store in LTM, ensuring that next time the user interacts with the agent, past context is readily available—without explicit reentry. Long-term memory can be passed into further conversations either via RAG or via a knowledge graph.
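During testing, it can be helpful to review what has actually been written to long-term memory. The background script below is a minimal sketch that reads recent records from the sn_aia_memory table mentioned above, using only universal platform fields; adapt the query to the fields relevant for your investigation:

```javascript
// Sketch: review the most recent long-term memory records during testing.
var mem = new GlideRecord('sn_aia_memory');
mem.orderByDesc('sys_created_on'); // universal audit field, present on every table
mem.setLimit(20);
mem.query();
while (mem.next()) {
    gs.info(mem.getValue('sys_created_on') + ' | ' + mem.getDisplayValue());
}
```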
Implementation Best Practices
When implementing STM or LTM in ServiceNow, define memory boundaries clearly for each use case already during use case design. Ask: does the agent need to remember this only for the current conversation (STM) or long term (LTM)?
Set appropriate memory retention policies for LTM to comply with data privacy and relevance.
6. Triggers for AI Agents and Agentic Workflows
A trigger definition is a step in the guided setup of AI Agents and Agentic Workflows.
Triggers allow the AI Agent or Agentic Workflow owner to define a required state, time, and conditions for the AI Agent / Agentic Workflow to be automatically initiated.
The ‘Why’:
The trigger mechanism is intended to automate Agentic Workflow initiation for use cases that suit this nature.
The ‘How’:
The trigger is a parallel mechanism to other initiation methods and setting it up is optional. Why is that important to remember?
One might assume that if the Agentic Workflow does not appear in the Now Assist Panel (NAP), a trigger is required. The answer is no.
This works both ways:
When a trigger is defined, do not expect the trigger condition to be evaluated when the Agentic Workflow is manually initiated via NAP.
The ‘When’:
All trigger definition parameters are mandatory: trigger type, associated table, conditions, and run-as user.
Sensitivity:
On the one hand, the trigger will only be initiated when all these parameters are met. On the other hand, this requires careful planning—if these parameters are too broad, other processes or table actions might unintentionally trigger the Agentic Workflow or AI Agent.
Evaluate these possibilities and aim for more granular conditions and a tightly defined set of parameters to better control the trigger.
Recursiveness:
When the trigger is based on table actions such as ‘Created and Updated’ or ‘Updated’, and the Agentic Workflow or AI Agent performs actions on the same table, a recursive situation might occur.
Example:
Assume an AI Agent is designed to replace the assigned ‘Business Owner’ of a record if the current ‘Business Owner’ is not active.
If we define the trigger type as ‘Updated’, when the inactive ‘Business Owner’ is replaced and the record is updated with an active user, the same AI Agent will be re-triggered.
We therefore need to prevent this by setting a condition such as: ‘Business Owner’ is not ‘Active’.
Always ensure that your AI Agent’s actions will not result in recursive re-triggering.
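Expressed as a trigger condition, such a guard could look like the hypothetical encoded query below; the field name business_owner is an assumption for illustration:

```
business_owner.active=false
```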
The ‘Who’:
Even when the trigger is fired indirectly by a user action (e.g., a user updates a record and the trigger condition is met), the AI Agent / Agentic Workflow still runs as the defined run-as user.
This AI Agent / Agentic Workflow run will create a chat, which can be viewed in the run-as user’s chat history in NAP:
Log in as the run-as user and click on the ‘All chats’ icon in NAP to view automatically triggered chats.
Note:
If the AI Agent is Supervised, the conversation will wait for input from the run-as user.
If the AI Agent is Autonomous, the chat will be listed but considered complete.
The ‘What’:
The trigger is just that—a trigger. It may not have its own ‘story’, but it can be part of a bigger picture.
You may want to treat the trigger as an alternative to an API call:
Cause the trigger conditions to be met as part of a broader orchestration.
Have a UI Action, Scheduled Job, or Business Rule update a record to trigger an Agent / Agents / Agentic Workflow as part of a phased sequence serving a larger workflow.
You can consider maintaining states and data in a custom table for use by the triggered AI Agents.
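As a sketch of this phased-sequence idea, a Scheduled Job (or Business Rule) could update a record so that it satisfies the trigger condition. The table and field names below (u_agentic_state, u_state) are hypothetical placeholders for a custom state table:

```javascript
// Sketch: a Scheduled Job script that flips a state value so an Agentic Workflow trigger fires.
// 'u_agentic_state' and 'u_state' are hypothetical names for a custom table and field.
var state = new GlideRecord('u_agentic_state');
state.addQuery('u_state', 'ready_for_agent');
state.query();
while (state.next()) {
    state.setValue('u_state', 'agent_requested'); // assumed to match the trigger condition
    state.update();
}
```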
Important:
With the ServiceNow platform, Agentic Workflows, and AI Agents, the triggering possibilities are vast.
Therefore, the first step in adopting this approach should be:
Governance and careful risk and cost planning.
7. Required skills for agentic implementation
Successful implementation requires both technical and conceptual skills:
- Flow Designer proficiency
- Prompt engineering
- Platform architecture and scripting
- Generative AI understanding
- AI Search and RAG familiarity
As AI capabilities evolve, additional competencies become crucial:
- Interpreting complex workflow data
- Crafting contextual and human-like interactions
Organisations best prepared for AI Agents typically have:
- Mature AI strategies
- Custom use cases beyond out-of-the-box features
- Complex workflows requiring flexibility
8. Common pitfalls and where to debug:
Agent Responses Not Visible to End Users
There are cases where the AI Agent's output is generated but not visible to the requester. This is often due to insufficient permissions. Ensure that the user has appropriate access to all relevant tables and fields involved in the agent's response.
Avoid Overly Complex Use Cases in Early Phases
When designing AI Agent use cases, start with focused, manageable scenarios. Overly complex implementations can hinder adoption and testing. Prioritise value-driven, iterative development for maximum impact and easier maintenance.
Links to useful tables for debugging:
Please find our official documentation on the list of all the useful tables and system properties - https://www.servicenow.com/docs/bundle/yokohama-intelligent-experiences/page/administer/now-assist-a...
These are our main “go-to” tables:
- sys_gen_ai_log_metadata – generative AI logs, visible with the admin role
- sn_aia_execution_plan – the execution plan showing which agents will be executed and in which order
- sn_aia_execution_task – AI Agent execution tasks
- sn_aia_message – the sequence of system messages
- sn_aia_tools_execution – tool execution logs
- sys_cs_message – conversation messages
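For a quick look at recent agent activity, a background script like the sketch below lists the latest execution tasks. It uses only universal platform fields; adjust the table and fields to whatever you are investigating:

```javascript
// Sketch: print the 10 most recent AI Agent execution tasks for debugging.
var task = new GlideRecord('sn_aia_execution_task');
task.orderByDesc('sys_created_on');
task.setLimit(10);
task.query();
while (task.next()) {
    gs.info(task.getValue('sys_created_on') + ' | ' + task.getDisplayValue());
}
```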
9. Other useful links & demo:
Create your own AI Agent! A walkthrough on creating an AI Agent using AI Agent Studio
If you have questions or thoughts, feel free to drop them in the comments—we’ll respond or update the article as needed. And if you found this helpful, please share your feedback or link to it on your preferred platform.
This is just the beginning of our series on AI —stay tuned for more!
For tailored guidance, reach out to your ServiceNow account team.
PS: Views are my own and do not represent my team, employer, partners, or customers.