danielfranca
ServiceNow Employee

 

Now Assist for CSM: AI Wrap Up

The Wrap Up Completion skill in Now Assist for CSM automates two tasks that every agent performs after closing a customer interaction: writing wrap-up notes and selecting a wrap-up code. The skill accepts a conversation transcript as input, generates a structured four-sentence summary written from the agent's first-person perspective, and classifies the interaction outcome against a configured list of wrap-up codes. It runs on the Now Assist Skill Kit framework with four LLM provider definitions (Now LLM Generic, Amazon Bedrock/Claude, Azure OpenAI, and Google Gemini). All eight configurations (two per provider) ship published and active.

This article is for CSM implementors responsible for activating, configuring, and extending Now Assist skills. It covers the skill's architecture, the prompt design decisions behind each function, postprocessor differences across LLM providers, and customization guidance for industry-specific deployments. The extraction data referenced throughout comes from the capability, definition, configuration, and skill attribute records on a ServiceNow instance.

 

Key Capabilities

  • Wrap-up notes generation: Produces a four-sentence summary of each interaction following a fixed structure: primary issue, steps taken, resolution, and additional context. Written from the agent's first-person perspective ("I explained...", "I verified...").

  • Wrap-up code recommendation: Classifies the interaction outcome against a JSON array of wrap-up codes using an eight-step classification framework. Returns a single best-matching code.

  • Multi-provider support: Four LLM provider definitions (Now LLM Generic, Amazon Bedrock/Claude, Azure OpenAI, Google Gemini) with identical prompts but provider-specific postprocessors for response parsing.

  • PII exclusion by design: The notes prompt explicitly excludes customer names, personal identifiers, filler words, pleasantries, greetings, closings, and repetitive acknowledgments from all generated output.

  • Structured JSON output: Both functions enforce strict JSON-only output ({"answer": "<value>"}), enabling deterministic postprocessor parsing with no freeform text handling required.

 

Skill Architecture

The skill is registered as a single capability with two distinct functions, each implemented as a separate configuration per LLM provider. The three layers work as follows:

  • Capability layer (1): Wrap Up Completion. Defines the skill's input/output contract: segmentConversation (string) and wrapUpCodes (json_array) as inputs; response, status, provider, error, and errorCode as outputs.

  • Definition layer (4): One per LLM provider. Each definition has a preprocessor (passthrough JSON parse, identical across all four) and a postprocessor (provider-specific response parsing; see Postprocessor Behavior below).

  • Configuration layer (8): Two per provider: Generate Wrap Up Notes and Recommend Wrap Up Code. Each holds a full prompt template. All eight are in state=published.

The skill has zero tool mappings. It is a pure LLM inference skill with no external tool calls, subflows, or retriever integrations.
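As a concrete illustration of this contract, hypothetical input and output payloads might look like the following. Field names follow the capability attributes listed above; all values are invented for illustration.

```javascript
// Hypothetical skill input: the conversation transcript plus the
// configured wrap-up code list (used only by Recommend Wrap Up Code).
const skillInput = {
  segmentConversation: "Agent: Hello...\nCustomer: My card was charged twice...",
  wrapUpCodes: [
    { code: "Refund Issued", description: "Agent processed a refund for the customer" },
    { code: "Escalation", description: "Agent transferred the issue to another team or department" }
  ]
};

// Hypothetical skill output, matching the capability's output attributes.
const skillOutput = {
  response: '{"answer": "Refund Issued"}', // strict JSON envelope from the LLM
  status: "success",
  provider: "Now LLM Generic",
  error: null,
  errorCode: null
};
```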

 

Implementation

1. Activation

Wrap Up Completion is an OOTB skill that ships with Now Assist for CSM. It is invoked programmatically when a CSM interaction segment ends. The platform calls the skill through the Skill Kit framework, passing the conversation transcript and (for code recommendation) the configured wrap-up code list.

This is a skill, not an agentic workflow, so it does not have record-based triggers like the Triage Cases workflow. Invocation is handled by the CSM Configurable Workspace wrap-up experience, which auto-populates wrap-up fields with AI-generated notes and codes for agent review before submission.

To activate:

  • Navigate to Now Assist Admin Console

  • Locate the Wrap Up Completion skill under CSM features

  • Activate the skill and confirm the LLM provider is configured at the instance level


2. Configure the wrap-up code list

The Recommend Wrap Up Code function requires a wrapUpCodes input: a JSON array where each object contains a "code" and an optional "description" field. The quality of this input directly affects classification accuracy.

  • Each code should have a description that reflects the situation it represents, not just a label. For example: {"code": "Escalation", "description": "Agent transferred the issue to another team or department for resolution"}

  • Avoid codes with overlapping descriptions. The prompt's Step 4 (distinguish similar outcomes) helps, but ambiguous code lists still produce inconsistent results.

  • The classifier always picks one code, even if none are a perfect match. If your code list is incomplete, the skill will force-fit the closest option rather than returning "none."
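A quick pre-deployment lint of the code list can catch the problems above before they show up as misclassifications. The helper below is an illustrative sketch, not part of the OOTB skill; the thresholds are arbitrary.

```javascript
// Illustrative check for a wrapUpCodes array: flags entries with no code,
// no description, or a label-only description too short to describe a situation.
function lintWrapUpCodes(wrapUpCodes) {
  const issues = [];
  for (const entry of wrapUpCodes) {
    if (!entry.code) {
      issues.push("Entry missing required 'code' field");
    } else if (!entry.description) {
      issues.push(`'${entry.code}' has no description; classifier must guess from the label`);
    } else if (entry.description.trim().split(/\s+/).length < 4) {
      issues.push(`'${entry.code}' description is too short to describe a situation`);
    }
  }
  return issues;
}
```

Running this against your configured list during implementation surfaces label-only codes, which are the most common cause of poor classification.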


3. Validate postprocessor behavior for your provider

The four LLM providers use different postprocessor logic. This matters when troubleshooting parse failures:

  • Now LLM Generic: Double parse with backtick stripping. Extracts model_output from the response, strips markdown ``` fencing, then parses JSON and reads the answer field. This is the only provider with backtick handling.

  • Azure OpenAI: Single parse. Calls JSON.parse(outputs.result.response) directly and extracts the answer field. No backtick stripping. No model_output wrapper.

  • Amazon Bedrock (Claude): Single parse, identical pattern to Azure OpenAI. No backtick stripping.

  • Google Gemini: Single parse, identical pattern to Azure OpenAI and Bedrock. No backtick stripping.

Implementation risk: Azure OpenAI, Bedrock, and Gemini postprocessors do not strip markdown backticks. If any of these providers' models return a response wrapped in ```json ... ``` fencing, the JSON.parse() call will fail and the skill will return an error. Monitor error rates on these providers and consider adding backtick stripping to their postprocessors if failures occur.
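A fencing-tolerant parse for those postprocessors could be sketched as below. This is a minimal illustration, not the OOTB postprocessor code; the `outputs.result.response` plumbing and error-attribute handling are omitted, and the backtick stripping mirrors the behavior described for the Now LLM Generic postprocessor.

```javascript
// Defensive parse: strip optional ```json ... ``` fencing, then parse the
// strict {"answer": "<value>"} envelope and return the answer field.
function parseAnswer(rawResponse) {
  const unfenced = rawResponse
    .replace(/^\s*```(?:json)?\s*/i, "") // leading fence, with or without "json"
    .replace(/\s*```\s*$/, "");          // trailing fence
  return JSON.parse(unfenced).answer;
}
```

Unfenced responses pass through the two `replace` calls unchanged, so adding this logic is backward-compatible with models that already return bare JSON.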


4. Test with representative transcripts

Before deploying to production, test both functions against real interaction transcripts from your environment:

  • For Generate Wrap Up Notes: verify the four-sentence structure is consistently maintained, the agent perspective is correct, and no PII leaks into the output.

  • For Recommend Wrap Up Code: test against your actual code list with transcripts that cover edge cases (customer expressing interest but not committing, agent using standard closing phrases, customer calling about a completed past event).

  • Both functions return JSON ({"answer": "..."}). If the response field is empty or contains an error, check the postprocessor logs for the active provider.
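The structural checks above can be automated for a test batch. The sketch below is illustrative (not OOTB tooling): it validates the JSON envelope and the four-sentence structure of the notes output; the sentence-splitting regex is a simplification.

```javascript
// Shape check for a Generate Wrap Up Notes response during testing.
function checkNotesResponse(response) {
  let parsed;
  try {
    parsed = JSON.parse(response); // must be valid JSON
  } catch (e) {
    return "invalid JSON";
  }
  if (typeof parsed.answer !== "string") return "missing 'answer' field";
  const sentences = parsed.answer.match(/[^.!?]+[.!?]/g) || [];
  if (sentences.length !== 4) return `expected 4 sentences, got ${sentences.length}`;
  return "ok";
}
```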

 

The Two Functions

1. Generate Wrap Up Notes

This function takes the conversation transcript (segmentConversation) and produces a four-sentence summary. Each sentence maps to a specific structural role:

  • 1. Primary Issue: What was the reason for the inquiry?

  • 2. Steps Taken: What did the agent do to address the situation?

  • 3. Resolution: How was the issue resolved or what outcome was reached?

  • 4. Additional Context: Secondary issues, important details, or follow-up commitments.

Key prompt design points:

  • Perspective is locked to first-person agent voice. The prompt specifies "I explained...", "I verified...", "I processed..." to match the standard format agents use when writing notes manually.

  • The prompt explicitly instructs the LLM to include concrete details: amounts, timeframes, account actions, and options presented. This prevents generic summaries that provide no value to the next agent reviewing the case.

  • Three full examples are provided inline in the prompt (financial services, insurance, brokerage). Each demonstrates the four-sentence structure applied to a different domain, anchoring the LLM's output format through few-shot prompting.

  • The output format is strictly enforced: {"answer": "<summary>"} with no explanations, reasoning, or text outside the JSON object.


2. Recommend Wrap Up Code

This function takes both the conversation transcript (segmentConversation) and a JSON array of available wrap-up codes (wrapUpCodes), then classifies the interaction outcome. It returns the single best-matching code.

The prompt uses an eight-step classification framework:

  • Step 1. Identify the Primary Outcome: Focus on concrete actions and commitments, not conversational tone.

  • Step 2. Distinguish Interest from Commitment: "That sounds good" is exploration. "Yes, please proceed" is commitment. The prompt provides explicit examples of each.

  • Step 3. Recognize Temporal Context: Past tense ("The technician came and fixed it") indicates a completed event. Present/future tense indicates active/pending items.

  • Step 4. Distinguish Similar Outcomes: Problem vs. inquiry, ownership transfer, timing of resolution, explicit scheduling vs. generic availability.

  • Step 5. Ignore Standard Closing Phrases: "Feel free to reach out" and "don't hesitate to call" are not outcomes. The prompt lists these explicitly.

  • Step 6. Prioritize Primary Action: Immediate resolution beats monitoring statements. Explicit scheduling beats generic courtesy. Actual transactions beat discussing options.

  • Step 7. Match Semantic Meaning: Match codes on the meaning of the situation described, not on keyword overlap.

  • Step 8. Validate: The LLM confirms the selected code exists in the input list, that dialogue supports the selection, and that the focus is on outcomes rather than tone.

The output format is {"answer": "<selected_code>"}, parsed identically to the notes function.

 

Customization Examples

Manufacturing: Equipment-Centric Wrap-Up Codes

Manufacturing CSM teams track interactions by equipment and failure mode, not just by outcome category. Create wrap-up codes that encode the asset context: "Field Replacement Scheduled", "Warranty Claim Initiated", "Preventive Maintenance Inquiry", "Spare Parts Order Placed". The code descriptions should reference specific patterns the classifier can match against, such as {"code": "Field Replacement Scheduled", "description": "Customer reported equipment downtime and agent scheduled a technician for on-site replacement"}.
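An equipment-centric code list in the shape the classifier expects might look like this. The codes and descriptions are illustrative examples, not OOTB values.

```javascript
// Hypothetical manufacturing wrapUpCodes input, following the
// {"code", "description"} shape described earlier.
const manufacturingWrapUpCodes = [
  { code: "Field Replacement Scheduled",
    description: "Customer reported equipment downtime and agent scheduled a technician for on-site replacement" },
  { code: "Warranty Claim Initiated",
    description: "Agent opened a warranty claim for a failed component within coverage" },
  { code: "Preventive Maintenance Inquiry",
    description: "Customer asked about maintenance schedules or service intervals; no fault reported" },
  { code: "Spare Parts Order Placed",
    description: "Agent placed an order for replacement parts on the customer's behalf" }
];
```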

Financial Services: Compliance-Aware Notes

Financial services interactions often involve regulated disclosures and complaint handling. Using the Now Assist Skill Kit, implementors can clone the OOTB skill and extend the notes prompt to flag whether a complaint was registered, whether a disclosure was provided, and whether the customer mentioned a regulatory body. The four-sentence structure can be adapted: (1) issue, (2) actions taken, (3) compliance-relevant details, (4) resolution and follow-up. This gives compliance teams a structured record without manual tagging.

Healthcare: Patient Sensitivity and PHI Controls

The OOTB prompt already excludes customer names and personal identifiers, which aligns with PHI minimization. For healthcare CSM deployments, a cloned skill can extend the exclusion list to cover clinical terms, diagnosis references, and provider names that may surface in patient support transcripts. Wrap-up codes can be structured around care coordination outcomes: "Referral Processed", "Benefits Inquiry Resolved", "Appointment Rescheduled", "Prior Authorization Submitted".

 

Key Best Practices

  • Write descriptive wrap-up code descriptions, not just labels. The Recommend Wrap Up Code classifier matches on semantic meaning. A code with no description forces the LLM to guess from the label alone.

  • Implementors can modify OOTB skills through the Now Assist Skill Kit (clone and edit prompts, change LLM providers). ServiceNow recommends trying Now Assist Admin console configuration options first. If that is insufficient, use NASK to clone and customize.

  • Monitor postprocessor error rates by provider. If you see failures on Azure OpenAI, Bedrock, or Gemini, check whether the model is returning backtick-wrapped JSON. Only the Now LLM Generic postprocessor strips backticks.

  • Test the code classifier against edge cases: customers expressing interest without committing, agents using courtesy closings, customers calling about past completed events. These are the three most common misclassification patterns addressed in the prompt.

  • Review generated notes for PII leakage during initial deployment. The prompt excludes names and identifiers, but transcript content varies. Spot-check output across a representative sample before full rollout.

  • If prompt changes are needed, update all four provider configurations. The prompts are duplicated across providers. A change to one that is not replicated to the other three will produce inconsistent behavior when the active provider changes.

 

Frequently Asked Questions

1. Can I modify the OOTB Wrap Up Completion skill?
Yes. The Now Assist Skill Kit allows you to clone and edit OOTB skills, including modifying prompts and changing LLM providers. ServiceNow recommends trying Now Assist Admin console configuration first, then using NASK if further customization is needed.

2. Why does the skill have eight configurations instead of two?
There are two functions (Generate Wrap Up Notes and Recommend Wrap Up Code), each with a dedicated configuration per LLM provider (Now LLM Generic, Amazon Bedrock, Azure OpenAI, Google Gemini). 2 functions x 4 providers = 8 configurations. All eight are published. The active one depends on which provider is configured at the instance level.

3. What happens if the LLM returns invalid JSON?
The postprocessor calls JSON.parse() on the response. If the response is not valid JSON (for example, if it includes markdown backtick fencing or explanatory text), the parse fails and the skill returns an error through the error and errorCode output attributes. Only the Now LLM Generic postprocessor strips backticks before parsing.

4. The code classifier keeps picking the wrong code. What should I check?
First, check your wrap-up code descriptions. Codes without descriptions or with vague descriptions produce poor classification. Second, check for overlapping codes that describe similar outcomes. Third, review the transcript to see if the customer expressed interest without committing, or if the agent used courtesy closings that the classifier may have misread as outcomes.

5. Does this skill work across all interaction channels (voice, chat, messaging)?
The skill accepts any conversation transcript as a string input (segmentConversation). It is channel-agnostic at the skill level. The invocation mechanism (what passes the transcript to the skill) depends on your workspace configuration and which CSM interaction types are enabled.

6. I updated the prompt in one configuration but behavior did not change. Why?
The prompts are duplicated across all four provider configurations. If you modified the Azure OpenAI configuration but your instance is using Now LLM Generic, your changes will not take effect. Check which provider is active, then update the corresponding configuration. For consistency, replicate prompt changes across all four providers.

 
