rwpeerenboom
ServiceNow Employee


 

Build Agent has a new skill: in-app agents. Point it at any custom application in ServiceNow Studio/IDE and it reads your app's metadata, data, workflows, and business logic — then helps you build AI agents and skills conversationally, right where you're already working. No framework expertise. No blank page.

Which workflows are actually good candidates for an agent or skill? Where does AI add value vs. where does it add noise? What architecture makes sense for your domain? If you've ever stared at a blank page trying to answer those questions before writing a single prompt — that's exactly what In-App Agents was designed to fix.

 

You can start your agentic journey for any custom application with a simple four-prompt sequence (see image below).

inappagentprocess.png

 

►  WHAT'S NEW — IN-APP AGENTS

How Build Agent turns your existing custom app into an AI-powered experience

In-App Agents is a new Build Agent skill in ServiceNow Studio/IDE. Developers use natural language to create AI agents and skills directly inside custom scoped applications — no agentic framework expertise required, no separate AI delivery pipeline.

Scoped to your app's data model

Build Agent reads your tables, fields, roles, business rules, and flows as context. Agents aren't generic — they know your domain.

Ships with the app, not after it

Agents and skills are packaged as update set entries within the application scope. Same pipeline, same governance, same deployment path you already use.

Build Agent recommends the use cases

Point Build Agent at an existing app and ask it to identify agent and skill opportunities. You don't come in with a list — it generates recommendations from what's already in your app.


Licensing required:

Now Assist for Creator, Now Assist for App Engine, or App Engine Prime

 

The four-phase loop

To build in-app agents and skills, you can follow a four-phase lifecycle inside ServiceNow Studio or IDE. Each phase is driven by a conversational prompt to Build Agent — no context-switching, no separate tools.

Phase 1 — Assess

Point Build Agent at your existing app and ask it to evaluate readiness for AI agents and skills. It scores your data model — table structure, field quality, business rule coverage — and flags any gaps that would limit agent or skill performance before you start building.

"Review my application and flag any gaps in process definition or data quality that would limit agent performance."
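Build Agent runs this assessment conversationally, but the kind of signals it weighs can be illustrated with a simple heuristic. The dimensions, weights, and thresholds below are hypothetical — a sketch of what "scoring the data model" could look like, not the product's actual algorithm.

```python
# Hypothetical readiness heuristic — illustrative only, not Build Agent's
# actual scoring. Weighs table structure, field quality, and rule coverage.
from dataclasses import dataclass

@dataclass
class AppSnapshot:
    tables: int              # custom tables in the application scope
    described_fields: float  # share of fields with labels/descriptions (0-1)
    rule_coverage: float     # share of tables with business rules or flows (0-1)

def readiness(app: AppSnapshot) -> tuple[int, list[str]]:
    """Return a 0-100 readiness score plus gaps that would limit agent performance."""
    gaps = []
    score = min(app.tables, 5) * 10             # enough structure to reason over
    score += round(app.described_fields * 25)   # field quality
    score += round(app.rule_coverage * 25)      # process definition
    if app.described_fields < 0.5:
        gaps.append("many fields lack labels/descriptions")
    if app.rule_coverage < 0.5:
        gaps.append("few tables have business rules or flows")
    return score, gaps

score, gaps = readiness(AppSnapshot(tables=4, described_fields=0.8, rule_coverage=0.4))
print(score, gaps)  # 70 ['few tables have business rules or flows']
```

The point of the sketch: the assessment is about gaps you can fix before building — thin field descriptions and missing automation degrade agent output long before any prompt is written.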

Phase 2 — Recommend

This is where the blank-page problem disappears. Ask Build Agent to analyze your tables, roles, and flows and surface where agents and skills can reduce manual work. It identifies candidates from what's already in your app — triage, categorization, routing, risk flagging, summarization. You don't have to come in with a list.

"Analyze my application's tables, roles, and business rules and identify where agents and skills can reduce manual work."

Phase 3 — Design

From the identified use cases, Build Agent recommends the quickest win — the agent or skill that delivers the most immediate value. It proposes an architecture (standalone skill, single agent, multi-tool, or orchestrated) and outlines what it intends to create before writing a single artifact.

"Which AI agent or skill capability would be the fastest to implement and deliver the most immediate value for my [fulfiller role] users?"
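The architecture choice among the four options the article names can be pictured as a small decision rule. The inputs and thresholds here are illustrative assumptions, not product logic:

```python
# Hypothetical decision sketch over the four architectures named above.
def pick_architecture(steps: int, tool_count: int, multi_agent: bool) -> str:
    """Map rough workflow shape to one of the four architecture options."""
    if multi_agent:
        return "orchestrated"      # multiple agents coordinating a workflow
    if steps <= 1 and tool_count == 0:
        return "standalone skill"  # a single prompt-driven transformation
    if tool_count <= 1:
        return "single agent"      # one agent, at most one tool
    return "multi-tool"            # one agent invoking several tools

print(pick_architecture(steps=1, tool_count=0, multi_agent=False))  # standalone skill
```

In practice Build Agent proposes this for you — the value of Phase 3 is that it outlines the intended architecture before writing a single artifact, so you can correct course cheaply.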

Phase 4 — Build

Describe the agent or skill you want. Build Agent generates the agents, skills, and tools — all scoped to your app's data model, roles, and ACLs. Security configuration happens at creation time. The generated artifacts go into update sets just like every other app artifact, and every agent is auto-registered in AI Control Tower for centralized governance.

"Based on the analysis performed on my application, proceed with building the quick win suggested AI agents and skills."

 

 

►  WHAT BUILD AGENT CAN CREATE

Supported tool types at launch — for both skills and agents:

Skills

WebSearch
Script (Inline & Explicit)
Now Assist Skill
Flow Action*
SubFlow*

Agents

SubFlow
Flow Action
WebSearch
Now Assist Skill
Record Operations
Script
Catalog Item


* Flow Action and SubFlow tools can only re-use existing ones already within the application's scope. Additional tool types will be added incrementally with each platform release.

 

The apps that work best

In-App Agents works with any custom scoped application in ServiceNow Studio or IDE. But the results are meaningfully better when the app already has real tables, real data, established roles, and defined automation. Build Agent reasons from what's there — an empty app produces generic agents, a mature app produces domain-specific ones.

Good starting points: vendor onboarding, facilities work orders, claims processing, equipment checkout, employee relocation, compliance audits. Anything where a fulfiller currently reads a record, applies judgment, and routes it manually.

That manual judgment loop is exactly what in-app agents and skills are built to take on.
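The "read a record, apply judgment, route it" loop can be made concrete with a toy triage rule. The field names, values, and queues below are hypothetical — the sketch just shows the shape of work an agent absorbs:

```python
# Illustrative triage of the manual judgment loop: read a record,
# apply judgment, route it. Fields and queues are hypothetical.
def triage(record: dict) -> str:
    desc = record.get("description", "").lower()
    if record.get("priority") == "critical" or "outage" in desc:
        return "major_incident_queue"   # risk flagging
    if record.get("category") == "facilities":
        return "facilities_queue"       # categorization-based routing
    return "general_queue"              # default routing

print(triage({"description": "Network outage in building 3"}))  # major_incident_queue
```

Each branch is a judgment a fulfiller makes by reading the record today — which is why apps with real data and established roles produce better agents: the signals the judgment depends on already exist.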

 

What didn't change

None of this adds a new governance layer. Agents and skills ship via standard update sets. They're governed through AI Control Tower. Assist consumption is tracked in Now Assist Admin and AI Agent Analytics the same way any other agent would be.

The build path changes. The change management, refinement, and governance process doesn't.

 

►  AFTER BUILD AGENT CREATES YOUR AGENT

Follow this path to test, activate, and deploy:

Test skills in NASK — Now Assist Skill Kit. Validate prompt behavior against sample records and review output quality before publishing.
Test the agent in AI Agent Studio — Validate end-to-end workflow and confirm tool invocations work correctly.
Activate triggers in AI Agent Studio — Triggers are not auto-activated. Enable them separately after generation.
Deploy via update set — Publish the custom app, with its agents and skills, as a standard update set deployment or through your existing ADLC.

One known sequencing detail: If a skill needs to call another skill as a tool, the referenced skill must be fully published in NASK before it can be referenced. Build Skill B first, publish it, then reference it inside Skill A.
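This publish-before-reference rule generalizes: if several skills call each other as tools, publish them in dependency order, leaf skills first. A minimal sketch using Python's standard-library topological sort (the skill names and dependency map are hypothetical):

```python
# Publish order for skills that call other skills as tools: a referenced
# skill must be fully published in NASK before the skill that uses it.
from graphlib import TopologicalSorter

# Hypothetical dependency map: Skill A calls Skill B as a tool.
depends_on = {
    "Skill A": {"Skill B"},
    "Skill B": set(),
}

publish_order = list(TopologicalSorter(depends_on).static_order())
print(publish_order)  # ['Skill B', 'Skill A'] — dependencies first
```

For a two-skill chain this is trivial, but the same ordering discipline keeps larger skill libraries deployable without publish failures.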

 

 


What's the biggest blocker you've hit when trying to add AI to a custom scoped application — where does the friction usually show up for you?

Drop it in the comments. Whether it's knowing where to start, framework complexity, governance concerns, or getting the right data in place — would love to hear what the community is running into.
