Baris Izci
ServiceNow Employee

"First Master the Fundamentals — Larry Bird"

Many organizations treat AI implementation as a purely technical journey: buy the technology, configure the platform, flip the switch. But AI in a service management context relies heavily on the quality of your service catalog. It reads what is there and returns outputs based on it. If the content is incomplete, inconsistent, or structurally broken, the AI amplifies exactly that.


The service catalog does not exist in isolation. Behind every catalog item sits a service offering: a defined commitment from a service owner to a consumer. When offerings are clearly structured and catalog items faithfully represent them, users find what they need, requests route accurately, and service consumption feels coherent. That coherence is vital for customer satisfaction. When it breaks down, when catalog items drift from what is actually offered or when offerings are never properly modeled, users lose confidence. Customer satisfaction starts slipping, often without anyone noticing. A telling sign practitioners will recognize immediately: users start raising incidents just to request services, because finding what they need in the catalog has become harder than calling the helpdesk. At that point, no AI deployment will fix what is fundamentally a catalog problem.


How AI Actually Uses Your Service Catalog

Understanding why catalog health matters starts with understanding what AI actually does with it.

When a user types a request or starts a virtual agent conversation, the AI does not reason or interpret meaning the way a human would. It matches patterns. It scans titles, short descriptions, metadata, and keywords to find the closest fit to what the user expressed. If those fields are empty, vague, or inconsistent, the match fails. The user sees a poor result and loses confidence. This happens silently, at scale, on every interaction.
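To make the mechanic concrete, consider a deliberately simplified sketch of keyword matching. This is a toy illustration, not how Now Assist or AI Search works internally, and the fields and sample data are invented, but it shows why sparse fields fail: an item with empty searchable fields cannot score well no matter how relevant it is.

    // Toy illustration only: score each item by keyword overlap between the
    // user's text and the item's searchable fields. Not ServiceNow's actual
    // matching logic; runnable as a background script.
    function score(userText, item) {
        var haystack = [item.name, item.short_description, item.keywords]
            .join(' ')
            .toLowerCase();
        return userText.toLowerCase().split(/\s+/).filter(function (word) {
            return word && haystack.indexOf(word) !== -1;
        }).length;
    }

    var items = [
        { name: 'VPN Access', short_description: 'Request remote VPN access for employees', keywords: 'vpn remote network' },
        { name: 'VPN',        short_description: '', keywords: '' }  // sparse, poorly described item
    ];
    items.forEach(function (item) {
        gs.info(item.name + ' -> ' + score('I need remote vpn access', item));
    });
    // The well-described item scores 3 ('remote', 'vpn', 'access'); the
    // sparse one scores 1, even though both represent the same service.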

When a match is found, the AI relies on how the item is structured to deliver the experience. If the item is configured for conversational ordering, the virtual agent guides the user naturally through the request. If it is not, the user is handed off to a web form and the conversation ends. The AI had no say in that outcome. The catalog configuration made the decision.

Once a request is submitted, AI-assisted fulfillment depends on whether an automation path has been defined. Without one, the request lands in a human queue regardless of how well the AI performed. The end-to-end experience breaks at the last step.

This is why catalog health is not a supporting concern for AI enablement. It is the primary one. The AI is only ever as capable as the catalog it is built on.


What End Users Actually Experience from an Unhealthy Service Catalog

The above is what practitioners observe at a structural level. But there is another way to read catalog health, through the eyes of users trying to get a service delivered.

They search and find three items that sound identical, with no indication of which applies to them. They pick one, start a request, and land in a long form with dozens of fields that feel largely irrelevant. They submit and hear nothing back for days. The fulfillment path was never automated.

On a different day they try the virtual agent. It starts promisingly, then stalls. They rephrase. It stalls again. The item was never configured for conversational delivery. They give up and call the helpdesk, which is precisely the outcome the AI was supposed to prevent.

These are not edge cases. They are the lived experience of users navigating a catalog that has not been curated for them. Every one of these moments accumulates into low adoption, poor satisfaction scores, and the quiet conclusion that the AI investment did not deliver.

Every failed self-service interaction has a destination. Users who cannot find the right catalog item or complete a virtual agent conversation do not simply give up. They call the service desk or raise an incident. A well-structured catalog with accurate descriptions, clean metadata, and items configured for conversational ordering gives the virtual agent what it needs to resolve requests without human intervention. At scale, the difference between a catalog that enables this and one that does not is measured directly in service desk volume and cost per transaction. Catalog health is not only a content management concern, but also a deflection strategy.


Begin With Impact, Commit to the Long Run

Not everything requires a long remediation program. Some of the highest-impact improvements can be made quickly. Where you start matters enormously.

In almost every catalog, a small number of items account for most incoming requests. Most organizations also do not have the capacity to fix large numbers of catalog items at once. Improvements should be made in waves, and the first wave must always start with the most popular items. Fixing high-volume items will have more measurable impact on AI performance than fixing a long list of low-traffic items. Start where the demand is. Everything else follows.
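If you want a quick read on where that demand sits, the request volume is already in the platform. A minimal background-script sketch, assuming the standard sc_req_item and sc_cat_item tables; the 90-day window and top-20 cutoff are arbitrary choices, not a standard:

    // Rough sketch: count requested items per catalog item over the last
    // 90 days, then print the top 20 by volume. Adjust the window and
    // cutoff to your own demand patterns.
    var agg = new GlideAggregate('sc_req_item');
    agg.addQuery('sys_created_on', '>=', gs.daysAgoStart(90));
    agg.addAggregate('COUNT');
    agg.groupBy('cat_item');
    agg.query();

    var rows = [];
    while (agg.next()) {
        rows.push({
            item: agg.getDisplayValue('cat_item'),
            count: parseInt(agg.getAggregate('COUNT'), 10)
        });
    }
    rows.sort(function (a, b) { return b.count - a.count; });
    rows.slice(0, 20).forEach(function (r) {
        gs.info(r.item + ': ' + r.count + ' requests');
    });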

Within the top requested items, fix short descriptions, detailed descriptions, and metadata first. These give users the context to understand what a service covers and how it differs from similar items, and they give the AI what it needs to match user intent accurately. Neglecting detailed descriptions is one of the most common patterns observed in service catalogs, and one of the easiest to fix. Items with near-zero request history over a sustained period should be flagged for retirement; in practice, items rarely get retired without a named owner to make that call. Ensure every top requested item has one: an item with an owner gets reviewed, and an item without one never does.
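The retirement flagging can be scripted the same way. A sketch, assuming twelve months as the definition of "a sustained period" (the threshold is a judgment call):

    // Sketch: flag active catalog items with zero requests in the last
    // twelve months as retirement candidates. The per-item query pattern is
    // fine for a one-off audit; restrict the outer query on large catalogs.
    var item = new GlideRecord('sc_cat_item');
    item.addActiveQuery();
    item.query();
    while (item.next()) {
        var ritm = new GlideAggregate('sc_req_item');
        ritm.addQuery('cat_item', item.getUniqueValue());
        ritm.addQuery('sys_created_on', '>=', gs.monthsAgoStart(12));
        ritm.addAggregate('COUNT');
        ritm.query();
        if (ritm.next() && parseInt(ritm.getAggregate('COUNT'), 10) === 0) {
            gs.info('Retirement candidate: ' + item.getValue('name'));
        }
    }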

These steps do not solve everything. But they deliver visible results quickly and help build internal support for the structured governance work covered in the next section.


Catalog Governance: The Proof of a Mature Service Organization

Many organizations overlook catalog governance. Technical issues are fixable. But without clear ownership, the catalog will always drift back to the same state.

A catalog without governance deteriorates faster than you would expect. Before any item reaches consumers, it must meet defined quality criteria. Consistent publishing standards are what prevent the catalog from accumulating problems in the first place. The activities that keep it healthy are not complicated, but they must be deliberate and recurring:

  • Review items lacking owners, descriptions, or metadata
  • Check for and remove duplicate items (see the sketch below)
  • Retire items that are no longer relevant
  • Run quality checks to ensure descriptions are accurate and services still reflect what is offered
  • Review complex items regularly to simplify where possible
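Several of these checks translate directly into script. The duplicate check, for instance, is a single aggregate query. A sketch, assuming duplicates share the same item name; near-duplicates with different names still need a human eye:

    // Sketch: group active catalog items by name and flag any name that
    // appears more than once. Exact-name matching only; semantically
    // similar items with different names require manual review.
    var dup = new GlideAggregate('sc_cat_item');
    dup.addQuery('active', true);
    dup.addAggregate('COUNT');
    dup.groupBy('name');
    dup.addHaving('COUNT', '>', '1');
    dup.query();
    while (dup.next()) {
        gs.info('Possible duplicate: "' + dup.getValue('name') + '" (' +
            dup.getAggregate('COUNT') + ' items)');
    }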

None of this is complex work. But it is the work that keeps AI performing over time.

Catalog health is not a milestone you reach and move on from. It is a continuous discipline. The moment governance is neglected, the catalog starts deteriorating. As AI capabilities expand, the bar for what "ready for AI" means keeps rising. Governance does not become less important once AI is enabled. It becomes more important.


The Indicators That Tell You Where You Stand

A healthy catalog does not reveal itself through gut feeling. It reveals itself through data. If you want to know where your catalog truly stands, these are some indicators worth tracking regularly (a script sketch after the list shows how several of them can be pulled directly from the platform):

  • Items missing a distinct and clear short description or description
  • Catalog items without an assigned owner
  • Duplicate catalog items
  • Items with no metadata, or metadata that is generic, inconsistent, or outdated. Poorly governed metadata can actively degrade AI matching quality, making it as damaging as having no metadata at all
  • Technical readiness for conversational ordering
  • Items blocked from conversational ordering due to catalog client scripts, multi-row variable sets, or unsupported question types
  • Items with automation coverage gaps, meaning active catalog items with no flow or workflow attached
  • Outdated items based on last updated date
  • Items that have never received a single request
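Several of the content indicators above reduce to encoded queries against sc_cat_item. A background-script sketch that prints a count per indicator; the field names are typical but unverified assumptions, and ownership in particular is often tracked in a custom field, so treat these queries as a starting point:

    // Sketch: count active items failing a few indicators. Verify field
    // names against your instance. Note: an empty workflow field alone does
    // not prove an automation gap, since flows attach through other
    // mechanisms; cross-check against the NARE Dashboard's coverage view.
    var checks = {
        'Missing short description': 'active=true^short_descriptionISEMPTY',
        'Missing description':       'active=true^descriptionISEMPTY',
        'No owner assigned':         'active=true^ownerISEMPTY',
        'No legacy workflow':        'active=true^workflowISEMPTY',
        'Stale for 12+ months':      'active=true^sys_updated_on<javascript:gs.monthsAgoStart(12)'
    };
    for (var label in checks) {
        var ga = new GlideAggregate('sc_cat_item');
        ga.addEncodedQuery(checks[label]);
        ga.addAggregate('COUNT');
        ga.query();
        if (ga.next()) {
            gs.info(label + ': ' + ga.getAggregate('COUNT'));
        }
    }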

ServiceNow provides a head start on several of these. The Now Assist Readiness Evaluation (NARE) Dashboard is an out-of-the-box tool that surfaces your top requested catalog items, identifies which are already conversational, and flags specific blockers preventing conversational readiness, such as client scripts, multi-row variable sets, and unsupported question types. It also shows how many active items have flows or workflows attached, giving you a quick read on automation coverage. Note that flows represent the modern automation path that integrates natively with AI-driven fulfillment, while workflows are a legacy mechanism that does not support the same level of AI orchestration.

The indicators this dashboard does not address, including missing descriptions, ownership gaps, duplicate items, and metadata completeness, are best tracked by extending an existing Success Dashboard in Performance Analytics. This keeps catalog quality measurement within the same modern reporting framework and ensures indicators are available to catalog owners and process stakeholders alongside other platform health metrics.


To Close: Readiness Assessment Is Too Important to Leave Out

A Service Catalog Readiness Assessment is not merely a recommended step before AI enablement. It is the prerequisite. No amount of configuration compensates for a catalog that is structurally broken, content-poor, and ungoverned. If you are heading into an AI implementation, or already in one and wondering why adoption is stalling, ask one question first:

Is your catalog ready for AI?

Larry Bird understood that greatness is built on fundamentals. The same logic applies here.
