
Rozmin Parpia
ServiceNow Employee


How to recognize the incident clusters that warrant a single Problem ticket and what each one reveals about your ITSM process.

 

For a Problem Manager, the hardest call is often: which of these incidents belong together? Traditional reports group by category, CI, or assignment group, but the Problem candidates most worth raising rarely sit obediently within those boundaries. They span CIs, categories, and teams, linked only by the path their incidents took through the process.

 

ServiceNow Process Mining surfaces those hidden links by analyzing the sequence of states, transitions, handoffs, and resolution paths each incident actually went through. The output is a set of recognizable patterns. Below are six of the most common, along with the kind of Problem ticket each one warrants.

Six Patterns Every Problem Manager Should Recognize

  1. The Shared Workaround Cluster

What you see: 14 incidents across 8 different applications over 30 days, all resolved with some variant of “restart the authentication service.”

Shared signature: different L1 groups touch them first, but every case converges on the Identity team, where the same fix is applied.

Why a Problem ticket: an unstable shared dependency is generating intermittent failures across the application portfolio. One Problem record on the auth platform, not 14 separate Known Errors.

Surfaced by: work notes analysis on resolution path. Breakdowns: CI, category.
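The core of this detection is grouping incidents by what their close notes say rather than by any categorical field. A minimal sketch of that idea in Python, assuming a simplified incident record with `number` and `close_notes` fields (the real product applies far richer text analysis to work notes):

```python
from collections import defaultdict
import re

def resolution_signature(close_notes: str) -> str:
    """Crude normalization: lowercase, drop incident numbers, collapse whitespace."""
    text = re.sub(r"\binc\d+\b", "", close_notes.lower())
    return re.sub(r"\s+", " ", text).strip()

def cluster_by_workaround(incidents):
    """Group incidents whose close notes share the same normalized signature."""
    clusters = defaultdict(list)
    for inc in incidents:
        clusters[resolution_signature(inc["close_notes"])].append(inc["number"])
    # Only clusters spanning multiple incidents are Problem candidates.
    return {sig: nums for sig, nums in clusters.items() if len(nums) > 1}

incidents = [
    {"number": "INC001", "close_notes": "Restarted the authentication service"},
    {"number": "INC002", "close_notes": "restarted the  authentication service"},
    {"number": "INC003", "close_notes": "Cleared browser cache"},
]
print(cluster_by_workaround(incidents))
# → {'restarted the authentication service': ['INC001', 'INC002']}
```

The point is that the cluster key is derived from the resolution path, so incidents with different CIs and categories still land in the same bucket.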

  2. The Post-Change Aftermath

What you see: 22 incidents on five applications within 72 hours of a closed Emergency Change to a network firewall, but only 3 have parent_change populated.

Shared signature: incidents caused by a change but never linked to it. Temporal proximity to the change, plus a shared upstream CI dependency on the firewall, with similar latency-related symptoms.

Why a Problem ticket: a regression introduced by a change that bypassed full peer review. The Problem scopes back-out criteria, links the orphaned incidents, and feeds the finding into CAB governance.

Surfaced by: cross-table correlation of change_request and incident on a shared CI within a time window.
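The correlation itself is a join on CI plus a time-window filter. A hedged sketch, using simplified stand-ins for the `change_request` and `incident` records (field names like `affected_cis` are illustrative, not the exact schema):

```python
from datetime import datetime, timedelta

def incidents_near_change(change, incidents, window_hours=72):
    """Return incidents on CIs the change touched, opened within the window
    after the change closed, regardless of whether parent_change is set."""
    closed = change["closed_at"]
    window = timedelta(hours=window_hours)
    return [
        inc["number"] for inc in incidents
        if inc["cmdb_ci"] in change["affected_cis"]
        and closed <= inc["opened_at"] <= closed + window
    ]

change = {"number": "CHG0040001",
          "closed_at": datetime(2024, 3, 1, 22, 0),
          "affected_cis": {"fw-core-01"}}
incidents = [
    {"number": "INC100", "cmdb_ci": "fw-core-01",
     "opened_at": datetime(2024, 3, 2, 6, 30)},
    {"number": "INC101", "cmdb_ci": "db-prod-02",
     "opened_at": datetime(2024, 3, 2, 7, 0)},   # different CI
    {"number": "INC102", "cmdb_ci": "fw-core-01",
     "opened_at": datetime(2024, 3, 9, 9, 0)},   # outside the 72h window
]
print(incidents_near_change(change, incidents))  # → ['INC100']
```

Every hit here is a candidate to link to the Problem, whether or not the submitter filled in parent_change.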

  3. The Ownership Boundary Bounce

What you see: database performance incidents repeatedly bouncing between DB Admin and App Support, averaging four reassignments before resolution.

Shared signature: the same back-and-forth path between two assignment groups, regardless of which application is reporting the issue. Often the assignment group is updated even after the ticket is resolved. Major incidents commonly display this behaviour, with assignment groups refraining from taking ownership of root cause.

Why a Problem ticket: ambiguous ownership at the DB/App interface, not a technical fault. The Problem drives a runbook update or RACI clarification, removing the routing tax from every future incident in this category.

Surfaced by: multi-hop reassignment analysis with the Activity Definition set to Assignment Group.
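Detecting the bounce pattern amounts to spotting A→B→A sequences in a case's assignment-group history. A minimal sketch, assuming the ordered history has already been extracted from the audit trail:

```python
def bounce_score(assignment_history):
    """Count total reassignments and A->B->A bounces in one case's
    ordered assignment-group history."""
    hops = list(zip(assignment_history, assignment_history[1:]))
    bounces = sum(
        1 for i in range(len(hops) - 1)
        if hops[i][0] == hops[i + 1][1] and hops[i][1] == hops[i + 1][0]
    )
    return len(hops), bounces

history = ["Service Desk", "DB Admin", "App Support", "DB Admin", "App Support"]
print(bounce_score(history))  # → (4, 2): four reassignments, two of them bounces
```

Run across a category, a high bounce share between the same two groups is the ownership-boundary signal, independent of which application raised the ticket.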

  4. The Premature Resolution Loop

What you see: incidents moving from Resolved back to In Progress within 24 hours, repeatedly, before final closure, concentrated in one intake channel.

Shared signature: the same Resolved→Reopened back-flow, with the channel acting as the dominant cluster attribute.

Why a Problem ticket: an auto-resolve workflow in the self-service portal is closing tickets before user verification. The Problem is on the workflow itself; fixing it improves the integrity of every Problem Management metric downstream.

Surfaced by: back-flow detection on the process map, combined with clustering and work notes analysis.
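Back-flow detection here reduces to scanning each case's state sequence for a Resolved state that is followed by anything other than Closed, then attributing the hits to an intake channel. A simplified sketch (the channel names and log shape are illustrative):

```python
from collections import Counter

def reopen_counts(event_log):
    """Count Resolved -> (not Closed) back-flows per intake channel.
    event_log: list of (incident number, channel, ordered state list)."""
    counts = Counter()
    for number, channel, states in event_log:
        for prev, nxt in zip(states, states[1:]):
            if prev == "Resolved" and nxt != "Closed":
                counts[channel] += 1
    return counts

log = [
    ("INC200", "self_service",
     ["New", "In Progress", "Resolved", "In Progress", "Resolved", "Closed"]),
    ("INC201", "phone",
     ["New", "In Progress", "Resolved", "Closed"]),
    ("INC202", "self_service",
     ["New", "Resolved", "In Progress", "Resolved", "Closed"]),
]
print(reopen_counts(log))  # → Counter({'self_service': 2})
```

When one channel dominates the counter, the Problem belongs to that channel's workflow, not to any individual ticket.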

  5. The Vendor Wait State

What you see: incidents on a SaaS-integrated service averaging four days in the Awaiting Vendor state, versus six hours for incidents not involving that vendor.

Shared signature: the same elongated transition (Awaiting Vendor→In Progress) concentrated on one vendor across multiple service categories.

Why a Problem ticket: a vendor support performance issue, not an internal one. The Problem becomes the artifact that drives vendor escalation, contract review, or formalization of a workaround.

Surfaced by: transition-level (edge) duration analysis on the process map.
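Edge duration analysis treats each observed transition as a record with a duration, then aggregates by an attribute, here the vendor. A hedged sketch with pre-extracted transition records (the `vendor` and `hours` fields are illustrative):

```python
from collections import defaultdict

def mean_wait(transitions):
    """Average duration (hours) of the Awaiting Vendor -> In Progress edge,
    keyed by vendor."""
    totals = defaultdict(lambda: [0.0, 0])
    for t in transitions:
        if (t["from"], t["to"]) == ("Awaiting Vendor", "In Progress"):
            totals[t["vendor"]][0] += t["hours"]
            totals[t["vendor"]][1] += 1
    return {v: total / n for v, (total, n) in totals.items()}

transitions = [
    {"from": "Awaiting Vendor", "to": "In Progress", "vendor": "AcmeSaaS", "hours": 96},
    {"from": "Awaiting Vendor", "to": "In Progress", "vendor": "AcmeSaaS", "hours": 96},
    {"from": "Awaiting Vendor", "to": "In Progress", "vendor": "OtherCo",  "hours": 6},
]
print(mean_wait(transitions))  # → {'AcmeSaaS': 96.0, 'OtherCo': 6.0}
```

The gap between vendors on the same edge, rather than any one ticket's age, is what justifies the Problem record.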

  6. The Misclassified Cluster

What you see: 40 incidents categorized as Network but resolved by the Database team after the same three-step rerouting pattern; resolution notes consistently reference query latency.

Shared signature: the declared category and the actual resolution path systematically diverge; the cluster’s process fingerprint matches a different category entirely.

Why a Problem ticket: intake misclassification driving routing waste and skewing every category-based metric. The Problem drives intake form changes, taxonomy review, or classification rules.

Surfaced by: clustering and work notes analysis.
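A simple way to see this divergence is to compare the declared category against the group that actually resolved the ticket, and flag pairs that recur. A sketch under the assumption that each category has one expected owning group (the mapping below is hypothetical):

```python
from collections import Counter

# Assumed category -> owning group taxonomy; purely illustrative.
CATEGORY_OWNER = {"Network": "Network Ops", "Database": "Database"}

def misclassification_candidates(incidents, min_count=3):
    """Flag (declared category, resolving group) pairs that recur often enough
    to suggest systematic intake misclassification."""
    pairs = Counter(
        (inc["category"], inc["resolved_by"])
        for inc in incidents
        if CATEGORY_OWNER.get(inc["category"]) not in (None, inc["resolved_by"])
    )
    return {pair: n for pair, n in pairs.items() if n >= min_count}

incidents = (
    [{"category": "Network", "resolved_by": "Database"}] * 4
    + [{"category": "Network", "resolved_by": "Network Ops"}] * 10
)
print(misclassification_candidates(incidents))
# → {('Network', 'Database'): 4}
```

A recurring pair like ('Network', 'Database') is the fingerprint mismatch: the intake says one thing, the process says another.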

 

Improve Your Problem Management Lifecycle

Your ITIL process diagram shows one happy path through Problem Management. Process Mining will routinely reveal dozens, sometimes hundreds, of unique paths actually executed. A variant is the complete sequence of state transitions a Problem record took, and reports cannot represent it because a sequence is not an aggregation. Variant analysis tells you that 18% of your Problems follow Path A, 12% follow Path B, and that 200 distinct long-tail variants account for the rest. That distribution is the starting point for any meaningful standardization conversation.
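Conceptually, a variant is just the full ordered state sequence used as a grouping key. A minimal sketch of computing the variant distribution from an event log (the Problem states below are simplified):

```python
from collections import Counter

def variant_distribution(cases):
    """cases: record number -> ordered list of states.
    Returns each distinct sequence (variant) with its share of cases."""
    counts = Counter(tuple(states) for states in cases.values())
    total = len(cases)
    return {v: n / total for v, n in counts.most_common()}

cases = {
    "PRB001": ["New", "Assess", "RCA", "Fix in Progress", "Resolved", "Closed"],
    "PRB002": ["New", "Assess", "RCA", "Fix in Progress", "Resolved", "Closed"],
    "PRB003": ["New", "Assess", "RCA", "Assess", "RCA", "Resolved", "Closed"],
    "PRB004": ["New", "RCA", "Resolved", "Closed"],
}
print(variant_distribution(cases))
# The happy path accounts for half the cases; two long-tail variants cover the rest.
```

Because the key is the whole sequence, no aggregation over individual columns can reproduce this table, which is exactly why reports miss it.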

1. Quantifying Rework Loops and Back-Flows

When a Problem moves from Fix in Progress back to Root Cause Analysis, then forward again, then back again, that loop is invisible to reporting. A report sees the current state and counts state changes; it cannot tell you that 23% of your Problems re-enter RCA at least once, or that the Fix→RCA back-flow adds an average of nine days. Process Mining surfaces these loops directly on the process map as self-referencing edges, with frequency and duration attached, making the cost of rework concrete and addressable.
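The loop metric described above can be sketched directly: count how many times a case enters RCA beyond its first visit, then take the share of cases with at least one re-entry. The state names are simplified stand-ins:

```python
def rca_reentries(states):
    """Count how many times a case re-enters RCA after first leaving it."""
    entries = sum(
        1 for prev, nxt in zip(states, states[1:])
        if nxt == "RCA" and prev != "RCA"
    )
    return max(entries - 1, 0)  # the first entry is normal; the rest are rework

looped = ["New", "Assess", "RCA", "Fix in Progress", "RCA", "Fix in Progress", "Resolved"]
clean  = ["New", "Assess", "RCA", "Resolved"]
print(rca_reentries(looped))  # → 1

cases = [looped, clean]
share = sum(1 for c in cases if rca_reentries(c) > 0) / len(cases)
print(share)  # → 0.5, i.e. half of these Problems rework RCA at least once
```

Attach a duration to each re-entry edge and the "nine days of added rework" figure falls out of the same scan.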

2. Finding Bottlenecks in Transitions, Not Just States

A duration report tells you the average time spent in the Assessment state. What it cannot tell you is that Assessment→RCA takes a day when the next owner is the Network team, but eight days when it routes to Application Engineering. Process Mining treats every transition as a measurable edge with its own duration, frequency, and attributes. The slow step is rarely a state itself; it’s the handoff between states, and that handoff is only visible when you can analyze the directed edges of the process graph.
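Measuring an edge rather than a state means timing the gap between two consecutive events and keying it by an attribute of the receiving side. A sketch, assuming each case is an ordered list of (state, owner group, timestamp) events:

```python
from collections import defaultdict
from datetime import datetime

def edge_durations(cases):
    """Average duration in days of the Assessment -> RCA edge,
    segmented by the group that receives the handoff."""
    out = defaultdict(list)
    for events in cases:
        for (s1, _, t1), (s2, g2, t2) in zip(events, events[1:]):
            if (s1, s2) == ("Assessment", "RCA"):
                out[g2].append((t2 - t1).days)
    return {g: sum(d) / len(d) for g, d in out.items()}

cases = [
    [("Assessment", "PM", datetime(2024, 5, 1)), ("RCA", "Network", datetime(2024, 5, 2))],
    [("Assessment", "PM", datetime(2024, 5, 1)), ("RCA", "App Eng", datetime(2024, 5, 9))],
]
print(edge_durations(cases))  # → {'Network': 1.0, 'App Eng': 8.0}
```

The state-level average would blend these into a misleading middle number; the edge view shows the eight-fold difference between the two handoffs.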

3. Conformance Against a Multi-Step Ideal Process

A report can check a single condition: was the RCA field populated before close? Process Mining checks an entire model. Did Assessment precede RCA? Did RCA precede Known Error? Did Change Management get triggered before Resolution? Multi-step process compliance is not expressible as a list of column-level filters; it requires the process model.
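The multi-step check is an ordered-subsequence test: each required activity must appear, in order, while other states may interleave. A hedged sketch with a simplified ideal model:

```python
def conforms(states, model=("Assessment", "RCA", "Known Error", "Resolved")):
    """True if every activity in the model occurs in order within the case,
    allowing other states to interleave."""
    pos = 0
    for s in states:
        if pos < len(model) and s == model[pos]:
            pos += 1
    return pos == len(model)

good = ["New", "Assessment", "RCA", "Known Error", "Fix in Progress", "Resolved"]
bad  = ["New", "RCA", "Assessment", "Known Error", "Resolved"]  # RCA before Assessment
print(conforms(good), conforms(bad))  # → True False
```

No combination of column filters can express this, because each condition depends on the position of earlier events in the same case.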

4. Clustering Bottlenecks by Hidden Drivers

Once Process Mining identifies a slow node, say RCA, clustering segments the cases stuck there by attribute combinations to surface the actual driver. You may discover that 70% of the delay is concentrated in P3 Problems linked to a specific business service, or that one assignment group accounts for the bulk of RCA aging.
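The attribution step is an aggregation of stuck-time over attribute combinations, ranked by share of total delay. A sketch with illustrative attributes and a hypothetical `days_in_rca` measure:

```python
from collections import defaultdict

def delay_by_attributes(stuck_cases, keys=("priority", "business_service")):
    """Total the delay per attribute combination, then rank combinations
    by their share of overall delay at the slow node."""
    totals = defaultdict(float)
    for case in stuck_cases:
        totals[tuple(case[k] for k in keys)] += case["days_in_rca"]
    grand = sum(totals.values())
    return sorted(((v / grand, k) for k, v in totals.items()), reverse=True)

cases = [
    {"priority": "P3", "business_service": "Payments", "days_in_rca": 30},
    {"priority": "P3", "business_service": "Payments", "days_in_rca": 40},
    {"priority": "P1", "business_service": "Email",    "days_in_rca": 10},
    {"priority": "P2", "business_service": "CRM",      "days_in_rca": 20},
]
print(delay_by_attributes(cases))
# Top entry: 70% of RCA delay sits in P3 Problems on the Payments service.
```

The top of the ranking names the Problem candidate; the long tail tells you what you can safely ignore.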

 

Where to Start

Each of these patterns is a few clicks away. Create a Process Mining project on the incident table with State as the Activity Definition. Run a mine, then apply Work Notes Analysis and Variant Analysis to surface the patterns most relevant to your environment. For change-driven Problems, run a parallel project on change_request and correlate on configuration item within a defined time window. For Problem lifecycle improvement, run a project on the problem table.

 

Reports tell you how many Problems you have. Process Mining tells you which ones you don’t yet know about, by showing you the incidents that are quietly asking to be grouped.