One of the most frustrating conversations I’ve had with stakeholders goes something like this: “The platform feels slow, but the dashboards say everything is fine.” And technically, they’re often right.
Performance Analytics typically measures what’s easy: transaction times, response averages, system metrics. What it doesn’t capture well is user perception, especially in complex workflows. I’ve seen platforms with excellent metrics still feel painful to use because delays happen in the seams—between form loads, client scripts, UI policies, and integrations.
Client-side bloat is a major contributor. Over years of enhancements, forms accumulate scripts, UI policies, catalog client scripts, and conditional logic layered on top of each other. Individually, none of them look problematic. Collectively, they create friction that metrics rarely flag.
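A frequent example of that friction is a client script that queries the server synchronously while the user waits. As a minimal sketch of the usual remedy, the blocking lookup moves into a client-callable Script Include and the form calls it asynchronously via GlideAjax (the Script Include name, table, and field names below are hypothetical):

```javascript
// Script Include (client callable, hypothetical name): AssignmentInfoAjax
var AssignmentInfoAjax = Class.create();
AssignmentInfoAjax.prototype = Object.extendsObject(AbstractAjaxProcessor, {
    // Returns the manager sys_id of the group passed from the client
    getGroupManager: function() {
        var grp = new GlideRecord('sys_user_group');
        if (grp.get(this.getParameter('sysparm_group_id')))
            return grp.getValue('manager');
        return '';
    },
    type: 'AssignmentInfoAjax'
});

// onChange client script: the callback fires when the server responds,
// so the form stays interactive instead of freezing on a synchronous query
function onChange(control, oldValue, newValue, isLoading) {
    if (isLoading || newValue === '') return;

    var ga = new GlideAjax('AssignmentInfoAjax');
    ga.addParam('sysparm_name', 'getGroupManager');
    ga.addParam('sysparm_group_id', newValue);
    ga.getXMLAnswer(function(answer) {
        g_form.setValue('u_group_manager', answer); // hypothetical field
    });
}
```

Each individual conversion like this saves only a few hundred milliseconds, but across a form with a dozen layered scripts it is the difference between "instant" and "sluggish".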
Another issue is synchronous integrations. A call that waits half a second for an external system might pass performance thresholds, but when stacked across multiple actions, it creates a lag users feel immediately. Community threads often describe this as “ServiceNow being slow,” when the root cause is architectural coupling.
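One way to break that coupling is to move the outbound call out of the user's transaction entirely, for example into a Business Rule set to run async, so any waiting happens after the record is already saved. A sketch under assumed names (the REST Message "CMDB Sync" and its "notifyChange" method are placeholders):

```javascript
// Business Rule on incident, When: async
// The REST Message name and method below are hypothetical.
(function executeRule(current, previous /*null when async*/) {
    try {
        var rm = new sn_ws.RESTMessageV2('CMDB Sync', 'notifyChange');
        rm.setStringParameterNoEscape('sys_id', current.getUniqueValue());

        // Fire the request; this wait happens off the user's transaction,
        // so a slow or flaky endpoint never blocks the form submit.
        var response = rm.executeAsync();
        response.waitForResponse(30);

        if (response.getStatusCode() != 200)
            gs.warn('CMDB Sync failed for ' + current.getUniqueValue() +
                ': HTTP ' + response.getStatusCode());
    } catch (ex) {
        gs.error('CMDB Sync error: ' + ex.message);
    }
})(current, previous);
```

The user's experience is now bounded by the platform, not by the slowest external system in the chain.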
There’s also the human factor. Inconsistent behaviour—sometimes fast, sometimes slow—erodes trust faster than consistently mediocre performance. Users stop trusting the platform and start refreshing, resubmitting, or working offline.
The fix is rarely a single optimisation. It requires architectural discipline: reducing client-side logic, decoupling integrations, caching intelligently, and measuring experience, not just transactions. Performance isn’t just about speed—it’s about predictability.
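Measuring experience can start small. A minimal sketch, assuming a desktop UI form and a global (non-isolated) client script where the browser's Performance API is reachable, is to record how long the form took to become interactive from the user's side rather than the server's:

```javascript
// onLoad client script: approximate the user-perceived wait.
// performance.now() returns milliseconds since navigation start,
// so its value at onLoad is roughly "time until the form was usable".
function onLoad() {
    try {
        var perceivedMs = Math.round(window.performance.now());
        // jslog writes to the client-side JavaScript log during debug sessions;
        // in practice you would ship this to a UX metrics store instead.
        jslog('Perceived load time for ' + g_form.getTableName() + ': ' + perceivedMs + ' ms');
    } catch (e) {
        // Never let instrumentation break the form.
    }
}
```

Numbers like these rarely match the transaction log, and that gap is exactly the conversation worth having with stakeholders.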
