MGOPW
ServiceNow Employee

 

Next Experience Inspector

Articles Hub

Want to see all of our other articles and blogs related to UI Builder and Workspaces?


We Value Your Feedback!

Have any suggestions or topics you'd like to see in the future? Let us know!

Overview

The Next Experience Inspector is a Chrome DevTools tab available from ServiceNow's Next Experience Developer Tools Chrome extension. It allows developers to easily inspect and debug Next Experience application pages in real time, helping you understand exactly how your UI components function behind the scenes. 

 

Family Release: Xanadu
UI Builder Release: 26.2.59
Roles Required: admin
 
Authored by: @michaelburney 

Starting and Stopping a Recording


The Profiler offers two ways to record performance data: a manual recording or a page-load recording.
 
The interface includes controls for both:
[Screenshot: Profiler recording controls]
  • Manual Recording: Use this to capture a specific sequence of interactions. Click the Record button (a round red icon) to begin recording on the current page. Then perform the actions you want to profile (for example, clicking buttons, opening components, etc.). When done, click the Stop button. The Profiler will then generate the results for the time span you recorded. Manual recordings are ideal for profiling user interactions or specific workflows after the initial page load. 
  • Page Load Recording: Use this to profile the initial page load of a workspace or experience. Click the Record Page Load button (often depicted as a reload icon). This will trigger or wait for a full page refresh while capturing the page’s loading performance from the very start. Alternatively, you can start a recording and then manually refresh the page. The Profiler will automatically treat the navigation and load sequence as the profile. Recording page load is useful for analyzing how long the page and its components take to initialize. 
During recording, the Profiler gathers all relevant client-side events from the Next Experience framework. After you stop recording (or the page finishes loading), the collected profile data is displayed in several sections described below. You can initiate multiple recordings in one session – each new profile can be saved or compared. It’s also possible to export a profile to a file for later analysis or comparison (use the download icon to save the profile data).
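Exported profiles are saved as JSON, which makes them easy to post-process outside the browser. As a rough sketch, loading one in Python might look like the following (the field names here, such as metadata, events, and eventType, are illustrative assumptions rather than the documented export schema; inspect your own export to confirm the actual keys):

```python
import json

# Hypothetical export: the keys below are assumptions for illustration,
# not the official schema. Open a real .json export to see the true shape.
sample_export = """
{
  "metadata": {"capturedOn": "2025-04-03T12:01:28Z", "pageUrl": "/now/agent-workspace"},
  "events": [
    {"timestamp": 0.0, "eventType": "RENDER_START", "componentTag": "now-button"},
    {"timestamp": 3.0, "eventType": "RENDER_END",   "componentTag": "now-button"}
  ]
}
"""

profile = json.loads(sample_export)
print("Profiled page:", profile["metadata"]["pageUrl"])
print("Events captured:", len(profile["events"]))
```

Once parsed, the event list can be filtered, counted, or diffed with ordinary tooling, which is what makes saved profiles useful for later analysis and comparison.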

Profiler Settings and Options


The Profiler tab includes a Settings panel (accessible via the gear icon) where you can configure profiling options.

[Screenshot: Profiler Settings panel]

 

In the Settings panel, you can:

  • Capture all Performance Counters: Toggle whether to collect any available performance counters (discussed later). If disabled, the Performance Counters tab will remain empty. 
  • Stop Recording after Page Load: Automatically stop an in-progress recording once the page finishes loading. 
  • Enable Usage Tracking
  • Capture all events: Choose which categories of component events or lifecycle phases to capture. For example, you might enable or disable low-level events if you want a more focused profile. 
Typically, default settings will capture all relevant events with a timestamp precision of about 1 ms.
(Note: Timing data is limited to ~1 millisecond granularity due to browser security constraints and high-resolution timer limitations.) You should rarely need to change these settings, but it’s good to know they exist for advanced use cases. For more information, check out the documentation.

Timeline View


Once a recording is finished, the Timeline view appears at the top of the Profiler tab, giving a high-level overview of events over time. The timeline is essentially a horizontal graph representing the profile duration (in milliseconds on the x-axis) and the density or occurrence of events:

[Screenshot: Timeline view]

 

  1. Event Frequency Graph: At the very top of the timeline, a line graph plots the number of events occurring over time. Spikes or clusters in this graph indicate periods of high activity (e.g. many component updates firing at once). This helps identify performance-heavy moments at a glance. 
  2. Time Range Sliders: The timeline includes draggable start and end handles (often displayed as small markers or vertical bars). By dragging these sliders, you can manually adjust the selected time range of interest. The area between the sliders is the active range that the Profiler will analyze in detail (the Summary and Events data will update to reflect only events in this range). This is useful for “trimming” the profile to focus on a particular segment (e.g. isolating the page load sequence or a specific user interaction). 
  3. Zoom Controls: In the top-right corner of the timeline panel, you’ll find zoom in/out buttons (magnifying glass icons) and a fit-to-window control. These let you zoom into a shorter time interval for a closer look or zoom out to see the full timeline. You can also click-and-drag or use the mouse wheel to pan and zoom horizontally across the timeline. 
  4. Filter: For precise control, use the Filter button (funnel icon) at the right end of the timeline to “Adjust timeline selection.” This opens a dialog allowing you to define start and end boundaries based on specific events. This is extremely helpful to zero in on a particular component’s lifecycle or any specific sequence of interest without manually hunting through timestamps. 
  5. Reset: You can clear filters to restore the view of all events in the profile. 
The timeline view provides an intuitive visualization of performance. Spikes in the event graph or dense clusters of colored events indicate where potential bottlenecks might be. You can zoom into those areas and investigate in detail using the Summary and Events panels. Remember that the timeline’s time measurements are approximate to millisecond precision – very fast events may appear as simultaneous if they occur within the same 1 ms time slice.
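Conceptually, the range sliders do nothing more than restrict analysis to events whose timestamps fall inside the selected window. A minimal Python sketch of that idea, assuming each exported event carries a millisecond timestamp field (an assumption about the export format, not the documented schema):

```python
# Illustrative events; "timestamp" (ms since recording start) is assumed.
events = [
    {"timestamp": 120.0,  "eventType": "RENDER_START", "componentTag": "now-list"},
    {"timestamp": 980.0,  "eventType": "STATE_UPDATE", "componentTag": "now-list"},
    {"timestamp": 2010.0, "eventType": "RENDER_END",   "componentTag": "now-list"},
]

def in_range(events, start_ms, end_ms):
    """Keep only events inside [start_ms, end_ms], like the timeline sliders."""
    return [e for e in events if start_ms <= e["timestamp"] <= end_ms]

# Zoom in on a suspected spike between 0.9 s and 2.1 s
spike = in_range(events, 900.0, 2100.0)
print([e["eventType"] for e in spike])
```

Trimming the event list this way is exactly what lets the Summary and Events panels recompute their numbers for just the selected segment.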

Flame Graph View


Beneath the timeline (or as part of the timeline section in some layouts), the Profiler provides a Flame Graph visualization. The flame graph focuses on component load times and is a key tool for identifying which components are contributing the most to page load or interaction latency. Each bar in the flame graph represents the execution time of a component (often from its initial render start to render end, including any child component renders). Components with longer bars took more time.

[Screenshot: Flame graph view]

 

  • Component Load Bars: The flame graph typically stacks components in a hierarchy. Top-level bars might represent high-level components or the overall page, and sub-bars beneath them (flames) represent nested components or processes contributing to that total time. The length of each bar is proportional to the time that component took to load or update. All bars together can give a sense of the cumulative time. 
  • Interactive Zoom: You can click a bar in the flame graph to zoom in on that component; the flame graph then shows the breakdown of that component’s time. 
Use the flame graph to quickly pinpoint slow components. It provides a clear visual cue: wide bars indicate the components consuming the most time.
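To see how bar widths could be derived, here is a small sketch that pairs each component's RENDER_START with its RENDER_END and computes a duration per component. The event names and fields are assumptions based on the labels the profiler shows, not a documented format:

```python
# Hypothetical event stream: c2 is a nested component rendering inside c1.
events = [
    {"timestamp": 0.0,  "eventType": "RENDER_START", "componentId": "c1"},
    {"timestamp": 5.0,  "eventType": "RENDER_START", "componentId": "c2"},
    {"timestamp": 11.0, "eventType": "RENDER_END",   "componentId": "c2"},
    {"timestamp": 42.0, "eventType": "RENDER_END",   "componentId": "c1"},
]

def render_durations(events):
    """Match start/end pairs per component and return elapsed ms each."""
    starts, durations = {}, {}
    for e in sorted(events, key=lambda e: e["timestamp"]):
        if e["eventType"] == "RENDER_START":
            starts[e["componentId"]] = e["timestamp"]
        elif e["eventType"] == "RENDER_END":
            durations[e["componentId"]] = e["timestamp"] - starts.pop(e["componentId"])
    return durations

durations = render_durations(events)
slowest = max(durations, key=durations.get)  # the "widest bar"
```

Note that, as in the flame graph itself, a parent's duration here includes the time its children spent rendering.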

Summary Tab


Below the timeline and flame graph, the Profiler displays detailed data in a series of tabbed panels. The first of these is the Summary tab.

The Summary tab gives a high-level overview of the performance metrics for the selected time range:

[Screenshot: Summary tab]

 

  1. Total Load Time: At the top of the Summary tab, you’ll see the total duration of the profile (for a full-page load recording, this is the page load time). If you have trimmed the timeline to a subset range, it will show the duration of that range as well.
  2. Event Count Summary: The Summary includes a visual breakdown of event types that occurred. Typically, this is shown as a chart (often a donut or pie chart) where each slice represents a category of events and its proportion. Each slice’s size corresponds to how many of that event were recorded.
  3. Breakdown by Event Type: Alongside or below the chart, the Summary may list each event type with its count. This is useful to identify if a certain type of event is happening excessively. For example, you might note that there were 50+ state update events, which could hint at a lot of reactive state changes, or perhaps an unexpectedly high number of re-renders for a component. 
The Summary tab is useful for an at-a-glance understanding of the profile.
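The event-count breakdown is simple tallying. If you export a profile, you can reproduce the Summary's counts and chart proportions yourself; the eventType field name below is an assumption about the export format:

```python
from collections import Counter

# Illustrative exported events (field names assumed, not official)
events = [
    {"eventType": "STATE_UPDATE"},
    {"eventType": "RENDER_END"},
    {"eventType": "STATE_UPDATE"},
    {"eventType": "STATE_UPDATE"},
]

counts = Counter(e["eventType"] for e in events)
total = sum(counts.values())

# Each chart slice's size is its share of the total event count
proportions = {event_type: n / total for event_type, n in counts.items()}
```

A disproportionately large slice, such as state updates dominating the chart, is the same signal the Summary tab is designed to surface at a glance.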

Events Tab


The Events tab lists all the individual events captured during the profile. This is a detailed log of the recorded timeline, presented in table format. Each row usually corresponds to a single event occurrence. Typical columns include the timestamp (or time since start), the event type, and the component associated with that event.

Key features of the Events tab:
  • Page Load Time: The time, in seconds, spent recording the profile. 
  • Event Time: When the event occurred. (For security reasons, the browser reduces timer precision to roughly 1 ms; check the official documentation for how to increase this to microsecond precision: https://developer.servicenow.com/dev.do#!/reference/next-experience/xanadu/developer-tools/profiler-...) 
  • Event Type: The type of event (start, end, dispatch, etc.). 
  • Component Tag: The tag name of the component. 
  • Component ID: The unique identifier of the component instance. 
  • Interaction ID: An alphanumeric identifier for the interaction. 
  • Details: Specific details about the triggered event (such as update, cause, type, value, etc.). 
  • “All” vs. “Component Aggregate” View: Like the Summary, the Events panel offers a toggle between listing All Events and a Component Aggregate view.

In the All view (default), each event is a separate line.
 
[Screenshot: Events tab, All view]

In Component Aggregate view, events may be grouped by component, showing cumulative metrics per component.
 
[Screenshot: Events tab, Component Aggregate view]
 
For example: instead of many lines for a component’s multiple render and update events, you’d see one line for that component with the total count of renders, total count of updates, and maybe total time spent. This gives a component-centric summary directly in the events panel. It’s a quick way to answer the question, “Which components had the most activity during the profile?” You can switch back to the detailed list anytime. 

Use the Events panel in conjunction with the timeline to dive deep into performance issues. For instance, if the timeline showed a spike at 2.0 seconds, you can scroll or filter the Events list to see what happened around that time – maybe a particular component re-rendered multiple times. The combination of chronological event data and filtering makes it possible to trace performance issues to specific triggers.
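The Component Aggregate view is essentially a group-by over the event rows. If you'd rather compute the same rollup from an exported profile, a sketch might look like this (field names are assumed for illustration, not taken from an official schema):

```python
from collections import defaultdict

# Illustrative per-event rows, as the "All" view would list them
events = [
    {"eventType": "RENDER_END",   "componentTag": "now-button"},
    {"eventType": "STATE_UPDATE", "componentTag": "now-button"},
    {"eventType": "RENDER_END",   "componentTag": "now-list"},
    {"eventType": "RENDER_END",   "componentTag": "now-button"},
]

def aggregate_by_component(events):
    """Roll individual event rows up into one row per component."""
    agg = defaultdict(lambda: defaultdict(int))
    for e in events:
        agg[e["componentTag"]][e["eventType"]] += 1
    return {tag: dict(by_type) for tag, by_type in agg.items()}

aggregate = aggregate_by_component(events)
```

Sorting that dictionary by total count per component answers the "most active component" question the aggregate view is built for.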

Profile Metadata and Performance Counters


In addition to the performance event data, the Profiler provides two informational tabs: Profile Metadata and Performance Counters.

Profile Metadata Tab


The Profile Metadata tab presents additional data about the profile recording – essentially the context of when and where the profile was captured.
 
This includes:
  • Name: The name assigned to the profile. 
  • Captured On: The date and time when the recording was created. 
  • Browser: The browser and version (e.g., “Chrome Version 134.0.0.0”). 
  • Next Experience Framework Version: The version of the ServiceNow Next Experience client framework running on the page. 
  • Page URL: The exact page or app URL that was profiled. 
  • Viewport: The page’s viewport dimensions. 
This metadata is extremely useful: it ensures you know which environment and conditions the data came from.

Performance Counters Tab


The Performance Counters tab is intended to display any performance metrics or counters that were collected during the profile. In practice, you may often find this tab empty with a message like "No Metrics Available".
 
[Screenshot: Performance Counters tab showing “No Metrics Available”]
That’s because, by default, the tool might not capture additional counters unless explicitly enabled.

To ensure metrics display, open the Settings panel (gear icon) and enable Capture all Performance Counters.

[Screenshot: Capture all Performance Counters setting]

Using the Compare View


One of the most powerful features of the Next Experience Profiler is the Compare View, which allows you to compare two profiles side-by-side. This is incredibly useful for identifying regressions or improvements – for instance, comparing a baseline performance profile to a new profile.
 
The Compare View highlights differences in events and timings to pinpoint what changed.
 

Preparing Profiles for Comparison


To use the Compare View, you need two profiles available. These could be two recordings from the same session or a combination of a loaded saved profile and the current profile.

A common workflow is:
  1. Capture and Save the Baseline Profile: Go to the first environment (or the “before” scenario), record a profile (page load or manual actions as needed). Once the profile is generated, click the Download button to save it (it will save as a .json or similar profile data file). This is your baseline data. 
  2. Capture the Target Profile: Now go to the second environment (or after deploying some changes for a before/after test) and record the profile in the same manner. For fairness, try to capture the same scenario. You can do this on another instance or the same instance after changes. Once recorded, you’ll have this profile in the Profiler. 
  3. Load and Compare: In the Profiler tab, load the baseline file you saved earlier. There is an Import or Upload Profile option. Select the previously saved profile file – the extension will load it, typically marking it as a “Baseline” profile. Now, with one profile set as baseline and the current one as target, click the Compare button. The UI will switch to the Compare View mode, displaying the two profiles’ data side by side.
[Screenshot: Importing a saved profile for comparison]

Compare View Layout and Panels


In Compare View, the Profiler interface divides into two primary sections:
  • Metadata Panel (Baseline vs Target): On the left side, you’ll see a panel that lists key metadata for both profiles. This is similar to the Profile Metadata tab but shown in a comparative format. For each metadata field, the values for the Baseline profile and the Target profile are shown adjacent.
  • Comparison Details Panel: The core of the Compare View is a table that highlights differences between the two profiles. Each row in this table represents a particular event type or a component and the columns show the Baseline vs Target metrics for that row, along with the delta (difference). For example, the first row might be “Total Events,” showing the total count of events in each profile (baseline vs target) and the difference between them.

[Screenshot: Compare View comparison table]

Additional Information:
  • Interpreting Deltas: In general, more events (a positive delta) in the target can signal additional work being done (possibly a regression if it’s unexpected), whereas fewer events (a negative delta) might indicate an optimization. For example, if a component such as now-button shows +3 RENDER_END events, the new profile rendered that button three more times than before – perhaps a clue that something caused extra re-renders. The table highlights these differences in red or green to draw your attention. All differences are computed as target relative to baseline. 
  • Sorting & Filtering: The comparison table can be sorted by any column, which is very useful. You can sort by the delta of total events to see which component had the largest increase or decrease, or sort a specific event type column to see where the biggest change in, say, render events occurred. 
[Screenshot: Sorting the comparison table]
 
There’s also a filter toggle to show All rows vs Non-Zero only. By switching to "Non-Zero only" the table will hide any components where the profiles had no difference (i.e., all deltas are zero). This lets you focus only on components or event types that actually changed between the two runs, cutting out the noise of identical metrics.   

[Screenshot: “Non-Zero only” filter in Compare View]
 
For example: you might find that the baseline had 10 state update events for a list component, but the target had 30 – a huge increase, indicating a possible new inefficiency. Or, you might see fewer render events for a component after optimization.

Using the Compare View, you could, for instance, compare a slow-loading workspace on one instance to a normal one on another instance. By doing so, you might observe that a particular component fires significantly more events on the slow instance.
 
This zeroes in on the problem: perhaps a misconfiguration or a bug causing repetitive work.

In summary, the Compare View’s table essentially answers the question: “What’s different between Profile A and Profile B?”
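That question comes down to ordinary arithmetic on event counts: tally each profile, subtract baseline from target, and hide the zero rows. A hedged sketch of the same logic over two exported profiles (the event-type names are illustrative, not the official set):

```python
from collections import Counter

# Event counts per profile; numbers here mirror the worked example above
baseline = Counter({"RENDER_END": 10, "STATE_UPDATE": 10})
target   = Counter({"RENDER_END": 13, "STATE_UPDATE": 30, "DISPATCH": 2})

def deltas(baseline, target, non_zero_only=True):
    """Return target-minus-baseline per event type, like the Compare table."""
    rows = {}
    for event_type in baseline.keys() | target.keys():
        diff = target[event_type] - baseline[event_type]  # Counter: missing key -> 0
        if diff != 0 or not non_zero_only:
            rows[event_type] = diff
    return rows

diff_rows = deltas(baseline, target)
# Positive deltas mean the target profile did more work of that kind
```

The non_zero_only flag plays the role of the "Non-Zero only" toggle: it drops rows where the two runs were identical so only real changes remain.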
 

Conclusion

Congratulations! 🎉 The Next Experience Profiler is a comprehensive tool for ServiceNow developers and implementers to analyze front-end performance. By following the steps above, you can record a profile, inspect timeline and event details, and even compare two runs to catch regressions.


Remember that this extension supports Next Experience (UI Builder-based) interfaces on Xanadu, Yokohama, and later releases of ServiceNow.


Check out the Next Experience Center of Excellence for more resources.