
03-09-2022 08:04 AM - edited 01-30-2023 07:16 AM
As a process owner, product manager, platform owner, or someone in a similar role, value realization and ROI of the Now Platform are most likely among your top indicators of success. Finding services to provide, digitize, and automate is crucial for continual improvement but can be time-consuming. Leveraging historical data is a pragmatic, reasonably quick approach that complements qualitative methods and helps cut through the noise. It has one main advantage: the data is already there.
> If we have data, let's look at the data. If all we have are opinions, let's go with mine.
>
> – James L. Barksdale
This article lays out the foundations of a data-driven approach to optimize business processes using ServiceNow Platform features.
This article is meant both for customers (as a continual improvement activity) and for partners and consultants (as a service).
This approach is tailored yet reproducible and is meant to be repeated frequently as data change over time. The outputs are insights unique to the organization and tangible recommended actions.
The benefits of implementing those actions (not covered here) are scalability of operations, faster resolution of issues, higher availability of employees (which translates into more productivity or increased well-being), and better end-user experience.
The approach has 3 phases:
- Review structured data against the strategy
- Define the datasets (as initiatives) for action
- Process text data with ML to gain insights
Despite all the hype and occasional over-promises, Machine Learning has proven reliable at analyzing data faster than humans can to extract insights, especially when dealing with text input. In situations where explicit if-then rules cannot be written, self-learning statistical methods (aka Machine Learning) can outperform humans and scale much further.
As an example, think about all the overhead that would go into maintaining a set of rules to categorize an incoming incident based on its short description, including all the synonyms, typos, and variations for all languages.
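To make the maintenance burden concrete, here is a minimal sketch of such a rule set. The categories, keywords, and misspellings are purely illustrative; the point is that every synonym, typo, and language variant needs its own entry, and anything unseen still falls through.

```python
# A hand-maintained rule set for categorizing incidents by short description.
# Every synonym, typo, and language variant needs its own entry -- the list
# only grows, and it still misses any phrasing it hasn't seen before.
RULES = {
    "Software": ["outlook", "excel", "application", "aplication", "app crash"],
    "Hardware": ["laptop", "notebook", "notebok", "monitor", "keyboard"],
    "Network": ["vpn", "wifi", "wi-fi", "no internet"],
}

def categorize(short_description: str) -> str:
    text = short_description.lower()
    for category, keywords in RULES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Uncategorized"  # every unseen phrasing falls through
```

A French-language request such as "Impossible de se connecter" lands in "Uncategorized" unless someone adds yet more rules, which is exactly the overhead a self-learning approach avoids.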
This creates an opportunity for organizations to turn to ML for insight generation and operational efficiencies.
The case for using the built-in features of the Now Platform should resonate with organizations already using Performance Analytics and Reporting: the business process data (incidents, tasks, etc.) co-exists with the analytics data, which makes data collection more manageable and more secure, and therefore less expensive.
Business process data co-exist with Analytics data on the Now Platform
Phase 1 – Review structured data against the strategy
The first step in the approach is to review the data using “traditional” reporting methods with two goals in mind: first, find obvious patterns that either need no further analysis or call for a larger organizational change initiative; second, understand the data well enough to prepare smaller datasets that target specific opportunities aligned with the company’s strategy.
Finding patterns in structured data is traditionally the domain of reporting and Performance Analytics tools, which track KPIs against targets. The approach here is similar.
First and foremost, you’ll need sufficient reliable data.
Reliable: essentially, data coming from business process tables (incident, task, etc.) for which you have a good overview of the process and service definitions, and can provide context. A Subject Matter Expert can help.
Focus on the relevant data. For example, exclude any data generated by automated events and use tables pertinent to the automation goal, such as reporting on specific task types used as a 'tracker' for agents to manage manual work.
Filter out any specific groups or regions you don’t have authority over.
Sufficient: a good reference point is around 30k records per table after filtering (phase 1).
Note: “the more, the better” is not necessarily the best approach here, as it’s more beneficial to break datasets down and tie them to specific goals. In fact, ML solutions are limited to 300k records per table (with a minimum of 10k records).
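These bounds can be captured in a small helper. This is a sketch based only on the figures above (10k minimum, 300k maximum, roughly 30k as a comfortable working size); the threshold for "consider breaking it down" is an assumption, not a platform rule.

```python
# Sanity-check a dataset's record count against the limits discussed above:
# a 10k minimum and 300k maximum per table for ML, with ~30k as a good
# working size. The 2x "break it down" threshold is an illustrative choice.
MIN_RECORDS = 10_000
MAX_RECORDS = 300_000
TARGET_RECORDS = 30_000

def check_dataset_size(record_count: int) -> str:
    if record_count < MIN_RECORDS:
        return "too small: widen the date range or relax filters"
    if record_count > MAX_RECORDS:
        return "too large: split into goal-specific datasets"
    if record_count > TARGET_RECORDS * 2:
        return "workable, but consider breaking it down by goal"
    return "ok"
```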
Inputs from the process owners will also help drive the discussions in the right direction when reviewing the data against the group’s objectives. For example, deflecting phone calls to chat or automating the request process are two different objectives that might have different priorities in an organization.
Using the built-in reporting tool (Report > Create New), start by analyzing the distribution of records across one dimension with bar charts.
Filters include when the records were created (last 6 months, last year) and can be adjusted to increase or decrease the total number of records. Another filter is the state of the record: looking at completed records will highlight work that was carried out successfully and could be automated, while incomplete or canceled tasks might point to a lack of services.
The dimension can be the contact type, the originating catalog item, the category or service, the assignment group.
One recommendation is to disable the ‘Show Other’ box and set the max number of groups to 10.
*The contact_type field on the incident table has been re-labeled Channel in San Diego.
For example, look at the distribution of Closed and Resolved incidents by Contact type for the last 6 months, as follows. In this case, we are looking for ways to deflect phone calls and emails to self-service.
Conversely, for an organization already promoting self-service, the distribution could look like this. In that case, we are looking for automation and knowledge opportunities to reduce the number of incidents.
We can look for the catalog items generating the most manual work by looking at the distribution of tasks related to Requested items by Item for the last 3 months. The first item on that bar chart is where the focus should be directed.
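Conceptually, this bar chart is a grouped count. A minimal sketch of the same aggregation over exported task records (the item names are illustrative, not real platform data):

```python
from collections import Counter

# Given catalog tasks exported as dicts, count tasks per requested item --
# the equivalent of the bar chart of tasks by Requested item > Item.
# The sample records below are illustrative only.
tasks = [
    {"item": "Restart a virtual server"},
    {"item": "Restart a virtual server"},
    {"item": "Submit a general request"},
    {"item": "Reset password"},
    {"item": "Restart a virtual server"},
]

by_item = Counter(task["item"] for task in tasks)
top_items = by_item.most_common(10)  # mirrors 'max number of groups = 10'
```

The first entry of `top_items` is where the automation focus should be directed.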
Heatmaps provide a view of the distribution across two dimensions and unlock additional information. A practical example is the distribution of tasks by category and reassignment count to find candidates for automation. In this case, a task with a low reassignment count is seen as ‘easy’ to resolve and could be automated by a workflow or a virtual agent. A heatmap would therefore highlight the categories with the highest number of easy tasks that could be processed further.
On the other hand, tasks with a higher reassignment count might signify a gap in the knowledge base, or the categorization is not optimized.
Look at tasks across categories, subcategories, or business services depending on your processes to find areas of improvement.
From our previous example, we can get a closer look at the Closed and Resolved incidents for the last 6 months for Contact type Phone or Email by Category and Reassignment count. It shows the top 3 categories for which most incidents could be automated.
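The logic behind that heatmap can be sketched as a cross-tabulation. The sample incidents and the "reassignment count < 2 means easy" cutoff mirror the example above; the records themselves are illustrative.

```python
from collections import Counter

# Cross-tabulate incidents by (category, reassignment count), as the
# heatmap does, then rank categories by their number of 'easy' incidents
# (reassignment count < 2) -- the best automation candidates.
incidents = [
    {"category": "Software", "reassignment_count": 0},
    {"category": "Software", "reassignment_count": 1},
    {"category": "Software", "reassignment_count": 4},
    {"category": "Hardware", "reassignment_count": 0},
    {"category": "Network", "reassignment_count": 3},
]

heatmap = Counter(
    (i["category"], i["reassignment_count"]) for i in incidents
)
easy = Counter(
    i["category"] for i in incidents if i["reassignment_count"] < 2
)
top_automation_candidates = easy.most_common(3)
```

High-reassignment cells of `heatmap` point the other way: toward knowledge-base gaps or categorization issues rather than automation.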
Phase 2 – Define the datasets for action
The next step in the approach is to use the data gathered to create datasets, each tied to a specific outcome, from which an initiative and an action can be driven.
Some of the insights might surface without requiring further analysis. For example, take the following chart:
From the chart, we can identify an opportunity to automate the requests to restart a virtual server as it is currently generating a lot of manual work.
Initiative name: Automate requests to restart a virtual server.
Expected outcome: reduce the manual effort and time to resolution for the ‘restart a virtual server’ item.
Dataset: Catalog tasks (sc_task) for the last 3 months with Requested items > Item = Restart a virtual server
Action: plan and execute
On the other hand, the 21k records related to a general request cannot be automated without knowing what was requested or the intent of the request.
In that case, there is an opportunity to find new services and automate parts of this generic request process. To address it, we can use ML to mine the text that end-users enter when creating the request and find patterns.
Initiative name: Find new services and automate the generic request
Expected outcome: reduce the number of requests from the ‘Submit a general request’ item on the portal. Eventually, remove it as an option on the portal.
Dataset: Catalog tasks (sc_task) for the last 3 months with Requested items > Item = Submit a general request
Next step: unstructured analysis (ML)
Field to analyze: short description
Expanding on one of our previous examples, once we have identified the top 3 categories of automatable incidents, we need to explore the nature of those incidents to understand what services should be provided. This is where ML comes in, analyzing the short description. We are looking for specific services to offer. For example, for the Software category, we might expect a troubleshooting guide to be helpful, but analyzing the data might surface other user needs related to software.
Initiative name: Automate incidents from the top 3 categories with Virtual Agent
Expected outcome: self-resolve incidents through Virtual Agent and reduce incident count for these categories
Dataset: Closed and Resolved incidents for the last 6 months for Contact type Phone or Email and Category is Software, Hardware or Network and Reassignment count < 2
Next step: unstructured analysis (ML)
Field to analyze: short description
One note regarding processing text with ML: not all languages are supported yet (as of San Diego), so filtering is advised. Ways to achieve this are using the language field on the user table or, more conservatively, excluding entire user groups based on region.
In addition to these initiatives, two data sources enabled by default for Topic Recommendations are NLU fallback utterances and Live agent chat transcripts. If your organization uses Live agent or NLU, it is recommended to analyze these sources as an additional initiative.
For more information on mining logs, refer to Process Automation.
Phase 3 – Process text data with ML to gain insights
The last step of that approach is to use ML to explore the text inputs from the datasets to find insights.
One of the main benefits of doing this step in the Now Platform is that the data is already available and doesn’t need to be extracted, transformed, and loaded into a different system. The machine learning capabilities also include the training process, meaning the input data is prepared for the machine learning models automatically, again saving overhead. During that step, the data is cleaned (problematic characters removed), duplicate records are identified and indexed, text input is tokenized, stop words are removed, and stemming (identifying root words) is performed.
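Those preparation steps can be illustrated with a minimal pipeline. This is a toy sketch, not the platform's actual implementation: the stop-word list is tiny and the suffix-stripping "stemmer" is deliberately naive, just to show the shape of clean → de-duplicate → tokenize → remove stop words → stem.

```python
import re

STOP_WORDS = {"the", "a", "an", "is", "to", "my", "i", "on", "in", "for"}
SUFFIXES = ("ing", "ed", "es", "s")  # naive stemming, for illustration only

def preprocess(short_descriptions):
    """Toy version of the training pipeline: clean, de-duplicate,
    tokenize, drop stop words, and stem each short description."""
    seen, documents = set(), []
    for text in short_descriptions:
        # Remove problematic characters and normalize whitespace/case.
        cleaned = " ".join(re.sub(r"[^a-z0-9 ]", " ", text.lower()).split())
        if cleaned in seen:  # index duplicate records only once
            continue
        seen.add(cleaned)
        tokens = [t for t in cleaned.split() if t not in STOP_WORDS]
        stemmed = []
        for token in tokens:
            for suffix in SUFFIXES:
                if token.endswith(suffix) and len(token) > len(suffix) + 2:
                    token = token[: -len(suffix)]
                    break
            stemmed.append(token)
        documents.append(stemmed)
    return documents
```

For example, "Printer is failing!" and "printer is failing" collapse into a single document of root tokens, ready to be vectorized for clustering.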
Three platform features can be leveraged for this: Predictive Intelligence clustering solution, Knowledge demand insights, and Automation Discovery (using Intent Discovery, like Topic Recommendations for Virtual Agent).
As the most insightful solution, I recommend Automation Discovery, especially since it was integrated with Topic Recommendations in San Diego. It automatically finds opportunities for automation and provides metrics such as the number of possible deflections and estimated time savings. The application uses a context (called the Taxonomy) to deliver these metrics. This strength is also a weakness, as it is currently not possible to create custom taxonomies, and only the IT taxonomy is available as of March 2022.
Automation Discovery provides the most insights for IT use cases
A new report can be run in Automation Discovery using the table, filter, and field to analyze for each dataset found previously. For IT datasets, the taxonomy is IT; for all other cases, the taxonomy is left empty.
Automation Discovery only targets automation opportunities; for organizations looking to have knowledge content created, the Knowledge demand insights application is better suited. It covers the incident, case, and HR case tables. This application can also create knowledge gap feedback tasks and assign them to content creators.
The Predictive Intelligence clustering solutions provide more advanced tools for savvy data scientists looking to run ad-hoc clustering models. For example, it is possible to choose among the K-means (default), DBSCAN, and HDBSCAN algorithms, use the Levenshtein Distance method, or enable the Connect Component feature.
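For readers unfamiliar with the Levenshtein Distance option mentioned above, it measures how many single-character edits (insertions, deletions, substitutions) separate two strings, which makes it useful for grouping near-duplicate short descriptions. A standard dynamic-programming sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: the minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    previous = list(range(len(b) + 1))
    for i, char_a in enumerate(a, start=1):
        current = [i]
        for j, char_b in enumerate(b, start=1):
            cost = 0 if char_a == char_b else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]
```

A distance of 1 or 2 between two short descriptions (e.g. a typo apart) suggests they describe the same issue and belong in the same cluster.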
Conclusion
Leveraging the ML capabilities of the Now Platform with your available data can reveal opportunities for improvement.
The impact is more significant when data is first reviewed in a structured analysis using reporting tools.
The approach presented here is secure, quick, and reproducible for any customer. This is the foundation of a continual improvement process based on actual data.
References
[YouTube] Virtual Agent Academy: Optimize Virtual Agent topic creation with analytics
[YouTube] Virtual Agent Academy: Mine your data with AI for compelling Virtual Agent topics
[Community] Use your goldmine of data to jumpstart your Virtual Agent experience
[Docs] Quick start for Topic Recommendations
[Docs] Troubleshoot issues with Topic Recommendations
[Docs] Intent Discovery
[Docs] Automation Discovery
[Docs] Knowledge demand insights
[Docs] Create and train a clustering solution
[Docs] Configure HDBSCAN for a clustering solution
[Blog] What is data science?
Keywords: data science, knowledge discovery, machine learning, predictive analytics, data mining, Artificial intelligence, AI, Automation Discovery, Intent Discovery, Topic Recommendations, Virtual Agent, Predictive Intelligence, Automation