
06-23-2023 12:55 PM - edited 09-06-2023 05:25 AM
Accelerate case resolution with AI
Task Intelligence provides a set of AI capabilities to automate agent workflows along the journey of a case.
It uses a solution-first approach for defined use cases.
One of the Task Intelligence out-of-the-box models is “Predict case field choices to reduce handle time”. It guides you in creating a model that predicts field values for new cases, reducing the time it takes to resolve them.
Note: in this article, we use the example of a Customer Service Management workflow, but these principles apply equally well to ITSM.
Routing a case to the right group or agent is simple when a case’s metadata is accurate and organized. For example, knowing the product or service impacted, the priority, the language, the category and the subcategory, you can apply rules in your ServiceNow instance to derive which assignment group and which agent is best qualified to resolve the case quickly.
But our users communicate in natural language, usually by filling in a free-text description of their issue. This unstructured data requires human intervention (Level 1 support) to read the information and classify the case accordingly. This process is time-consuming, expensive in human capital, prone to errors, and doesn’t leverage the wealth of knowledge in your previous cases.
Task Intelligence solves this challenge of going from unstructured to structured data by leveraging your past data.
In this guide, we’ll provide additional details on the process of creating the model, covered at a high level in the documentation. We’ll use the example of Streaming Service cases. Our goal is to route the case to the right agent as quickly as possible once the case is created.
Reviewing your cases (aka your dataset)
We start by looking at the data to understand what we are dealing with.
We are looking at the Case table, filtering on resolved cases, and assessing the record count: we have just over 10k records. Good news: we need at least 10k records to train our model.
We can see how records are balanced between the categories by grouping them by category.
The data seems reasonably balanced, except for the Feedback category, which has a low count of records; this category will be skipped.
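If you want to run this balance check outside the Task Intelligence console, a quick background script can produce the same breakdown. This is a minimal sketch assuming the standard CSM case table (sn_customerservice_case); the state values for Closed/Resolved are illustrative and may differ in your instance.

```javascript
// Count closed/resolved cases per category to spot class imbalance.
// Table name is the standard CSM case table; adjust if you use an extension.
var ga = new GlideAggregate('sn_customerservice_case');
ga.addQuery('state', 'IN', '3,6'); // illustrative Closed/Resolved state values
ga.addAggregate('COUNT');
ga.groupBy('category');
ga.query();
while (ga.next()) {
    gs.info(ga.getValue('category') + ': ' + ga.getAggregate('COUNT'));
}
```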
Machine Learning works by learning patterns from historical data. If a category has only a small number of cases, the model won’t be able to learn its pattern and will likely skip the prediction. The same is true when new fields are added or existing fields get new values.
For example, if you add a new category, it will take time for the model to learn which cases belong to it; these cases would initially be skipped until the model is retrained.
For more details, review the Class Imbalance section in Data Quality Analysis – a best practice you should follow before you build your first AI Model.
Model creation
To create the model, we go to Task Intelligence Admin Console: Task Intelligence for CSM > Setup.
We will use the “Predict case field choices to reduce handle time” model.
We start by choosing between Emails and Cases.
Email or Case?
If you create cases from your incoming emails, you can choose either, but you should account for what data is being copied over to the case. If the email record has data that can be used to make better predictions, like the subject line or a list of recipients, it is preferable to use Email.
Here, we choose Case: “Predict case fields when a case is created”.
After saving, we define the parameters of the model.
We give our model a name and select the table: the Customer Service Case table or any extension of it (your Case Types).
Conditions
The Conditions serve two purposes. First, they filter the records (on the selected table) used for training. Second, once deployed, your model will make predictions for cases that match these conditions. The only exception is the State field.
Because cases with a State of Open or In Progress are still being worked on, the information they contain may be inaccurate and could negatively influence training. They should be excluded, so we select the Closed or Resolved cases (based on our processes). This State condition, however, won’t apply when the model makes predictions.
As part of the conditions, a filter can be applied to select only cases that were created or resolved within a time window.
How to pick the right time window
Organizations evolve over time. The kinds of cases you dealt with two years ago are different from those you deal with today. You want to make sure your models learn from up-to-date information. A helpful rule of thumb is to use the last three months of data to train your model.
If your organization deals with seasonality, where the same kinds of cases recur at the same time each year, include the last 13 months of data instead, so the model has seen the previous year’s cycle and can apply that learning when it recurs this year.
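Before committing to a window, it’s worth confirming it still leaves you enough records to train on. A minimal sketch, assuming a resolved_at field and the same illustrative state values as above:

```javascript
// Count closed/resolved cases inside the training window (last 3 months here;
// use 13 months instead if your case volume is seasonal).
var ga = new GlideAggregate('sn_customerservice_case');
ga.addQuery('state', 'IN', '3,6'); // illustrative Closed/Resolved state values
ga.addQuery('resolved_at', '>=', gs.monthsAgoStart(3));
ga.addAggregate('COUNT');
ga.query();
if (ga.next()) {
    var count = parseInt(ga.getAggregate('COUNT'), 10);
    gs.info('Cases in window: ' + count +
            (count >= 10000 ? ' (enough to train)' : ' (below the 10k minimum)'));
}
```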
We define the field we want to predict, our output field: the Category. In this example, we assessed that the category field is sufficient to achieve our business outcomes, but you can add more fields depending on your use case.
How to choose the input fields
Input fields tell the model which fields it should be looking at to predict the output fields.
The input fields you choose should be available and have a value at the time the prediction is made, when the case is created. For example, if you are predicting the Category, the Assignment Group is not a relevant input field: its value won’t be available until after the Category is determined, so it won’t help the model learn any useful pattern.
Also, when selecting input fields, thinking from the human perspective is helpful. Put yourself in the shoes of an agent. If you were trying to predict a category, which fields would you look at to decide which category is relevant? Short description, description, and attachments would have helpful information. On the other hand, priority is probably not helpful in identifying the category, so we don’t choose priority.
To recap, when choosing your input fields, consider what data is available at the time of prediction and think from a human perspective.
Training the model
After the table, the conditions, the output, and the input fields are selected, we can review the number of records. If the dataset contains enough records, the model can be trained.
Training the model can take a few minutes. The more records and fields in your dataset, the longer the training takes.
Assessing the model
After the model is trained, we get insights into how many fields would have been predicted.
We also get a sample of the results, expressed as percentages, comparing the model’s predictions to the values agents chose. Clicking the “View sample results” button opens a list of records to review.
Note that “Different” means the model predicted a value different from the one the case contains (the value the agent chose); the predicted value could be better suited or worse.
So, you might ask, “What is good enough?” Should I be aiming for 20%, 50%, 80%?
I know this will be disappointing to many out there, but the truth is, there is no magic number.
“Good enough” is determined by several variables, such as: What is the cost of misclassification? How many options are in the predicted field? How well are you doing today?
What is the cost of misclassification?
In some cases, say a health & safety issue, accurate classification and fast resolution are critical. In this situation, the cost of misclassification is very high, and accordingly, “good enough” has a very high benchmark.
How many options are in the predicted field?
Picking the correct value out of 5 options is much easier than picking the correct option out of 500,000.
If you’re predicting the priority field, which only has 5 values, the model will likely do a great job of that.
On the other hand, if you’re asking the model to pick the right product from a list of 500,000 products, you can expect it to be less accurate.
How well are you doing today?
If your agents are categorizing cases accurately 70% of the time, then 70% is an incredible score for a machine learning model since it’s learning from the behavior of your agents. If you can get to 70% accuracy, you can automate that task for your agents without impacting other KPIs, which would be a great outcome.
To be able to compare, a benchmarking exercise is helpful.
Benchmarking your accuracy
Because your data lives on the platform, you can easily get insights using simple methods. For example, if you want to know how accurate the first assignment is, you can go look at the value of the reassignment count. Calculate the rate of cases that have a reassignment count of 1. You can consider that your current “correct prediction” rate.
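As a sketch, the same aggregate approach can compute that rate directly. The reassignment_count field exists on the Task table, so it is available on cases; the table name and state values below are the same illustrative assumptions as earlier.

```javascript
// Share of closed/resolved cases that were assigned once and never reassigned.
function countCases(extraField, extraValue) {
    var ga = new GlideAggregate('sn_customerservice_case');
    ga.addQuery('state', 'IN', '3,6'); // illustrative Closed/Resolved values
    if (extraField)
        ga.addQuery(extraField, extraValue);
    ga.addAggregate('COUNT');
    ga.query();
    return ga.next() ? parseInt(ga.getAggregate('COUNT'), 10) : 0;
}

var total = countCases();
var firstTimeRight = countCases('reassignment_count', 1);
gs.info('First-assignment "correct" rate: ' +
        (100 * firstTimeRight / total).toFixed(1) + '%');
```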
A while ago, I wrote an article, Where to target automation? – a data-driven approach with Machine Learning to generate insights and ...; I recommend reviewing “Phase 1 – Review structured data against the strategy”.
For other fields, like category, it’s not as straightforward. Even if the category was filled in incorrectly, it was not always corrected before the case was closed. In that scenario, you need some human review.
If you have the resources, ask an SME to review the last 500 resolved cases to assess the data quality. Then, you can use that as your benchmark.
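A simple way to pull that review sample is to list the 500 most recently resolved cases along with the fields the SME needs to judge. A sketch, under the same table and state assumptions as above:

```javascript
// Pull the 500 most recently resolved cases for a manual data-quality review.
var gr = new GlideRecord('sn_customerservice_case');
gr.addQuery('state', 'IN', '3,6'); // illustrative Closed/Resolved values
gr.orderByDesc('resolved_at');
gr.setLimit(500);
gr.query();
while (gr.next()) {
    // Number, current category, and short description give an SME enough
    // context to judge whether the category was set correctly.
    gs.info(gr.getValue('number') + ' | ' + gr.getValue('category') + ' | ' +
            gr.getValue('short_description'));
}
```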
Remember, AI learns from your data (good or bad), so understanding the data quality going into the model can help you understand what level of accuracy you should expect.
For a deep dive into this topic, review the article Testing a “Case Field Prediction” model in Task Intelligence; there is a lot more information there!
Preferences and Deployment
When we are comfortable with the model’s results, we are ready to deploy. We have a few options for how predictions are served to the user: we can choose whether the predictions are shown on the case or run in the background only.
If the predictions are shown on the case, we can either show them as a recommendation, or we can auto-fill the value.
Running the model in the background can help further assess the model.
If the model is created in a sub-prod environment, it can be exported from the menu.
On this note, if you work in a sub-prod environment, make sure the data reflects your prod instance: it is up to date and all the fields are present. Remember that the data is used to train the model, so the quality of that input matters.
Applying routing rules
If we remember the flow from earlier, it’s easy to define the rules once the data is structured. We can create rules that dispatch the cases to the right support group based on the category.
For this, we use Advanced Work Assignment, navigating to Advanced Work Assignment > Settings > Queues. We create an AWA Queue and apply the filter and assignment.
As part of the filter, we need to include Prediction Status is not In Progress; this ensures we wait for the prediction from Task Intelligence before triggering the assignment.
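Conceptually, the queue condition combines the predicted field with the prediction-status check. Expressed as an encoded query, it would look something like the sketch below; both the category value and the prediction-status field name are hypothetical, so use the field Task Intelligence actually populates in your instance.

```javascript
// Hypothetical encoded query for an AWA queue: route cases predicted as a
// given category, but only once Task Intelligence has finished predicting.
var queueCondition = 'category=streaming' +             // hypothetical category value
                     '^prediction_status!=in_progress'; // hypothetical field name
```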
To learn more about Advanced Work Assignment, have a look at the Quick Start Guide.
Your cases are now routed with AI for faster, more consistent, and more scalable results!
References
- Documentation
- Other related articles:
  - Data Quality Analysis – a best practice you should follow before you build your first AI Model
  - Testing a “Case Field Prediction” model in Task Intelligence
- FAQ