
10-26-2022 12:17 AM - edited 11-29-2022 05:41 AM
Background
Customers often ask us how to build a robust self-service incident creation capability that will reduce call volume to the IT Help Desk and speed incident resolution.
This requires a fine balance, not easy to achieve, between an intuitive interface for your end-users and sufficient information for fast triage by your agents.
When end-users reach ServiceNow, they may already be frustrated by the interruption in IT service and are unlikely to appreciate completing a lengthy form to get their issue resolved.
On the other hand, if an agent doesn't get sufficient information to triage the issue, resolution is delayed.
To address this, we are sharing 5 leading practices that our most successful customers use to drive self service adoption without sacrificing speedy triage and resolution of incidents.
Key Recommendations
- Use the UI design guidelines for forms to capture actionable information
- Decide on the use of single or multiple record producers
- Organize the Service Catalog by using categories and taxonomy
- Enhance the user experience through the use of meta tagging & AI Search
- Use routing rules to optimize efficiency gains
Raising an Incident through Self-Service
When users raise an incident through service portal or employee center, it is important that the information gathered at that point is actionable by the agent the incident is routed to.
The out-of-the-box self-service incident form is shown below.
In practice, this form has two design issues which impact the ability to provide actionable information to the agent:
- Overstating urgency: Users can set urgency, but the choices (High, Medium, Low) are likely to get gamed. Why wouldn't you, as a user, choose High?
- A lack of structured data: Having complete free form text entry means that the information provided by the user is no better than an email – which means it will range from actionable information all the way to “broken, fix it”.
Collecting actionable information in the form should be the objective, otherwise the incident data submitted forces the agent to contact the user immediately to get sufficient information to continue the incident process. Following the recommendations described here should reduce workload on your service desk and improve process efficiency.
How can actionable information be achieved?
The record producer should gather information from the user in a structured way that supports immediately actionable activities by the agent. Free text is useful, but it should enrich rather than replace the structured information gathered.
Structured data will lead to a more exact description of the issue allowing the agent to appropriately prioritize. Migration to Virtual Agent is also more straightforward with structured questions and answers.
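As an illustration of why structured answers are easier to act on than free text, the sketch below derives an incident priority directly from two structured record producer answers. The field names and priority matrix are assumptions invented for the example, not out-of-the-box ServiceNow behavior.

```python
# Hypothetical sketch: deriving priority from structured record producer
# answers instead of parsing free text. Field names and the matrix are
# illustrative assumptions, not ServiceNow defaults.

# Structured answers captured by the record producer
submission = {
    "affected_service": "Email",
    "service_tier": "business_critical",   # e.g. from a CMDB lookup (assumed)
    "impact": "multiple_users",            # picked from fixed choices
    "description": "Outlook will not open since this morning",
}

# Simple priority matrix keyed on the structured fields
PRIORITY_MATRIX = {
    ("business_critical", "multiple_users"): "P1",
    ("business_critical", "single_user"): "P2",
    ("standard", "multiple_users"): "P2",
    ("standard", "single_user"): "P3",
}

def derive_priority(record: dict) -> str:
    """Look up priority from structured fields; default to P3."""
    key = (record["service_tier"], record["impact"])
    return PRIORITY_MATRIX.get(key, "P3")

print(derive_priority(submission))  # structured data yields P1 directly
```

With free text alone, an agent would have to read the description and infer both the affected service and the impact before the same prioritization could happen.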
Key areas of consideration
There are three key areas that should be considered as part of the overall user experience:
- Form design including single or multiple record producers
- Use of Categories and Taxonomy
- Search and tagging
Use the UI design guidelines for forms to capture actionable information
The form design should structure the key pieces of information that are required to be captured.
The following are examples of what works well for our customers today:
- The golden rule is to keep it simple for end users and hide complexity from them; use the platform's capabilities to do this.
- The wording of each of the questions should be distinct, unambiguous, and written in the customer's language.
- Where possible the questions should not overlap and should not allow contradiction with other entries.
- Avoid deep nesting of questions; where possible, keep to two levels or fewer.
- Confirm data with the user where possible rather than presenting a selection list. For example, the user has already authenticated in the system, so their name should be pre-filled rather than entered manually. Similarly, the user's office location could be taken from their profile, as with delivery addresses.
- Use the data ServiceNow already has to offer more refined choices, e.g. if the user is reporting an IT issue, present their registered devices.
- Is the incident being raised for the user or on behalf of another user?
- Do any other users need to be informed of progress?
- Is the ability to include attachments required?
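The "confirm rather than ask" and "use the data ServiceNow has" points above can be sketched as follows. The profile and device lookup tables are hypothetical stand-ins for platform data (user profile, CMDB), not actual ServiceNow API calls.

```python
# Illustrative sketch of pre-filling a record producer from known data.
# The lookup tables stand in for platform data; they are assumptions
# for the example, not a ServiceNow API.

USER_PROFILES = {
    "alice": {"name": "Alice Smith", "location": "London Office"},
}
REGISTERED_DEVICES = {
    "alice": ["Laptop LN-1042", "iPhone MB-0077"],
}

def prefill_form(user_id: str) -> dict:
    """Build the initial form state: confirm known data, offer refined choices."""
    profile = USER_PROFILES[user_id]
    return {
        "name": profile["name"],          # pre-filled; user only confirms
        "location": profile["location"],  # pre-filled from profile
        "device_choices": REGISTERED_DEVICES.get(user_id, []),  # refined list
        "on_behalf_of": None,             # optional: raising for another user
    }

form = prefill_form("alice")
```

The user is asked only to confirm or correct what the platform already knows, and the device list is narrowed to their registered equipment rather than the whole estate.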
Example of a single record producer form that has been successful with customers:
An example of a dedicated record producer:
Decide on the use of single or multiple record producers
This is a key decision as it determines:
- Navigation by the users
- Ease of maintenance
The recommendation would be to develop use case specific record producers (i.e. multiple record producers) and use the taxonomy and search to ease navigation. Each record producer should be a simple flow with direct questions and limited choices. Where any user confusion could arise, the recommendation is to use labelling to ensure the user is on the correct record producer, even providing links to the correct one if needed.
E.g. "This form should be used for 'xyz' only. To report general issues please go to https://abc"
Examples of specific record producers:
- Access Request
- Create a knowledge base
- Help with Device Question
- Help with Service Question
- Loaner Laptop Request
- Password Assistance
- Report Access Issues
- Report Device Issues
- Report System Issue
- Return Device Request
Organize the Service Catalog by using categories and taxonomy
Within each Service Catalog, a set of categories and sub-categories is added to provide a taxonomy. This taxonomy is independent of the Employee Center taxonomy that will be viewed by end-users/customers. The key purpose of the category structure is to allow internal catalog maintenance teams to browse the catalog and discover what services are available to them. In addition, the categories provide a logical grouping of services in the catalog rather than one long list.
When determining the categorization, the first level is based on grouping services, the next level categorizes by area of service (Services, Hardware, etc.), and the final level is based on either the module within a service or a sub-service area (access, security, data, etc.).
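The three-level structure just described can be sketched as nested data; the groupings and entries below are hypothetical examples, not a prescribed catalog structure.

```python
# Illustrative three-level catalog taxonomy: service grouping ->
# area of service -> module / sub-service area. The entries are
# hypothetical examples only.

catalog_taxonomy = {
    "IT Services": {                                   # level 1: service grouping
        "Services": ["Access", "Security", "Data"],    # level 3: sub-service areas
        "Hardware": ["Laptops", "Printers", "Mobile"],
    },
    "HR Services": {
        "Services": ["Payroll", "Onboarding"],
    },
}

def list_sub_areas(grouping: str, area: str) -> list:
    """Return the third-level entries under a grouping and area of service."""
    return catalog_taxonomy.get(grouping, {}).get(area, [])
```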
For further information see Service Catalog and Request Management - Process Workshop
Where Employee Center is deployed, the unified taxonomy can be used. The unified content taxonomy is a collection of hierarchical topics that brings together different content types (requests, articles, quick links, employee communications, etc.) across departments into a single, employee-centric taxonomy.
We recommend the unified taxonomy, as users get information curated for their benefit and presented exactly how they need to see it: not by an abstract category but by service topic or department.
For further information see Employee Center – Process Workshop Presentation
Taking a multiple record producer approach allows contextually relevant record producers to be related to the correct node within the taxonomy. This improves the user experience as users are typically navigating based on “jobs to be done”.
Enhance the user experience through the use of meta tagging & AI Search
Where deployed with traditional (Zing) search, meta tagging can significantly improve search performance by allowing users to search using terms they would commonly use, thereby improving the user experience.
For example, you may have a record producer for reporting a software issue. This could be tagged with common pieces of software used within your company. Searching for one of those pieces of software would list the record producer rather than the user having to navigate to “report a software issue”. Therefore it also plays a role in removing the need for the user to always navigate the taxonomy, and therefore understand its structure, to perform actions.
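The tagging behavior described above can be sketched as a simple match of a search term against an item's meta tags; the catalog items, titles, and tag sets below are hypothetical examples, not how ServiceNow search is implemented internally.

```python
# Illustrative sketch of meta-tag matching: a search term that appears
# in an item's tags surfaces that item even though the term is not in
# its title. Items and tags are hypothetical examples.

catalog_items = [
    {"title": "Report a software issue",
     "meta_tags": {"outlook", "excel", "teams", "sap"}},
    {"title": "Report a device issue",
     "meta_tags": {"laptop", "printer", "monitor"}},
]

def search(term: str) -> list:
    """Return titles whose title or meta tags contain the search term."""
    term = term.lower()
    return [item["title"] for item in catalog_items
            if term in item["title"].lower() or term in item["meta_tags"]]

print(search("outlook"))  # finds "Report a software issue" via its tag
```

Searching for "outlook" surfaces the software record producer directly, so the user never has to know where "report a software issue" sits in the taxonomy.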
When you search for a catalog item by a keyword in Service Catalog, the search results are displayed by considering a few fields of the catalog table. If the keyword does not have exact matches, its closest matches are displayed as Did you mean suggestions. This search functionality is also applicable in Service Portal.
AI Search uses machine learning–based relevancy to collect and learn from user interactions, which means that over time manual annotation such as meta tagging becomes less necessary. Initially, though, whether at first implementation or during a transition to AI Search, meta tagging should be continued until the search confidence level is acceptable. AI Search doesn't index the sc_cat_item.meta field by default, though this can be configured. Further details can be found at Content Creation for Relevancy in AI Search.
Implementation tip: continue to use the meta field to annotate Knowledge articles with pertinent keywords that may not appear in the title or content.
Use routing rules to optimize efficiency gains
Typically the Service Desk will receive all new incidents, as its agents are available and trained to perform incident management triage. The Desk can route incidents to other assignment groups once initial assessment has been performed and it is clear that a first-time fix is not possible by the Desk.
Where individual record producers are created (such as Report a Mobile Device Issue) those tickets should be assigned directly to the resolving teams rather than the Service desk. A common example of this would be for a computer or printer issue that would need to be assigned to Desk-side support rather than the service desk.
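A routing rule of this kind can be sketched as a simple mapping from record producer to assignment group, with the Service Desk as the default; the producer and group names below are hypothetical examples, not ServiceNow configuration.

```python
# Illustrative routing sketch: incidents from dedicated record producers
# go straight to a resolving team; everything else defaults to the
# Service Desk. Producer and group names are hypothetical examples.

ROUTING_RULES = {
    "Report Device Issues": "Desk-side Support",
    "Password Assistance": "Identity Team",
}

def route_incident(record_producer: str) -> str:
    """Assign directly where a rule exists, else default to the Service Desk."""
    return ROUTING_RULES.get(record_producer, "Service Desk")
```

In ServiceNow itself this would be configured through assignment rules or the record producer's script rather than application code; the sketch only shows the decision logic.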
There are pros and cons to this approach: direct assignment speeds resolution of well-defined issues, but it bypasses the Service Desk's opportunity for a first-time fix and reduces its visibility of overall incident trends.
Implementation Approach
There is no single deployment approach, though ServiceNow leading practices are detailed in Now Create (www.servicenow.com/nowcreate) and Service Catalog and Request Management – Process Workshop.
Step 1 : Data Analysis
Use existing system data to analyze what users are reporting through self-service or captured by agents.
- What are users currently reporting?
- How is that being categorized?
- What trends are visible in the data?
- Are any self-service issues being reported regularly by email?
- Are any issues being reported often and require little information to be captured by the service desk?
Focus on identifying the most common issues that are being resolved by level 1 agents. These are the most likely candidates for a self-service record producer.
Step 2 : Requirements workshop
The purpose of the workshop is to:
- Validate the findings of the analysis with the key stakeholders such as the Incident Process Owner and Service Desk Manager.
- Determine the questions that the end users should be presented with
- Determine the information related to those questions
- Determine the overall flow of the questions
We recommend that each record producer be a simple flow with direct questions and limited choices. Making it too complex will lead users to take the shortest route through rather than the most appropriate one. Complex choices and completion routes can also make the record producer difficult to maintain.
Test this against the common use cases identified in the original analysis:
- What level of accuracy was achieved?
- Do the questions need tuning to improve this?
Set a meaningful objective for accuracy, as this won't be 100%. Over time you can use this as a lagging indicator to look for further improvement opportunities.
Step 3: Deploy and communicate
Once the form has been developed and tested, it's important to deploy the new form and gather feedback from users.
Use this as an opportunity to explain how the form will allow a more efficient and accurate service delivery.
Ensure there is a feedback loop for users. This will help adoption and also provide valuable insight for any further improvements. Feedback can be implemented via a Catalog Item or received by comments on knowledge articles.
A phased approach is recommended in terms of delivering more complex capability. Focus delivery on an efficient and effective submission capability. Enhancements and refinements to questions or routing can be implemented once learnings have been gathered.
For further guidance on Service Catalog see Design a world-class service catalog