After a break from blogging here on the ServiceNow community I wanted to post about some really cool work we've done recently.
As part of our internal systems revamp we've been taking a fresh look at Knowledge Management, specifically how to use knowledge as a strategic tool to resolve customer issues faster, better and with repeatable quality.
We've made a number of improvements to our Knowledge Management process but I wanted to talk about reporting in this blog post.
Starting customisation efforts on a ServiceNow instance is always an investment in time and money. No matter how easy we make the process it still requires design, development effort, testing and maintenance. We were determined when starting this overhaul that we'd be able to demonstrate the benefits and change that we delivered into the business.
Knowledge Management metrics
So, from the beginning we looked at the metrics we would collect and use to prove progress.
Our initial thoughts went something like this:
- Can I measure how many articles are being created?
- Can I measure how many articles are being read?
- Can I measure how many incidents are being resolved using Knowledge?
These are probably quite common starting points for metrics, and they aren't hard at all to capture using our standard reporting capability.
But there is a problem: all of the above are vanity metrics, and they are not at all useful for demonstrating success when trying to change the behaviour of our users.
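For illustration, here's a minimal server-side sketch of how one of these gross counts might be captured (the 7-day window is an assumption, and in practice we built these charts with the reporting capability rather than script):

```javascript
// Count knowledge articles created in the last week.
// Run from a background script or a scheduled job.
var ga = new GlideAggregate('kb_knowledge');
ga.addQuery('sys_created_on', '>', gs.daysAgo(7));
ga.addAggregate('COUNT');
ga.query();
if (ga.next()) {
    gs.info('Articles created this week: ' + ga.getAggregate('COUNT'));
}
```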
What are vanity metrics and why are they bad?
Vanity metrics (also known as "gross metrics") track volume over time. Using the examples above, I might answer those questions with a chart like this...
I mocked this up in Excel, but it would be possible to do something similar in the platform by configuring Report Generators and Summary Sets.
What do we see in this chart? First of all, we are biologically tuned to think "happy thoughts" when we see a line chart that trends towards the upper right-hand corner.
In this chart we can see that over time the number of articles read has risen from 15 in the first data sample (let's pretend these are weekly intervals) up to 25, and that the number of incidents linked spikes happily in data sample 7.
You would be forgiven for presenting this chart as evidence of a successful overhaul of Knowledge Management. People are creating more, reading more and linking more than before you started.
This is a successful deployment, right? Actually, maybe not.
Vanity metrics are susceptible to external changes that unfairly influence the data. What if:
- We hired 10 more engineers into the company. Would they influence the statistics?
- We changed the behaviour of a small group of adopters and they now create, read and link loads and loads of articles. Would they influence the statistics and is that a fair statement of success?
- We have more Incidents, more Knowledge articles and more engineers. It stands to reason that the gross number of articles and reads would increase, but does that mean we really changed user behaviour?
Vanity Metrics that report gross numbers are dangerous because they give the illusion of success based on volume. They're easily manipulated by a small number of users and rely on "compound interest". More articles means more read counts. It doesn't necessarily mean a better Knowledge Management strategy.
Lean Startup and Cohort Analytics
There is a great book titled "The Lean Startup" by Eric Ries that is definitely on my list of recommended reading for you all.
In it Eric talks about analytics and metrics that startups and innovators should track and warns against Vanity Metrics.
After reading "The Lean Startup" we decided to take a different approach to measuring the change in behaviour that we are hoping for after releasing our enhancements.
Based on their actions within the system, each user that interacts with the Incident Management process falls into a "Cohort" according to their additional behaviour with Knowledge Management (you can guess that we did this using events fired from Business Rules on the incident table).
A "Cohort" is a grouping of people that share similar behaviour over time. By tracking user actions we can measure the size of each cohort and influence its members based on their behaviour.
We are interested in reporting on the change of behaviour in the use of Knowledge to resolve customer incidents. Our Cohort Analysis involves five different groupings (a classification sketch follows the list):
- Engagers: Users that are most engaged with using Knowledge to resolve customer issues as well as enhance and improve Knowledge
- Reusers: Users that apply knowledge to resolve customer issues
- Contributors: Users that contribute knowledge
- Searchers: Users that attempt to use Knowledge by searching for solutions, but haven't used knowledge to resolve customer issues
- Non-engagers: Users that work to resolve customer issues but did not interact with Knowledge in any way
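As a rough illustration, the classification boils down to a few ordered checks. This is a sketch (a Script Include method, say) with assumed flag names derived from the events above:

```javascript
// Classify a user for the week based on their recorded activity.
// The 'activity' object and its flags are assumptions for illustration.
function classifyUser(activity) {
    var resolved = activity.resolvedWithKnowledge;   // used an article to resolve an incident
    var contributed = activity.contributedKnowledge; // created or improved an article
    var searched = activity.searchedKnowledge;       // ran at least one knowledge search

    if (resolved && contributed) return 'engager';
    if (resolved) return 'reuser';
    if (contributed) return 'contributor';
    if (searched) return 'searcher';
    return 'non_engager';
}
```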
By assigning every user of the Incident Management process to a cohort, we can start to track user behaviour over time and see how effective our Knowledge Management strategy is.
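If the assignments land in a custom table, the weekly roll-up behind the chart is a simple grouped count. The table and field names here (u_kb_cohort, u_cohort, u_week) are assumptions:

```javascript
// Weekly roll-up: how many users fell into each cohort last week?
var ga = new GlideAggregate('u_kb_cohort');
ga.addQuery('u_week', gs.beginningOfLastWeek());
ga.addAggregate('COUNT');
ga.groupBy('u_cohort');
ga.query();
while (ga.next()) {
    gs.info(ga.getValue('u_cohort') + ': ' + ga.getAggregate('COUNT'));
}
```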
Remember our vanity metric chart and how happy we were to see metrics creeping "up and to the right"?
Let's compare it against a sample Cohort Analysis report.
This is better! Now I can see how our Knowledge strategy has affected user behaviour over time.
From this chart I can make the following conclusions:
IMPORTANT: These are not real figures from our internal deployment - we literally hand-typed these numbers in to generate this graph.
- In week 1, 65% of our users that handled incidents didn't even attempt to search Knowledge
- In week 1, we had a core of 10% of users that were fully engaged in the Knowledge Management/Incident Management process
- After 3 weeks our strategy was having some impact. Over half of Incident Management users at least attempted to search and find a solution.
- We weren't really able to convert any additional "searchers", "reusers" or "contributors" into fully engaged users
The last data sample is really interesting. After a number of weeks - in my example data here - we are seeing a rise in the number of Engagers, which is great.
Comparing the Vanity Metric chart to the Cohort Analysis chart I would learn that the spike in "linked incidents" is probably due to the small increase in "engagers". This is good but look at the rise in "non-engagers".
I can see that those people that are using Knowledge are using it fully, but the majority of users aren't engaging in any way. They aren't even searching.
Based on vanity metrics alone I could call my Knowledge deployment a success. Based on Cohort Analysis, half of my user base don't even search Knowledge. I have a long way to go.
Metrics are people too!
Behind each week's cohort data points is a dataset that our Knowledge Managers can use to identify the members of each cohort.
Based on that example data we have a lot of work to do with our non-engagers. We should be offering training, guidance and maybe even home-grown marketing tricks to get these users working happily with Knowledge.
For the 8% of users that were in the "Searchers" cohort in the final week, I want to know why they searched but did not use that knowledge to resolve incidents. Is our search quality poor? Are we missing content that we need to write for these people?
If I can identify the members of the "engagers" cohort, can I use them as "Knowledge Champions"? These people are using Knowledge in the pursuit of resolving customer issues - can they help get the non-engagers interested?
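Pulling the membership of a cohort out of that dataset is a one-table query. This uses the same assumed table and field names as the roll-up sketch above:

```javascript
// List last week's non-engagers so a Knowledge Manager can follow up.
var gr = new GlideRecord('u_kb_cohort');
gr.addQuery('u_cohort', 'non_engager');
gr.addQuery('u_week', gs.beginningOfLastWeek());
gr.query();
while (gr.next()) {
    gs.info('Needs a nudge: ' + gr.u_user.getDisplayValue());
}
```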
In summary
In the IT Service Management world we collect a lot of data to measure the effectiveness of our processes. Think about the metrics you use to run your IT organisation.
Are they vanity metrics? Does a chart of the volume of Incidents closed really give you an understanding of the behaviour your service desk staff exhibit?
I think that by employing systems like Cohort Analytics in ITSM processes, we could easily identify people that exhibit positive and negative behaviour and target them with training and incentives (or punishment) to get them to act in the best interests of those who really care about how we act.
Our customers.