Article Quality Index (AQI) Best Practices and Lessons Learned?
04-27-2020 01:59 PM
We are looking for best practices and lessons learned (see questions below) from ServiceNow users who have conducted AQI exercises in the last few years. Our Service Desk group is planning to conduct our first Article Quality Index (AQI) exercise. So far, the AQI setup has been straightforward on a test server. We have created an AQI checklist with 10 well-defined questions and descriptions, associated this checklist with a knowledge base, and run through a couple of AQI validations. We will be conducting a small-scale proof of concept so the quality analysts can familiarize themselves with the process before we conduct a full-scale exercise.
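For anyone less familiar with how the checklist rolls up into a score, here is a minimal sketch (plain Python, not ServiceNow script; the question texts and the 70% threshold are placeholders rather than our actual checklist) of the basic idea: each question is answered pass/fail, every question carries an equal weight, and the article's score is the percentage of weight earned.

```python
# Minimal sketch of AQI-style scoring (plain Python, not ServiceNow script).
# Each checklist question is answered pass/fail, every question carries an
# equal weight, and the article's score is the percentage of weight earned.
# The question texts and the 70% threshold are placeholders, not our checklist.

PASS_THRESHOLD = 0.70  # assumed passing score

def aqi_score(answers: dict[str, bool]) -> float:
    """Return the article's AQI score as a fraction between 0.0 and 1.0."""
    weight = 1.0 / len(answers)                      # equal weight per question
    return sum(weight for passed in answers.values() if passed)

answers = {
    "Title clearly describes the issue": True,
    "Symptoms and environment are stated": True,
    "Resolution steps are numbered and complete": False,
    # ...the remaining checklist questions...
}

score = aqi_score(answers)
print(f"{score:.0%} - {'PASS' if score >= PASS_THRESHOLD else 'FAIL'}")
```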
***Scope? For example:
-How many articles to select out of 3,000? X%?
-Which articles to choose? Complex articles vs. simple articles? Or a mix of complexity? (A rough sampling sketch follows this list.)
-Who should be the AQI analysts? Service Desk analysts vs. QA analysts? Experience level? How many AQI analysts?
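On the sampling question, here is a rough sketch of one option: pick a fixed percentage from each complexity bucket so the sample stays mixed. The article records and the "complexity" field below are hypothetical; in practice the data would come from kb_knowledge or a report export.

```python
# Rough sketch of picking X% of articles for review while keeping a mix of
# complexity. The article records and "complexity" field are hypothetical;
# in practice this data would come from kb_knowledge or a report export.
import random

def sample_articles(articles, pct, seed=42):
    """Draw roughly pct of the articles from each complexity bucket."""
    rng = random.Random(seed)
    buckets = {}
    for art in articles:
        buckets.setdefault(art["complexity"], []).append(art)
    sample = []
    for group in buckets.values():
        k = max(1, round(len(group) * pct))
        sample.extend(rng.sample(group, k))
    return sample

# ~3,000 made-up articles, half simple and half complex
articles = [{"number": f"KB{i:07d}", "complexity": c}
            for i, c in enumerate(["simple", "complex"] * 1500)]
picked = sample_articles(articles, pct=0.05)   # e.g. review 5%
print(len(picked), "articles selected for review")
```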
***Objectives? For example:
-Improve overall article quality?
-Feedback to article authors and editors to improve individual articles?
-Establish a metric or grading system based on the outcome? (See the aggregation sketch after this list.)
-How to make the metrics meaningful for quality improvement?
-Who will review and make sense of the results?
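As one way to make the results meaningful, evaluations could be rolled up into simple averages that are tracked over time and fed back to authors. The sketch below assumes each evaluation can be exported as rows of (article, author, score); every value in it is made up purely for illustration.

```python
# Sketch of turning raw AQI evaluations into a simple, trendable metric.
# Assumes evaluations are exported as rows of (article, author, score);
# every value below is made up purely for illustration.
from collections import defaultdict
from statistics import mean

evaluations = [
    {"article": "KB0010001", "author": "a.jones", "score": 0.90},
    {"article": "KB0010002", "author": "a.jones", "score": 0.60},
    {"article": "KB0010003", "author": "b.smith", "score": 0.80},
]

by_author = defaultdict(list)
for ev in evaluations:
    by_author[ev["author"]].append(ev["score"])

for author, scores in sorted(by_author.items()):
    print(f"{author}: average {mean(scores):.0%} across {len(scores)} article(s)")
```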
10-01-2020 08:27 AM
We have a large knowledge base that consists of over 33K articles, some new to ServiceNow and some imported from a previous system. We are new to the AQI journey but have decided to forge ahead and iterate as necessary. Our progress so far:
Pilot phase:
- Modified the out-of-the-box questions slightly to determine if the author/editor has met our baseline criteria for article content and formatting standards.
- Kept the weights as is (10% each), although we realize that some questions could carry more weight than others going forward.
- Passing remains at 70%
- Pilot groups consist of article-owning groups that currently have high engagement with the KM team, so we could ensure engagement.
- Articles were selected by the owning groups from their own set of articles.
- Only 10 articles were to be evaluated by each team for a total of approx
- The owning groups performed their own evaluations. We selected the owning groups to review their own articles because the service desk and end users already have mechanisms to evaluate article quality (Star Rating, Thumbs Up/Down, and Flags with comments), and the SD Quality teams could not lend any personnel to this effort at this juncture.
- The first round of evaluations was intended to:
- determine if the questions were valid and would suffice to comprehensively evaluate the article quality
- examine how the teams selected the articles to review
- determine if the review process was swift and efficient
- gather feedback on any other aspects of the review process
The project lead is meeting with the teams to review the outcome of the Evaluation Phase of the pilot. When we have more information, I will post an update here.
The goals of instituting AQI were to:
- Institute a systematic and repeatable way to consistently quantify and determine overall article quality.
- Based on AQI results, determine what type of training or improvements are required (e.g., consistently low scores for specific questions); a small sketch of this analysis follows this list.
- Determine if quality assessment is a viable KPI via the AQI feature
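To illustrate the training goal, the sketch below flags checklist questions with consistently low pass rates across evaluations, which is where we would expect to target training or template changes. The evaluation rows, question names, and 50% cut-off are illustrative only.

```python
# Sketch: flag checklist questions that fail most often across evaluations,
# as a pointer to where author training or template changes are needed.
# The evaluation rows, question names, and 50% cut-off are illustrative only.
from collections import defaultdict

evaluations = [
    {"question": "Resolution steps are complete", "passed": False},
    {"question": "Resolution steps are complete", "passed": False},
    {"question": "Title clearly describes the issue", "passed": True},
    {"question": "Symptoms and environment are stated", "passed": False},
    {"question": "Symptoms and environment are stated", "passed": True},
]

totals = defaultdict(lambda: [0, 0])   # question -> [passes, attempts]
for ev in evaluations:
    totals[ev["question"]][0] += int(ev["passed"])
    totals[ev["question"]][1] += 1

for question, (passes, attempts) in totals.items():
    rate = passes / attempts
    if rate < 0.50:   # assumed threshold for "consistently low"
        print(f"Low pass rate ({rate:.0%}): {question}")
```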
Going forward:
We are still trying to determine whether the reviewer will perform an AQI evaluation on each article prior to publishing, whether we will follow the KCS method and Pareto principle and only perform AQI on the top 20% most-used articles, whether we will evaluate during required periodic reviews, or some combination of these (a rough sketch of the Pareto selection follows below).
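For the Pareto option, the selection step might look something like this sketch: rank articles by usage and take the top 20% as review candidates. The view counts here are invented; in ServiceNow the usage numbers would come from article view/usage reporting rather than an in-memory list.

```python
# Sketch of the Pareto-style selection: rank articles by usage and take the
# top 20% as the AQI review candidates. View counts here are invented; in
# ServiceNow the usage numbers would come from article view/usage reporting.
articles = [
    {"number": "KB0010001", "views": 1250},
    {"number": "KB0010002", "views": 40},
    {"number": "KB0010003", "views": 980},
    {"number": "KB0010004", "views": 15},
    {"number": "KB0010005", "views": 610},
]

ranked = sorted(articles, key=lambda a: a["views"], reverse=True)
top_20_pct = ranked[: max(1, len(ranked) // 5)]   # top 20% most-used

for art in top_20_pct:
    print(f"Review candidate: {art['number']} ({art['views']} views)")
```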
How are you displaying the question descriptions to those performing the AQI evaluations?
Hope this helps and/or sparks more conversation.