Ratings or Helpful/Not Helpful? How are these working for you?
01-12-2025 07:28 PM
Happy new year everyone!
This is a topic we come back to frequently: whether to rate articles with a 5-star rating scale or a Helpful/Not Helpful scale.
I was combing through these forums and found that many keep both, which we originally did, but we didn't get good data points from both and found it confusing to explain the difference between top rated and helpful/not helpful.
Our current setup: Helpful/Not Helpful, where Not Helpful triggers KFT creation. Note we also turned comments off, since we had issues moderating comments shown publicly on an HR Knowledge Base.
Most common setup seen here: having both, where Not Helpful or a rating of 3 stars or less triggers a KFT.
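For anyone wiring up a trigger like that themselves, here is a minimal sketch as an after-insert Business Rule on the feedback table. The table and field names (kb_feedback, kb_feedback_task, rating, useful, article) are assumptions that can differ by instance and version, and newer releases can create feedback tasks through configuration without any scripting:

```javascript
// Sketch of an "after insert" Business Rule on kb_feedback (table/field names assumed).
// Creates a Knowledge Feedback Task when the rating is 3 stars or less,
// or when the article was flagged Not Helpful.
(function executeRule(current, previous /* null when async */) {
    var rating = parseInt(current.getValue('rating'), 10);
    var lowStars = !isNaN(rating) && rating <= 3;
    var notHelpful = current.getValue('useful') == 'no';

    if (!lowStars && !notHelpful) {
        return; // only negative signals should open a KFT
    }

    var task = new GlideRecord('kb_feedback_task');         // assumed task table name
    task.initialize();
    task.setValue('feedback', current.getUniqueValue());    // link back to the rating record
    task.setValue('article', current.getValue('article'));  // affected knowledge article
    task.setValue('short_description', 'Review article feedback: ' +
        (notHelpful ? 'marked Not Helpful' : rating + '-star rating'));
    task.insert();
})(current, previous);
```

The same condition could also live in the Business Rule's condition builder, keeping the script to just the task creation.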
Would love to hear from your experiences and best practice.
- If you use both, why, and how do you measure these metrics for reporting?
- If you use only one, why, and is it working well for you?
Thanks in advance and hope to learn from your experiences!

01-13-2025 11:59 AM
We use both Helpful/Not Helpful and the star rating for authenticated customers, and have for years. The difficulty is that these ratings are often given without comments, and reviewing the article alone makes it hard to determine what the actual issue is, so we haven't had much luck making improvements based on ratings that do not include comments.
I would like to require a comment for Not Helpful flags or 1- and 2-star ratings, but there is concern it would discourage users from leaving ratings, even though commenting is completely anonymous.
Of the two rating styles, however, I think the "not helpful" rating is more telling of a poor-quality article than the star rating. The star rating seems more indicative of satisfaction with the product, or the product function the article covers, than of the KBA's quality.
I hope that helps a little!
01-13-2025 12:21 PM
Essentially we use both helpful/not helpful and star ratings as KPIs.
Helpful/not helpful has been a big topic among our KPIs because sometimes an article is rated not helpful for the wrong reasons. I follow up on most of my not-helpful ratings by polling users on why the article was not helpful, and their answers feed into either additional articles or clarifications for users (e.g., breaking down technical jargon, using less complicated wording, covering more topics about a subject, and so on).
My favorite KPI is the star rating, because we can see on a scale how users weigh the article. I keep a list of my 4- and 5-star articles to help guide future changes; you can compare and contrast the 4/5-star articles against everything below and then address the gaps.
All of these KPIs are stored in dashboards for ease of access and understanding. Breaking the individual reports down within the dashboard to tell a story also helps explain why the metrics are used in the first place. But most importantly, if you or your company sees no value in those areas, you can take them out and simplify things even more.
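To illustrate the compare-and-contrast idea above, here is a rough background-script sketch that tallies 4/5-star feedback against everything below, per article. It assumes a kb_feedback table with article and rating fields, so adjust the names to wherever your ratings actually live:

```javascript
// Sketch: count high (4-5 star) vs. low (1-3 star) ratings per article.
// Table and field names are assumptions; run as a background script or feed a report.
var counts = {}; // { articleSysId: { high: n, low: n } }

var agg = new GlideAggregate('kb_feedback');
agg.addNotNullQuery('rating');
agg.addAggregate('COUNT');
agg.groupBy('article');
agg.groupBy('rating');
agg.query();

while (agg.next()) {
    var articleId = agg.getValue('article');
    var rating = parseInt(agg.getValue('rating'), 10);
    var count = parseInt(agg.getAggregate('COUNT'), 10);

    counts[articleId] = counts[articleId] || { high: 0, low: 0 };
    if (rating >= 4) {
        counts[articleId].high += count;
    } else {
        counts[articleId].low += count;
    }
}

// Flag articles where low ratings outnumber high ones as candidates for rework.
for (var id in counts) {
    if (counts[id].low > counts[id].high) {
        gs.info('Article ' + id + ' needs attention: ' + JSON.stringify(counts[id]));
    }
}
```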
01-14-2025 05:17 AM
Love your idea of going to your "not helpful" users and getting their feedback. We tried, unsuccessfully, for a year to follow up individually on every "not helpful" rating and got very little actionable feedback from those follow-ups. A targeted poll seems like a great alternative for getting feedback in the most efficient way.
01-13-2025 01:32 PM
We use both ratings; 2 stars or less and "not helpful" trigger the feedback task. As mentioned in the other replies, the challenge is that we often don't get comments explaining what needs to be improved. We also did not want to make comments mandatory, for the same reason of potentially discouraging users from leaving feedback altogether. So we added a message to the comments pop-up kindly encouraging the employee to explain how we can improve the article. This has helped tremendously. It's not perfect, but we get many more comments explaining the issue than before.
As for the star rating vs. helpful/not helpful, we noticed we can get multiple consecutive star ratings from the same person within seconds, ranging from 1 star to 5. We asked one user who left a 1-star, a 2-star, and then a 5-star rating why they selected three different ratings, and they said they were selecting 5, but as they glided from left to right they accidentally captured the lower ratings along the way. More of a click-too-soon issue.
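If that mis-click pattern shows up a lot, a cleanup pass can keep only the latest rating when the same person rates the same article several times within a few seconds. This is only a sketch, assuming ratings live in kb_feedback with user, article, rating, and sys_created_on fields:

```javascript
// Sketch: detect (and optionally delete) accidental back-to-back ratings from the
// same user on the same article within a short window. Field names are assumptions.
var WINDOW_SECONDS = 30;

var fb = new GlideRecord('kb_feedback');
fb.addNotNullQuery('rating');
fb.orderBy('user');
fb.orderBy('article');
fb.orderByDesc('sys_created_on'); // newest rating in each user/article group comes first
fb.query();

var lastKey = '';
var lastKeptTime = null;

while (fb.next()) {
    var key = fb.getValue('user') + ':' + fb.getValue('article');
    var created = new GlideDateTime(fb.getValue('sys_created_on'));

    if (key == lastKey && lastKeptTime != null) {
        var diffSeconds = (lastKeptTime.getNumericValue() - created.getNumericValue()) / 1000;
        if (diffSeconds <= WINDOW_SECONDS) {
            gs.info('Likely accidental rating from ' + key + ' at ' + created.getDisplayValue());
            // fb.deleteRecord(); // uncomment only after reviewing the matches
            continue; // keep comparing against the newest kept rating
        }
    }
    lastKey = key;
    lastKeptTime = created;
}
```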
01-14-2025 12:26 AM
In our case, I have implemented mandatory feedback when a user wants to give a low star rating or mark an article as not helpful. Both fields are useful, but only if you are able to understand what is wrong with the article and address it properly. Feedback tasks then go to the ownership group responsible for the evaluated article. This way we can check and improve the usefulness and quality of articles. It also allows us to extend the review process: we don't really use Valid to; instead we have the "old" review process, where 365 days after the ownership group's last activity on the article, a notification is sent to the group to check whether it is still correct, valid, etc. Actions coming from feedback restart the review to avoid duplication of work. And all of this can be monitored as KPIs or just trends, depending on the team's needs.
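For context, the 365-day review reminder described above could be driven by a scheduled script along these lines. It is only a sketch: sys_updated_on stands in for "last activity by the ownership group", and the ownership_group field and the kb.review.due event/notification are assumptions that would need to exist in your instance:

```javascript
// Sketch of a scheduled job: remind ownership groups about articles untouched for 365 days.
// The 'ownership_group' field and 'kb.review.due' event are assumed to be configured.
var cutoff = new GlideDateTime();
cutoff.addDaysUTC(-365);

var kb = new GlideRecord('kb_knowledge');
kb.addQuery('workflow_state', 'published');   // only published articles need review
kb.addQuery('sys_updated_on', '<', cutoff);   // simplification of "last activity"
kb.query();

while (kb.next()) {
    // A notification subscribed to this event would email the ownership group
    // asking them to confirm the article is still correct and valid.
    gs.eventQueue('kb.review.due', kb, kb.getValue('ownership_group'), '');
}
```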