Question: How are you all actually measuring Knowledge & AI Search success in ServiceNow?

JuanitaR94
Mega Contributor

We’ve invested in cleaning up Knowledge and enabled AI Search & Genius Results, but our users still default to opening tickets—even when answers exist. I’m curious how others are handling this.

- What metrics do you use beyond article views?

- Has AI Search genuinely increased deflection, or just changed search behavior?

- What practical changes (UX, workflow, enforcement) actually improved self-service adoption?

I would love to hear what actually worked in your environment, not just best practices. Looking forward to your answers.

1 ACCEPTED SOLUTION

Matthew_13
Kilo Sage

Hi @JuanitaR94,

Even with AI Search, users often open tickets out of habit. Here’s what helps:

  • Metrics beyond views: deflection rate, search-to-resolution time, failed (zero-result) searches, and article engagement (a query sketch follows this list).

  • AI Search: improves relevance, but doesn’t automatically reduce ticket creation.

  • Practical fixes: show recommended articles up front, prompt users to check them before submitting, keep the KB current, and reward contributions (see the client-script sketch further below).
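
To make the metrics bullet concrete, here is a minimal server-side sketch of a 30-day failed-search rate, assuming your instance logs search events in a queryable table. The table name 'search_log' and the field 'result_count' are placeholders, not actual AI Search schema; point them at whatever search-event table your analytics actually populate. GlideAggregate and gs.daysAgoStart() are standard Glide APIs.

```javascript
// Background script sketch: 30-day failed-search (zero-result) rate.
// 'search_log' and 'result_count' are hypothetical placeholders; swap in the
// search-event table and zero-result field your AI Search analytics write to.
var TABLE = 'search_log';

function countSince(daysAgo, zeroResultsOnly) {
    var ga = new GlideAggregate(TABLE);
    ga.addQuery('sys_created_on', '>=', gs.daysAgoStart(daysAgo));
    if (zeroResultsOnly)
        ga.addQuery('result_count', 0); // hypothetical field: searches returning nothing
    ga.addAggregate('COUNT');
    ga.query();
    return ga.next() ? parseInt(ga.getAggregate('COUNT'), 10) : 0;
}

var total = countSince(30, false);
var failed = countSince(30, true);
if (total > 0) {
    gs.info('Searches (30d): ' + total + ', zero-result: ' + failed +
        ', failed-search rate: ' + (100 * failed / total).toFixed(1) + '%');
} else {
    gs.info('No search events found in the last 30 days.');
}
```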

Bottom line: UX nudges + workflows + good metrics drive real self-service adoption.
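
And for the "prompt users before submitting" nudge, a minimal onSubmit catalog client script sketch. The hidden variable 'kb_suggestions_shown' is hypothetical; it assumes something in your form (for example, a contextual-search widget) sets it when recommended articles were displayed. g_form.getValue() and returning false from onSubmit to cancel submission are standard client-side behavior.

```javascript
// onSubmit catalog client script sketch: one last nudge before a ticket is created.
function onSubmit() {
    // Hypothetical hidden variable, set to 'true' when suggested articles were shown.
    if (g_form.getValue('kb_suggestions_shown') == 'true') {
        // confirm() keeps the sketch dependency-free; a styled modal is the more
        // polished production choice. Returning false cancels the submission.
        return confirm('We found knowledge articles that may answer this. Submit a ticket anyway?');
    }
    return true;
}
```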

 

Please mark this as Helpful and Accept as Solution if you feel it's acceptable. Thank you!
