Now Assist for Knowledge: Real-world use cases and lessons learned?
Hi Community,
We are currently evaluating how to leverage Now Assist for Knowledge Management, and we would like to learn from real-world experiences rather than just feature descriptions.
In particular, we are looking at the following use cases and would appreciate any insights, challenges, or lessons learned.
1. Auto-generating knowledge from incidents
- Are you using Now Assist to generate knowledge articles directly from resolved incidents?
- At what point in the incident lifecycle do you generate the article (on Resolve, on Close, manually triggered, etc.)?
- Does Now Assist (or any OOTB capability) actually check for *existing or duplicate knowledge* before generating a new article, or is this something you had to design separately?
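For context, here is a rough sketch of the kind of duplicate check we assume we would have to build ourselves if it is not covered out of the box: before generating a new article, search kb_knowledge for published articles that look similar to the incident. The instance URL, credentials, and the naive keyword match are placeholders for illustration only, not how we intend to deploy it.

```python
import requests

# Placeholder values for illustration only.
INSTANCE = "https://yourinstance.service-now.com"
AUTH = ("api.user", "api.password")  # in practice, OAuth or an integration user

def find_existing_articles(incident_short_description, limit=5):
    """Naive duplicate check: look for published kb_knowledge articles whose
    short description contains the incident's short description.
    A real implementation would match on extracted keywords or use
    AI/semantic search rather than a plain substring match."""
    query = (
        "workflow_state=published"
        f"^short_descriptionLIKE{incident_short_description}"
    )
    resp = requests.get(
        f"{INSTANCE}/api/now/table/kb_knowledge",
        auth=AUTH,
        headers={"Accept": "application/json"},
        params={
            "sysparm_query": query,
            "sysparm_fields": "number,short_description,sys_id",
            "sysparm_limit": limit,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]

# Only generate a new article if nothing similar already exists.
matches = find_existing_articles("Password reset fails for VPN users")
if matches:
    print("Possible duplicates:", [m["number"] for m in matches])
else:
    print("No similar article found - safe to generate a new one.")
```

In practice we would expect this logic to live inside the platform (e.g. in a flow or business rule on incident resolution), but the question stands: did you have to build something like this yourselves, or does Now Assist handle it?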
2. Knowledge maintenance and updates using AI
- Beyond initial creation, are you using AI to help with maintaining or updating existing knowledge articles?
(e.g. improving wording, consolidating similar articles, updating outdated content)
- If yes, what does that process look like in practice?
3. KPIs and value measurement
For those already using Now Assist for Knowledge:
- What KPIs or metrics are you using to justify or measure the value?
For example:
- Reduction in manual effort for documentation
- Knowledge reuse rate (rough query sketch after this list)
- Incident deflection or MTTR impact
- Were there any metrics you initially expected to track that turned out to be difficult or unreliable in practice?
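To make point 3 more concrete, this is roughly how we picture pulling a basic "knowledge reuse" number today via the Aggregate API. The assumption that kb_use records reflect article views/uses, plus the instance URL, credentials, and time window, are ours for illustration; we would be glad to hear what you actually measure instead.

```python
import requests

INSTANCE = "https://yourinstance.service-now.com"  # placeholder
AUTH = ("api.user", "api.password")                # placeholder

def count_records(table, encoded_query):
    """Count records matching an encoded query via the Aggregate API (/api/now/stats)."""
    resp = requests.get(
        f"{INSTANCE}/api/now/stats/{table}",
        auth=AUTH,
        headers={"Accept": "application/json"},
        params={"sysparm_query": encoded_query, "sysparm_count": "true"},
        timeout=30,
    )
    resp.raise_for_status()
    return int(resp.json()["result"]["stats"]["count"])

# Assumption on our side: kb_use rows represent article views/uses.
uses = count_records("kb_use", "sys_created_on>=javascript:gs.daysAgoStart(30)")
articles = count_records("kb_knowledge", "workflow_state=published")
print(f"~{uses} knowledge uses across {articles} published articles in the last 30 days")
```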
4. Things that were harder than expected
- Any pitfalls, governance challenges, or unexpected side effects?
(e.g. knowledge quality, duplication, ownership, review workload)
We are especially interested in practical experiences (what worked / what didn't) rather than purely theoretical or demo-based scenarios.
Thanks in advance for sharing your insights.
