NLU best practice

Lynda1
Kilo Sage

I have been looking for best practices on NLU, intents, and utterances. The VA is rather slow when we have NLU turned on, so I need to ensure I do not slow it down more.

We are using three different modules

  1. itsm
  2. hrsd
  3. procurement

Procurement is the smallest one, with these intents:

  • Requisition & Purchase Order
  • Invoice
  • System Access
  • Process

I was thinking of creating one topic per intent. Or is it best practice to use one intent and one topic to cover multiple scenarios a user can ask about?

The question really is: Does the NLU slow down as more intents are created?



Chris D
Kilo Sage

The best practice is to use ServiceNow's ootb topics and intents as your base/ideal. This means a greater number of smaller, simpler, moderately granular/specific topics, as opposed to fewer but broader - and likely more complex - ones. Think about intents when creating your topics: what is the user's intent? What do they want to do? That's the topic.

As a crude example, think about ITSM tasks. RITMs, INCs, and CHGs are all just Tasks on the backend, but to users, the intent behind each is very different: "I want to... Order Something | Report an Issue | Submit a Change Request".

So the user types one of those phrases and the matching topic starts. Compare that with a generic "Create a Ticket" topic, where no matter what the user types, they have the additional step of selecting RITM/INC/CHG. Not to mention each of those is a different conversational path with different logic, so you now have one topic with three times the complexity.
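To make the routing difference concrete, here's a toy sketch in plain Python. None of these names are ServiceNow APIs - the intent names, topic names, and functions are all hypothetical, just to show the extra branching step a broad topic forces on the user:

```python
# Hypothetical intent-to-topic routing (illustrative only, not ServiceNow code).

# Granular design: the NLU's predicted intent maps straight to a focused topic.
INTENT_TO_TOPIC = {
    "order_something": "Order Something",         # -> RITM flow
    "report_an_issue": "Report an Issue",         # -> INC flow
    "submit_a_change": "Submit a Change Request", # -> CHG flow
}

def route_granular(predicted_intent: str) -> str:
    """One intent, one topic: the user lands directly in the right flow."""
    return INTENT_TO_TOPIC[predicted_intent]

def route_broad(predicted_intent: str) -> str:
    """One broad topic: every path enters 'Create a Ticket' first, then the
    user must still pick RITM/INC/CHG inside the topic - an extra step,
    and three conversational paths living in a single topic."""
    menu_choice = {"order_something": "RITM",
                   "report_an_issue": "INC",
                   "submit_a_change": "CHG"}[predicted_intent]
    return f"Create a Ticket -> ask user -> {menu_choice} branch"

print(route_granular("report_an_issue"))  # Report an Issue
print(route_broad("report_an_issue"))     # Create a Ticket -> ask user -> INC branch
```

The granular version does all the disambiguation in the NLU model; the broad version pushes it onto the user mid-conversation.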

 

That said, this may all be easier said than done, and you may sometimes need to strike a balance so your topics aren't too granular and don't create too much work for you. We started off with really broad topics but have been whittling them down to be simpler and more aligned to intents. Using NLU, whether from scratch or transitioning from keywords, really makes this easier to think about, in my opinion.

For example, we have a topic/intent "Set a Delegate" in which the user can set their own delegate or someone else's. While from some perspectives those two options could be separate intents, I decided that would be too granular, so I combined them into one topic. You kinda just need to do what you think is right.

 

As far as performance goes, we transitioned last month in a big bang, so we pretty much had 80+ intents with hundreds of utterances in a single NLU Model right off the bat. I don't have much to compare against besides the couple of times I test-trained our in-progress model, but I can say for sure that with so many intents it definitely takes longer to train; I can't really compare from the end-user side. Sometimes NLU does strike me as particularly slow (I think it times out at 30 seconds but usually takes 1-3 seconds), but I don't know if that's because our model is relatively bulky.

Thank you for this response.

Reading this, it appears the more intents, the slower the training; I see that as slowing the VA down when it searches the utterances. It sounds like it is best to have one intent with 100+ utterances rather than 5 intents with 20 utterances each.

An example is HR information, which can be broken down into Compensation, Payroll, Benefits, and about 5-8 other areas.

I was thinking of creating an intent for each high-level area and adding utterances related to that intent, which would require a topic for each intent. All the topics would use the same Topic Block, since all the information is in the same location.

Reading this, I am now thinking of creating one intent and adding all possible utterances to it, so that one intent can capture anything related to HR questions.

An understanding of the order of process the system takes to find an Utterance/Intent/Topic would be so beneficial!

I don't know that performance is better with fewer intents and more utterances vs more intents and fewer utterances. Unless somebody has done specific testing and comparisons, I would take it with a grain of salt and assume, since ServiceNow hasn't made it clear in documentation (afaik), that there are no notable performance impacts either way. All I know is that they advise something like 15+ utterances per intent.

If you're really talking 1 intent/100 utterances vs 5 intents/20 utterances, I'm thinking either way you're looking at minimal performance differences and you should be more focused on designing a good user experience than maximizing performance.

ServiceNow provides something like 20+ intents with like 20 utterances each in the ootb ITSM NLU Model.

I can tell you from experience that topics that are too big and complex become very unwieldy and difficult to maintain. Whenever you add/remove/move a node, VA Designer has to redraw the entire topic, which can be VERY slow with big topics. In addition, normal-sized topics generally load instantly when testing in VA Designer, while big topics can take considerable time to load - I've seen 30 seconds, maybe even up to a minute.
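One way to see why the grouping (1 intent x 100 utterances vs 5 intents x 20) shouldn't matter much at prediction time: a toy matcher in plain Python. This is absolutely not how ServiceNow's NLU works internally - it's a naive nearest-utterance matcher I made up for illustration - but notice the scoring loop visits every training utterance regardless of how they're grouped into intents, so per-message cost tracks the total utterance count, not the intent count:

```python
# Toy intent matcher (illustrative only, not ServiceNow's actual NLU).

def tokens(text):
    """Crude bag-of-words tokenizer."""
    return set(text.lower().split())

def jaccard(a, b):
    """Word-overlap similarity between two token sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def predict(model, message, threshold=0.3):
    """model: {intent_name: [utterance, ...]}. Returns (intent, score).
    Scores every utterance in every intent - so 1x100 and 5x20 do the
    same amount of work per incoming message."""
    msg = tokens(message)
    best_intent, best_score = None, 0.0
    for intent, utterances in model.items():
        for u in utterances:
            score = jaccard(msg, tokens(u))
            if score > best_score:
                best_intent, best_score = intent, score
    # Below the confidence threshold, report no match (like a fallback topic).
    return (best_intent, best_score) if best_score >= threshold else (None, best_score)

model = {
    "invoice": ["where is my invoice", "invoice payment status"],
    "system_access": ["request system access", "I need access to procurement"],
}
print(predict(model, "what is the status of my invoice payment"))  # ('invoice', 0.375)
```

So the intent/utterance split is really a modeling and UX decision (how cleanly each intent maps to a topic), not a performance lever - which matches my experience above.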

Thank you for the input, it does help!