Anantha Sai Ram
ServiceNow Employee

The FAQ below covers the “Optimize” feature released as part of the NLU Workbench - Advanced Features v3.0.4 store app and available for the Rome family release. The product documentation for this feature is available here.

 

  1. What is Optimize, and why should I optimize a model?

A. The Optimize feature enhances the performance of an NLU model and is powered by the ServiceNow language model. Customers should run Optimize before a model is published for use in Virtual Agent (VA). Optimize aims to reduce the percentage of incorrect predictions (i.e., the number of times a user is taken to a wrong VA topic). Typically, this results in an increase in the percentage of missed predictions (the model skips the prediction, so the fallback topic runs instead). In some cases, it can also increase the percentage of correct predictions.
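
For illustration only, here is a minimal Python sketch of how those three outcome percentages relate to individual prediction results. The record format and logic are assumptions made for this example, not how the NLU service computes its results internally.

# Illustration only: relating "correct %", "incorrect %", and "missed %"
# to individual prediction outcomes on a test set.
def outcome_percentages(results):
    # results: list of (expected_intent_or_None, predicted_intent_or_None)
    correct = incorrect = missed = 0
    for expected, predicted in results:
        if predicted is None:
            missed += 1        # model skipped; VA runs the fallback topic
        elif predicted == expected:
            correct += 1       # user reaches the intended VA topic
        else:
            incorrect += 1     # user is taken to a wrong VA topic
    total = len(results) or 1
    return {"correct %": 100 * correct / total,
            "incorrect %": 100 * incorrect / total,
            "missed %": 100 * missed / total}

print(outcome_percentages([("Reset Password", "Reset Password"),
                           ("Hardware Issue", None),
                           ("Order Item", "Reset Password")]))
# one correct, one missed, one incorrect -> roughly 33% each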

 

  2. How can I access this feature?

A. Optimize can be accessed via the batch testing module in NLU Workbench. While running a new analysis, users have the option to run Optimize. Whereas batch testing is expected to be run as the user iteratively updates the model, Optimize should be run only when the model is ready to be published. This is currently supported for English-language models; as part of the November 2021 bimonthly NLU Service update, support for French, German, and Spanish models will be added, with other language models to follow.

 

  3. When should I Optimize a model?

A. Optimize should be run only after all the necessary modifications are complete and the model is ready to be published. If any modifications are made to a model after running Optimize, it must be trained and optimized again.

 

  4. How long does it take to run Optimize?

A. The time taken by Optimize depends on model size, test set size, and available server capacity. When you click Optimize, it runs asynchronously on our servers: a job is submitted and is picked up for processing only after the jobs ahead of it in the queue have completed.

Optimize usually takes longer to run than batch testing a model, anywhere from 30 minutes to a few hours depending on model size and server bandwidth.

 

  5. What happens after I run Optimize? What should I do next?

A. Once Optimize has run, if there is scope for improving model performance, the results of the optimized model are displayed alongside the performance results of the current model, and you can choose to accept the recommendation. Accepting the recommendation also publishes the optimized model.

  6. I ran Optimize, but it did not generate an optimized model. I only see the current model's performance. Why is that?

A. An optimized model is generated only if the model's performance can be improved. If the current model already performs well and an optimized model that outperforms it cannot be generated, no recommendation is shown. Providing a different test set with better coverage of intents may result in an optimized model being generated.

 

  7. I ran Optimize, but when I test an utterance, I see a lower confidence on the predicted intent. Is my model performing worse?

A. The performance of the optimized model should be compared with that of the previous model based on overall prediction outcomes, as shown in the charts. The optimized model uses entirely different techniques, so its prediction confidence should not be compared with that of the non-optimized model. An optimized model produces predictions with a wider confidence range and, in most scenarios, lower confidence than a trained model; that does not mean the results are poorer.

(Screenshots: test panel prediction confidence before Optimize and after Optimize.)

 

  8. I am seeing utterances that fail prediction after Optimize. Should I still use the optimized model?

A. The performance of the optimized model should be analyzed using the prediction outcome percentages shown in the optimized model results. While specific utterances may not predict correctly with an optimized model, the overall prediction outcome percentages will show improvement, in terms of a reduction in the incorrect % and/or an improvement in the correct %.

In some scenarios the incorrect % may also increase slightly, but it will be accompanied by a greater increase in the correct %.


 

  9. I have optimized the model, but when I run a prediction on the test panel, I see a threshold different from the one in the model settings. Why?

A. The optimized model uses a different threshold in the background when a prediction is made; this is separate from the threshold configured on the model. The model threshold in model settings is used only for non-optimized models.
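
As a rough, generic illustration of what a confidence threshold does at prediction time (this is not the optimized model's actual logic, and the threshold value below is made up):

# Generic sketch of confidence-threshold gating; the optimized model's real
# threshold is internal and is not the value shown in model settings.
def route(predicted_intent, confidence, threshold=0.6):
    if predicted_intent is None or confidence < threshold:
        return "fallback_topic"   # prediction is skipped ("missed")
    return predicted_intent       # user is routed to the matching VA topic

print(route("Reset Password", 0.45))  # below threshold -> 'fallback_topic'
print(route("Reset Password", 0.80))  # above threshold -> 'Reset Password'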


 

  10. Once I run Optimize and publish, will I need to run it again?

A. Once an optimized model is accepted, the model is published. If any modifications are then made to the optimized model, such as adding new intents, removing existing intents, or updating utterances, the model must be trained and optimized again before it is published. If you do not accept an optimized model but continue to make changes, such as adding or updating intents or utterances, it must likewise be trained and optimized again.

 

  11. What is required to run Optimize?

A. Optimize requires test data, similar to batch testing. This data should be representative of the actual data the model is expected to predict on after it is published.

Providing the right representative data can result in a significant improvement in performance. For example, if the model is expected not to make predictions for certain kinds of utterances, including a few examples of such utterances in the data will boost the model's ability to identify such irrelevant utterances; that is, include utterances in the test set whose expected intent is empty.

Data set recommendations (a rough coverage check is sketched after this list):

  • The test set covers a minimum of 25% of the intents in the selected model, and ideally upwards of 65%.
  • The model is not an out-of-box (OOB) model such as the ITSM/HR models.
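
The following Python sketch illustrates the coverage recommendation above. The test-set format and intent names are made up for this example and are not a ServiceNow API; utterances whose expected intent is left empty (None here) represent irrelevant inputs the model should learn to skip.

# Illustrative only: what fraction of a model's intents appear in the test set?
model_intents = ["Reset Password", "Hardware Issue", "Order Item"]

test_set = [
    ("i forgot my password",         "Reset Password"),
    ("my laptop will not power on",  "Hardware Issue"),
    ("what is the weather tomorrow", None),   # expected intent left empty
]

covered = {intent for _, intent in test_set if intent} & set(model_intents)
coverage = 100 * len(covered) / len(model_intents)
print(f"{coverage:.0f}% of intents covered")  # aim for at least 25%, ideally 65%+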

 

  12. Are there any known limitations of Optimize?
  • Optimize is currently supported for English models only.
  • Optimize cannot be run on pre-built (read-only/OOB) models. However, it can be run on cloned versions of pre-built models.
  • If you make a change to an optimized model and retrain it, it must be optimized again.
