nilimadesai
ServiceNow Employee

****** This article applies to Quebec and later releases that support Multilingual NLU ******

Starting with the Quebec release, NLU on the Now Platform supports several languages besides English. In this blog, we will cover general guidelines on how customers can go about implementing multilingual NLU models, moving from English to non-English languages, with links to more in-depth content where appropriate. In addition, we will be publishing a multilingual NLU playbook, which will be linked here and available for download from NowCreate, to assist customers in implementing multilingual NLU; the playbook will contain all of this information cohesively in a single document.

Note that the guidelines may differ slightly depending on whether your instances are on the Rome or Quebec release and on the functionality available in that release.

Customers that are already live with NLU, have a well-established base NLU model, and are looking to implement multilingual NLU can skip the next few sections and jump straight to Step 3 and beyond for guidance on implementing multilingual NLU.

Step 0: Plugin activations

As shown in the flow above, the very first step in configuring any functionality within the Now Platform is plugin activation. The list below covers the main plugins that are relevant for multilingual NLU (a quick script for verifying their activation follows the list):

  • Virtual Agent (com.glide.cs.chatbot)
  • ITSM Virtual Agent Conversations (sn_itsm_va) (for ITSM VA)
  • ITSM NLU Model for Virtual Agent Conversations (sn_itsm_nlu) (for ITSM NLU)
  • Topic Recommendations (com.snc.va_topic_recommender)
  • Intent Discovery (sn_nlu_discovery)
  • Internationalization (com.glide.i18n + language-specific i18n plugins)
  • NLU Workbench - Advanced features (com.snc.nlu.workbench.advanced)
  • Conversational Analytics dashboard (sn_ci_analytics)
  • Localization Framework (com.glide.localization_framework.installer)
  • Dynamic Translation (com.glide.dynamic_translation)
  • Google, Microsoft, IBM translation spokes (sn_google_trans, com.glide.microsoft_translation_spoke, com.glide.ibm_translation_spoke)
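As a quick sanity check before moving on, the activation status of these plugins can be verified from a background script. The sketch below is a minimal example that only reports status (it does not activate anything) and uses the GlidePluginManager API with the plugin IDs listed above; adjust the list for the plugins relevant to your implementation.

    // Background script: report whether each plugin from the list above is active.
    var pluginIds = [
        'com.glide.cs.chatbot',
        'sn_itsm_va',
        'sn_itsm_nlu',
        'com.snc.va_topic_recommender',
        'sn_nlu_discovery',
        'com.glide.i18n',
        'com.snc.nlu.workbench.advanced',
        'sn_ci_analytics',
        'com.glide.localization_framework.installer',
        'com.glide.dynamic_translation'
    ];
    var pm = new GlidePluginManager();
    pluginIds.forEach(function (id) {
        gs.info(id + ' active: ' + pm.isActive(id));
    });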

Step 1: Setting up VA and NLU

Once the above plugins are activated, the next step is to complete the basic setup for Virtual Agent (VA) and NLU. These steps are covered in the following two docs:

Step 2: Analyze instance data to confirm and build Topics and Intents for base NLU Model

Next, it is recommended that you run the Topic Recommendations tool against the most recent instance data to understand which topics/intents are best suited for implementation in VA. While this step is primarily aimed at customers implementing VA/NLU from scratch, the same analysis is also recommended for existing customers to understand gaps in the current VA/NLU implementation and how it can be enhanced.

Once the list of topics/intents is finalized, the next step is to finalize the overall VA flows and identify any entities that need to be part of the NLU model. Both for building VA flows and the associated NLU intents, always aim to start with out-of-box content when available and customize it to suit your needs. ITSM Pro customers can leverage the off-the-shelf, pre-built topics that are available as part of ITSM Virtual Agent and NLU.

Now that the needed VA topics and the entities within each flow have been identified, the next step is to build the NLU model. This requires specifying intents, utterances within each intent, entities and their annotations within utterances, and any needed vocabularies.
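While assembling this training data, it helps to keep an eye on how many utterances each intent has, since thinly covered intents tend to predict poorly. The background script below is a minimal sketch of such a check; the table and field names (sys_nlu_intent, sys_nlu_utterance, model, intent) and the model name are assumptions based on a typical Quebec-era schema and should be verified against your instance before use.

    // Background script: count training utterances per intent for one NLU model.
    // 'ITSM Base Model' is a hypothetical model name - replace with your own.
    var intent = new GlideRecord('sys_nlu_intent');        // assumed intent table
    intent.addQuery('model.name', 'ITSM Base Model');
    intent.query();
    while (intent.next()) {
        var utt = new GlideAggregate('sys_nlu_utterance'); // assumed utterance table
        utt.addQuery('intent', intent.getUniqueValue());
        utt.addAggregate('COUNT');
        utt.query();
        var count = utt.next() ? parseInt(utt.getAggregate('COUNT'), 10) : 0;
        gs.info(intent.getValue('name') + ': ' + count + ' utterances');
    }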

When building the NLU model, first use NLU Workbench to test it. Once the model appears to perform well using the Workbench's testing capability, train and publish the model. This makes the model and its intents and entities available in VA so they can be bound to specific topics. The next step is to bind VA topics to NLU intents in VA Designer.

Once this VA-to-NLU binding is in place, use VA Designer to test the topic flows as well as the intent predictions, and make any necessary tuning adjustments based on these unit test results.

The next step is to run automated testing using the Batch Testing tool available in the platform.   

Step 3: Test and Tune NLU Models using Batch Testing tool (repeatable/iterative)

Testing the NLU model with the Batch Testing tool is an iterative process that can be repeated at multiple junctures in the NLU journey.

Use it initially to test the base NLU model in the pre-deployment stage so that the base NLU model can be finalized for translating to other languages.

Once the base NLU model is finalized and the multilingual NLU models have been created by translating from the base model, use the tool to test the translated models and tune them further to the desired level of prediction quality.

Similarly, the Batch Testing tool can be used effectively post deployment/go-live to help monitor and tune model performance.

Detailed steps on how to achieve this are covered in a separate community post and in the multilingual NLU playbook, which will soon be available from NowCreate.

Step 4: Configure Localization Framework (LF) and Dynamic Translation (DT) on instance

Prior to actually localizing the NLU model into other languages, both LF and DT need to be configured on the instance. In addition to the Localization Framework and Dynamic Translation pages in the ServiceNow docs, refer to the community article 'Configure Localization Framework (LF) and Dynamic Translation (DT) on instance' for detailed steps on how to set this up.
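Once LF and DT are configured, a quick smoke test of Dynamic Translation from a background script can confirm that a translation spoke is wired up before you start localizing models. The sketch below assumes the scoped DynamicTranslation API (sn_dt namespace); the shape of the parameter object is indicative only, so confirm the exact property names against the Dynamic Translation API documentation for your release.

    // Background script: smoke-test Dynamic Translation with a sample utterance.
    // Assumes a translation spoke (Google/Microsoft/IBM) is already configured.
    // The 'from'/'to' parameter names below are assumptions - check the API docs.
    var result = sn_dt.DynamicTranslation.translate('How do I reset my password?', {
        from: 'en',
        to: ['fr']
    });
    gs.info(JSON.stringify(result));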

Step 5: Localize Multilingual NLU Models

For customers on Rome or later releases looking to implement multilingual NLU, the localization steps for creating the language-specific models from the base model are built right into NLU Workbench within the instance, as per this docs link on the Rome translation process.

The hands-on Virtual Academy session below on creating multilingual NLU models and conversations is also a good reference for customers on Rome and later releases on how to go about this.

Quebec customers need to perform the translations manually, outside the platform, in CSV files and import them into the NLU tables using import sets. This is explained in detail in the community article titled 'Importing NLU Model CSV using platform Import sets'; an illustrative CSV layout is shown below.
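For orientation, a translated-utterance CSV for such an import might look like the sample below. The column names here are purely hypothetical; the actual columns must match the import set table and transform map described in the community article referenced above.

    intent,utterance,language
    Reset Password,"Comment puis-je réinitialiser mon mot de passe ?",fr
    Reset Password,"J'ai oublié mon mot de passe",fr
    Check Ticket Status,"Où en est ma demande ?",fr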

Once the multilingual NLU models have been created by completing the localization steps, it is important to have a linguist review the translations and provide input for improvements. In most cases, the language specialists may not be familiar with NLU or with how to access and change the models directly on the instance. In that case, translation review can be accomplished by exporting the utterance and entity data into Excel files, where the linguists can record any needed changes, and then re-uploading the spreadsheet back into the multilingual NLU model. These steps are outlined in the blog 'How to Upload Proofread translations from Excel back into your Multilingual NLU Models', or refer to the upcoming multilingual NLU playbook for this information.

Step 6: Finalize Multilingual NLU Models for UAT

Once the translated and proofread multilingual NLU models are ready, they need to be trained and published so that they are available to consuming applications such as VA. 

Once published, the languages for each of the multilingual NLU models should be activated in VA General Settings, and the intents and entities bound within each of the corresponding VA topics. Detailed steps for performing these configurations are available in the community article titled 'Best Practices and How-To’s for Localizing Virtual Agent Topics' and will soon also be available in the multilingual NLU playbook.

After the multilingual NLU models are published and made available in VA, further testing using the Batch Testing tool is recommended. Also perform unit tests on the NLU model using NLU Workbench's test capability, in VA Designer by running all topics or a single topic (with and without topic discovery), and from the portal, to ensure that the models are performing optimally and as expected.

Step 7: Test, publish, and iterate Multilingual NLU Models as part of UAT

Once the multilingual NLU models are finalized, they are ready for deployment to higher instances for iterative UAT testing. Typically, a multilingual NLU deployment will include changes to the multilingual NLU models as well as all VA-related updates, such as records in the various topic-related tables, VA localizations in the sys_ui_message table, and any other relevant updates. The platform's update sets capability can be used effectively to deploy these changes across instances, and care must be taken to propagate any bug fixes to the different instances as needed so that changes stay in sync. Today, it is possible to create update sets for all NLU models within a scoped app at the application level (from the sys_app record of the scoped application) or using local update sets; in future releases, the platform could gain the ability to package individual NLU models into their own update sets.
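Before migrating, it is worth confirming that the local update set actually captured the NLU and VA records you expect. The background script below is a simple sketch that lists the customer updates in a named update set; the update set name is hypothetical, so replace it with your own.

    // Background script: list the records captured in a local update set so the
    // NLU model and VA topic changes can be confirmed before migration.
    var upd = new GlideRecord('sys_update_xml');
    upd.addQuery('update_set.name', 'Multilingual NLU - FR rollout'); // hypothetical name
    upd.orderBy('type');
    upd.query();
    while (upd.next()) {
        gs.info(upd.getValue('type') + ' : ' + upd.getValue('name'));
    }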

Once UAT completes and model performance has reached a satisfactory state, we are ready to deploy multilingual NLU to production!

Step 8: Measure and improve Multilingual NLU post go-Live

Once deployed, it is very important to monitor VA and NLU performance at regular, pre-determined intervals, ranging from weekly to monthly or quarterly depending on the maturity of the implementation.

The following tools, available within the platform, should be leveraged to perform this monitoring (a lightweight scripted check is also sketched after the list):

  • NLU Workbench - Advanced tools
    • Batch Testing tool: test against live production utterance data to understand how the NLU models are performing and which tuning activities are needed to improve performance.
    • NLU Performance dashboard: review it to help further tune the models.
    • NLU Conflict Review tool: helps identify conflicting intents within or across NLU Models, so that corrective action can be taken towards improving model performance. 
  • Conversational Analytics dashboard: helps you improve Virtual Agent (VA) interactions with users by providing deep insights into conversational data. The dashboard helps you refine topics and increase the percentage of issues resolved by VA.
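In addition to these dashboards, a lightweight scripted pulse check can be useful between review cycles, for example to watch overall Virtual Agent volume after a new language goes live. The sketch below assumes conversations are stored in the sys_cs_conversation table; verify the table name on your instance, and treat the dashboards above as the primary monitoring tools.

    // Background script: count Virtual Agent conversations created in the last 7 days.
    // Table name sys_cs_conversation is an assumption - verify on your instance.
    var ga = new GlideAggregate('sys_cs_conversation');
    ga.addAggregate('COUNT');
    ga.addQuery('sys_created_on', '>=', gs.daysAgoStart(7));
    ga.query();
    if (ga.next()) {
        gs.info('VA conversations in the last 7 days: ' + ga.getAggregate('COUNT'));
    }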

Based on the feedback and results from these tools, further improvements can be made to VA and NLU to maintain and improve performance and ensure an optimal user experience on the bot.

 

Additional NLU Related Resources: 

Additional NLU troubleshooting KBs: