
AI/ML indexed data in-instance or some place else

Sunny SN
ServiceNow Employee

One of our customers wants to be sure they know where their data is moved (if at all) and that it remains secure.

 

They are on Pro, and while we were chatting about enabling AI/ML, they asked where the AI/ML work for Agent Assist and Predictive Intelligence is performed. Is this done in-instance, or farmed off to a separate instance in our data centers?

 

Any help or guidance would be greatly appreciated.

1 ACCEPTED SOLUTION

Brian Bakker
ServiceNow Employee

Hello Sunny,

 

Although AI/ML runs on shared infrastructure, every datacenter has its own ML infrastructure, and all communication between the instance and the ML infrastructure is encrypted. We also do not persistently store any customer ML training data in our ML infrastructure.

 

The data sent from the instance for ML training is encrypted. Once training completes, the trained model is transferred back to the instance, which is the source of truth for all AI/ML trained models and the only place they are held in a persistent state. All data sent for ML training is removed from the Training Server as soon as training completes.

The Prediction Servers hold trained models only in memory; if a model receives no prediction requests for 48 hours, it is removed from Prediction Server memory. If a model is not in Prediction Server memory when a prediction call arrives, the server retrieves the trained model from the instance and loads it into memory before serving the prediction. Hence the first prediction on a model will always take a little longer than subsequent predictions on the same model.
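The lifecycle described above (models held in memory, lazy-loaded from the instance on a cache miss, and evicted after 48 idle hours) can be sketched in pseudocode-style Python. This is purely illustrative; all class and function names here are hypothetical and do not reflect ServiceNow's actual implementation.

```python
import time

IDLE_TTL_SECONDS = 48 * 3600  # models idle this long are evicted from memory


class PredictionServer:
    """Illustrative in-memory model cache (hypothetical names throughout)."""

    def __init__(self, fetch_model_from_instance):
        # fetch_model_from_instance(model_id) stands in for retrieving the
        # persisted trained model from the customer instance, which is the
        # source of truth.
        self._fetch = fetch_model_from_instance
        self._cache = {}  # model_id -> (model, last_used_timestamp)

    def predict(self, model_id, record):
        self._evict_idle()
        entry = self._cache.get(model_id)
        if entry is None:
            # First prediction after an eviction: load the model from the
            # instance, which is why this call takes longer than later ones.
            model = self._fetch(model_id)
        else:
            model = entry[0]
        self._cache[model_id] = (model, time.time())
        return model(record)

    def _evict_idle(self):
        now = time.time()
        stale = [mid for mid, (_, t) in self._cache.items()
                 if now - t > IDLE_TTL_SECONDS]
        for mid in stale:
            del self._cache[mid]
```

In this sketch, only the instance persists the model; the server's copy is transient and repopulated on demand, matching the behavior Brian describes.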

 

Hope this helps.

 

Regards,

Brian


REPLIES


Thanks for the info, Brian. Are there any exceptions to this?
In particular, I'm trying to confirm whether what you've described applies to the plugin 'Software Asset Management - Machine Learning Normalization' (com.sn_sam_ml_normalization), and also to AI Search.

Brian Bakker
ServiceNow Employee

@Kit CG 

The application 'Software Asset Management - Machine Learning Normalization' does use the Predictive Intelligence application for its predictions, so my post above also applies to it. AI Search is a completely different application; the indexed documents reside on an AIS Node located in the same datacenter as your instance. A search request is sent to the AIS Node, which responds with all the documents that match the search term.
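The request/response pattern above can be pictured with a toy inverted index standing in for the AIS Node. This is only a sketch to show the shape of the interaction; the class and method names are made up, and the real AIS Node is far more sophisticated.

```python
class ToyAISNode:
    """Hypothetical stand-in for an AIS Node: term -> matching document IDs."""

    def __init__(self):
        self._index = {}  # term -> set of document ids

    def index_document(self, doc_id, text):
        # Index each whitespace-separated term of the document.
        for term in text.lower().split():
            self._index.setdefault(term, set()).add(doc_id)

    def search(self, term):
        # Respond with all documents that match the search term.
        return sorted(self._index.get(term.lower(), set()))
```

The key point from the thread is residency, not the lookup itself: both the index and the query stay within the same datacenter as the instance.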