
10-20-2022 07:55 AM
One of our customers wants to make sure they know where their data is being moved (if at all) and that it is secure.
They are on Pro, and while we were discussing enabling AI/ML they asked where the AI/ML work is done for Agent Assist and Predictive Intelligence. Is this performed in-instance, or farmed off to a separate instance in our data centers?
Any help or guidance would be greatly appreciated.

10-20-2022 09:16 AM - edited 10-20-2022 09:17 AM
Hello Sunny,
Although AI/ML runs on shared infrastructure, every datacenter has its own ML infrastructure, and all communication between the instance and the ML infrastructure is encrypted. We also do not persistently store any customer ML training data in our ML infrastructure.
The data sent from the instance for ML training is encrypted. Once ML training completes, the trained model is transferred back to the instance; the instance is the source of truth for all AI/ML trained models and the only place they are held in a persistent state. All data sent for ML training is removed from the Training Server once training completes. The Prediction Servers hold trained models in memory only, and if a model receives no prediction requests for 48 hours, it is removed from Prediction Server memory. If a model is not in Prediction Server memory when a prediction call arrives, the server first retrieves the trained model from the instance and loads it into memory before serving the prediction. Hence the first prediction on a model will always take a little longer than subsequent predictions on the same model.
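To make the caching behavior concrete, here is a minimal illustrative sketch (not ServiceNow's actual implementation; all names are hypothetical) of the lifecycle described above: the instance is the persistent source of truth, the Prediction Server only caches models in memory, a cold call fetches the model from the instance first, and any model idle for 48 hours is evicted.

```python
import time


class PredictionServer:
    """Hypothetical sketch of the described cache behavior: models are
    fetched lazily from the instance and evicted after 48 idle hours."""

    EVICT_AFTER_SECONDS = 48 * 3600  # 48-hour idle-eviction window

    def __init__(self, fetch_model_from_instance):
        # The instance holds the persistent copy; this callback stands in
        # for the (slower) retrieval of a trained model from the instance.
        self._fetch = fetch_model_from_instance
        self._cache = {}  # model_id -> (model, last_used_timestamp)

    def predict(self, model_id, record):
        self._evict_idle()
        entry = self._cache.get(model_id)
        if entry is None:
            # Cold call: load the trained model from the instance first,
            # which is why the first prediction takes a little longer.
            model = self._fetch(model_id)
        else:
            model = entry[0]
        self._cache[model_id] = (model, time.time())  # refresh idle timer
        return model(record)

    def _evict_idle(self):
        # Drop any model that has served no predictions for 48 hours.
        now = time.time()
        for mid, (_, last_used) in list(self._cache.items()):
            if now - last_used > self.EVICT_AFTER_SECONDS:
                del self._cache[mid]
```

Note the fetch callback is invoked only on the cold call; subsequent predictions are served straight from memory until the idle window expires.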
Hope this helps.
Regards,
Brian

06-04-2025 09:39 PM
Thanks for the info, Brian. Are there any exceptions to this?
In particular, I'm trying to confirm whether what you've described applies to the plugin 'Software Asset Management - Machine Learning Normalization' (com.sn_sam_ml_normalization), and also to AI Search.

06-05-2025 03:18 AM
The application 'Software Asset Management - Machine Learning Normalization' does use the Predictive Intelligence application for its predictions, so my earlier post also applies to it. AI Search is a completely different application: the indexed documents reside on an AIS Node located in the same datacenter as your instance. A search request is sent to the AIS Node, which responds with all the documents that match the search term.