Handling large payloads - Discovery
11-09-2017 02:11 AM
Hi,
We are facing issues during Discovery when the payload attachments being returned are larger than the defined size limit. We have seen some properties (quoted below) through which the payload size limits can be modified, but changing them might put the instance at risk, as per KB0552119. So we wanted to know how we can check the performance of the instance, so that testing can be done while altering the payload limits. (A rough script for checking the current values is sketched after the workarounds below.)
- Workaround 1: Increase the system property glide.soapprocessor.large_field_patch_max to a larger value. This allows larger payloads to be written to the payload field on the ecc_queue record, instead of being converted to an attachment.
- Workaround 2: Increase system property com.glide.attachment.max_get_size to a larger value. This allows the vCenter sensor to read in larger payload attachments.
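For reference, a rough background-script sketch for checking the current values of these two properties and the size of the largest payload attachments already sitting on ecc_queue (the sys_attachment lookup assumes oversized payloads are stored as attachments on the ecc_queue record, as described in Workaround 1):

    // Current property values (null means the property is not set and the default applies)
    gs.print('glide.soapprocessor.large_field_patch_max = ' +
        gs.getProperty('glide.soapprocessor.large_field_patch_max'));
    gs.print('com.glide.attachment.max_get_size = ' +
        gs.getProperty('com.glide.attachment.max_get_size'));

    // Largest payload attachment currently stored against an ecc_queue record
    var ga = new GlideAggregate('sys_attachment');
    ga.addQuery('table_name', 'ecc_queue');
    ga.addAggregate('MAX', 'size_bytes');
    ga.query();
    if (ga.next()) {
        gs.print('Largest ecc_queue attachment (bytes): ' + ga.getAggregate('MAX', 'size_bytes'));
    }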
Regards,
Rakesh Imandi
11-09-2017 11:31 AM
You can look at the Java Memory graph on the ServiceNow Performance dashboard to see how busy things are.
From what I've seen, 1 MB of payload may require 13 MB of memory to process - the document gets parsed, elements get extracted, data gets copied into JSON objects, and so on.
If you go to /stats.do in your instance, you can see how many worker threads there are in the Background Scheduler section - mine has 8 per node.
So, given my configuration of 8 worker threads per node, if I increased the max payload by 1 MB, I could see the max memory used rise by as much as about 100 MB (8 workers x 13 MB each).
It's unlikely that you would be handling all large sensors on the same node all at once, and if that did happen, the memory manager would cancel a few jobs and retry them later, so pushing it a little is unlikely to kill nodes. As a starting point to experiment from, I'm thinking you can add about (2048 MB - your peak memory usage in MB) / 100 to your max payload size, in MB. (If your workers per node isn't 8, scale that divisor to fit.)
If you're not running any significant number of these at the same time, you may be able to push it farther.
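To put rough numbers on that, here is a small back-of-the-envelope script using the figures above; the 1400 MB peak is only an example value - read your actual peak from the Java Memory graph, and adjust the worker count to match your /stats.do output:

    // Back-of-the-envelope headroom estimate (example numbers, not measurements)
    var workersPerNode = 8;     // from /stats.do, Background Scheduler section
    var memPerPayloadMB = 13;   // rough observation: ~13 MB of heap per 1 MB of payload
    var peakHeapMB = 1400;      // example value - use your real peak from the Java Memory graph
    var heapLimitMB = 2048;     // assuming a 2 GB node heap

    // Worst case: every worker on one node processing a max-size payload at once,
    // so each extra MB of payload can cost roughly workersPerNode * memPerPayloadMB of heap
    var costPerExtraMB = workersPerNode * memPerPayloadMB;  // ~104 MB, call it 100
    var extraPayloadMB = Math.floor((heapLimitMB - peakHeapMB) / costPerExtraMB);
    gs.print('Starting point: raise the max payload size by about ' + extraPayloadMB + ' MB');

With those example numbers that works out to roughly 6 MB of extra payload headroom.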
- Tim.