09-05-2017 11:36 AM
We are activating the Archive plugin. The initial archive will be a few million records. At OOB settings, it would take 6 months or more to catch up. OOB settings are:
Number of records (Batch Size) to archive when archiver runs: 100
Max number of batches (Max Iterations) to process when archiver runs: 10
The scheduled job runs every hour.
We'd like to increase the volume and/or frequency, and want to know what type of impact that will have on the application. Are there recommendations from ServiceNow? I have not been able to find any in the documentation.
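For context, the catch-up time at the OOB settings can be estimated directly. The backlog figure below is an assumption (the post only says "a few million records"); substitute your table's actual row count:

```python
# Back-of-the-envelope catch-up estimate at OOB archiver settings.
BATCH_SIZE = 100        # records per batch (OOB)
MAX_ITERATIONS = 10     # batches per archiver run (OOB)
RUNS_PER_DAY = 24       # the scheduled job runs hourly

records_per_run = BATCH_SIZE * MAX_ITERATIONS        # 1,000 records per hourly run
backlog = 5_000_000                                  # ASSUMED backlog size

days_to_catch_up = backlog / (records_per_run * RUNS_PER_DAY)
print(f"{records_per_run} records/run -> {days_to_catch_up:.0f} days "
      f"(~{days_to_catch_up / 30:.1f} months)")
```

At 1,000 records per hour, a 5-million-record backlog works out to roughly 208 days, which matches the "6 months or more" estimate above.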
Labels: Best Practices
‎09-05-2017 11:57 AM
Hi Robert,
By default an archive rule follows these processing rules:
- Archives 100 records for each batch job
- Sleeps 1 second between batch jobs
- Runs 10 batch jobs in an archive run (every hour)
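The defaults above can be sketched as a simple timing model. The per-batch archive cost below is a hypothetical placeholder, since the real cost varies by table size, indexes, and instance configuration:

```python
# Minimal model of one hourly archive run under the default rules:
# 10 batches of 100 records, with a 1-second sleep between batches.
# batch_seconds is an ASSUMED per-batch cost; measure your own.

def archive_run_seconds(batches=10, batch_seconds=5.0, sleep_seconds=1.0):
    """Estimated wall-clock time for one archiver run."""
    # Each batch does its work; the archiver sleeps between batches
    # (assumed: no sleep after the final batch).
    return batches * batch_seconds + (batches - 1) * sleep_seconds

run = archive_run_seconds()
print(f"One run: ~{run:.0f}s to archive {10 * 100} records")
```

With these assumed timings a run finishes in about a minute, far inside the 1-hour window; the risk discussed below appears when tuned-up settings push the run time toward the full hour.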
If you change the job's interval from 1 hour to a lower value, the job may sometimes not finish within the interval and will be skipped on its next scheduled run; and if the archive threads are picked up on another node instead, that can cause problems. Check this KB article:
https://hi.service-now.com/kb_view.do?sys_kb_id=116b858e0ffd424098f7982be1050efe
You still have the option of running 10 batch jobs every hour at 100 records per batch job (1,000 records per run). Job performance depends on several factors, such as table size, indexes, query performance, and instance configuration. So before moving forward, I would recommend cloning production to a sub-production instance and checking the performance of the archive job there.
I hope this makes sense.
Thanks
Shruti
If the reply was informational, please like, mark as helpful or mark as correct!
‎09-05-2017 02:25 PM
Shruti's answer is correct. You can certainly deviate from the OOB settings successfully. The properties to tune are:
- glide.db.archive.max_iterations: Number of batches we will run per rule
- glide.db.archive.batch_size: Number of records to archive per batch
- glide.db.archive.sleep_time: How long to wait between batches
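A rough sizing check for those properties can be written down directly. The per-record cost and the 80% safety margin below are assumptions; measure the real per-record cost on a clone of production, as suggested above:

```python
# Hedged sizing sketch: given a measured per-record archive cost, check
# whether candidate property values keep a full run inside the hourly window.
# per_record_s and the 0.8 margin are ASSUMPTIONS for illustration.

def run_time_seconds(batch_size, max_iterations, per_record_s, sleep_s=1.0):
    """Estimated duration of one archiver run with the given properties."""
    return max_iterations * (batch_size * per_record_s + sleep_s)

def fits_in_window(batch_size, max_iterations, per_record_s,
                   window_s=3600, margin=0.8):
    """True if the run should finish within the safety margin of the window."""
    return run_time_seconds(batch_size, max_iterations, per_record_s) <= window_s * margin

# Example: at an assumed 50 ms/record, 500-record batches x 20 iterations
# archive 10,000 records per run in ~520 s, well inside the 1-hour window.
print(fits_in_window(500, 20, 0.05))
```

The point of the margin is exactly the doubling-up risk described below: leave enough headroom that growth in matching records or new archive rules does not push the run past the hour.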
However, there is a risk that the job will exceed the 1-hour execution time and start to double up. If the job doubles up, there is a risk of telescoping performance impact and eventual failure of some type, usually broken replication, though it has also caused general system performance degradation. Some customers have successfully balanced the performance of the job so that each run completes in under 1 hour. However, that balance may be thrown off by some factor that changes in the future, such as additional archive rules or an increased volume of records matching the archive conditions per hour.
If balancing the execution time is not successful for you, there is a scripted workaround listed in the KB article that can be applied to keep the job from running simultaneously on more than one node. If you apply the workaround, you don't need to worry about the job running on more than one node, and you will therefore avoid most of the performance issues that might be encountered. It is recommended to first try to balance the archiving activity through the properties, and then apply the workaround if needed.
‎09-06-2017 07:20 AM
Thank you, Matthew.
Can you provide the KB article? When I open the link, I get "Knowledge record not found".
‎09-11-2017 09:23 AM
Hi Robert,
It looks like the KB article is still in the "Draft" state, so it is not yet publicly visible. I will get it published.
Thanks!