Archive Rule - Not archiving complete batches

M_iA
Kilo Sage

We have an archive rule set up.

It's set up to archive cases closed before January 2022.

In the properties I have set it to 1000 records and 10 batches. However, when we run the archive rule, it never archives the full 1000 in a batch, and in some cases it archives 0!


This becomes quite time-consuming when you have 30k records to archive!

Would anyone know why this is happening?

Thanks in advance

5 REPLIES

Community Alums
Not applicable

Hi @M.iA 

By default an archive rule follows these processing rules:

  • Archives 100 records for each batch job
  • Sleeps 1 second between batch jobs
  • Runs 10 batch jobs in an archive run (every hour)

If you change the job interval from 1 hour to a lower value, the job may sometimes not have finished within that interval and so will not run; and if the archive threads get picked up to run on another node, that can cause problems.

You still have the option to run 10 batch jobs every hour at 100 records per batch job (1000 records per run). The performance of the job also depends on several factors such as the size of the table, indexes, query performance and instance configuration. So before moving forward, I would recommend cloning production to a sub-prod instance and checking the performance of the archive job there.
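If you want to confirm what the instance is actually using, a minimal background-script sketch along these lines reads the relevant properties (this assumes the standard glide.db.archive.* property names; the second argument to gs.getProperty supplies the documented default as a fallback):

// Background script sketch: report the archive batch settings the instance will use.
var batchSize  = gs.getProperty('glide.db.archive.batch_size', '100');    // records per batch job
var iterations = gs.getProperty('glide.db.archive.max_iterations', '10'); // batch jobs per archive run
gs.info('Archive batch size: ' + batchSize + ', batch jobs per run: ' + iterations);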

Mark my answer correct & Helpful, if Applicable.

Thanks,

Sandeep

Hi @Sandeep Dutta 

Thanks for the response. I reduced the records back to the default and checked the scheduled job to ensure the interval was set to an hour. It was.

I ran the archive rule again and the same issue occurred. There were still outstanding cases to be archived, but the batch numbers remained at 0 once it completed.

This led me to do a little digging, and I found that since the Paris release the archive process batches the records on the sys_archive_run_chunk table.

I looked at this table and filtered on the Rule Id, and found 37 batches that had thrown an error. Unfortunately, the batch record doesn't tell me what happened or why it errored.

Could this be why it's not picking up these records, because they are already contained within the records on sys_archive_run_chunk?

The question, then, is how to remediate whatever caused the batches to error, and whether simply deleting the records on sys_archive_run_chunk would work.
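For reference, a minimal background-script sketch along these lines is how the chunk table can be inspected. The column names used here (archive_rule, state) are assumptions based on the list view labels, so verify them against the sys_archive_run_chunk dictionary before running anything:

// Sketch: list errored chunks for one archive rule on sys_archive_run_chunk.
// Column names 'archive_rule' and 'state' are assumptions - check the table's
// dictionary entries on your instance and adjust before use.
var RULE_SYS_ID = 'REPLACE_WITH_YOUR_ARCHIVE_RULE_SYS_ID'; // placeholder

var chunk = new GlideRecord('sys_archive_run_chunk');
chunk.addQuery('archive_rule', RULE_SYS_ID); // assumed reference to the archive rule
chunk.addQuery('state', 'error');            // assumed value for failed chunks
chunk.query();
gs.info('Errored chunks for this rule: ' + chunk.getRowCount());
while (chunk.next()) {
    gs.info('Chunk ' + chunk.getUniqueValue() + ' created ' + chunk.getValue('sys_created_on'));
}

The sketch only reads the table; whether deleting the errored chunk records is actually safe is exactly the open question above.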

Hi @M_iA ,

I am facing a similar issue. Did you find any fix for this?

 

Thanks, 

Rishi.

Hi,

Any idea why the changed archive properties aren't reflected and the rule still executes 10 batches of 100 records?

We've changed them to:

glide.db.archive.max_iterations changed to 5

glide.db.archive.batch_size changed to 3000

 

Still OOTB results.
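In case it helps anyone answer, here is a minimal background-script sketch to confirm whether the overrides were actually saved in sys_properties and what values the platform resolves at runtime:

// Sketch: confirm the archive property overrides exist and what the platform resolves.
var prop = new GlideRecord('sys_properties');
prop.addQuery('name', 'IN', 'glide.db.archive.batch_size,glide.db.archive.max_iterations');
prop.query();
while (prop.next()) {
    gs.info('sys_properties: ' + prop.getValue('name') + ' = ' + prop.getValue('value'));
}
// What the platform actually resolves (null means the property record does not exist):
gs.info('Effective batch_size: ' + gs.getProperty('glide.db.archive.batch_size'));
gs.info('Effective max_iterations: ' + gs.getProperty('glide.db.archive.max_iterations'));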

Thanks.