
Fix script - Progress Controller was left in running state on startup

Priyanka_786
Tera Guru

Hi Community,

 

Hope you are doing well.

 

I am facing an issue with fix script execution in the production environment. The fix script got auto-cancelled, and the progress worker log shows the message "Progress Controller was left in running state on startup". Because of this, the data was not updated as expected.

 

The same script was tested in the pre-prod environment and ran as expected without this error, so there is nothing wrong with the code logic. One difference is the data count, which is greater in prod. I want to understand the cause of this behavior. Has anyone encountered this problem? If yes, any leads would be highly appreciated.

 

Regards,

Priyanka Salunke


5 REPLIES

Ankur Bawiskar
Tera Patron

@Priyanka_786 

Does your fix script update records with setWorkflow(false)?

Share the script.

Try to run the fix script in chunks.

Example: if you know there will be 50k records, run 5 fix scripts, one per batch of 10k records (see the sketch below).
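
For illustration, a minimal sketch of one such chunked fix script (the 'incident' table, the encoded query, and the field being set are hypothetical placeholders, not taken from your script). The query should exclude records already fixed by earlier chunks, so the script can simply be re-run until a run finds nothing left to update:

var BATCH_SIZE = 10000; // chunk size; tune per table as discussed in this thread
var gr = new GlideRecord('incident');                       // hypothetical target table
gr.addEncodedQuery('active=true^short_descriptionISEMPTY'); // hypothetical filter that skips already-fixed records
gr.setLimit(BATCH_SIZE);                                    // process at most one chunk per execution
gr.query();
var processed = 0;
while (gr.next()) {
    gr.setWorkflow(false);                                  // skip business rules/workflows only if that is intended
    gr.short_description = 'Backfilled by fix script';      // hypothetical field update
    gr.update();
    processed++;
}
gs.info('Chunk complete: ' + processed + ' records updated');

Each execution then does a bounded amount of work, so individual runs stay short instead of one long-running pass.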

If my response helped please mark it correct and close the thread so that it benefits future readers.

Regards,
Ankur
Certified Technical Architect  ||  9x ServiceNow MVP  ||  ServiceNow Community Leader

Shivalika
Mega Sage

Hello @Priyanka_786 

 

Yes, this likely happened because of the data size. At some point in the middle of the update, the transaction would have timed out, but the fix script's progress worker was still marked as running, which leads to this error message.

 

So yes, it is always recommended to update production data in chunks. The chunk size is not a fixed 10k; it depends on the size of each record, so that a single chunk does not exceed roughly 100 MB of updated data. If you are updating a very large table such as cmdb_ci, reduce the chunk further, to perhaps 5k; for a smaller table like the user table, 10k is fine.
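
As a rough worked example of that sizing rule (the per-record sizes here are assumptions for illustration, not measured values): if an average cmdb_ci record comes to about 20 KB, then 100 MB / 20 KB ≈ 5,000 records per chunk; for a lighter table such as sys_user at roughly 10 KB per record, 100 MB / 10 KB ≈ 10,000 records per chunk, which lines up with the 5k and 10k figures above.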

 

Kindly mark my answer as helpful and accept the solution if it helped you in any way. This will help me be recognized for the effort and also move this question from the unsolved to the solved bucket.

 

Regards,

 

Shivalika 

 

My LinkedIn - https://www.linkedin.com/in/shivalika-gupta-540346194

 

My youtube - https://youtube.com/playlist?list=PLsHuNzTdkE5Cn4PyS7HdV0Vg8JsfdgQlA&si=0WynLcOwNeEISQCY

Priyanka_786
Tera Guru

@Shivalika and @Ankur Bawiskar: Thank you for your prompt responses, and sorry for the late reply.

 

After further investigation with the admin team, it turned out to be a node failure: the node had been stuck for a long time. Restarting the node resolved the issue.

 

Regards,

Priyanka Salunke

 

Hi Priyanka,

 

Which node are you referring to here for this issue? I have tried restarting the MID Server, but I get the same error every time; the record count increases by only 100-200 and it never processes completely.