
Import not processing all import rows

SimonL898794821
Mega Sage

Hi!

I have a question about imports. I've noticed that the same import processes a different number of import rows in different environments, even though I fetch my data from the same source.

In my development instance, I receive 30,000 import set rows and all 30,000 are processed.
In production, I also receive 30,000 import set rows, but only 3,700 are processed. Every time I run the transform, it stops at 3,700 rows. Why is this? It causes an issue where records are not updated correctly.

I have an ETL robust transformer for the transform, if that matters.

Thankful for any help. @Ankur Bawiskar 

3 REPLIES

Weird
Mega Sage

Well, since you're using ETL you are restricted quite a lot by some configurations, such as the IRE (Identification and Reconciliation Engine) rules. The transform tries its best to honor any rules that are defined.
What do your imports tell you about the processed and unprocessed rows?
Make sure all your IRE configurations are identical in development and production.
Also check your ETL mappings to make sure they're correct in both environments.
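If it helps, a background script along these lines can show how many rows ended up in each state after the transform. This is just a sketch: the import set sys_id is a placeholder you'd replace with your own, and it assumes the standard sys_import_set_row staging fields (sys_import_set, sys_import_state).

```javascript
// Count import set rows per state for one import set (background script sketch).
// Replace YOUR_IMPORT_SET_SYS_ID with the sys_id of the affected import set.
var agg = new GlideAggregate('sys_import_set_row');
agg.addQuery('sys_import_set', 'YOUR_IMPORT_SET_SYS_ID');
agg.addAggregate('COUNT');
agg.groupBy('sys_import_state');
agg.query();
while (agg.next()) {
    gs.info(agg.getValue('sys_import_state') + ': ' + agg.getAggregate('COUNT'));
}
```

If production shows a large count stuck in Pending while development shows everything as Processed, that points at the transform stopping early rather than at the data source.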

Thanks for your answer. I compared the IRE configurations and the ETL mappings, and they look the same.
What I can see is that the rows that were not processed are in the Pending state, so it seems like the transform just gets stuck somewhere. I have read community articles about this, but there don't seem to be any answers.

I've had an issue with ETL where the process gets canceled if there are too many records/queries going through it. In my case we had to query data from Red Hat and then make an additional query. The script ran perfectly fine as a scheduled job, but the ETL transform was canceled after running for too long, because the queries take a lot of time and Red Hat doesn't provide filters to limit the elements returned (each object is huge).
I saw that the rows showed up normally in the import set as "pending", but when the transform was canceled it removed all the rows without processing any, so it's not exactly like your case.
I changed my data source script from rest.execute() to rest.executeAsync() and it helped. I also had multiple jobs running one after the other and they overlapped, so I changed the run times so that each job could finish before the next one started and the Red Hat server didn't get swamped with queries.
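For anyone hitting the same thing, the change described above boils down to switching the outbound call in the data source script from the synchronous execute() to executeAsync() and then explicitly waiting for the response. A rough sketch, where the endpoint URL, HTTP method and 300-second timeout are example values only:

```javascript
// Sketch of the data source script change: asynchronous REST call with an explicit wait.
var rest = new sn_ws.RESTMessageV2();
rest.setEndpoint('https://example.redhat.api/endpoint'); // placeholder endpoint
rest.setHttpMethod('get');

// executeAsync() sends the request on a worker thread instead of blocking,
// which avoids the long-running-transaction cancellation seen with execute().
var response = rest.executeAsync();
response.waitForResponse(300); // seconds to wait before giving up

if (response.getStatusCode() == 200) {
    var body = response.getBody();
    // ... parse body and insert rows into the import set here ...
} else {
    gs.error('REST call failed with status ' + response.getStatusCode());
}
```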