Trouble with Scheduled Imports of a large number of 10k-row .csv files
4 weeks ago - last edited 4 weeks ago
Hi folks,
I'm having an issue that none of my colleagues have been able to solve, and I wondered if anyone in the community might be able to offer some advice.
Context:
- I need to load a .csv of ~600,000 rows (>1 GB) into a custom table from a MID Server host.
- The file is split into parts of no more than 10,000 rows each.
- These parts are imported from the MID Server host into ServiceNow as data sources.
- I've set these up as Scheduled Imports so that they're transformed and loaded sequentially: the first is executed, the second is set to run after its parent (the previous import), and so on (a chain-check sketch follows this list).
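To verify the chain is intact, I run something like the following background script. This is a sketch only: it assumes the imports live on the out-of-box scheduled_import_set table linked through its parent reference, and 'CSV part 60' is a hypothetical name for the last part.

```javascript
// Sketch: walk the scheduled-import chain upward from the last part and
// confirm every link resolves to its parent. Assumes the out-of-box
// scheduled_import_set table and 'parent' reference field; 'CSV part 60'
// is a hypothetical record name.
var current = new GlideRecord('scheduled_import_set');
if (current.get('name', 'CSV part 60')) {
    var hops = 0;
    while (current.getValue('parent')) {
        var parentGr = new GlideRecord('scheduled_import_set');
        if (!parentGr.get(current.getValue('parent'))) {
            gs.info('Broken link: ' + current.getValue('name') + ' points at a missing parent');
            break;
        }
        hops++;
        current = parentGr;
    }
    gs.info('Reached chain root "' + current.getValue('name') + '" after ' + hops + ' hops');
}
```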
Getting the files from the MID Server host into ServiceNow isn't the problem; the trouble is getting everything to actually finish transforming and loading.
The issue is that no matter what I try, the scheduled imports run for a while, maybe 28 files or so, and then no further scheduled imports are executed.
If I restart the chain from where it stopped, I always end up with duplicate records, even though the coalesce configuration in my transform maps should prevent that (see the duplicate-check sketch at the end of this post). It's very strange, and I can't figure out what's going on.
I'm guessing there's something fundamentally wrong with the approach here, but I don't know much about alternatives that would allow me to automate importing such a large file.
Any ideas what I'm doing wrong, or what I can do to find out why these scheduled imports keep stopping?
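For reference, this is roughly the background-script check I use to confirm the duplicates after a restart; u_my_table and u_coalesce_key are placeholders for my custom table and the field the transform map coalesces on.

```javascript
// Sketch: list coalesce-key values that occur more than once in the target
// table. 'u_my_table' and 'u_coalesce_key' are placeholders for the custom
// table and its coalesce field.
var ga = new GlideAggregate('u_my_table');
ga.addAggregate('COUNT');
ga.groupBy('u_coalesce_key');
ga.addHaving('COUNT', '>', '1');
ga.query();
while (ga.next()) {
    gs.info(ga.getValue('u_coalesce_key') + ' appears ' + ga.getAggregate('COUNT') + ' times');
}
```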
4 weeks ago
How many errors do you have within the import before it stops?
Please check the import/export properties (/now/nav/ui/classic/params/target/%24impex_properties.do). Out of the box, the error limit is set to 100, 'ignore non-parseable lines' is false, and the charset is windows-1252. I don't think the charset will be the issue, since other files import as expected, but if you run into non-parseable lines or cross the error limit, that could explain why no further imports run.
Beyond that: check the logs. Maybe you have runtime errors? 1 GB of incoming data could cause those, and since we don't know the data you're importing, something could also be wrong with the coalesce fields. A quick way to check is to summarise the staging rows per state, as in the sketch below.
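Something like this background-script sketch will group the staging rows by state so that error or ignored rows stand out; it assumes the out-of-box sys_import_set_row fields, and the sys_id is a placeholder for one of your import sets.

```javascript
// Sketch: count sys_import_set_row records per state for one import set.
// Replace the placeholder with the sys_id of an import set from your chain.
var ga = new GlideAggregate('sys_import_set_row');
ga.addQuery('sys_import_set', '<import set sys_id>');
ga.addAggregate('COUNT');
ga.groupBy('sys_import_state');
ga.query();
while (ga.next()) {
    gs.info(ga.getValue('sys_import_state') + ': ' + ga.getAggregate('COUNT') + ' rows');
}
```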
Please mark any helpful or correct solutions as such. That helps others find their solutions.
Mark
4 weeks ago
Not sure if my previous reply was sent, sorry.
- No errors on any of the import sets, before or after the break/restart.
- When I load manually, the coalesce is working as expected.
- The data is definitely parseable, since I clean and validate it on the server before loading.
- Looking through the logs, nothing jumps out at me, sorry!
Any idea what I should be looking for in the logs? Might be a daft question; I've just never come across an issue like this before!
4 weeks ago - last edited 4 weeks ago
Hey, thanks for your help Mark.
There are no errors on any of the imports that run automatically, and none on the ones that follow after I restart. Bizarrely, if I load one of the .csv files manually via /create_import_set.do, the coalesce works as expected (the data has already been cleaned to ensure the files are parseable). Do you know what I should be looking for in the logs? I had a look just now and nothing jumped out at me. Sorry if it's a daft question; I've just not come across an issue like this before.
Afraid I can't share any particulars about the data due to its sensitivity.