
Reduce overhead: Build individual Data Sources for each Transform Map?

irepoman
Giga Contributor

I have one incoming file containing data that will eventually be conveyed into 10 separate tables, although I could ask the Vendor to split the data into 10 individual files. The two options I see are:

 

1. I could build one Data Source with 10 child Transform Maps so that the file is only read into an Import Table once and transformed from there.

 

2. I could ask the Vendor to send me 10 separate files and build 10 individual Data Sources, one for each Table/Transform Map, so that upkeep down the road is simpler.

 

* I don't expect the data volume to be high enough for the import table to take a heavy read-query load.

 

With all that said, do either of these options or any other approach provide a lower overhead?

2 Replies

Nayan ArchX
Tera Guru

Hi irepoman,

 

Great question — this is a classic Import Set architecture decision in ServiceNow, and you’re already thinking about it the right way.

Short answer first:

👉One Data Source + one Import Set table + multiple Transform Maps is usually the lower-overhead and more scalable approach.

 

Option 1 — ONE Data Source → ONE Import Table → MANY Transform Maps

(Recommended in most enterprise cases: the file is loaded once, every row lands in a single staging table, and there is only one Data Source and one import schedule to maintain.)

 

Option 2 — TEN Data Sources → TEN Import Tables → TEN Transform Maps

(Ten loads, ten staging tables, and ten schedules to keep in sync: more moving parts for the same data.)

 

👉Option #1 is the lower-overhead, more maintainable choice in most cases.

Only choose Option #2 if:

  • Vendor absolutely cannot provide a unified file

  • Each dataset is owned by different systems

  • Data arrives at different times

Otherwise, keep it unified.
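To make the Option 1 pattern concrete, here is a minimal plain-JavaScript sketch (not the actual Glide API; the table and field names are invented for illustration) of the idea: the file is staged once, and each Transform Map is just a separate field mapping run over the same staging rows.

```javascript
// One staging table, loaded once from the vendor file.
// Field names here are hypothetical examples.
const stagingRows = [
  { name: 'srv-01', ip: '10.0.0.1', owner: 'alice', cost_center: 'CC100' },
  { name: 'srv-02', ip: '10.0.0.2', owner: 'bob',   cost_center: 'CC200' },
];

// Each "transform map" is a field mapping from the shared staging row to
// one target table. In ServiceNow these would be separate Transform Map
// records attached to the same Import Set table.
const transformMaps = {
  server_table: row => ({ name: row.name, ip_address: row.ip }),
  owner_table:  row => ({ ci: row.name, owned_by: row.owner }),
  cost_table:   row => ({ ci: row.name, cost_center: row.cost_center }),
};

// The file is read once; every map runs over the same staged rows.
function runAllMaps(rows, maps) {
  const targets = {};
  for (const [table, mapFn] of Object.entries(maps)) {
    targets[table] = rows.map(mapFn);
  }
  return targets;
}

const result = runAllMaps(stagingRows, transformMaps);
console.log(Object.keys(result).length); // 3 target tables from one load
```

The point of the sketch is the shape of the work: one load, N mappings, versus Option 2's N loads and N mappings.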

 

If my response has resolved your query, please consider giving it a thumbs up and marking it as the correct answer!

 

Thanks

Nayan Patel

IT ServiceNow Consult, ServiceNow ArchX


 


Vaibhav Chouhan
Tera Guru

I’ve done it both ways before, and honestly, if the volume isn’t high, the overhead difference is usually not noticeable.

In your case, I’d just stick with one Data Source and one import table with multiple Transform Maps. The file gets loaded once, everything sits in the same staging table, and it’s easier to monitor and re-run if something goes wrong. From a support standpoint, that tends to be simpler long term.

Yes, each transform map will evaluate the rows, but unless you’re processing a huge file or doing heavy scripting in the transforms, it’s not really a performance issue. Most of the time, coalesce logic and transform scripts matter more than how many Data Sources you have.
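Since coalesce logic is usually what matters most for performance and correctness, here is a small plain-JavaScript sketch (again, not the Glide API; field names are invented) of what a coalesce field does during a transform: match on the key, update if found, insert if not.

```javascript
// Hypothetical sketch of coalesce behavior in a transform:
// if a target record with the same coalesce key exists, update it;
// otherwise insert a new record.
function coalesceTransform(targetTable, stagingRows, mapFn, coalesceField) {
  for (const row of stagingRows) {
    const rec = mapFn(row);
    const existing = targetTable.find(t => t[coalesceField] === rec[coalesceField]);
    if (existing) {
      Object.assign(existing, rec); // coalesce match -> update in place
    } else {
      targetTable.push(rec);        // no match -> insert
    }
  }
  return targetTable;
}

// Re-running the import with a changed value updates rather than duplicates.
const target = [];
const mapFn = row => ({ name: row.name, ip_address: row.ip });
coalesceTransform(target, [{ name: 'srv-01', ip: '10.0.0.1' }], mapFn, 'name');
coalesceTransform(target, [{ name: 'srv-01', ip: '10.0.0.9' }], mapFn, 'name');
console.log(target.length); // 1 record, updated in place
```

This is why a well-chosen coalesce field (and lean transform scripts) has far more impact than whether you have one Data Source or ten.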

I’d only split it into 10 separate files if the datasets arrive at different times, are owned by different teams, or are expected to grow a lot. Otherwise, it just adds more moving parts without much benefit.

So, for what you described, I’d keep it as a single Data Source and multiple Transform Maps. It keeps things straightforward and easier to manage.