Transform script vs Flow Designer
01-05-2023 10:06 AM
Hi,
I was hoping to get some advice. I have been tasked with accepting an inbound email with an attached CSV on a weekly basis, containing modifications for a certain table. I built two ways to retrieve the results I need, but I wanted to confirm which one would be the better approach.
The CSV can range between 100 and 3,000 records at a time, depending on the week.
My first process used an inbound action and a transform script, which meant I had to script out the entire process.
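For anyone curious what "scripting out the entire process" involves, here is a minimal sketch of the kind of CSV parsing such a script has to do by hand. It assumes the attachment body has already been read into a string (in ServiceNow that would come from the attachment APIs); the naive `split(',')` does not handle quoted commas, and the field names are just examples.

```javascript
// Parse a CSV string into an array of row objects keyed by header.
// Assumes: no quoted commas, first line is the header row.
function parseCsv(text) {
    var lines = text.trim().split('\n');
    var headers = lines[0].split(',').map(function (h) { return h.trim(); });
    return lines.slice(1).map(function (line) {
        var values = line.split(',');
        var row = {};
        headers.forEach(function (h, i) {
            row[h] = (values[i] || '').trim();
        });
        return row;
    });
}

// Example input with hypothetical field names:
var rows = parseCsv('number,cost_center\nREC001,CC-100\nREC002,CC-200');
```

Every edge case (quoting, encodings, bad rows) lands on you with this approach, which is exactly the maintenance burden the Flow Designer route avoids by using a data source and import set.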
The second process used Flow Designer. I triggered the flow off the inbound email, retrieved the attachment from the email, created a data source record, and then loaded the data from the CSV into an import set so that I could easily configure the data before moving it to a specific table. To configure each line of the CSV, I look up records from the import set table where the data was loaded, set the max results to 5,000 (just to give myself headroom), and then use a For Each loop to iterate over the records.
I found the second process to be much simpler, but I wanted to know if doing it this way causes any issues down the line.
Any advice would be appreciated.
Thank you
01-05-2023 10:27 AM
First of all, hats off for building out both options. 👏 They both sound good to me, but I certainly prefer the Flow Designer one, as I will typically vote for simplicity and easier maintainability over old habits 🙂
Depending on how you scripted the first option, though, I could imagine it being more optimal performance-wise. Have you noticed any meaningful difference between the two processes with larger datasets?
01-05-2023 11:06 AM
Thank you, @Laszlo Balla!
I have only tested with 20 records so far, but I will be testing with a list of 1,000, so I will know then if there is a difference.
01-05-2023 10:42 AM
Agree with Laszlo here, but I don't understand the need to look up each row in the Flow Designer option. Shouldn't all of that logic be handled in the transform map? Or am I reading it wrong?
I've built a similar process, but with a CSV file coming from a RITM. I also used Flow Designer, a data source, and transform maps. It worked like a charm, and I was also able to share the result of each upload, e.g. how many records were inserted, updated, etc.
Best regards,
Sebastian Laursen
01-05-2023 10:47 AM
Using the transform map, I would have to take the data as-is in the import set or create a transform script. We have reference fields on the final table; one of them is the cost center. If the CSV contains a new cost center that is not in our ServiceNow Cost Center table, then when the data is loaded through the transform map alone, that field will be empty.
Looping through the records lets me validate against the Cost Center table first and make changes accordingly.
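The validation described above could be sketched roughly like this. The in-memory `costCenters` array stands in for a query against the Cost Center table (in a transform or flow script that would be a GlideRecord lookup instead of an array scan), and the field names and the `needs_review` flag are hypothetical illustrations, not the actual configuration.

```javascript
// Stand-in for the Cost Center table; in ServiceNow this data would
// come from a GlideRecord query, not an in-memory array.
var costCenters = [
    { sys_id: 'a1b2c3', name: 'CC-100' },
    { sys_id: 'd4e5f6', name: 'CC-200' }
];

// Return the sys_id of a cost center by name, or null if it is unknown.
function resolveCostCenter(name) {
    for (var i = 0; i < costCenters.length; i++) {
        if (costCenters[i].name === name) {
            return costCenters[i].sys_id;
        }
    }
    return null; // unknown cost center: decide to skip, create, or flag it
}

// Prepare one import-set row for the target table: resolve the reference
// field and flag rows whose cost center does not exist yet, instead of
// silently leaving the reference empty.
function prepareRow(row) {
    var sysId = resolveCostCenter(row.cost_center);
    return {
        number: row.number,
        cost_center: sysId,           // null when the lookup fails
        needs_review: sysId === null  // surface the miss for follow-up
    };
}

var known = prepareRow({ number: 'REC001', cost_center: 'CC-100' });
var unknown = prepareRow({ number: 'REC002', cost_center: 'CC-999' });
```

Note that the same check could also live in a transform map field-level script (as suggested above), which avoids the separate look-up-and-loop step in the flow while keeping the validation in one place.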