Hi All,
Currently I am working on a Nexthink integration that will send data for the software installation table, along with workstation data for which a cmdb_ci_computer record needs to be created if one is not already present.
I have created a Scripted REST API, but I need to know whether we should include a staging table and then transform the data into the actual tables (the data volume could be quite high, since it is discovery data). Will that impact system performance? Or should I process the data in the Scripted REST API itself and write directly to the target tables with the appropriate logic, without a staging table, or take a scheduled job approach?
Another question: since the data relates to two tables, should I ask them to provide it in a single payload, or should I get the device and software data separately? (And should I use two staging tables in that case?)
If someone could advise me on these points, it would be much appreciated.
Thanks,
Akshay Jugran
Hi @Akshay Jugran -
Staging tables are a great way to handle high-volume data. They prevent performance issues on the main tables while you’re processing the data. They also allow you to validate the data before you transform it. And they make it easy to retry failed transformations and handle errors. Staging tables are a standard practice for bulk integrations.
You can use two staging tables: one for computer data and one for software installation data. This would make it easier to manage, validate, and troubleshoot.
Also, keep these performance-related aspects in mind (see the sketch after this list):
- Batch process in chunks (500-1000 records).
- Use setWorkflow(false) and autoSysFields(false) for staging tables.
- Process during off-peak hours.
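For illustration, here is a minimal sketch of such a scheduled transform, assuming a hypothetical staging table u_nexthink_computer_import with u_name, u_serial_number and a u_state field (these names are placeholders, not out-of-box tables):

```javascript
// Scheduled Script Execution: transform staged Nexthink computer data in chunks.
// Table and field names are placeholders - adjust them to your staging design.
var BATCH_SIZE = 500; // chunk size per run, per the guidance above

var staged = new GlideRecord('u_nexthink_computer_import');
staged.addQuery('u_state', 'pending');
staged.setLimit(BATCH_SIZE);
staged.query();

while (staged.next()) {
    // Create the computer CI only if it is not already present.
    var ci = new GlideRecord('cmdb_ci_computer');
    ci.addQuery('serial_number', staged.getValue('u_serial_number'));
    ci.query();
    if (!ci.next()) {
        ci.initialize();
        ci.setValue('name', staged.getValue('u_name'));
        ci.setValue('serial_number', staged.getValue('u_serial_number'));
        ci.insert();
    }

    // Flag the staging row as processed; suppress engines on the staging update itself.
    staged.setWorkflow(false);
    staged.setValue('u_state', 'processed');
    staged.update();
}
```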
Your flow should be:
- Nexthink → REST API → Staging Tables
- Scheduled Job → Transform → Target Tables (cmdb_ci_computer, software installation)
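And a sketch of the Scripted REST API resource that lands the incoming payload in the staging tables first; the payload shape and the u_nexthink_* table/field names are assumptions for illustration only:

```javascript
(function process(/*RESTAPIRequest*/ request, /*RESTAPIResponse*/ response) {
    // Assumed payload shape: { "computers": [ {...} ], "software": [ {...} ] }
    var body = request.body.data;
    var counts = { computers: 0, software: 0 };

    (body.computers || []).forEach(function (comp) {
        var gr = new GlideRecord('u_nexthink_computer_import'); // placeholder staging table
        gr.initialize();
        gr.setWorkflow(false);    // per the guidance above: skip engines on staging inserts
        gr.autoSysFields(false);
        gr.setValue('u_name', comp.name);
        gr.setValue('u_serial_number', comp.serial_number);
        gr.setValue('u_state', 'pending');
        gr.insert();
        counts.computers++;
    });

    (body.software || []).forEach(function (sw) {
        var gr = new GlideRecord('u_nexthink_software_import'); // placeholder staging table
        gr.initialize();
        gr.setWorkflow(false);
        gr.autoSysFields(false);
        gr.setValue('u_software_name', sw.name);
        gr.setValue('u_version', sw.version);
        gr.setValue('u_computer_serial', sw.computer_serial);
        gr.setValue('u_state', 'pending');
        gr.insert();
        counts.software++;
    });

    response.setStatus(200);
    response.setBody({ result: 'staged', counts: counts });
})(request, response);
```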
As mentioned, considering these are critical tables, I would recommend populating the data in a staging table first and then moving it to the target tables.
As part of your transform, make sure to apply IRE (Identification and Reconciliation Engine) on the import sets so that no duplicate CIs are created.
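If you create the CIs from script rather than through a classic transform map, the IRE can also be called directly; a hedged sketch, assuming 'Nexthink' has been registered as a discovery source choice and that serial number is sufficient for identification:

```javascript
// Push the computer through the Identification and Reconciliation Engine (IRE)
// so the CMDB identification rules decide whether the CI already exists.
var payload = {
    items: [{
        className: 'cmdb_ci_computer',
        values: {
            name: 'LAPTOP-001',          // example values - in practice taken from the staging row
            serial_number: 'SN-12345'
        }
    }]
};

// 'Nexthink' is an assumed discovery_source value; it must exist as a choice
// on that field before IRE will accept it.
var result = sn_cmdb.IdentificationEngine.createOrUpdateCI('Nexthink', JSON.stringify(payload));
gs.info('IRE result: ' + result);
```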
As per community guidelines, you can accept more than one answer as an accepted solution. If my response helped to answer your query, please mark it helpful and accept the solution.
Thanks,
Bhuvan
Thanks, this is what I wanted to know.
I also have a follow-up question: if we are using two staging tables, should I have a separate resource path for each table, or should I process both in the same payload?
Thanks for the reply.