Nexthink Integration

Akshay Jugran
Tera Expert

Hi All,

Currently I am working on a Nexthink integration, which will be sending data for the Software Installation table, as well as data about the workstation (cmdb_ci_computer); a record needs to be created if one is not already present.

So I have created a Scripted REST API, but I need to know whether we should include a staging table and then transform the data into the actual tables (the record count could be a lot higher since it would be discovery data). Will that impact system performance? Also, should I process the data in the Scripted REST API itself, sending it to the target tables directly with my own logic and no staging table, or should I take a scheduled job approach?
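For reference, this is roughly what the Scripted REST API resource looks like today; it is only a simplified skeleton, and the payload field names and the commented-out staging table (u_nexthink_computer_import) are placeholders, not final names:

```javascript
(function process(/*RESTAPIRequest*/ request, /*RESTAPIResponse*/ response) {

    // Deserialized JSON body sent by Nexthink (field names are placeholders)
    var body = request.body.data;
    var devices = body.devices || [];
    var software = body.software || [];

    // Open question: insert these rows into staging tables here,
    // or apply the cmdb_ci_computer / software installation logic directly?
    // A staging insert would look roughly like:
    //   var stage = new GlideRecord('u_nexthink_computer_import'); // placeholder table
    //   stage.initialize();
    //   stage.u_name = devices[i].name;
    //   stage.insert();

    response.setStatus(202);
    return { devices_received: devices.length, software_received: software.length };

})(request, response);
```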

 

Another question: since the data relates to two tables, should I ask them to provide it in a single payload, or get the device data and the software data separately? (And in that case, should I use two staging tables?)

 

It would be great if someone could advise me on these points.

 

Thanks,

Akshay Jugran

2 ACCEPTED SOLUTIONS

Chavan AP
Kilo Sage

Hi @Akshay Jugran,

 

 

Staging tables are a great way to handle high-volume data. They prevent performance issues on the main tables while you’re processing the data. They also allow you to validate the data before you transform it. And they make it easy to retry failed transformations and handle errors. Staging tables are a standard practice for bulk integrations.

 

You can use two staging tables: one for computer data and one for software installation data. This would make it easier to manage, validate, and troubleshoot.

 

In addition, consider these performance-related points (a rough sketch follows the list):

 

- Batch process in chunks (500-1000 records).

- Use setWorkflow(false) and autoSysFields(false) for staging tables.

- Process during off-peak hours.
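For illustration, a minimal sketch of what the chunked processing in a scheduled job could look like; the staging table name (u_nexthink_computer_import), the u_processed flag, and the field mappings are placeholders for whatever you actually define:

```javascript
// Scheduled Script Execution: process staged Nexthink rows in small batches
var BATCH_SIZE = 500; // keep chunks in the 500-1000 range

var stage = new GlideRecord('u_nexthink_computer_import'); // placeholder staging table
stage.addQuery('u_processed', false);                      // placeholder "done" flag
stage.setLimit(BATCH_SIZE);                                // one chunk per run
stage.query();

while (stage.next()) {
    // Look up the computer CI by serial number; create it if it does not exist.
    // (Direct GlideRecord upsert shown for brevity; for CMDB classes the
    // Identification and Reconciliation Engine is generally preferred.)
    var ci = new GlideRecord('cmdb_ci_computer');
    ci.addQuery('serial_number', stage.getValue('u_serial_number'));
    ci.query();
    var exists = ci.next();
    if (!exists) {
        ci.initialize();
        ci.serial_number = stage.getValue('u_serial_number');
    }
    ci.name = stage.getValue('u_name');

    ci.setWorkflow(false);   // skip business rules/notifications during the bulk load
    ci.autoSysFields(false); // leave sys_updated_* fields untouched
    if (exists) {
        ci.update();
    } else {
        ci.insert();
    }

    // Mark the staging row as done so the next run picks up the next chunk
    stage.u_processed = true;
    stage.setWorkflow(false);
    stage.update();
}
```

Run the job repeatedly (for example every few minutes during off-peak hours) until no unprocessed staging rows remain.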

 

Your flow should be:

  1. Nexthink → REST API → Staging Tables
  2. Scheduled Job → Transform → Target Tables (cmdb_ci_computer, software installation)

 

 

Glad I could help! If this solved your issue, please mark it as ✅ Helpful and ✅ Accept as Solution so others can benefit too.

Chavan A.P. | Technical Architect | Certified Professional


@Akshay Jugran 

 

As mentioned, considering this is a critical table, I would recommend populating the data into a staging table first and then moving it to the target tables.

 

As part of your transform, make sure to apply IRE (Identification and Reconciliation Engine) on the import sets so that no duplicate CIs are created.

 

https://www.servicenow.com/docs/bundle/zurich-servicenow-platform/page/product/configuration-managem...
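As a rough sketch (the field names are placeholders, and 'Nexthink' is assumed to be registered as a discovery source on your instance), an onBefore transform script could hand the row to IRE like this:

```javascript
// Build an IRE payload for one staged computer record (field names are placeholders)
var payload = {
    items: [{
        className: 'cmdb_ci_computer',
        values: {
            name: source.u_name.toString(),
            serial_number: source.u_serial_number.toString()
        }
    }]
};

// Let the Identification and Reconciliation Engine decide whether to create a
// new CI or update an existing one, so no duplicate CIs are created.
var output = sn_cmdb.IdentificationEngine.createOrUpdateCI('Nexthink', JSON.stringify(payload));
gs.info('IRE result: ' + output);
```

If you take this route, you would typically also set ignore = true on the row so the transform map itself does not write directly to the target table.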

 

As per community guidelines, you can accept more than one answer as an accepted solution. If my response helped answer your query, please mark it helpful and accept the solution.

 

Thanks,

Bhuvan


6 REPLIES


Thanks, this is what I wanted to know.

I also have a follow-up question.

 

If we are using two staging tables, should I have two different resource paths, one for each table, or should I process everything in the same payload?

Thanks for the reply.