Removing records when they are removed from the Data Source

08-13-2009 12:36 PM
Does anyone know of a good way to delete or deactivate a record if it no longer shows up in the data source?
For example, I am importing group memberships, and if someone is no longer a group member in the data source, how can I have them removed from the group in Service-now?
I've thought of deleting all records and reinserting, but that would leave a window with no data and just seems very inefficient. Any thoughts?
Thanks

08-16-2009 10:50 PM
It's the classic issue: how do you tell the system to delete data that isn't present in the update?
Perhaps you could introduce an Active flag in your data source, and then have a script remove the memberships flagged inactive.
Another option would be a separate cleanup batch job that checks the last-updated field on memberships and deletes the ones that weren't updated. (Though, from memory, I'm not sure an imported row with no field changes actually touches the updated field.)
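As a rough illustration, here is a minimal sketch of that cleanup job. It assumes memberships live in sys_user_grmember, that the import does touch sys_updated_on on every run (the caveat above), and that a 7-day window is acceptable; all three are assumptions, not part of the original suggestion.
```javascript
// Scheduled Job script (sketch): delete group memberships the import
// hasn't touched recently. Assumes the import updates sys_updated_on
// on every run; the 7-day window is an arbitrary example value.
var stale = new GlideRecord('sys_user_grmember');
stale.addQuery('sys_updated_on', '<', gs.daysAgo(7));
stale.query();
while (stale.next()) {
    gs.log('Removing stale membership: ' + stale.user.getDisplayValue() +
        ' from ' + stale.group.getDisplayValue());
    stale.deleteRecord();
}
```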
Good luck.

04-03-2015 04:07 AM
I have implemented a Data Feed Management solution here at Sky. One of the things it tackles is dealing with items that drop off imported data feeds.
It does this by associating a row on an SN table with a feed row (by sys_id), using a custom table (u_datafeed_management). When a row arrives on a feed, it notes the date/time as the "Last Seen" date, along the lines of the sketch below.
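A minimal sketch of that "Last Seen" stamping as a transform map onAfter script; the field names on u_datafeed_management (u_feed, u_target_record, u_last_seen) are illustrative assumptions, not the actual Sky implementation.
```javascript
// Transform map onAfter script (sketch): stamp the "Last Seen"
// date/time for this feed/row association. The field names on
// u_datafeed_management are assumptions.
(function runTransformScript(source, map, log, target) {
    var assoc = new GlideRecord('u_datafeed_management');
    assoc.addQuery('u_feed', map.sys_id);             // Data Source/Transform Map combination
    assoc.addQuery('u_target_record', target.sys_id); // the associated SN row
    assoc.query();
    if (assoc.next()) {
        assoc.setValue('u_last_seen', new GlideDateTime());
        assoc.update();
    } else {
        assoc.initialize();
        assoc.setValue('u_feed', map.sys_id);
        assoc.setValue('u_target_record', target.sys_id);
        assoc.setValue('u_last_seen', new GlideDateTime());
        assoc.insert();
    }
})(source, map, log, target);
```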
On a separate part of the solution, we have a scheduled job that runs daily and uses a table of rules. Each feed (Data Source/Transform Map combination) has a rule that defines how long a row from that feed should be retained after it was last seen on that feed (the deletion tolerance). The scheduled job looks for Datafeed Management rows from each feed where the date/time is older than the current date minus the tolerance, and then takes action against those rows.
Because we use this solution mostly against CMDB entries (CIs), we don't use it to trigger deletion of rows (that's against CMDB principles); instead we use it to change the status of the CI as required. If the CI subsequently appears on the feed again, the status can be changed back rather than creating a duplicate.
This practice relies upon always importing all rows, rather than just importing deltas.
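A sketch of that daily cleanup job under the same assumed schema, plus an assumed rules table (u_datafeed_rule with u_feed and u_tolerance_days); the install_status value of 7 for Retired is the out-of-box choice, but verify it on your instance.
```javascript
// Daily Scheduled Job (sketch): for each feed's retention rule, find
// associations not seen within the tolerance and retire the CI rather
// than deleting it. Table and field names here are assumptions.
var rule = new GlideRecord('u_datafeed_rule');
rule.query();
while (rule.next()) {
    var cutoff = new GlideDateTime();
    cutoff.addDaysUTC(-parseInt(rule.getValue('u_tolerance_days'), 10));

    var assoc = new GlideRecord('u_datafeed_management');
    assoc.addQuery('u_feed', rule.getValue('u_feed'));
    assoc.addQuery('u_last_seen', '<', cutoff);
    assoc.query();
    while (assoc.next()) {
        var ci = new GlideRecord('cmdb_ci');
        if (ci.get(assoc.getValue('u_target_record'))) {
            ci.setValue('install_status', 7); // 7 = Retired out of the box
            ci.update();
        }
    }
}
```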
Going beyond your original requirement, the DataFeed Management solution implemented here also helps deal with:
* Renames at source: instead of coalescing by NAME, once an item has arrived in SN, the association is noted between a unique ID (e.g. a GUID) on the source system and the sys_id in SN. The NAME is then treated like any other field and changed if it changes on the source system.
* History of the feed: when an SN item was first seen on a feed and when it was last seen on that feed.
* Noting some attributes from the feed that you wouldn't necessarily want to add to the target table (such as the status and other identity information from that source).
* It is an enabling technology for other solutions that help us define the order of precedence among different feeds that might coalesce against the same item (e.g. if you have three feeds all affecting the same CI, all presenting different values, which one do you believe most?). This lets us ensure that discovered values (from a non-SN discovery solution) take priority over values that are manually maintained in the source system.
Hope this helps.
Paul Hicks,
Lead Developer,
Sky UK

04-03-2015 04:22 AM
My suggestion is to write a Business Rule (before, update) with the condition:
current.active.changesTo('false')
and remove the user from all groups and roles using GlideRecord; a sketch follows below.
In your case, put the delete logic in a Script Include and call that function from a Scheduled Job.
Note: run the Scheduled Job only once, and deactivate it once your work is done.
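Here is a minimal sketch of that Business Rule on sys_user; sys_user_grmember and sys_user_has_role are the standard membership and role tables, and the inherited-role filter is my assumption about how you'd want to handle group-granted roles.
```javascript
// Business Rule on sys_user (before, update)
// Condition: current.active.changesTo('false')
// Removes the deactivated user from all groups and directly
// assigned roles.
(function executeRule(current, previous) {
    // Remove group memberships
    var grm = new GlideRecord('sys_user_grmember');
    grm.addQuery('user', current.sys_id);
    grm.query();
    while (grm.next()) {
        grm.deleteRecord();
    }

    // Remove directly assigned roles; group-inherited roles
    // disappear along with the memberships above
    var role = new GlideRecord('sys_user_has_role');
    role.addQuery('user', current.sys_id);
    role.addQuery('inherited', false);
    role.query();
    while (role.next()) {
        role.deleteRecord();
    }
})(current, previous);
```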
Regards,
Harish Murikinati.