Aidan
Tera Guru

This article explains how to set up ServiceNow to respond to Azure tag update events.  Tags are updated during scheduled discovery, or when an event is received for a specific resource, but at the time of writing not when a tag event is received.  This means that if a tag is added, updated or deleted, the change won't be reflected in ServiceNow until the next discovery.

The event that we'll be responding to is Microsoft.Resources/tags/write, which gives us the affected resource as part of the payload.

To implement this we need to create the following:

  • a new CI class to hold the tags
  • a discovery pattern to respond to the event
  • a post-processing script on the pattern to parse the tags
  • an entry in sn_capi_resource_type to map the event to the pattern

Tags ultimately sit in the cmdb_key_value table but in order to work with a discovery pattern I've opted to create a staging table as a CI class.

Please note: for simplicity, the update set for this article, along with the steps detailed, has been created in Global, and the tag staging table has been created directly under cmdb_ci.  You may want to look at putting them into the Discovery and Service Mapping Patterns scope depending on your own requirements.

Create Staging Table Class

The first step is to create the class which will receive the tags discovered by the pattern.  We can use the CI Class Manager to do this, as detailed in the following screenshots.  No dependencies are needed, nor any reconciliation rules.  This table will remain largely empty aside from when the discovery runs.  Please note that the table and field names need to match those detailed here, or else you'll have to update the pattern and script.

Basic Info

Attributes

These attributes hold the key and value from the updated tag, along with the object ID of the associated resource.  The Source ID field will hold a unique identifier for the tag, formed from the object ID and key.
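To make this concrete, here's an illustrative sketch of how the Source ID is put together (the object ID is a made-up example; the field names are the ones created above):

// Illustrative only: composing the Source ID for a staging record
var objectId = "/subscriptions/xxxx/resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm";
var key = "environment";
var sourceId = objectId + "-" + key;   // unique per resource/tag combination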

Identification

Discovery Pattern

The next step is to create the discovery pattern that will be called when the Microsoft.Resources/tags/write event is received.

Those who have created patterns before will know how fiddly they can be, but I'll try and summarise the steps needed as best I can.  Look at the update set for the actual pattern.

Create the initial pattern record

From Pattern Designer -> Discovery Patterns, create a new pattern.  The initial config will be as follows:

Before leaving the Basic tab, click New in the Identification Section and give it a suitable name (e.g. Azure Tag Events).  This will be the holding point for the actual pattern definition.  Once that's created, save the pattern so that we can add the final config before building out the pattern steps.

We need to add four input parameters which are passed in when the pattern is triggered from the event:

  • cloud_service_account
  • cloud_datacenter_type
  • input_object_id
  • cloud_cred_id
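These parameters are then available inside the pattern's eval() scripts through the ${...} substitution syntax, as the later steps show.  A minimal illustration (for the input_object_id parameter only):

var objectId = ${input_object_id}.toString();   // object ID of the Azure resource that raised the tag event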

 

Build the pattern

The next step is to actually build out the pattern steps for retrieving and parsing the tags.  One of the parameters that the pattern receives when called is the input_object_id.  For Azure events this will be the object ID of the resource that triggered the event.  We can then use this input_object_id to build the API call that needs to be made in order to get the tags for a resource.
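For reference, an Azure object ID has the form /subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/{provider}/{type}/{name}, so for a hypothetical virtual machine the Get Tags call built in step 5 would end up looking something like this (subscription and resource names are made up):

GET https://management.azure.com/subscriptions/xxxx/resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm/providers/Microsoft.Resources/tags/default?api-version=2022-05-01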

The first four steps are common across many Azure patterns and can be copied from an existing pattern (for example, Azure Virtual Machine Event).  Here's a high-level overview of those steps.

 

1. Parse Azure National Region

For those new to building out patterns: in order to enter the script into the value field you first need to enter eval(), after which the pencil icon will appear, allowing you to enter the script.  Paste the following script into the script dialog.

var dc_url = ${service_account[1].datacenter_url};
var result = "azure.com";		// setting Global Region as default

try {
   var gov_cloud = new AzureNationalCloudParser();
   result = gov_cloud.parseResourceURL(dc_url);
}
catch (e){
   ms.log("AzureNationalCloudParser Class Not Found, or Run Failed. Make sure you have installed the latest discovery patterns store release. " + e);
}

result

 

2. Terminate if not event based

This step essentially checks that we received a value in the input_object_id; if not, the pattern terminates, as the run wasn't triggered by an event.
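The actual step definition is in the update set, but conceptually the check boils down to something like the following (a sketch only; the real step may be expressed as a pattern condition rather than a script):

// Sketch only: carry on only when an object ID was supplied, i.e. the run was triggered by an event
var objectId = ${input_object_id} + "";
objectId.length > 0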

 

3. Correct the resourceGroup caption

This step corrects the /resourcegroups/ segment of the input_object_id so that it matches the camel-cased /resourceGroups/ format expected by the API call.

The script for the value field is as follows:

var input_object_id = ${input_object_id}.toString();
// the API expects the camel-cased /resourceGroups/ segment
input_object_id = input_object_id.replace("/resourcegroups/", "/resourceGroups/");
input_object_id
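For example, a hypothetical ID of /subscriptions/xxxx/resourcegroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm would come out of this step as /subscriptions/xxxx/resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm.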

 

4. Create service account table

This step just builds out the service account table so that it can be used by later steps and for building relationships.  We probably don't need it here as we won't be building any relationships, but I've left it in for reference.

 

5. Get Tags

This is the start of the main part of the pattern.  This step retrieves the tags for the resource by calling a specially formatted API endpoint.  The URL is built from the resource object ID along with the extra path and query parameters for reading the tags (here $resource_url holds the domain determined in step 1, azure.com for the global cloud):

"https://management." + $resource_url + $input_object_id + "/providers/Microsoft.Resources/tags/default?api-version=2022-05-01"
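The response body returns the tags under properties.tags, which is what the next step reads.  A representative (abbreviated) payload with made-up tag names:

{
   "properties": {
      "tags": {
         "environment": "production",
         "cost-center": "12345"
      }
   }
}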

 

6. Parse Tags

This final step parses the returned payload and updates the key value staging table with any tags found.

var logger = Packages.com.snc.sw.log.DiscoLog.getLogger("Azure Tags Events");
var resource_table = new Packages.java.util.ArrayList();
var json = JSON.parse(${SystemVariable});

try{
   var resource_id = ${input_object_id};
   for (var tag in json.properties.tags){
      var key = tag;
      var value = json.properties.tags[tag];
      
      var resource_line = new Packages.java.util.HashMap();
      resource_line.put("u_object_id", resource_id);
      resource_line.put("u_key", key);
      resource_line.put("u_value", value);
      resource_line.put("u_source_id", resource_id + "-" + key)
      resource_table.add(resource_line);
      
   }
} catch(e) {
   logger.error("Azure Tags Events JSON parse failed with exception = " + e);
}

// hand the result back to the pattern, keyed on the staging table name, so each entry becomes a staging record
CTX.setAttribute("u_cmdb_ci_key_value_staging", resource_table);

 

Process the Tags

This next step utilises the "Pattern Pre/Post Scripts" table (sa_pattern_prepost_script), which allows us to run a script at various stages of the pattern process.  We could have used a business rule or responded to a discovery.ended event, but this is more targeted.  In our case, we want to run a script after the pattern has completed (Post Sensor).  From the sa_pattern_prepost_script table, create a new record with the following details:

The script needed to process the tags is as follows.  Essentially it performs the following steps:

  1. Tidy up the payload, which contains details of all the records created by the pattern.  We only care about those added to the staging table, and only need the sys_id.
  2. Based on those sys_ids, find all of the corresponding staging records.
  3. Find the CI that the tags relate to.  This makes the assumption that the cloud resource CIs are in classes extended from Virtual Machine Object (cmdb_ci_vm_object), so if you have classes not extended from there you may need to modify the script.
  4. Work out which tags have been updated.
  5. Work out which tags have been added.
  6. Work out which tags have been deleted.
  7. Update/Add/Delete the tags as applicable.
  8. Finally, remove the processed records from the staging table.

var rtrn = {};
var payloadObj = JSON.parse(payload);

// filter out any objects not from the staging table and keep only the sys_id
var items = payloadObj.items.filter(function(x) {
    return x.className == 'u_cmdb_ci_key_value_staging';
}).map(function(x) {
    return x.sysId;
});

if (items.length > 0) {

    // get the list of tags from the staging table for the sys_id's from the payload
    var gqStaging = new global.GlideQuery('u_cmdb_ci_key_value_staging')
        .where('sys_id', 'IN', items)
        .select('u_object_id', 'u_key', 'u_value')
        .reduce(function(arr, e) {
            arr.push(e);
            return arr;
        }, []);

    // grab the CI that the tags are related to
    // assumes that its from a properly defined cloud class extended from cmdb_ci_vm_object
    var gqCI = new global.GlideQuery('cmdb_ci_vm_object')
        .where('object_id', gqStaging[0].u_object_id)
        .select('name')
        .reduce(function(arr, e) {
            arr.push(e);
            return arr;
        }, [])[0];


    // build a list of staging tags that can be used in the updates/deletes later
    var gqStagingKeys = gqStaging.map(function(staging) {
        return {
            "key": staging.u_key,
            "value": staging.u_value,
            "configuration_item": gqCI.sys_id
        };
    });

    // get list of existing tags for the CI
    var gqExistingTags = new global.GlideQuery('cmdb_key_value')
        .where('configuration_item', gqCI.sys_id)
        .select('key', 'value', 'configuration_item')
        .reduce(function(arr, e) {
            arr.push(e);
            return arr;
        }, []);

    // determine the tags that have been updated (same key, different value - obviously ;) )
    var updatedTags = gqExistingTags.map(function(tag) {
        var updatedTag = gqStagingKeys.filter(function(staging_tag) {
            return tag.key == staging_tag.key && tag.value != staging_tag.value;
        })[0];

        return updatedTag ? {
            "sys_id": tag.sys_id,
            "key": tag.key,
            "value": updatedTag.value
        } : undefined;
    }).filter(Boolean);

    // build list of new tags (those in staging but not in tag table)
    var newTags = gqStagingKeys.map(function(tag) {
        var found = false;
        for (var tagIdx in gqExistingTags) {
            if (gqExistingTags[tagIdx].key == tag.key) found = true;
        }
        return !found ? tag : undefined;
    }).filter(Boolean);

    updatedTags = updatedTags.concat(newTags);

    // loop through the updated/new tags and apply to the key value table
    updatedTags.forEach(function(_tag) {
        new global.GlideQuery('cmdb_key_value')
            .insertOrUpdate(_tag);
    });

    // get a list of deleted tags (those in tags table, not in staging)
    var deletedTags = gqExistingTags.map(function(tag) {
            var found = false;
            for (var tagIdx in gqStagingKeys) {
                if (gqStagingKeys[tagIdx].key == tag.key) found = true;
            }
            return !found ? tag : undefined;
        }).filter(Boolean)
        .reduce(function(arr, e) {
            arr.push(e.sys_id);
            return arr;
        }, []);

    // delete any tags no longer needed
    new global.GlideQuery('cmdb_key_value')
        .where('sys_id', 'IN', deletedTags)
        .deleteMultiple();

    // delete records from the staging table
    new global.GlideQuery('u_cmdb_ci_key_value_staging')
        .where('sys_id', 'IN', items)
        .deleteMultiple();
}


rtrn = {
    'status': {
        'message': 'Successfully processed the discovered tags',
        'isSuccess': true
    },
    'patternId': patternId,
    'payload': JSON.stringify(payloadObj)
};

 

Map Event to Pattern

Finally, we have to map the incoming event to the newly created pattern.  This is one part that you probably should put in the Discovery and Service Mapping Patterns scope, but again, for this example it's left in the Global scope.

From the sn_capi_resource_type table create a new record with the details below.

You'll need to create a new record for the Product field, with the name Microsoft.Resources and the Service Category "Tools".

 

Summary

So by adding all of the above, we'll now be able to respond to the tag events and keep ServiceNow up to date without needing to run full discoveries or hoping that an event comes in for the affected resource.

This is also a good exercise in understanding how cloud events are processed and how we can build patterns to respond to them.  If implementing this for a customer or in a real environment, I would certainly move things into the correct scope rather than building it all out in Global, so try this in your own PDI first!

 

 

Comments
Ram Devanathan1
ServiceNow Employee

The Tag Governance app is available for all ITOM Visibility customers. The Tag Governance app provides the ability in the policy to validate tags upon events like change; this is out of the box.

Did you take a look at this too?

Ram

Aidan
Tera Guru

Ram, 

This was an exercise in understanding events and discovery patterns primarily, but I'd always look to OOB solutions before building anything custom.

However, I've been looking at Tag Governance but that appeared only to update the tag in the cloud rather than responding to events.  The fact that no config is in place to respond to the tag event also implied that it only worked from SN to the cloud rather than the other way.  I'm already responding to cloud events for other resources but if I've missed something with regards to the Tag events then I'd be interested to understand where that is configured.

Thanks

Ram Devanathan1
ServiceNow Employee

Hi Aidan, in the tag policy you can turn on the checkbox 'run on cloud events' - this will listen in when the Azure activity log alerts, AWS SNS change notifications and Google Stackdriver alerts are sent. Try it out.


Aidan
Tera Guru

Thanks for the clarification Ram,

This obviously means we need to define a policy for all the tags we want to update, whereas my solution just ensures any updates are reflected in the CMDB regardless of policies.  

Still, Tag Governance is certainly worth considering if all of your tags need a policy.

Ram Devanathan1
ServiceNow Employee

You don't need tag policies to update the tags; the tags get updated via discovery patterns. Tag policies are used to validate the tags - that's why you need to be specific with the tag policies.

Not all tags should need a policy - we start with the key set of 'expected' tags following the enterprise and other mandates, e.g. cost-center, environment, applicationID, etc.

Aidan
Tera Guru

So unfortunately most of the embedded images on this article have now been turned into attachments and I can no longer edit the article.  This article is effectively useless now 😞

Vinay Chavhan
Tera Contributor

Hi Aidan,

 

I am using the Service Graph Connector for Microsoft Azure to discover tags on the service accounts. But when the tags are deleted from the service accounts in the cloud, should the tag record present in the cmdb_key_value table also get deleted? Is it possible to delete a key value pair from ServiceNow automatically when it is removed from the cloud?

 

Regards,

Vinay
