
Introduction
The cloud-based Azure AI Vision service from Microsoft provides access to advanced algorithms for processing images and returning information. By uploading an image or specifying an image URL, Azure AI Vision algorithms can analyze visual content in different ways based on inputs and user choices.
Currently, there is no ServiceNow Integration Hub Spoke available for these services. This article explains how to set up your instance to connect to and use these services from Azure.
As detailed in the Azure AI Vision documentation, there are multiple services available. These can be found at the following link: https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/.
In this article, the 'Face Detect' service is used for the examples shown.
Create Azure Resources
The first step to integrate with these services is to create a resource in Azure that we can use to process the images.
This process is detailed in the following link: https://learn.microsoft.com/en-us/azure/ai-services/multi-service-resource?tabs=windows&pivots=azpor....
The main steps are summarized below for easier reference.
1. Navigate to the Azure portal and search for the service to be used, in this case 'Face APIs'. Once opened, create a new resource by clicking the '+ Create' button and fill in all the information required by Azure.
Please note that it might be necessary to create a 'Resource Group' in Azure in advance, so that the new resource can be added to it.
2. Once done (it might take a few minutes), the new resource should appear on the previous screen as available to be used:
Connectivity and Credentials
Once the resource is created and ready in Azure, the ServiceNow setup can start by implementing the following steps.
1. Create a new 'Connection & Credential Alias' record to establish the connection between the systems.
To do this, navigate to the ServiceNow instance and open IntegrationHub > Connections & Credentials > Connection & Credential Aliases.
Then, create a new record with the following details:
- Name: With a value of your choice
- Type: Connection and Credential
- Connection type: HTTP
- Default Retry Policy: Default HTTP Retry Policy
- Configuration Template: API Key Demo Configuration
If the 'API Key Demo Configuration' Connection & Credential Template is not available in your instance, a new record on that table can be created. Its content should be the following:
- Default Data Template:
{
    "credential": {
        "api_key": "<provider-api-key>",
        "name": "<provider-name> Spoke Credential",
        "table": "api_key_credentials"
    },
    "connection": {
        "use_mid": false,
        "connection_url": "",
        "name": "",
        "table": "http_connection"
    }
}
- Dynamic Data Schema:
{
    "connection_fields": [{
        "name": "connection.name",
        "label": "Connection Name",
        "type": "text",
        "defaultValue": "<provider-name> Spoke Connection",
        "hint": "Display name for the Connection",
        "mandatory": true
    }, {
        "name": "connection.connection_url",
        "label": "Connection URL",
        "type": "text",
        "defaultValue": "https://<provider-domain-name>.com",
        "hint": "Connection URL for provider",
        "mandatory": true
    }],
    "credential_fields": [{
        "name": "credential.api_key",
        "label": "API Key",
        "type": "password",
        "defaultValue": "",
        "hint": "API Key for provider",
        "mandatory": true
    }]
}
- The On Create Script, Pre-Edit Script, and On Edit Script fields can be left as they are out of the box.
2. Create a new 'Credential' record. To do this, navigate to IntegrationHub > Connections & Credentials > Credentials and click 'New'. Then select 'API Key Credentials' from the list of available credential types, set the API Key from the Azure resource, and give the record a Name of your choice to identify it.
To obtain the API Key from Azure, open the resource created previously and go to 'Keys and Endpoint'. From there, the API Key needed to connect to and use that resource is available:
3. Then, navigate back to the 'Connection & Credential Alias' record created previously and create a new 'Connection' associated with it, which can be done by clicking 'New' in the Related List available on the form.
The connection record should be filled in with the following details:
- A 'Name' of your choice to identify this record.
- Credential: Select the record created in the previous step.
- Credential Alias: Should be autopopulated if creating it from the related list, otherwise select the record created in the first step of this section.
- Connection URL: Should contain the 'Endpoint' value shown in Azure (see screenshot above).
Other optional parameters, such as 'Use MID Server', can also be set if needed in your setup.
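Optionally, the setup can be verified from a background script using the ConnectionInfo API. This is a minimal sketch, assuming the '<alias_sys_id>' placeholder is replaced with the sys_id of the alias record created above:
// Minimal sketch: resolve the Connection & Credential Alias created above.
// '<alias_sys_id>' is a placeholder for the sys_id of your alias record.
var provider = new sn_cc.ConnectionInfoProvider();
var connectionInfo = provider.getConnectionInfo('<alias_sys_id>');
if (connectionInfo != null) {
    gs.info('Connection URL: ' + connectionInfo.getAttribute('connection_url'));
    // Only check that the credential resolves; avoid logging the key itself
    gs.info('API key present: ' + (connectionInfo.getCredentialAttribute('api_key') != null));
} else {
    gs.info('No active connection found for this alias.');
}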
Actions Creation
As detailed previously, there is no Integration Hub Spoke available in the ServiceNow Store to connect to these Azure services, so the next step in exchanging information between the two systems is to create the required Integration Hub Actions.
1. First, review the Azure documentation to find out which functionality is available for the resource and what information is required to interact with it.
For example, for Face API, the following web page is available to obtain all the required details: https://learn.microsoft.com/en-us/rest/api/face/face-detection-operations?view=rest-face-v1.2-previe...
In the next steps, we will use the 'Face Detect' operation, which can be reached from that page:
2. In ServiceNow, navigate to Flow Designer homepage, and create a new Action.
3. Create the required Inputs based on the parameters detailed in the Azure documentation:
One exception though is the 'endpoint' parameter, which has already been covered when setting up the Connection & Credential Alias.
4. Add a new REST step and complete the details as follows:
- Connection: 'Use Connection Alias'.
- Connection Alias: Select the one created in previous sections.
- Base URL: It's automatically populated from the Connection Alias selected.
- Build Request: In this case we are selecting 'Manually', but of course it could also be built from a REST Message if desired.
- Resource Path: face/{apiversion}/detect
- HTTP Method: POST (As described in Azure documentation)
- Query Parameters: Must be set with all the remaining input parameters that have not been used yet:
For the Headers, again it is very important to follow the Azure documentation; for this integration they must be set as follows:
- Content-Type: application/octet-stream
- Ocp-Apim-Subscription-Key: Must be set by dot-walking from the Credential Value data pill
For the Request Content, the Request Type must be set to 'Binary'. Ideally, the picture/image to be analyzed should be parameterized, so a new input will be created whose type must be Reference > Attachment [sys_attachment]:
In addition, we could also set up 'Retry Policy' or 'Response Handling' details if needed.
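For reference, the equivalent call can also be tested from a server-side background script using RESTMessageV2, outside of Flow Designer. This is a minimal sketch, where '<endpoint>', '<api-version>', '<api-key>' and '<attachment_sys_id>' are placeholders, and the query parameter values assume the Face Detect parameters described in the Azure documentation (check which attributes your detection model supports):
// Minimal sketch: Face Detect call via RESTMessageV2 with placeholder values.
var request = new sn_ws.RESTMessageV2();
request.setEndpoint('https://<endpoint>/face/<api-version>/detect');
request.setHttpMethod('POST');
request.setRequestHeader('Content-Type', 'application/octet-stream');
request.setRequestHeader('Ocp-Apim-Subscription-Key', '<api-key>');
request.setQueryParameter('returnFaceAttributes', 'blur,qualityForRecognition');
// Send the binary content of the attachment as the request body
request.setRequestBodyFromAttachment('<attachment_sys_id>');
var response = request.execute();
gs.info('Status: ' + response.getStatusCode());
gs.info('Body: ' + response.getBody());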
5. Outputs must be defined so that the information to be obtained from Azure is available wherever the Action is used. To keep things simple for now, we can just define 'Response Body' and map it to the response of the REST step:
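To give an idea of what this output contains, a simplified Face Detect response body could look like the following. This sample is illustrative only, based on the response structure described in the Azure documentation; the exact fields depend on the attributes requested:
[{
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 },
    "faceAttributes": {
        "blur": { "blurLevel": "low", "value": 0.1 },
        "qualityForRecognition": "high"
    }
}]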
6. The Action must be saved and published so that it can be used from Flows and Subflows.
Use the Actions
Once the setup is done, we just need to use our Actions in a real scenario.
Image processing can be useful in many different scenarios. To show how to use the Actions in a simple use case, this article explains how to use this integration to analyze a picture and define criteria to automatically decide whether it is valid to be used as a profile image in the system.
To do this, the following steps can be implemented:
1. Create a catalog item in which users can upload their pictures for processing. The most important variable here is an 'Attachment' variable, which users will use to upload the image.
- Additionally, it would be useful to create a catalog client script to perform some verifications, such as the maximum size allowed, and, if needed, to display the image in the catalog item so the user can see how it looks before submitting it (this also requires an additional 'HTML' variable):
- Client Script onChange (variable = upload_picture):
function onChange(control, oldValue, newValue, isLoading) {
    if (isLoading || newValue == '') {
        return;
    }

    // Reset the HTML variable used to preview the image
    g_form.setLabelOf('image_space', '');
    g_form.clearValue('image_space');
    g_form.setVisible('image_space', false);
    g_form.setReadOnly('image_space', false);

    // Validate the attachment size server-side via GlideAjax
    var picAjax = new GlideAjax('badgeRequestsAjax');
    picAjax.addParam('sysparm_name', 'validatePictureSize');
    picAjax.addParam('sysparm_attachment', newValue);
    picAjax.getXML(validResponse);

    function validResponse(response) {
        var answer = response.responseXML.documentElement.getAttribute('answer');
        var picture = "<tr><td width='250px'> <img src='/sys_attachment.do?sys_id=" + newValue + "' style='width:250px; height:auto;'> </td></tr>";
        if (answer == 'false') {
            g_form.addErrorMessage("The maximum allowed image size has been exceeded.");
        } else if (answer == 'true') {
            // Show a read-only preview of the uploaded image
            g_form.setReadOnly('image_space', true);
            g_form.setValue('image_space', picture);
            g_form.setVisible('image_space', true);
            g_form.removeDecoration('image_space');
        }
    }
}
- Script Include (client callable = true) used from the Client Script above:
var badgeRequestsAjax = Class.create();
badgeRequestsAjax.prototype = Object.extendsObject(AbstractAjaxProcessor, {

    // Returns false if the attachment is missing or exceeds the maximum allowed size
    validatePictureSize: function() {
        var attachmentSysId = this.getParameter('sysparm_attachment');
        var grAttachment = new GlideRecord('sys_attachment');
        if (!grAttachment.get(attachmentSysId)) {
            return false;
        }
        var size = parseInt(grAttachment.getValue('size_bytes'), 10);
        return size <= 6000000; // 6 MB - could be a sys_property to easily manage
    },

    type: 'badgeRequestsAjax'
});
2. A decision table should be created so that the system can automatically decide whether the uploaded picture is valid.
The specific criteria depend on each use case and should be based on the outputs received from Azure. The example below uses a very basic scenario: if the image has a blurriness lower than 0.3, exactly one face is detected, and the quality for recognition is either medium or high, the image is considered valid; otherwise it is invalid:
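For illustration, the same criteria could also be expressed in script, for example to prototype the logic before building the decision table. This is a minimal sketch, assuming the response body structure shown earlier:
// Minimal sketch of the validation criteria described above.
// Assumes 'responseBody' is the JSON string returned by the Face Detect Action.
function isValidProfilePicture(responseBody) {
    var faces = JSON.parse(responseBody);
    if (faces.length != 1) {
        return false; // exactly one face must be detected
    }
    var attrs = faces[0].faceAttributes;
    var quality = String(attrs.qualityForRecognition).toLowerCase();
    return attrs.blur.value < 0.3 && (quality == 'medium' || quality == 'high');
}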
3. Finally, create a new Flow that uses the elements created before (the Action and the Decision Table) to automate this process, executed when a Requested Item for this Catalog Item is created. For example, the system could create a Catalog Task that is automatically closed complete if the picture is valid, or reassigned to a specific group so that they can manually validate it or guide the user on how to proceed if it is not.