rahulpandey
Kilo Sage

AWS S3 is a leading enterprise storage solution, and due to its robust architecture it is very likely that you will need to integrate it with other tools such as ServiceNow. The community also has many open questions on this very topic, so I decided to create an easy-to-use solution.

Problem statement: Upload a ServiceNow attachment to AWS S3, or download any file from AWS S3 to ServiceNow as an attachment.

Challenges: AWS uses its own authentication method, AWS Signature, which is complex to implement, as shown in the screenshot below.

[Screenshot: AWS Signature request-signing steps]
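To give a sense of what the signing involves: Signature Version 4 derives the signing key through a chain of HMAC-SHA256 operations before signing a canonical "string to sign". The sketch below shows only that derivation step using CryptoJS; the variable names and placeholder inputs are illustrative, and the actual script include in this app (and how CryptoJS is exposed inside the scope) may differ.

// Illustrative sketch of the AWS Signature Version 4 signing-key derivation with CryptoJS.
// All input values below are placeholders.
var secretKey = '<aws secret access key>';
var dateStamp = '20210201';                      // YYYYMMDD, must match the request date
var region = 'us-east-2';
var stringToSign = '<canonical string to sign>'; // built from the canonical request

var kDate = CryptoJS.HmacSHA256(dateStamp, 'AWS4' + secretKey);
var kRegion = CryptoJS.HmacSHA256(region, kDate);
var kService = CryptoJS.HmacSHA256('s3', kRegion);
var kSigning = CryptoJS.HmacSHA256('aws4_request', kService);
var signature = CryptoJS.HmacSHA256(stringToSign, kSigning).toString(CryptoJS.enc.Hex);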

 

Alternatives?

1- ServiceNow provides OOB spokes that let you do this seamlessly via Integration Hub (requires a paid subscription).

2- An alternate approach uses AWS Lambda and API Gateway, where you can simply pass "x-api-key" as the authorization header and get things done. However, it has a file-size limit (10 MB max) and incurs additional cost (dirt cheap, though) for Lambda and API Gateway. A rough sketch of such a call follows below.
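For context, a call to such an API Gateway endpoint from ServiceNow could look roughly like the sketch below. The endpoint URL, the system property holding the API key, and the request body are all placeholders, not part of this solution.

// Sketch: calling a hypothetical API Gateway endpoint (fronting Lambda) with an x-api-key header.
var request = new sn_ws.RESTMessageV2();
request.setEndpoint('https://<api-id>.execute-api.us-east-1.amazonaws.com/prod/upload'); // placeholder endpoint
request.setHttpMethod('POST');
request.setRequestHeader('x-api-key', gs.getProperty('my.apigateway.key')); // hypothetical property holding the key
request.setRequestHeader('Content-Type', 'application/json');
request.setRequestBody(JSON.stringify({ fileName: 'rahul.jpg' })); // keep payloads under the 10 MB API Gateway limit
var response = request.execute();
gs.info('API Gateway responded with HTTP ' + response.getStatusCode());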

 

Solution: I have developed a solution as a scoped application (CryptoJS is imported as a scoped script include). The solution is ready to use with a low-code approach.

Technical design

[Diagram: technical design of the ServiceNow to S3 integration]

Features

-  Allows you to upload any attachment from ServiceNow to any S3 bucket/folder.

-  Allows you to download any file from an S3 bucket/folder to any ServiceNow record.

-  Supports S3 subdirectories.

-  No external dependencies.

Setup:

Import the update set and commit it.

Update the system properties below:

1) x_158350_amazonuti.aws.access.id // set this property to your AWS access key ID

2) x_158350_amazonuti.awsSecret.key // set this property to your AWS secret access key

Refer to the AWS blog post on access keys and secret keys: https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/
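As a quick sanity check after committing the update set, you can confirm both properties resolve from a background script (property names as listed above; never log the actual secret value):

// Verify the AWS credential properties are populated without printing the values.
var accessKey = gs.getProperty('x_158350_amazonuti.aws.access.id');
var secretKey = gs.getProperty('x_158350_amazonuti.awsSecret.key');
gs.info('Access key configured: ' + (accessKey ? 'yes' : 'no'));
gs.info('Secret key configured: ' + (secretKey ? 'yes' : 'no'));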

Usage:

To upload an attachment from ServiceNow:

new x_158350_amazonuti.s3Upload().fileUpload("<file_name>", "<s3 bucket name>", "<s3 region>", "<attachment sys_id>");

Example:

new x_158350_amazonuti.s3Upload().fileUpload("rahul.jpg", "snaws2", "us-east-2", "e72778e7db1860105c57e4da4b9619e4");
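If you don't already have the attachment sys_id handy, a background script along these lines can look it up from sys_attachment before calling the upload; the table name, file name, bucket, and region below are just example values reused from above:

// Look up an attachment on an incident and push it to S3 (example values).
var att = new GlideRecord('sys_attachment');
att.addQuery('table_name', 'incident');
att.addQuery('file_name', 'rahul.jpg');
att.query();
if (att.next()) {
    new x_158350_amazonuti.s3Upload().fileUpload(att.getValue('file_name'), 'snaws2', 'us-east-2', att.getUniqueValue());
}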

To download a file from S3 to ServiceNow as an attachment:

Usage:

new x_158350_amazonuti.s3Download().fileDownload("<file_name>", "<s3 bucket name>", "<s3 region>", "<table name>", "<sys_id of the record to attach the file to>");

Example:

new x_158350_amazonuti.s3Download().fileDownload("test.pdf", "packagesrp", "us-east-1", "incident", "9c573169c611228700193229fff72400");
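After the call, you can confirm the file actually landed on the target record, for example with the incident sys_id used above:

// Check that the downloaded file is now attached to the target incident.
var att = new GlideRecord('sys_attachment');
att.addQuery('table_name', 'incident');
att.addQuery('table_sys_id', '9c573169c611228700193229fff72400');
att.addQuery('file_name', 'test.pdf');
att.query();
gs.info('Attachment present: ' + att.hasNext());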

The application can be downloaded from ServiceNow Share: https://developer.servicenow.com/connect.do#!/share/contents/3490912_servicenow_to_s3_integration?t=PRODUCT_DETAILS

 

Hopefully this will help the community.

Comments
Xeretni
Giga Contributor

Is this possible through Flow Designer? 

Dirk Kruger
Tera Contributor

The script you provided is very useful, but I am having a slight problem. I am assembling a URL, made available to users, that points directly to the object storage, so the attachment is not extracted and stored in ServiceNow. But my compiled URL is returning the following:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

I have attempted a few alternatives, but I keep running into the above message regardless of what I do. Could you please assist me in identifying what I am doing wrong?

The URL I am generating is in the following format: 

https://[hostname]/[bucket]/[filename]?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=[credential string]&X-Amz-Date=[aws date]&X-Amz-Expires=86400&X-Amz-SignedHeaders=[signed headers]&X-Amz-Signature=[signature]
 
The hostname is a local URL as the solution is on-prem.
 
If I include x-amz-content-sha256 in my signed headers, I get a message saying "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."

So my SignedHeaders only includes host.
tdexter
Tera Contributor

For those using path-style requests, i.e. "https://s3.us-west-1.amazonaws.com/awsexamplebucket1/photos/puppy.jpg", some changes need to be made for this to work.

AWS documentation for Signing and Authenticating REST Requests states:

If the request specifies a bucket using the HTTP Host header (virtual hosted-style), append the bucket name preceded by a "/" (e.g., "/bucketname"). For path-style requests and requests that don't address a bucket, do nothing. For more information about virtual hosted-style requests, see Virtual hosting of buckets.

For a virtual hosted-style request "https://awsexamplebucket1.s3.us-west-1.amazonaws.com/photos/puppy.jpg", the CanonicalizedResource is "/awsexamplebucket1".

For the path-style request, "https://s3.us-west-1.amazonaws.com/awsexamplebucket1/photos/puppy.jpg", the CanonicalizedResource is "".

In `awsSignatureCalculator` make changes like so:

// awsSignatureGenerator script include at line 40 (provided we are using the same version of the file)
// change from:
constantObject.CanonicalResource = '/' + filename;
// to:
constantObject.CanonicalResource = "";

// at line 53 change from:
if (regionname == "us-east-1")
    constantObject.headValue1 = bucket + '.s3.amazonaws.com';
else
    constantObject.headValue1 = bucket + '.s3.' + regionname + '.amazonaws.com';

// to:
if (regionname == "us-east-1")
    constantObject.headValue1 = 's3.amazonaws.com/';
else
    constantObject.headValue1 = 's3.' + regionname + '.amazonaws.com/';

Then in `s3Upload`:

// s3Upload script include
// around line 14 change from:
if (region == "us-east-1")
    url = 'https://' + bucket + '.s3.amazonaws.com/' + filename;
else
    url = 'https://' + bucket + '.s3.' + region + '.amazonaws.com/' + filename;

// to:
if (region == "us-east-1")
    url = 'https://s3.amazonaws.com/' + filename;
else
    url = 'https://s3.' + region + '.amazonaws.com/' + filename;

Lastly, to use it, leave the `bucket` parameter blank and include the bucket within the `filename` like so:

new x_158350_amazonuti.s3Upload().fileUpload("my.bucket.name/path/to/file/filename.txt", "", "us-east-1", "<sys_attachment.sys_id>");

Note: if you're also using `s3Download` you will need to make similar changes to the code there. I am not using it so I didn't make or test those changes.




tdexter
Tera Contributor

I've created an updated package that accomplishes the changes outlined in my previous comment, and more: https://developer.servicenow.com/connect.do#!/share/contents/2670688_servicenow_to_amazon_s3_file_up...


Ujjwal2
Tera Contributor

Hi, is it possible to download the file directly instead of attaching it back to a ServiceNow record?

Or can we generate a pre-signed URL using your solution, by any chance?

AssaneT
Tera Contributor

Hello @rahulpandey, can we use it with the DescribeProduct API (AWS Service Catalog) to get information about an AWS product?

Mauricio Araujo
Tera Contributor

Hello @rahulpandey

 

I find myself in a situation where I need to integrate ServiceNow with S3. The script you provided is very useful; I tested it and it meets my purpose.

 

Thank you very much for sharing!

tdexter
Tera Contributor

@Mauricio Araujo FYI, if you need to support multiple S3 accounts, path-based buckets, or the updated AWS signatures, I created an updated version of the script here: https://developer.servicenow.com/connect.do#!/share/contents/2670688_servicenow_to_amazon_s3_file_up...

Version history
Last update: 02-01-2021 09:25 PM