In addition to providing a consumer-grade ordering experience for cloud resources, the Cloud Accelerate Cloud Services Catalog application in ServiceNow includes a REST API that lets you integrate with CI/CD pipelines to provision cloud resources, with the same approvals, quotas, policies, and auditing you get when ordering through catalog items. Here's an example of how it's done.
NOTE: The examples provided in this article come with no warranty or support, implied or explicit. Caveat emptor!
In my example, I'm using AWS, but the same approach can be applied using the CI/CD tooling and cloud provider of your choice.
Cloud Services Catalog
First I installed the Cloud Services Catalog application in my instance via Admin->Application Manager. I also installed the CSC Content Pack.
Platform Service Account
I created a user in my ServiceNow instance to be the requestor/owner for the cloud resources as represented in the CMDB. I gave it role "sn_cmp.cloud_service_user", which brought in roles "cmdb_read" and "dependency_views".
Cloud Resource User Group
I created a group, "Cloud Group 1", and added my service account to it.
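I did these steps through the UI, but if you'd rather script the setup, here's a rough sketch using the standard Table API. The role and group names match what I used above; the username "csc.pipeline" and the script itself are hypothetical and untested, so verify table and field names on your instance.
#!/bin/python3
# Sketch: create the service account, role assignment, group, and membership
# via the standard Table API. Hypothetical helper -- verify on your instance.
import os
import requests

instanceUrl = os.environ.get("INSTANCE_URL")
auth = (os.environ.get("ADMIN_USER"), os.environ.get("ADMIN_PASS"))
headerDict = {"content-type": "application/json", "accept": "application/json"}

def post(table, body):
    r = requests.post(instanceUrl + "/api/now/table/" + table,
                      auth=auth, json=body, headers=headerDict)
    r.raise_for_status()
    return r.json()["result"]["sys_id"]

def get_sys_id(table, query):
    r = requests.get(instanceUrl + "/api/now/table/" + table,
                     auth=auth, headers=headerDict,
                     params={"sysparm_query": query, "sysparm_fields": "sys_id",
                             "sysparm_limit": "1"})
    r.raise_for_status()
    return r.json()["result"][0]["sys_id"]

# service account ("csc.pipeline" is a placeholder username)
userId = post("sys_user", {"user_name": "csc.pipeline",
                           "first_name": "CSC", "last_name": "Pipeline"})

# role assignment: sn_cmp.cloud_service_user brings in cmdb_read and dependency_views
roleId = get_sys_id("sys_user_role", "name=sn_cmp.cloud_service_user")
post("sys_user_has_role", {"user": userId, "role": roleId})

# group and membership
groupId = post("sys_user_group", {"name": "Cloud Group 1"})
post("sys_user_grmember", {"user": userId, "group": groupId})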
Cloud Permissions
In the Cloud Admin Portal, under Govern->Permissions, I added "read" and "execute" permissions for Cloud Group 1 against all cloud catalog items.
Cloud Catalog Item
In the Cloud Admin Portal, under Design->Cloud Catalog Items, I created a new catalog item based on the following CloudFormation template (for details on ingesting cloud templates, refer to https://docs.servicenow.com/bundle/xanadu-it-operations-management/page/product/cloud-management-v2/... ).
NOTE: The catalog item creation process is facilitated by adding a "Metadata" section to your existing templates, which guides the service catalog in generating lists of choices for various template parameters (there's a quick sanity-check sketch right after the template below).
{
  "Parameters": {
    "AvailabilityZone": {
      "Type": "String",
      "Default": "us-east-1d"
    },
    "ImageId": {
      "Type": "String"
    },
    "InstanceType": {
      "Type": "String",
      "Default": "m5.xlarge"
    },
    "KeyName": {
      "Type": "String"
    },
    "ServerName": {
      "Type": "String"
    },
    "SecurityGroup": {
      "Type": "String",
      "Description": "An existing security group ID",
      "ConstraintDescription": "Must be an existing security group ID"
    },
    "Environment": {
      "Type": "String",
      "Default": "DEVELOPMENT"
    },
    "SubnetId": {
      "Type": "String"
    },
    "PublicIp": {
      "Type": "String",
      "AllowedValues": ["true", "false"],
      "Default": "true"
    },
    "VpcId": {
      "Type": "AWS::EC2::VPC::Id",
      "Description": "VpcId of your existing Virtual Private Cloud (VPC)",
      "ConstraintDescription": "must be the VPC Id of an existing Virtual Private Cloud."
    }
  },
  "Metadata": {
    "SNC::Parameter::Metadata": {
      "VpcId": {
        "datasource": "ServiceNow::Pools::NetworkPool.getObjectsByLDC"
      },
      "SubnetId": {
        "datasource": "ServiceNow::Pools::SubnetPool.getObjectsByNetwork",
        "datasourceFilter": {"Network": "VpcId"}
      },
      "SecurityGroup": {
        "datasource": "ServiceNow::Pools::SecurityGroupPool.getObjectsByNetwork",
        "datasourceFilter": {"Network": "VpcId"}
      }
    }
  },
  "Resources": {
    "Server": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "AvailabilityZone": {"Ref": "AvailabilityZone"},
        "ImageId": {"Ref": "ImageId"},
        "InstanceType": {"Ref": "InstanceType"},
        "KeyName": {"Ref": "KeyName"},
        "NetworkInterfaces": [
          {
            "AssociatePublicIpAddress": {"Ref": "PublicIp"},
            "DeviceIndex": "0",
            "SubnetId": {"Ref": "SubnetId"},
            "GroupSet": [{"Ref": "SecurityGroup"}]
          }
        ],
        "Tags": [
          {"Key": "Name", "Value": {"Ref": "ServerName"}},
          {"Key": "Environment", "Value": {"Ref": "Environment"}}
        ]
      }
    }
  }
}
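Before ingesting a template, it can be handy to sanity-check it locally -- both for JSON syntax and for Metadata entries that point at undeclared parameters. A minimal sketch, assuming the template is saved as template.json:
#!/bin/python3
# Sketch: verify every SNC::Parameter::Metadata entry refers to a declared parameter.
# Assumes the CloudFormation template is saved locally as template.json.
import json

with open("template.json") as f:
    template = json.load(f)  # also fails fast on JSON syntax errors

params = set(template.get("Parameters", {}))
metadata = template.get("Metadata", {}).get("SNC::Parameter::Metadata", {})

for name, meta in metadata.items():
    if name not in params:
        print("Metadata references undeclared parameter: " + name)
    # datasourceFilter values point at other parameters, e.g. {"Network":"VpcId"}
    for ref in meta.get("datasourceFilter", {}).values():
        if ref not in params:
            print(name + " filter references undeclared parameter: " + ref)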
Pipeline
With all the ServiceNow pieces in place, I started building my example pipeline.
Resource File
I started with a JSON file that contains a list of EC2 instances to be requested with my build:
{
  "resources": [
    {
      "CloudAccount": "AWS One",
      "Location": "AWS Datacenter - us-east-1",
      "UserGroup": "Cloud Group 1",
      "ScheduleTimeZone": "US/Eastern",
      "WH_EC2_1_KeyName": "hallam-testing",
      "WH_EC2_1_PublicIp": "true",
      "WH_EC2_1_ServerName": "hallam-test-linux-10",
      "WH_EC2_1_ImageId": "ami-0e6cee893c065c260",
      "WH_EC2_1_AvailabilityZone": "us-east-1d",
      "WH_EC2_1_Environment": "DEVELOPMENT",
      "WH_EC2_1_SubnetId": "subnet-8ce9d3d0",
      "WH_EC2_1_InstanceType": "t3.xlarge",
      "WH_EC2_1_SecurityGroup": "sg-0d8c702ea262ad500"
    }
  ]
}
In addition to enumerating the parameters needed by my CloudFormation template, it populates the four values required by any Cloud Services Catalog item: "CloudAccount", "Location", "UserGroup", and "ScheduleTimeZone".
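Since those four values are required on every CSC request, a small pre-flight check in the pipeline can fail fast on a bad resource file. A minimal sketch:
#!/bin/python3
# Sketch: fail fast if a resource entry is missing a value required by every CSC item.
import json
import sys

REQUIRED = ["CloudAccount", "Location", "UserGroup", "ScheduleTimeZone"]

with open("resources.json") as f:
    config = json.load(f)

ok = True
for i, resource in enumerate(config["resources"]):
    for key in REQUIRED:
        if key not in resource:
            print("resources[%d] is missing required value %s" % (i, key))
            ok = False
sys.exit(0 if ok else 1)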
Request Script
The next piece of the pipeline is a Python script that parses the resource file and makes the requisite Cloud Services Catalog API calls to my ServiceNow instance to request the cloud resources. The API Explorer on my instance was invaluable here, as it generated example Python code for calling the CSC API. The script reads environment variables for the instance URL, credentials, catalog item sys_id, and a stack prefix to use for stack name generation.
#!/bin/python3
import json
import logging
import os
import requests
import uuid

logger = logging.getLogger()
logging.basicConfig()
logger.setLevel("DEBUG")

# parse environment vars
instanceUrl = os.environ.get("INSTANCE_URL")
username = os.environ.get("USERNAME")
password = os.environ.get("PASSWORD")
catalogItem = os.environ.get("CATALOG_ITEM")
stackPrefix = os.environ.get("STACK_PREFIX")

# constants
headerDict = {"content-type": "application/json", "accept": "application/json"}
stackUuid = str(uuid.uuid4())[-8:]
stackCount = 0

# open config file, parse the JSON
configFile = open("resources.json")
configString = configFile.read()
configFile.close()
logger.debug("Read config: " + configString)
configDict = json.loads(configString)
logger.debug("Parsed config: " + json.dumps(configDict))

# loop through resources
for k in configDict["resources"]:
    logger.debug("Ordering resource " + str(k))
    requestBody = k
    requestBody["StackName"] = stackPrefix + "-" + stackUuid + "-" + str(stackCount)
    logger.debug("requestBody is " + json.dumps(requestBody))
    # submit the request (the add_to_cart endpoint is left commented for reference)
    #requestUrl = instanceUrl + "/api/sn_sc/servicecatalog/items/" + catalogItem + "/add_to_cart"
    requestUrl = instanceUrl + "/api/now/cmp_catalog_api/submitrequest?cat_id=" + catalogItem
    cartReq = requests.post(requestUrl, auth=(username, password), json=requestBody, headers=headerDict)
    logger.debug("POST returns " + cartReq.text)
    stackCount = stackCount + 1
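The submit call is asynchronous -- approvals, policy checks, and provisioning all happen after the POST returns. If the pipeline needs to wait for the resources, one option is to poll the generated request via the standard Table API. A sketch, assuming you've extracted the request's sys_id from the submitrequest response (check the actual response shape with the API Explorer on your instance):
#!/bin/python3
# Sketch: wait for a generated request to close, via the standard Table API.
# ASSUMPTION: requestSysId was extracted from the submitrequest response --
# verify the actual response shape with the API Explorer on your instance.
import os
import time
import requests

instanceUrl = os.environ.get("INSTANCE_URL")
auth = (os.environ.get("USERNAME"), os.environ.get("PASSWORD"))
headerDict = {"accept": "application/json"}

def wait_for_request(requestSysId, timeout=1800, interval=30):
    requestUrl = instanceUrl + "/api/now/table/sc_request/" + requestSysId
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(requestUrl, auth=auth, headers=headerDict,
                            params={"sysparm_fields": "number,request_state"})
        resp.raise_for_status()
        result = resp.json()["result"]
        print(result["number"] + " is " + result["request_state"])
        # terminal states all start with "closed" -- verify the choice list on your instance
        if result["request_state"].startswith("closed"):
            return result["request_state"]
        time.sleep(interval)
    raise TimeoutError("request still open after " + str(timeout) + " seconds")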
Buildspec File
The final pipeline piece is the buildspec.yml file, which tells AWS CodeBuild what steps to run when a build is requested.
version: 0.2
env:
  variables:
    INSTANCE_URL: "https://my-instance.service-now.com"
    CATALOG_ITEM: "my-catalog-item-sys-id"
    STACK_PREFIX: "cscpipe1"
  secrets-manager:
    USERNAME: "cscpipe1/user1"
    PASSWORD: "cscpipe1/pass1"
    ACCTID: "dockerhub/awsacctid"
phases:
  pre_build:
    # provision infrastructure
    commands:
      - aws codeartifact login --tool pip --domain hallam --domain-owner ${ACCTID} --repository Python
  build:
    commands:
      - echo "Build started on `date`"
      - pip install requests
      - python ./request-infrastructure.py
  post_build:
    commands:
      - echo "Build completed on `date`"
      - echo "Put post-build commands here"
The buildspec file uses the "variables" and "secrets-manager" sections to populate environment variables for use by my Python script, with the "secrets-manager" variables coming from AWS Secrets Manager.
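If you need to create those secrets, one option is a short boto3 script -- a sketch assuming your AWS credentials are already configured, using the secret names from the buildspec above with placeholder values:
#!/bin/python3
# Sketch: create the Secrets Manager entries referenced by the buildspec.
# Assumes AWS credentials are configured; the values here are placeholders.
import boto3

sm = boto3.client("secretsmanager")
for name, value in [
    ("cscpipe1/user1", "my-servicenow-username"),
    ("cscpipe1/pass1", "my-servicenow-password"),
    ("dockerhub/awsacctid", "123456789012"),
]:
    sm.create_secret(Name=name, SecretString=value)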
The AWS CLI call in the pre_build phase may not be required in your environment; it points pip package installs at my private AWS CodeArtifact Python repository instead of the public ones.
Since this is an example, all it does is request the infrastructure -- it does not actually use the provisioned EC2 instance to do any work. A real pipeline would perform its build tasks, then request decommissioning of the resources using a similar API call.
The use case for this kind of thing is when you want specific infrastructure spun up to facilitate a pipeline, but also want the same visibility, quotas, approvals, etc., which you would get with more persistent cloud infrastructure. By tying into CSC, you get the best of both worlds.