Will Hallam
ServiceNow Employee

ITOM Cloud Native Ops for Visibility and Health ("CNO") provides turnkey discovery of Kubernetes clusters, plus streaming event-driven CMDB updates, event ingestion, and metric ingestion.  The included CNO install command requires manual responses to a few prompts in order to run.  Here's an example of how that can be automated.

 

NOTE: This example is based in AWS and uses AWS constructs, but a similar approach could be used in any public (or private) cloud with the applicable tooling.

 

The Premise

The scenario I've created for this example is a Git repository which defines a set of EKS clusters.  When a new cluster is added to the code, a CI/CD pipeline provisions the cluster in AWS and then automatically installs CNO.  Within about 40 minutes, I have a new cluster that is automatically registered with my ServiceNow instance and tracked in the CMDB.

 

Example Files

The files for this example can be found at https://github.com/willhallam-sn/cno-eks-example

 

Here's an overview of what I put in my Git repository:

(NOTE: values specific to my environment, such as ServiceNow instance name, MID user name, AWS account ID, etc., have been removed from these example files; you'll need to populate them, or add your own abstraction layer for them, in order for the redacted files to be functional.)

 

admin-clusterrolebinding.yml

This is a Kubernetes manifest which creates two local users, one for the federated AWS role which I use to log in to the AWS console, and one for the local IAM user which I use for CLI/eksctl/kubectl.  These local users are both bound to the global-admin cluster role.  My master Python script, parse_and_build.py, uses eksctl to map the AWS role and IAM user to the two local Kubernetes users created by this manifest.
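
A hedged sketch of what such a binding might look like, assuming the built-in cluster-admin ClusterRole and the two user names referenced by parse_and_build.py (the actual file in the repo may differ):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-clusterrolebinding
subjects:
  - kind: User
    name: fed-admin         # mapped from the federated AWS role via eksctl
    apiGroup: rbac.authorization.k8s.io
  - kind: User
    name: local-admin       # mapped from the local IAM user via eksctl
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin       # assumed: the built-in Kubernetes admin ClusterRole
  apiGroup: rbac.authorization.k8s.io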

 

buildspec.yml

This is the file used to control my AWS CodeBuild pipeline.  It installs eksctl into the build environment and then invokes parse_and_build.py.
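
A minimal sketch of such a buildspec; the eksctl download URL and Python invocation are assumptions based on eksctl's standard release location, so adjust for your build image:

version: 0.2

phases:
  install:
    commands:
      # Download the latest eksctl release into the build container
      - curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" | tar xz -C /usr/local/bin
  build:
    commands:
      # Reconcile desired clusters against the AWS account and install CNO on new ones
      - python3 parse_and_build.py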

 

clusters.json

This is a JSON file containing a list of EKS cluster names which should exist in my AWS account.
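
The schema is whatever parse_and_build.py expects; a simple shape might look like this (assumed for illustration, not necessarily the repo's exact format):

{
  "clusters": [
    "dev-cluster-1",
    "dev-cluster-2"
  ]
}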

 

eks-cluster.sh

This script executes eksctl to create an EKS cluster whose name is passed in as the single command-line argument.
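
A hedged sketch of what such a wrapper might contain; the region and node count are placeholders, not values from the repo:

#!/usr/bin/env bash
set -euo pipefail

# Cluster name is the single command-line argument
CLUSTER_NAME="$1"

# Flags here are illustrative defaults, not the repo's actual settings
eksctl create cluster \
  --name "$CLUSTER_NAME" \
  --region us-east-1 \
  --nodes 2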

 

parse_and_build.py

This is a Python script which does the following (a hedged sketch of the flow appears after the list):

- retrieves sensitive parameters from AWS Secrets Manager
- lists all EKS clusters with eksctl
- reads clusters.json for a list of desired clusters
- for each cluster listed in clusters.json but not found in AWS:
  - builds the cluster using eks-cluster.sh
  - uses eksctl to map the federated role to user "fed-admin"
  - uses eksctl to map the local user to user "local-admin"
  - uses kubectl to apply the manifest admin-clusterrolebinding.yml
  - uses eksctl to add a role for the EBS CSI addon (required by CNO)
  - uses eksctl to install the EBS CSI addon
  - invokes sn_app_deploy.sh to install CNO in the cluster
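
Here's a minimal sketch of that reconciliation loop, assuming a Secrets Manager secret named cno/install-params, the clusters.json shape shown earlier, and illustrative key names; the per-cluster eksctl/kubectl steps are elided, and I list clusters via boto3 here for brevity where the actual script uses eksctl:

import json
import os
import subprocess

import boto3  # AWS SDK, available in the CodeBuild environment


def get_params(secret_id: str) -> dict:
    """Fetch sensitive install parameters (API key, instance name, MID password)."""
    sm = boto3.client("secretsmanager")
    return json.loads(sm.get_secret_value(SecretId=secret_id)["SecretString"])


def existing_clusters() -> set:
    """List clusters already present in the account."""
    return set(boto3.client("eks").list_clusters()["clusters"])


def desired_clusters() -> set:
    """Read the list of clusters that should exist from clusters.json."""
    with open("clusters.json") as f:
        return set(json.load(f)["clusters"])  # assumed schema, per the sketch above


def main():
    params = get_params("cno/install-params")  # assumed secret name and key names
    for name in sorted(desired_clusters() - existing_clusters()):
        # Provision the missing cluster, then run the remaining per-cluster steps
        subprocess.run(["bash", "eks-cluster.sh", name], check=True)
        # ... eksctl identity mappings, kubectl apply, EBS CSI addon steps go here ...
        env = {
            **os.environ,
            "MID_API_KEY": params["mid_api_key"],
            "INSTANCE_USERNAME": params["instance_username"],
            "CLUSTER_TYPE": "K8S",
        }
        subprocess.run(["bash", "sn_app_deploy.sh"], check=True, env=env)


if __name__ == "__main__":
    main()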

 

servicenow_acc_mid_temp.yml

This is a template Kubernetes manifest that is normally downloaded by the default CNO install script.  I retrieved it via a URL I extracted by reviewing the "curl" commands contained in the default CNO installer; in my case, that URL was https://${INSTANCE_FQDN}/api/sn_k8s/cni_api/sn_app_config.
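
That means the template can be fetched directly, for example as below (assuming INSTANCE_FQDN holds your instance's hostname; depending on your instance configuration you may also need to supply credentials):

curl -L "https://${INSTANCE_FQDN}/api/sn_k8s/cni_api/sn_app_config" -o servicenow_acc_mid_temp.yml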

 

sn_app_deploy.sh

This is a modified version of the default CNO install script.  I retrieved it by navigating to Cloud Native Ops (CNO)->Deploy CNO to Cluster and clicking the "Deploy ServiceNow components" link.  In my case it produced a one-line command like this:

 

MID_API_KEY=<key> INSTANCE_USERNAME=<user> CLUSTER_TYPE=K8S bash -c "$(curl -L https://<instance>.service-now.com/api/sn_k8s/cni_api/sn_app_deploy)"

 

I edited the command down to just the curl portion, giving me:

 

curl -L https://<instance>.service-now.com/api/sn_k8s/cni_api/sn_app_deploy

 

which I then invoked, redirecting the output into sn_app_deploy.sh.

 

From that baseline script, I adjusted things to remove the need for interaction.  Since I did not use mTLS for authentication, I hard-coded the script to use basic auth.
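
The exact edits depend on the installer version, but the general pattern is replacing interactive prompts with environment variables.  An illustrative snippet (not the actual installer code):

# Before: the stock script prompts for the authentication type, e.g.:
# read -p "Choose authentication type (basic/mtls): " AUTH_TYPE

# After: hard-code basic auth and fail fast if required inputs are missing
AUTH_TYPE="basic"
: "${MID_API_KEY:?MID_API_KEY must be set}"
: "${INSTANCE_USERNAME:?INSTANCE_USERNAME must be set}"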

 

Creating the CI/CD Pipeline

With my local directory populated, I created an AWS CodeCommit repository and pushed the code to it.  I then tied the CodeCommit repository to a CodeBuild project.  I gave the CodeBuild project's role permission to retrieve the secret in which I stored my API key, instance name, and MID user password.  I also added the permissions required by eksctl to the CodeBuild project role (documented here: https://eksctl.io/usage/minimum-iam-policies/).
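
For the secret, the statement added to the CodeBuild role's policy might look like this; the region, account ID, and secret name are placeholders (the "-*" suffix accounts for the random characters Secrets Manager appends to secret ARNs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:<account-id>:secret:cno/install-params-*"
    }
  ]
}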

 

With these pieces in place, initiating a build against my CodeBuild project creates any clusters listed in the clusters.json file which do not already exist, adds the needed roles and addon, and then installs CNO.  Since I did not configure mTLS, I had one manual step: validating the new MID when it appeared in the ServiceNow instance, literally two clicks.  Beyond that, my cluster was automatically added to my CMDB, with updates like pod creation/deletion reflected within a minute or two of the change occurring.

 

Conclusion

Putting together this example took about six hours, since it was my first attempt at this use case.  Now that I have a working example, making adjustments or porting it to a different cloud would take a lot less time.  Going through this exercise has illustrated for me how on-boarding a cluster into ServiceNow can be a seamless activity thanks to the Cloud Native Operations plugin.

Comments
andrewrouch
Tera Expert

Thanks for the article, Will.  I am creating the design documentation for using this in our VPCs in AWS and GCP, so I need to understand the security model.  Are the admin roles just for installing?  What are the permissions needed for the local users?

Will Hallam
ServiceNow Employee

Thanks @andrewrouch -- the local users are so I can work with the cluster after it's provisioned; they aren't required by anything on the provisioning side.  I found that, by default, eksctl only gives API access to the user/role which provisioned the cluster, and since that's the pipeline role in this case, I had to add some kind of access in order to be able to use kubectl after the cluster spins up.  That part of the process would be very specific to your org's standards for setting up cluster access.

 

BTW, if you're looking to leverage CNO, be sure to target the newer version, as the first release, on which this article was based, is being sunset and won't run on platform versions newer than Vancouver.
