RemcoLengers
ServiceNow Employee

This guide provides a complete command-line walkthrough for setting up an Azure Kubernetes Service (AKS) cluster with Cloud Native Operations (CNO) 2.0 Discovery. It includes instructions for installing the Hipster Shop sample application, which is optional. This Kubernetes Informer-based discovery method provides a cloud-native way of achieving near real-time discovery and should be applied as each new Kubernetes cluster gets created, thereby enabling security, governance and event management use cases.

 

The commands have been tested on an Azure-based VM running Red Hat 9 Linux OS.

 

Prerequisites:

 

  1. Azure CLI - https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=dnf
  2. git (sudo yum install git -y)
  3. kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
  4. helm (see below)
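A quick way to confirm the first three are on the PATH before you start (helm is installed further down):

az version
git --version
kubectl version --client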

Pick a resource group, cluster name and location as per your own needs.

 

Commands:

 

az login 
az group create --name RLk8sRG --location eastus
az aks create --resource-group RLk8sRG --name RLAKSCluster --node-count 1 --enable-addons monitoring --node-vm-size Standard_DS3_v2 --generate-ssh-keys
az aks get-credentials --resource-group RLk8sRG --name RLAKSCluster
kubectl get nodes
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
sudo chmod 777 /usr/local/bin/helm
I needed the command above to loosen the permissions, because my azureuser did not have permission to run helm. There may be more elegant solutions.
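One such alternative: the get-helm-3 script honors the HELM_INSTALL_DIR and USE_SUDO variables, so you can drop the binary into a user-writable directory instead of changing permissions on /usr/local/bin. A sketch (make sure the chosen directory is on your PATH):

mkdir -p $HOME/bin
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | HELM_INSTALL_DIR=$HOME/bin USE_SUDO=false bash
helm version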
 
 
Deploy a sample application (optional)
 
git clone https://github.com/yuxiaoba/Hipster-Shop.git

 

cd Hipster-Shop

 

kubectl create namespace hipster-shop

 

kubectl apply -f ./release/kubernetes-manifests.yaml --namespace hipster-shop

 

kubectl get pods --namespace hipster-shop
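It can take a little while for all pods to pull their images and become Ready; if you prefer to wait in one go instead of re-running the command above, kubectl wait can do that:

kubectl wait --for=condition=Ready pod --all --namespace hipster-shop --timeout=300s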

 

kubectl get svc frontend-external --namespace hipster-shop
The output of this command will give you the external IP address on which the deployment should be reachable:

NAME                TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
frontend-external   LoadBalancer   10.0.123.123   1.156.177.247   80:32752/TCP   17h

Then check whether the shop is running on http://1.156.177.247 (use the EXTERNAL-IP, not the CLUSTER-IP).
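To check from the command line instead of a browser, a plain HTTP request against the external IP should return a 200 (the IP below is the example value from the output above; substitute your own EXTERNAL-IP):

curl -s -o /dev/null -w '%{http_code}\n' http://1.156.177.247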
 
[Screenshot: the Hipster Shop frontend in the browser]

 

  
[End of Optional]
 
Deploying CNO 2.0 for ServiceNow Discovery
 
In case you want to review the documentation first: the ServiceNow Cloud Native Operations (CNO) Discovery documentation covers this setup in more detail.
 
Set up a user in the ServiceNow instance with the discovery_admin role. This role expands into the roles listed in the screenshot. Make sure this user is not locked out and is not required to change its password on first login. If you make a mistake in the commands that follow, the user may get locked out and you may need to clear that lock before trying again.
[Screenshot: roles contained in the discovery_admin role]

 

 Create a new user:
 
[Screenshot: the new user record in the ServiceNow instance]
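Before wiring these credentials into the cluster, it can be worth sanity-checking them against the instance. The sketch below uses the standard Table API and only prints the HTTP status code: 200 (or 403) means authentication worked, 401 points at wrong credentials or a locked account. Substitute your own instance, user and password:

curl -s -o /dev/null -w '%{http_code}\n' -u 'USERNAME:password-set-in-servicenow-instance' 'https://INSTANCENAME.service-now.com/api/now/table/sys_user?sysparm_limit=1'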

 

Now create a new namespace, enter the ServiceNow user credentials as a secret, and use helm to deploy the CNO 2.0 k8s-informer deployment.

 

kubectl create namespace cno20
Leave out the ".service-now.com" part of the instance name in the command below.
When entering the password, make sure it is surrounded by double quotes, as it may contain characters that would otherwise make the command fail.

kubectl create secret generic k8s-informer-INSTANCENAME --from-literal=.user=USERNAME --from-literal=.password="password-set-in-servicenow-instance" -n cno20
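To confirm the secret exists with both keys (without printing the values):

kubectl describe secret k8s-informer-INSTANCENAME -n cno20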
 
Another good reference if things change in the future: 
 
helm install -n cno20 --set acceptEula=Y --set instance.name=INSTANCENAME --set clusterName="RLk8sRG" k8s-informer https://install.service-now.com/glide/distribution/builds/package/informer/2.1.1/informer-helm-2.1.1...
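Right after the install it is worth watching the namespace until the informer pod reports Running (Ctrl-C to stop watching):

kubectl get pods -n cno20 -w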
 
If all is well,

[Screenshot]

will start showing

[Screenshot]

 

 
If the Discovery results are not visible in the ServiceNow instance within 1-2 minutes, check the informer logs:
 
kubectl logs -f deployment/k8s-informer-INSTANCENAME.service-now.com --namespace cno20
 
Notes:
 
Initially I created the AKS cluster with the default node size Standard_DS2_v2; this turned out to be too small for the workload: the Hipster Shop wouldn't work and the CNO 2.0 deployment stayed in a pending state. The cordon and drain commands below are the way to set up a larger node pool and move the deployments there.
The helm uninstall command was needed because I had removed the .service-now.com part from the instance name.
The AKS cluster just has one node to keep the costs of this sample setup low.
 
Some commands I used to debug and correct:
 
helm status k8s-informer --namespace cno20
helm get all k8s-informer --namespace cno20
export POD_NAME=$(kubectl get pods --namespace cno20 -l "app=k8s_informer-INSTANCENAME.service-now.com" -o jsonpath="{.items[0].metadata.name}")
kubectl get pod $POD_NAME --namespace cno20
az aks list --output table
az aks nodepool add --resource-group RLk8sRG --cluster-name RLAKSCluster --name nodepool2 --node-count 1 --node-vm-size Standard_DS3_v2
kubectl get nodes
kubectl cordon aks-nodepool1-26034126-vmss000000
kubectl drain aks-nodepool1-26034126-vmss000000 --ignore-daemonsets --delete-emptydir-data
az aks nodepool delete --resource-group RLk8sRG --cluster-name RLAKSCluster --name nodepool1
helm uninstall k8s-informer -n cno20
kubectl get secrets --all-namespaces
kubectl delete secret k8s-informer-cred-INSTANCENAME -n cno20
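Finally, when you are done with this sample setup, deleting the resource group tears everything down and stops the charges (assuming nothing else lives in RLk8sRG):

az group delete --name RLk8sRG --yes --no-wait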

 

 

 
