EKS Kubernetes (K8s) ServiceNow Discovery Using the Visibility Agent

SK Chand Basha

Kubernetes (K8s) Overview

 

SKChandBasha_0-1757145157967.png

 

What is a Container?

  • A container is a lightweight software unit that packages everything your application needs to run: the application code, its dependencies, and configuration, all bundled together (see the minimal sketch below).
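
A minimal sketch of that packaging step, using a hypothetical Python app (the Dockerfile contents, the my-app image tag, and the file names are illustrative, not part of this article's setup):

# Write a Dockerfile describing the image: base runtime, app code, and dependencies
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt app.py ./
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
EOF

# Build the image and run it as a container
docker build -t my-app:1.0 .
docker run -d --name my-app my-app:1.0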

Why Kubernetes?

  • Suppose you have a requirement to run 10 different applications (microservices) ~ 10 containers.
  • To keep each application highly available, you create 2 replicas per app ~ 2 * 10 = 20 containers.
  • Now you must manage 20 containers.
  • Could you manage 20 containers on your own? (20 is just an example; there could be more depending on the requirement.) It would certainly be difficult.
  • A container orchestration tool or framework helps in such situations by automating the deployment and management overhead, as the short sketch below illustrates.
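
For instance, with Kubernetes the replica math above is handled declaratively rather than by hand; a rough sketch (the my-app image name is hypothetical):

# Run one of the 10 applications with 2 replicas; Kubernetes keeps them running
kubectl create deployment my-app --image=my-app:1.0 --replicas=2

# Scaling later is a single command instead of manual container management
kubectl scale deployment my-app --replicas=5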

What is K8s?

  • Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
  • It provides a set of abstractions and APIs for managing containers, so you can focus on building your application and not worry about the underlying infrastructure.
  • With Kubernetes, you can manage multiple containers across multiple machines, making it easier to streamline and automate the deployment and management of your application infrastructure.

SKChandBasha_1-1757145157978.png

 

 

 

ServiceNow Kubernetes Discovery: How It Works in Amazon EKS

 

  • The Kubernetes Visibility Agent enables you to gain visibility into on-premises Kubernetes clusters as well as the following cloud deployments: Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Red Hat OpenShift, and Rancher.

 

  • The Kubernetes Visibility Agent detects changes to resources in a Kubernetes cluster. It performs continuous discovery, reports any changes back to your instance, and updates the Configuration Management Database (CMDB) with the latest data.

 

 

How It Works

 

SKChandBasha_2-1757145157987.png

 

 

 

  • When you deploy the Kubernetes Visibility Agent, Kubernetes creates a Deployment resource in the cluster. This resource uses a secret stored in Kubernetes to connect to your ServiceNow instance.
  • The Kubernetes Visibility Agent Deployment resource contains a pod called the Informer, which connects to the Kubernetes API server and receives events for the resources in the cluster. The Informer sends the collected data to the instance through the External Communication Channel (ECC) Queue table, using the ServiceNow Table API to read from and write to the queue. The instance then processes these payloads and updates the appropriate tables in the CMDB (see the illustrative query below).
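
As an illustration of that channel, once the agent is running you can see the Informer's payloads arriving in the ECC Queue through the same Table API (Basic Auth is shown for brevity; replace the placeholders with your own values, and note that any topic/queue filter is omitted to keep the example short):

# Query the ECC Queue table for the most recent records
curl -s -u 'USERNAME:PASSWORD' \
  "https://INSTANCE_NAME.service-now.com/api/now/table/ecc_queue?sysparm_limit=5&sysparm_query=ORDERBYDESCsys_created_on"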

 

Steps to Deploy Kubernetes Visibility Agent on EKS (AWS)

 

Step 1: Create an EKS Cluster (if you don’t already have one)

 

  • Log in to your AWS Management Console.
  • Open AWS CloudShell (a browser-based shell with the AWS CLI preinstalled).

 

 

 

SKChandBasha_3-1757145157997.png

 

 

Step 2: Install eksctl in CloudShell

 

eksctl is the official CLI tool for creating and managing Amazon EKS clusters. Since CloudShell doesn’t have it by default, install it manually:

 

# Download and extract latest eksctl release

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

# Move binary to PATH

sudo mv /tmp/eksctl /usr/local/bin

# Verify installation

eksctl version

 

 

 

SKChandBasha_4-1757145158001.png

 

 

If the installation is successful, you’ll see the installed version (e.g., 0.214.0).

 

Step 3: Familiarize Yourself with eksctl Commands

 

  • To explore available commands and options, use the --help flag:
eksctl --help

                                                                                                                                                       

SKChandBasha_5-1757145158011.png

 

 

  • This will list commands like create, delete, scale, upgrade, etc.
  • To see detailed options for creating a cluster:
eksctl create cluster --help

 

SKChandBasha_6-1757145158025.png

 

 

Step 4: Create an EKS Cluster with Node Groups

 

eksctl create cluster --name <cluster_name> --nodegroup-name <node_group_name> --region <region> --node-type <instance_type, e.g. t2.medium or t3.medium> --nodes <node_count (default 2)>
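
For example, a command matching this article's cluster (the name and region come from the screenshots below; the node group name and instance type are illustrative choices) would look something like:

eksctl create cluster --name chand1 --nodegroup-name chand1-ng --region eu-north-1 --node-type t3.medium --nodes 2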

 

SKChandBasha_7-1757145158041.png

 

 

  • Press Enter to run the command.
  • Navigate to Amazon Elastic Kubernetes Service in the console and confirm the cluster shows the status Creating.

 

SKChandBasha_8-1757145158048.png

 

 

SKChandBasha_9-1757145158052.png

 

 

  • This process takes around 10 minutes. Once complete, a message confirms the cluster is ready for use, for example: EKS cluster “chand1” in region “eu-north-1” is ready.

SKChandBasha_26-1757145604362.png

 

 

Step 5: Verify Cluster Setup with kubectl

 

kubectl config view

 

  • This command displays the cluster context, credentials, and API server details, confirming that your kubectl is now pointing to the newly created EKS cluster.
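
If kubectl is not yet pointing at the new cluster (for example, in a fresh CloudShell session), the kubeconfig can be refreshed with the AWS CLI; the cluster name and region below reuse the example values from Step 4:

aws eks update-kubeconfig --name chand1 --region eu-north-1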

SKChandBasha_27-1757145677775.png

 

 

  • Once the cluster is created, you should check if the worker nodes are up and the default namespaces are available. Use the following commands:
# Check if nodes are ready

kubectl get nodes

# List all namespaces in the cluster

kubectl get ns

 

  • kubectl get nodes → Displays all worker nodes in the cluster, their status (Ready), roles, and age.
  • kubectl get ns → Lists namespaces such as default, kube-system, kube-public, kube-node-lease.

 

SKChandBasha_28-1757145770476.png

 

 

Step 6: Create a Namespace for the Kubernetes Visibility Agent

 

  • In Kubernetes, a namespace is a way to logically divide (or partition) a cluster into virtual sub-clusters.
  • Namespaces help you organize and separate resources; they act like folders inside the cluster.

 

kubectl create namespace NAMESPACE

 

Example: kubectl create namespace my-chand1

 

Rules for namespace names (RFC 1123 DNS labels):

  • Must contain only lowercase alphanumeric characters and hyphens (-)
  • Must start and end with an alphanumeric character
  • No uppercase letters, underscores, or spaces allowed (maximum 63 characters)

 

SKChandBasha_29-1757145820870.png

 

  • Verify that the namespace was created (see the command below)
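
The verification shown in the screenshot is simply a namespace lookup; with the example name from above:

kubectl get namespace my-chand1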

SKChandBasha_30-1757145846361.png

 

Step 7: Create a Kubernetes Secret for ServiceNow Credentials

  • Here I’m using Basic Authentication between ServiceNow and the cluster.
  • Get the Kubernetes Visibility Agent application from the ServiceNow Store.

SKChandBasha_0-1757147250613.png

 

  • On the ServiceNow instance, navigate to All > User Administration > Users.
  • Choose or create a user with at least the mid_server role.

 

SKChandBasha_31-1757145877844.png

 

The Kubernetes Visibility Agent needs ServiceNow credentials to send discovery data. These should match the ServiceNow user you created earlier with at least the mid_server role.

 

  • Run the following command, after replacing INSTANCE_NAME, USERNAME, PASSWORD and NAMESPACE with the relevant values:

 

kubectl create secret generic k8s-informer-cred-INSTANCE_NAME --from-literal=.user=USERNAME --from-literal=.password=PASSWORD -n NAMESPACE

 

  • INSTANCE_NAME → Your ServiceNow instance name (without domain, e.g., dev12345)
  • USERNAME → ServiceNow username
  • PASSWORD → ServiceNow password
  • NAMESPACE → The namespace you created in Step 6 (my-chand1)
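
To confirm the secret exists with the expected keys, you can describe it (the values below reuse the example instance name and namespace; describe does not print the actual credentials):

kubectl describe secret k8s-informer-cred-dev12345 -n my-chand1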

SKChandBasha_32-1757145939073.png

 

 

 

Step 8: Prepare the Kubernetes Visibility Agent YAML File

 

  • Download the Kubernetes YAML zip file provided in the article in the Now Support Knowledge Base:

https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB1564347

 

  • Extract the k8s_informer.yaml and EULA.pdf files from the zip file.
  • Edit the k8s_informer.yaml file.
  • Change the value of ACCEPT_EULA to "Y", as follows:

- name: ACCEPT_EULA
  value: "Y"

Note: By changing the value to "Y", you agree to the End-User License Agreement included in the EULA.pdf file.

 

  • Replace all occurrences of <NAMESPACE> with the namespace in which you want to install the Informer.
  • Replace all occurrences of <INSTANCE_NAME> with the name of your instance, without the domain name.
  • Replace <CLUSTER_NAME> with the name of your cluster as it appears in the CMDB.
  • Save the file (or use the sed one-liner below to make the replacements from CloudShell).
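
If you prefer to edit the file directly in CloudShell, a quick sketch of the same substitutions using sed (the values reuse the examples from the earlier steps; adjust them to your own namespace, instance, and cluster):

sed -i 's/<NAMESPACE>/my-chand1/g; s/<INSTANCE_NAME>/dev12345/g; s/<CLUSTER_NAME>/chand1/g' k8s_informer.yaml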

SKChandBasha_33-1757146011681.png

 

SKChandBasha_34-1757146041289.png

 

 

 

  • Upload the edited k8s_informer.yaml file to CloudShell so it can be applied to the EKS cluster

SKChandBasha_35-1757146051810.png

 

 

  • File upload confirmation

SKChandBasha_36-1757146077120.png

 

 

 

Step 9: Deploy the Kubernetes Visibility Agent

 

Apply the YAML file to your cluster:

 

kubectl apply -f k8s_informer.yaml

 

To verify the pod is running:

kubectl get pods -n <namespace>

 

SKChandBasha_37-1757146103880.png

 

  • Once the pod shows a Running status, the setup on the AWS side is complete (if it does not, see the troubleshooting commands below).
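
If the pod is stuck in Pending, CrashLoopBackOff, or Error, the pod description and logs usually point to the cause (the pod name comes from the kubectl get pods output above; the namespace reuses the earlier example):

# Check scheduling, image pull, and secret-mount events
kubectl describe pod <pod_name> -n my-chand1

# Check the Informer output, including connection errors to the instance
kubectl logs <pod_name> -n my-chand1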

 

Step 10: Validate the Data in ServiceNow

 

  • Navigate to All > Kubernetes > Dashboard

SKChandBasha_38-1757146158757.png

 

 

  • Change the application scope to Discovery and Service Mapping Patterns

SKChandBasha_39-1757146169129.png

 

SKChandBasha_40-1757146181656.png

Woohoo! The Kubernetes cluster data is now successfully discovered and updated in the ServiceNow CMDB.

 

Happy Learning!!

Regards,

SK Chand Basha
