mikegallagh
ServiceNow Employee

Prerequisites

  1. A deployed AKS cluster
  2. A properly configured local machine or a dev container with the following tools installed: kubectl, helm, the Azure CLI, vault, and jq

Overview

Many organizations are standardizing on HashiCorp Vault for secrets management, and the approach has a lot of potential benefits: systematic rotation of credentials, granular access controls, caching control, centralized management, and nearly ubiquitous, API-based access. As a result, many organizations don't want additional locations where their secrets (credentials, certificates, etc.) can be stored. That can be a blocker for discovery, since credentials are needed to properly discover the devices in your IT estate.

This is where external credential stores come into play. An external credential store is the configuration and access method by which MID Servers retrieve credentials held in an external system such as HashiCorp Vault or CyberArk.

In this article I'll cover how to build a test Vault instance and how to build and configure a containerized MID Server to use it.

Most organizations will already have a Vault instance running, but if you're just familiarizing yourself with the process you may not want to experiment against an enterprise-wide instance. That's why I've included instructions for deploying your own test instance.

 

Process

Get Vault Server Up and Running

  1. Add Hashicorp Helm repository to your local repositories list:

    helm repo add hashicorp https://helm.releases.hashicorp.com 
    
  2. Update the repo index to get the most recent version details for the various charts.

    helm repo update
    

    Output will look something like this:

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "hashicorp" chart repository
    Update Complete. ⎈Happy Helming!⎈
    
  3. Find the latest version of the vault chart by doing a repo search and showing the versions.

    helm search repo hashicorp/vault
    

    Output will look something like this:

    NAME                            	CHART VERSION	APP VERSION	DESCRIPTION
    hashicorp/vault                 	0.28.1       	1.17.2     	Official HashiCorp Vault Chart
    hashicorp/vault                 	0.28.0       	1.16.1     	Official HashiCorp Vault Chart
    hashicorp/vault                 	0.27.0       	1.15.2     	Official HashiCorp Vault Chart
    hashicorp/vault                 	0.26.1       	1.15.1     	Official HashiCorp Vault Chart
    hashicorp/vault                 	0.26.0       	1.15.1     	Official HashiCorp Vault Chart
    hashicorp/vault                 	0.25.0       	1.14.0     	Official HashiCorp Vault Chart
    hashicorp/vault                 	0.24.1       	1.13.1     	Official HashiCorp Vault Chart
    
  4. By default the chart installs in standalone mode without the web UI enabled. As a result, we'll need to deploy the chart with some custom configuration to get it set up the way we want.

    helm install vault hashicorp/vault \
    --set='ui.enabled=true' \
    --set='ui.serviceType=LoadBalancer'
    

    This installs Vault and all its components in the default namespace. That's likely not best practice, but it's not worth fiddling with for demonstration purposes.
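If you'd prefer to keep the demo isolated anyway, the same settings can live in a small values file and a dedicated namespace. A minimal sketch (the namespace name `vault` is my choice, not something the chart requires):

```shell
# Capture the same settings used in the --set flags above as a reusable values file
cat > values.yaml <<'EOF'
ui:
  enabled: true
  serviceType: LoadBalancer
EOF

# Install into a dedicated "vault" namespace instead of "default"
helm install vault hashicorp/vault \
  --namespace vault --create-namespace \
  -f values.yaml
```

If you go this route, remember to add `-n vault` to the later kubectl commands.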

  5. Check the status of the "vault-0" pod in the default namespace.

    kubectl get pods
    

    Output should look something like this:

    NAME                                  READY   STATUS    RESTARTS   AGE
    vault-0                               0/1     Running   0          3m10s
    vault-agent-injector-58f86445-vk8ql   1/1     Running   0          3m10s
    

    Note that the vault-0 pod shows that it's running but not ready. That's because the readinessProbe is returning a non-zero status. You can check how the readinessProbe is defined by describing the "vault-0" pod.

    kubectl describe pod vault-0
    

    Output will show a line like this:

    Readiness:      exec [/bin/sh -ec vault status -tls-skip-verify] delay=5s timeout=3s period=5s #success=1 #failure=2
    

    Since the vault is currently sealed and not initialized, it's going to return a non-zero status.
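    You can confirm the same thing from the CLI. A quick sketch, assuming the default pod name `vault-0` and that `jq` is installed locally:

```shell
# Print Vault's status as JSON; the command exits non-zero while the
# vault is sealed, so tack on "|| true" if you're scripting around it
kubectl exec vault-0 -- vault status -format=json || true

# Pull out just the sealed flag (expect "true" before you initialize and unseal)
kubectl exec vault-0 -- vault status -format=json | jq -r '.sealed'
```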

  6. Let's log in to the Vault UI to initialize and unseal the vault.

    kubectl get service vault-ui
    NAME       TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
    vault-ui   LoadBalancer   10.0.140.27   XXX.XXX.XXX.XXX  8200:31386/TCP   5h12m
    

    That EXTERNAL-IP is the publicly accessible IP address of the vault UI.

    1. Fire up a browser and navigate to http://<EXTERNAL-IP>:8200 (8200 is the port listed before the colon on that same service definition line.)

    2. Enter "5" in the "Key shares" field and "3" in the "Key threshold" field.

      mikegallagh_0-1728270520985.png

       

    3. Click "Initialize"

    4. Now you'll see the root token and unseal keys. Scroll down to the bottom and click "Download keys". It's important to retain these keys, as they'll be necessary to unseal the vault in future steps.

      mikegallagh_2-1728270567937.png
    5. Click "Continue to Unseal"

    6. You'll need to submit at least three of the five key shares in order to unseal the vault. Copy and paste the keys one at a time from the JSON file you downloaded. (Use the "keys" section, not "keys_base64".)

      mikegallagh_4-1728270612765.png

       

    7. Once the Vault is unsealed, you'll need to sign into the UI. Copy the "root_token" from the JSON file and paste it into the "Token" field.

      mikegallagh_5-1728270628248.png

       

    8. You should now be presented with the Vault dashboard!

      mikegallagh_6-1728270644218.png
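If you'd rather skip the UI, the same initialize-and-unseal flow can be done from the CLI. A sketch under the same 5-share/3-threshold assumptions; `cluster-keys.json` is just a local filename of my choosing, and it needs the same careful handling as the downloaded key file:

```shell
# Initialize with 5 key shares and a threshold of 3, saving the keys locally
kubectl exec vault-0 -- vault operator init \
  -key-shares=5 -key-threshold=3 -format=json > cluster-keys.json

# Submit three of the five unseal keys to meet the threshold
for i in 0 1 2; do
  kubectl exec vault-0 -- vault operator unseal \
    "$(jq -r ".unseal_keys_b64[$i]" cluster-keys.json)"
done
```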

       

Prep your instance

  1. Take a look at this article to set up the external credential storage.

  2. Ensure you have both the External Credential Resolver plugin and the HashiCorp Vault plugin installed in your instance.

Build your MID Server

In order for this to work, you'll need a custom MID Server image with the Vault agent installed and the JAR file from the Vault plugin, so that the MID process can resolve the credentials properly.

To build the MID Server we'll be roughly following the instructions from this blog article.

To get the MID Server working properly, we'll need to install the Vault agent as part of the build process. We can use this article as a base, but we'll need a few tweaks to get the client to install on the base OS image the MID Server uses (AlmaLinux versus Alpine).

mkdir VaultMIDBuild
cd VaultMIDBuild
wget https://install.service-now.com/glide/distribution/builds/package/app-signed/mid-linux-container-recipe/2024/07/27/mid-linux-container-recipe.washingtondc-12-20-2023__patch6-07-17-2024_07-27-2024_1026.linux.x86-64.zip
unzip mid-linux-container-recipe.washingtondc-12-20-2023__patch6-07-17-2024_07-27-2024_1026.linux.x86-64.zip

Create a new container registry

az acr create --name <Registry Name> --resource-group <myresourcegroup> --sku Standard
Next, determine which authentication method you'll use. This allows the Vault agent running in your MID Server to authenticate with the Vault server. Most of the documentation shows AppRole, so that's what we'll use.

I roughly followed this article to get the Vault agent set up and configured for use with the MID Server.

To do the following, you'll first need to set an environment variable so that your vault client knows how to reach your recently installed Vault server.

export VAULT_ADDR=http://vault.server.ip.address:8200/  

Then you'll need to get the root token out of the JSON file you downloaded during the server setup process.

vault login 

It'll ask you to paste in the root token to log in.

Then we'll need to enable the AppRole auth method, create the demo policy, and create a role with the demo policy attached.

vault auth enable approle
vault policy write demo - <<EOF
path "secret/*" {
  capabilities = ["read"]
}
EOF
vault write auth/approle/role/role1 bind_secret_id=true token_policies=demo

Now that this has been completed, we'll need to gather the details the agent needs to log in to the Vault server automatically.

echo -n $(vault read -format json auth/approle/role/role1/role-id | jq -r '.data.role_id') > ./asset/roleID
echo -n $(vault write -format json -f auth/approle/role/role1/secret-id | jq -r '.data.secret_id') > ./asset/secretID
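It's also worth seeding a test secret for the MID Server to resolve later. A sketch, assuming a KV v2 engine mounted at `kv` (to match the `/v1/kv/data/...` path in the troubleshooting section) and placeholder secret values; note that the demo policy above grants read on `secret/*`, so if you mount at `kv` you'll want the policy path to match:

```shell
# Enable a KV v2 secrets engine at the "kv" mount
vault secrets enable -path=kv kv-v2

# Write a placeholder credential (names and values are examples only)
vault kv put kv/MTGTestLinux username=testuser password=testpass

# Read it back; KV v2 nests the fields under .data.data
vault kv get -format=json kv/MTGTestLinux | jq -r '.data.data.username'
```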

Now we'll need to create an agent.hcl config file in the asset directory. This tells the agent how to operate.

listener "tcp" {
  address = "127.0.0.1:8200"
  tls_disable = true
}

cache {
  use_auto_auth_token = true
}

vault {
  address = "http://vault.server.address:8200"
}

auto_auth {
    method {
        type = "approle"
        config = {
            role_id_file_path = "/opt/snc_mid_server/roleID"
            secret_id_file_path = "/opt/snc_mid_server/secretID"
            remove_secret_id_file_after_reading = false
        }
    }
}

Once that's all completed, we'll need to update the MID Server init script ./asset/init so that it starts the Vault agent prior to starting the MID Server.

I did that by adding a new function called "vaultAgent".

vaultAgent () {
  logInfo "DOCKER: Starting HashiCorp Vault Agent"
  # Run the agent in the background; it runs in the foreground by default,
  # which would keep the script from ever reaching midStart
  vault agent -config=/opt/snc_mid_server/agent.hcl &
}

And then I call that new function from the start command.

case "$1" in
  start)
    midSetup
    vaultAgent
    midStart
    ;;
  setup)
    midSetup -f
    ;;
  stop)
    midStop
    ;;
  restart)
    midRestart
    ;;
  help)
    midHelp
    ;;
  *)
    midHelp
    ;;
esac

Now let's build your MID Server image. Run this from the directory containing the recipe's Dockerfile (az acr build needs that source location as its final argument):

az acr build --image <image name> --registry <Container Registry Name> .

At this point the MID server should be built and ready to deploy. However, you'll need to attach the container registry to the AKS cluster so it can access the built image.

az aks update -n <K8s Cluster Name> -g <Resource Group> --attach-acr <Container Registry Name> 

We'll now deploy the MID Server we just built. This is done exactly the same way as in the article I linked earlier.

 

Configure everything

Now that we have it all deployed, let's set up and configure the MID Server.

You'll need to validate the MID server the same way you always do with a new MID server deployment.

Once it's validated, you'll need to tell the platform to push the JAR file down to this MID Server so it can use it to communicate with the Vault agent.

You'll do that by setting the MID Server's name in a configuration property.

mikegallagh_7-1728270686514.png

 

This also has the benefit of configuring the address the MID Server will use to communicate with the Vault agent. (Note that this is a local address. I haven't tested it, but in theory it could communicate with a remote agent as well.)

Now that you have your MID server configured you'll need to create a credential record.

The platform uses "stub" credential records to define the types of credentials that are available in the Vault and how to locate them. There are a few main components to keep a close eye on in these credential records.

First of all, in order to create them you MUST be in the "External Storage" view. If you're in any other view you won't be able to properly interact with the credential records.

Another key point to note is the "Credential ID". This is the path used to find the credential within the Vault API. Note, however, that it does NOT include the "/v1" prefix; the process automatically prepends "/v1" to the API path for you.
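As a concrete example (using a hypothetical KV v2 secret named `MTGTestLinux` on a `kv` mount):

```shell
# Credential ID as entered on the credential record -- no /v1 prefix
CRED_ID="kv/data/MTGTestLinux"

# The resolver prepends /v1 when it calls the Vault HTTP API
echo "/v1/${CRED_ID}"
```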

The final, super important point to note here is the "Credential storage vault" dropdown. There are only two options: "CyberArk" and "-- None --". If this is set to anything other than "-- None --", it won't pull credential records from the vault.

mikegallagh_9-1728270726179.png

 

Once all of that is completed you should be able to test the credential.

 

mikegallagh_11-1728270770278.png

 

Assuming everything works correctly you should get a success.

 

mikegallagh_12-1728270790666.png

 

Troubleshooting

  • Double-check the logs from the MID Server container. (I prefer to use stern for this.) The Vault agent's lines are prefixed with "agent.apiproxy". You should be able to see where it forwards the request to Vault with something like this:
[INFO]  agent.apiproxy: forwarding request to Vault: method=GET path=/v1/kv/data/MTGTestLinux
  • If you see anything other than that, double-check your agent config. Common things that can go wrong:
    • The vault can reseal itself (for example, after a pod restart), requiring you to manually unseal it again
    • Your AppRole credentials may not have access to the API path you're requesting
    • Your AppRole credentials may be set up incorrectly
    • The path in your "Credential ID" may be incorrect
  • In some cases I've had to log directly into the MID Server container, then kill and restart the Vault agent process manually. This lets me see more of the agent's logging detail, which can point to where to look next.
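From inside the container, you can also hit the agent's local listener directly to confirm it can resolve a secret end to end. A sketch, assuming the `kv/data/MTGTestLinux` path from the log line above; the agent injects its auto-auth token, so no `X-Vault-Token` header is needed:

```shell
# Query the agent's proxy listener (127.0.0.1:8200 per agent.hcl)
curl -s http://127.0.0.1:8200/v1/kv/data/MTGTestLinux | jq

# KV v2 nests the secret fields one level deeper than KV v1
curl -s http://127.0.0.1:8200/v1/kv/data/MTGTestLinux | jq -r '.data.data.password'
```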

Appendix

Roughly followed these instructions here.

Version history
Last update:
‎10-06-2024 08:15 PM