ServiceNow Community • Technical How-To
⚠️ Disclaimer
This article is an independent community contribution and is not official ServiceNow documentation. It is provided as-is for educational purposes and does not represent the views of ServiceNow, Inc.
The author assumes no responsibility for any issues, outages, or security implications arising from following this guide. Always test in a sub-production environment before applying changes to production instances or MID Servers.
Refer to the official ServiceNow product documentation and contact ServiceNow Support for authoritative guidance on MID Server container deployment and mutual authentication configuration.
Overview
ServiceNow provides an out-of-box container image recipe for deploying MID Servers as Docker containers. The recipe includes a Dockerfile, an init entrypoint script, and supporting assets that handle configuration, health checks, and graceful shutdown. Critically, the init script has first-class support for mutual TLS (mTLS) authentication — it can import a PEM certificate bundle and enable certificate-based authentication before the MID Server process starts, with no basic auth bootstrap required.
This article walks through the end-to-end process of building the OOB container image and deploying it to Kubernetes with an mTLS PEM bundle injected via a Kubernetes Secret. The MID Server authenticates to the instance using its client certificate from its very first connection — no username or password is ever written to the container’s config.xml.
ℹ️ Prerequisite: This article assumes you have already completed all instance-side preparation and generated your PEM certificate bundle. If you have not done so, follow the complete guide here first: How To: Enable Mutual TLS (mTLS) Authentication Between a MID Server and the ServiceNow Instance. Specifically, you should have completed Steps 1–4 of that article (CSR generation, CA submission, PEM bundle assembly, and instance-side certificate configuration) before proceeding here.
How the Container Recipe Handles mTLS
Before diving into the deployment steps, it helps to understand exactly what the OOB init entrypoint script does with the mTLS PEM file. The relevant logic is in the generateConfigXml function:
if [[ ! -z "$MID_MUTUAL_AUTH_PEM_FILE" && -f "$MID_MUTUAL_AUTH_PEM_FILE" ]]
then
# If Cert (PEM) file is set and exists, proceed with mutual auth
cd /opt/snc_mid_server/agent && \
sh bin/scripts/manage-certificates.sh -a "DefaultSecurityKeyPairHandle" $MID_MUTUAL_AUTH_PEM_FILE
cd /opt/snc_mid_server/agent && \
sh bin/scripts/manage-certificates.sh -m
IS_MUTUAL_AUTH=1
else
# mutual auth is not set, proceed with basic authentication
replaceConfigParameter 1 mid.instance.username ${MID_INSTANCE_USERNAME}
replaceConfigParameter 1 mid.instance.password ${MID_INSTANCE_PASSWORD}
fi
The script checks two conditions: (1) the MID_MUTUAL_AUTH_PEM_FILE environment variable is non-empty, and (2) the file at that path actually exists. When both are true, it:
- Runs manage-certificates.sh -a "DefaultSecurityKeyPairHandle" to import the PEM bundle (certificate chain + PKCS#8 private key) into the MID Server's keystore under the required alias.
- Runs manage-certificates.sh -m to enable mutual authentication in config.xml.
- Sets the IS_MUTUAL_AUTH flag to 1, which causes the mandatory parameter validation to skip checking for username and password — and in fact fail if they are present in config.xml (the script treats their presence alongside mTLS as a misconfiguration).
The IS_MUTUAL_AUTH flag is consumed by the validateMandatoryParameters function later in the script. The relevant calls are:
validateMandatoryParameter "mid.instance.username" "YOUR_INSTANCE_USER_NAME_HERE" $IS_MUTUAL_AUTH
validateMandatoryParameter "mid.instance.password" "YOUR_INSTANCE_PASSWORD_HERE" $IS_MUTUAL_AUTH
The third argument ($IS_MUTUAL_AUTH) is used as a checkNotPresent flag. When it equals 1, the function inverts its check: instead of verifying the parameter exists and has been set, it verifies the parameter is absent from config.xml — and exits with EXIT_CODE_MUTUAL_AUTH_SETUP_FAILURE if it is found. This is why omitting MID_INSTANCE_USERNAME and MID_INSTANCE_PASSWORD is mandatory, not optional, when using mTLS.
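The inverted check can be illustrated with a small standalone sketch. This is a hypothetical simplification, not the script's actual code: the real function also compares values against placeholder defaults, and validate_param, cfg, and the sample XML line below are invented for the illustration.

```shell
# Simplified sketch of the inverted validation: when check_not_present=1
# (mTLS), finding the parameter in config.xml is a failure; when 0 (basic
# auth), the parameter must be present instead.
validate_param() {
  local param="$1" check_not_present="$2" config="$3"
  if [ "$check_not_present" = "1" ]; then
    grep -q "$param" "$config" && return 3  # mimics EXIT_CODE_MUTUAL_AUTH_SETUP_FAILURE
    return 0
  fi
  grep -q "$param" "$config"
}

cfg=$(mktemp)
printf '<parameter name="mid.instance.url" value="https://example.service-now.com/"/>\n' > "$cfg"

# With mTLS enabled, an absent username is the *passing* case:
validate_param "mid.instance.username" 1 "$cfg" && echo "mTLS check passed: username absent"
```

Adding a mid.instance.username line to the file would flip the result to the failure exit code, which is exactly why the credentials must be omitted.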
When the PEM file is not found, the script falls through to the else branch and configures basic auth credentials instead.
This means the deployment pattern is straightforward: make the PEM file available at a known path inside the container, set the environment variable to point at it, omit the username/password variables, and the init script handles the rest.
What You Need
- The MID Server container image recipe downloaded from your instance (the ZIP file containing the Dockerfile, init script, and supporting assets)
- A completed PEM bundle (midserver-bundle.pem) containing the leaf certificate, intermediate/root CA chain, and PKCS#8 private key — assembled per the companion article, Step 3
- Instance-side configuration completed: CA chain uploaded to sys_ca_certificate.list (Publish Status "Active"), leaf certificate uploaded to sys_user_certificate.list (status "Active"), and MID mutual authentication enabled by ServiceNow Support — per the companion article, Step 4
- A container registry accessible from your Kubernetes cluster (Docker Hub, ACR, ECR, GCR, Harbor, etc.)
- A Kubernetes cluster with kubectl access and permission to create Deployments and Secrets in your target namespace
Step 1: Build the Container Image
Extract the container recipe ZIP and build the image. No modifications to the Dockerfile are required for mTLS. The mTLS PEM bundle is injected at runtime via a volume mount, not baked into the image — which keeps the image reusable across MID Servers with different certificates.
# Extract the recipe
unzip mid-linux-container-recipe_zurich-07-01-2025__patch7-02-19-2026_03-04-2026_1012_linux_x86-64.zip -d mid-recipe
cd mid-recipe
# Build the image
docker build -t mid:zurich-p7 .
The multi-stage Dockerfile downloads (or uses a local copy of) the MID Server installation ZIP, verifies its digital signature, extracts the MID Server agent, and produces a final image based on AlmaLinux 9.2 with a non-root mid user (UID 1001, GID 1001).
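Before tagging and pushing, you can optionally smoke-test the image with plain Docker, using the same runtime pattern the Kubernetes deployment will use. This is an illustrative sketch: the instance URL is a placeholder you must replace, the container and MID Server names are invented, and it assumes the PEM bundle from the companion article is in the current directory.

```shell
# Run the image locally with the PEM bundle bind-mounted read-only.
# Note: no MID_INSTANCE_USERNAME / MID_INSTANCE_PASSWORD are passed.
docker run -d --name mid-mtls-smoke \
  -e MID_INSTANCE_URL="https://<your-instance>.service-now.com/" \
  -e MID_SERVER_NAME="mid-docker-mtls-01" \
  -e MID_MUTUAL_AUTH_PEM_FILE="/etc/mid-mtls/mid_mtls.pem" \
  -v "$(pwd)/midserver-bundle.pem:/etc/mid-mtls/mid_mtls.pem:ro" \
  mid:zurich-p7

# Confirm the mTLS path was taken in the init log:
docker logs mid-mtls-smoke | grep -i "mutual auth"
```

Remove the test container afterward; the Kubernetes deployment below is the intended production pattern.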
Tag and Push to Your Registry
docker tag mid:zurich-p7 <your-registry>/mid:zurich-p7
docker push <your-registry>/mid:zurich-p7
ℹ️ Note: If your environment requires building behind a proxy or using a local MID installer ZIP instead of downloading from install.service-now.com, the Dockerfile supports both. Use --build-arg MID_INSTALLATION_FILE=<filename> and place the ZIP in the build context alongside the Dockerfile. You can also disable signature verification with --build-arg MID_SIGNATURE_VERIFICATION=FALSE (not recommended for production).
Step 2: Create the Kubernetes Secret for the PEM Bundle
Store the PEM bundle as a Kubernetes Secret. This is the mechanism that makes the file available inside the container at runtime.
kubectl create secret generic mid-mtls-pem \
--from-file=mid_mtls.pem=./midserver-bundle.pem \
-n <your-mid-namespace>
This creates a Secret named mid-mtls-pem with a single key mid_mtls.pem. When the Secret is mounted as a volume, the key becomes a filename at the mount path.
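For reference, the kubectl create secret command above produces a manifest equivalent to the following sketch (the data value shown is a placeholder for the base64-encoded bundle, not real content):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mid-mtls-pem
  namespace: <your-mid-namespace>
type: Opaque
data:
  mid_mtls.pem: <base64-encoded contents of midserver-bundle.pem>
```

The key name mid_mtls.pem is what appears as the filename under the volume mount path in the Deployment below.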
⚠️ Important: Kubernetes Secrets are base64-encoded but not encrypted at rest by default. For production environments, ensure your cluster has encryption at rest enabled for Secrets, or use a secrets management solution (Vault, Azure Key Vault, AWS Secrets Manager, etc.) with a CSI driver to inject the PEM file.
Step 3: Create the Kubernetes Deployment
The deployment manifest ties everything together: mount the Secret as a volume at a path the mid user can read, set MID_MUTUAL_AUTH_PEM_FILE to that path, and omit MID_INSTANCE_USERNAME and MID_INSTANCE_PASSWORD.
apiVersion: apps/v1
kind: Deployment
metadata:
name: mid-server-mtls
namespace: <your-mid-namespace>
labels:
app: mid-server
spec:
replicas: 1
selector:
matchLabels:
app: mid-server
template:
metadata:
labels:
app: mid-server
spec:
securityContext:
fsGroup: 0 # Sets GID 0 on volume files and adds GID 0 as supplemental group to the container process
containers:
- name: mid
image: <your-registry>/mid:zurich-p7
env:
- name: MID_INSTANCE_URL
value: "https://<your-instance>.service-now.com/"
- name: MID_SERVER_NAME
value: "mid-k8s-mtls-01"
- name: MID_MUTUAL_AUTH_PEM_FILE
value: "/etc/mid-mtls/mid_mtls.pem"
# MID_INSTANCE_USERNAME and MID_INSTANCE_PASSWORD are intentionally omitted.
# The init script skips basic auth when mTLS is configured.
volumeMounts:
- name: mtls-pem
mountPath: /etc/mid-mtls
readOnly: true
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "4Gi"
cpu: "2"
volumes:
- name: mtls-pem
secret:
secretName: mid-mtls-pem
defaultMode: 0440 # Owner-read + group-read
Key Points in the Manifest
No credentials in the manifest. Because the init script’s else branch (basic auth) only fires when the PEM file is not found, omitting MID_INSTANCE_USERNAME and MID_INSTANCE_PASSWORD is not just optional — it’s required. If those variables were set alongside a valid PEM file, the credentials would be written to config.xml before the mTLS block executes (due to processing order in the secrets file), and the post-mTLS validation would detect them and exit with EXIT_CODE_MUTUAL_AUTH_SETUP_FAILURE.
Volume mount permissions. The container runs as UID 1001 (the mid user). The Dockerfile uses COPY --chown=$USER_ID:0, which means the mid user belongs to the root group (GID 0) — this is a common pattern for OpenShift compatibility. Kubernetes Secret volumes set file ownership to root:root by default. The fsGroup: 0 setting does two things: it recursively changes the group ownership of mounted volume contents to GID 0, and it adds GID 0 as a supplemental group to the container process. Combined with defaultMode: 0440 (owner-read + group-read), this ensures the mid user can read the mounted PEM file via its supplemental root group membership.
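The effect of defaultMode: 0440 can be sanity-checked with ordinary file permissions. This is a local illustration of the mode bits only, not the actual Secret volume:

```shell
# Create a scratch file and apply the same mode the Secret volume uses.
f=$(mktemp)
chmod 0440 "$f"

# 440 = owner-read + group-read, no world access. Inside the pod, the mid
# user (UID 1001) reads the mounted file through its GID 0 supplemental
# group, which fsGroup: 0 applies to the volume contents.
mode=$(stat -c '%a' "$f")
echo "$mode"   # 440
```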
ℹ️ OpenShift Note: On OpenShift, the default restricted Security Context Constraint (SCC) uses FSGroup Strategy: MustRunAs with GID ranges derived from namespace annotations. Requesting fsGroup: 0 will be rejected by the default restricted SCC. You will need a custom SCC with fsGroup: type: RunAsAny (or a range that includes GID 0) assigned to the MID Server's service account. On vanilla Kubernetes without SCCs, the fsGroup: 0 setting works without additional configuration.
MID Server naming and replicas. The MID_SERVER_NAME value becomes the MID Server record name on the instance. With a static name, replicas must remain at 1 — ServiceNow rejects MID Servers with duplicate names, so a second replica using the same name would be shut down. For multi-replica scenarios, use the dynamic naming templates built into the init script: include _AUTO_GENERATED_UUID_ in the name to substitute a unique UUID at each pod startup, or _NAMESPACE_HOSTNAME_ to generate a name from the Kubernetes namespace and pod hostname (e.g., mid-mtls-_NAMESPACE_HOSTNAME_ produces mid-mtls-mynamespace_mid-server-mtls-7b9f5-xk4z2). For predictable ordinal-based naming, consider using a StatefulSet instead of a Deployment.
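For example, to let each replica register under a unique name, the MID_SERVER_NAME entry in the manifest above could be changed to use one of the dynamic templates (the prefix mid-mtls- is an illustrative choice):

```yaml
- name: MID_SERVER_NAME
  value: "mid-mtls-_NAMESPACE_HOSTNAME_"  # expanded from namespace + pod hostname at startup
```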
Step 4: Deploy and Validate
Apply the manifest and monitor the pod:
# Apply the deployment
kubectl apply -f mid-server-mtls.yaml
# Watch the pod start
kubectl get pods -n <your-mid-namespace> -w
# Follow the container init log
kubectl logs -f deployment/mid-server-mtls -n <your-mid-namespace>
In the logs, you should see the mTLS path being taken:
DOCKER: mutual auth cert file found: /etc/mid-mtls/mid_mtls.pem
DOCKER: mutual auth enabled on MID
DOCKER: Update configuration DONE
DOCKER: starting mid server
If you instead see DOCKER: mutual auth cert file not found: /etc/mid-mtls/mid_mtls.pem, the volume mount is not working correctly — verify the Secret name, mount path, and file permissions.
Validate on the Instance
- Navigate to the MID Server list on the instance. A new MID Server record should appear with the name you specified in MID_SERVER_NAME.
- The MID Server should come up and show a status of Up.
- Verify the mTLS path was taken by checking mid-container.log (the init log, separate from the MID agent log) for the confirmation messages:
kubectl exec -it <pod-name> -n <your-mid-namespace> -- cat /opt/snc_mid_server/mid-container.log | grep -i "mutual auth"
You should see DOCKER: mutual auth cert file found and DOCKER: mutual auth enabled on MID.
- Also check agent0.log.0 (the MID Server agent log) for a clean startup with no TLS handshake errors:
kubectl exec -it <pod-name> -n <your-mid-namespace> -- tail -100 /opt/snc_mid_server/agent/logs/agent0.log.0
ℹ️ Note: MID Servers created with mTLS from the outset do not receive capabilities automatically. An administrator must manually add capabilities to the MID Server record on the instance. Additionally, do not use the standard Validate UI action on an mTLS MID Server — mTLS MID Servers are auto-validated through the certificate trust chain, and the Validate action is designed for basic auth MID Servers.
Optional: Using a Secrets File Alongside mTLS
The init script processes MID_SECRETS_FILE and MID_MUTUAL_AUTH_PEM_FILE independently. You can use both simultaneously — the secrets file for additional config parameters (proxy passwords, custom parameters), and the PEM file for authentication. Create a second Secret and mount it separately:
# Create the secrets file Secret
kubectl create secret generic mid-secrets \
--from-file=mid-secrets.properties=./mid-secrets.properties \
-n <your-mid-namespace>
Then add a second volume and mount to the deployment, and set MID_SECRETS_FILE:
env:
# ... existing env vars ...
- name: MID_SECRETS_FILE
value: "/etc/mid-secrets/mid-secrets.properties"
volumeMounts:
# ... existing mTLS mount ...
- name: mid-secrets
mountPath: /etc/mid-secrets
readOnly: true
volumes:
# ... existing mTLS volume ...
- name: mid-secrets
secret:
secretName: mid-secrets
defaultMode: 0440
⚠️ Important: Do not include mid.instance.username or mid.instance.password in the secrets properties file when using mTLS. These parameters are processed before the mTLS block and will be written to config.xml, causing the init script's post-mTLS validation to fail.
Optional: Persistent Volume for Config Backup
The container recipe includes pre_stop.sh and post_start.sh lifecycle scripts that back up and restore configuration files (including config.xml, wrapper-override.conf, and glide.properties) to a persistent volume mounted at /opt/snc_mid_server/mid_container. This is useful for preserving configuration across pod restarts without re-running the full setup.
To enable this, add a PersistentVolumeClaim and mount it:
volumeMounts:
# ... existing mounts ...
- name: mid-data
mountPath: /opt/snc_mid_server/mid_container
volumes:
# ... existing volumes ...
- name: mid-data
persistentVolumeClaim:
claimName: mid-server-data
The init script automatically detects this directory and restores backed-up config files on startup. It also calculates an environment variable hash (.env_hash) to determine whether the config needs to be regenerated — if your environment variables have not changed since the last run, the script skips the full setup and uses the cached configuration.
⚠️ Important — Certificate Renewal with Persistent Volumes: The environment variable hash only considers environment variable values, not the contents of mounted files. When renewing a certificate, MID_MUTUAL_AUTH_PEM_FILE still points to the same path — only the file contents have changed. This means the init script may detect no hash change and skip re-importing the certificate on restart. To ensure the renewed certificate is imported, delete the .initialized and .env_hash files from the persistent volume before restarting the pod:
kubectl exec -it <pod-name> -n <ns> -- rm -f /opt/snc_mid_server/mid_container/.initialized /opt/snc_mid_server/mid_container/.env_hash
Alternatively, change a trivial environment variable value (e.g., bump a MID_CONFIG_ comment parameter) to force a hash mismatch and trigger a full re-setup.
Optional: Generic Config Parameters via Environment Variables
The init script supports injecting arbitrary MID Server config.xml parameters through environment variables prefixed with MID_CONFIG_. The prefix is stripped and double underscores (__) are converted to dots (.) to form the parameter name. For example:
- name: MID_CONFIG_mid__log__level
value: "info"
- name: MID_CONFIG_mid__ssl__bootstrap__default__check_cert_revocation
value: "false"
These become mid.log.level=info and mid.ssl.bootstrap.default.check_cert_revocation=false in config.xml.
Similarly, JVM wrapper overrides can be set with the MID_WRAPPER_ prefix, which maps to entries in wrapper-override.conf.
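The prefix-and-underscore mapping can be reproduced with plain shell parameter expansion. This is a standalone illustration of the naming rule, not the script's actual code, and the helper name to_param is invented:

```shell
# MID_CONFIG_ is stripped, then each double underscore becomes a dot.
to_param() {
  local name="${1#MID_CONFIG_}"   # drop the MID_CONFIG_ prefix
  printf '%s\n' "${name//__/.}"   # __ -> .
}

to_param "MID_CONFIG_mid__log__level"
# -> mid.log.level
to_param "MID_CONFIG_mid__ssl__bootstrap__default__check_cert_revocation"
# -> mid.ssl.bootstrap.default.check_cert_revocation
```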
Certificate Renewal in Kubernetes
When the client certificate approaches expiry, the renewal process for a containerized MID is:
- Generate a new CSR, obtain the renewed certificate from your CA, and assemble a new PEM bundle (Steps 1–3 of the companion article).
- Upload the new leaf certificate to sys_user_certificate.list on the instance and wait for "Active" status. Do not deactivate or delete the old leaf certificate yet — the running MID Server is still using it until the pod restarts with the new bundle.
- Update the Kubernetes Secret:
kubectl create secret generic mid-mtls-pem \
  --from-file=mid_mtls.pem=./midserver-bundle-renewed.pem \
  -n <your-mid-namespace> \
  --dry-run=client -o yaml | kubectl apply -f -
- Restart the pod to pick up the updated Secret:
kubectl rollout restart deployment/mid-server-mtls -n <your-mid-namespace>
- Verify the MID Server reconnects successfully with status "Up", then optionally deactivate or remove the old leaf certificate record from sys_user_certificate.list.
⚠️ Important: Multiple leaf certificates signed by the same CA can coexist on sys_user_certificate.list. Keep the old certificate active until you have confirmed the MID Server has reconnected with the new one. If you deactivate the old certificate before the pod restarts, the running MID Server will lose connectivity immediately.
ℹ️ Note: Kubernetes does propagate Secret updates to mounted volumes eventually (the total delay is the kubelet sync period plus cache propagation delay — approximately 60–90 seconds with default settings), but the MID Server process inside the container will not detect the updated PEM file without a restart — the certificate is imported into the Java keystore at init time, not read dynamically. A pod restart is always required after updating the Secret.
Troubleshooting
| Symptom | Resolution |
|---|---|
| “mutual auth cert file not found” in init log | The Secret is not mounted correctly. Verify: kubectl describe pod <pod> shows the volume mount, and kubectl exec <pod> -- ls -la /etc/mid-mtls/ shows the PEM file. |
| EXIT_CODE_MUTUAL_AUTH_SETUP_FAILURE (exit code 3) | Either the PEM bundle is malformed (wrong key format, missing chain, corrupted file), or basic auth credentials were also provided alongside mTLS. Omit MID_INSTANCE_USERNAME and MID_INSTANCE_PASSWORD entirely. |
| Permission denied reading the PEM file | The mid user (UID 1001) cannot read the mounted Secret. Ensure defaultMode: 0440 on the Secret volume and fsGroup: 0 in the pod security context. |
| “Could not find valid private key” | The private key in the PEM bundle is not in PKCS#8 format. The key header must read BEGIN PRIVATE KEY, not BEGIN RSA PRIVATE KEY. Reconvert with: openssl pkcs8 -topk8 -nocrypt -in key.key -out key-pkcs8.key and rebuild the bundle. |
| “BEGIN ENCRYPTED PRIVATE KEY” in PEM bundle | The private key is PKCS#8 but still password-encrypted. The manage-certificates.sh script expects an unencrypted PKCS#8 key (BEGIN PRIVATE KEY). Decrypt and convert with: openssl pkcs8 -topk8 -nocrypt -in encrypted-key.pem -out key-pkcs8.pem and rebuild the bundle. |
| MID Server record does not appear on instance | The instance-side configuration is incomplete. Verify the CA chain is uploaded to sys_ca_certificate.list with Publish Status “Active”, the leaf cert is on sys_user_certificate.list with “Active” status, and ServiceNow Support has enabled MID mutual auth on your instance. Also check for a stale MID Server record with the same name from a prior failed deployment (see duplicate name row below). |
| MID Server shuts down after “duplicate name detected” | ServiceNow rejects MID Servers with duplicate names. If a stale record with the same MID_SERVER_NAME exists from a previous deployment, the new container will fail to register and shut down after three retry attempts. Delete or rename the stale MID Server record on the instance, or use a different MID_SERVER_NAME value in the deployment. |
| SSLHandshakeException in agent0.log.0 | The instance does not trust the client certificate CA. Also check for TLS-intercepting proxies (Zscaler, Palo Alto, BlueCoat) between the cluster and the instance — these break mTLS by replacing the client certificate. A proxy bypass for *.service-now.com is required. |
| Pod crashes with exit code 2 (missing config param) | The MID_INSTANCE_URL or MID_SERVER_NAME environment variable is missing or empty. These are mandatory regardless of the authentication method. |
| Pod restarts but does not re-run setup | If a persistent volume is mounted at /opt/snc_mid_server/mid_container and the environment variable hash has not changed, the init script uses the cached config. To force a re-setup, delete the .initialized and .env_hash files from the persistent volume, or change an environment variable value. |
Useful Debugging Commands
# Verify the Secret mounted correctly
kubectl exec -it <pod> -n <ns> -- ls -la /etc/mid-mtls/
# View the init log (separate from the MID agent log)
kubectl exec -it <pod> -n <ns> -- cat /opt/snc_mid_server/mid-container.log
# Verify the keystore was populated
kubectl exec -it <pod> -n <ns> -- /opt/snc_mid_server/agent/jre/bin/keytool \
-list -v -keystore /opt/snc_mid_server/agent/security/agent_keystore.jks \
-storepass changeit
# Check if config.xml has mutual auth enabled (should NOT have username/password)
kubectl exec -it <pod> -n <ns> -- grep -E "mutual|username|password" \
/opt/snc_mid_server/agent/config.xml
Quick Reference: End-to-End Checklist
Prerequisites (Completed Before This Article)
- Instance is on ADCv2, TLS support is enabled, com.glide.auth.mutual plugin is activated
- ServiceNow Support has enabled MID mutual authentication on the instance
- PEM bundle is assembled: leaf cert + intermediate(s) + root CA + PKCS#8 private key
- CA chain uploaded to sys_ca_certificate.list (Publish Status "Active")
- Leaf certificate uploaded to sys_user_certificate.list (status "Active", mapped to MID service account user)
(See How To: Enable Mutual TLS (mTLS) Authentication Between a MID Server and the ServiceNow Instance for these steps.)
Container Build and Deployment
- Extract the MID Server container recipe ZIP
- Build the Docker image (docker build) — no Dockerfile modifications needed
- Tag and push the image to your container registry
- Create a Kubernetes Secret from the PEM bundle (kubectl create secret generic)
- Create the Deployment manifest with:
  - MID_INSTANCE_URL and MID_SERVER_NAME set
  - MID_MUTUAL_AUTH_PEM_FILE pointing to the Secret mount path
  - MID_INSTANCE_USERNAME and MID_INSTANCE_PASSWORD omitted
  - Secret volume mounted with defaultMode: 0440
  - fsGroup: 0 in pod security context
- Apply the deployment and verify the pod starts
- Check the init log for “mutual auth enabled on MID”
- Verify the MID Server record appears on the instance with status “Up”
- Manually add capabilities to the MID Server record
- Note the certificate expiry date and set a renewal reminder
References
- How To: Enable Mutual TLS (mTLS) Authentication Between a MID Server and the ServiceNow Instance (companion article — certificate creation and instance configuration)
- ServiceNow Docs: MID Server Container Deployment
- ServiceNow Docs: Enable MID Server mutual authentication (KB1616866)
- ServiceNow Docs: MID Server Mutual Authentication (KB1116112)
- Kubernetes Docs: Secrets
- Kubernetes Docs: Encrypting Secret Data at Rest
