The Dynatrace Observability SGC v1.14.1 shipped some fixes that look incremental in the release notes but hit directly at the field-level mechanics that have been breaking IRE, service maps, and event correlation in enterprise environments for a long time. Let’s get specific about what was actually happening before — and what changes now.
Let me set the scene. You’ve done the hard work — Dynatrace deployed, Service Graph Connector humming, topology flowing into your CMDB. Things look great in the demo. Then six months later, someone opens the CMDB Health Dashboard and you’ve got 847 duplicate SQL Server records, your service maps are flickering like a strobe light, and nobody can explain why Azure VMs are floating in the dependency graph like digital tumbleweeds.
Sound familiar? You’re not doing it wrong. The connector was doing it wrong — or at least, doing things inconsistently enough that your IRE was fighting an identity crisis on your behalf.
The March 2026 release of the SGC (v1.14.1) doesn’t have a flashy feature name. There’s no “Now Assist for Topology” banner on it. But from a data quality and CSDM alignment perspective, this is one of the more significant connector releases in recent memory. And I want to go further than the release notes do — I want to show you exactly what the SGC was producing before, what it produces now, and why that specific difference matters to IRE, to your service maps, and to everything downstream.
Figure 1 — SGC Data Flow: From Telemetry to CSDM
The SGC sits between Dynatrace telemetry and your CMDB. Every normalization decision it makes — identifiers, class mappings, relationships — directly determines whether IRE creates clean CIs or a duplicate avalanche. The five fixes in v1.14.1 all target decisions made inside that middle box.
Fix 1: MSSQL Instance Naming — Closing the Discovery Gap
What Was Happening Before
When the SGC pulled a SQL Server entity from Dynatrace, it constructed the name attribute on cmdb_ci_mssql_instance using a host-and-port pattern derived from Dynatrace’s internal entity model. In practice, that produced values like SQLPRD01:1433 — hostname colon port.
Meanwhile, ServiceNow Discovery was doing exactly what it was designed to do: enumerating SQL instances using Windows Management Instrumentation and constructing the name using the standard SQL Server convention, which is SQLPRD01\MSSQLSERVER for a default instance, or SQLPRD01\INSTANCENAME for named instances.
These two formats look obviously different to a human, and IRE responds the same way a human would if you asked it to match “SQLPRD01:1433” against “SQLPRD01\MSSQLSERVER” — it concludes they are not the same thing. So it creates a second cmdb_ci_mssql_instance record. You now have two CIs representing one actual database instance: one with Discovery-sourced attributes (patches, install path, version), and one with Dynatrace-sourced attributes (connection metrics, process relationships). Neither record is complete. Neither has full authoritative data. And both are orphaned from the other source’s relationship chain.
What the SGC Produces Now
The SGC now normalizes the SQL instance name into the hostname\instance_name format before populating the identifier. For a default instance on SQLPRD01, it produces SQLPRD01\MSSQLSERVER. For a named instance, SQLPRD01\INSTANCENAME. This matches Discovery exactly.
IRE receives a name value it recognizes from the Discovery-created record, matches against the existing CI, and merges the Dynatrace-sourced attributes into the authoritative record rather than creating a new one. One CI, two contributing sources, complete data. The deduplication problem is solved at the source rather than downstream with reconciliation rules or manual cleanup scripts.
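The normalization is simple to picture in code. The sketch below is illustrative, not the connector's actual implementation — in particular, where the instance name comes from in Dynatrace entity metadata is an assumption here:

```python
def normalize_mssql_name(raw_name: str, instance_name: str = "MSSQLSERVER") -> str:
    """Rewrite a pre-1.14.1 host:port identifier into the hostname\\instance
    convention that ServiceNow Discovery uses for cmdb_ci_mssql_instance.

    `instance_name` is assumed to be available from Dynatrace entity
    metadata; default instances fall back to MSSQLSERVER.
    """
    host = raw_name.split(":", 1)[0]  # drop the :1433 port suffix
    return f"{host}\\{instance_name}"

# Old connector output on the left, Discovery-compatible output on the right:
print(normalize_mssql_name("SQLPRD01:1433"))              # SQLPRD01\MSSQLSERVER
print(normalize_mssql_name("SQLPRD02:1433", "PAYMENTS"))  # SQLPRD02\PAYMENTS
```

Once both sources emit the same string, IRE's existing identity rule for the name attribute does the rest — no custom reconciliation needed.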
Figure 2 — MSSQL Naming: IRE Decision Before and After
Fix 2: Cross-OS Process Normalization — The Silent Service Map Destroyer
What Was Happening Before
This one is subtle and takes a while to surface, which is part of what makes it dangerous. When Dynatrace models a process group instance (a logical grouping of identical processes running across multiple hosts), the SGC maps this to cmdb_ci_appl in the CMDB. The identifier for that CI is derived from the process name, executable path, and command-line parameters.
On a Linux host, those values look like this:
- Executable path: /usr/bin/java
- Parameters: -jar /opt/payments/payment-service.jar --spring.profiles.active=prod
On a Windows host running the same logical application, those same values come through as:
- Executable path: C:\Program Files\Java\jdk-17\bin\java.exe
- Parameters: -jar C:\PaymentApp\payment-service.jar --spring.profiles.active=prod
To a human these are obviously the same application. To IRE they are not. The identifier strings are different, so IRE creates two separate cmdb_ci_appl CIs — one for the Linux instantiation, one for the Windows instantiation. Now your service topology is built on top of two disconnected nodes representing one logical service component. The service map shows them separately. Relationships from the application tier to downstream databases and upstream services are split across both CIs, so neither has the complete picture.
When an alert fires in Event Management and tries to bind to a CI using process identity, it can only match one of the two — and depending on which host the alert originated from, correlation either works partially or fails outright.
In environments where application workloads run across mixed Linux and Windows infrastructure — which is nearly every enterprise middleware or integration layer — this was creating a persistent, invisible tear in the service topology.
What the SGC Produces Now
v1.14.1 applies a normalization pass to both the executable path and command-line parameters before constructing the identifier. Path separators are standardized, platform-specific path prefixes are stripped or abstracted, and parameter strings are normalized to a common form. The result is that both the Linux and Windows process group instances produce the same identifier value, IRE matches them to a single cmdb_ci_appl record, and your service topology is whole again.
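To make the idea concrete, here is a minimal sketch of that kind of normalization pass. The specific heuristics (basename-only executable, stripped drive letters, abstracted directory prefixes) are my assumptions — the connector's actual rules are internal — but the effect is the same: both platforms collapse to one identifier:

```python
import re

def normalize_process_identity(exe_path: str, params: str) -> str:
    """Collapse platform-specific executable paths and parameters into one
    identifier so Linux and Windows instances of the same app match.
    Heuristics here are illustrative, not the connector's actual logic."""
    # Keep only the executable's base name, dropping the path and .exe suffix
    exe = exe_path.replace("\\", "/").rsplit("/", 1)[-1].lower()
    exe = re.sub(r"\.exe$", "", exe)
    # Strip drive letters and standardize separators in the parameter string
    norm = re.sub(r"[A-Za-z]:", "", params).replace("\\", "/")
    # Abstract the platform-specific directory prefix in front of the jar
    norm = re.sub(r"(/\S*/)([\w.-]+\.jar)", r"\2", norm)
    return f"{exe} {norm}".strip()

linux = normalize_process_identity(
    "/usr/bin/java",
    "-jar /opt/payments/payment-service.jar --spring.profiles.active=prod")
windows = normalize_process_identity(
    r"C:\Program Files\Java\jdk-17\bin\java.exe",
    r"-jar C:\PaymentApp\payment-service.jar --spring.profiles.active=prod")
assert linux == windows  # both collapse to the same cmdb_ci_appl identifier
```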
The specific knock-on effects are significant: service maps now show a single application node with the complete set of upstream and downstream relationships from both infrastructure layers. Event binding accuracy improves because the CI that receives an alert exists as a unified record rather than a platform-specific fragment. And the process-level topology that feeds AIOps correlation clustering — which depends on traversing these application relationships — operates on complete data instead of partial data.
Figure 3 — Cross-OS Process: Topology Before and After Normalization
Fix 3: Azure VM-to-Server Relationships — Reconnecting the Cloud Dependency Chain
What Was Happening Before
The SGC was successfully ingesting Azure virtual machine entities from Dynatrace and creating cmdb_ci_vm_instance records in the CMDB. The CI attributes — name, IP, OS, cloud region — were populated. What was not being created was the Runs on::Runs relationship connecting that virtual machine to its underlying compute host or Azure host record.
In CSDM terms, this relationship is how infrastructure-layer dependency is expressed. Without it, the VM exists in your CMDB as a leaf node: it has no upstream compute context. For service mapping, this means the topology chain terminates at the VM tier — you can see that your application runs on an Azure VM, but the VM has no computed relationship to the underlying physical or virtual compute fabric. For Change Impact Analysis, a change to the underlying Azure host has no traversable path to the VM CIs that depend on it, so blast radius calculations return incomplete or zero results. For AIOps, when the platform tries to correlate an infrastructure-layer alert upward to a business service, the traversal hits the VM node and stops.
The symptoms are easy to misattribute. Teams will often assume the service map is wrong, or that Dynatrace isn’t sending the right data, or that the mapping configuration is at fault. The actual cause is simply that the SGC wasn’t writing the relationship record that should accompany every VM entity it ingests.
What the SGC Produces Now
For every Azure VM entity it processes, the SGC now explicitly creates the Runs on::Runs relationship between the cmdb_ci_vm_instance and the appropriate host-tier CI. The dependency chain is complete. Service maps show the full upstream/downstream topology through the VM tier into the compute layer. Change Impact Analysis can now traverse that relationship, which means blast radius calculations for infrastructure changes become materially more accurate. AIOps correlation can follow the chain all the way from a service-level alert down to the infrastructure tier and back up again.
If you’re in a heavily Azure-dependent environment — and most enterprises are at this point — this is the fix that most directly improves the quality of the data that Now Assist for ITSM and Event Management are working with.
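Verifying the fix in your own instance amounts to a coverage check: which VM CIs still have no Runs on::Runs relationship row? The sketch below models that check over in-memory rows; the dict shape and the assumption that the VM sits on the parent side of the relationship are mine, so confirm the direction against your cmdb_rel_ci data before relying on it:

```python
def vms_missing_runs_on(vm_sys_ids, rel_rows):
    """Return VM sys_ids that have no 'Runs on::Runs' relationship row.

    rel_rows mimics cmdb_rel_ci entries as dicts with 'parent' (assumed
    to be the VM, which "Runs on" its host) and 'type_name'. The row
    shape and direction are illustrative, not an actual schema dump.
    """
    covered = {r["parent"] for r in rel_rows
               if r["type_name"] == "Runs on::Runs"}
    return [vm for vm in vm_sys_ids if vm not in covered]

rels = [{"parent": "vm_001", "type_name": "Runs on::Runs"}]
print(vms_missing_runs_on(["vm_001", "vm_002"], rels))  # ['vm_002']
```

Run the equivalent query before and after the first post-upgrade pull cycle and the missing-relationship count should drop toward zero for Azure VMs.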
Fix 4: Environment-Unique Service Naming — Ending the Flap
What Was Happening Before
When Dynatrace monitors the same application service running in multiple environments — say, PaymentService in DEV, TEST, and PROD — it models those as separate entities internally. But the SGC was deriving the name field for the corresponding cmdb_ci_service_auto records primarily from the Dynatrace service name, without embedding the environment context. All three environments produced the same name value: PaymentService.
Now IRE has a problem. Its identity set for cmdb_ci_service_auto includes the service name. All three sources claim to be the same CI. But they carry conflicting attribute values — different environment tags, different endpoint URLs, different supporting infrastructure relationships. IRE reconciles this by applying the most recent values from whichever source ran last. The result is a service record that bounces between DEV, TEST, and PROD attributes depending on the timing of the last pull.
The practical downstream effects are significant and often invisible until they surface as operational problems. Event Management routes alerts to the wrong environment context because the service record currently reflects TEST attributes when the alert is about PROD. Now Assist gets inconsistent answers when asked about the service’s supporting infrastructure because the relationship set changes between queries. Reporting dashboards show incoherent environment-level metrics because the underlying service records don’t maintain stable identity separation.
What the SGC Produces Now
v1.14.1 embeds the environment context directly into the service naming logic. The three environments now produce three distinct name values: PaymentService-PROD, PaymentService-TEST, and PaymentService-DEV. IRE creates three separate, stable cmdb_ci_service_auto records — one for each environment. Each record maintains its own attribute set, its own relationship chain, and its own event correlation context. The flapping stops. The environmental separation that should always have existed in your CMDB service layer now does.
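The naming scheme above is trivial to express, which is part of why the flapping was so frustrating — a one-line suffix is all that separates three stable records from one unstable one. A sketch, using the hyphen-suffix convention shown above (the connector's actual formatting logic is internal):

```python
def environment_qualified_name(service_name: str, environment: str) -> str:
    """Embed the environment tag so each environment gets a distinct,
    stable cmdb_ci_service_auto identity. The suffix convention mirrors
    the examples in this post; where the environment value comes from
    in Dynatrace metadata is an assumption."""
    return f"{service_name}-{environment.upper()}"

for env in ("prod", "test", "dev"):
    print(environment_qualified_name("PaymentService", env))
# PaymentService-PROD
# PaymentService-TEST
# PaymentService-DEV
```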
One important operational note: if you’ve been running pre-1.14.1 and you upgrade, you will have existing cmdb_ci_service_auto records with the old unified names. The new pull cycle will create new records with the environment-qualified names. You will need to reconcile or retire the old flapping records manually. The fix is forward-looking; it doesn’t clean up the historical mess automatically.
Figure 4 — Service Identity Before and After: Flapping vs. Stable
Fix 5: Solaris Classification — Getting Off the Generic Floor
What Was Happening Before
When the SGC encountered a Solaris host entity from Dynatrace, it had no specific class mapping for it. The fallback behavior was to classify the CI as cmdb_ci_computer — the generic catch-all at the top of the hardware CI hierarchy. The os field would correctly reflect Solaris, so the data wasn’t wrong exactly, but the CI class was. That distinction matters for several reasons.
Class-specific reconciliation rules, relationship types, and CMDB Health compliance checks are all scoped by CI class. A Solaris host sitting in cmdb_ci_computer won’t match the class-specific rules written for cmdb_ci_solaris_server. Discovery-sourced Solaris records, correctly classified, won’t merge cleanly with SGC-sourced records in the wrong class — IRE is looking at different classes and won’t match them. CMDB Health scoring marks the record as non-compliant on classification. And any downstream process that routes work or applies automation based on CI class — which is a very common pattern in ITSM and ITOM workflows — misses or misclassifies these CIs.
What the SGC Produces Now
Solaris hosts are now mapped to the appropriate CI class natively within the SGC, rather than falling back to the generic cmdb_ci_computer. This means Discovery-sourced Solaris CIs and SGC-sourced Solaris CIs now land in the same class, enabling IRE to match them properly. CMDB Health compliance scores improve for Solaris-classified records. Class-specific reconciliation rules apply correctly. And if you have automation or routing logic that operates on CI class, Solaris infrastructure now participates in those workflows the way it should.
This is the smallest of the five changes in scope, but in environments with significant Solaris infrastructure (financial services, large-scale Unix shops) it cleans up a persistent classification gap that affected CMDB health reporting without being obvious about where the score was being dragged down.
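The class decision itself reduces to a small mapping rule. This sketch models only the Solaris case discussed here — the target class name follows the cmdb_ci_solaris_server pattern named above, and treating the os field as the trigger is my assumption about how you'd script the historical cleanup, not the connector's internal logic:

```python
def target_class_for_host(os_name: str, current_class: str) -> str:
    """Return the CI class a host record should land in. Only the
    Solaris fallback case is modeled; other classes pass through."""
    if "solaris" in os_name.lower() and current_class == "cmdb_ci_computer":
        return "cmdb_ci_solaris_server"
    return current_class

print(target_class_for_host("Oracle Solaris 11", "cmdb_ci_computer"))
# cmdb_ci_solaris_server
print(target_class_for_host("Windows Server 2022", "cmdb_ci_win_server"))
# cmdb_ci_win_server
```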
The Aggregate Effect: What Your CMDB Health Looks Like After All Five
Figure 5 — CMDB Health: The Three Cs Before vs. After v1.14.1
What This Means for the Customization Tax
Here’s the lens I always use when evaluating a platform update: does this reduce or increase the amount of custom code I’m maintaining? Because custom code in ServiceNow is a tax. You pay it at implementation, you pay it at upgrade time, and you pay it every time a new developer touches the platform and has to decode what the regex in that transform map is actually doing.
v1.14.1 reduces the tax in concrete ways. If you built a custom identifier normalization script to compensate for the SQL naming mismatch, you may now be able to remove it. If you have regex in your event management configuration compensating for process path variability, that’s now potentially dead weight. If you have environment-specific routing logic to work around service flapping, check whether it’s still necessary. If you added extra identity attributes to IRE rules to force-match Solaris CIs that were landing in the wrong class, those may now be counterproductive.
The Configure → Integrate → Customize principle applies here: When the platform fixes a behavior at the integration layer, your responsibility is to verify that your customizations are still providing value — and not just doing what the product now does natively. Customization that duplicates platform behavior is maintenance burden, not intellectual property.
The Bigger Picture: Why CMDB Quality Is an AIOps Prerequisite
There’s a tendency to treat CMDB data quality as an operational nice-to-have. It’s not. If you’re serious about Event Management — especially if you’re on the path toward autonomous operations or Now Assist for ITSM — the quality of your CMDB topology is the ceiling on what those systems can do.
AIOps correlation works by traversing relationships. If the relationships are missing (Azure VMs with no upstream compute) or duplicated (two CIs for one process), the correlation engine produces worse clusters, more alert noise, and lower confidence in automated actions. You’ll see the symptoms — alert fatigue, low Now Assist accuracy, manual override rates climbing — but you may not trace them back to the foundational CMDB issue. That tracing is the job. v1.14.1 removes five specific reasons the tracing was failing.
So, What Should You Actually Do?
If you’re running the Dynatrace Observability SGC in a production environment, here is your post-upgrade action list:
- Audit cmdb_ci_mssql_instance for duplicates. Query for records where the name matches the hostname:port pattern. These are the SGC-sourced orphans. After upgrading, the next pull will create correctly named records that match Discovery CIs. You will need to retire or merge the old port-format records.
- Review your IRE identity sets for process-class CIs. If you added compensating identity attributes (secondary name fields, custom path attributes) to force-match Linux and Windows process CIs, those may now produce false positives or unnecessary complexity. Evaluate whether they can be removed.
- Inventory transform map customizations targeting path normalization. Any scripts in your SGC transform maps doing regex normalization on executable paths or command-line parameters are now potentially redundant. Test removal in a non-production environment before committing.
- Run a relationship coverage check on cmdb_ci_vm_instance for Azure CIs. After the first pull cycle post-upgrade, you should see Runs on::Runs relationships appearing on Azure VM records that previously had none. If they don’t appear, check whether the Dynatrace entities include the host-tier relationship data in their API response.
- Handle the service record transition actively. The environment naming change is forward-looking. Existing service records with the old unified names will not be automatically renamed. After upgrading, new pulls create new environment-qualified records. Plan a cleanup pass to retire the old flapping records — this won’t happen on its own.
- Validate Solaris CI reclassification. Run a query for cmdb_ci_computer records where the os field contains ‘Solaris’. These are your previously misclassified records. They will need a reclassification pass — either manual or via a scripted update — to move to the correct class. New records from post-upgrade pulls will land correctly, but the historical ones won’t self-correct.
- Rebaseline CMDB Health KPIs. After two or three full pull cycles stabilize, pull fresh Completeness, Correctness, and Compliance scores. You should see measurable movement. If you don’t, that’s diagnostic information — it means something in your environment is still producing the old behavior and deserves investigation.
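For the first item on that list, the audit is a pattern match: any cmdb_ci_mssql_instance name still in the old hostname:port shape is an SGC-sourced orphan. A minimal sketch of that filter, assuming you've already exported the name values (for example via the Table API or a list export):

```python
import re

# hostname:port, e.g. SQLPRD01:1433 -- the pre-1.14.1 SGC naming format
PORT_PATTERN = re.compile(r"^[\w.-]+:\d+$")

def flag_sgc_orphans(names):
    """Return the cmdb_ci_mssql_instance names still in the old
    hostname:port format -- retirement/merge candidates post-upgrade."""
    return [n for n in names if PORT_PATTERN.match(n)]

records = ["SQLPRD01:1433", r"SQLPRD01\MSSQLSERVER", "SQLPRD02:1433"]
print(flag_sgc_orphans(records))  # ['SQLPRD01:1433', 'SQLPRD02:1433']
```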
The bottom line: this release is worth the upgrade. Not because it’s flashy, but because it fixes things that have been silently degrading your data quality for a long time. The fixes are in the right place — upstream of IRE, where they have to be. Clean up the compensating controls, work through the transition steps above, rebaseline your health metrics, and let the platform do what it should have been doing all along.
Questions, war stories, or a particularly egregious Dynatrace-fed IRE duplicate situation you want to commiserate about? Drop it in the comments — I read them all.
