Mathew Hillyard

This article stems from my experience working on multiple CSDM implementations, and is focused on understanding considerations when migrating from a custom service management structure to CSDM.
I will discuss the challenges faced, solution considerations, my top thirteen tips, gotchas and lessons learned, and will finish with some key takeaways. This article includes technical insights, but there is plenty of valuable information for anyone designing or working within CSDM.

 

Table of Contents

  1. The Challenges
  2. Implementation
  3. Related Product Development Streams
  4. My Thirteen Top Tips, Gotchas, And Lessons Learned
  5. Key Takeaways
  6. Useful Links

 

The Challenges

The CSDM Maturity Model

The staged CSDM maturity model of Foundation > Crawl > Walk > Run > Fly works well for new customer implementations, where the customer has a CMDB but no services, or where a customer has services but not necessarily CSDM services.

 

However, there are many ServiceNow customers who have had their platform for a long time, who saw the need for a service architecture long before CSDM existed, and who may therefore have built a custom structure to suit their business needs. Migrating this to CSDM could be a simple lift and shift, or it could require a complete rework of services and everything they're connected to – and all the people, processes and technologies that use them. It's more likely to be the latter, and if so, a staged maturity model won't suffice, as the impact to day-to-day operations will be too severe.

 

Whilst it may be possible to bring some elements in via pre-go-live releases, typically the service management structure will be so embedded in ITSM operations that a single big-bang go-live is the only viable approach. However, the CSDM maturity model still plays a vital role in understanding the order of consideration for the implementation work.

 

Business Applications

Your customer/organisation may or may not have Enterprise Architecture (formerly known as Application Portfolio Management or APM) licensed. Bear in mind that although CSDM references Business Applications within the Design domain, the table comes with the base platform, so there is no pressing need to purchase Enterprise Architecture at the start of a CSDM migration. However, it is extremely important to understand how you will extract or build an inventory of Business Applications, as they're required early on in either a staged or big-bang implementation.

 

If your customer/organisation does not already have a reliable source of such data you may need to examine the existing custom structure to discover the Business Application landscape. This will naturally lead to the value that Enterprise Architecture can bring – an inventory of Business Applications with both application categorisation/rationalisation and ultimately a technology portfolio to manage the planned disposition of Business Applications – basically technology strategy and technology risk. Conversely, to get full value out of Enterprise Architecture you ideally need to have implemented CSDM up to the Run stage.

 

As you build out the information objects and capabilities of Business Applications you "complete the circle" in the Fly stage (as well as linking services to portfolios, which, whilst slightly out of scope for this article, is something well worth thinking about much earlier on in the implementation, particularly the portfolio taxonomy). Finally, operational data related to "Impacted" Business Applications from Incident, Problem and Change is required in order to proceed to basic rationalisation of Business Applications (which was previously the second stage of APM maturity).

 

Historical Data

Historical data in a platform with a legacy service structure has to be evaluated. Typically, on a customer engagement, historical data is treated separately and a line is drawn between old and new at go-live. Some considerations might include:

  • How would open or recently closed records be handled?
  • How would switching over from the old structure to the new structure be handled with basic (point in time) reporting?
  • How would switching over from the old structure to the new structure be handled with comparative or trend-based reporting in Performance Analytics?
  • Which processes and applications will be impacted?

The impact is so significant – across all business processes, most operational teams, and a significant proportion of reporting and analytics – that historical data may need to be migrated in its entirety. Most organisations need to keep historical records for a minimum period to meet company policy and external regulatory compliance. It is advisable in all CSDM engagements to have a clear view of this, and to ensure appropriate archive policies are in place for data that no longer requires frequent access, to keep the database footprint as small as possible.

 

The Path From Legacy To CSDM

Identifying a migration path from a custom service structure to CSDM is a critical consideration. This can be achieved by looking at the attributes in each table to get a feel for what role(s) that table performs, what that could correspond to within CSDM, and by examining CMDB relationships within the custom structure, if present.

Below is an example from a customer engagement of a mixture of baseline and custom tables in a hierarchical 3-tier service structure, linked via CI Relationships:

  • Top layer object – a Service in the baseline Service [cmdb_ci_service] table.
  • Middle layer object – a custom table that is a hybrid, mixing Business Application and Service data.
  • Base layer object – a custom table that is a hybrid of Service Offering and Application Service data, but also contains some Business Application data.
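Before any migration scripting begins, the mapping from each legacy layer to its candidate CSDM classes can be captured in a simple lookup. This is an illustrative sketch only – the layer names and the decision to split each layer this way come from the example engagement above, not from any baseline feature:

```javascript
// Hypothetical sketch: suggest candidate CSDM target tables per legacy layer.
// The three-tier layer names mirror the example structure described above.
function suggestCsdmTargets(layer) {
    var map = {
        top:    ['cmdb_ci_service_business', 'cmdb_ci_service_technical'], // split by classification
        middle: ['cmdb_ci_business_app', 'cmdb_ci_service'],               // hybrid: extract Business Applications
        bottom: ['service_offering', 'cmdb_ci_service_auto']               // split into Offering + Application Service
    };
    return map[layer] || [];
}
```

Even a trivial lookup like this is worth agreeing with the customer up front, because it becomes the contract for every subsequent data migration script.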

 

Service Data

These are the key activities for migrating service data based on the structure above:

  1. Work with Service Owners and Configuration Management to categorise services as Business or Technical.
  2. Analyse the data in the middle layer, extract obvious Business Applications, and identify appropriate actions for the remainder. For example, a Business Application might already be defined elsewhere within the service map; attributes contained in the middle and bottom layers might together represent a Business Application; or the object might have no direct mapping into CSDM and require rework of the service map to reflect it. For some services, net new Business Applications may be identified and will need to be created.
  3. Once a first pass at an inventory of Business Applications is compiled, identify relationships between each Business Application and the to-be Application Services. You will probably find that the Build domain won't include a legacy element, so Business Applications will most likely connect directly to Application Services. Digital Integration Management is available within Enterprise Architecture, but integrations and API interfaces are a complex subject and outside the scope of this article.
  4. Identify which attributes in the bottom layer should belong to a Service Offering and which to an Application Service and split each record in the bottom layer into one of each, relating each pair of records based on the classification of the parent Service.
  5. Retrieve the CI Relationships connected to the bottom layer and create new CI Relationships to the new Application Services.
  6. Define and build out the Technical Service Offerings and their Dynamic CI Groups for Infrastructure.
  7. Finally, set the necessary Service Portfolio fields on Service and Service Offering, link parent-child Application Services together (again based on bottom-layer record-to-record CI Relationships), and set all Application Services to Operational (something that can only be done once at least one Entry Point is related to each Application Service).
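Step 4 above – splitting each bottom-layer record into a Service Offering and an Application Service – can be sketched as a simple attribute split. The attribute names below are hypothetical placeholders; the real lists come out of your field-mapping workshops:

```javascript
// Sketch: split one bottom-layer record into a Service Offering record and
// an Application Service record, based on an agreed attribute map.
// These attribute lists are illustrative examples, not from a real instance.
var OFFERING_ATTRS = ['name', 'support_group', 'business_criticality'];
var APP_SERVICE_ATTRS = ['name', 'environment', 'version'];

function splitBottomLayerRecord(legacy) {
    function pick(attrs) {
        var out = {};
        attrs.forEach(function (a) {
            if (legacy[a] !== undefined) out[a] = legacy[a];
        });
        return out;
    }
    return {
        serviceOffering: pick(OFFERING_ATTRS),      // destined for service_offering
        applicationService: pick(APP_SERVICE_ATTRS) // destined for cmdb_ci_service_auto
    };
}
```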

 

Coding Dependencies

Once a service migration path is identified, the next challenge is how to uncover where the legacy custom structure is used and referenced. Stop and pause here: how would you scope out a project to find every coding reference to a custom table, or fields that are a reference to that table, and build out a set of stories, ideally aligned by process or product, whilst also working out whether the code is still in use? How would you estimate effort, given a dependency could be something very straightforward like a reference field on an Incident, or could be some complex coding within a large script include that could require the entire script to be refactored?

These are the key activities:

  1. Start top down with the major process areas. It’s obvious that all the main ITSM processes will be heavily impacted - Incident, Problem, Change, Request, Config, Knowledge and more. The big challenge with any mature platform is the level of technical debt and customisation - not necessarily poor customisation, but where custom features were added years ago when the platform was not as comprehensive as it is today, and built upon over time, or genuine business customisations that remain essential for normal operation.
  2. Speak with key SMEs to understand how the process operates and which areas of the platform might be indirectly impacted – especially any custom functionality or custom applications and integrations.
  3. Set the standards for refactoring. Such a project is likely to result in a lot of code changes throughout the platform. Some of this code will be baseline with some customisation, some entirely custom. How much of the code is old and no longer meets today's best practices? Could it be made more efficient, rewritten with GlideQuery to simplify elements of the code and produce more robust error handling, or even replaced with newer low-code or no-code solutions?
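Once dependencies are being logged, grouping them by process area is the basis for story writing and estimation. A minimal sketch, assuming each logged dependency carries a process label assigned during analysis (the object shape here is hypothetical):

```javascript
// Sketch: categorise logged dependencies by process area so each group can
// become a set of stories. The dependency objects mirror what an extended
// dependency-logging script might export - an assumed, illustrative shape.
function groupByProcess(dependencies) {
    return dependencies.reduce(function (acc, dep) {
        (acc[dep.process] = acc[dep.process] || []).push(dep);
        return acc;
    }, {});
}
```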

 

Getting The Dependencies

It is clear that although the top-down approach is essential to understand things from a process perspective, references to custom tables and fields could be across the entire platform, documentation could be missing or outdated, and configuration may exist that the organisation is unaware of. It is therefore necessary to find, document and categorise each and every coding and platform reference by process. Not a small ask!

As a first step, consult the ServiceNow Community article "Migrating into CSDM – identifying table dependencies", which contains a Fix Script that logs coding dependencies: https://www.servicenow.com/community/common-service-data-model/migrating-into-csdm-identifying-table...

This is primarily designed for migration from a baseline table – for example Application Service to Business Service – and whilst it covers the "big hitters" – business rules, client scripts, script includes etc. – significant areas of the platform are absent.

 

There are baseline ways to find scripting references, such as the built-in Code Search and the search available within the SNUtils browser plugin, but these do not cover everything (and SNUtils might require adding search source tables to the instance, which your customer may not allow).

Take some time to examine the tables that extend Application File [sys_metadata] to locate the coding tables you will need to search, and the query needed on each table to return results.

 

One possible approach is to search the Dictionary Entry [sys_dictionary] table for Dictionary Entries with a Field Class that could hold either a reference to a custom table or a custom field that references that table, restricted to tables that either extend Application File or are recognisable as "development" tables. However, this will return many hundreds of tables that are either internal product/system tables or contain no records.
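As a starting point, the sys_dictionary search described above can be expressed as an encoded query. This sketch covers plain Reference and List field types only – Document ID, Condition and scripted references need separate handling:

```javascript
// Sketch: build an encoded query for sys_dictionary that finds fields
// referencing a given custom table. internal_type=reference covers plain
// Reference fields; glide_list covers List fields, which can also point at
// the table. Illustrative starting point only - not exhaustive.
function buildDictionaryQuery(customTable) {
    return 'reference=' + customTable +
           '^internal_type=reference^ORinternal_type=glide_list';
}
```

On the platform this string would be passed to addEncodedQuery() on a GlideRecord (or a GlideQuery) against sys_dictionary.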

Bear in mind that this approach is fine for regular platform development objects but isn't so useful for newer platform features like ATF, Flow Designer, Integration Hub and Performance Analytics, where the structures are a little more obfuscated. This is one of the harder parts to uncover – for example there are several tables between an Indicator Source and the Dashboard(s) using a Widget that references the Indicator that contains that Indicator Source. More on this later.

 

It is next to impossible to account for every licensable (or even free) plugin that could be installed, so I recommend restricting the search to the core platform, plus Flow Designer, Integration Hub, Performance Analytics Premium (if licensed) and Data Certification (which is deprecated; it is recommended to replace it with CMDB Data Manager), as well as Discovery/Service Mapping and Enterprise Architecture (as they are commonly used in instances where CSDM is implemented). Of course, your engagement may be on an instance with other product plugins installed, so some additional work may be needed.

 

The net result should be an extended version of the dependencies fix script referenced above, but I would recommend moving it to a script include that can be called from a Background Script or Scheduled Job, and which takes arguments for the legacy custom table and the desired baseline CSDM table. It should then log dependencies in a consistent format suitable for export (for further analysis/breakdown by process). This set of dependencies will ultimately form the source information for many of the Stories you will need to write, categorised by process or product and then sequenced based on known dependencies – for example, amendments to tables and forms being a good first step. This script should not be used with baseline tables as it will log many false positive dependencies.
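The shape of such a script include might look like the following sketch. The search function is pluggable (a GlideRecord or GlideQuery lookup on the real platform – mocked out in the test here so the example is self-contained), and the output is a flat, exportable list of dependency entries:

```javascript
// Sketch of the script-include interface described above. Takes the legacy
// table and the target CSDM table, runs each search definition through a
// pluggable search function, and returns dependency entries in a consistent
// shape suitable for export and per-process breakdown. The searchDefs and
// entry shape are assumptions for illustration, not a baseline API.
function findDependencies(legacyTable, targetTable, searchDefs, searchFn) {
    var results = [];
    searchDefs.forEach(function (def) {
        // searchFn returns matching record identifiers for one coding table
        searchFn(def.table, def.field, legacyTable).forEach(function (hit) {
            results.push({
                source_table: def.table,   // e.g. sys_script (Business Rule)
                record: hit,               // name/sys_id of the matching record
                legacy_table: legacyTable,
                target_table: targetTable  // the CSDM table to migrate to
            });
        });
    });
    return results;
}
```

On the platform, the wrapper calling this from a Background Script or Scheduled Job would log each entry rather than return an array.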

 

Implementation

I will gloss over the actual development, as it depends very much on your organisation's/customer's development standards – however, it is necessary to be very involved in sequencing discussions to ensure that components are migrated according to their dependencies upon one another. One big benefit of discovering each individual coding dependency and categorising them by process is that it makes reasonably accurate estimation of effort possible, and therefore planning of time, resources and budget.

 

Code Migration

The migration of the coding dependencies should be relatively straightforward; the main challenges are likely to be a lack of documentation about existing functionality, and control over versioning where scripting objects are being updated or remediated across multiple sprints. Be very aware of how each process uses service data, particularly Change Management: change approvals and conflict generation are not trivial scripts, and your organisation/customer will probably have customisations or extensions to the baseline code.

 

Service Migration

The categorisation of Services, and the restructure and migration of custom table records and their relationships into CSDM, can be managed via dedicated scripting and data tables that extend the Data Lookup Matcher Rules [dl_matcher] table. This enables easy documentation and migration of the new records, and acts as a permanent record of which legacy service object was migrated to which CSDM object. It can also be used during the data migration.
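The permanent legacy-to-CSDM mapping can be thought of as a simple record/resolve pair. On the platform this would be rows in a custom table extending dl_matcher; the in-memory version below is just to illustrate the idea:

```javascript
// Sketch: a record/resolve mapping of legacy service objects to their CSDM
// replacements. On the instance this is a table extending dl_matcher; this
// in-memory stand-in shows how the data migration would consume it.
function MigrationMap() {
    this._map = {};
}
// Called as each CSDM record is created from a legacy record.
MigrationMap.prototype.record = function (legacySysId, csdmTable, csdmSysId) {
    this._map[legacySysId] = { table: csdmTable, sys_id: csdmSysId };
};
// Called during data migration to re-point references from legacy to CSDM.
MigrationMap.prototype.resolve = function (legacySysId) {
    return this._map[legacySysId] || null;
};
```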

 

Data Migration

I recommend building a basic structure that uses event-based recursive scripting with its own queue and handler to manage the mass migration, with sequenced Scheduled Jobs firing off each table's migration and updating a dedicated migration-progress table that records the expected and actual numbers of records created/updated. For very high-volume tables it is far more efficient to batch record creation per legacy service record (rather than just in batches for the table overall). A dry run on another sub-production instance is recommended to estimate and assess performance and timings.
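The per-service batching can be sketched as a plain chunking function – on the platform, each batch would then be handed to its own event or scheduled-job invocation:

```javascript
// Sketch: split the records belonging to one legacy service into fixed-size
// batches. Each batch would be processed by a separate event on a dedicated
// queue, so no single job holds a scheduler worker for too long.
function makeBatches(records, batchSize) {
    var batches = [];
    for (var i = 0; i < records.length; i += batchSize) {
        batches.push(records.slice(i, i + batchSize));
    }
    return batches;
}
```

The right batch size is instance-specific; the dry run on a sub-production instance is where to tune it.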

 

Event-based Data Migration

If you are migrating millions of records and you’re using sequential event-based jobs to migrate data, expect the event log to become frightening to look at! It can also appear that migration jobs are stuck as new events are created whilst processing is still occurring on earlier events, but rest assured these work themselves out quickly enough – so long as the migration jobs use their own event queue to avoid maxing out the scheduler workers and locking the instance!

 

Related Product Development Streams

There will inevitably be further work that comes out of this kind of implementation, which very much depends on your customer/organisation’s strategic direction with ServiceNow, what products your customer/organisation has licensed, and their business/IT maturity. Platform products that directly impact CSDM and will require additional attention right at the start of a CSDM program include:

  • Enterprise Architecture (formerly Application Portfolio Management) – as you can see above, net new Business Applications will be discovered and created, and this naturally leads onto processes to onboard, manage and retire Business Applications, as well as technology strategy/technology risk discussions that will lead nicely onto Business Application scoring, Technology Reference Models, Technology Portfolio Management (which itself requires Software and Hardware Asset Management) and onto an Enterprise Architecture function.
  • Strategic Portfolio Management (SPM) – if you establish a portfolio of services (your "basket of investments") in ServiceNow, then ongoing management of pipeline, catalog and retirement is an ideal candidate for SPM. The soon-to-be-published CSDM 5.0 standard appears to contain a new Ideation Domain to better reflect the full idea-to-implementation process.
  • Service Mapping – it is possible to implement CSDM without (Discovery and) Service Mapping but in a large organisation manual buildout of tens of thousands of Application Services is hugely time consuming, error prone and lacks the facility to define, approve, discover and manage service maps. This is a critical part of CSDM as it populates a large part of the Manage Technical Services domain.
  • Integrated Risk Management (IRM) – given a large percentage of risks and controls for most organisations are related to services, products or individual configuration items, it is difficult to achieve the true value with IRM without CSDM Services and infrastructure.

 

My Thirteen Top Tips, Gotchas, And Lessons Learned

No, I couldn’t narrow it down to just ten!

  1. User education and user journey is fundamental. Don’t underestimate the time and effort required to bring your customer/organisation through a CSDM program. It can take time for stakeholders to become fully onboard, and if you’re not careful it can meet with considerable resistance to change. In some ways CSDM can appear more complicated because of the network of tables and CI relationships, so be sure to emphasise the benefits – a single, clear and consistent data model that meets the needs of both the business and technology, and underpins all service management processes.
  2. Uncovering application technology in an organisation can be tricky.  An organisation may or may not have sources of data for the application technology that support their services, so building a Business Application inventory can be a challenge. You can get to a good level of coverage fairly rapidly if the data exists, but you may need to cast your net wider across the organisation, service and technical SMEs, and within the CMDB to find technologies and their relationships. This can be particularly difficult with homegrown platforms and applications, as well as legacy or poorly documented and understood Business Applications.
  3. Attributes may not be what they first appear. Be prepared to conduct several attribute (field) mapping workshops and really understand what each attribute is, what it's for, and why it is needed (or whether it is needed at all). Also consult the uncovered dependencies: the customer may believe an attribute is not required when essential business logic actually relies upon it – which is yet another reason to gather the dependencies bottom-up.
  4. Exclude User objects. There are both coding dependencies and user dependencies. Make a call on what you can reasonably migrate, and what is not possible or sensible to migrate via development. I would recommend that the following user-type objects not be automatically migrated, as user education and workshops are preferable to changing what a user might expect when logging in – the object might require revision or may no longer be relevant in the new world, and there is a risk that removing expected objects could negatively impact user experience:
    • User preferences.
    • User-generated filters.
    • User-generated templates.
    • Personalised form and list layouts.
    • Reports (handled separately as impact analysis was required, plus many reports may be unused or out of date).
  5. Understand Performance Analytics structures. PA has a complex structure, and it’s important to understand where your discovered dependencies end up – usually on one or more Dashboards. This is how the structures relate:
    1. Indicator Source [pa_cubes] – field facts_table is where the legacy custom table would be referenced.
    2. Indicator [pa_indicators] – using the Indicator Source(s).
    3. Widget [pa_widgets] – using the Indicator(s).
    4. Portal Preference [sys_portal_preferences] – where name = sys_id and value = the sys_id(s) of the Widget(s). Return the sys_id(s) of field portal_section.
    5. Portal [sys_portal] – where page field is not empty and where sys_id is one of the portal_section records. Return the sys_id(s) of field page.
    6. PA Tab [pa_tabs] – where page field is one of the page records.
    7. Dashboard Tab [pa_m2m_dashboard_tabs] – where tab field is one of the PA Tabs. Return sys_id(s) of field dashboard.
    8. Dashboard [pa_dashboards] – where sys_id field is one of the dashboard records.
  6. Dependencies can be much harder to locate with low-code parts of the platform. Much of the data that low-code tools use is stored within the Value [sys_variable_value] table. Whilst ATF (search the value field in the Value table with document=sys_atf_step) and Workflow (document=wf_activity) references are relatively simple to locate, Flow Designer, for example, is trickier because the references are stored within the ID [document_key] field of the Value record, and this is of type Document ID, precluding dot-walking.
  7. Dependencies can be anywhere. Don’t discount a table, even if you are sure it shouldn’t contain a dependency. I found dependencies in less well-known locations such as Change Schedule Definition Popover Fields, CMDB Suggested Relationships, Export Set, Processor, Survey Trigger Condition, Transform Entry and View Rule, amongst many others.
  8. Understand Application Services. An essential part of implementing CSDM is understanding the hierarchy of Application Service classes, what features each class possesses, and when to pick a class (if not using Service Mapping). This excellent article contains the detail: https://www.servicenow.com/community/common-service-data-model/application-services-how-to-use-them/... – but in addition to this, here are my findings:
    1. Mapped Application Services are of limited benefit. This class of Application Service has limited automation and relies on someone clicking the "Update with changes from CMDB" Related Link on the Application Service record each time the CI Relationships within the Application Service map change. This is both impractical and unrealistic in most organisations. Consider using Mapped Application Services only when there is no usable service map (yet), or perhaps with a manually populated or very small CMDB. If you have a mature CI Relationship table structure, use Dynamic Services (Calculated Application Services) instead.
    2. Calculated Application Services are not easily portable. I found out the hard way that Calculated Application Services use the framework of Service Mapping, with a referenced Service Layer, part of the Service Model, to build the related Entry Points and Manual Endpoints that link the Application Service and its CIs. This framework is instance-specific and is not portable to another instance. ServiceNow has a support article detailing how to export to an update set: https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0622391
      1. Be warned that the first Fix Script – which attempts to export the Calculated Application Services without the Layer field value – can result in tens of thousands of customer updates in the resulting update set (including Entry Points, Manual Endpoints and relationships to all the CIs in every Application Service map), and can therefore take several hours to complete and to import into the next instance in the stack. More importantly, it doesn't work, because it doesn't actually remove the Impact Layer ID field value from each Application Service!
      2. The second Fix Script – which attempts to re-populate the (now) empty Impact Layer ID field once the Application Services are retrieved in the target instance – does not retrieve the CI Relationships between the Application Service and its CIs, so you will need to export and import these as well, otherwise the Application Service map will not be populated.  
        • Given the huge size of the data set – and my practical experience that following the guidance above could (and did) take hours to implement into the next instance in the stack, yet still failed – I decided to dynamically create each Calculated Application Service on each instance, and this is what I would recommend. Just make sure the sub-production instances are cloned from production soon after go-live so that the record Sys IDs are identical.
    3. Dynamic CI Groups stand or fall based on Foundational data. This is why the Foundation Domain is the first step – gloss over this step at your peril! It is inevitable that many of the “hardware” services you encounter will need to be represented as Technical Service Offerings linked to Dynamic CI Groups. Without good Foundation data – location, model, organisational structures, plus more specific data such as operating system – you may be unable to scope CMDB Groups effectively or bring in a small enough number of CIs to underpin Dynamic CI Groups because the CI data is not mature or accurate enough.
    4. Keep maximum size of a Dynamic CI Group relatively small. A Dynamic CI Group can relate to up to 10,000 CIs. However, it’s not a good idea to construct groups near this limit, and especially to exceed this limit, for two reasons.
      1. Refreshing Impacted Services – if you enter a Dynamic CI Group as the CI on a Change Request, it will "unpack" all the CIs into the Affected CIs related list. If you then Refresh Impacted Services, you could end up with a huge volume of calculations to pull back Impacted Services from 10,000 Affected CIs. I've seen this freeze the user session! If you remove this "unpack" facility, be aware that you will also need to customise the Refresh Impacted Services functionality to manually add the Dynamic CI Group into Impacted Services: without the "unpack" automation it won't get added, because although the Service Configuration Item Association [svc_ci_assoc] table contains a record for each Application Service (where the CI ID and Service ID fields both contain the Application Service), the same is not true for Dynamic CI Groups.
      2. Filtering – If you want to align to leading practices and set up ITSM forms with Service, Service Offering and Configuration Item, then you will probably want to auto-filter for efficiency:
        1. Enter a Service offering: Configuration item field auto-filters via Application Services/Dynamic CI Groups to return only the CIs that belong to that Service Offering.
        2. Enter a Configuration item: Only those Offerings connected to Application Services/Dynamic CI Groups connected to this CI are visible.
      • The issue is that the Reference Qualifier for Configuration item is a query on the Configuration Item [cmdb_ci] table, whereas all CI data for Application Service maps and Dynamic CI Groups lives in the Service Configuration Item Association [svc_ci_assoc] table. This means you ultimately have to extract the CI Sys IDs and return a sys_idIN query, which is not performant; you could find the user session freezes when attempting to evaluate this reference qualifier at or above the Dynamic CI Group limit of 10,000 CIs, especially if your Service Offering is linked to multiple Dynamic CI Groups. This may become less of an issue once customers realise the performance gains from RaptorDB.

    5. Understand how Service Configuration Item Associations, Manual CI Exclusions / Inclusions and Traversal Rules for Application Services interact.
      1. Service Configuration Item Association [svc_ci_assoc] contains the “service map” for an Application Service and its CIs, based on the “levels” selected for the Application Service. This brings in all but a small number of CI Classes, many of which your organisation will not need to consider for service impact.
      2. This is where the Manual CI Exclusions / Inclusions [svc_manual_ci_exclusions_inclusions] table comes in – work with Configuration Management, Change Management and Incident Management to define which classes are definitely not required and exclude them. This drastically reduces the size of the Service Configuration Item Association and Manual Endpoint tables.
      • However, be aware that in selected cases, a Manual CI Exclusions / Inclusions record does not exclude population in Service Configuration Item Association. Review the Traversal Rules for Application Services [svc_traversal_rule] table – as an example, there is a rule for CI [cmdb_ci] to Tracked Configuration File [cmdb_ci_config_file_tracked] which will override the Manual CI Exclusions / Inclusions and will need to be deactivated.
  9. Understand how “Impacted” Related Lists are managed. Incident and Change Management include System Properties within their respective Properties pages to turn on the refresh of impacted lists functionality, as well as being able to refresh synchronously or asynchronously via events.
    1. Impacted Services for CSDM are refreshed based on the Affected CIs [task_ci] list for the target record. Each of these records is looked up in the Service Configuration Item Association [svc_ci_assoc] table and the unique list of Services is returned; each is some form of Application Service, and together they form the list of Impacted Services [task_cmdb_ci_service].
    2. Impacted Services refresh happens automatically on insert of a Change Request, but not on insert of an Incident.
    3. Refreshing Impacted Services via the Related UI Action utilises the TaskUtils script include to refresh the relevant related lists based on CSDM CI Relationships but this all starts with gathering the Impacted Services.
      1. Look up each of the Affected CIs on the Task in the Service Configuration Item Association [svc_ci_assoc] table and return the unique list of (Application) Services/Dynamic CI Groups.
      2. Use these Application Services to calculate the [Impacted] Service Offerings (but see point 10 below!) and Impacted Business Applications.
    4. Change behaves differently when Service Mapping is activated. When refreshing Impacted Services, not only are Application Services and Dynamic CI Groups populated in Impacted Services, but also any Application CIs that are a parent of any of the Affected CIs via a Runs on::Runs CI Relationship. You may have to customise the relevant script include. See this link for more information: 'Impacted Services/CIs' related list is showing different results in Incident vs change form
    5. Although the Problem table can display these related lists, there is no such automation to refresh them. If you want Problem to be consistent with Incident and Change (or you want to use APM Indicators to gather problem data for application scoring), you will need to mimic the functionality present on the Incident table/application and build the same objects for the Problem table.
  10. Service Offerings aren’t aligned to CSDM. The Service Offering [task_service_offering] table – that is, the related list of (impacted) Service Offerings against Incident, Problem, Change and so on – is not calculated in the baseline instance in a way that aligns with CSDM. Until this is fixed you will need to customise the instance to relate Service Offerings correctly. Review these community articles for more information: https://www.servicenow.com/community/common-service-data-model/itsm-impacted-services-sample-executi...  and https://www.servicenow.com/community/common-service-data-model/refresh-impacted-services-improved-ou...
  11. There’s no code support for traversing CSDM relationships. The baseline instance provides no support for traversing the relationships between CSDM tables – for example, getting a list of Application Services linked to a given Service Offering, or a list of CIs that could be related to a given Business Application. In my opinion this is a significant gap that I’d ask ServiceNow to address. Until then, I have created a blog post with a sample script include for traversing relationships: https://www.servicenow.com/community/developer-blog/csdm-script-include-to-traverse-ci-relationships....
  12. Check SLAs. If your SLA Definition conditions reference the custom service structure, you will find on migration that Task SLA data gets “frozen”, and you will need to run SLA Repair over open records during migration. The baseline SLARepair() script include will perform this, but be aware that it may take a long time to finish processing repairs, especially when looped. I also found that it was not always 100% successful, nor did it return or permit logging of status when called from a scheduled job – yet it always ran successfully when the Repair SLAs related link was clicked on an Incident.
  13. Install the free CMDB and CSDM Data Foundations Dashboards Store application. The CSDM Data Foundation dashboard is especially important for staged maturity model implementations of CSDM, as it provides clear indicators and analytics for each maturity level, along with an overall quality measurement of your CSDM implementation.
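To make the Impacted Services calculation in tip 9 concrete, the sketch below models the de-duplication logic in plain JavaScript, outside the platform. The table data and names are illustrative stand-ins for Service Configuration Item Association [svc_ci_assoc] rows, not real Glide API calls:

```javascript
// Simplified, platform-free model of the Impacted Services refresh (tip 9).
// svcCiAssoc mimics rows of Service Configuration Item Association
// [svc_ci_assoc]: each row links a CI to an Application Service.
// All CI and service names below are illustrative.
function getImpactedServices(affectedCis, svcCiAssoc) {
  var impacted = {};
  affectedCis.forEach(function (ci) {
    svcCiAssoc
      .filter(function (row) { return row.ci === ci; })
      .forEach(function (row) { impacted[row.service] = true; }); // de-duplicate
  });
  return Object.keys(impacted).sort();
}

// Example: two Affected CIs sharing one Application Service.
var assoc = [
  { ci: 'linux_srv_01', service: 'HR Portal App Service' },
  { ci: 'linux_srv_02', service: 'HR Portal App Service' },
  { ci: 'linux_srv_02', service: 'Payroll App Service' }
];
console.log(getImpactedServices(['linux_srv_01', 'linux_srv_02'], assoc));
// → [ 'HR Portal App Service', 'Payroll App Service' ]
```

In the platform the same effect comes from GlideRecord queries against [svc_ci_assoc]; the key point is that each Affected CI may map to many services, and the result must be de-duplicated before populating [task_cmdb_ci_service].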
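To illustrate the kind of traversal tip 11 describes, this sketch walks a parent/child relationship table breadth-first. It is a simplified stand-in for querying CI Relationship [cmdb_rel_ci] records, not the actual script include from the blog post, and all relationship data is illustrative:

```javascript
// Simplified model of traversing CSDM relationships (tip 11).
// rels mimics CI Relationship [cmdb_rel_ci] rows as parent -> child pairs.
// A breadth-first walk returns every record reachable downstream of startCi.
function traverseDownstream(startCi, rels) {
  var visited = {};
  var queue = [startCi];
  var found = [];
  while (queue.length) {
    var current = queue.shift();
    rels
      .filter(function (r) { return r.parent === current; })
      .forEach(function (r) {
        if (!visited[r.child]) {       // avoid cycles and duplicates
          visited[r.child] = true;
          found.push(r.child);
          queue.push(r.child);
        }
      });
  }
  return found;
}

// Example: Service Offering -> Application Service -> server CIs.
var rels = [
  { parent: 'Email Offering', child: 'Email App Service' },
  { parent: 'Email App Service', child: 'mail_srv_01' },
  { parent: 'Email App Service', child: 'mail_srv_02' }
];
console.log(traverseDownstream('Email Offering', rels));
// → [ 'Email App Service', 'mail_srv_01', 'mail_srv_02' ]
```

The visited map matters: CMDB relationship data can contain cycles, so any real traversal over [cmdb_rel_ci] needs the same guard to terminate.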

 

Key Takeaways

  1. Understand the customer environment when considering the dependencies and solutions required for a successful CSDM implementation.
  2. Plan for a significant amount of organisational change, especially within IT Service Management. Educate and guide stakeholders through the program and reinforce the message throughout. This is not just comms and training!
  3. You may be able to follow the maturity model closely, but equally be flexible enough to combine or iterate over stages.
  4. Tackle migration top-down, by defining the Service Portfolio (if in scope), and bottom-up, by understanding CMDB health and maturity, where applications are mapped, and where you will source or build your inventory of Business Applications.
  5. Understand which parts of CSDM are either unsupported in the base platform or require development to function correctly.
  6. Do not underestimate the amount of work and rework required to achieve success – in a way it’s a mini-platform implementation in its own right, requiring the same care, understanding of strategy, stakeholders, and organisational and technical change, and, practically speaking, a realistic estimate of time and resource.

 

Useful Links

Common Service Data Model forum (ServiceNow Community)

20 steps to align to CSDM (ServiceNow Community)

CSDM: How to get there? (ServiceNow Community)

Application Services: How to use them? (ServiceNow Community)

How to configure incident management to align with CSDM; a leading practice guide (ServiceNow Community)

CSDM Data Model Examples (NowCreate)

CSDM Data Modeling Workbook (NowCreate)

Data Foundations – CSDM, CMDB and Service Graph (ServiceNow Community YouTube Playlist)
