Service Matters

Resourcing the CMDB to Change the Way Your Work Gets Done: Step 3


By Chris Pope - 2014-07-14


I’ve developed this series to share techniques and best practices meant to change how you approach service management. In the first step of this series, I discussed how to effectively identify the problem you’re trying to solve, and step two focused on how to better manage the qualification of configuration items within the CMDB.


The next step in this series is to ensure adequate resourcing of data quality. Ongoing resourcing and maintenance is almost as important as, if not more important than, the initial gathering of data and its sources. Too often, I see customers start with a big push and all sorts of claims such as, “this will be 60% better than the last system” or, “this is so much better than last time,” only to find several weeks or months later, after the initial hype wears off, that they still have the same problems as before, just in a shiny new wrapper. The adrenaline and excitement that a new technology brings to a project can quickly deteriorate once the enormity of the problem is understood and it becomes clear that “we’ve always done it that way” will not work again!


But first we need to ask, why is data so important?

  • Data underpins all operational processes.
  • Data builds trust amongst users and service owners.
  • Data enables fact-based decisions.
  • Data establishes accountability.
  • During the heat of the battle, data can be relied upon.
  • Data is a single source of truth.


The more successful organizations I have worked with have established, from day one, the role of Data Owner. This role ensures there is accountability and ownership within the organization for the completeness and quality of data. Too much finger pointing and “that’s not my problem” can lead to splinter cells setting up their own trusted data sources, which, of course, results in a siloed approach and a reliance on those sources when push comes to shove. It doesn’t have to be this way!


A Data Owner is not necessarily accountable for everything related to a CI, but they are responsible for ensuring that what they manage, support, and approve is accurate and complete. They are likely the most knowledgeable about its operational and environmental aspects. If you were to ask end users, they would likely say this person was the owner. Step up and take a bow, Application Owner! Everyone has these people, perhaps not under the same title in every organization, but they do exist. Amazingly, this person also has a relationship with the Business Owner; too good to be true, surely.


The list can go on and on, but the most successful implementations I have seen and been fortunate to be part of have centered around several key resources:

  • Application Owner
  • Business Owner
  • Approved By
  • Supported By
  • Managed By (often the Application Owner, but not always)

Put them to work!

In an ideal world, the CMDB is filled with relevant, high-quality data that drives operational processes and decision-making. Let’s crash back to reality for a moment!


Implementing a consistent and well-structured process for data maintenance is a key enabler for the ongoing success of a CMDB. The flow below shows, at a high level, how data, rules, policies, and accountability can be brought together to drive data quality:

  • Certification policies are applied to known rules or data elements in relation to CIs.
  • Rule results are output as metrics to dashboards.
  • Dashboards are actionable, drilling through to the process steps that correct exceptions.
  • The rules engine outputs tasks to owners/groups and facilitates functional/hierarchical escalation and reporting.
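As a minimal sketch of the first two steps above (in Python, with hypothetical rule and CI structures; in practice a platform like ServiceNow expresses this declaratively rather than in code), the certification loop might look like:

```python
# Hypothetical sketch: apply certification rules to CIs and roll the
# results up into dashboard metrics. Rule names and CI fields here are
# illustrative assumptions, not a real CMDB schema.

def evaluate_cis(cis, rules):
    """Apply each rule to each CI; return per-rule pass/fail counts
    for dashboards and the list of exceptions to feed the task engine."""
    metrics = {name: {"pass": 0, "fail": 0} for name in rules}
    exceptions = []
    for ci in cis:
        for name, rule in rules.items():
            if rule(ci):
                metrics[name]["pass"] += 1
            else:
                metrics[name]["fail"] += 1
                exceptions.append({"ci": ci["name"], "rule": name})
    return metrics, exceptions

# Example rules: every CI must have an owner and a support group.
rules = {
    "has_owner": lambda ci: bool(ci.get("owned_by")),
    "has_support_group": lambda ci: bool(ci.get("supported_by")),
}

cis = [
    {"name": "srv-web-01", "owned_by": "app.owner", "supported_by": "unix-team"},
    {"name": "srv-db-02", "owned_by": None, "supported_by": "dba-team"},
]

metrics, exceptions = evaluate_cis(cis, rules)
print(metrics["has_owner"])  # {'pass': 1, 'fail': 1}
print(exceptions)            # [{'ci': 'srv-db-02', 'rule': 'has_owner'}]
```

The exceptions list is what the rules engine would turn into tasks for owners or groups; the metrics dict is what the dashboards would render.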


Consider the following use case:

A new server is on the network and is created as a CI record. The server does not conform to the policies required to be a managed CI and requires several remediation tasks to be completed before being considered a valid or in-policy CI.




The data update in the CMDB triggers a policy exception based on known rules:

  • Three activities are triggered:
    • Notification to the CI class owner
    • Task/action to the owner of the CI (if known) or the CI class owner
    • Metric reporting to dashboards
  • Dashboards report both functional and hierarchical metrics:
    • Functionally, showing the status of the group/CI class owner relative to its peer group
    • Hierarchically, showing the CI owner or class owner against peers (people), rolling upstream through the organizational structure
  • Task/action owners are given a set period of time to review and correct the policy exception
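That fan-out can be sketched as follows, assuming a simple in-memory task model; the owner-lookup fallback and review window are illustrative assumptions, not ServiceNow API calls:

```python
from datetime import date, timedelta

def on_policy_exception(ci, class_owner, review_days=5):
    """Fan a policy exception out into the three activities described
    above: a notification, a remediation task with a due date, and a
    metric event for the dashboards."""
    # Fall back to the CI class owner when the CI has no assigned owner.
    assignee = ci.get("owned_by") or class_owner
    notification = f"Policy exception on {ci['name']}: {ci['violation']}"
    task = {
        "ci": ci["name"],
        "assigned_to": assignee,
        "due": date.today() + timedelta(days=review_days),
    }
    metric = {"ci_class": ci["ci_class"], "assignee": assignee, "status": "open"}
    return notification, task, metric

# The new, out-of-policy server from the use case above.
ci = {"name": "srv-db-02", "ci_class": "server", "owned_by": None,
      "violation": "missing maintenance window"}
notification, task, metric = on_policy_exception(ci, class_owner="server-class-owner")
print(task["assigned_to"])  # server-class-owner
```

The due date on the task is what drives the functional/hierarchical escalation: once it passes without the exception being corrected, the task rolls upstream.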


The above flow is a really good way of managing by exception on a near-real-time basis, but it does not address the continued focus needed to keep data quality high.


Now consider this use case:

A policy is put in place whereby the CIO requires that all IT services and applications be certified as accurate and true on a periodic basis, in this case quarterly. At the start of each quarter, all services/applications are put into a state of “unconfirmed.” Service and application owners are required to re-certify the modeling and instances of their services/applications by reviewing data such as: naming, installed instances, support information, infrastructure dependencies, and application dependencies.




The Certification Policy (time based) triggers a re-certification policy/activity:

  • Three activities are triggered
    • Notification to CI owner(s)
    • Task/Action to owner of CI (if known) or CI class owner
    • Metric reporting dashboards
  • Dashboards report both functional and hierarchical metrics
    • Functionally, showing the status of the group/CI class owner relative to its peer group
    • Hierarchically, showing the CI owner or class owner against peers (people), rolling upstream through the organizational structure
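The quarterly reset and re-certification could be sketched like this; the service fields, state names, and review checklist are lifted from the use case above, but the functions themselves are hypothetical:

```python
def start_recertification(services):
    """At the start of a quarter, flip every service to 'unconfirmed'
    and emit one re-certification task per service/application owner."""
    tasks = []
    for svc in services:
        svc["certification"] = "unconfirmed"
        tasks.append({
            "service": svc["name"],
            "assigned_to": svc["owned_by"],
            # The data the owner must review before re-certifying.
            "review": ["naming", "installed instances", "support information",
                       "infrastructure dependencies", "application dependencies"],
        })
    return tasks

def certify(service):
    """Owner confirms the model and instances are accurate and true."""
    service["certification"] = "certified"

services = [{"name": "payroll", "owned_by": "hr.app.owner",
             "certification": "certified"}]
tasks = start_recertification(services)
print(services[0]["certification"])  # unconfirmed
certify(services[0])
print(services[0]["certification"])  # certified
```

Services still “unconfirmed” at the end of the period are exactly what the hierarchical dashboards surface, owner by owner, up the organizational structure.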

There are many ways to approach policy-based exceptions, such as by service or by individual CI class. These tend to be very technology-centric and can focus too close to the problem. I have used, and have seen used with great success, the following policy exceptions:

  • Orphans
    • CIs that do not have an assigned owner, supported by, or dependency mapped
  • Maintenance Windows
    • CIs that do not have an assigned maintenance window for when the service can be down or taken offline for upgrade/patching
  • Audit/Regulatory Standards
    • HIPAA, PHI, Security Profiles/Hardening
  • Security Standards
    • Local Admin, Ports Open, Locally Installed Services
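The first two exception categories above are easy to express as predicates over CI records; a minimal sketch, assuming hypothetical field names for owner, support group, dependencies, and maintenance window:

```python
def is_orphan(ci):
    """Orphan: no assigned owner, no support group, no mapped dependency."""
    return not (ci.get("owned_by") or ci.get("supported_by")
                or ci.get("dependencies"))

def missing_maintenance_window(ci):
    """No window defined for when the service can be down or taken
    offline for upgrade/patching."""
    return not ci.get("maintenance_window")

cis = [
    {"name": "srv-app-01", "owned_by": "app.owner",
     "maintenance_window": "Sun 02:00-04:00"},
    {"name": "srv-unknown-07"},  # discovered on the network, never claimed
]

orphans = [ci["name"] for ci in cis if is_orphan(ci)]
no_window = [ci["name"] for ci in cis if missing_maintenance_window(ci)]
print(orphans)    # ['srv-unknown-07']
print(no_window)  # ['srv-unknown-07']
```

Each predicate plugs straight into the rules engine described earlier: a True result raises the corresponding policy exception and its remediation task.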

By implementing good controls, standards, and processes that show the ‘bigger picture’ of data quality’s impact, it’s now possible to implement a CMDB with reliable data.


Federating ownership of the data is a key enabler and helps to overcome resource constraints that a lone Configuration Librarian role could never keep up with. As you implement your CMDB, or are just starting to work out what to do next, stop and think about the non-technical aspects and who can help you drive a successful implementation.


Stay tuned for Part 4 of this series: You Ain’t Got No Authority…
