
Discovery and dependency mapping is a topic surrounded by misconceptions. Automation, for instance, is a common theme when discussing application discovery, but are all solutions the same? Let's begin by defining the objective.

 

With the growing complexity of IT systems, virtualization, cloud, and the like, there's a clear need for accurate, up-to-date data. Whether the goal is change management, impact analysis, handling of critical events, allocation of resources, or all of these, you need to discover all Configuration Items (CIs), identify their interconnections, and understand the link between the underlying IT infrastructure and business services.
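To make that objective concrete, here is a minimal sketch of the three kinds of objects involved: CIs, the dependency links between them, and the business service that groups them. The class and field names are illustrative assumptions, not drawn from any particular product.

```python
# A minimal sketch of the objects involved in dependency mapping:
# Configuration Items (CIs), the dependencies between them, and the
# business service they support. Names are illustrative only.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class CI:
    name: str       # e.g. "web-01"
    ci_type: str    # e.g. "web_server", "database", "load_balancer"


@dataclass
class BusinessService:
    name: str
    cis: set = field(default_factory=set)            # CIs this service depends on
    dependencies: set = field(default_factory=set)   # (source CI, target CI) edges

    def add_dependency(self, source: CI, target: CI) -> None:
        self.cis.update({source, target})
        self.dependencies.add((source, target))


# Usage: an online-banking service backed by a web server and a database.
banking = BusinessService("online-banking")
banking.add_dependency(CI("web-01", "web_server"), CI("db-01", "database"))
print(banking.cis)
```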

 

So what are your alternatives for discovery and dependency mapping?


The common method: bottom-up


Considering the scope of IT systems, it is obviously impractical to use manual discovery methods. Automation, therefore, is a must.

 

Indeed, most discovery and dependency mapping solutions automatically scan the network, discover all CIs and build a large CMDB repository. We call this a bottom-up approach.
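For illustration, here is a minimal sketch of what such a bottom-up sweep might look like, assuming plain TCP port probing as the discovery mechanism (real products combine many protocols, such as SNMP, WMI, and SSH). Every responding host and port lands in one flat repository, with no service context attached.

```python
# A minimal sketch of a bottom-up scan: sweep an address range, record
# every responding host and open port as a CI in one flat repository.
# Note that no business-service context is captured along the way.
import socket
from ipaddress import ip_network

COMMON_PORTS = [22, 80, 443, 1433, 3306]   # illustrative subset


def probe(host: str, port: int, timeout: float = 0.2) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def bottom_up_scan(subnet: str) -> list[dict]:
    repository = []
    for host in ip_network(subnet).hosts():
        for port in COMMON_PORTS:
            if probe(str(host), port):
                repository.append({"host": str(host), "port": port})
    return repository


# Even a single /24 can yield thousands of rows, none of them labeled
# with the business service they serve.
cmdb = bottom_up_scan("10.0.0.0/30")   # tiny range for demonstration
print(cmdb)
```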

 

What's the problem with bottom-up discovery and mapping? The first is TMI: too much information. The final result of such a discovery is a huge repository with tens of thousands of elements, but no meaningful categorization that relates the data to business services. Such a repository will include many irrelevant components (e.g., a virtual server that was temporarily used for testing a year ago). Each CI also carries a large amount of associated data, most of which is unnecessary and detracts from your ability to sift out the meaningful data.


The second issue is keeping the repository current. According to a fairly recent Business Service Management survey, over half of organizations report 11 to 100 IT changes on a weekly basis. Bottom-up auto-discovery methods do not offer a path for automatically updating the data; instead, the system has to be re-scanned in order to remain accurate.

 

Probably the most problematic issue is that the bottom-up approach requires you to manually filter the huge repository and map CIs to business services. So, for instance, if an application server has many connections, you'll still have to decide which of the connections relate to a specific business service. Naturally, by the time you are done with such manual mapping, it will most probably no longer be accurate.

 

The bottom line is that in order to successfully map a service, you must actually know its structure, components and application map. So in a sense you really need to know the mapping to do the mapping… What's missing is a process that uses the business service context and enables the mapping to be done automatically.

 

Top-down discovery and dependency mapping

 

The key to automating the business service mapping process is to use some simple means to identify the service, and then derive from it all the components and structure. This is where the top-down discovery approach steps in, making the business service itself the anchor point of the discovery process.

 

The top-down approach uses the only possible business service identification key: the point at which the business service is consumed. The user provides an entry point for the business service, for example a URL for a web-based application, or an IP address, port, and server for a fat-client application. The discovery process then advances tier after tier to identify the IT components related to the business service. This results in a much smaller repository and a 100% focused service map that lets you immediately understand the link between a business service and its CIs.
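The sketch below illustrates this tier-by-tier walk. It assumes a helper, outbound_connections(), that reports the downstream components a CI talks to; in a real product this would come from netstat output, flow records, or agent data, but here it is a hard-coded table.

```python
# A minimal sketch of top-down discovery: a breadth-first walk, tier
# after tier, starting from the point where the service is consumed.
from collections import deque

# Hypothetical observed topology:
# entry point -> load balancer -> web farm -> app server -> database
TOPOLOGY = {
    "www.example.com:443": ["lb-01"],
    "lb-01": ["web-01", "web-02"],
    "web-01": ["app-01"],
    "web-02": ["app-01"],
    "app-01": ["db-01"],
    "db-01": [],
}


def outbound_connections(ci: str) -> list[str]:
    return TOPOLOGY.get(ci, [])


def top_down_discover(entry_point: str) -> dict:
    """Walk tier after tier from the service entry point; return only
    the CIs and edges that belong to this business service."""
    cis, edges = {entry_point}, set()
    queue = deque([entry_point])
    while queue:
        ci = queue.popleft()
        for downstream in outbound_connections(ci):
            edges.add((ci, downstream))
            if downstream not in cis:
                cis.add(downstream)
                queue.append(downstream)
    return {"cis": cis, "edges": edges}


service_map = top_down_discover("www.example.com:443")
print(service_map["cis"])   # only the six CIs behind this entry point
```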

 

Moreover, the process keeps the business service context and is therefore able to follow any dependencies that are part of the particular business service. Isolating only the relevant dependencies is also key to mapping the business service to the underlying network and storage components, which always serve multiple business services. This enables a truly holistic, cross-domain mapping and impact management solution that is simply not possible otherwise.
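Continuing the sketch above, here is how that context filtering might look on a shared component. The connection table observed on the load balancer is again hypothetical; only the edges whose endpoints belong to this service's map are kept.

```python
# A minimal sketch of context filtering on a shared load balancer:
# of all connections observed on it, keep only the edges that belong
# to the service map produced by the top-down walk above.
observed_on_lb = [
    ("lb-01", "web-01"),      # part of this service
    ("lb-01", "web-02"),      # part of this service
    ("lb-01", "crm-web-07"),  # a different business service, ignored
]

service_cis = {"www.example.com:443", "lb-01", "web-01", "web-02",
               "app-01", "db-01"}

relevant = [(src, dst) for src, dst in observed_on_lb
            if src in service_cis and dst in service_cis]
print(relevant)   # [('lb-01', 'web-01'), ('lb-01', 'web-02')]
```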

 

To further complement the top-down approach, the ServiceWatch automated discovery process also builds an abstraction of the business service structure that is independent of the actual real-time model. We call this abstraction a skeleton. The skeleton is used to define business service policies and rules on the generic application structure, and it ensures that the actual business service structure is always up to date. For example, certain parts of the business service are more prone to change, such as the members of a web server farm behind a load balancer (these can change over time to adapt to shifting demand, especially in dynamic cloud environments). Such load balancers are therefore scanned more frequently than entities that are less likely to change, such as databases.
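As a rough illustration of the idea (the tier names and intervals below are assumptions, not ServiceWatch's actual model), a skeleton can be pictured as a per-tier description of the service, each tier with its own rescan interval:

```python
# A minimal sketch of a skeleton: a generic, per-tier description of
# the service structure, where volatile tiers get short rescan
# intervals and stable tiers get long ones.
import time

SKELETON = {
    # tier             rescan interval (seconds)
    "load_balancer":   300,     # farm members change often, scan frequently
    "web_server":      900,
    "app_server":      3600,
    "database":        86400,   # rarely changes, scan once a day
}

last_scanned = {tier: 0.0 for tier in SKELETON}


def tiers_due_for_rescan(now: float) -> list[str]:
    """Return the tiers whose rescan interval has elapsed."""
    return [tier for tier, interval in SKELETON.items()
            if now - last_scanned[tier] >= interval]


print(tiers_due_for_rescan(time.time()))   # initially, every tier is due
```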

 

The bottom line is a much more effective discovery process, yielding a focused dependency map that is always up to date. Want to see this in action? Take a look at our ServiceWatch Demo.