tom_molfetto

According to a 2014 service availability benchmark survey by Continuity Software, the most common and effective strategy for ensuring service availability is virtualization HA (high availability), used by the majority of enterprise respondents.

From the ServiceWatch Team's perspective, this strategy rarely provides a comprehensive, ironclad solution to the service availability problem, which is probably why nearly half of those same respondents reported missing their service availability goals. And in a landscape where every hour of downtime can cost an organization anywhere from $100,000 to over $1M, it is becoming increasingly important to ensure that critical business services are available as often as possible.

High availability in virtualized environments works well for applications that are actually virtualized, such as a web server. The problem with relying on virtualization for high availability is that not every domain in the data center can run on a virtualized fabric. Storage arrays and routers, for example, are often physical boxes that cannot be virtualized, yet they can be key components of key business services.

Because of this, it is imperative to understand the holistic topology and have a complete picture of dependencies. In environments where some domains can be virtualized and others cannot, virtualization high availability cannot provide the safety net that many modern enterprises require to meet their service availability goals. So while this strategy may be sufficient for specific use cases, it is inherently inadequate as a general answer to the service availability problem.
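To make that coverage gap concrete, here is a minimal sketch, using an entirely hypothetical topology and component names, of walking a business service's dependency chain and flagging every component that virtualization HA cannot protect:

```python
# Minimal sketch (all component names hypothetical) of checking how much of
# a business service's dependency chain virtualization HA actually covers.
from collections import deque

# Each component: True if it runs on the virtualized (HA-protected) fabric.
virtualized = {
    "web-frontend": True,
    "app-server": True,
    "database": True,
    "storage-array": False,   # physical box: cannot be virtualized
    "core-router": False,     # physical box: cannot be virtualized
}

# Directed edges: component -> components it depends on.
depends_on = {
    "web-frontend": ["app-server", "core-router"],
    "app-server": ["database"],
    "database": ["storage-array"],
    "storage-array": [],
    "core-router": [],
}

def unprotected_dependencies(service: str) -> set:
    """Walk the full dependency chain of `service` and collect every
    component that virtualization HA cannot protect."""
    gaps, seen, queue = set(), {service}, deque([service])
    while queue:
        node = queue.popleft()
        if not virtualized[node]:
            gaps.add(node)
        for dep in depends_on[node]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return gaps

print(unprotected_dependencies("web-frontend"))
# {'storage-array', 'core-router'}
```

The walk makes the point of the paragraph above explicit: a service is only as available as its least-protected dependency, so one physical box in the chain means HA no longer covers the service end to end.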

Furthermore, just because an enterprise moves to a virtualized environment does not mean it has a complete understanding or map of its business services. It may know, for example, that the AppCenter VM is running on a particular physical host, but not that AppCenter is connected to a database on another physical host, and so on. That blind spot poses challenges both for proactive change control and for rapid problem isolation when issues do arise.
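A short sketch, again with purely hypothetical host and VM names drawn from the example above, of why a host-to-VM map alone is not a service map: the impact of a host failure only becomes visible once cross-host application dependencies are modeled as well.

```python
# Hypothetical host/VM inventory: what the hypervisor layer knows.
hosts = {
    "host-01": ["appcenter-vm"],
    "host-02": ["db-vm"],
}

# Cross-host application dependencies the hypervisor knows nothing about.
app_depends_on = {
    "appcenter-vm": ["db-vm"],
    "db-vm": [],
}

def impacted_by_host_failure(host: str) -> set:
    """Return every VM whose service is disrupted when `host` fails,
    including VMs on other hosts that depend on the failed VMs."""
    down = set(hosts[host])
    changed = True
    while changed:  # propagate outages along dependencies to a fixed point
        changed = False
        for vm, deps in app_depends_on.items():
            if vm not in down and any(d in down for d in deps):
                down.add(vm)
                changed = True
    return down

print(impacted_by_host_failure("host-02"))
# {'db-vm', 'appcenter-vm'} -- the failure on host-02 also takes out
# AppCenter on host-01, which a pure host/VM view would miss.
```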

In short, high availability through virtualization can solve very specific use cases within very specific landscapes, but it is not a true contender for solving the service availability problem. Virtualization HA leaves gaps because not all applications or domains can be virtualized, and because organizations that lean on it to close their service availability gap will still lack visibility into the dependent relationships between the components in their data center.

The result: proactive identification of risk will continue to be the top challenge facing the enterprise in ensuring service availability. Learn more about ServiceWatch.