Mention disaster recovery (DR) and you can almost hear the groans of IT administrators. They know it is a must-have, but they worry that their DR implementation would not work properly in a real emergency, and they dread the looming major testing exercise needed to gain assurance that it would.
Now factor in the added complexity of DR protection for virtual environments, including the cloud. Maybe it's time to take a step back and wonder: isn't there a better way?
I make no apology for writing a lot these days about recent start-ups. What drives them is that they can survive only by taking an innovative approach to a widespread problem. In this case I am looking at VirtualSharp and its dedicated DR solution.
Its flagship product, ReliableDR, provides fully automated and non-disruptive DR testing for business-critical applications - as well as actually guaranteeing successful DR recovery within agreed recovery time objectives (RTOs) - which it calls 'DR assurance'. Frankly, it's sad that this is not the norm nowadays, but I know the difficulties in achieving it. So, as a one-time techie, my reaction is to ask how it does this and, no surprise, it is not rocket science.
The software runs at a remote DR site or backup data centre as a virtual machine and provides orchestration using defined business rules. Through these it can automatically trigger a periodic recovery of, say, one isolated virtual environment - without stopping running applications or requiring any DR time window. These amount to simulated failovers that continually test parts of the system automatically and, during the process, recovery point objectives (RPOs) and RTOs are accurately measured. The business can then set a realistic 'guarantee' for a DR to be completed within the pre-defined RTO.
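To make the idea concrete, here is a minimal sketch of what one such simulated-failover test might look like in code. This is my own illustration of the general technique - boot an isolated replica, verify the application is usable, and measure the result against the agreed objectives - and the function and parameter names are hypothetical, not VirtualSharp's actual API:

```python
import time
from dataclasses import dataclass


@dataclass
class TestResult:
    """Outcome of one non-disruptive DR test for a single application."""
    app: str
    measured_rpo_s: float  # age of the newest recoverable copy (data-loss window)
    measured_rto_s: float  # time taken to prove the replica usable
    rpo_met: bool
    rto_met: bool


def run_dr_test(app, snapshot_age_s, boot_and_verify,
                rpo_target_s, rto_target_s):
    """Simulate a failover in an isolated environment, without touching production.

    boot_and_verify is a callable that brings up the replica in isolation
    and runs application-level health checks; it raises on failure.
    """
    start = time.monotonic()
    boot_and_verify()
    measured_rto = time.monotonic() - start
    measured_rpo = snapshot_age_s  # RPO = how stale the replicated data is
    return TestResult(app, measured_rpo, measured_rto,
                      measured_rpo <= rpo_target_s,
                      measured_rto <= rto_target_s)


# Example: a replica copy two minutes old, against a five-minute RPO
# and a one-hour RTO (boot_and_verify stubbed out for illustration).
result = run_dr_test("erp", snapshot_age_s=120,
                     boot_and_verify=lambda: None,
                     rpo_target_s=300, rto_target_s=3600)
print(result.rpo_met, result.rto_met)
```

An orchestrator would run something like this on a schedule for each protected application, recording the measured figures over time so the business 'guarantee' rests on evidence rather than hope.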
Conceptually, that is straightforward and logical. It does not use CDP or fault-tolerant approaches; instead, snapshots mean there is always a full copy at the remote site. The important thing is that IT administrators can forget the complexity under the covers and the prospect of a massive DR test - and so sleep more easily. That is surely the way DR ought, in general, to be done nowadays.
The largest banks, which invest millions in DR, may not generally be interested, but other medium and large enterprises should be. VirtualSharp CEO Carlos Escapa told me of some very major implementations as well as interest from SMBs; release 3.0 is web-oriented and can be placed in a portal to run transparently. So resellers also have an opportunity to offer this as a cloud service.
Escapa described the VirtualSharp process as "driving metrics into a woolly process" (something of an indictment of the hit-and-miss DR procedures often found). Its single-minded DR focus means it can add functionality at a high rate, making it harder for potential competitors to catch up. Nevertheless, there are the usual challenges: organisations' already-installed products (perhaps tied to a storage backup and recovery package), the lack of credibility of a start-up, and the question "will the company be around in five years?" I can only say, go and take a look and judge for yourself.