Virtualization has made our lives as IT Pros easier in many ways. Some of us remember the days before virtualization really took off, when it seemed impossible to move a running workload from one piece of physical hardware to another with no outage or downtime. As much as virtualization has improved our lives, though, there are still challenges, and best practices that need to be followed. Virtualization isn’t a perfect solution, not least because it still depends on physical hardware that is prone to failure. For that reason, among many others, it’s still important to conduct proper backup operations within our virtual environments. Let’s cover some of the main points in a little more detail.
1. Utilize an Offsite Location
Virtualization doesn’t magically remove the usual headaches that backup administrators need to worry about. Just as hardware can fail, entire production locations can go dark as well, for any number of reasons. Perhaps your production SAN is offline. Maybe a fire damaged your datacenter, or, god forbid, a natural disaster wipes the entire site off the map. Whatever the reason, there are certain rare situations where your backup data can be destroyed along with the production data. This is where offsite backups come into play. Having that location in a different geographic region is important when you need to recover from these rare events. I say rare, but they have been known to happen from time to time. Don’t tempt fate.
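The offsite copy described above only counts if the replicated data is actually intact. As a minimal sketch, the routine below copies backup files to a second location and verifies each copy with a checksum. The `.bak` naming, the directory paths, and the `replicate_offsite` helper are all illustrative assumptions; in a real deployment the target would be a remote or cloud endpoint rather than a local directory.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large backup files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate_offsite(local_dir: Path, offsite_dir: Path) -> list[str]:
    """Copy each backup file to the offsite target and verify the copy
    by comparing source and destination checksums."""
    copied = []
    offsite_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(local_dir.glob("*.bak")):  # '*.bak' is an assumed naming scheme
        dst = offsite_dir / src.name
        shutil.copy2(src, dst)
        if sha256(src) != sha256(dst):
            raise IOError(f"checksum mismatch for {src.name}")
        copied.append(src.name)
    return copied
```

The checksum step matters: a copy that completed without an OS error can still be silently corrupt, and you want to learn that now, not during a disaster recovery.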
2. Don’t Use the Same Storage for Backups and Production Data
In our new world of cheap storage and centralized SANs and NASs, it’s easy to spin up a couple of virtual machines (VMs) and provision workloads on them. This goes for backup appliances as well. As more backup vendors move toward virtualized appliances, more people are deploying the backup function onto shared storage. The one thing a lot of administrators forget in this situation, though, is the location of their backup data. Many administrators will place the backup workload on the default storage location with their other VMs and think nothing of it. The problem is that you have now created a single point of failure for your company’s data: if that storage array fails, you’ve not only lost your production data, you’ve lost the backups needed to recover as well. Not a good situation. To avoid it, be sure to put your backup data on a separate storage platform that does not depend on your production storage.
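A quick sanity check along these lines is to confirm that the backup path does not sit on the same device as production storage. This is only a rough sketch: `os.stat().st_dev` distinguishes devices as the operating system sees them, so two LUNs carved from the same physical array can still pass this check even though they share a single point of failure. Treat it as a first-line warning, not proof of separation.

```python
import os

def on_same_device(path_a: str, path_b: str) -> bool:
    """True when both paths resolve to the same OS-level storage device.

    Caveat: separate LUNs or NFS exports from one array report different
    device IDs here but still share the underlying hardware.
    """
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev

# Example (hypothetical mount points):
# if on_same_device("/vmfs/production", "/vmfs/backups"):
#     print("WARNING: backups share a device with production data")
```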
3. Snapshots are NOT a Replacement for Backups
I’ve seen this many times: snapshot technology on virtualization platforms being used by administrators for backup-type purposes. Sure, a snapshot looks like a type of backup, but the similarities stop after the first glance. Snapshots were intended for short-term recovery in update and test/dev scenarios. They were not intended for backups. The big issue with using snapshots for backup is the complete lack of backup retention. Most organizations require that their data be recoverable back to a certain date, and snapshots simply do not provide that functionality. Additionally, because of the way snapshots function, it can be hazardous to leave them lingering for any length of time. Once a snapshot is created, all new writes for that virtual machine are sent to the snapshot file, so the storage consumed by the VM is no longer limited by the size of the virtual disk attached to it. A snapshot file can grow and grow and grow, until the storage location is out of free space, which will bring down your production virtual machines and cause an outage. Using snapshots for backup may seem like a good idea, but it’s not. Go get a backup application that was designed for that use.
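Because forgotten snapshots grow until they fill the datastore, a scheduled scan that flags old delta files is cheap insurance. The sketch below assumes VMware-style `*-delta.vmdk` naming, which is an illustrative assumption; adjust the pattern for your hypervisor (Hyper-V checkpoints use `.avhdx` files, for example).

```python
import time
from pathlib import Path

def stale_snapshots(datastore: Path, max_age_days: int = 3) -> list[tuple[str, int]]:
    """Return (name, size_in_bytes) for snapshot delta files older than
    the allowed age, so they can be reviewed and consolidated.

    Assumes VMware-style '*-delta.vmdk' naming; change the glob pattern
    to match your platform's snapshot files.
    """
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for delta in sorted(datastore.rglob("*-delta.vmdk")):
        st = delta.stat()
        if st.st_mtime < cutoff:  # last modified before the cutoff
            stale.append((delta.name, st.st_size))
    return stale
```

Run from a nightly cron job or scheduled task, a report like this catches the snapshot someone took before a patch window and never deleted.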
4. Test, Test, Test……
As we stated earlier, virtualization is not a magic elixir that solves every problem. Just as backups are still needed in virtualized environments, so is the need to test those backups on a regular basis. You don’t want to be in a situation where you need to recover from a backup only to find out that it is corrupt and unusable. Testing backups used to be a manual and arduous process, but that is no longer the case. As virtualization has continued to software-define the various functions of our datacenters, testing backups has never been easier. Most modern backup applications can perform a scheduled restore to a hypervisor and then notify you of the results. All that’s needed is to set up the schedule. This is one simple test that can give you peace of mind.
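Even without a backup product’s built-in verification, the same idea can be scripted: restore to scratch space and compare checksums recorded at backup time. The tar archive format and the `verify_backup` helper below are illustrative stand-ins for whatever your backup application actually produces; the point is the restore-then-compare pattern, not the specific tooling.

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path

def verify_backup(archive: Path, expected: dict[str, str]) -> bool:
    """Restore a tar archive to a throwaway directory and compare each
    restored file's SHA-256 against the checksum recorded at backup time."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive) as tar:
            tar.extractall(scratch)
        for name, digest in expected.items():
            restored = Path(scratch) / name
            if not restored.exists():
                return False  # file missing from the restore
            if hashlib.sha256(restored.read_bytes()).hexdigest() != digest:
                return False  # file restored but corrupt
    return True
```

A restore that merely completes without errors proves little; only comparing the restored bytes against known-good checksums tells you the backup is actually usable.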
We’ve only scratched the surface of virtual machine backup best practices, but at this point you should have a general idea of some of the pitfalls. If you’re interested in further reading on this subject, Altaro Software has a fantastic whitepaper that covers it in MUCH greater detail. You can find the whitepaper on 10 Best Practices for Virtual Backups here. Remember, providing production computing is only part of the job; recovering all that information in a bad situation is a very important part as well. Following these best practices will ensure that you’re able to save the day when that situation arises.
Thanks for reading!