According to the National Archives and Records Administration in Washington, D.C., 93% of companies that experienced a data loss resulting in downtime of over 10 days filed for bankruptcy within a year of the disaster. Half of these did so immediately.
At the same time, according to the National Cyber Security Alliance and the Richmond House Group, 20% of small and medium-sized businesses will suffer a disaster leading to data loss or will be hacked. In plain English: your data is vitally important and is in constant danger of being stolen or lost.
And yet small business owners either fail to recognize the importance of data backups or don’t put the necessary effort into creating a robust backup strategy and recovery plan.
In this guide, we will define three essential steps to create a refined, flexible, and efficient data backup and recovery strategy.
Step 1. Define the Tolerable Downtime Per Resource
Not all data is equally important for your company. Thus, you don't need to create the same rules for backup and recovery plans for all types of data, resources, and workloads. First, you need to systematize the data, and here's how you do it:
- Define the mission-critical data. A single lost desktop and the loss of the production database are two different disasters and should be treated differently. First, group your data into several categories, from the most important to the least. How do you identify the most important data? Ask what happens if a given resource is lost: if many other resources, data flows, and employees would be affected, that resource is mission-critical.
- Calculate RTO/RPO. Once you have divided the data into categories, define the recovery time objective (RTO) and recovery point objective (RPO) for each one. The RTO is the maximum tolerable period of time the organization can live without the data; the RPO is the maximum tolerable amount of data, measured as a window of time, that can be lost due to downtime. The RTO/RPO numbers will allow you to choose the right backup strategy and recovery solution.
- Calculate the cost of downtime. If your business cannot operate as a result of a disaster, you are losing money. The simplest estimate is the average revenue your business would have generated during the downtime period, plus the capex and opex you continue to incur while it lasts; the sum is your loss. You don’t strictly need to calculate the cost of downtime; however, it will help you convince the business owners that investing in backup and recovery is essential for business continuity.
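The steps above can be sketched in a few lines of Python. Everything here is illustrative: the resource names, tier assignments, RTO/RPO values, and dollar figures are hypothetical examples, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    tier: int          # 1 = mission-critical ... 3 = least important
    rto_hours: float   # max tolerable time to restore the service
    rpo_hours: float   # max tolerable window of lost data

def downtime_cost(hourly_revenue: float, hourly_opex: float, hours_down: float) -> float:
    """Revenue not generated plus expenses that keep accruing during downtime."""
    return (hourly_revenue + hourly_opex) * hours_down

resources = [
    Resource("production-database", tier=1, rto_hours=1, rpo_hours=0.25),
    Resource("file-server", tier=2, rto_hours=8, rpo_hours=24),
    Resource("employee-desktop", tier=3, rto_hours=72, rpo_hours=24),
]

# Plan recovery for mission-critical resources first.
for r in sorted(resources, key=lambda r: r.tier):
    print(f"{r.name}: RTO {r.rto_hours} h, RPO {r.rpo_hours} h")

# E.g. a 10-hour outage at $500/h revenue and $200/h operating costs:
print(downtime_cost(hourly_revenue=500, hourly_opex=200, hours_down=10))  # 7000.0
```

The point of the sketch is the ordering: tiering drives everything downstream, so the RTO/RPO you assign per tier, not per machine, is what you will later match against storage and software choices.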
Further reading: Follow the Data to Build Your MSP Strategy
Step 2. Define the Solution
Now that you know your recovery time and recovery point objectives and have divided your workloads into categories, you can find the solution that fits your needs.
First of all, you need to define the storage medium that will be used together with your backup and recovery software. There are two types of storage: local and cloud.
- Local storage. Local backup storage solutions vary from hard drives and fully replicated servers to tape drives that hold your archives. They are faster to back up to and recover from, but require a larger upfront investment. However, for data and workload replication with the lowest possible recovery time and point objectives, there is no better option than local storage. Cloud storage and computing services, which we will discuss a bit later on, are still not capable of supporting replication of production databases under high I/O load, despite their impressive evolution over the last few years.
- Cloud storage. Cloud solutions provide highly scalable, safe, and price-efficient storage for various types of data. You don’t need to pay upfront to start storing data in the cloud. The most popular storage solutions are paid as you go and offer several price tiers for different types of data. Cloud storage is ideal for storing most types of data and archives, and cannot be affected by local natural disasters. Even if your on-premises storage is physically compromised, burned down, or flooded, three or more copies of your data in the cloud remain replicated across several fortress-like datacenters.
Further reading: Compare AWS, Microsoft Azure and Google Cloud for Backup
Backup and Recovery Tiers
Previously we mentioned that data differs in terms of storage classes. This difference is based on the purpose of storage.
- Hot storage is built for fast recovery. Typically this is your production data, relevant to the current point in time. It might be stored for up to one month.
- Archive storage is appropriate for data that is kept for historical and compliance purposes and will therefore be stored for years. Above all, this storage should be cheap and reliable over long periods of time. Among the most popular archive storage solutions are tape drives for local storage and cold cloud storage tiers, such as Amazon Glacier.
The biggest cloud storage providers, AWS and Microsoft Azure, have created a more granular storage structure with several intermediate tiers between the hot and the archive storage classes. The “colder” your data is – in other words, the less often you need to recover it – the cheaper it will be to store, and the more expensive to recover.
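The tiering logic above can be captured in a small helper. This is a minimal sketch: the tier names are modeled loosely on common cloud storage classes, and the day thresholds are assumptions for illustration, not any provider's actual cutoffs.

```python
def pick_storage_tier(days_since_last_access: int) -> str:
    """Map the 'temperature' of data to a storage tier.

    Thresholds are hypothetical; real providers define their own
    minimum storage durations and retrieval fees per class.
    """
    if days_since_last_access <= 30:
        return "hot"      # frequent recovery; highest storage cost, cheap retrieval
    if days_since_last_access <= 90:
        return "cool"     # intermediate tier between hot and archive
    return "archive"      # years-long retention; cheap to store, expensive to recover

print(pick_storage_tier(5))    # hot
print(pick_storage_tier(60))   # cool
print(pick_storage_tier(400))  # archive
```

The design choice to note: tier selection should be driven by how often you expect to *recover* the data, because in the colder tiers the retrieval cost, not the storage cost, dominates.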
Further reading: Amazon S3, MS Azure and Google Cloud Storage Pricing Comparison
Backup and Recovery Solutions
Now you need to choose the exact backup and recovery software. It should tick all the boxes from your previous estimations and decisions. It should also be flexible and support different backup and recovery types:
- File-level backup and recovery. The ability to back up and recover single files or folders. This is helpful when you lose a file server or a single desktop with the user’s files.
- Image backup and recovery. Image backup means that you copy the whole endpoint with the operating system, settings, and files. This is especially helpful for servers and will ease recovery if the production server is lost.
Further reading: How to Migrate to a New Managed Backup Solution
Step 3. Define the Per-Resource Backup Retention Policy
Once you have categorized your data into groups, defined the RTO/RPO per group, and found the necessary software and hardware solutions, you need to mix everything into a granular backup and recovery plan that will work across the organization. For each group of resources, you need to find a balance between the type of storage, the type of backup, and the time you will store your data, which is known as the retention policy.
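Such a per-group policy can be expressed as plain configuration data. This is a minimal sketch, assuming three hypothetical resource groups; the backup types, storage destinations, and retention periods are illustrative examples, not recommended values.

```python
# Hypothetical per-group retention policy: each group pairs a backup type,
# a storage destination, and a retention period.
retention_policy = {
    "mission-critical": {"backup": "image",      "storage": "local + cloud hot", "keep_days": 30},
    "important":        {"backup": "file-level", "storage": "cloud cool",        "keep_days": 90},
    "archive":          {"backup": "file-level", "storage": "cloud archive",     "keep_days": 365 * 7},
}

def keep_backup(group: str, age_days: int) -> bool:
    """Decide whether a backup of the given age should still be retained."""
    return age_days <= retention_policy[group]["keep_days"]

print(keep_backup("mission-critical", 10))   # True
print(keep_backup("important", 120))         # False
```

Writing the policy down in this declarative form also doubles as the documentation recommended below: anyone on the IT staff can read the table and know what is kept, where, and for how long.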
Document your backup strategy and recovery routines, so that you can quickly determine which data is required for recovery, or hand that information to the IT staff responsible for recovery.
Further reading: Backup Policy Best Practices
Building an Efficient Backup Strategy with MSP360 Managed Backup
The MSP360 Managed Backup Service was built to fulfill the backup and recovery needs of IT professionals, companies, and managed IT providers.
- The solution supports the most popular operating systems, including Windows Desktop and Windows Server, several Linux distributions, and macOS. It is capable of file-level, image-based, virtual machine, MS SQL, and MS Exchange database backup and recovery.
- MSP360 Managed Backup supports local storage destinations and the most popular cloud storage solutions, including Amazon S3, Microsoft Azure Blob Storage, Google Cloud Storage, Wasabi, and Backblaze B2.
- You can manage all backup and recovery routines on your endpoints remotely, thanks to free Remote Desktop software and a flexible Managed Backup console that lets you control all aspects of your backup strategy from a single pane of glass.