MSP’s Guide to Data Classification

If you have only a few, relatively small clients, data classification is easy: you need to back up everything, be ready to recover everything, and keep all data secure. However, when you manage dozens of clients with hundreds of devices, it becomes much harder to apply the same backup, recovery, and security policy to all the data your end users work with.
So, at some point, the data needs to be classified. In this guide, we review popular classification techniques from several standpoints, including general security, backup, disaster recovery, and more.

Data Classification

First of all, you need to create a general classification for all the data that your managed services business handles, on the basis of its:

  • Sensitivity. Sensitive data, including personal, financial, legal, and healthcare records, to name a few examples, typically falls under some sort of legislation, which you, as a data processor, should comply with. Even if it does not fall under official legislation, its loss can still be something your client might sue you over. Hence, you should define which data is sensitive and which is not, on a per-client basis. This will further help you to create granular security and backup policies.
  • Criticality to the organization. Not all data is equal. If one of your end users has lost access to his or her calendar, that might not be a huge deal (it will, nevertheless, be the reason for a hysterical call to your helpdesk). However, when you compare it to the loss of a production e-commerce database, it becomes clear which is more critical. So, your protection techniques and focus should be based on what's most critical.
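The two criteria above can be captured in a simple per-client record. The following is a minimal Python sketch; the class names, categories, and the encryption rule are illustrative assumptions, not part of any standard or product:

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    REGULATED = 3  # e.g., personal, financial, or healthcare records

class Criticality(Enum):
    LOW = 1   # loss is an inconvenience (e.g., a calendar)
    HIGH = 2  # loss stops the business (e.g., a production database)

@dataclass
class DataAsset:
    client: str
    asset: str
    sensitivity: Sensitivity
    criticality: Criticality

    def needs_encryption(self) -> bool:
        # Example rule only: regulated data is always encrypted.
        return self.sensitivity is Sensitivity.REGULATED

db = DataAsset("acme", "ecommerce-db", Sensitivity.REGULATED, Criticality.HIGH)
print(db.needs_encryption())  # True
```

A record like this, kept per client, is what the documentation tip below refers to: the policy decisions become auditable data rather than tribal knowledge.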

TIP: Don't forget to carefully document your data classification policies and implement them on a per-client basis.

Data Security

Once you have classified the data, you should safeguard it on the basis of your newly created policies. Here are the steps to create a robust data security framework:

  • Security audit. First of all, you need to perform a security audit to define the current state of your client's infrastructure. You should focus on both the physical security of their premises in order to define, for example, user access policies or fire protection, and virtual security, including the analysis of the current state of their cyber-protection. The latter should include a network audit, patch checks, antivirus and firewall availability, etc.
  • Encryption considerations. You should define encryption policies on all levels, from BitLocker encryption for HDDs/SSDs that hold critical information up to file-level encryption during backup processes.

Further reading: Backup Encryption Options Demystified

  • Data movement. Data protection is useless unless you fully understand how that data can move within your client's organization. So, you should analyze data access rights, check shared devices and cloud drives, and create a policy for secure data movement. If, for example, one of the users wants to have access to data from their home office, they should not be able to just copy that data onto a flash drive and take it home. Consider using a VPN to allow users to access the required data.
  • Create user groups. You need to restrict the availability of data by means of user groups. For example, no one should be able to access financial data, apart from the accountants and top management. In turn, system information, including passwords, log-ins, and network maps, should only be available to system administrators and your tech team.
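The group-based restrictions described above can be sketched as a simple allow-list check; the group names and data categories below are hypothetical examples, not recommendations:

```python
# Hypothetical mapping of user groups to the data categories they may access.
ACCESS = {
    "accountants": {"financial"},
    "management":  {"financial", "hr"},
    "sysadmins":   {"system", "credentials"},
    "everyone":    {"public"},
}

def can_access(group: str, category: str) -> bool:
    """Deny by default: unknown groups get access to nothing."""
    return category in ACCESS.get(group, set())

print(can_access("accountants", "financial"))  # True
print(can_access("everyone", "financial"))     # False
```

In practice this logic lives in your directory service (Active Directory groups, file-share ACLs), but the deny-by-default shape is the same.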

Further reading: Data Security Checklist


Data Backup

After you have classified the production data, it's time to create a solid backup framework, since it's the last line of defense that can help you avoid data breaches, loss, or downtime. Consider creating a data backup lifecycle, and a retention policy tied to that lifecycle. In other words, depending on your existing data classification, you should classify your backups. The most popular method of backup classification is based on how “hot” your data is:

  • Hot data. Data that you might need to recover on a daily basis, including the most recent backups of your production databases and other daily changes.
  • Cool data. Data that is slightly outdated, which you might need to recover only after a data loss incident; two-week-old data, for example.
  • Archive data. Your archives for compliance or audit purposes. Archival data is typically stored for years.
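One way to tie a retention policy to these tiers is a simple age-based mapping. Here is a minimal Python sketch; the thresholds are examples that each MSP would set per client, not fixed rules:

```python
# Example retention periods per tier (days); adjust per client and regulation.
RETENTION_DAYS = {
    "hot": 14,        # recent production backups, recovered often
    "cool": 90,       # older data, needed only after a loss incident
    "archive": 2555,  # roughly seven years, for compliance or audit
}

def tier_for(age_days: int) -> str:
    """Map a backup's age to a storage tier (thresholds are illustrative)."""
    if age_days <= 14:
        return "hot"
    if age_days <= 90:
        return "cool"
    return "archive"

print(tier_for(3))    # hot
print(tier_for(400))  # archive
```

Most cloud storage providers expose matching tiers (for example, Azure's hot, cool, and archive access tiers), so a mapping like this translates directly into storage costs.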

After you have defined the data backup classification, you can create the recovery time and recovery point objectives for your data. These objectives will be the basis of your disaster recovery plans.



RTO and RPO

Exactly how fast should you be able to recover the data, and how much data can you afford to lose during recovery? These are the questions that recovery time and recovery point objectives (RTO and RPO) answer. Define these objectives for each customer based on the backup classification you have already carried out. RTO and RPO will show you which disaster recovery solutions and techniques you need to implement. For example, if your client has an e-commerce database with thousands of records checked and changed every hour, and they cannot afford downtime, you might need to create a standby failover system. On the other hand, if the loss of access to an archival file server won't affect business-critical operations, you can take more time to rebuild it.
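As a worked example, you can flag clients whose current backup interval exceeds their agreed RPO, since such a schedule guarantees that a disaster can lose more data than the client accepted. All names and numbers below are hypothetical:

```python
def rpo_violations(clients: dict[str, tuple[float, float]]) -> list[str]:
    """Return clients whose backup interval exceeds their agreed RPO.

    `clients` maps a client name to (backup_interval_hours, rpo_hours).
    If backups run less often than the RPO allows, data loss beyond the
    agreed objective is possible, so the schedule must be tightened.
    """
    return [name for name, (interval, rpo) in clients.items() if interval > rpo]

clients = {
    "acme-ecommerce": (1, 0.25),  # hourly backups, 15-minute RPO: violation
    "acme-archive":   (24, 72),   # daily backups, 3-day RPO: fine
}
print(rpo_violations(clients))  # ['acme-ecommerce']
```

A check like this, run against your backup schedules, turns RPO from a document into something you can enforce.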

Disaster Recovery

Now that you have classified your data and understand how fast you should recover it, it's time to prepare a solid disaster recovery plan. This plan should cover the most likely downtime scenarios, including:

  • Ransomware and other malware attacks;
  • Hardware or software failure;
  • Loss of on-premises servers (for example, due to a fire);
  • Loss of internet connectivity.

Disaster recovery plans should be documented and tested on a regular basis.

Data Backup Integrity

You cannot be sure that your backups are recoverable until you test them properly. To do that, you should implement scheduled recovery checks. Depending on the solutions you use, these checks can be:

  • Manual. You manually recover files or system images to a sandbox environment. That's time-consuming but, unless your backup solution can automate these processes, it's your only option.
  • Automated. Some backup software is capable of doing automated checks. These typically apply to system images or virtual machines. On a schedule, your backup software will recover the given server or PC to a sandbox virtual environment and send you a notification that the recovery was successful. This eases the process of data backup integrity checks and allows you to schedule more tests for more clients.
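For file-level backups, a manual or scripted recovery check ultimately boils down to restoring the data and comparing it with a known-good copy. A minimal sketch, assuming you verify a restored file against its source by checksum:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash the file in chunks so large backups need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source: Path, restored: Path) -> bool:
    """A restore test passes only if the restored copy is byte-identical."""
    return sha256(source) == sha256(restored)
```

An automated variant would run this on a schedule over a sample of restored files and alert on any mismatch; image- and VM-level checks, as described above, are usually left to the backup software itself.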


Sooner or later, the amount of data your MSP manages will outgrow your ability to treat it all the same way, and you need to be ready for that. Create a well-documented, clear data classification structure and implement it for your most demanding clients first. This will ensure that their data is safe, their backups are recoverable, and your reputation remains intact.
