How To Design a Cloud-Based Disaster Recovery Plan

KEY TAKEAWAYS

Companies need cloud-based disaster recovery plans — policies and procedures — to enable them to restore their critical data and applications in case of disaster. Here are five top tips to always keep in mind.

While cloud computing provides organizations with a number of benefits, including scalability and enhanced flexibility, not to mention cost savings, it also brings with it some very real risks, such as security breaches, data loss, and service disruptions.

The use of cloud services has skyrocketed due to their convenience and ubiquity, says Suda Srinivasan, vice president of strategy and marketing at Yugabyte, provider of YugabyteDB, an open-source distributed SQL database.

“The cloud offers unmatched capabilities for users to grow and scale,” he says.

“However, failures in the cloud are not uncommon. Cloud outages continue to rise due to an increase in extreme weather, geopolitical unrest, security breaches, operator errors, and more. So it’s imperative for organizations to be prepared to protect against and recover from these failures.”

Why You Need a Cloud-Based Disaster Recovery Plan

As such, companies need cloud-based disaster recovery plans — policies and procedures — to enable them to restore their critical data and applications in case of disaster.

“Leveraging the public cloud for a disaster recovery environment offers rapid recovery of applications and data,” says Colm Keegan, senior consultant, product marketing at Dell Technologies. “As workloads span remote offices, onsite data centers, and various public clouds, it’s critical to build simplicity, automation, redundancy, and resiliency into your disaster recovery plan.”



While the approach to creating a cloud-based recovery plan touches on a lot of broad backup principles, the difference in infrastructure, including who runs it, means organizations need to clear up many unknowns around security, scale, performance, and cost, says Adrian Moir, senior product management consultant and technology strategist at Quest Software, a provider of systems management, data protection, and security software.

“In short, businesses cannot build cloud-based recovery plans identical to their on-prem plans and expect them to succeed,” he says.

Here are some considerations companies must address to design effective cloud-based disaster recovery plans:

5. Implement Data Classification

Understanding what data and services they have and what’s most important to their business operations will be a fundamental resource to companies in a crisis, Moir says.

How businesses classify high-priority, sensitive data will depend on the scale of the incidents they’re preparing for as well as the level of performance they need to maintain, Moir notes.

“For example, a recovery plan in reaction to a data outage may include a plan to recover all data, while in a large-scale ransomware attack, it may make more sense to target only the data necessary to get the business working again.

“Similarly, depending on what the issue is, having a cloud-based recovery solution up and running, even if its performance is a bit degraded, is better than nothing.”

Dale Zabriskie, field chief information security officer at Cohesity, agrees, saying that when it comes to creating a cloud-based disaster recovery plan, organizations need to understand their business data, including what it is, where it is, and who has access to it.

Organizations should treat their data like people treat currency, where the larger denominations are a higher priority, he says. Data is no different: the most stringent controls should be around the data that matters most to the business.

“For that priority data, immutable backups and isolated, air-gapped vaulting are essential,” Zabriskie says.

“For secondary data, scanning for malware and vulnerabilities will also be crucial as threat actors may lurk in systems in the weeks after a successful cyberattack. Therefore, indexing and classification of data is critical to achieving these measures.”
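The tiering Zabriskie describes can be expressed as a small mapping from a data classification to the controls that tier demands. Here is a minimal Python sketch; the tier names and control labels are illustrative, not any product's taxonomy:

```python
from dataclasses import dataclass

# Illustrative tiers and controls; real classifications depend on the
# business, its regulators, and the scale of incident being planned for.
CONTROLS = {
    "critical": ["immutable_backup", "air_gapped_vault", "malware_scan"],
    "secondary": ["standard_backup", "malware_scan"],
    "low": ["standard_backup"],
}

@dataclass
class Dataset:
    name: str
    tier: str  # one of the CONTROLS keys

def required_controls(ds: Dataset) -> list[str]:
    """Look up the protection measures a dataset's tier demands."""
    return CONTROLS[ds.tier]

catalog = [Dataset("customer_records", "critical"),
           Dataset("web_logs", "secondary")]
for ds in catalog:
    print(f"{ds.name}: {', '.join(required_controls(ds))}")
```

Keeping the catalog as data, rather than hard-coding controls per system, makes the indexing and classification step auditable and easy to update as priorities shift.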

4. Consider Where Your Data Sits

It may seem obvious, but knowing the location of their priority data will help companies set clear expectations on recovery time and data accessibility, according to Moir. However, recovery time depends on more than location alone.

“It’s not just impacted by location but by the performance of underlying infrastructure and the complexity of the data environment, including whether you can recover datasets in parallel,” he says.

“Knowing what your cloud vendor’s storage restrictions are can help you ensure you’re spreading out your backup data in an affordable, reliable way for recovery.”
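Spreading backup data within a vendor's storage restrictions can be checked before a backup run rather than discovered during one. The following is a rough sketch of a greedy placement check; the per-target capacity and object sizes are made up for illustration:

```python
def place_backups(objects: dict[str, int], capacity_gb: int) -> list[list[str]]:
    """Greedy first-fit: assign each backup object (name -> size in GB)
    to the first storage target with room, opening a new target when
    none fits. Largest objects are placed first."""
    targets: list[list[str]] = []
    remaining: list[int] = []
    for name, size in sorted(objects.items(), key=lambda kv: -kv[1]):
        if size > capacity_gb:
            raise ValueError(f"{name} ({size} GB) exceeds the {capacity_gb} GB cap")
        for i, free in enumerate(remaining):
            if size <= free:
                targets[i].append(name)
                remaining[i] -= size
                break
        else:
            targets.append([name])
            remaining.append(capacity_gb - size)
    return targets

# Example: a 400 GB per-target cap forces the data across two targets.
plan = place_backups({"db_dump": 300, "file_share": 150, "mail": 120},
                     capacity_gb=400)
```

A real plan would also weigh per-target cost and recovery bandwidth, but even a simple check like this surfaces restriction violations before they block a restore.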

3. Know What Your Cloud Services Provider Will Handle

Organizations need to understand what the vendor will take care of for them. Moir says that will help ensure speedy access and recovery, and that they can customize their technology approach based on their organizations’ needs.

“For example, on the security side, if a data outage happened because of a compromised account, you’ll want to make sure that strict privileges and permissions are noted in your recovery plan,” he explains.

And in all cases, companies need to ensure they have immutable copies of their data, he says.

“If you can’t get that, then at least understand the quality of the data you get back so you can alert the business or customers to any potential degradation,” he says.
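One practical way to understand the quality of the data you get back is to fingerprint objects at backup time and compare after a restore. A minimal sketch, with hypothetical object names:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint, recorded when the backup is taken."""
    return hashlib.sha256(data).hexdigest()

def degraded_objects(recorded: dict[str, str],
                     restored: dict[str, bytes]) -> list[str]:
    """Names of restored objects that are missing a recorded digest or no
    longer match it. A non-empty result is the signal to alert the
    business or customers to potential degradation."""
    return [name for name, data in restored.items()
            if recorded.get(name) != digest(data)]

recorded = {"orders.csv": digest(b"order data")}
assert degraded_objects(recorded, {"orders.csv": b"order data"}) == []
assert degraded_objects(recorded, {"orders.csv": b"corrupted"}) == ["orders.csv"]
```

Digests stored alongside an immutable copy give a cheap, objective measure of restore quality even when the backup itself is run by the cloud vendor.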

Organizations should also be sure they understand their cloud vendor service agreements front to back, from performance expectations to any additional costs that may be incurred for recovery, he adds.

2. Be Redundant/Resilient

Companies need to remember that the public cloud is just someone else’s data center, says Keegan.

“People often make the mistake of putting their backup and disaster-recovery environments in the same cloud region where their production applications live,” he notes.

“If that region goes down, you will have zero recovery capabilities. Be sure to use an alternate cloud region or a secondary cloud provider for your disaster recovery environment.”
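Keegan's rule of never co-locating disaster recovery with production is simple enough to enforce as an automated pre-deployment check. A sketch with illustrative region names:

```python
def validate_dr_placement(production: str, recovery: str) -> None:
    """Reject a plan whose recovery environment shares the production
    region, since a regional outage would then take out both."""
    if production == recovery:
        raise ValueError(
            f"recovery region {recovery!r} must differ from production; "
            "use an alternate region or a secondary cloud provider"
        )

validate_dr_placement("region-east", "region-west")  # passes silently
```

In practice this kind of guard belongs in infrastructure-as-code review or a CI pipeline, so the mistake is caught before anything is provisioned.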

And since software running in the cloud isn’t automatically resilient, it needs to be architected for high availability. This will prevent data loss and ensure organizations are resilient to failure, according to Srinivasan.

In the world of databases, this is achieved by replicating data for redundancy, he says.

“Cloud-native databases built for high availability automatically replicate data across different availability zones,” Srinivasan says. “In the event of a failure, applications can seamlessly, often with no disruption, shift both reads and writes to another location without data loss or impact to customers.”
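The behavior Srinivasan describes, shifting reads and writes away from a failed location, can be sketched as a tiny router over availability zones. Zone names and the health model here are illustrative; a production database handles this internally:

```python
class ZoneRouter:
    """Route requests to the first healthy zone in preference order."""

    def __init__(self, zones: list[str]):
        self.zones = zones          # preference order
        self.healthy = set(zones)   # start with every zone healthy

    def mark_down(self, zone: str) -> None:
        self.healthy.discard(zone)

    def mark_up(self, zone: str) -> None:
        if zone in self.zones:
            self.healthy.add(zone)

    def route(self) -> str:
        for zone in self.zones:
            if zone in self.healthy:
                return zone
        raise RuntimeError("no healthy zone available for failover")

router = ZoneRouter(["zone-a", "zone-b", "zone-c"])
router.mark_down("zone-a")          # simulate an outage
assert router.route() == "zone-b"   # traffic shifts with no operator step
```

The key property is that failover is a routing decision, not a manual restore: as long as replicas in the surviving zones hold the data, applications keep reading and writing.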

This is important because an organization’s reputation is at risk if it does not have a resilient response plan for major disruptions and is not prepared at the first sign of a technical disaster, says Paul Barnhill, managing director of Deloitte Consulting LLP. A disaster recovery plan can also build customer trust and lower insurance premiums.

An example of why this is important took place during the “Great Texas Freeze” (Feb. 11-20, 2021), says Srinivasan.

“One of our global Fortune 100 retail customers’ public cloud data center was offline for four days, and their backup generators failed.

“But despite a multi-day Azure outage, the retailer managed to prevent application downtime because it had deployed its distributed SQL database cluster across three regions with synchronous replication, which meant data was always 100% consistent across the regions.”

The retailer was able to continue running its business-critical product catalog (comprising billions of mappings for over 100 million items) and handle more than 100,000 queries per second, he notes.

“YugabyteDB provided the uptime and flexibility the customer needed to keep their data online during this unprecedented natural disaster,” he says.

1. Regularly Test, Monitor, and Update Your Disaster Recovery Plan

Another step organizations must take when designing their cloud-based disaster recovery plans is to review and analyze their current infrastructures, says Mike Lefebvre, director of cybersecurity for SEI Sphere, a managed security services provider.

“Then once businesses have selected their strategies and set up their infrastructures, regular testing is vital,” he says. “We recommend an annual test. This ensures that the failover process works as expected in the event of an actual disaster.”

Companies must also continuously monitor their disaster recovery environments and periodically review and update their disaster recovery plans to accommodate changes in infrastructures, applications, or business needs, according to Lefebvre.

“Be sure to have alerts set up for replications and backups so you know that resources are in a healthy state,” he adds.

The Bottom Line

A disaster recovery plan prepares businesses to better respond to outages promptly with improved decision-making, says Barnhill.

It also helps with employee morale and safety during a critical time when people may be personally impacted by the disaster and have conflicting commitments with limited time to figure it out, he says. Thus, the cost of downtime can be greatly reduced with proper planning and practice.

“A disaster recovery plan is critical for business continuity, and it’s a regulatory requirement for sensitive data with audit processes in place for several key applications, organizations, and government entities,” Barnhill adds.

A disaster recovery plan also provides a competitive advantage for clients that prefer vendors with reliable and secure services, he says. This is especially relevant where businesses rely heavily on suppliers and supply chains to prevent disruption of goods and services.

Linda Rosencrance
Technology journalist

Linda Rosencrance is a freelance writer and editor based in the Boston area, with expertise ranging from AI and machine learning to cybersecurity and DevOps. She has been covering IT topics since 1999 as an investigative reporter working for several newspapers in the Boston metro area. Before she joined Techopedia in 2022, her articles appeared in TechTarget, MSDynamicsworld.com, TechBeacon, IoT World Today, Computerworld, CIO magazine, and many other publications. She also writes white papers, case studies, ebooks, and blog posts for many corporate clients, interviewing key players, including CIOs, CISOs, and other C-suite execs.