Kari Rivas, Author at Backblaze Blog | Cloud Storage & Cloud Backup

Cloud Storage for Higher Education: Benefits & Best Practices

Higher education institutions create and store large amounts of data with a diverse set of needs. Cloud storage provides flexible solutions. Here are a few things to think about.

Universities and colleges lead the way in educating future professionals and conducting groundbreaking research. Along the way, higher education generates hundreds of terabytes—even petabytes—of data. But higher education also faces significant data risks: it is one of the most targeted industries for ransomware, with 79% of institutions reporting they were hit with ransomware in the past year.

While higher education institutions often have robust data storage systems that can even include their own off-site disaster recovery (DR) centers, cloud storage provides several benefits that legacy storage systems can’t match. In particular, cloud storage allows schools to protect against ransomware with immutability, easily grow their datasets without constant hardware outlays, and protect the computers of faculty, students, and researchers with cloud-based endpoint backups.

Cloud storage is also a promising alternative to cloud drives, traditionally a popular option for higher education institutions. While cloud drives provide easy storage across campus, both Google and Microsoft have announced the end of their unlimited storage tiers for education. Faced with these changes, many higher education institutions are looking for alternatives. Plus, cloud drives do not provide true incremental backup, do not adequately protect against ransomware, and offer limited options for recovery.

Ultimately, cloud storage better protects your school from local disasters and ransomware with a secure, off-site copy of your data. And, with the right cloud service provider, it can be much more affordable than you think. In this article, we’ll look at the benefits of cloud storage for higher education, study some popular use cases, and explore best practices and provisioning considerations.

The Benefits of Cloud Storage in Higher Education

Cloud storage solutions present a host of benefits for organizations in any industry, but many of these benefits are particularly relevant for higher education institutions. Let’s take a look:

1. Enhanced Security

Higher education institutions have emerged as one of ransomware attackers’ favorite targets—63% of higher education CISOs say a cyber attack is likely within the next year. Data backups are a core part of any organization’s security posture, and that includes keeping those backups protected and secure in the cloud. Using cloud storage to store backups strengthens backup programs by keeping copies off-site and geographically distanced, which adheres to the 3-2-1 backup strategy (more on that later). Cloud storage can also be made immutable using tools like Object Lock, meaning data can’t be modified or deleted. This feature is often unavailable in existing data storage hardware.
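
To make that concrete, here’s a minimal sketch of locking a backup object at upload time via an S3-compatible API such as the one Backblaze B2 provides, using Python’s boto3. The endpoint, credentials, bucket, and key names are placeholders, and the bucket must be created with Object Lock enabled:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Connect to an S3-compatible endpoint. The endpoint, key ID, and
# application key below are placeholders -- substitute your own.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",
    aws_access_key_id="<keyID>",
    aws_secret_access_key="<applicationKey>",
)

# Upload a backup object that can't be modified or deleted for 30 days.
# The bucket (a placeholder name here) must have Object Lock enabled.
with open("finance-backup.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="university-backups",
        Key="2024-02-29/finance-backup.tar.gz",
        Body=f,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```

Until the retention date passes, delete and overwrite requests against that object will fail, which is exactly the behavior that blunts a ransomware attack on your backups.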

2. Cost-Effective Storage

Higher education generates huge volumes of data each year. Keeping costs low without sacrificing protection is a key priority for these institutions, across both active data and archival data stores. Cloud storage helps higher education institutions use their storage budgets effectively because they don’t pay to provision and maintain on-premises infrastructure they don’t need. It can also help them migrate away from linear tape-open (LTO) tape, which can be costly to manage.

3. Improved Scalability

As digital data continues to grow, it’s important for these institutions to be able to scale their storage easily. Cloud storage allows higher education institutions to affordably tier data off to the cloud rather than over-provisioning on-premises infrastructure.

4. Data Accessibility

Making data easily accessible is important for many aspects of higher education. From supporting scientific research to attracting prospective students, the increasing quantities of data that higher education creates need to be easy to access, use, and manage. Cloud storage makes data accessible from anywhere, and with hot cloud storage, there are no access delays like there can be with cold cloud storage or LTO tape.

5. Supports Cybersecurity Insurance Requirements

It’s increasingly common for organizations to carry cyber insurance to offset potential liabilities incurred by a cyber attack. Many insurance applications ask whether the covered entity has off-site or immutable backups; sometimes they even specify that the backup must be held somewhere other than the organization’s own locations. (We’ve seen organizations outside of higher ed adding cloud storage for this reason as well.) Cloud storage provides a pathway to meeting the cyber insurance requirements universities may face.

How Higher Ed Institutions Can Use Cloud Storage Effectively

There are many ways higher education institutions can make effective use of cloud storage solutions. The most common use case is cloud storage for backup and archive systems. Transitioning from on-premises storage to cloud-based solutions—even if an organization transitions only part of its total data footprint while retaining on-premises systems—is a powerful way for higher education institutions to protect their most important data. To illustrate, here are some common use cases with real-life examples:

LTO Replacement

It’s no surprise that maintaining tape is a pain. While it’s the only true physical air gap solution, it’s also a time suck, and those are precious hours your IT team should be spending on strategic initiatives. This is particularly true for projects that generate huge amounts of data, like scientific research. Cloud storage provides the same off-site protection as LTO with far fewer maintenance hours.

Off-Site Backups

As mentioned, higher ed institutions often keep an off-site copy of their data, but it’s commonly a few miles down the road—perhaps at a different branch’s campus. Transitioning to cloud storage allowed Coast Community College District (CCCD) to quit chauffeuring physical tapes to an off-site backup center about five miles away and instead implement a virtualized, multi-cloud solution with truly geographically distanced backups.

Protection From Ransomware

A ransomware attack is not a matter of if, but when. Cloud storage provides immutable ransomware protection with Object Lock, which creates a “virtual” air gap. Pittsburg State University, for example, leverages cloud storage to protect university data from ransomware threats. They strengthened their protection four-fold by adding immutable off-site data backups, and are now able to manage data recovery and data integrity with a single robust solution (that doesn’t multiply their expenses).

Computer Backup

While S3-compatible object storage provides a secure destination for data from servers, virtual machines (VMs), and network attached storage (NAS), it’s important to remember to back up the computers of faculty, staff, students, and researchers as well. Workstation backup is particularly important for organizations that rely on cloud drives, as these platforms are only designed to capture data stored in their respective clouds, leaving local files vulnerable to loss. But one thing you don’t want is a drain on your IT resources—you want a solution that’s easy to implement, easy to manage over time, and simple enough to serve users of varying tech savviness.

Best Practices for Data Backup and Management in the Cloud

Higher education institutions (and anyone, really!) should follow basic best practices to get the most out of their cloud storage solutions. Here are a few key points to keep in mind when developing a data backup and management strategy for higher education:

The 3-2-1 Backup Strategy

This widely accepted foundational structure recommends keeping three copies of all important data (one primary copy and two backup copies) on two different media types (to diversify risk) and storing at least one copy off-site. While colleges and universities frequently have high-capacity data storage systems, they don’t always adhere to the 3-2-1 rule. For instance, a school may have an off-site disaster recovery site, but their backups are not on two different media types. Or, they may be meeting the two-media-type rule but their media are not wholly off-site. Keeping your backups at a different campus location does not constitute a true off-site backup if you’re in the same region, for instance—the closer your data storage sites are, the more likely they’ll be subject to the same risks, like network outages, natural disasters, and so on.

Regular Data Backups

You’re only as strong as your last backup. Maintaining a frequent and regular backup schedule is a tried and true way to ensure that your institution’s data is as protected as possible. Schools that have historically relied on Google Drive, Dropbox, OneDrive, and other cloud drive systems are particularly vulnerable to this gap in their data protection strategy. Cloud drives provide sync functionality; they are not a true backup. While many now have the ability to restore files, restore periods are limited and not customizable, and services often only back up certain file types—so, your documents, but not your email or user data, for instance. Especially for larger organizations with complex file management and high compliance needs, cloud drives don’t provide adequate protection from ransomware. Speaking of ransomware…

Ransomware Protection

Educational institutions (including both K-12 and higher ed) are more frequently targeted by ransomware today than ever before. When you’re using cloud storage, you can enable security features like Object Lock to offer “air gapped” protection and data immutability in the cloud. When you add endpoint backup, you’re ensuring that all the data on a workstation is backed up—closing a gap in cloud drives that can leave certain types of data vulnerable to loss.

Disaster Recovery Planning

Incorporating cloud storage into your disaster recovery strategy is the best way to plan for the worst. If unexpected disasters occur, you’ll know exactly where your data lives and how to restore it so you can get back to work quickly. Schools often use cross-site replication as their disaster recovery solution, but such methods can fail the 3-2-1 test (see above), and replication is not a true backup since it functions much the same way as sync. If ransomware invades your primary dataset, the infection can be replicated across all your copies. Cloud storage allows you to fortify your disaster recovery strategy and plug the gaps in your data protection.

Regulatory Compliance

Universities work with and store many diverse kinds of information, including highly regulated data types like medical records and research data. It’s important for higher education to use cloud storage solutions that help them remain in compliance with data privacy laws and federal or international regulations. Providers like Backblaze that frequently work with higher education institutions will usually have a HECVAT (Higher Education Community Vendor Assessment Toolkit) questionnaire available so you can better understand a vendor’s compliance and security stance, and they undergo regular third-party assessments such as StateRAMP authorization and SOC 2 audits.

Comprehensive Protection

While it’s obvious that data systems like servers, virtual machines, and network attached storage (NAS) should be backed up, consider the other important sources of data that should be included in your protection strategy. For instance, your Microsoft 365 data should be backed up because you cannot rely on Microsoft to provide adequate backups. Under the shared responsibility model, Microsoft and other SaaS providers state that your data is your responsibility to back up—even if it’s stored on their cloud. And don’t forget the computers of your faculty, students, staff, and researchers. These devices can hold incredibly valuable work, and having a native endpoint backup solution is critical.

The Importance of Cloud Storage for Higher Education Institutions

Institutions of higher education were already on the long road toward digital transformation before the pandemic hit, but 2020 forced any reluctant parties to accept that the future was upon us. The combination of schools’ increasing quantities of sensitive and protected data and the growing threat of ransomware in the higher education space reinforces the need for secure and robust cloud storage solutions. As time has gone on, it’s clear that the diverse requirements of higher education institutions demand flexible, scalable, affordable solutions, and that current and legacy systems have room for improvement.

Universities that leverage best practices like designing 3-2-1 backup strategies, conducting frequent and regular backups, and developing disaster recovery plans before they’re needed will be well on their way toward becoming more modern, digital-first organizations. And with the right cloud storage solutions in place, they’ll be able to move the needle with measurable business benefits like cost effectiveness, data accessibility, increased security, and scalability.

Cloud Storage Performance: The Metrics That Matter

Latency, throughput, availability, and durability are the main factors that affect cloud performance. Let’s talk about how.

Availability, time to first byte, throughput, durability—there are plenty of ways to measure “performance” when it comes to cloud storage. But which measure is best, and how should performance factor in when you’re choosing a cloud storage provider? Other than security and cost, performance is arguably the most important decision criterion, but it’s also the hardest dimension to pin down. It can be highly variable, and it depends on your own infrastructure, your workload, and all the network connections between your infrastructure and the cloud provider as well.

Today, I’m walking through how to think strategically about cloud storage performance, including which metrics matter and which may not be as important for you.

First, What’s Your Use Case?

The first thing to keep in mind is how you’re going to be using cloud storage. After all, performance requirements will vary from one use case to another. For instance, you may need greater performance in terms of latency if you’re using cloud storage to serve up software as a service (SaaS) content; however, if you’re using cloud storage to back up and archive data, throughput is probably more important for your purposes.

For something like application storage, you should also have other tools in your toolbox even when you’re using hot, fast public cloud storage. For instance, a content delivery network (CDN) lets you cache content on edge servers, closer to end users.

Ultimately, you need to decide which cloud storage metrics are the most important to your organization. Performance is important, certainly, but security or cost may be weighted more heavily in your decision matrix.

What Is Performant Cloud Storage?

Performance can be described using a number of different criteria, including:

  • Latency
  • Throughput
  • Availability
  • Durability

I’ll define each of these and talk a bit about what each means when you’re evaluating a given cloud storage provider and how they may affect upload and download speeds.

Latency

  • Latency is defined as the time between a client request and a server response. It quantifies the time it takes data to transfer across a network.  
  • Latency is primarily influenced by physical distance—the farther away the client is from the server, the longer it takes to complete the request. 
  • If you’re serving content to many geographically dispersed clients, you can use a CDN to reduce the latency they experience. 

Latency can be influenced by network congestion, security protocols on a network, or network infrastructure, but the primary cause is generally distance, as we noted above. 

Downstream latency is typically measured using time to first byte (TTFB). In the context of surfing the web, TTFB is the time between a page request and when the browser receives the first byte of information from the server. In other words, TTFB is measured by how long it takes between the start of the request and the start of the response, including DNS lookup and establishing the connection using a TCP handshake and TLS handshake if you’ve made the request over HTTPS.
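
If you want a rough sense of the TTFB you see from a given server, you can approximate it in a few lines of Python. This is a simple sketch, not a rigorous benchmark (a real measurement would average many samples, and the URL is just an example):

```python
import time
import requests

# A rough TTFB measurement: elapsed time from sending the request to
# receiving the first byte of the response body, including DNS lookup
# and TCP/TLS setup on a cold connection.
url = "https://www.example.com/"
start = time.monotonic()
with requests.get(url, stream=True) as response:
    next(response.iter_content(chunk_size=1), b"")  # block until the first byte arrives
    ttfb_ms = (time.monotonic() - start) * 1000
print(f"Approximate TTFB for {url}: {ttfb_ms:.1f} ms")
```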

Let’s say you’re uploading data from California to a cloud storage data center in Sacramento. In that case, you’ll experience lower latency than if your business data is stored in, say, Ohio and has to make the cross-country trip. However, making the “right” decision about where to store your data isn’t quite as simple as that, and the complexity goes back to your use case. If you’re using cloud storage for off-site backup, you may want your data to be stored farther away from your organization to protect against natural disasters. In this case, performance is likely secondary to location—you only need fast enough performance to meet your backup schedule. 

Using a CDN to Improve Latency

If you’re using cloud storage to store active data, you can speed up performance by using a CDN. A CDN helps speed content delivery by caching content at the edge, meaning faster load times and reduced latency. 

Edge networks create “satellite servers” that are separate from your central data server, and CDNs leverage these to chart the fastest data delivery path to end users. 

Throughput

  • Throughput is a measure of the amount of data passing through a system at a given time.
  • If you have spare bandwidth, you can use multi-threading to improve throughput. 
  • Cloud storage providers’ architecture influences throughput, as do their policies around slowdowns (i.e. throttling).

Throughput is often confused with bandwidth. The two concepts are closely related, but different. 

To explain them, it’s helpful to use a metaphor: Imagine a swimming pool. The amount of water in it is your file size. When you want to drain the pool, you need a pipe. Bandwidth is the size of the pipe, and throughput is the rate at which water moves through the pipe successfully. So, bandwidth affects your ultimate throughput. Throughput is also influenced by processing power, packet loss, and network topology, but bandwidth is the main factor. 
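
To put rough numbers on the metaphor, here’s a back-of-the-envelope calculation of how long it takes to drain that pool; the figures are illustrative assumptions, not benchmarks:

```python
# Back-of-the-envelope transfer time: draining a 1TB "pool" through a
# 1Gb/s "pipe" at 80% effective utilization. All numbers are
# illustrative assumptions.
data_bytes = 1 * 10**12                  # 1TB of data
bandwidth_bits_per_s = 1 * 10**9         # 1Gb/s link (the pipe size)
effective_throughput = 0.80 * bandwidth_bits_per_s  # what you actually achieve

seconds = (data_bytes * 8) / effective_throughput
print(f"Transfer time: ~{seconds / 3600:.1f} hours")  # roughly 2.8 hours
```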

Using Multi-Threading to Improve Throughput

Assuming you have some bandwidth to spare, one of the best ways to improve throughput is to enable multi-threading. Threads are units of execution within processes; when a program transmits files across a network, threads do the work. Using more than one thread (multi-threading) to transmit files is, not surprisingly, better and faster than using just one (although a greater number of threads will require more processing power and memory). To return to our water pipe analogy, multi-threading is like having multiple water pumps (threads) running to that same pipe. Maybe with one pump, you can only fill 10% of your pipe. But you can keep adding pumps until you reach pipe capacity.

When you’re using cloud storage with an integration like backup software or a network attached storage (NAS) device, the multi-threading setting is typically found in the integration’s settings. Many backup tools, like Veeam, are already set to multi-thread by default. Veeam automatically makes adjustments based on details like the number of individual backup jobs, or you can configure the number of threads manually. Other integrations, like Synology’s Cloud Sync, also give you granular control over threading so you can dial in your performance.  

A diagram showing single vs. multi-threaded processes.
Still confused about threads? Learn more in our deep dive, including what’s going on in this diagram.

That said, the gains from increasing the number of threads are limited by the available bandwidth, processing power, and memory. Finding the right setting can involve some trial and error, but the improvements can be substantial (as we discovered when we compared download speeds on different Python versions using single vs. multi-threading).
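
For illustration, here’s what the pattern looks like in Python. This is a generic sketch with hypothetical URLs, not the implementation any particular backup tool uses:

```python
import concurrent.futures
import requests

# A minimal multi-threaded download sketch. The URLs are hypothetical.
urls = [
    "https://example.com/backups/part-001",
    "https://example.com/backups/part-002",
    "https://example.com/backups/part-003",
    "https://example.com/backups/part-004",
]

def download(url: str) -> int:
    """Fetch one file and return its size in bytes."""
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    return len(response.content)

# Four pumps running to the same pipe; tune max_workers against your
# available bandwidth, CPU, and memory.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    for url, size in zip(urls, pool.map(download, urls)):
        print(f"{url}: {size:,} bytes")
```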

What About Throttling?

One question you’ll absolutely want to ask when you’re choosing a cloud storage provider is whether they throttle traffic. That means they deliberately slow down your connection for various reasons. Shameless plug here: Backblaze does not throttle, so customers are able to take advantage of all their bandwidth while uploading to B2 Cloud Storage. Many other public cloud services do throttle, although they certainly may not make it widely known, so be sure to ask the question upfront when engaging with a storage provider.

Upload Speed and Download Speed

Your ultimate upload and download speeds will be affected by throughput and latency. Again, it’s important to consider your use case when determining which performance measure is most important for you. Latency is important to application storage use cases where things like how fast a website loads can make or break a potential SaaS customer. With latency being primarily influenced by distance, it can be further optimized with the help of a CDN. Throughput is often the measurement that’s more important to backup and archive customers because it is indicative of the upload and download speeds an end user will experience, and it can be influenced by cloud storage provider practices, like throttling.   

Availability

  • Availability is the percentage of time a cloud service or a resource is functioning correctly.
  • Make sure the availability listed in the cloud provider’s service level agreement (SLA) matches your needs. 
  • Keep in mind the difference between hot and cold storage—cold storage services like Amazon Glacier offer slower retrieval and response times.

Also called uptime, this metric measures the percentage of time that a cloud service or resource is available and functioning correctly. It’s usually expressed as a percentage, with 99.9% (three nines) or 99.99% (four nines) availability being common targets for critical services. Availability is often backed by SLAs that define the uptime customers can expect and what happens if availability falls below that metric. 
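
Here’s a quick way to translate those “nines” into allowed downtime per year. This is simple arithmetic, not a statement about any particular provider’s SLA terms:

```python
# Translating "nines" of availability into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines, availability in (("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)):
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} ({availability:.3%}): "
          f"~{downtime_minutes:.0f} minutes of downtime per year")
```

Three nines allows roughly 526 minutes (almost nine hours) of downtime a year; four nines cuts that to about 53 minutes.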

You’ll also want to consider availability if you’re considering whether you want to store in cold storage versus hot storage. Cold storage is lower performing by design. It prioritizes durability and cost-effectiveness over availability. Services like Amazon Glacier and Google Coldline take this approach, offering slower retrieval and response times than their hot storage counterparts. While cost savings is typically a big factor when it comes to considering cold storage, keep in mind that if you do need to retrieve your data, it will take much longer (potentially days instead of seconds), and speeding that up at all is still going to cost you. You may end up paying more to get your data back faster, and you should also be aware of the exorbitant egress fees and minimum storage duration requirements for cold storage—unexpected costs that can easily add up. 

                     Cold                          Hot
Access Speed         Slow                          Fast
Access Frequency     Seldom or never               Frequent
Data Volume          Low                           High
Storage Media        Slower drives, LTO, offline   Faster drives, durable drives, SSDs
Cost                 Lower                         Higher

Durability

  • Durability is the ability of a storage system to consistently preserve data.
  • Durability is measured in “nines” or the probability that your data is retrievable after one year of storage. 
  • We designed the Backblaze B2 Storage Cloud for 11 nines of durability using erasure coding.

Data durability refers to the ability of a data storage system to reliably and consistently preserve data over time, even in the face of hardware failures, errors, or unforeseen issues. It is a measure of data’s long-term resilience and permanence. Highly durable data storage systems ensure that data remains intact and accessible, meeting reliability and availability expectations, making it a fundamental consideration for critical applications and data management.

We usually measure durability (or, more precisely, annual durability) in “nines,” referring to the number of nines in the probability, expressed as a percentage, that your data is retrievable after one year of storage. We know from our work on Drive Stats that an annual failure rate of 1% is typical for a hard drive. So, if you were to store your data on a single drive, its durability, the probability that it would not fail, would be 99%, or two nines.

The very simplest way of improving durability is to simply replicate data across multiple drives. If a file is lost, you still have the remaining copies. It’s also simple to calculate the durability with this approach. If you write each file to two drives, you lose data only if both drives fail. We calculate the probability of both drives failing by multiplying the probabilities of either drive failing, 0.01 x 0.01 = 0.0001, giving a durability of 99.99%, or four nines. While simple, this approach is costly—it incurs a 100% overhead in the amount of storage required to deliver four nines of durability.
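
The same arithmetic extends to any number of copies. Here’s a sketch of this simplified replication model; it assumes independent drive failures and ignores the rebuild-time effects discussed below:

```python
# Durability of simple n-way replication, assuming independent drives
# with a 1% annual failure rate (the typical figure noted above).
# Data is lost only if every copy fails.
annual_failure_rate = 0.01

for n in (1, 2, 3):
    durability = 1 - annual_failure_rate ** n
    overhead_pct = (n - 1) * 100
    print(f"n={n}: durability {durability:.4%}, storage overhead {overhead_pct}%")
```

Each additional copy buys two more nines, but at the cost of another 100% of storage overhead, which is why erasure coding is so attractive.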

Erasure coding is a more sophisticated technique, improving durability with much less overhead than simple replication. An erasure code takes a “message,” such as a data file, and makes a longer message in a way that the original can be reconstructed from the longer message even if parts of the longer message have been lost. 

A decorative image showing the matrices that get multiplied to allow Reed-Solomon code to re-create files.
A representation of Reed-Solomon erasure coding, with some very cool Storage Pods in the background.

The durability calculation for this approach is much more complex than for replication, as it involves the time required to replace and rebuild failed drives as well as the probability that a drive will fail, but we calculated that we could take advantage of erasure coding in designing the Backblaze B2 Storage Cloud for 11 nines of durability with just 25% overhead in the amount of storage required. 

How does this work? Briefly, when we store a file, we split it into 16 equal-sized pieces, or shards. We then calculate four more shards, called parity shards, in such a way that the original file can be reconstructed from any 16 of the 20 shards. We then store the resulting 20 shards on 20 different drives, each in a separate Storage Pod (storage server).

Note: As hard disk drive capacity increases, so does the time required to recover after a drive failure, so we periodically adjust the ratio between data shards and parity shards to maintain our eleven nines durability target. Consequently, our very newest vaults use a 15+5 scheme.

If a drive does fail, it can be replaced with a new drive, and its data rebuilt from the remaining good drives. We open sourced our implementation of Reed-Solomon erasure coding, so you can dive into the source code for more details.

Additional Factors Impacting Cloud Storage Performance

Beyond bandwidth and latency, a few other factors impact cloud storage performance, including:

  • The size of your files.
  • The number of files you upload or download.
  • Block (part) size.
  • The amount of available memory on your machine. 

Small files—that is, those less than 5GB—can be uploaded in a single API call. Larger files, from 5MB to 10TB, can be uploaded as “parts”, in multiple API calls. You’ll notice that there is quite an overlap here! For uploading files between 5MB and 5GB, is it better to upload them in a single API call, or split them into parts? What is the optimum part size? For backup applications, which typically split all data into equal-sized blocks, storing each block as a file, what is the optimum block size? As with many questions, the answer is that it depends.

Remember latency? Each API call incurs a more-or-less fixed overhead due to latency. For a 1GB file, assuming a single thread of execution, uploading all 1GB in a single API call will be faster than ten API calls each uploading a 100MB part, since those additional nine API calls each incur some latency overhead. So, bigger is better, right?

Not necessarily. Multi-threading, as mentioned above, affords us the opportunity to upload multiple parts simultaneously, which improves performance—but there are trade-offs. Typically, each part must be stored in memory as it is uploaded, so more threads means more memory consumption. If the number of threads multiplied by the part size exceeds available memory, then either the application will fail with an out of memory error, or data will be swapped to disk, reducing performance.

Downloading data offers even more flexibility, since applications can specify any portion of the file to download in each API call. Whether uploading or downloading, there is a maximum number of threads that will drive throughput to consume all of the available bandwidth. Exceeding this maximum will consume more memory, but provide no performance benefit. If you go back to our pipe analogy, you’ll have reached the maximum capacity of the pipe, so adding more pumps won’t make things move faster. 

So, what to do to get the best performance possible for your use case? Simple: customize your settings. 

Most backup and file transfer tools allow you to configure the number of threads and the amount of data to be transferred per API call, whether that’s block size or part size. If you are writing your own application, you should allow for these parameters to be configured. When it comes to deployment, some experimentation may be required to achieve maximum throughput given available memory.
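
For example, boto3, which works against S3-compatible endpoints, exposes these knobs through its TransferConfig. Here’s a minimal sketch; the endpoint, bucket, and file names are placeholders, and credentials are assumed to come from the environment:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Peak memory is roughly max_concurrency x multipart_chunksize, so keep
# the product well under available RAM.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # split files larger than 64MB
    multipart_chunksize=64 * 1024 * 1024,  # upload in 64MB parts
    max_concurrency=8,                     # eight upload threads
)

s3 = boto3.client("s3", endpoint_url="https://s3.us-west-004.backblazeb2.com")
s3.upload_file(
    "server-backup.tar",
    "my-backup-bucket",
    "backups/server-backup.tar",
    Config=config,
)
```

With these settings, eight threads each hold a 64MB part in flight, for roughly 512MB of peak buffer memory.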

How to Evaluate Cloud Performance

To sum up, the cloud is increasingly becoming a cornerstone of every company’s tech stack. Gartner predicts that by 2026, 75% of organizations will adopt a digital transformation model predicated on cloud as the fundamental underlying platform. So, cloud storage performance will likely be a consideration for your company in the next few years if it isn’t already.

It’s important to consider that cloud storage performance can be highly subjective and heavily influenced by things like use case (e.g., backup and archive versus application storage, media workflow, or another), end user bandwidth and throughput, file size, block size, and so on. Any evaluation of cloud performance should take these factors into account rather than simply relying on metrics in isolation. And a holistic cloud strategy will likely have multiple operational schemas to optimize resources for different use cases.

Wait, Aren’t You, Backblaze, a Cloud Storage Company?

Why, yes. Thank you for noticing. We ARE a cloud storage company, and we OFTEN get questions about all of the topics above. In fact, that’s why we put this guide together—our customers and prospects are the best sources of content ideas we can think of. Circling back to the beginning, it bears repeating that performance is one factor to consider alongside security and cost. (And, hey, we would be remiss not to mention that we’re also one-fifth the cost of AWS S3.) Ultimately, whether or not you choose Backblaze B2 Cloud Storage, we hope this information is useful to you. Let us know if there’s anything we missed.

Seven Reasons Your Backup Strategy Might Be Failing You

The 3-2-1 backup strategy is a great place to start, but there’s debate about whether it’s still sufficient to keep your data protected. Let’s talk about how to improve your backup strategy.

Are you confident that your backup strategy has you covered? If not, it’s time to confront the reality that your backup strategy might not be as strong as you think. And even if you’re feeling great about it, it can never hurt to poke holes in your strategy to see where you need to shore up your defenses.

Whether you’re a small business owner wearing many hats (including the responsibility for backing up your company’s data) or a seasoned IT professional, you know that protecting your data is a top priority. The industry standard is the 3-2-1 backup strategy, which states you should have three copies of your data on two different kinds of media with at least one copy off-site or in the cloud. But a lot has changed since that standard was introduced. 

In this post, we’ll identify several ways your 3-2-1 strategy (and your backups in general) could fail. These are common mistakes that even professional IT teams can make. While 3-2-1 is a great place to start, especially if you’re not currently following that approach, it can now be considered table stakes. 

For larger businesses, or any business wanting to fail-proof its backups, read on to learn how you can plug the gaps in your 3-2-1 strategy and better secure your data against ransomware and other disasters.

Join the Webinar

There’s more to learn about how to shore up your data protection strategy. Join Backblaze on Thursday, August 10 at 10 a.m. PT/noon CT/5 p.m. UTC for a 30-minute webinar on “10 Common Data Protection Mistakes.”

Sign Up ➔ 

Let’s start with a quick review of the 3-2-1 strategy.

The 3-2-1 Backup Strategy

A 3-2-1 strategy means having at least three total copies of your data, two of which are local but on different media, and at least one off-site copy or in the cloud. For instance, a business may keep a local copy of its data on a server at the main office, a second copy of its data on a NAS device in the same location, and a third copy of its data in the public cloud, such as Backblaze B2 Cloud Storage. Hence, there are three copies of its data with two local copies on different media (the server and NAS) and one copy stored off-site in the cloud.

A diagram showing a 3-2-1 backup strategy, in which there are three copies of data, in two different locations, with one location off-site.

The 3-2-1 rule originated in 2005 when Peter Krogh, a photographer, writer, and consultant, introduced it in his book, “The DAM Book: Digital Asset Management for Photographers.” As this rule was developed almost 20 years ago, you can imagine that it may be outdated in some regards. Consider that 2005 was the year YouTube was founded. Let’s face it, a lot has changed since 2005, and today the 3-2-1 strategy is just the starting point. In fact, even if you’re faithfully following the 3-2-1 rule, there may still be some gaps in your data protection strategy.

While backups to external hard drives, tape, and other recordable media (CDs, DVDs, and SD cards) were common two decades ago, those modalities are now considered legacy storage. The public cloud was a relatively new innovation in 2005, so, at first, 3-2-1 did not even consider the possibilities of cloud storage. 

Arguably, the entire concept of “media” in 3-2-1 (as in having two local copies of your data on two different kinds of media) may not make sense in today’s modern IT environment. And, while an on-premises copy of your data typically offers the fastest Recovery Time Objective (RTO), having two local copies of your data will not protect against the multitude of potential natural disasters like fire, floods, tornados, and earthquakes. 

The “2” part of the 3-2-1 equation may make sense for consumers and sole proprietors (e.g., photographers, graphic designers, etc.) who are prone to hardware failure and for whom having a second copy of data on a NAS device or external hard drive is an easy solution, but enterprises have more complex infrastructures. 

Enterprises may be better served by having more than one off-site copy, in case of an on-premises data disaster. This can be easily automated with a cloud replication tool which allows you to store your data in different regions. (Backblaze offers Cloud Replication for this purpose.) Replicating your data across regions provides geographical separation from your production environment and added redundancy. The bottom line is that 3-2-1 is a good starting point for configuring your backup strategy, but it should not be taken as a one-size-fits-all approach.

The 3-2-1-1-0 Strategy

Some companies in the data protection space, like Veeam, have updated 3-2-1 with the 3-2-1-1-0 approach. This particular definition stipulates that you:

  • Maintain at least three copies of business data.
  • Store data on at least two different types of storage media.
  • Keep one copy of the backups in an off-site location.
  • Keep one copy of the media offline or air gapped.
  • Ensure all recoverability solutions have zero errors.

A diagram showing the 3-2-1-1-0 backup strategy.

The 3-2-1-1-0 approach addresses two important weaknesses of 3-2-1. First, 3-2-1 doesn’t address the prevalence of ransomware. Even if you follow 3-2-1 with fidelity, your data could still be vulnerable to a ransomware attack. The 3-2-1-1-0 rule covers this by requiring one copy to be offline or air gapped. With Object Lock, your data can be made immutable, which is considered a virtual air gap, thus fulfilling the 3-2-1-1-0 rule. 

Second, 3-2-1 does not consider disaster recovery (DR) needs. While backups are one part of your disaster recovery plan, your DR plan needs to consider many more factors. The “0” in 3-2-1-1-0 captures an important aspect of DR planning, which is that you must test your backups and ensure you can recover from them without error. Ultimately, you should architect your backup strategy to support your DR plan and the potential need for a recovery, rather than trying to abide by any particular backup rule.
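
One way to act on the “0” is to routinely verify restored data against its source. Here’s a minimal sketch of a checksum comparison; the paths are hypothetical, and a real recovery test would also cover databases, permissions, and application state:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: compare a restored tree against its source tree.
source_root = Path("/data/source")
restored_root = Path("/data/restored")

errors = 0
for src in source_root.rglob("*"):
    if src.is_file():
        restored = restored_root / src.relative_to(source_root)
        if not restored.is_file() or sha256(src) != sha256(restored):
            errors += 1
            print(f"MISMATCH: {src}")
print(f"{errors} errors (target: 0)")
```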

Additional Gaps in Your Backup Strategy

As you can tell by now, there are many shades of gray when it comes to 3-2-1, and these varying interpretations can create areas of weakness in a business’ data protection plan. Review your own plan for the following seven common mistakes and close the gaps in your strategy by implementing the suggested best practices.

1. Using Sync Functionality Instead of Backing Up

You may be following 3-2-1, but if copies of your data are stored on a sync service like Google Drive, Dropbox, or OneDrive, you’re not fully protected. Syncing your data does not allow you to recover from previous versions with the level of granularity that a backup offers.

Best Practice: Instead, ensure you have three copies of your data protected by true backup functionality.

2. Counting Production Data as a Backup

Some interpret the production data to be one of the three copies of data or one of the two different media types.

Best Practice: It’s open to interpretation, but you may want to consider having three copies of data in addition to your production data for the best protection.

3. Using a Storage Appliance That’s Vulnerable to Ransomware

Many on-premises storage systems now support immutability, so it’s a good time to reevaluate your local storage. 

Best Practice: New features in popular backup software like Veeam even enable NAS devices to be protected from ransomware. Learn more about Veeam support for NAS immutability and how to orchestrate end-to-end immutability for impenetrable backups.

4. Not Backing Up Your SaaS Data

It’s a mistake to think your Microsoft 365, Google Workspace, and other software as a service (SaaS) data is protected because it’s already hosted in the cloud. SaaS providers operate under a “shared responsibility model,” meaning they may not back up your data as often as you’d like or provide effective means of recovery.

Best Practice: Be sure to back up your SaaS data to the cloud to ensure complete coverage of the 3-2-1 rule. 

5. Relying On Off-Site Legacy Storage

It’s always a good idea to have at least one copy of your data on-site for the fastest RTO. But if you’re relying on legacy storage, like tape, to fulfill the off-site requirement of the 3-2-1 strategy, you probably know how expensive and time-consuming it can be. And sometimes that expense and timesuck means your off-site backups are not updated as often as they should be, which leads to mistakes. 

Best Practice: Replace your off-site storage with cloud storage to modernize your architecture and prevent gaps in your backups. Backblaze B2 is one-fifth of the cost of AWS, so it’s easily affordable to migrate off tape and other legacy storage systems.

6. No Plan for Affected Infrastructure

Faithfully following 3-2-1 will get you nowhere if you don’t have the infrastructure to restore your backups. If your infrastructure is destroyed or disrupted, you need a way to ensure business continuity in the face of data disaster.

Best Practice: Be sure your disaster recovery plan outlines how you will access your DR documentation and implement the plan even if your environment is down. A tool like Cloud Instant Business Recovery (Cloud IBR), an on-demand, automated solution that lets Veeam users stand up bare metal servers in the cloud, allows you to begin recovering data immediately while you rebuild infrastructure.

7. Keeping Your Off-Site Copy Down the Street

The 3-2-1 policy states that one copy of your data be kept off-site, and some companies maintain a DR site for that exact purpose. However, if your DR facility is in the same local area as your main office, you have a big gap in your data protection strategy. 

Best Practice: Ideally, you should have an off-site copy of your data stored in a public cloud data center far from your data production site, to protect against regional natural disasters.

Telco Adopts Cloud for Geographic Separation

AcenTek’s existing storage scheme covered the 3-2-1 basics, but their off-site copy was no further away than their own data center. In the case of a large natural disaster, their one off-site copy could be vulnerable to destruction, leaving them without a path to recovery. With Backblaze B2, AcenTek has an additional layer of resilience for its backup data by storing it in a secure, immutable cloud storage platform across the country from their headquarters in Minnesota.

Read the Full Story ➔ 

Modernize Your Backup Strategy

The 3-2-1 strategy is a great starting point for small businesses that need to develop a backup plan, but larger mid-market and enterprise organizations must think about business continuity more holistically. 

Backblaze B2 Cloud Storage makes it easy to modernize your backup strategy by sending data backups and archives straight to the cloud—without the expense and complexity of many public cloud services.

At one-fifth the price of AWS, Backblaze B2 is an affordable, time-saving alternative to the hyperscalers, LTO, and traditional DR sites. The intricacies of operations, data management, and potential risks demand a more advanced approach to ensure uninterrupted operations. By leveraging cloud storage, you can create a robust, cost-effective, and flexible backup strategy that you can easily customize to your business needs. Get started today or contact Sales for more information on Backblaze B2 Reserve, Backblaze’s all-inclusive capacity-based pricing that includes premium support and no egress fees.

Interested in learning more about backup, business continuity, and disaster recovery best practices? Check out the free Backblaze resources below.

Secure Your SaaS Tools: Back Up Microsoft 365 to the Cloud

Microsoft 365 operates on a shared responsibility model for backups. Learn what that means and how to best protect your business.

Have you ever had that nagging feeling that you are forgetting something important? It’s like when you were back in school and sat down to take a test, only to realize you studied the wrong material. Worrying about your business data can feel like that. Are you fully protected? Are you doing all you can to ensure your data is backed up, safe, and easily restorable?

If you aren’t backing up your Microsoft 365 data, you could be leaving yourself unprepared and exposed. It’s a common misconception that data stored in software as a service (SaaS) products like Microsoft 365 is already backed up because it’s in a cloud application. But, anyone who’s tried to restore an entire company’s Microsoft 365 instance can tell you that’s not the case. 

In this post, you’ll get a better understanding of how your Microsoft 365 data is stored and how to back it up so you can reliably and quickly restore it should you ever need to. 

What Is Microsoft 365?

More than one million companies worldwide use Microsoft 365 (formerly Office 365). Microsoft 365 is a cloud-based productivity platform that includes a suite of popular applications like Outlook, Teams, Word, Excel, PowerPoint, Access, OneDrive, Publisher, SharePoint, and others.

Chances are that if you’re using Microsoft 365, you use it daily for all your business operations and rely heavily on the information stored within the cloud. But have you ever checked out the backup policies in Microsoft 365? 

If you are not backing up your Microsoft 365 data, you have a gap in your backup strategy that may put your business at risk. If you suffer a malware or ransomware attack, natural disaster, or even accidental deletion by an employee, you could lose that data. In addition, it may cost you significant time and money to restore from Microsoft after a data emergency.

Why You Need to Back Up M365

You might assume that, because it’s in the cloud, your SaaS data is backed up automatically for you. In reality, SaaS companies and products like Microsoft 365 operate on a shared responsibility model, meaning they back up the data and infrastructure to maintain uptime, not to help you in the event you need to restore. Practically speaking, that means that they may not back up your data as often as you would like or archive it for as long as you need. Microsoft does not concern itself with fully protecting your files. Most importantly, they may not offer a timely recovery option if you lose the data, which is critical to getting your business back online in the event of an outage. 

The bottom line is that Microsoft’s top priority is to keep its own services running. They replicate data and have redundancy safeguards in place to ensure you can access your data through the platform reliably, but they do not assume responsibility for their users’ data. 

All this to say, you are ultimately responsible for backing up your data and files in Microsoft 365.

M365 Native Backup Tools

But wait—what about Microsoft 365’s native backup tools? If you are relying on native backup support for your crucial business data, let’s talk about why that may not be the best way to make sure your data is protected.

Retention Period and Storage Costs

First, there are default settings within Microsoft 365 that dictate how long items are retained in the Recycle Bin and Deleted Items folders. You can tweak those settings for a longer retention period, but there is also a storage limit, so you might run out of space quickly. To keep your data longer, you must upgrade your license type and purchase additional storage, which can quickly become costly. Additionally, if an employee accidentally or purposefully deletes an item from the trash bin, it may be gone forever.

Replication Is Not a Backup

Microsoft replicates data as part of its responsibility, but this doesn’t help you meet the requirements of a solid 3-2-1 strategy, where there are three copies of your data, one of which is off-site. So Microsoft doesn’t fully protect you and doesn’t support compliance standards that call for immutability. When Microsoft replicates data, they’re only making a second copy, and that copy is designed to stay in sync with your production data. This means that if an item gets corrupted and then replicated, the replicated copy is also corrupted, and you could lose crucial data. You can’t bank on M365’s replication to protect you.

Sync Is Not a Backup

Similarly, syncing is not backup protection and could end up hurting you. Syncing is designed to keep a single copy of a file up to date with changes you or other users have made on different devices. For example, if you use OneDrive as your cloud backup service, the bad news is that OneDrive will sync corrupted files, overwriting your healthy ones. Essentially, if a file is deleted or infected, it will be deleted or infected on all synchronized devices. In contrast, a true backup allows you to restore from a specific point in time and provides access to previous versions of data, which can be useful in the case of a ransomware attack or deletion.

Back Up Frequency and Control

Lastly, one of the biggest drawbacks of relying on Microsoft’s built-in backup tools is that you lack the ability to dial in your backup system the way you may want or need. There are several rules to follow in order to be able to recover or restore files in Microsoft 365. For instance, it’s strongly recommended that you save your documents in the cloud, both for syncing purposes and to enable things like Version History. But, if you delete an online-only file, it doesn’t go to your Recycle Bin, which means there’s no way to recover it. 

And there are limits to the maximum number of versions saved when using Version History, the period of time a file is recoverable, and so on. Some of the recovery periods even change depending on file type. For example, you can’t restore email after 30 days, but if you have an enterprise-level account, other file types are stored in your Recycle Bin or trash for up to 93 days.

Backups may not be created as often as you like, and the recovery process isn’t quick or easy. For example, Microsoft backs up your data every 12 hours and retains it for 14 days. If you need to restore files, you must contact Microsoft Support, and they will perform a “full restore,” overwriting everything, not just the specific information you need. The recovery process probably won’t meet your recovery time objective (RTO) requirements. 

Compliance and Cyber Insurance

Many people want more control over their backups than what Microsoft offers, especially for mission-critical business data. In addition to having clarity and control over the backup and recovery process, data storage and backups are often an essential element in supporting compliance needs, particularly if your business stores personally identifiable information (PII). Different industries and regions have different standards that need to be enforced, so it’s always a good idea to have your legal or compliance team involved in the conversation.

Similarly, with the increasing frequency of ransomware attacks, many businesses are adding cyber insurance. Cyber insurance provides protection for a variety of things, including legal fees, expenditures related to breaches, court-ordered judgments, and forensic post-breach review expenses. As a result, insurers often have stipulations about how and when you’re backing up to mitigate the fallout of business downtime.

Backing Up M365 With a Third Party Tool to the Cloud

Instead of the native Microsoft 365 backup tool, you could use one of the many popular backup applications that provide Microsoft 365 backup support, such as Veeam and MSP360.

Note that some of these applications include Microsoft 365 protection with their standard license, but it’s an optional add-on module with others. Be sure to check licensing and pricing before choosing an option.  

One thing to keep in mind with these tools: if you store backups on-premises, the backup data they generate can be vulnerable to local disasters like fire or earthquakes and to cyberattacks. For example, if you keep backups on network attached storage (NAS) that doesn’t tier to the cloud, your data is not fully protected.

Backing your data up to the cloud puts a copy off-site and geographically distant from your production data, so it’s better protected from things like natural disasters. When you’re choosing a cloud storage provider, make sure you check out where they store their data—if their data center is just down the road, then you’ll want to pick a different region. 

Backblaze B2 + Microsoft 365

Backblaze B2 Cloud Storage is reliable, affordable, and secure backup cloud storage, and it integrates seamlessly with the third-party applications listed above for backing up Microsoft 365.

Check out our Help Center for Quick-Start Guides from partners like Veeam and MSP360.

Start backing up your Microsoft 365 data to Backblaze B2 today.

Protect Your M365 Data for Peace of Mind

Whether you are a business professional or an IT director, your goal is to protect your company data. Backing up your Microsoft 365 data to the cloud helps you meet your RTO goals and better protects you against various threats.

Relying on Microsoft 365 native tools is inefficient and slow, which means you could blow your RTO targets. Backing up to the cloud allows you to meet retention requirements, ensuring that you retain the data you need for as long as required without destroying your operational budget.

Your business-critical data is too important to trust to a native backup tool that doesn’t meet your needs. In the event of a catastrophic situation, you need complete control and quick access to all your files from a specific point in time. Backing your Microsoft 365 data up to the cloud gives you more control, more freedom, and better protection. 

From Response to Recovery: Developing a Cyber Resilience Framework
https://www.backblaze.com/blog/from-response-to-recovery-developing-a-cyber-resilience-framework/ | June 6, 2023

Cyber resilience is an iterative, process-driven model designed to help you lower cyber risk. Let’s talk about how it can improve your business’ cybersecurity.


If you’re responsible for securing your company’s data, you’re likely well-acquainted with the basics of backups. You may be following the 3-2-1 rule and may even be using cloud storage for off-site backup of essential data.

But there’s a newer model built on iterative process improvement to strengthen business continuity, and it’s called cyber resilience. What is cyber resilience and why does it matter to your business? That’s what we’ll talk about today.

Join Us for Our Upcoming Webinar

Learn more about how to strengthen your organization’s cyber resilience by protecting systems, responding to incidents, and recovering with minimal disruption at our upcoming webinar “Build Your Company’s Cyber Resilience: Protect, Respond, and Recover from Security Incidents” on Friday, June 9 at 10 a.m. PT/noon CT.


Plus, see a demo of Instant Business Recovery, an on-demand, fully managed disaster recovery as a service (DRaaS) solution that works seamlessly with Veeam. Deploy and recover via a simple web interface or a phone call to instantly begin recovering critical servers and Veeam backups.

The Case for Cyber Resilience

The advance of artificial intelligence (AI) technologies, geopolitical tensions, and the ever-present threat of ransomware have all fundamentally changed the approach businesses must take to data security. In fact, citing the increased risk of cyberattacks and threats to critical infrastructure, the White House has made cybersecurity a priority by announcing a new national cybersecurity strategy. And, according to the World Economic Forum’s Global Cybersecurity Outlook 2023, business continuity (67%) and reputational damage (65%) concern organization leaders more than any other cyber risk.

Cyber resilience assumes that it’s not a question of if a security incident will occur, but when.

Being cyber resilient means that a business is able to not only identify threats and protect against them, but also withstand attacks as they’re happening, respond effectively, and bounce back better—so that the business is better fortified against future incidents. 

What Is Cyber Resilience?

Cyber resilience is ultimately a holistic and continuous view of data protection; it implies that businesses can build more robust security practices, embed those throughout the organization, and put processes into place to learn from security threats and incidents in order to continuously shore up defenses. In the cyber resilience model, improving data security is no longer a finite series of checkbox items; it is not something that is ever “done.”

Unlike common backup strategies like 3-2-1 or grandfather-father-son, which are well defined and understood, cyber resilience has no singular model. The National Institute of Standards and Technology (NIST) defines cyber resiliency as the ability to anticipate, withstand, recover from, and adapt to incidents that compromise systems. You’ll often see the cyber resilience model depicted in a circular fashion because it is a cycle of continuous improvement. While cyber resilience frameworks may vary slightly from one another, they all typically focus on similar stages, including:

  • Identify: Stay informed about emerging security threats, especially those that your systems are most vulnerable to. Share information throughout the organization when employees need to install critical updates and patches. 
  • Protect: Ensure systems are adequately protected with cybersecurity best practices like multi-factor authentication (MFA), encryption at rest and in transit, and by applying the principle of least privilege. For more information on how to shore up your data protection, including data protected in cloud storage, check out our comprehensive checklist on cyber insurance best practices. Even if you’re not interested in cyber insurance, this checklist still provides a thorough resource for improving your cyber resilience.
  • Detect: Proactively monitor your network and system to ensure you can detect any threats as soon as possible.
  • Respond and Recover: Respond to incidents in the most effective way and ensure you can sustain critical business operations even while an incident is occurring. Plan your recovery in advance so your executive and IT teams are prepared to execute on it when the time comes.
  • Adapt: This is the key part. Run postmortems to understand what happened, what worked and what didn’t, and how it can be prevented in the future. This is how you truly build resilience.
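
To make the “Protect” and “Detect” stages concrete for backup storage, here’s a small sketch that audits an S3 compatible bucket, such as a Backblaze B2 bucket, for default server-side encryption and Object Lock. The endpoint, credentials, and bucket name are placeholders; a real monitoring setup would run checks like this on a schedule and alert on failures.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials for a B2 S3 compatible bucket.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_APPLICATION_KEY",
)

BUCKET = "my-backup-bucket"  # hypothetical bucket name

def has_default_encryption(bucket: str) -> bool:
    """True if the bucket reports a default server-side encryption config."""
    try:
        s3.get_bucket_encryption(Bucket=bucket)
        return True
    except ClientError:
        return False

def has_object_lock(bucket: str) -> bool:
    """True if Object Lock is enabled on the bucket."""
    try:
        cfg = s3.get_object_lock_configuration(Bucket=bucket)
        return cfg["ObjectLockConfiguration"]["ObjectLockEnabled"] == "Enabled"
    except ClientError:
        return False

print(f"encryption at rest: {has_default_encryption(BUCKET)}")
print(f"object lock:        {has_object_lock(BUCKET)}")
```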

Why Is Cyber Resilience Important?

Traditionally, IT leaders have excelled at thinking through backup strategy, and more and more IT administrators understand the value of next level techniques like using Object Lock to protect copies of data from ransomware. But, it’s less common to give attention to creating a disaster recovery (DR) plan, or thinking through how to ensure business continuity during and after an incident. 

In other words, we’ve been focusing too much on the time before an incident occurs and not enough time on what to do during and after an incident. Consider the zero trust principle, which assumes that a breach is happening and it’s happening right now: taking such a viewpoint may seem negative, but it’s actually a proactive, not reactive, way to increase your business’ cyber resilience. When you assume you’re under attack, your responsibility is to prove you’re not, which means actively monitoring your systems—and if you happen to discover that you are under attack, your cybersecurity readiness measures kick in.

How Is Cyber Resilience Different From Cybersecurity?

Cybersecurity is a set of practices on what to do before an incident occurs. Cyber resilience asks businesses to think more thoroughly about recovery processes and what comes after. Hence, cybersecurity is a component of cyber resilience, but cyber resilience is a much bigger framework through which to think about your business.

How Can I Improve My Business’ Cyber Resilience?

Besides establishing a sound backup strategy and following cybersecurity best practices, the biggest improvement that data security leaders can make is likely in helping the organization to shift its culture around cyber resilience.

  • Reframe cyber resilience. It is not solely a function of IT. Ensuring business continuity in the face of cyber threats can and should involve operations, legal, compliance, finance teams, and more.
  • Secure executive support now. Don’t wait until an incident occurs. Consider meeting on a regular basis with stakeholders to inform them about potential threats. Present if/then scenarios in terms that executives can understand: impact of risks, potential trade-offs, how incidents might affect customers or external partners, expected costs for mitigation and recovery, and timelines.
  • Practice your disaster recovery scenarios. Your business continuity plans should be run as fire drills. Ensure you have all stakeholders’ emergency/after hours contact information. Run tabletop exercises with any teams that need to be involved and conduct hypothetical retrospectives to determine how you can respond more efficiently if a given incident should occur.

It may seem overwhelming to try and adopt a cyber resiliency framework for your business, but you can start to move your organization in this direction by helping your internal stakeholders first shift their thinking. Acknowledging that a cyber incident will occur is a powerful way to realign priorities and support for data security leaders, and you’ll find that the momentum behind the effort will naturally help advance your security agenda.

Cyber Resilience Resources

Interested in learning more about how to improve business cyber resilience? Check out the free resources available on the Backblaze blog.

Looking for Support to Help Achieve Your Cyber Resilience Goals?

Backblaze provides end-to-end security and recovery solutions to ensure you can safeguard your systems with enterprise-grade security, immutability, and options for redundancy, plus fully-managed, on-demand disaster recovery as a service (DRaaS)—all at one-fifth the cost of AWS. Get started today or contact Sales for more information on B2 Reserve, our all-inclusive capacity-based pricing that includes premium support and no egress fees.

A Cyber Insurance Checklist: Learn How to Lower Risk to Better Secure Coverage
https://www.backblaze.com/blog/a-cyber-insurance-checklist-learn-how-to-lower-risk-to-better-secure-coverage/ | May 18, 2023

Shopping for cyber insurance? Set yourself up for success with this readiness checklist.


If your business is looking into cyber insurance to protect your bottom line against security incidents, you’re in good company. The global market for cybersecurity insurance is projected to grow from $11.9 billion in 2022 to $29.2 billion by 2027.

But you don’t want to go into buying cybersecurity insurance blind. We put together this cyber insurance readiness checklist to help you strengthen your cyber resilience stance in order to better secure a policy and possibly a lower premium. (And even if you decide not to pursue cyber insurance, simply following some of these best practices will help you secure your company’s data.)

What Is Cyber Insurance?

Cyber insurance is a specialty insurance product that is useful for any size business, but especially those dealing with large amounts of data. Before you buy cyber insurance, it helps to understand some fundamentals. Check out our post on cyber insurance basics to get up to speed.

Once you understand the basic choices available to you when securing a policy, or if you’re already familiar with how cyber insurance works, read on for the checklist.

Cyber Insurance Readiness Checklist

Cybersecurity insurance providers use their questionnaire and assessment period to understand how well-situated your business is to detect, limit, or prevent a cyber attack. They have requirements, and you want to meet those specific criteria to be covered at the most reasonable cost.

Your business is more likely to receive a lower premium if your security infrastructure is sound and you have disaster recovery processes and procedures in place. Though each provider has their own requirements, use the checklist below to familiarize yourself with the kinds of criteria a cyber insurance provider might look for. Any given provider may not ask about or require all these precautions; these are examples of common criteria. Note: Checking these off means your cyber resilience score is attractive to providers, though not a guarantee of coverage or a lower premium.

General Business Security

  • A business continuity/disaster recovery plan that includes a formal incident response plan is in place.
  • There is a designated role, group, or outside vendor responsible for information security.
  • Your company has a written information security policy.
  • Employees must complete social engineering/phishing training.
  • You set up antivirus software and firewalls.
  • You monitor the network in real-time.
  • Company mobile computing devices are encrypted.
  • You use spam and phishing filters for your email client.
  • You require two-factor authentication (2FA) for email, remote access to the network, and privileged user accounts.
  • You have an endpoint detection and response system in place.

Cloud Storage Security

  • Your cloud storage account is 2FA enabled. Note: Backblaze accounts support 2FA via SMS or via authentication apps using time-based one-time passwords (TOTP).
  • You encrypt data at rest and in transit. Note: Backblaze B2 provides server-side encryption (encryption at rest), and many of our partner integration tools, like Veeam, MSP360, and Archiware, offer encryption in transit.
  • You follow the 3-2-1 or 3-2-1-1-0 backup strategies and keep an air-gapped copy of your backup data (that is, a copy that’s not connected to your network).
  • You run backups frequently. You might consider implementing a grandfather-father-son strategy for your cloud backups to meet this requirement.
  • You store backups off-site and in a geographically separate location. Note: Even if you keep a backup off-site, your cyber insurance provider may not consider this secure enough if your off-site copy is in the same geographic region or held at your own data center.
  • Your backups are protected from ransomware with Object Lock for data immutability (see the sketch below).
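
To illustrate those last two items, here’s a minimal sketch of uploading a backup object to Backblaze B2 with an Object Lock retention date over the S3 compatible API. It assumes the bucket was created with Object Lock enabled; the endpoint, names, and 30-day window are placeholders, not recommendations.

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # placeholder region
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_APPLICATION_KEY",
)

# Until this date passes, the object cannot be modified or deleted.
retain_until = datetime.now(timezone.utc) + timedelta(days=30)

with open("weekly-full.tar", "rb") as backup:  # hypothetical backup file
    s3.put_object(
        Bucket="my-backup-bucket",     # must be created with Object Lock enabled
        Key="backups/weekly-full.tar",
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```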

AcenTek Adopts Cloud for Cyber Insurance Requirement

Learn how Backblaze customer AcenTek secured their data with B2 Cloud Storage to meet their cyber insurance provider’s requirement that backups be secured in a geographically distanced location.

When you add features like server-side encryption (SSE), 2FA, and Object Lock to your backup security, insurance companies know you take data security seriously.

Cyber insurance provides the peace of mind that, when your company is faced with a digital incident, you will have access to resources with which to recover. And there is no question that by increasing your cybersecurity resilience, you’re more likely to find an insurer with the best coverage at the right price.

Ultimately, it’s up to you to ensure you have a robust backup strategy and security protocols in place. Even if you hope to never have to access your backups (because that might mean a security breach), it’s always smart to consider how fast you could restore your data if you needed to. Hot storage gives you a faster recovery time objective (RTO), without the delays seen with cold storage like Amazon Glacier. When you need to recover data quickly, such as in a ransomware attack or cybersecurity breach, each moment you don’t have your data means more time your business is not online—and that’s expensive. And, with Backblaze B2 Cloud Storage offering hot cloud storage at cold storage prices, you can afford to store all your data for as long as you need—at one-fifth the price of AWS.

Get Started With Backblaze

Get started today with pay-as-you-go pricing, or contact our Sales Team to learn more about B2 Reserve, our all-inclusive, capacity-based bundles starting at 20TB.

How to Use Veeam’s V12 Direct-to-Object Storage Feature
https://www.backblaze.com/blog/how-to-use-veeams-v12-direct-to-object-storage-feature/ | May 11, 2023

When Veeam released its direct-to-object storage feature in v12, it expanded the way enterprises can use cloud and on-premises storage. Let’s talk about how you might want to adjust your cloud and on-premises storage strategy.


If you already use Veeam, you’re probably familiar with using object storage, typically in the cloud, as your secondary repository using Veeam’s Scale-Out Backup Repository (SOBR). But Veeam v12, released on February 14, 2023, introduced a new direct-to-object storage feature that expands the way enterprises can use cloud storage and on-premises object storage for data protection.

Today, I’m talking through some specific use cases as well as the benefits of the direct-to-object storage feature, including fortifying your 3-2-1 backup strategy, ensuring your business is optimizing your cloud storage, and improving cyber resilience.

Meet Us at VeeamON

We hope to see you at this year’s VeeamON conference. Here are some highlights you can look forward to:

  • Check out our breakout session “Build a DRaaS Offering at No Extra Cost” on Tuesday, May 23, 1:30 p.m. ET to create your affordable, right-sized disaster recovery plan.
  • Join our Miami Beach Pub Crawl with phoenixNAP Tuesday, May 23 at 6 p.m. ET.
  • Come by the Backblaze booth for demos, swag, and more. Don’t forget to book your meeting time.

The Basics of Veeam’s Direct-to-Object Storage

Veeam’s v12 release added the direct-to-object storage feature that allows you to add object storage as a primary backup repository. This object storage can be an on-premises object storage system like Pure Storage or Cloudian or a cloud object storage provider like Backblaze B2 Cloud Storage’s S3 compatible storage. You can configure the job to run as often as you would like, set your retention policy, and configure all the other settings that Veeam Backup & Replication provides.

Prior to v12, you had to use Veeam’s SOBR to save data to cloud object storage. Setting up the SOBR requires you to first add a local storage component, called your Performance Tier, as a primary backup repository. You can then add a Capacity Tier where you can copy backups to cloud object storage via the SOBR. Your Capacity Tier can be used for redundancy and disaster recovery (DR) purposes, or older backups can be completely off-loaded to cloud storage to free up space on your local storage component.

Both methods accomplish the same goal of getting your backups into cloud storage, but with the direct-to-object feature, you no longer have to first land your backups in the Performance Tier before sending them to the cloud.

Why Use Cloud Object Storage With Veeam?

On-premises object storage systems can be a great resource for storing data locally and achieving the fastest recoveries, but they’re expensive, especially if you’re maintaining capacity to store multiple copies of your data, and they’re still vulnerable to on-site disasters like fire, flood, or tornado. Cloud storage allows you to keep a backup copy in an off-site, geographically distanced location for DR purposes.

Additionally, while local storage will provide the fastest recovery time objective (RTO), cloud object storage can be effective in the case of an on-premises disaster as it serves the dual purpose of protecting your data and being off-site.

To be clear, the addition of direct-to-object storage doesn’t mean you should immediately abandon your SOBR jobs or your on-premises devices. The direct-to-object storage feature gives you more options and flexibility, and there are a few specific use cases where it works particularly well, which I’ll get into later.

How to Use Veeam’s Direct-to-Object Storage Feature

With v12, you can now use Veeam’s direct-to-object storage feature in the Performance Tier, the Capacity Tier, or both. To understand how to use the direct-to-object storage feature to its full potential, you need to understand the implications of using object storage in your different tiers. I’ll walk through what that means.

Using Object Storage in Veeam’s Performance Tier

In earlier versions of Veeam’s backup software, the SOBR required the Performance Tier to be an on-premises storage device like a network attached storage (NAS) device. V12 changed that. You can now use an on-premises system or object storage, including cloud storage, as your Performance Tier.

So, why would you want to use cloud object storage, specifically Backblaze B2, as your Performance Tier?

  • Scalability: With cloud object storage as your Performance Tier, you no longer have to worry about running out of storage space on your local device.
  • Immutability: By enabling immutability on your Veeam console and in your Backblaze B2 account (using Object Lock), you can prevent your backups from being corrupted by a ransomware network attack like they might be if your Performance Tier was a local NAS.
  • Security: By setting cloud storage as your Performance Tier in the SOBR, you remove the threat of your backups being affected by a local disaster. With your backups safely protected off-site and geographically distanced from your primary business location, you can rest assured they are safe even if your business is affected by a natural disaster.

Understandably, some IT professionals prefer to keep on-premises copies of their backups because they offer the shortest RTO, but for many organizations, the pros of using cloud storage in the Performance Tier can outweigh the slightly longer RTO.

Using Object Storage in the Performance AND Capacity Tiers

If you’re concerned about overreliance on cloud storage but eager to eliminate unwieldy, expensive, space-consuming physical storage appliances, consider that Veeam v12 allows you to set cloud object storage as both your Performance and Capacity Tiers, adding redundancy to ease your worries.

For instance, you could follow this approach:

  1. Create a Backblaze B2 Bucket in one region and set that as your primary repository using the SOBR.
  2. Send your Backup Jobs to that bucket (and make it immutable) as often as you would like.
  3. Create a second Backblaze B2 account with a bucket in a different region, and set it as your secondary repository.
  4. Create Backup Copy Jobs to replicate your data to that second region for added redundancy.

This may ease your concerns about using the cloud as the sole location for your backup data: having two copies of your data in geographically disparate regions satisfies the 3-2-1 rule, since, even though you’re using one cloud storage service, the two backup copies of your data are kept in different locations. (A sketch of the cross-region copy idea follows the refresher below.)

Refresher: What Is the 3-2-1 Backup Strategy?

A 3-2-1 strategy means having at least three total copies of your data, two of which are local but on different media, and at least one off-site copy (in the cloud).
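
Veeam’s Backup Copy Jobs handle the cross-region copy for you, but purely to illustrate the dual-region idea, here’s a hedged sketch that streams objects from a bucket in one Backblaze B2 region to a bucket in another using two S3 clients. The endpoints, credentials, and bucket names are placeholders for two separate accounts and regions.

```python
import boto3

# One client per region; all endpoints and credentials are placeholders.
src = boto3.client(
    "s3", endpoint_url="https://s3.us-west-004.backblazeb2.com",
    aws_access_key_id="WEST_KEY_ID", aws_secret_access_key="WEST_APP_KEY")
dst = boto3.client(
    "s3", endpoint_url="https://s3.eu-central-003.backblazeb2.com",
    aws_access_key_id="EU_KEY_ID", aws_secret_access_key="EU_APP_KEY")

SRC_BUCKET, DST_BUCKET = "veeam-backups-west", "veeam-backups-eu"  # hypothetical

# Stream every object from the source region to the destination region.
paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SRC_BUCKET):
    for obj in page.get("Contents", []):
        body = src.get_object(Bucket=SRC_BUCKET, Key=obj["Key"])["Body"]
        dst.upload_fileobj(body, DST_BUCKET, obj["Key"])
        print(f"copied {obj['Key']}")
```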

Setting Up Veeam’s Direct-to-Object Storage Feature with Backblaze

For a comprehensive walk-through of setting up Veeam’s direct-to-object storage feature with Backblaze B2 Cloud Storage, follow the steps in our tutorial video.

Use Cases for Veeam’s Direct-to-Object Storage Feature

Now that you know how to use Veeam’s direct-to-object storage feature, you might be wondering what it’s best suited to do. There are a few use cases where Veeam’s direct-to-object storage feature really shines, including:

  • In remote offices
  • For NAS backup
  • For end-to-end immutability
  • For Veeam Cloud and Service Providers (VCSP)

Using Direct-to-Object Storage in Remote Offices

The new functionality works well to support distributed and remote work environments.

Veeam had the ability to back up remote offices in v11, but it was unwieldy. When you wanted to back up the remote office, you had to back up the remote office to the main office, where the primary on-premises instance of Veeam Backup & Replication is installed, then use the SOBR to copy the remote office’s data to the cloud. This two-step process puts a strain on the main office network. With direct-to-object storage, you can still use a SOBR for the main office, and remote offices with smaller IT footprints (i.e. no on-premises device on which to create a Performance Tier) can send backups directly to the cloud.

If the remote office ever closes or suffers a local disaster, you can bring up its virtual machines (VMs) at the main office and get back in business quickly.

Using Direct-to-Object Storage for NAS Backup

NAS devices are often used as the Performance Tier for backups in the SOBR, and a business using a NAS may be just as likely to be storing its production data on the same NAS. For instance, a video production company might store its data on a NAS because it likes how easily a NAS incorporates into its workflows. Or a remote office branch may be using a NAS to store its data and make it easily accessible to the employees at that location.

With v11 and earlier versions, your production NAS had to be backed up to a Performance Tier and then to the cloud. And, with many Veeam users utilizing a NAS as their Performance Tier, this meant you had a NAS backing up to… another NAS, which made no sense.

For media and entertainment professionals in the field or IT administrators at remote offices, having to back up the production NAS to the main office (wherever that is located) before sending it to the cloud was inconvenient and unwieldy.

With v12, your production NAS can be backed up directly to the cloud using Veeam’s direct-to-object storage feature.

Direct-to-Object Storage for End-to-End Immutability

As I mentioned, previous versions of Veeam required you to use local storage like a NAS as the Performance Tier in your SOBR, but that left your data vulnerable to security attacks. Now, with direct-to-object storage functionality, you can achieve end-to-end immutability. Here’s how:

  • In the SOBR, designate an on-premises appliance that supports immutability as your primary repository (Performance Tier). Cloudian and Pure Storage are popular names to consider here.
  • Set cloud storage like Backblaze B2 as your secondary repository (Capacity Tier).
  • Enable Object Lock for immutability in your Backblaze B2 account and set the date of your lock.

With this setup, you check a lot of boxes:

  • You fulfill a 3-2-1 backup strategy.
  • Both your local data and your off-site data are protected from deletion, encryption, or modification.
  • Your infrastructure is provisioned for the fastest RTO with your local storage.
  • You’ve also fully protected your data—including your local copy—from a ransomware attack.
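
If you’d rather not set a retention date on every upload, the S3 API also supports a bucket-level default. As a sketch, assuming a B2 bucket created with Object Lock enabled and using placeholder names, this applies a 30-day compliance-mode default so new objects are immutable on arrival. Note that Veeam manages its own immutability settings for the repositories it writes to, so check your integration’s guidance before layering a bucket default on top.

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # placeholder region
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_APPLICATION_KEY",
)

# Every object written to this bucket now defaults to 30 days of immutability.
s3.put_object_lock_configuration(
    Bucket="veeam-capacity-tier",  # hypothetical Object Lock-enabled bucket
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```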

Immutability for NAS Data in the Cloud

Backing up your NAS straight to the cloud with Veeam’s direct-to-object storage feature means you can enable immutability using the Veeam console and Object Lock in Backblaze B2. Few NAS devices natively support immutability, so using Veeam and B2 Cloud Storage to back up your NAS offers all the benefits of secure, off-site backup plus protection from ransomware.

Direct-to-Object Storage for VCSPs

The direct-to-object storage feature also works well for VCSPs. It changes how VCSPs use Cloud Connect, Veeam’s offering for service partners. A VCSP can send customer backups straight to the cloud instead of first sending them to the VCSP’s own systems.

Veeam V12 and Cyber Resiliency

When it comes to protecting your data, ultimately, you want to make the decision that best meets your business continuity and cyber resilience requirements. That means ensuring you not only have a sound backup strategy, but that you also consider what your data restoration process will look like during an active security incident (because a security incident is more likely to happen than not).

Veeam’s direct-to-object storage feature gives you more options for establishing a backup strategy that meets your RTO and DR requirements while also staying within your budget and allowing you to use the most optimal and preferred kind of storage for your use case.

Veeam + Backblaze: Now Even Easier

Get started today with pay-as-you-go cloud storage at $6/TB per month. Or contact your favorite reseller, like CDW or SHI, to purchase Backblaze via B2 Reserve, our all-inclusive, capacity-based bundles.

How Long Should You Keep Backups?
https://www.backblaze.com/blog/how-long-should-you-keep-backups/ | April 25, 2023

When you’re designing your backup strategy, an important foundational question is how long you need to retain your backups. Let’s talk about some of the things that might help you answer that question.


You know you need to back up your data. Maybe you’ve developed a backup strategy and gotten the process started, or maybe you’re still in the planning phase. Now you’re starting to wonder: how long do I need to keep all these backups I’m going to accumulate? It’s the right question to ask, but the truth is there’s no one-size-fits-all answer.

How long you keep your backups will depend on your IT team’s priorities, and will include practical factors like storage costs and the operational realities that define the usefulness of each backup. Highly regulated industries like banking and healthcare have even more challenges to consider on top of that. With all that in mind, here’s what you need to know to determine how long you should keep your backups.

First Things First: You Need a Retention Policy

If you’re asking how long you should keep your backups, you’re already on your way to designing a retention policy. Your organization’s retention policy is the official protocol that will codify your backup strategy from top to bottom. The policy should not just outline what data you’re backing up and for how long, but also explain why you’ve determined to keep it for that length of time and what you plan to do with it beyond that point.

Practically speaking, the decision about how long to keep your backups boils down to a balancing act between storage costs and operational value. You need to understand how long your backups will be useful in order to determine when it’s time to replace or dispose of them; keeping backups past their viability leads to both unnecessary spend and the kind of complexity that breeds risk.

Backup vs. Archive

Disposal isn’t the only option when a backup ages. Sometimes it’s more appropriate to archive data as a long-term storage option. As your organization’s data footprint expands, it’s important to determine how you interact with different types of data to make the best decisions about how to safeguard it (and for how long).

While backups are used to restore data in case of loss or damage, or to return a system to a previous state, archives are more often used to off-load data from faster or more frequently accessed storage systems.

  • Backup: A data recovery strategy for when loss, damage, or disaster occurs.
  • Archive: A long-term or permanent data retrieval strategy for data that is not as likely to be accessed, but still needs to be retained.

Knowing archiving is an option can impact how long you decide to keep your backups. Instead of deleting them completely, you can choose to move them from short-term storage into a long-term archive. For instance, you could choose to keep more recent backups on premises, perhaps stored on a local server or network attached storage (NAS) device, and move your archives to cloud storage for long-range safekeeping.

How you choose to store your backups can also be a factor into your decision on how long to keep them. Moving archives to cloud storage is more convenient than other long-term retention strategies like tape. Keeping archives in cloud storage could allow you to keep that data for longer simply because it’s less time-consuming than maintaining tape archives, and you also don’t have to worry about the deterioration of tape over time.

Putting your archive in cloud storage can help manage the cost side of the equation, too, but only if handled carefully. While cloud storage is typically cheaper than tape archives in the long run, you might save even more by moving your archives from hot to cold storage. For most cloud storage providers, cold storage is generally a cheaper option if you’re talking dollars per GB stored. But, it’s important to remember that retrieving data from cold storage can incur high egress fees and take 12–48 hours. When you need to recover data quickly, such as in a ransomware attack or cybersecurity breach, each moment you don’t have your data means more time your business is not online—and that’s expensive.

How One School District Balances Storage Costs and Retention

With 200 servers and 125TB of data, Bethel School District outside of Tacoma, Washington needed a scalable cloud storage solution for archiving server backups. They’d been using Amazon S3, but high costs were straining their budget—so much so that they had to shorten needed retention periods.

Moving to Backblaze produced savings of 75%, and Backblaze’s flat pricing structure gives the school district a predictable invoice, eliminating the guesswork they anticipated from other solutions. They’re also planning to reinstate a longer retention period for better protection from ransomware attacks, as they no longer need to control spiraling Amazon S3 costs.

Next Order of Business: The Structure of Your Backup Strategy

The types of backups you’re storing will also factor into how long you keep them. There are many different ways to structure a secure backup strategy, and it’s likely that your organization will interact with each kind of backup differently. Some backup types need to be stored for longer than others to do their job, and those decisions have a lot to do with how the various types interact to form an effective strategy.

The Basics: 3-2-1

The 3-2-1 backup strategy is the widely accepted industry minimum standard. It dictates keeping three copies of your data: two stored locally (on two different types of devices) and one stored off-site. This diversified backup strategy covers all the bases; it’s easy to access backups stored on-site, while off-site (and often offline or immutable) backups provide security through redundancy. It’s probably a good idea to have a specific retention policy for each of your three backups—even if you end up keeping your two locally stored files for the same length of time—because each copy serves a different purpose in your broader backup strategy.

Full vs. Incremental Backups

While designing your backup strategy, you’ll also need to choose how you’re using full versus incremental backups. Performing full backups each time (like completely backing up a work computer daily) requires huge amounts of time, bandwidth, and space, which all inflate your storage usage at the end of the day. Other options serve to increase efficiency and reduce your storage footprint.

  • Full backup: A complete copy of your data, starting from scratch either without any pre-existing backups or as if no other backup exists yet.
  • Incremental backup: A copy of any data that has been added or changed since your last full backup (or your last incremental backup).

When thinking about how long to keep your full backups, consider how far back you may need to completely restore a system. Many cyber attacks can go unnoticed for some time. For instance, you could learn that an employee’s computer was infected with malware or a virus several months ago, and you need to completely restore their system with a full backup. It’s not uncommon for businesses to keep full backups for a year or even longer. On the other hand, incremental backups may not need to be kept for as long because you can always just restore from a full backup instead.
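
To make the distinction concrete, here’s a toy sketch of the selection logic behind an incremental backup: it walks a directory tree and picks out only the files modified since the previous backup’s timestamp. Real backup tools track state in catalogs and handle deletions and renames; this only illustrates the idea, and the path and date are placeholders.

```python
import os
from datetime import datetime, timezone

def files_changed_since(root: str, last_backup: datetime) -> list[str]:
    """Return paths under `root` modified after `last_backup`."""
    cutoff = last_backup.timestamp()
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff:
                changed.append(path)
    return changed

# Everything that changed since the last full backup would go into
# the next incremental backup.
last_full = datetime(2023, 4, 1, tzinfo=timezone.utc)
for path in files_changed_since("/data", last_full):
    print(path)
```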

Grandfather-Father-Son Backups

Effectively combining different backup types into a cohesive strategy leads to a staggered, chronological approach that is greater than the sum of its parts. The grandfather-father-son system is a great example of this concept in action. Here’s an example of how it might work:

  1. Grandfather: A monthly full backup is stored either off-site or in the cloud.
  2. Father: Weekly full backups are stored locally or in a hot cloud storage solution.
  3. Son: Daily incremental backups are stored as a stopgap alongside father backups.

It makes sense that different types of backups will need to be stored for different lengths of time and in different places. You’ll need to make decisions about how long to keep old full backups (once they’ve been replaced with newer ones), for example. The type and the age of your data backups, along with their role in the broader context of your strategy, should factor into your determination about how long to keep them.
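
Here’s a small sketch of how a grandfather-father-son rule might decide which backup dates to retain. The windows (seven dailies, four Sunday weeklies, twelve first-of-the-month monthlies) are illustrative assumptions; your retention policy and any regulatory requirements should drive the real numbers.

```python
from datetime import date, timedelta

def gfs_keep(backups: list[date], today: date) -> set[date]:
    """Dates retained by an illustrative 7-daily/4-weekly/12-monthly policy."""
    keep = set()
    for d in backups:
        age = (today - d).days
        if age < 7:                          # son: the last week of dailies
            keep.add(d)
        elif age < 28 and d.weekday() == 6:  # father: Sunday fulls, four weeks
            keep.add(d)
        elif age < 365 and d.day == 1:       # grandfather: monthly fulls, a year
            keep.add(d)
    return keep

today = date(2023, 4, 25)
history = [today - timedelta(days=n) for n in range(120)]  # daily backups
for d in sorted(gfs_keep(history, today)):
    print(d)  # everything else is a candidate for pruning or archiving
```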

A Note on Minimum Storage Duration Policies

When considering cloud storage for your backups, it’s important to know that many providers have minimum storage duration policies: fees charged for data that is not kept in storage for a period of time defined by the provider, which can be anywhere from 30 to 180 days. These are essentially delete penalties—minimum retention fees apply not only to data that gets deleted from cloud storage but also to any data that is overwritten. Think about that in the context of the backup strategies we’ve just outlined: each time you create a new full backup and rotate out an old one, you’re overwriting data.

So if, for example, you choose a cloud storage provider with a 90-day minimum storage duration, and you keep your full backups for 60 days, you will be charged fees each time you overwrite or delete a backup. Some cloud storage providers, like Backblaze B2 Cloud Storage, do not have a minimum storage duration policy, so you do not have to let that influence how long you choose to keep backups. That kind of flexibility to keep, overwrite, and delete your data as often as you need is important to manage your storage costs and business needs without the fear of surprise bills or hidden fees.
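
The arithmetic behind these delete penalties is worth seeing once. As a sketch, assuming a hypothetical pricing model where early deletes are billed for the days remaining in the minimum at the normal storage rate (a common, but not universal, structure), here’s what deleting a 1TB backup at day 60 of a 90-day minimum might cost:

```python
def early_delete_fee(gb: float, price_per_gb_month: float,
                     days_stored: int, min_days: int) -> float:
    """Fee for deleting or overwriting data before the minimum duration.

    Assumes the provider bills the remaining days of the minimum at the
    normal storage rate; check your provider's actual policy.
    """
    remaining_days = max(0, min_days - days_stored)
    return gb * price_per_gb_month * (remaining_days / 30)

# Hypothetical: 1TB at $0.004/GB/month, deleted at day 60 of a 90-day
# minimum, so you're billed for 30 days of storage you never used.
print(f"${early_delete_fee(1000, 0.004, 60, 90):.2f}")  # -> $4.00
```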

Don’t Forget: Your Industry’s Regulations Can Tip the Scales

While weighing storage costs and operational needs is the fundamental starting point of any retention policy, it’s also important to note that many organizations face regulatory requirements that complicate the question of how long to keep backups. Governing bodies designed to protect both individuals and business interests often mandate that certain kinds of data be readily available and producible upon request for a set amount of time, and they require higher standards of data protection when you’re storing personally identifiable information (PII). Here are some examples of industries with their own unique data retention regulations:

  • Healthcare: Medical and patient data retention is governed by HIPAA rules, but how those rules are applied can vary from state to state.
  • Insurance: Different types of policies are governed by different rules in each state, but insurance companies do generally need to comply with established retention periods. More recently, companies have also been adding cyber insurance, which comes with its own set of requirements.
  • Finance: A huge web of legislation (like the Bank Secrecy Act, Electronic Funds Transfer Act, and more) mandates how long banking and financial institutions must retain their data.
  • Education: Universities sit in an interesting space. On one hand, they store a ton of sensitive data about their students. They’re often public services, which means that there’s a certain amount of governmental regulation attached. They also store vast amounts of data related to research, and often have on-premises servers and private clouds to protect—and that’s all before you get to larger universities which have medical centers and hospitals attached. With all that in mind, it’s unsurprising that they’re subject to higher standards for protecting data.

Federal and regional legislation around general data security can also dictate how long a company needs to keep backups depending on where it does business (think GDPR, CCPA, etc.). So in addition to industry-specific regulations, your company’s primary geographic location—or your customers’ location—can also influence how long you need to keep data backups.

The Bottom Line: How Long You Keep Backups Will Be Unique to Your Business

The answer to how long you need to keep your backups has everything to do with the specifics of your organization. The industry you’re in, the type of data you deal with, and the structure of your backup strategy should all combine to inform your final decision. And as we’ve seen, you’ll likely wind up with multiple answers to the question pertaining to all the different types of backups you need to create and store.

How To Do Bare Metal Backup and Recovery
https://www.backblaze.com/blog/how-to-do-bare-metal-backup-and-recovery/ | April 18, 2023

When you’re looking to restore a computer as quickly as possible, backing up files alone is not enough. Bare metal recovery leverages image-based backups to restore everything on your computer, including your operating system, system files, programs, and more.


When you’re creating or refining your backup strategy, it’s important to think ahead to recovery. Hopefully you never have to deal with data loss, but any seasoned IT professional can tell you—whether it’s the result of a natural disaster or human error—data loss will happen.

With the ever-present threat of cybercrime and the use of ransomware, it is crucial to develop an effective backup strategy that also considers how quickly data can be recovered. Doing so is a key pillar of increasing your business’ cyber resilience: the ability to withstand and protect from cyber threats, but also bounce back quickly after an incident occurs. The key to that effective recovery may lie with bare metal recoveries.

In this guide, we will discuss what bare metal recovery is, its importance, the challenges of its implementation, and how it differs from other methods.

Creating Your Backup Recovery Plan

Your backup plan should be part of a broader disaster recovery (DR) plan that aims to help you minimize downtime and disruption after a disaster event.

A good backup plan starts with, at bare minimum, following the 3-2-1 rule. This involves having at least three copies of your data, two local copies (on-site) and at least one copy off-site. But it doesn’t end there. The 3-2-1 rule is evolving, and there are additional considerations around where and how you back up your data.

As part of an overall disaster recovery plan, you should also consider whether to use file and/or image-based backups. This decision will absolutely inform your DR strategy. And it leads to another consideration—understanding how to use bare metal recovery. If you plan to use bare metal recovery (and we’ll explain the reasons why you might want to), you’ll need to plan for image-based backups.

What Is Bare Metal Backup?

The term “bare metal” means a machine without an operating system (OS) installed on it. Fundamentally, that machine is “just metal”—the parts and pieces that make up a computer or server. A “bare metal backup” is designed so that you can take a machine with nothing else on it and restore it to your normal state of work. That means that the backup data has to contain the operating system (OS), user data, system settings, software, drivers, and applications, as well as all of the files. The terms image-based backups and bare metal backups are often used interchangeably to mean the process of creating backups of entire system data.

Bare metal backup is a favored method for many businesses because it ensures absolutely everything is backed up. This allows the entire system to be restored should a disaster result in total system failure. File-based backup strategies are, of course, very effective when just backing up folders and large media files, but when you’re talking about getting people back to work, a lot of man hours go into properly setting up workstations to interact with internal networks, security protocols, proprietary or specialized software, and so on. Since file-based backups do not capture the operating system and its settings, a file-only strategy is nearly obsolete for full-system protection in modern IT environments, and relying on it can put businesses at significant risk or add downtime in the event of business interruption.

How Does Bare Metal Backup Work?

Bare metal backups allow data to be moved from one physical machine to another, to a virtual server, from a virtual server back to a physical machine, or from a virtual machine to a virtual server—offering a lot of flexibility.

This is the recommended method for backing up preferred system configurations so they can be transferred to other machines. The operating system and its settings can be quickly copied from a machine that is experiencing IT issues or has failing hardware, for example. Additionally, with a bare metal backup, virtual servers can also be set up very quickly instead of configuring the system from scratch.
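
Dedicated tools like Veeam or MSP360 create these images for you and handle consistency, drivers, and incrementals. Purely to demystify what an “image” is, here’s a bare-bones sketch that reads an entire block device, compresses it, and writes an image file you could then ship to cloud storage. The device path is a placeholder, the script needs root privileges, and the volume should be unmounted or quiesced first.

```python
import gzip
import shutil

DEVICE = "/dev/sda"                  # placeholder: the whole disk, including
IMAGE = "bare-metal-backup.img.gz"   # the OS, settings, programs, and files

# Stream the raw device into a compressed image file, chunk by chunk.
with open(DEVICE, "rb") as disk, gzip.open(IMAGE, "wb") as image:
    shutil.copyfileobj(disk, image, length=16 * 1024 * 1024)  # 16MiB chunks

print(f"wrote {IMAGE}")
```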

What Is Bare Metal Recovery (BMR) or Bare Metal Restore?

As the name suggests, bare metal recovery is the process of recovering the bare metal (image-based) backup. By launching a bare metal recovery, a bare metal machine will retrieve its previous operating system, all files, folders, programs, and settings, ensuring the organization can resume operations as quickly as possible.

How Does Bare Metal Recovery Work?

A bare metal recovery (or restore) works by recovering the image of a system that was created during the bare metal backup. The backup software can then reinstate the operating system, settings, and files on a bare metal machine so it is fully functional again.

This type of recovery is typically issued in a disaster situation when a full server recovery is required, or when hardware has failed.

Why Is BMR Important?

The importance of BMR depends on an organization’s recovery time objective (RTO), the metric for how quickly IT infrastructure can be brought back online following a data disaster. Because high-speed recovery is in most cases a necessity, many businesses use bare metal recovery as part of their backup recovery plan.

If an OS becomes corrupted or damaged and you do not have a sufficient recovery plan in place, then the time needed to reinstall it, update it, and apply patches can result in significant downtime. BMR allows a server to be completely restored on a bare metal machine to its exact settings and configured simply and quickly.

Another key factor for choosing BMR is to protect against cybercrime. If your IT team can pinpoint the time when a system was infected with malware or ransomware, then a restore can be executed to wipe the machine clean of any threats and remove the source of infection, effectively rolling the system back to a time when everything was running smoothly.

BMR’s flexibility also means that it can be used to restore a physical or virtual machine, or simply as a method of cloning machines for easier deployment in the future.

The key advantages of bare metal recovery (BMR) are:

  • Speed: BMR offers faster recovery speeds than if you had to reinstall your OS and run updates and patches. It restores every system element to its exact state as when it was backed up, from the layout of desktop icons to the latest software updates and patches—you do not have to rebuild it step by step.
  • Security: If a system is subjected to a ransomware attack or any other type of malware or virus, a bare metal restore allows you to safely erase an entire machine or system and restore from a backup created before the attack.
  • Simplicity: Bare metal recovery can be executed without installing any additional software on the bare machine.

BMR: Some Caveats

Like any backup and recovery method, some IT environments may be more suitable for BMR than others, and there are some caveats that an organization should be aware of before implementing such a strategy.

First, bare metal recovery can experience issues if the restore is being executed on a machine with dissimilar hardware. The reason for this is that the original operating system copy needs to load the correct drivers to match the machine’s hardware. Therefore, if there is no match, then the system will not boot.

Fortunately, Backblaze Partner integrations, like MSP360, have features that allow you to restore to dissimilar hardware with no issues. This is a key feature to look for when considering BMR solutions. Otherwise, you have to seek out a new machine that has the same hardware as the corrupted machine.

Second, there may be a reason for not wanting to run BMR, such as a minor data accident when a simple file/folder restore is more practical, taking less time to achieve the desired results. A bare metal recovery strategy is recommended when a full machine needs to be restored, so it is advised to include several different options in your backup recovery plan to cover all scenarios.

Bare Metal Recovery in the Cloud

An on-premises disaster disrupts business operations and can have catastrophic implications for your bottom line. And, if you’re unable to run your preferred backup software, performing a bare metal recovery may not even be an option. Backblaze has created a solution that draws data from Veeam Backup & Replication backups stored in Backblaze B2 Cloud Storage to quickly bring up an orchestrated combination of on-demand servers, firewalls, networking, storage, and other infrastructure in phoenixNAP’s bare metal cloud servers. This Instant Business Recovery (IBR) solution includes fully-managed, affordable 24/7 disaster recovery support from Backblaze’s managed service provider partner specializing in disaster recovery as a service (DRaaS).

IBR allows your business to spin up your entire environment, including the data from your Backblaze B2 backups, in the cloud. With this active DR site in the cloud, you can keep business operations running while restoring your on-premises systems. Recovery is initiated via a simple web form or phone call. Instant Business Recovery protects your business in the case of on-premises disaster for a fraction of the cost of typical managed DRaaS solutions. As you build out your business continuity plan, you should absolutely consider how to sustain your business in the case of damage to your local infrastructure; Instant Business Recovery allows you to begin recovering your servers in minutes to ensure you meet your RTO.

BMR and Cloud Storage

Bare metal backup and recovery should be a key part of any DR strategy. From moving operating systems and files from one physical machine to another, to transferring image-based backups from a virtual machine to a virtual server, it’s a tool that makes sense as part of any IT admin’s toolbox.

Your next question is where to store your bare metal backups, and cloud storage makes good sense. Even if you’re already keeping your backups off-site, it’s important for them to be geographically distanced in case your entire area experiences a natural disaster or outage. That takes more than just backing up to the cloud: it’s important to know where your cloud storage provider stores their data, for compliance standards, for speed of content delivery (if that’s a concern), and to ensure that you’re not unintentionally storing your off-site backup close to home.

Remember that these are critical backups you’ll need in a disaster scenario, so consider recovery time and expense when choosing a cloud storage provider. While it may seem more economical to use cold storage, it comes with long recovery times and high fees to recover quickly. Using always-hot cloud storage is imperative, both for speed and to avoid an additional expense in the form of a bill for egress fees after you’ve recovered from a cyberattack.

Host Your Bare Metal Backups in Backblaze B2 Cloud Storage

Backblaze B2 Cloud Storage provides S3 compatible, Object Lock-capable hot storage for one-fifth the cost of AWS and other public clouds—with no trade-off in performance.

Get started today, and contact us to support a customized proof of concept (PoC) for datasets of more than 50TB.
