Patrick Thomas, Author at Backblaze Blog | Cloud Storage & Cloud Backup

Welcome Chris Opat, Senior Vice President of Cloud Operations (August 15, 2023)

Backblaze is happy to announce that Chris Opat has joined our team as senior vice president of cloud operations. Chris will oversee the strategy and operations of the Backblaze global cloud storage platform.

What Chris Brings to Backblaze

Chris expands the company’s leadership bench, bringing more than 25 years of cloud and infrastructure experience to Backblaze.

Previously, Chris served as senior vice president leading platform engineering and operations at StackPath, a specialized provider in edge technology and content delivery. He also held leadership roles at CyrusOne, CompuCom, Cloudreach, and Bear Stearns/JPMorgan. Chris earned his Bachelor of Science degree in television and digital media production from Ithaca College.

Backblaze CEO Gleb Budman shared that Chris is a forward-thinking cloud leader with a proven track record of leading teams that are clever and bold in solving problems and creating best-in-class experiences for customers. His expertise and approach will be pivotal as more customers move to an open cloud ecosystem, and will help advance Backblaze’s cloud strategy as we continue to grow.

Chris’ Role as SVP of Cloud Operations

As SVP of Cloud Operations, Chris oversees cloud strategy, platform engineering, and technology infrastructure, enabling Backblaze to further scale capacity and improve performance to meet the needs of larger customers as we continue to move up-market.

Chris says of his new role at Backblaze:

Backblaze’s vision and mission resonate with me. I’m proud to be joining a company that is supporting customers and advocating for an open cloud ecosystem. I’m looking forward to working with the amazing team at Backblaze as we continue to scale with our customers and accelerate growth.

A Guide to Clouds: Object, File, and Block (January 4, 2023)

While people are quick to recommend the "cloud" for any business scenario involving data, you need to know which cloud is right for your scenario. Let's start with the differences between object, file, and block storage.

Editor’s Note

This post has been updated since it was originally published in 2020.

What is Cloud Storage?

At this point, most people would be able to capably explain what cloud storage is, more or less. But ask anyone to list and define the different types of cloud storage, and you’re likely to get some blank looks. Understanding the different types of cloud storage is essential to deciding which solution is right for your business.

Maybe you need to share content with a number of contributors, producers, or editors based around the world. Or possibly you have a huge, complex database of sales metrics you need to process or manipulate that is stressing your on-site capabilities. Or you might simply have data you need to archive.

Despite being a relatively simple concept, information about The Cloud (capital T, capital C) is often overrun with frustratingly unclear jargon. With that in mind, we’re going to take a look at the three primary types of cloud storage. Below, you’ll find a quick and easy-to-use field guide to the three basic types of cloud storage being used today: object, file, and block storage.

Answers to Big Questions

This article is part of a series of posts aimed at business leaders and entrepreneurs interested in using the cloud to scale their business without wasting millions in capital on infrastructure. Check out the other posts in the series.

The Three Types of Cloud Storage

Object Storage

In cloud storage, the definition of an “object” is pretty simple. An object is an assemblage of data paired with one unique identifier and an effectively unlimited amount of metadata.

Simple, right? Yeah, we thought not. Let’s break it down into its components to try to make it clearer.

The Data

The data that makes up an object could be anything—an advertising jingle’s audio file, the photo album from your company party, a 300-page software manual, or simply a related grouping of bits and bytes.

The Identifier

When data is added to object storage, it typically receives an identifier referred to as a Universally Unique Identifier (UUID) or a Globally Unique Identifier (GUID). These identifiers are 128-bit integers. In layman’s terms, the identifier—the “name” of the object—is simply a very large number. The space of possible values is so vast, in fact, that every identifier can safely be treated as unique.
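
To make this concrete, here’s a minimal Python sketch (using only the standard library’s uuid module, not any particular storage vendor’s API) showing that a freshly generated identifier is just a very large 128-bit number:

    import uuid

    # Generate a random 128-bit identifier (UUID version 4).
    object_id = uuid.uuid4()

    print(object_id)       # the familiar hyphenated hex form
    print(object_id.int)   # the same identifier expressed as an integer
    print(object_id.int.bit_length() <= 128)  # True: it always fits in 128 bits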

The Metadata

The third and final component of an object is its metadata—literally “the data about the data”—which can be any information used to classify or characterize the data in a particular object. Because metadata describes the contents of the data, it makes the object more easily searchable. This metadata could be the jingle’s name, a collection of the geographical coordinates where a set of digital pictures were taken, or the name of the author who wrote the user manual.

The Advantages of Object Storage

The primary advantage of object storage—and the reason it’s used by the majority of cloud storage providers—is that it enables the storage of massive amounts of unstructured data while still maintaining easy data accessibility. It achieves this thanks to its flat structure: by using GUIDs instead of the hierarchies characteristic of file storage or block storage, object storage allows for virtually unlimited scalability. In other words, by doing away with structure, there’s more room for data.

The higher level of accessibility is largely thanks to the metadata, which is infinitely customizable. Think of the metadata as a set of labels for your data. Because this metadata can be refined, rewritten, and expanded indefinitely, the objects in object storage can easily be reorganized and scaled based on different metadata criteria.

This last point is what makes object storage so popular for backup and archiving functions. Metadata’s unrestricted nature allows storage administrators to easily implement their own policies for data preservation, retention, and deletion, making it easier to protect data and create better disaster recovery strategies.
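
As an illustration of how metadata can drive a retention policy, here’s a hypothetical Python sketch. The objects, the retain_days key, and the dates are invented for the example; a real object store exposes metadata through its own API rather than plain dictionaries:

    from datetime import datetime, timedelta, timezone

    # Hypothetical objects with user-defined metadata attached at upload time.
    objects = [
        {"key": "party-photos-2019.zip", "uploaded": "2019-06-01", "retain_days": 365},
        {"key": "jingle-final.wav",      "uploaded": "2023-01-15", "retain_days": 3650},
    ]

    def expired(obj, now=None):
        """Return True if an object has outlived its retention window."""
        now = now or datetime.now(timezone.utc)
        uploaded = datetime.fromisoformat(obj["uploaded"]).replace(tzinfo=timezone.utc)
        return now - uploaded > timedelta(days=obj["retain_days"])

    # Objects whose metadata says they can be deleted under this policy.
    to_delete = [obj["key"] for obj in objects if expired(obj)]
    print(to_delete)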

The Primary Uses of Object Storage

The main use cases for object storage include:

  • Storage of unstructured data
  • Storage of large data sets
  • Storage of large quantities of media assets like video footage as an archive in place of local tape drives

The prime use cases for object storage generally include storing large amounts of data. For instance, if your business does a lot of production work in any medium, you probably need a lot of space to store your finished projects after their useful life is complete, but you probably also need access to the files in case you or a client need them again in the future.

Object storage is perfect for use cases that need a lot of space but also relatively fast access because the data doesn’t need to be highly structured. For example, Kanopy, a Netflix-like service for libraries, uses object storage to store 25,000+ videos that users can access on demand. Object storage serves as their application store for serving out videos via a content delivery network.

Object storage works great as an active archive as well. KLRU, the Austin Public Television station responsible for broadcasting the famous “Austin City Limits,” opted to migrate their 40+ year archive of footage into cloud storage. Object storage provided a cheap, but reliable, archive for all of their work. And their ability to organize the content with metadata meant they could easily distribute it to their network of licensees (or anyone else interested in using the content).

The scalability and flexibility of object storage have made it the go-to choice for many businesses transitioning to cloud solutions. That said, the relative complexity of the naming schema for the objects—that 128-bit identifier isn’t exactly user-friendly for most of us—and the metadata management approach can prove too complex or ill suited for certain use cases.

This often leads to the use of third-party software like Media Asset Managers (MAMs) and Digital Asset Managers (DAMs) that layer an organizational schema over the top of the object store.

File Storage

For administrators in need of a friendly user interface but smaller storage requirements—think millions of files, instead of billions—file storage might be the answer.

So what is file storage? Much like how files are stored on your computer, files in this schema are organized in folders, which are then arranged into directories and subdirectories in a hierarchical fashion. To access a file, users or machines only need the path from directory to subdirectory to folder to file.

Because all the data stored in such a system is already organized in a hierarchical directory tree, it’s easy to name, delete, or otherwise manipulate files without any additional interface. If you have used practically any operating system (whether Windows, macOS, or anything else), then you’re likely already familiar with these types of file and folder trees and are more than capable of working within them.
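
For example, here’s a minimal Python sketch of addressing a file by its path from directory to subdirectory to folder to file (the folder and file names are invented for illustration):

    from pathlib import Path

    # A hierarchical path: directory -> subdirectory -> folder -> file.
    manual = Path("projects") / "2023" / "documentation" / "user-manual.pdf"

    print(manual)           # projects/2023/documentation/user-manual.pdf (on Linux/macOS)
    print(manual.parent)    # the folder that contains the file
    print(manual.suffix)    # .pdf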

The Advantages of File Storage

The approachability of file storage is often seen as its primary advantage. But, using file storage in the cloud adds one key element: sharing. In cloud file storage, like on an individual computer, an administrator can easily set access as well as editing permissions across files and trees so that security and version control are far easier to manage. This allows for easy access, sharing, and collaboration.

The disadvantage of file storage systems, however, is that if you plan for your data to grow, there is a certain point at which the hierarchy and permissions will become complex enough to slow the system significantly.

The Use Cases for File Storage

Common use cases for file storage are:

  • Storage of files for an office or directory in a content repository
  • Storage of files in a small development or data center environment as a cost-effective option for local archiving
  • Storage of data that requires data protection and easy deployment

Generally speaking, discrete amounts of structured data work well in file storage systems. If this describes your organization’s data profile, and you need robust sharing, cloud file storage could be right for you. Specific examples would include businesses that require web-based applications where multiple users would need to manipulate files at the same time. In this case, a file storage system would allow them the access they need, while also clearly delineating who can make changes. Another example is data analytics operations, which often require multiple servers to modify multiple files at the same time. These requirements make file storage systems a good solution for that use case as well.

Now that you have a better idea of the differences between object and file storage, let’s take a look at block storage and its special use cases.

Block Storage

A lot of cloud-based enterprise workloads use block storage. In this type of system, data is broken up into pieces called blocks, and then stored across a system that can be physically distributed to maximize efficiency. Each block receives a unique identifier, which allows the storage system to put the blocks back together when the data they contain are needed.
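
Here’s a simplified Python sketch of that idea: splitting data into fixed-size blocks, giving each block an identifier, and reassembling them on demand. It illustrates the concept only, not how any particular block storage system is actually implemented:

    BLOCK_SIZE = 4096  # bytes; real systems use a variety of block sizes

    def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE) -> dict:
        """Break data into fixed-size blocks, each keyed by an identifier."""
        return {i: data[offset:offset + block_size]
                for i, offset in enumerate(range(0, len(data), block_size))}

    def reassemble(blocks: dict) -> bytes:
        """Put the blocks back together in order when the data is needed."""
        return b"".join(blocks[i] for i in sorted(blocks))

    original = b"example data " * 1000
    blocks = split_into_blocks(original)
    assert reassemble(blocks) == original  # round-trips cleanly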

The Advantages of Block Storage

A block storage system in the cloud is used in scenarios where it’s important to be able to quickly retrieve and manipulate data, with an operating system accessing these data points directly across block volumes.

Block storage also decouples data from user environments, allowing that data to be spread across multiple environments. This creates multiple paths to the data and allows the user to retrieve it quickly. When a user or application requests data from a block storage system, the underlying storage system reassembles the data blocks and presents the data to the user or application.

The primary disadvantages of block storage are its lack of metadata, which limits organizational flexibility, and its higher price and complexity—as compared to the other solutions we’ve discussed.

The Use Cases for Block Storage

Primary use cases for block storage are:

  • Storage of databases
  • Storage for RAID volumes
  • Storage of data for critical systems that impact business operations
  • Storage of data as file systems for operating systems and virtualization software

The relatively fast, reliable performance of block storage systems makes them the preferred technology for databases. For the same reason block storage works well for databases, it also provides good support for enterprise applications: for transaction-based business applications, block storage ensures users are serviced quickly and reliably. Virtual machine file systems, such as VMware’s VMFS, also tend to use block storage because of the way data is distributed across multiple volumes.

Making a Choice Between Different Types of Cloud Storage

So which cloud storage system is right for you? If you have a lot of data that members of a team need to access and manipulate regularly, block or file storage could be useful. Block storage works well for an organized collection of data that you can access quickly like a database. File storage is easy to manipulate directly without a custom-built interface. But if you need highly scalable storage for relatively unstructured data, that is where object storage shines. Whatever path you decide, now you have a sense of the use cases, advantages, and disadvantages of different storage types to guide your next step into the cloud storage ecosystem.

Welcoming Chief Human Resources Officer Robert Fitt to Backblaze (October 6, 2022)

Announcing our new Chief Human Resources Officer, Robert Fitt. Welcome to Backblaze!

Backblaze is happy to announce that Robert Fitt has joined our team as our first Chief Human Resources Officer (CHRO). Robert will lead the company’s strategic advancement for all aspects of human resources (HR), including hiring, people management and development, engagement, health and wellness initiatives, and outreach to the community.

We’re Growing—But We’re Still The Same Backblaze

Backblaze is recognized for talent retention and company culture—in the past few years we’ve received numerous awards for culture, diversity, and leadership from Comparably, Inc., Great Place to Work, and others. The addition of a seasoned CHRO will help us continue this excellent trend and enable our next phase of growth initiatives following our IPO in November 2021.

“Culture is critical in times of rapid growth and we want to continue scaling our world-class organization and great team alongside our growth as the leading independent storage cloud.” Gleb Budman, our CEO and Chairperson commented. “Robert is an experienced leader with the skills to help us do that. We are excited to welcome him to Backblaze.”

The Skills Robert Brings to Backblaze

Robert has a long track record of success in helping organizations scale rapidly while also championing healthy company culture. His executive experience includes leading HR functions at Turntide Technologies, 360 Behavioral Health, Mobilite, Broadcom Corporation, and others. Additionally, Robert founded and managed Green Talent Co, an independent talent and HR advisory firm. He has scaled and led HR teams across the US, Canada, Asia, and Europe in the software, hardware, telecom, and healthcare industries. He brings a people-first philosophy to everything he does and is fiercely passionate about the employee and candidate experience.

“I’m proud to be joining a company that is committed to developing talent at all levels of the organization,” Robert said in reference to joining Backblaze. “I’m looking forward to working with leaders who have been recognized for promoting diversity, culture, and inclusion as we continue to focus on people and culture as a strategic priority.”

Robert earned his bachelor’s degree in human resources management from Staffordshire University and his master’s degree in employment law from the University of East Anglia. He also volunteers as a pro bono HR consultant for Catchafire, a social good platform that matches professionals with nonprofits to volunteer their services.

Originally from the UK, Robert and his family hit the road 14 years ago, and he is now based in Los Angeles, where he lives with his wife, three boys (ages 8, 17, and 20), and two dogs. Taking advantage of the wonderful Southern California weather, Robert enjoys keeping fit, cycling, and sharing his eclectic music taste.

Backblaze Is Hiring

From day one, Backblaze has worked hard to bring our values to life, creating a transparent, sustainable, innovative (and dare we say flat-out good) place to work. Want to join the team? Check out our open opportunities.

Defining an Exabyte (March 24, 2020)

How much is an exabyte, really?

If one gigabyte is the size of Earth, then an exabyte is the size of the sun.

What is an Exabyte?

An exabyte is made up of bytes, which themselves are units of digital storage. A byte is made up of 8 bits. A bit—short for “binary digit”—is a single unit of data. Namely a 1, or a 0.

The International System of Units (SI) prefix “exa” denotes multiplication by the sixth power of 1,000, or 10^18.

In other words, 1 exabyte (EB) = 10^18 bytes = 1,000^6 bytes = 1,000,000,000,000,000,000 bytes = 1,000 petabytes = 1 million terabytes = 1 billion gigabytes. Overwhelmed by numbers yet?
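
If it helps to see the arithmetic spelled out, here’s a quick sanity check in Python:

    gigabyte = 1000 ** 3
    terabyte = 1000 ** 4
    petabyte = 1000 ** 5
    exabyte  = 1000 ** 6   # the sixth power of 1,000

    print(exabyte == 10 ** 18)   # True
    print(exabyte // petabyte)   # 1,000 petabytes
    print(exabyte // terabyte)   # 1,000,000 terabytes
    print(exabyte // gigabyte)   # 1,000,000,000 gigabytes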

Why don’t we give you some examples of what these numbers actually look like? We created this infographic to help put it in perspective.

How Big is an Exabyte?

Interested in learning more about how we got here? Check out the recent profile of Backblaze in Inc. magazine, free to our blog readers.

The Road to an Exabyte of Cloud Storage

So now that you know what an exabyte looks like, let’s look at how Backblaze got there.

Way back in 2010, we had 10 petabytes of customer data under management. It was a big deal for us: it took two years to accomplish and, more importantly, it was a sign that thousands of customers trusted us with their data.

It meant a lot! But when we decided to tell the world about it, we had a hard time quantifying just how big 10 petabytes were, so naturally we made an infographic.

10 Petabytes Visualized

That’s a lot of hard drives. A Burj Khalifa of drives, in fact.

In what felt like the blink of an eye, it was two years later, and we had 75 petabytes of data. The Burj was out. And, because it was 2013, we quantified that amount of data like this…

At 3MB per song, Backblaze would store 25 billion songs.

Pop songs now average around 3:30 in length, which means if you tried to listen to this imaginary musical archive, it would take you 167,000 years. And sadly, the total number of recorded songs is only in the tens to hundreds of millions, so you’d have some repeats.

That’s a lot of songs! But more importantly, our data under management had grown to 750% of where it started! We could barely take time to enjoy it, though, because five months later we hit 100 petabytes, and we had to call it out. Stacking up to the Burj Khalifa was in the past! Now, we rivaled Mt. Shasta…

Stacked on end, they would reach 9,941 feet, about the same height as Mt. Shasta from the base.

But stacking drives was rapidly becoming less effective as a measurement. Simply put, the comparison was no longer apples to apples: the 3,000 drives we stacked up in 2010 held only one terabyte of data each. If you were to take those same 3,000 drives and use the average drive size we had in 2013, about 4 terabytes of data per drive, the size of the stack would stay the same, as hard drives had not physically grown, but the density of the storage inside the drives had grown by 400%.

Regardless, the years went by, we launched an award-winning cloud storage service (Backblaze B2), and the incoming petabytes kept on accelerating—150 petabytes in early 2015, 200 before we reached 2016. Around there, we decided we needed to wait until the next big moment, and in February 2018, we hit 500 petabytes.

It took us two years to store 10 petabytes of data.

Over the next 7 years, by 2018, we stored another 500 petabytes.

And today, we reset the clock, because in the last two years, we’ve added another 500 petabytes. Which means we’re turning the clock back to 1…

1 exabyte.

Today, across 125,000 hard drives, Backblaze is managing an exabyte of customer data.

And what does that mean? Well, you should ask Ahin.

The Geography of Big Data Maintenance: Data Warehouses, Data Lakes, and Data Swamps (March 5, 2020)

Understanding what "Big Data" is and how to leverage it can make a huge difference for any business. This post explores Big Data as a concept, including what defines Data Warehouses and Data Lakes, and how to avoid Data Swamps.


“What is Cloud Storage?” is a series of posts for business leaders and entrepreneurs interested in using the cloud to scale their business without wasting millions in capital on infrastructure. Despite being relatively simple, information about “the Cloud” is overrun with frustratingly unclear jargon. These guides aim to cut through the hype and give you the information you need to convince stakeholders that scaling your business in the cloud is an essential next step. We hope you find them useful, and will let us know what additional insight you might need. –The Editors

“Big Data” is a phrase people love to throw around in advertising and planning documents, despite the fact that the term itself is rarely defined the same way by any two businesses, even among industry leaders. However, everyone can agree about its rapidly growing importance—understanding Big Data and how to leverage it for the greatest value will be of critical organizational concern for the foreseeable future.

So then what does Big Data really mean? Who is it for? Where does it come from? Where is it stored? What makes it so big, anyway? Let’s bring Big Data down to size.

What is Big Data?

First things first, for purposes of this discussion, “Big” means any amount of data that exceeds the storage capacity of a single organization. “Data” refers to information stored or processed on a computer. Collectively, then, “Big Data” is a massive volume of structured data, unstructured data, or both that is too large to effectively process using traditional relational database management systems or applications. In more general terms, when your infrastructure is too small to handle the data your business is generating—either because the volume of data is too large, it moves too fast, or it simply exceeds the current processing capacity of your systems—you’ve entered the realm of Big Data.

Let’s take a look at the defining characteristics.

Characteristics of Big Data

Current definitions of Big Data often reference a “three V” (or in some cases “four V”) construct for detailing its characteristics. The V’s stand for velocity, volume, variety, and variability. We’ll define them for you here:

Velocity

Velocity refers to the speed of generation of the data—the pace at which data flows in from sources like business processes, application logs, networks, and social media sites, sensors, mobile devices, etc. This speed determines how rapidly data must be processed to meet business demands, which determines the real potential for the data.

Volume

The term Big Data itself obviously references significant volume. But beyond just being “big,” the relative size of a data set is a fundamental factor in determining its value. The volume of data stored by an organization is used to ascertain its scalability, accessibility, and ease or difficulty of management. A few examples of high volume data sets are all of the credit card transactions in the United States on a given day; the entire collection of medical records in Europe; and every video uploaded to YouTube in an hour. A small to moderate volume might be the total number of credit card transactions in your business.

Variety

Variety refers to how many disparate or separate data sources contribute to an organization’s Big Data, along with the intrinsic nature of the data coming from each source. This relates to both structured and unstructured data. Years ago, spreadsheets and databases were the primary sources of data handled by the majority of applications. Today, data is generated in a multitude of formats such as email, photos, videos, monitoring devices, PDFs, and audio—all of which demand different considerations in analysis applications. This variety of formats can potentially create issues for storage, mining, and analyzing data.

Variability

This concerns any inconsistencies in the data formats coming from any one source. Where variety considers different inputs from different sources, variability considers different inputs from one data source. These differences can complicate the effective management of the data store. Variability may also refer to differences in the speed of the data flow into your storage systems. Where velocity refers to the speed of all of your data, variability refers to how different data sets might move at different speeds. Variability can be a concern when the data itself has inconsistencies despite the architecture remaining constant.

An example from the health sector would be the variances within influenza epidemics (when and where they happen, how they’re reported in different health systems) and vaccinations (where they are/aren’t available) from year to year.

Understanding the makeup of Big Data in terms of Velocity, Volume, Variety, and Variability is key when strategizing big data solutions. This fundamental terminology will help you to effectively communicate among all players involved in decision making when you bring Big Data solutions to your team or your wider business. Whether pitching solutions, engaging consultants or vendors, or hearing out the proposals of the IT group, a shared terminology is crucial.

What is Big Data Used For?

Businesses use Big Data to try to predict future customer behavior based on past patterns and trends. Effective predictive analytics are the metaphorical crystal ball organizations seek: a view into what their customers want and when they want it. Theoretically, the more data collected, the more patterns and trends the business can identify. This information can potentially make all the difference for a successful strategy in customer acquisition and retention, and create loyal advocates for a business.

In this case, bigger is definitely better! But, the method an organization chooses to address its Big Data needs will be a pivotal marker for success in the coming years. Choosing your approach begins with understanding the sources of your data.

Sources of Big Data

Today’s world is incontestably digital: an endless array of gadgets and devices function as our trusted allies on a daily basis. While helpful, these constant companions are also responsible for generating more and more data every day. Smartphones, GPS technology, social media, surveillance cameras, and machine sensors (and the growing number of users behind them) all produce reams of data on a moment-to-moment basis, and that output has increased exponentially, from 1 zettabyte of customer data produced in 2009 to more than 35 zettabytes in 2020.

If your business uses an app to receive and process orders for customers, or if you log extensive point-of-sale retail data, or if you have massive email marketing campaigns, you could have sources for untapped insight into your customers.

Once you understand the sources of your data, the next step is understanding the methods for housing and managing it. Data Warehouses and Data Lakes are two of the primary types of storage and maintenance systems that you should be familiar with.

Where Is Big Data Stored? Data Warehouses & Data Lakes

Although both Data Lakes and Data Warehouses are widely used for Big Data storage, they are not interchangeable terms.

A Data Warehouse is an electronic system used to organize information. It goes beyond a traditional relational database, which typically houses and organizes data generated from a single source only.

How Do Data Warehouses Work?

A Data Warehouse is a repository for structured, filtered data that has already been processed for a specific purpose. A warehouse combines information from multiple sources into a single comprehensive database.

For example, in the retail world, a data warehouse may consolidate customer info from point-of-sale systems, the company website, consumer comment cards, and mailing lists. This information can then be used for distribution and marketing purposes, to track inventory movements, customer buying habits, manage promotions, and to determine pricing policies.
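
As a toy illustration of that consolidation step, here’s a sketch using Python’s built-in sqlite3 module. The tables, columns, and values are invented for the example, and a production warehouse would run on a dedicated platform rather than an in-memory SQLite database:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE pos_sales (customer_id INTEGER, amount REAL);
        CREATE TABLE mailing_list (customer_id INTEGER, email TEXT);
        INSERT INTO pos_sales VALUES (1, 42.50), (2, 19.99), (1, 7.25);
        INSERT INTO mailing_list VALUES (1, 'ada@example.com'), (2, 'bob@example.com');
    """)

    # Combine point-of-sale data with the mailing list into one marketing view.
    rows = db.execute("""
        SELECT m.email, SUM(s.amount) AS total_spent
        FROM pos_sales s JOIN mailing_list m USING (customer_id)
        GROUP BY m.email
    """).fetchall()

    print(rows)  # e.g. [('ada@example.com', 49.75), ('bob@example.com', 19.99)]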

Additionally, the Data Warehouse may also incorporate information about company employees such as demographic data, salaries, schedules, and so on. This type of information can be used to inform hiring practices, set Human Resources policies and help guide other internal practices.

Data Warehouses are fundamental in the efficiency of modern life. For instance:

Have a plane to catch?

Airline systems rely on Data Warehouses for many operational functions like route analysis, crew assignments, frequent flyer programs, and more.

Have a headache?

The healthcare sector uses Data Warehouses to aid organizational strategy, help predict patient outcomes, generate treatment reports, and cross-share information with insurance companies, medical aid services, and so forth.

Are you a solid citizen?

In the public sector, Data Warehouses are mainly used for gathering intelligence and assisting government agencies in maintaining and analyzing individual tax and health records.

Playing it safe?

In investment and insurance sectors, the warehouses are mainly used to detect and analyze data patterns reflecting customer trends, and to continuously track market fluctuations.

Have a call to make?

The telecommunications industry makes use of Data Warehouses for management of product promotions, to drive sales strategies, and to make distribution decisions.

Need a room for the night?

The hospitality industry utilizes Data Warehouse capabilities in the tailored design and cost-effective implementation of advertising and marketing programs targeted to reflect client feedback and travel habits.

Data Warehouses are integral in many aspects of the business of everyday life. That said, they aren’t capable of handling the inflow of data in its raw format, like object files or blobs. A Data Lake is the type of repository needed to make use of this raw data. Let’s examine Data Lakes next.

What is a Data Lake?

A Data Lake is a vast pool of raw data, the purpose for which is not yet defined. This data can be both structured and unstructured. At its best, a Data Lake is a secure and adaptable data storage and maintenance system distinguished by its flexibility, agility, and ease of use.

If you’re considering a business approach that involves Data Lakes, you’ll want to look for solutions that have the following characteristics: they should retain all data and support all data types; they should easily adapt to change; and they should provide quick insights to as wide a range of users as you require.

Use Cases for Data Lakes

Data Lakes are most helpful when working with streaming data, like the sorts of information gathered from machine sensors, live event-based data streams, clickstream tracking, or product/server logs.

Deployments of Data Lakes typically address one or more of the following business use cases:

  • Business intelligence and analytics – analyzing streams of data to determine high-level trends and granular, record-level insights. A good example of this is the oil and gas industry, which has used the nearly 1.5 terabytes of data it generates on a daily basis to increase its efficiency.
  • Data science – unstructured data allows for more possibilities in analysis and exploration, enabling innovative applications of machine learning, advanced statistics and predictive algorithms. State, city, and federal governments around the world are using data science to dig more deeply into the massive amount of data they collect regarding traffic, utilities, and pedestrian behavior to design safer, smarter cities.
  • Data serving – Data Lakes are usually an integral part of high-performance architectures for applications that rely on fresh or real-time data, including recommender systems, predictive decision engines, and fraud detection tools. A good example of this use case is the range of Customer Data Platforms available that pull information from many behavioral and transactional data sources to highly refine and target marketing to individual customers.

When considered together, the different potential applications for Data Lakes in your business seem to promise an endless source of revolutionary insights. But the ongoing maintenance and technical upgrades required for these data sources to retain relevance and value are significant. If neglected or mismanaged, Data Lakes quickly devolve. As such, one of the biggest considerations to weigh when evaluating this approach is whether you have the financial and personnel capacity to manage Data Lakes over the long term.

What is a Data Swamp?

A Data Swamp, put simply, is a Data Lake that no one cared to manage appropriately. Swamps arise when a Data Lake is treated as storage only, with a lack of curation, management, retention and lifecycle policies, and metadata. And if you decide to work Data Lake-derived insights into your business planning but end up with a Swamp, you are going to be sorely disappointed. You’re paying the same amount to store all of your data, but returning zero effective intelligence to your bottom line.

Final Thoughts on Big Data Maintenance

Any business or organization considering entry into Big Data country will want to be careful and deliberate in planning how it will store, maintain, and analyze its data. Making the right choices at the outset will ensure you’re able to traverse the developing digital landscape with strategic insights that enable informed decisions and keep you ahead of your competitors. We hope this primer on Big Data gives you the confidence to take the appropriate first steps.

To Buy, Or Not to Buy? CapEx Versus OpEx is the Question (February 20, 2020)

Our customers often ask us about the budget implications of investing in cloud storage versus those of an on-premises solution. In short, they're debating whether to make a capital investment (CapEx) or commit to an operating expense (OpEx). In this post, we provide some answers.


When you work in a resource-constricted business or organization, committing to any kind of investment that doesn’t directly contribute to your mission, your bottom line (or both) is incredibly hard. You only have a certain amount of cash, and if you’re not using it to build something that immediately returns value to your bottom line, you feel like you’re stealing from your own growth.

This is the reality for many nonprofits, media companies, foundations, production houses, and arts organizations. They often invest in cash-intensive products—music, video, literature, plays, service efforts, and/or historical or cultural preservation—that take months, if not years, to turn back into cash or grants the company can use for future efforts. Which means that, when it’s time to make investments in the company’s infrastructure, the decision making process can be brutal.

Do you invest in your office: the metaphorical leaking roof over your head? New technology that might (or might not) make your job easier? And how do you invest? Do you buy or rent? It all feels like a distraction from what you’re supposed to be working on, and possibly a misallocation of funds that could be supporting your core mission.

We want to help you with the decision.

A growing reality for businesses is the cost of protecting their legacy: All of the video, audio, text, and other files that their organization has created over the years came at great expense, so there’s no question that it needs to be appropriately stored and archived.

But, while the size of this data is growing, your budget to protect that data is most likely not. You have a decision to make: Try to manage this growth with an on-prem investment, or pay a cloud-based service to take care of it for you. In more simple terms, you’re making an age-old budgeting decision:

Whether to make an extensive capital investment, or to commit to an ongoing operational expense. In financial jargon, you’re debating “CapEx” vs. “OpEx”—and you’re not alone.

Let’s break down the decision one step at a time. First things first, let’s hammer out some terms in case you’re already lost. (If you’re roughly familiar with basic accounting principles, you can skip the following section. Otherwise, read on…)


Capital Expense (CapEx) vs. Operating Expense (OpEx)

What is CapEx?

Capital Expense (CapEx) (or expenditure, if you want to be fancy) is the price of buying or fixing an asset that will hold its value over time—think: vehicles, buildings, equipment, land, or say… a network-attached storage (NAS) device or some other piece of technology. You’re buying something that you will use over some period of time (often called its “useful life”).

What’s important to note is that this “expense” does not immediately hit your operating budget. At first, this is entirely a balance sheet item: the cash you spent for it simply moves from “Current Assets” on your balance sheet to “Equipment” under your “Long-term Assets.”

Nice deal, right? No net change! My hardware is now “free” for me except for the upkeep and power expenses. Not so fast.

Every type of capital asset has a “depreciation schedule.” This is accountant lingo for: Things fall apart and aren’t worth what you paid for them. So, the depreciation schedule is an estimation for how quickly or slowly an item loses its value to your business.

Technology typically depreciates on a 3 or 5 year basis. What this means is, if you buy a $10,000 NAS on January 1st, and your auditor determines its useful life to be 3 years, then your depreciation schedule will recognize a loss of $3,333.33 in value for the asset at the end of the year for the next three years. Once you reach three years the asset can be removed from your depreciation schedule and it isn’t “worth” anything on your balance sheet anymore (though it may still be quite functional and an essential part of your IT infrastructure).
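
A quick sketch of that straight-line depreciation math in Python, using the numbers from the example above:

    def straight_line_depreciation(cost: float, useful_life_years: int) -> list:
        """Equal yearly depreciation expense, assuming no salvage value."""
        return [round(cost / useful_life_years, 2)] * useful_life_years

    print(straight_line_depreciation(10_000, 3))
    # [3333.33, 3333.33, 3333.33] -- the expense recognized each year for three years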

But, as we know: The whole idea in a balance sheet is that it “balances” right? So if your assets are decreasing in value, you need to acknowledge that loss somewhere or your finances will be out of whack. So where does the decrease go? Into your operating budget! You have to acknowledge depreciation as an operating expense.

This is where your Capital Expense becomes an Operating Expense: The depreciation value has to go somewhere, and that somewhere is your bottom line. Accounting principles dictate that you have to incorporate the amount your assets have depreciated into your operating budget as an expense.

And that’s how Capital Expense works (at least, from a very simple perspective). You buy something—often pulling directly from your cash—and then you acknowledge its expense incrementally, over a set schedule, in your operating budget.

Side note: Some companies account for depreciation on a monthly basis, to avoid year-end surprises in heavy depreciation bills.

What is OpEx?

Operating Expense (OpEx) is typically easier for folks to wrap their head around. These are the ongoing costs you incur to run your business or organization. Think: rent, internet, office supplies, kibble for the office cat—that sort of thing. Unlike your capital expenses, the operating expenses hit your bottom line immediately, typically flowing through your general or administrative lines. No smoke and mirrors here—you spend the money, you get what you paid for, your monthly financials reflect the expense, and hopefully it contributes positively to your bottom line!


Why Do CapEx and OpEx Matter for Data Storage?

We get it: the ins and outs of a balance sheet are tantalizing to only a small number of people (sheepishly, I count myself amongst them). BUT: The implications of the difference between CapEx and OpEx can be hugely impactful when it comes to fulfilling your organization’s mission or your business’s goals.

Let’s see if we can bring the implications into more useful perspective:

Let’s say that you’re an organization with a growing archive of data, and you have a reliable projection for your data growth rate over the coming years. Maybe you’re a music education nonprofit, and part of your mission is archiving the performance recordings of all of your students for their use in future school applications or tryouts.

You have around 200TB worth of data that you need to get onto a more reliable, accessible storage media ASAP (your volunteer librarian just retired and now nobody knows how she organized the tape archive… bummer). What do you do?

You call your freelance IT consultant, and they give you two options:

1) An On-Prem Storage Solution: The simplest way to describe an on-premises solution is: A machine that is visible. It’s in a space you own or rent, it is “bought and paid for,” and your auditor or bookkeeper flagged that it should be depreciated.

Data storage comes in “on-prem” shapes and sizes, too. Whether it’s Network Attached Storage (NAS) or a Storage Area Network (SAN), your IT consultant will quote out something for you to plug in on site that will take care of your archiving issue. This type of storage is a capital expense.

2) A Cloud Storage Solution: The “cloud” is often described as “someone else’s computer.” It’s a pretty apt description in this case. In a cloud storage solution, you pay another company—typically on a monthly basis—to store, protect, and maintain your data. So if on-prem is a “visible” solution, this is an “invisible” solution. And because it’s a service, it’s a simple monthly operating expense. Cloud storage doesn’t depreciate because the company providing it is constantly paying to maintain it.


What is This Really Going to Cost?

Let’s say you have roughly 200TB you need to get into a better archiving solution. This is about 25 years’ worth of content, so you’re expecting (with a little room for data inflation) to add around 30 terabytes of data a year. What will your two options really cost you, once we pull apart the financial jargon?

On-Prem Storage Solution:

For your server hardware, your hard drives, and a reliable power supply unit, you’ll likely end up paying around $25,000. Let’s assume your audit determines a useful life of three years for the server; that means you’ll have depreciation of roughly $694 per month.

Don’t forget, however, that you’ll need to pay for power and possibly cooling (estimated around $100 per month for this size of server), some IT assistance to help with upgrades and maintenance (let’s say $50 per month), and you have to pay for the space to hold it. We’ll leave the latter at zero—you probably have a closet somewhere.

So all in, you’ll need to lay out $25K in cash at the outset, and then you’ll be recognizing the expense in your operating expense to the tune of $850 every month for the next 3 years.

Cloud Storage Solution:

The biggest expense on the front end of a cloud solution is the expense of ingesting data. There are a number of services you can use, but it’s easiest for us to quote out our own: B2 Cloud Storage. If you want to move quickly, you can rent our Fireballs, which we’ll ship to you at a price of $550 per month; you can upload 70TB at a time and ship them back to us for upload. So, altogether, you’d pay $1,650 for the trouble of moving the data at the speed of FedEx, which is typically far faster than the internet. (If you have the time to let your data upload over the internet for months, then you can do that for far less.) We’ll spread this expense across three years to make our comparison more apples-to-apples, so let’s call it $46 a month.

After this initial expense, you simply have to pay a monthly data storage bill. We’ll use Backblaze B2 for estimating. At 200TB, you’ll be paying $1,000 a month, right now, but that number will grow along with your data, at $5/TB.

So you’re laying out $1,500 in your first month, then $1,000 a month, growing at $5/TB per month.
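
To see how those figures play out month by month, here’s a rough Python model built from the estimates in this post (the $25K server, $150/month in power and IT help, $5/TB cloud pricing, and an amortized $1,650 ingest). The numbers are illustrative, not a quote, so swap in your own:

    def on_prem_monthly(months: int = 36) -> list:
        depreciation = 25_000 / 36        # $25K server over a three-year useful life
        power_and_it = 100 + 50           # estimated cooling/power plus IT assistance
        return [round(depreciation + power_and_it, 2)] * months

    def cloud_monthly(months: int = 36, start_tb: float = 200,
                      growth_tb_per_year: float = 30, price_per_tb: float = 5.0,
                      ingest_total: float = 1_650) -> list:
        costs, tb = [], start_tb
        for _ in range(months):
            costs.append(round(tb * price_per_tb + ingest_total / months, 2))
            tb += growth_tb_per_year / 12  # the archive grows a little each month
        return costs

    print(on_prem_monthly()[0], cloud_monthly()[0])     # first-month operating expense
    print(sum(on_prem_monthly()), sum(cloud_monthly())) # three-year totals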

Balance Sheet Implications and Monthly Operating Budget Implications tables


Side note: you can calculate your own costs using the B2 cloud storage calculator.

What’s the Real Difference? Or, What Are a Few Hundred Dollars Worth?

To emerge from the weeds for a moment: the simple difference between these two options is a few hundred dollars on a month-to-month basis. Let’s explore what those hundreds get you when it comes to your cash flow, long-term flexibility, maintenance, and your real estate bill.

Cash Flow

On-Prem Storage Solution: On day one working with your NAS or SAN, you’re out $25,000. And unless the hardware is defective, you will never be able to get that full value back.

Cloud Storage Solution: By the end of “month one” for Cloud, you might be out $1,000, maybe $2,000, depending on the upload service you use and the upload speeds you achieve.

So the question here is: How important is cash flow to you? What’s the opportunity cost of not being able to use that $22K for something else?

Long-Term Flexibility

How good of a forecaster are you? It may sound like an odd question, but it’s actually critical to choosing the right solution. If you can accurately forecast the data needs of your company for the next five years, you should be able to pick an on-prem solution that will match those needs. On the other hand, if you are off in either direction, you will end up spending money inefficiently. Let’s take a look at how both scenarios play out.

Overestimating Data Needs:

On-Prem Storage Solution: On day two working with your NAS or SAN, you are locked into your investment of $25,000. You can’t return it, and you can’t make it smaller. And the decrease in cash is only the beginning of your commitment. On an annual or monthly basis, going forward, you will need to recognize your investment’s depreciation, which will be a fixed amount no matter how long the hardware is in operation.

Cloud Storage Solution: On day two working in the cloud, you could choose to deprecate some of your data or recategorize it, and decrease your monthly spend. Alternatively, you could change cloud services if your existing arrangement isn’t working. The important thing is, you are not locked into the investment, and you can exit the arrangement at any time.

Underestimating Data Needs:

On-Prem Storage Solution: On day 425 you might realize that your NAS or SAN isn’t going to be enough storage for your operations. Most organizations make budgets for their operations, and most of them wouldn’t mind beating those projections. The problem is, if you’re able to achieve more in a given year, you will likely also generate more data in that year. If your business takes off and you’re locked into an on-prem solution, your only remedy will be to invest in higher capacity drives, or even to add an additional storage solution, both of which incur additional upfront cash outlays.

Cloud Storage Solution: If you reach day 425 and your organization is unexpectedly beating projections by a significant margin, your cloud storage service can easily scale to match your needs. Your monthly expense will increase but only at a fractional percentage since no new equipment will need to be purchased.

The questions here are: Are you ready to project how your data will scale over the next three to five years? And are you ready to cover the costs if you’re wrong?

Upkeep, Updates, and Repairs

On-Prem Storage Solution: On day 483, your NAS or SAN may not keep up with technology, or it may disagree with some other tech you need. But you’re stuck with it, and you’re stuck with the tab of paying to maintain and troubleshoot the aging technology. Whether you have your own IT staff, or hire consultants or an IT service, this could add significant cost to your overall on-prem storage budget. And it goes without saying that your server could simply cease functioning at any time.

Cloud Storage Solution: A cloud storage service will continue to run. It will always be improved. It will always be maintained. And in some cases, depending on how you’re uploading the data, it can even be self-healing. Some cloud storage solutions perform checks to ensure that the data they’re storing hasn’t degraded or been lost. These services are considered “self-healing” because, when they discover an inconsistency, they’ll ask the user to re-upload the affected files or use their own built-in redundancies to fix or replace them.

And, of course, a cloud solution will have a staff of people—experts, often—working day in and day out to maintain and improve your storage. You don’t pay anything additional for their services. You don’t have to recruit, train, or manage them. They’re on the job entirely to ensure your service is never disrupted.

Real Estate, Energy, and Security

On-Prem Storage Solution: As your data grows, so will your need to maintain the space for it. You will need to devote increasingly large amounts of real estate to the footprint of your on-site storage. You’ll have to provide adequate cooling and energy, and you’ll need to think differently about your approach to security. If you don’t have the space on site, that means you’ll need to expand or find additional real estate elsewhere to make more room for servers. Depending on the real estate market you’re in, the costs for such an expansion could be well more than the monthly depreciation for your hardware.

Cloud Storage Solution: Cloud storage services are in the business of providing real estate, climate control, and over-the-top security for your data. Most cloud storage service providers have multiple data centers in order to quickly scale to meet your growing storage needs.

Projected Costs Comparison: On-Prem Vs. Cloud


Final Thoughts on CapEx vs. OpEx

Viewed simply, setting up an on-prem archiving solution seems like a good deal. You’ve got your asset on site, the immediate budget implications are shunted off to your balance sheet, and everything else is IT’s problem. But, when you look past day one of your purchase, it might be less attractive: You have cash tied up in a vulnerable asset. You have a financial commitment that you can’t easily scale up or down. You have a tool that is only going to be maintained or improved when something goes very wrong. You have another presence in your office that is a space and energy hog.

On the flip side, with cloud storage, you have a monthly payment that takes care of all of this for you. Your cash is more available and flexible. You have zero concerns about scaling issues. Security, reliability, durability, upgrades, up-time, energy use, file maintenance—all of these are part of what you’re paying for on a monthly basis.

Maybe you love tinkering with hardware and being able to see where your data is resting. If that’s you, well, then CapEx is your jam. But if all of the concerns listed above are things you’d rather not worry about (I mean, you probably have more interesting things to work on, right?), then cloud storage is probably a better deal. CapEx vs. OpEx, that is the question—which is wiser? We’ll leave the answer up to you.

A Sandbox in the Clouds: Software Testing and Development in Cloud Storage (January 14, 2020)

Cloud-based software development gives businesses the ability to scale up resources when needed without investing in the infrastructure to simulate thousands of users. This article explores the foundations of cloud-based development for leaders interested in using it in their organization.

A Sandbox full of App Icons in the Clouds

“What is Cloud Storage?” is a series of posts for business leaders and entrepreneurs interested in using the cloud to scale their business without wasting millions in capital on infrastructure. Though the underlying ideas are relatively simple, information about “the Cloud” is overrun with frustratingly unclear jargon. These guides aim to cut through the hype and give you the information you need to convince stakeholders that scaling your business in the cloud is an essential next step. We hope you find them useful, and will let us know what additional insight you might need. –The Editors

What is Cloud Storage?

The words “testing and development” bring to mind engineers in white lab coats marking clipboards as they hover over a buzzing, whirring machine. The reality in app development is more often a DevOps team in a rented room virtually poking and prodding at something living on a server somewhere. But how does that really work?

Think of testing in the cloud like taking your app or software program to train at an Olympic-sized facility instead of your neighbor’s pool. In app development, building the infrastructure for testing in a local environment can be costly and time-consuming. Cloud-based software development, on the other hand, gives you the ability to scale up resources when needed without investing in the infrastructure to, say, simulate thousands of users.

But first things first…

What Is Cloud Software Testing?

Cloud software testing uses cloud environments and infrastructure to simulate realistic user traffic scenarios to measure software performance, functionality, and security. In cloud testing, someone else owns the hardware, runs the test, and delivers the test results. On-premise testing is limited by budgets, deadlines, and capacity, especially when that capacity may not be needed in the future.

An App Waiting in a Cloud Storage Sandbox

Types of Software Testing

Any software testing done in a local test environment can be done in the cloud, some much more efficiently. The cloud is a big sandbox, and software testing tools are the shovels and rakes and little toy dump trucks you need to create a well-functioning app. Here are a few examples of how to test software. Keep in mind, this is by no means an exhaustive list.

Stress Testing

Stress tests measure how software responds under heavy traffic. They show what happens when traffic spikes (a spike test) or when high traffic lasts a long time (a soak test). Imagine putting your app on a treadmill to run a marathon with no training, then forcing it to sprint for the finish. Stress testing software in an on-premise environment involves a significant amount of capital build-out—servers, software, dedicated networks. Cloud testing is a cost-effective and scalable way to truly push your app to the limit. Companies that deal with big spikes in traffic find stress testing particularly useful. After experiencing ticketing issues, the Royal Opera House in London turned to cloud stress testing to prepare for ticket releases when traffic can spike to 3,000 concurrent users. Stress testing in the cloud enables them to make sure their website and ticketing app can handle the traffic on sales days.
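
To make that concrete, here’s a bare-bones sketch of a spike test: fire a burst of concurrent requests at a staging URL and summarize how many succeed and how slow the worst response was. The endpoint below is hypothetical, and a real cloud testing service would generate far larger, geographically distributed load, but the shape of the test is the same.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "https://staging.example.com/"  # hypothetical endpoint under test

def hit(_):
    """Make one request and report (success, elapsed seconds)."""
    start = time.monotonic()
    try:
        with urlopen(TARGET, timeout=10) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.monotonic() - start

def spike(concurrent_users: int):
    """Simulate a sudden burst of traffic and summarize the results."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(hit, range(concurrent_users)))
    successes = sum(ok for ok, _ in results)
    worst = max(elapsed for _, elapsed in results)
    print(f"{concurrent_users} users: {successes} succeeded, slowest response {worst:.2f}s")

spike(200)
```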

Load Testing

If stress testing is a treadmill, load testing is a bench press. Like stress testing, load testing measures performance. Unlike stress testing, where the software is tested beyond the breaking point, load testing finds that breaking point by steadily increasing demands on the system until it reaches a limit. You keep adding weight until your app can’t possibly do another rep. Blue Ridge Networks, a cybersecurity solutions provider based in Virginia, needed a way to test one of their products against traffic in the millions. They could already load test in the hundreds of thousands but looked to the cloud to scale up. With cloud testing, they found that their product could handle up to 25 million new users and up to 80 million updates per hour.
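
In code, a load test looks like the spike sketch above run as a ramp: keep adding concurrent users until an error budget is blown. This simplified illustration uses a hypothetical endpoint and only tracks errors; commercial tools also watch response-time percentiles as the load climbs.

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "https://staging.example.com/"  # hypothetical endpoint under test

def request_ok() -> bool:
    """True if a single request completes with HTTP 200."""
    try:
        with urlopen(TARGET, timeout=10) as resp:
            return resp.status == 200
    except Exception:
        return False

def ramp(start=50, step=50, max_users=2000, error_budget=0.05):
    """Increase concurrency step by step until the error rate exceeds the budget."""
    users = start
    while users <= max_users:
        with ThreadPoolExecutor(max_workers=users) as pool:
            results = list(pool.map(lambda _: request_ok(), range(users)))
        error_rate = 1 - sum(results) / len(results)
        print(f"{users} users -> {error_rate:.1%} errors")
        if error_rate > error_budget:
            return users  # the breaking point
        users += step
    return None  # never broke within the tested range

print("Breaking point:", ramp())
```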

Performance Testing

Stress and load tests are subsets of software performance testing—the physical fitness of the testing world. The goal of performance testing is not to find bugs or defects, but rather to set benchmarks for functionality (i.e., load speed, response time, data throughput, and breaking points). Cloud testing is particularly well-suited to software performance testing because it allows testers to create high-traffic simulations without building the infrastructure to do so from scratch. Piksel, a video asset management company serving the broadcast media industry, runs performance tests each night and for every new release of their software. By testing in the cloud, they can simulate higher loads and more concurrent users than they could on-premise to ensure stability.

Latency Testing

If stress testing is like training on a treadmill, latency testing is race day. It measures the time it takes an app to perform specific functions under different operating conditions. For example, how long it takes to load a page under different connection speeds. You want your app to be first across the finish line, even under less than ideal conditions. The American Red Cross relies on its websites to get critical information to relief workers on the ground in emergencies. They need to know those sites are responsive, especially in places where connection speeds may not be very fast. They employ a cloud-based monitoring system to notify them when latency lags.
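
Here’s a small sketch of the measurement itself: time-to-first-byte for a handful of pages, sampled a few times each. The URLs are placeholders; a full latency test would also throttle bandwidth and run from multiple regions to mimic those less-than-ideal conditions.

```python
import time
from urllib.request import urlopen

# Hypothetical pages whose responsiveness we care about.
PAGES = [
    "https://staging.example.com/",
    "https://staging.example.com/contact",
]

def time_to_first_byte(url: str) -> float:
    """Seconds from issuing the request until the first byte of the response arrives."""
    start = time.monotonic()
    with urlopen(url, timeout=30) as resp:
        resp.read(1)  # stop the clock as soon as the first byte arrives
    return time.monotonic() - start

for url in PAGES:
    samples = sorted(time_to_first_byte(url) for _ in range(5))
    print(f"{url}: median TTFB {samples[2] * 1000:.0f} ms")
```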

Functional Testing

If performance testing is like physical training, functional testing is like a routine physical. It checks to see if things are working as expected. When a user logs in, functional testing makes sure their account is displayed correctly, for example. It focuses on user experience and business requirements. Healthcare software provider Care Logistics employs automated functional testing to test the functionality of their software whenever updates are rolled out. By moving to the cloud and automating their testing, they reduced their testing time by 50 percent. Functional testing in the cloud is especially useful when business requirements change frequently because the resources to run new tests are instantly available.
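
In practice, a functional test is usually just an automated assertion about a business requirement. Below is a minimal, pytest-style example against a hypothetical login endpoint; the URL, payload fields, credentials, and expected display name are all assumptions made for illustration.

```python
# Save as test_login.py and run with: pytest test_login.py
import json
from urllib.request import Request, urlopen

BASE = "https://staging.example.com"  # hypothetical app under test

def login(email: str, password: str) -> dict:
    """POST credentials to the login endpoint and return the JSON response body."""
    payload = json.dumps({"email": email, "password": password}).encode()
    req = Request(f"{BASE}/api/login", data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

def test_login_shows_account_name():
    # Business requirement: after logging in, the user's display name comes back
    # so the account page can render it correctly.
    body = login("test-user@example.com", "correct-horse-battery-staple")
    assert body["display_name"] == "Test User"
```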

Compatibility Testing

Compatibility testing checks to see if software works across different operating systems and browsers. In cloud testing, as opposed to on-premise testing, you can simulate more browsers and operating systems to ensure your app works no matter who uses it. Mobile meeting provider LogMeIn uses the cloud to test its GoToMeeting app on 60 different kinds of mobile devices and to test its web-based apps daily across multiple browsers.
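
Here’s what that can look like with Selenium, assuming Selenium 4 and a remote WebDriver grid of the sort cloud testing providers expose (the grid URL below is hypothetical): the same check runs against several browsers, with only the options object changing.

```python
from selenium import webdriver

GRID = "http://grid.example.com:4444/wd/hub"  # hypothetical remote grid endpoint

def homepage_ok(options) -> bool:
    """Load the homepage in the requested browser and check the page title."""
    driver = webdriver.Remote(command_executor=GRID, options=options)
    try:
        driver.get("https://example.org/")
        return "Example" in driver.title  # identical assertion in every browser
    finally:
        driver.quit()

for options in (webdriver.ChromeOptions(), webdriver.FirefoxOptions()):
    browser = options.to_capabilities().get("browserName", "unknown")
    print(browser, "OK" if homepage_ok(options) else "FAILED")
```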

Smoke Testing

In the early days of technology development, a piece of hardware passed the smoke test if it didn’t catch on fire (hence, smoke). Today, smoke testing in software testing makes sure the most critical functions of an app work before moving on to more specific testing. The grocery chain Supervalu turned to cloud testing to reduce the time they spent smoke testing by 93 percent. And event management platform Eventbrite uses the cloud to run 20 smoke tests on every software build before running an additional 700 automated tests.
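
As a sketch, a smoke suite can be as simple as confirming that a handful of critical paths respond at all before any deeper testing begins. The URLs below are placeholders for whatever “must work” means for your app.

```python
from urllib.request import urlopen

# Hypothetical critical paths: if any of these fail, stop and fix the build
# before spending time on the hundreds of deeper tests.
CRITICAL_PATHS = [
    "https://staging.example.com/",            # homepage loads
    "https://staging.example.com/login",       # users can reach the login page
    "https://staging.example.com/api/health",  # backend reports healthy
]

def smoke_test() -> bool:
    for url in CRITICAL_PATHS:
        try:
            with urlopen(url, timeout=10) as resp:
                if resp.status != 200:
                    print(f"SMOKE FAIL: {url} returned {resp.status}")
                    return False
        except Exception as exc:
            print(f"SMOKE FAIL: {url} ({exc})")
            return False
    print("Smoke tests passed -- safe to run the full suite.")
    return True

smoke_test()
```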

An App Waiting in a Cloud Storage Sandbox

Advantages of Cloud Development vs. Traditional Software Development (and Some Drawbacks)

  • Savings – Only pay for the resources you need rather than investing in infrastructure build-out and maintenance, saving money and time spent developing a local test environment.
  • Scope – Broaden the number of different scenarios you can test — more browsers, more operating systems — to make sure your software works for as many users as possible.
  • Scalability – Effortlessly scale your resources up or down based on testing needs from initial smoke testing to enterprise software development in the cloud.
  • Speed – Test software on different operating systems, platforms, browsers, and devices simultaneously, reducing testing time.
  • Automation – Easily employ automated software testing tools rather than dedicating an employee or team to test software manually.
  • Collaboration – As more and more companies abandon waterfall in favor of agile software development, the roles of development, operations, and QA continue to blend. In the cloud, developers can push out new configurations or features, and QA can run tests against them right away, making agile development more manageable. For example, cloud testing allowed the Georgia Lottery System to transition from releasing one to two software updates per year with waterfall development to 10+ releases each quarter with agile.

Moving your testing to the cloud is not without some drawbacks. Before you make the move, consider the following:

  • Outages – In March of 2019, Amazon Web Services (AWS) suffered an outage at one of their data centers in Virginia. The blackout affected major companies like Slack, Atlassian, and Capital One. For a few hours, not only were their services affected, but those companies also couldn’t test any web properties or apps running on AWS.
  • Access – The nature of cloud services means that companies pay for the access they need. That’s an advantage over building infrastructure on-site, but it puts the onus on companies to determine who needs access to the testing environments housed in the cloud and what level of access they need to keep cloud testing affordable.
  • Lack of universal processes – Because each cloud provider develops its own infrastructure and systems (and most are very hush-hush about it), companies who want to switch providers face the burden of reconfiguring their internal systems and data to meet new provider requirements.

An App Waiting in a Cloud Storage Sandbox

What Does Cloud Testing Cost?

Most cloud service providers offer a tiered pricing structure. Providers might charge per device minute (in mobile testing) or a flat fee for unlimited testing. Flat fees range from around $100 per month to $500 per month or more. Many also offer private testing at a higher rate. Start by determining what kind of testing you need and what tier makes the most sense for you.

Who Uses the Cloud for Software Testing?

As shown in the examples above, organizations that use the cloud for testing are as varied as they come. From nonprofits to grocery chains to state lottery systems, any company that wants to provide a software application to improve customer service or user experience can benefit from testing in the cloud.

No longer limited to tech start-ups and industry insiders, testing in the cloud makes good business sense for more and more companies working to bring their apps and software solutions to the world.

Backing Up the Death Star: How Cloud Storage Explains the Rise of Skywalker https://www.backblaze.com/blog/backing-up-the-death-star-how-cloud-storage-explains-the-rise-of-skywalker/ https://www.backblaze.com/blog/backing-up-the-death-star-how-cloud-storage-explains-the-rise-of-skywalker/#comments Wed, 18 Dec 2019 14:31:11 +0000 https://www.backblaze.com/blog/?p=93780 It’s come to our attention that there’s a movie coming out that some of you are excited about. A few of us around the office might be looking forward to it, too, and it just so happens that we have some special insights.

Cloud Storage Explains the Rise of Skywalker
It’s come to our attention here at Backblaze that there’s a movie coming out later this week that some of you are excited about. A few of us around the office might be looking forward to it, too, and it just so happens that we have some special insight into key plot elements.

For instance, did you know that George Lucas was actually a data backup and cloud storage enthusiast? It’s true, and once you start to look, you can see it everywhere in the Star Wars storyline. If you aren’t yet aware of this deeper narrative thread, we’d encourage you to consider the following lessons to ensure you don’t suffer the same disruptions that Darth Sidious (AKA the Emperor, AKA Sheev Palpatine) and the Skywalkers have struggled with over the past 60 years of their adventures.

Because, whether you run a small business, an enterprise, the First Order, or the Rebel Alliance, your data—how you work with it, secure it, and back it up—can be the difference between galactic domination and having your precious battle station scattered into a million pieces across the cold, dark void of space.

Spoiler Alert: If you haven’t seen any of the movies we’ll reference below, well, you’ve got some work to do: about 22 hours and 30 minutes of movies, somewhere around 75 hours of animated and live action series, a few video games, and more novels than we can list here (don’t even start with the Canon and Legends division)… If you’d like to try, however, now is the time to close this tab.

Though we all know the old adage about “trying”…

Security:

Any good backup strategy begins with a solid approach to data security. If you have that in place, you significantly lower your chance of ever having to rely on your backups. Unfortunately, the simplest forms of security were often overlooked during the first eight installments of the Star Wars story…

Impossible. Perhaps the archives are incomplete.

“Lost a planet, Master Obi-Wan has. How embarrassing!”
–Master Yoda

The history of the Jedi Council is rife with infosec issues, but possibly the most egregious is called out when Obi-Wan looks into the origins of a Kamino Saberdart. Looking for the location of the planet Kamino itself within the Jedi Archives, he finds nothing but empty space. Having evidently failed out of physics at the Jedi Academy, Master Kenobi needs Yoda to point out that, if there’s a gravity well suggesting the presence of a planet, then the planet has likely been improperly deleted from the archives. And indeed that seems to have been the case.

How does the galactic peacekeeping force stand a chance against the Sith when they can’t even keep their own library safe?

Some might argue that, since the Force is required to manipulate the Jedi Archives, then Jedi training was a certain type of password protection. But there were thousands of trained Jedi in the galaxy at that time, not to mention the fact that their sworn enemies were force users. This would be like Google and Amazon corporate offices sharing the same keycards—not exactly secure! So, at their most powerful, the Jedi had weak password protection with no permissions management. And what happened to them? Well, as we now know, even the Younglings didn’t make it… That’s on the Jedi Archivists, who evidently thought they were too good for IT.

The Destruction of Jedha

“Most unfortunate about the security breach on Jedha, Director Krennic.”
—Grand Moff Tarkin

Of course, while the Jedi may have stumbled, the Empire certainly didn’t seem to learn from their mistakes. At first glance, the Imperial databank on Scarif was head-and-shoulders above the Jedi Archives. As we’ve noted before, that Shield Gate was one heck of a firewall! But Jyn Erso and Cassian Andor exploited a consistent issue in the Empire’s systems: Imperial Clearance Codes. I mean, did anyone in the galaxy not have a set of Clearance Codes on hand? It seems like every rebel ship had a few lying around. If only they had better password management, all of those contractors working on Death Star II might still be pulling in a solid paycheck.

To avoid bad actors poking around your archives or databanks, you should conduct regular reviews of your data security strategies to make sure you’re not leaving any glaring holes open for someone else to take advantage of. Regularly change passwords. Use two-factor authentication. Use encryption. Here’s more on how we use encryption, and a little advice about ransomware.

3-2-1 Backup

But of course, we’ve seen that data security can fail, in huge ways. By our count, insufficient security management on both sides of this conflict has led to the destruction of six planets, the pretty brutal maiming of two others, a couple of stars being sucked dry (which surely led to other planets’ destruction), and the obliteration of a handful of super weapons. There is a right way, folks, and what we’re learning here is, they didn’t know it a long time ago in a galaxy far, far away. But even when your security is set up perfectly, disaster can strike. That’s why backups are an essential accompaniment to any security.

The best approach is a 3-2-1 backup strategy: For every piece of data, you keep the data itself (typically on your computer), a backup copy on site (on a NAS or simply an external hard drive), and one more copy in the cloud. It’s the most reasonable approach for most average use cases. Let’s see how the Empire managed their use case, when the stakes (the fate of much of existence) couldn’t have been higher:

Dooku's Death Star Plans

“I will take the designs with me to Coruscant. They will be much safer there with my master.”—Count Dooku

We first see the plans for the “super weapon based on Geonosian designs” when Count Dooku, before departing Geonosis, decides that they would be safer housed on Coruscant with Darth Sidious. How wrong he was! He was thinking about securing his files, but it seems he stumbled en route to actually doing so.

By the time Jyn Erso learns of the “Stardust” version of the plans for the Death Star, it seems that Scarif is the only place in the galaxy, other than on the Death Star itself, presumably, where a person could find a copy of the plans… Seriously? Technically, the copy on Scarif functioned as the Empire’s “copy in the cloud,” but it’s not like the Death Star had an external hard drive trailing it through space with another copy of the plans.

If you only have one backup, it’s better than nothing—but not by much. When your use case involves even a remote chance that Grand Moff Tarkin might use your data center for target practice, you probably need to be extra careful about redundancy in your approach. If the Rebel Alliance, or just extremely competitive corporate leaders, are a potential threat to your business, definitely ensure that you follow 3-2-1, but also consider a multi-cloud approach with backups distributed in different geographic regions. (For the Empire, we’d recommend different planets…)

Version Control

There’s being backed up, and then there’s being sure you have the right thing backed up. One thing we learn from the plans used to defeat the first Death Star is that the Empire didn’t manage version control very well. Take a close look at the Death Star schematic that Jyn and Cassian absconded with. Notice anything…off?

Yeah, that’s right. The focus lens for the superlaser is equatorial. Now, everyone knows that the Death Star’s superlaser is actually on the northern hemisphere. Which goes to show you that this backup was not even up to date! A good backup solution will run on a daily basis, or even more frequently depending on use cases. It’s clear that whatever backup strategy the Death Star team had, it had gone awry some time ago.

Death Star II Plans

“The rebels managed to destroy the first Death Star. By rebuilding the Death Star, and using it as many times as necessary to restore order, we prove that their luck only goes so far. We prove that we are the only galactic authority and always will be.”―Lieutenant Nash Windrider

We can only imagine that the architects who were tasked with quickly recreating the Death Star immediately contacted the Records Department to obtain the most recent version of the original plans. Imagine their surprise when they learned that Tarkin had destroyed the databank and they needed to work from memory. Given the Empire’s legendarily bad personnel management strategies—force-choking is a rough approach to motivation, after all—it’s easy to assume that there were corners cut to get the job done on the Emperor’s schedule.

Of course, it’s not always the case that the most recent version of a file will be the most useful. This is where Version History comes into the picture. Version History allows users to maintain multiple versions of a file over extended periods of time (including forever). If the design team from the Empire had set up Version History before bringing Galen Erso back on board, they could have reverted to the pre-final plans that didn’t have an “Insert Proton Torpedo Here To Destroy” sign on them.

To their credit, the Death Star II designers did avoid the two-meter-wide thermal exhaust port exploited by Luke Skywalker at the Battle of Yavin. Instead, they incorporated millions of millimeter-sized heat-dispersion tubes. Great idea! And yet, someone seemed to think it was okay to incorporate Millennium Falcon-sized access tunnels to their shockingly fragile reactor core? This oversight seems to be either the sign of an architectural team stressed by the lack of reliable planning materials, or perhaps a quiet protest at the number of coworkers Darth Vader tossed around during his emotional outbursts.

Cloud Storage Among the Power (Force) Users

At this point it is more than clear that the rank-and-file of pretty much every major power during this era of galactic strife was terrible at data security and backup. What about the authorities, though? How do they rank? And how does their approach to backup potentially affect what we’ll learn about the future of the Galaxy in the concluding chapter of the Star Wars saga, “The Rise of Skywalker”?

There are plenty of moderately talented Jedi out there, but only a few with the kind of power marshaled by Yoda, Obi-Wan, and Luke. Just so, there are some of us for whom computer backup is about the deepest we’ll ever dive into the technology that Backblaze offers. For the more ambitious, however, there’s B2 Cloud Storage. Bear with us here, but, is it possible that these Jedi Masters could be similar to the sysadmins and developers who so masterfully manipulate B2 to create archives, backup, compute projects, and more, in the cloud? Have the Jedi Masters manipulated the Force in a similar way, using it as a sort of cloud storage for their consciousness?

Force Ghosts

“If you strike me down, I shall become more powerful than you can possibly imagine.”—Obi-Wan Kenobi

Over many years, we’ve watched as force ghosts accumulate on the sidelines: First Obi-Wan, then Yoda, Anakin Skywalker, and, presumably, Luke Skywalker himself at the end of “The Last Jedi.” (Even Qui-Gon Jinn evidently figured it out after some post-mortem education.) If our base level theory holds, and Star Wars really is an extended metaphor for the importance of a good backup strategy, then who better to redeem the atrocious backup track record so far than the strongest Jedi the galaxy has ever known? In backing themselves up to the cloud, does “Team Force Ghost” actually present a viable recovery strategy from Darth Sidious’ unbalancing of the Force? If so, we could be witnessing one of the greatest arguments for cloud storage and computing ever imagined!

“Long have I waited…”—Darth Sidious

Of course, there’s a flip-side to this argument. If our favorite Jedi Masters were expert practitioners of cloud storage solutions, then how the heck did someone as evil as Darth Sidious find himself alive after falling to his death in the second Death Star’s reactor core? Well, there is precedent for Sith Masters’ improbable survival after falling down lengthy access shafts. Darth Maul survived being tossed down a well and being cut in half by Obi-Wan when Darth Vader was just a glimmer in Anakin Skywalker’s eye. But that was clearly a case of conveniently cauterized wounds and some amazing triage work. No, given the Imperial Fleet’s response to Darth Sidious’ death, the man was not alive at the end of the Battle of Endor by any conventional definition.

One thing we do know, thanks to Qui-Gon’s conversations with Yoda after his death, is that Dark Siders can’t become force ghosts. In short, to make the transition, one has to give in to the will of the Force—something that practitioners of the Dark Side just can’t abide.

Most theories point to the idea that the Sith can bind themselves to objects or even people during death as a means of lingering among the living. And of course there is the scene in “Revenge of the Sith” wherein Darth Sidious (disguised as Sheev Palpatine) explains how Darth Plagueis the Wise learned to cheat death. How, exactly, this was achieved is unclear, but it’s possible that his method was similar to other Sith. This is why, many speculate, we see our intrepid heroes gathering at the wreckage of the second Death Star: Because Darth Sidious’ body is tied, somehow, to the wreckage. Classic! Leave it up to old Sidious to count on a simple physical backup, in the belief that he can’t trust the cloud…

Frustrated Darth Sidious
That feeling when you realize you’re not backed up to the cloud…

You Are One With The Force, And The Force Is With You

Are we certain how the final battle of the Star Wars story will shape up? Will Light Side force wielders use cloud storage to restore their former power, aid Rey and the rest of our intrepid heroes, and defeat the Sith, who have foolishly relied on on-prem storage? No, we’re not, but from our perspective it seems likely that, when the torch was passed, George Lucas sat J.J. Abrams down and said, “J.J., let me tell you what Star Wars is really all about… data storage.”

We are certain, however, that data security and backup doesn’t need to be a battle. Develop a strategy that works for you, make sure your data is safe and sound, and check it once in a while to make sure it’s up to date and complete. That way, just like the Force, your data will be with you, always.

How Backblaze Buys Hard Drives https://www.backblaze.com/blog/how-backblaze-buys-hard-drives/ https://www.backblaze.com/blog/how-backblaze-buys-hard-drives/#comments Tue, 10 Dec 2019 15:15:41 +0000 https://www.backblaze.com/blog/?p=93593 As the person on staff ultimately responsible for sourcing hard drives for our data centers in California, Arizona, and the Netherlands, Ariel Ellis knows a thing or two about purchasing petabytes-worth of storage.

A hand holding hard drives up.

Backblaze’s data centers may not be the biggest in the world of data storage, but thanks to some chutzpah, transparency, and wily employees, we’re able to punch well above our weight when it comes to purchasing hard drives. No one knows this better than our Director of Supply Chain, Ariel Ellis.

As the person on staff ultimately responsible for sourcing the drives our data centers need to run—some 117,658 by his last count—Ariel knows a thing or two about purchasing petabytes-worth of storage. So we asked him to share his insights on the evaluation and purchasing process here at Backblaze. While we’re buying at a slightly larger volume than some of you might be, we hope you find Ariel’s approach useful and that you’ll share your own drive purchasing philosophies in the comments below.


An Interview with Ariel Ellis, Director of Supply Chain at Backblaze

Sourcing and Purchasing Drives

Backblaze: Thanks for making time, Ariel—we know staying ahead of the burn rate always keeps you busy. Let’s start with the basics: What kinds of hard drives do we use in our data centers, and where do we buy them?

Ariel: In the past, we purchased both consumer and enterprise hard drives. We bought the drives that gave us the best performance and longevity for the price, and we discovered that, in many cases, those were consumer drives.

Today, our purchasing volume is large enough that consumer drives are no longer an option. We simply can’t get enough. High capacity drives in high volume are only available to us in enterprise models. But, by sourcing large volume and negotiating prices directly with each manufacturer, we are able to achieve lower costs and better performance than we could when we were only buying in the consumer channel. Additionally, buying directly gives us five year warranties on the drives, which is essential for our use case.

We began to purchase direct around the launch of our Vault architecture, in 2015. Each Vault contains 1,200 drives and we have been deploying two to four, or more, Vaults each month. 4,800 drives are just not available through consumer distribution. So we now purchase drives from all three hard drive manufacturers: Western Digital, Toshiba, and Seagate.

Backblaze: Of the drives we’re purchasing, are they all 7200 RPM and 3.5” form factor? Is there any reason we’d consider slower drives or 2.5” drives?

Ariel: We use drives with varying speeds, though some power-conserving drives don’t disclose their drive speed. Power draw is a very important metric for us and the high speed enterprise drives are expensive in terms of power cost. We now total around 1.5 megawatts in power consumption in our centers, and I can tell you that every watt matters for reducing costs.

As far as 2.5″ drives, I’ve run the math and they’re not more cost effective than 3.5″ drives, so there’s no incentive for us to use them.

Backblaze: What about other drive types and modifications, like SSD, or helium enclosures, or SMR drives? What are we using and what have we tried beyond the old standards?

Ariel: When I started at Backblaze, SSDs were more than ten times the cost of conventional hard drives. Now they’re about three times the cost. But for Backblaze’s business, three times the cost is not viable for the pricing targets we have to meet. We do use some SSDs as boot drives, as well as in our backend systems, where they are used to speed up caching and boot times, but there are currently no flash drives in our Storage Pods—not in HDD or M.2 formats. We’ve looked at flash as a way to manage higher densities of drives in the future and we’ll continue to evaluate their usefulness to us.

Helium has its benefits, primarily lower power draw, but it makes drive service difficult when that’s necessary. That said, all the drives we have purchased that are larger than 8 TB have been helium—they’re just part of the picture for us. Higher capacity drives, sealed helium drives, and other new technologies that increase the density of the drives are essential to work with as we grow our data centers, but they also increase drive fragility, which is something we have to manage.

SMR would give us a 10-15% capacity-to-dollar boost, but it also requires host-level management of sequential data writing. Additionally, the new archive-type drives require a flash-based caching layer. Both of these requirements would mean a significant increase in the engineering resources needed to support them, and thereby even more investment. So, all in all, SMR isn’t cost-effective in our system.

Soon we’ll be dealing with MAMR and HAMR drives as well. We plan to test both technologies in 2020. We’re also testing interesting new tech like Seagate’s MACH.2 Multi Actuator, which allows the host to request and receive data simultaneously from two areas of the drive in parallel, potentially doubling the input/output operations per second (IOPS) performance of each individual hard drive. This offsets issues of reduced data availability that would otherwise arise with higher drive capacities. The drive also can present itself as two independent drives. For example, a 16 TB drive can appear as two independent 8 TB drives. A Vault using 60 drives per pod could present as 120 drives per pod. That offers some interesting possibilities.

Backblaze: What does it take to deploy a full vault, financially speaking? Can you share the cost?

Ariel: The cost to deploy a single vault varies between $350,000 to $500,000, depending on the drive capacities being used. This is just the purchase price though. There is also the cost of data center space, power to house and run the hardware, the staff time to install everything, and the bandwidth used to fill it. All of that should be included in the total cost of filling a vault.

Data center cold aisle
These racks don’t fill themselves.

Evaluating and Testing New Drive Models

Backblaze: Okay, so when you get to the point where the tech seems like it will work in the data center, how do you evaluate new drive models to include in the Vaults?

Ariel: First, we select drives that fit our cost targets. These are usually high capacity drives being produced in large volumes for the cloud market. We always start with test batches that are separate from our production data storage. We don’t put customers’ data on the test drives. We evaluate read/write performance, power draw, and generally try to understand how the drives will behave in our application. Once we are comfortable with the drive’s performance, we start adding small amounts to production vaults, spread across tomes in a way that does not sacrifice parity. As drive capacities increase, we are putting more and more effort into this qualification process.

We used to be able to qualify new drive models in thirty days. Now we typically take several months. On one hand, this is because we’ve added more steps to pre- and post-production testing. As we scale up, we need to scale up our care, because the effect of any issues with drives increases in line with bigger and bigger implementations. Additionally, from a simple physics perspective, a vault that uses high capacity drives takes longer to fill and we want to monitor the new drive’s performance throughout the entire fill period.
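
Backblaze hasn’t published the internals of its qualification suite, but one ingredient of any drive burn-in is simply watching each test drive’s health and SMART counters over time. Here’s a rough sketch that shells out to smartctl (from the standard smartmontools package, run as root); the device paths and the short list of attributes worth trending are illustrative, not Backblaze’s actual criteria.

```python
import subprocess
from datetime import datetime, timezone

TEST_DRIVES = ["/dev/sda", "/dev/sdb"]  # illustrative device paths for a test batch
WATCHED = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Reported_Uncorrect")

def smartctl(*args: str) -> str:
    """Run smartctl and return its text output (requires smartmontools and root)."""
    return subprocess.run(["smartctl", *args], capture_output=True, text=True).stdout

def check(device: str) -> None:
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    healthy = "PASSED" in smartctl("-H", device)
    print(f"{stamp} {device} overall health: {'PASSED' if healthy else 'ATTENTION'}")
    for line in smartctl("-A", device).splitlines():
        if any(attr in line for attr in WATCHED):
            print(f"{stamp} {device} {line.strip()}")  # log raw values for trending

for drive in TEST_DRIVES:
    check(drive)
```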

Backblaze: When it comes to the evaluation of the cost, is there a formula for $/terabyte that you follow?

Ariel: My goal is to reduce cost per terabyte on a quarterly basis—in fact, it’s a part of how my job performance is evaluated. Ideally, I can achieve a 5-10% cost reduction per terabyte per quarter, which is a number based on historical price trends and our performance for the past 10 years. That savings is achieved in three primary ways: 1) lowering the actual cost of drives by negotiating with vendors, 2) occasionally moving to higher drive densities, and 3) increasing the slot density of pod chassis. (We moved from 45 drives to 60 drives in 2016, and as we look toward our next Storage Pod version we’ll consider adding more slots per chassis).
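
For readers who want to see what that quarterly target compounds to over a year, the arithmetic is short; the starting price below is a placeholder, not an actual Backblaze figure.

```python
# What a 5-10% quarterly cost-per-terabyte reduction compounds to over four quarters.
start_per_tb = 20.00  # hypothetical $/TB at the beginning of the year

for quarterly_cut in (0.05, 0.10):
    price = start_per_tb
    for _ in range(4):
        price *= 1 - quarterly_cut
    annual_reduction = 1 - price / start_per_tb
    print(f"{quarterly_cut:.0%} per quarter -> {annual_reduction:.1%} lower $/TB after a year")
```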

Backblaze Director of Supply Chain holding World's Largest SSD Nimbus Data ExaDrive DC100 100TB
Backblaze Director of Supply Chain, considering the future…

Meeting Storage Demand

Backblaze: When it comes to how this actually works in our operating environment, how do you stay ahead of the demand for storage capacity?

Ariel: We maintain several months of the drive space that we would need to meet capacity based on predicted demand from current customers as well as projected new customers. Those buffers are tied to what we expect will be the fill-time of our Vaults. As conditions change, we could decide to extend those buffers. Demand could increase unexpectedly, of course, so our goal is to reduce the fill-time for Vaults so we can bring more storage online as quickly as possible, if it’s needed.
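
The buffer arithmetic itself is simple enough to sketch: how long does deployed-but-empty capacity last at a given ingest rate? The 1,200-drives-per-Vault figure comes from earlier in the interview; the drive size, parity overhead, and ingest rate below are placeholders, not actual Backblaze numbers.

```python
def vault_capacity_tb(drives_per_vault=1200, drive_tb=16, overhead=0.15):
    """Usable capacity of one Vault after setting aside parity/overhead."""
    return drives_per_vault * drive_tb * (1 - overhead)

def weeks_of_runway(empty_vaults, ingest_tb_per_week):
    """How many weeks the empty Vaults last at the given ingest rate."""
    return empty_vaults * vault_capacity_tb() / ingest_tb_per_week

print(f"One Vault holds ~{vault_capacity_tb():,.0f} usable TB")
print(f"Runway with 3 empty Vaults at 2,000 TB/week: {weeks_of_runway(3, 2000):.1f} weeks")
```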

Backblaze: Obviously we don’t operate in a vacuum, so do you worry about how trade challenges, weather, and other factors might affect your ability to obtain drives?

Ariel: (Laughs) Sure, I’ve got plenty to worry about. But we’ve proved to be pretty resourceful in the past when we’re challenged. For example: During the worldwide drive shortage, due to flooding in Southeast Asia, we recruited an army of family and friends to buy drives all over and send them to us. That kept us going during the shortage.

We are vulnerable, of course, if there’s a drive production shortage. Some data center hardware is manufactured in China, and I know that some of those prices have gone up. That said, all of our drives are manufactured in Thailand or Taiwan. Our Storage Pod chassis are made in the U.S.A. Big picture, we try to anticipate any shortages and plan accordingly if we can.

A pile of consumer hard drives still in packaging
A Hard Drive Farming Harvest.

Data Durability

Backblaze: Time for a personal question… What does data durability mean to you? What do you do to help boost data durability, and spread drive hardware risk and exposure?

Ariel: That is personal. (Laughs). But also a good question, and not really personal at all: Everyone at Backblaze contributes to our data durability in different ways.

My role in maintaining eleven nines of durability is, first and foremost: Never running out of space. I achieve this by maintaining close relationships with manufacturers to ensure production supply isn’t interrupted; by improving our testing and qualification processes to catch problems before drives ever enter production; and finally by monitoring performance and replacing drives before they fail. Otherwise it’s just monitoring the company’s burn rates and managing the buffer between our drive capacity and our data under management.

When we are in a good state for space considerations, then I need to look to the future to ensure I’m providing for more long-term issues. This is where iterating on and improving our Storage Pod design comes in. I don’t think that gets factored into our durability calculus, but designing for the future is as important as anything else. We need to be prepared with hardware that can support ever-increasing hard drive capacities—and the fill- and rebuild times that come with those increases—effectively.

Backblaze: That begs the next question: As drive sizes get larger, rebuild times get longer when it’s necessary to recover data on a drive. Is that still a factor, given Backblaze’s durability architecture?

Ariel: We attempt to identify and replace problematic drives before they actually fail. When a drive starts failing, or is identified for replacement, the team always attempts to restore as much data as possible off of it because that ensures we have the most options for maintaining data durability. The rebuild times for larger drives are challenging, especially as we move to 16 TB and beyond. We are looking to improve the throughput of our Pods before making the move to 20 TB in order to maintain fast enough rebuild times.

And then, supporting all of this is our Vault architecture, which ensures that data will be intact even if individual drives fail. That’s the value of the architecture.

Longer term, one thing we’re looking toward is phasing out the SATA controller/port multiplier combo. This might be more technical than some of our readers want to go, but: SAS controllers are a more commonly used method in dense storage servers. Using SATA drives with SAS controllers can provide as much as a 2x improvement in system throughput vs. the SATA controller/port multiplier approach, which is important to me, even though serial ATA (SATA) port multipliers are slightly less expensive. When we started our Storage Pod construction, using the SATA controller/port multiplier combo was a great way to keep costs down. But since then, the cost for using SAS controllers and backplanes has come down significantly.

But now we’re preparing for how we’ll handle 18 and 20 TB drives, and improving system throughput will be extremely important to manage that density. We may even consider using SAS drives, though they are slightly more expensive. We need to consider all options in order to meet our scaling, durability, and cost targets.
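
To see why the Vault architecture Ariel describes is so forgiving of individual drive failures, here’s a toy calculation: the odds that enough drives in one group fail within a single rebuild window to put data at risk. The shard count, tolerated losses, and rebuild window are assumptions for illustration, and only the roughly 2% annual failure rate echoes the figure Ariel mentions at the end of the interview.

```python
# Toy durability estimate for an erasure-coded group of drives.
# Shard counts, failure rate, and rebuild window are illustrative assumptions.
from math import comb

n_shards = 20                 # drives a piece of data is spread across
max_losses = 3                # simultaneous failures the coding can tolerate
annual_failure_rate = 0.02    # roughly the fleet-wide rate cited in this interview
rebuild_days = 7              # assumed window to detect and rebuild a failed drive

p = annual_failure_rate * rebuild_days / 365  # failure probability within one window

def prob_at_most(k: int) -> float:
    """Binomial probability of k or fewer failures among the shards."""
    return sum(comb(n_shards, i) * p**i * (1 - p)**(n_shards - i) for i in range(k + 1))

p_data_at_risk = 1 - prob_at_most(max_losses)
print(f"P(more than {max_losses} of {n_shards} shards fail in one window) = {p_data_at_risk:.2e}")
```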

Hard drives in wrappers
A Vault in the Making.

Backblaze’s Relationship with Drive Manufacturers

Backblaze: So, there’s an elephant in the room when it comes to Backblaze and hard drives: Our quarterly Hard Drive Stats reports. We’re the only company sharing that kind of data openly. How have the Drive Stats blog posts affected your purchasing relationship with the drive manufacturers?

Ariel: Due to the quantities we need and the visibility of the posts, drive manufacturers are motivated to give us their best possible product. We have a great purchasing relationship with all three companies and they update us on their plans and new drive models coming down the road.

Backblaze: Do you have any sense for what the hard drive manufacturers think of our Drive Stats blog posts?

Ariel: I know that every drive manufacturer reads our Drive Stats reports, including very senior management. I’ve heard stories of company management learning of the release of a new Drive Stats post and gathering together in a conference room to read it. I think that’s great.

Ultimately, we believe that Drive Stats is good for consumers. We wish more companies with large data centers did this. We believe it helps keep everyone open and honest. The adage is that competition is ultimately good for everyone, right?

It’s true that Western Digital, at one time, was put off by the visibility Drive Stats gave into how their models performed in our data centers (which we’ve always said is a lot different from how drives are used in homes and most businesses). Then they realized the marketing value for them—they get a lot of exposure in the blog posts—and they came around.

Backblaze: So, do you believe that the Drive Stats posts give Backblaze more influence with drive manufacturers?

Ariel: The truth is that most hard drives go directly into tier-one and -two data centers, and not into smaller data centers, homes, or businesses. The manufacturers are stamping out drives in exabyte chunks. A single tier-one data center consumes maybe 500,000 times what Backblaze does in drives. We can’t compare in purchasing power to those guys, but Drive Stats does give us visibility and some influence with the manufacturers. We have close communications with the manufacturers and we get early versions of new drives to evaluate and test. We’re on their radar and I believe they value their relationship with us, as we do with them.

Backblaze: A final question. In your opinion, are hard drives getting better?

Ariel: Yes. Drives are amazingly durable for how hard they’re used. Just think of the forces inside a hard drive, how hard they spin, and how much engineering it takes to write and read the data on the platters. I came from a background in precision optics, which requires incredibly precise tolerances, and was shocked to learn that hard drives are designed in an equally precise tolerance range, yet are made in the millions and sold as a commodity. Despite all that, they have only about a 2% annual failure rate in our centers. That’s pretty good, I think.


Thanks, Ariel. Here’s hoping this look at how we source petabytes of storage is useful for your own terabyte, petabyte, or… exabyte storage needs. If you’re working on the latter, or anything in between, we’d love to hear about what you’re up to in the comments.
