Amrit Singh, Author at Backblaze Blog | Cloud Storage & Cloud Backup

Navigating Cloud Storage: What is Latency and Why Does It Matter?
Tue, 27 Feb 2024

Latency is an important factor impacting performance and user experience. Let's talk about what it is and some ways to get better performance.


In today’s bandwidth-intensive world, latency is an important factor that can impact performance and the end-user experience for modern cloud-based applications. For many CTOs, architects, and decision-makers at growing small and medium-sized businesses (SMBs), understanding and reducing latency is not just a technical need but also a strategic play.

Latency, or the time it takes for data to travel from one point to another, affects everything from how snappy or responsive your application may feel to content delivery speeds to media streaming. As infrastructure increasingly relies on cloud object storage to manage terabytes or even petabytes of data, optimizing latency can be the difference between success and failure. 

Let’s get into the nuances of latency and its impact on cloud storage performance.

Upload vs. Download Latency: What’s the Difference?

In the world of cloud storage, you’ll typically encounter two forms of latency: upload latency and download latency. Each can impact the responsiveness and efficiency of your cloud-based application.

Upload Latency

Upload latency refers to the delay when data is sent from a client or user’s device to the cloud. Live streaming applications, backup solutions, or any application that relies heavily on real-time data uploading will experience hiccups if upload latency is high, leading to buffering delays or momentary stream interruptions.

Download Latency

Download latency, on the other hand, is the delay when retrieving data from the cloud to the client or end user’s device. It is particularly relevant for content delivery applications, such as on-demand video streaming platforms, e-commerce, or other web-based applications. Reducing download latency ensures content is swiftly delivered to the end user, creating a snappy web experience and a more favorable impression of your application.

Ideally, you’ll want to optimize for latency in both directions, but, depending on your use case and the type of application you are building, it’s important to understand the nuances of upload and download latency and their impact on your end users.
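To make those two directions measurable, here is a minimal sketch (hypothetical helper names, standard library only) that splits the download path's time to first byte (TTFB) into TCP connect, TLS handshake, and server response phases:

```python
import socket
import ssl
import time

def breakdown(t0, t_connect, t_tls, t_first_byte):
    """Convert raw timestamps into per-phase latency figures in milliseconds."""
    return {
        "tcp_connect_ms": (t_connect - t0) * 1000,
        "tls_handshake_ms": (t_tls - t_connect) * 1000,
        "ttfb_ms": (t_first_byte - t0) * 1000,
    }

def measure_ttfb(host, path="/"):
    """Time an HTTPS GET to its first response byte, phase by phase."""
    t0 = time.perf_counter()
    sock = socket.create_connection((host, 443), timeout=10)  # TCP handshake
    t_connect = time.perf_counter()
    tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
    t_tls = time.perf_counter()  # TLS handshake complete
    tls.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    tls.recv(1)  # blocks until the first response byte arrives
    t_first_byte = time.perf_counter()
    tls.close()
    return breakdown(t0, t_connect, t_tls, t_first_byte)

# measure_ttfb("www.example.com")
# e.g. {'tcp_connect_ms': ..., 'tls_handshake_ms': ..., 'ttfb_ms': ...}
```

Running the same measurement from clients in different regions is a quick way to see where download latency actually accumulates; upload latency can be probed analogously by timing how long a PUT or POST body takes to send.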

Decoding Cloud Latency: Key Factors and Their Impact

When it comes to cloud storage, latency is influenced by a number of factors, each with an impact on the overall performance of your application. Let’s explore a few of these key factors.

Network Congestion

Like traffic on a freeway, packets of data can experience congestion on the internet. This can lead to slower data transmission speeds, especially during peak hours, leading to a laggy experience. Internet connection quality and the capacity of networks can also contribute to this congestion.

Geographical Distance

Often overlooked, the physical distance from the client or end user’s device to the cloud origin store can have an impact on latency. The farther the distance from the client to the server, the farther the data has to traverse and the longer it takes for transmission to complete, leading to higher latency.
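Geography sets a hard floor on latency: light in optical fiber covers roughly 200 km per millisecond (about two-thirds the speed of light in a vacuum), so no amount of software tuning beats physics. A back-of-the-envelope sketch, keeping in mind that real routes are never straight lines:

```python
FIBER_KM_PER_MS = 200.0  # light in fiber travels ~200,000 km/s, i.e. ~200 km per ms

def min_rtt_ms(distance_km):
    """Theoretical best-case round trip time over a straight fiber path."""
    return 2 * (distance_km / FIBER_KM_PER_MS)

# New York to London is roughly 5,570 km, so the round trip can never
# beat about 56 ms, before any server processing time at all.
print(round(min_rtt_ms(5570), 1))  # → 55.7
```

This is why a data center an ocean away adds tens of milliseconds that no hardware upgrade can claw back.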

Infrastructure Components

The quality of infrastructure, including routers, switches, and cables, may affect network performance and latency numbers. Modern hardware, such as fiber-optic cables, can reduce latency, unlike outdated systems that don’t meet current demands. Often, you don’t have full control over all of these infrastructure elements, but awareness of potential bottlenecks may be helpful, guiding upgrades wherever possible.

Technical Processes

  • TCP/IP handshake: Establishing a connection between a client and a server involves a handshake process, which introduces a delay, especially for new connections.
  • DNS resolution: The time it takes to resolve a domain name to its IP address adds to latency; faster DNS resolution shaves a small amount off the total.
  • Data routing: Data does not necessarily travel in a straight line from its source to its destination. Latency is influenced by the efficiency of routing algorithms and the number of hops the data must make.
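The first two of those are easy to observe directly. This sketch (hypothetical function name, standard library only) times DNS resolution and the TCP three-way handshake separately, since each contributes its own slice of total latency:

```python
import socket
import time

def resolve_and_connect(host, port=443):
    """Time DNS resolution and the TCP handshake as separate phases."""
    t0 = time.perf_counter()
    ip = socket.gethostbyname(host)  # DNS resolution
    t_dns = time.perf_counter()
    sock = socket.create_connection((ip, port), timeout=10)  # TCP three-way handshake
    t_tcp = time.perf_counter()
    sock.close()
    return {
        "ip": ip,
        "dns_ms": (t_dns - t0) * 1000,
        "tcp_connect_ms": (t_tcp - t_dns) * 1000,
    }

# resolve_and_connect("www.example.com")
# e.g. {'ip': '...', 'dns_ms': ..., 'tcp_connect_ms': ...}
```

Note that a second call is usually much faster than the first: the DNS answer gets cached, and persistent connections skip the handshake entirely.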

For businesses that rely on frequent access to data stored in the cloud, reducing latency and improving application performance is important. That may mean selecting providers with strategically positioned data centers, fine-tuning network configurations, and understanding how internet infrastructure affects application latency.

Minimizing Latency With Content Delivery Networks (CDNs)

Further reducing latency in your application may be achieved by layering a content delivery network (CDN) in front of your origin storage. CDNs help reduce the time it takes for content to reach the end user by caching data in distributed servers that store content across multiple geographic locations. When your end-user requests or downloads content, the CDN delivers it from the nearest server, minimizing the distance the data has to travel, which significantly reduces latency.

Backblaze B2 Cloud Storage integrates with multiple CDN solutions, including Fastly, bunny.net, and Cloudflare, providing a performance advantage. And, Backblaze offers the additional benefit of free egress between where the data is stored and the CDN’s edge servers. This not only reduces latency, but also optimizes bandwidth usage, making it cost-effective for businesses building bandwidth-intensive applications such as on-demand media streaming.

To get slightly into the technical weeds, CDNs essentially cache content at the edge of the network, meaning that once content is stored on a CDN server, subsequent requests do not need to go back to the origin server to request data. 

This reduces the load on the origin server and reduces the time needed to deliver the content to the user. For companies using cloud storage, integrating CDNs into their infrastructure is an effective configuration to improve the global availability of content, making it an important aspect of cloud storage and application performance optimization.
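One practical way to see this caching in action is to inspect the response headers a CDN attaches. Header names vary by provider (x-cache and cf-cache-status below are common conventions, not a universal standard), but a small classifier, which assumes header keys have already been lowercased, illustrates the idea:

```python
def cache_status(headers):
    """Classify a CDN response as HIT, MISS, or UNKNOWN from its headers.
    Expects header names already normalized to lowercase; names vary by CDN."""
    for name in ("x-cache", "cf-cache-status", "x-cache-status"):
        value = headers.get(name, "").upper()
        if "HIT" in value:
            return "HIT"
        if "MISS" in value or "EXPIRED" in value:
            return "MISS"
    return "UNKNOWN"

print(cache_status({"x-cache": "HIT, HIT"}))      # → HIT
print(cache_status({"cf-cache-status": "MISS"}))  # → MISS
```

A rising share of HITs across repeated requests is exactly the "subsequent requests do not need to go back to the origin" behavior in action.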

Case Study: Musify Improves Latency and Reduces Cloud Bill by 70%

To illustrate the impact of reduced latency on performance, consider the example of music streaming platform Musify. By moving from Amazon S3 to Backblaze B2 and leveraging the partnership with Cloudflare, Musify significantly improved its service offering. Musify egresses about 1PB of data per month, which, under traditional cloud storage pricing models, can lead to significant costs. Because Backblaze and Cloudflare are both members of the Bandwidth Alliance, Musify now has no data transfer costs, contributing to an estimated 70% reduction in cloud spend. And, thanks to the high cache hit ratio, 90% of the transfer takes place in the CDN layer, which helps maintain high performance, regardless of the location of the file or the user.
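The arithmetic behind that 90% figure is worth spelling out: only cache misses ever reach origin storage, so origin egress scales with one minus the cache hit ratio. The numbers below are illustrative, modeled loosely on the Musify example:

```python
def origin_egress_tb(total_egress_tb, cache_hit_ratio):
    """Traffic that must come from origin storage; the rest is served
    from the CDN edge."""
    return total_egress_tb * (1 - cache_hit_ratio)

# ~1PB (1,000TB) served per month at a 90% cache hit ratio leaves
# only about a tenth of that traffic touching origin storage.
print(round(origin_egress_tb(1000, 0.90)))  # → 100
```

Under a provider that charges per-GB egress, that multiplier applies directly to the bill, which is why the hit ratio matters as much as the storage price.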

Latency Wrap Up

As we wrap up our look at the role latency plays in cloud-based applications, it’s clear that understanding and strategically reducing latency is a necessary approach for CTOs, architects, and decision-makers building many of the modern applications we all use today. There are several factors that impact upload and download latency, and it’s important to understand the nuances to effectively improve performance.

Additionally, Backblaze B2’s integrations with CDNs like Fastly, bunny.net, and Cloudflare offer a cost-effective way to improve performance and reduce latency. The strategic decisions Musify made demonstrate how reducing latency with a CDN can significantly improve content delivery while saving on egress costs, and reducing overall business OpEx.

For additional information and guidance on reducing latency, improving TTFB numbers and overall performance, the insights shared in “Cloud Performance and When It Matters” offer a deeper, technical look.

If you’re keen to explore further into how an object storage platform may support your needs and help scale your bandwidth-intensive applications, read more about Backblaze B2 Cloud Storage.

The Power of Specialized Cloud Providers: A Game Changer for SaaS Companies
Tue, 13 Jun 2023

Cloud-based tech stacks have moved beyond a one-size-fits-all approach. Here's how specialized cloud providers can help your SaaS company customize its tech stack.


“Nobody ever got fired for buying AWS.” It’s true: AWS’s one-size-fits-all solution worked great for most businesses, and those businesses made the shift away from the traditional model of on-prem and self-hosted servers—what we think of as Cloud 1.0—to an era where AWS was the cloud, the one and only, which is what we call Cloud 2.0. However, as the cloud landscape evolves, it’s time to question the old ways. Maybe nobody ever got fired for buying AWS, but these days, you can certainly get a lot of value (and kudos) for exploring other options. 

Developers and IT teams might hesitate when it comes to moving away from AWS, but AWS comes with risks, too. If you don’t have the resources to manage and maintain your infrastructure, costs can get out of control, for one. As we enter Cloud 3.0 where the landscape is defined by the open, multi-cloud internet, there is an emerging trend that is worth considering: the rise of specialized cloud providers.

Today, I’m sharing how software as a service (SaaS) startups and modern businesses can take advantage of these highly-focused, tailored services, each specializing and excelling in specific areas like cloud storage, content delivery, cloud compute, and more. Building on a specialized stack offers more control, return on investment, and flexibility, while being able to achieve the same performance you expect from hyperscaler infrastructure.

From a cost of goods sold perspective, AWS pricing wasn’t a great fit. From an engineering perspective, we didn’t want a net-new platform. So the fact that we got both with Backblaze—a drop-in API replacement with a much better cost structure—it was just a no-brainer.

—Rory Petty, Co-Founder & CTO, Tribute

The Rise of Specialized Cloud Providers

Specialized providers—including content delivery networks (CDNs) like Fastly, bunny.net, and Cloudflare, as well as cloud compute providers like Vultr—offer services that focus on a particular area of the infrastructure stack. Rather than trying to be everything to everyone, like the hyperscalers of Cloud 2.0, they do one thing and do it really well. Customers get best-of-breed services that allow them to build a tech stack tailored to their needs. 

Use Cases for Specialized Cloud Providers

There are a number of businesses that might benefit from switching from hyperscalers to specialized cloud providers.

In order for businesses to take advantage of the benefits (since most applications rely on more than just one service), these services must work together seamlessly. 

Let’s Take a Closer Look at How Specialized Stacks Can Work For You

If you’re wondering how exactly specialized clouds can “play well with each other,” we ran a whole series of application storage webinars that talk through specific examples and use cases. I’ll share what’s in it for you below.

1. Low Latency Multi-Region Content Delivery with Fastly and Backblaze

Did you know a 100-millisecond delay in website load time can hurt conversion rates by 7%? In this session, Pat Patterson from Backblaze and Jim Bartos from Fastly discuss the importance of speed and latency in user experience. They highlight how Backblaze’s B2 Cloud Storage and Fastly’s content delivery network work together to deliver content quickly and efficiently across multiple regions. Businesses can ensure that their content is delivered with low latency, reducing delays and optimizing user experience regardless of the user’s location.

2. Scaling Media Delivery Workflows with bunny.net and Backblaze

Delivering content to your end users at scale can be challenging and costly. Users expect exceptional web and mobile experiences with snappy load times and zero buffering. Anything less than an instantaneous response may cause them to bounce. 

In this webinar, Pat Patterson demonstrates how to efficiently scale your content delivery workflows from content ingestion, transcoding, and storage to last-mile acceleration via the bunny.net CDN. Pat demonstrates how to build a video hosting platform called “Cat Tube” and shows how to upload a video and play it using the HTML5 video element with controls. Watch the webinar and download the demo code to try it yourself.

3. Balancing Cloud Cost and Performance with Fastly and Backblaze

With a global economic slowdown, IT and development teams are looking for ways to slash cloud budgets without compromising performance. E-commerce, SaaS platforms, and streaming applications all rely on high-performance infrastructure, but balancing bandwidth and storage costs can be challenging. In this 45-minute session, we explored how to recession-proof your growing business with key cloud optimization strategies, including ways to leverage Fastly’s CDN to balance bandwidth costs while avoiding performance tradeoffs.

4. Reducing Cloud OpEx Without Sacrificing Performance and Speed

Greg Hamer from Backblaze and DJ Johnson from Vultr explore the benefits of building on best-of-breed, specialized cloud stacks tailored to your business model, rather than being locked into traditional hyperscaler infrastructure. They cover real-world use cases, including:

  • How Can Stock Photo broke free from AWS and reduced their cloud bill by 55% while achieving 4x faster generation.
  • How Monument Labs launched a new cloud-based photo management service to 25,000+ users.
  • How Black.ai processes 1000s of files simultaneously, with a significant reduction of infrastructure costs.

5. Leveling Up a Global Gaming Platform while Slashing Cloud Spend by 85%

James Ross of Nodecraft, an online gaming platform that aims to make gaming online easy, shares how he moved his global game server platform from Amazon S3 to Backblaze B2 for greater flexibility and 85% savings on storage and egress. He discusses the challenges of managing large files over the public internet, which can result in expensive bandwidth costs. By storing game titles on Backblaze B2 and delivering them through Cloudflare’s CDN, they achieve reduced latency since games are cached at the edge, and pay zero egress fees thanks to the Bandwidth Alliance. Nodecraft also benefited from Universal Data Migration, which allows customers to move large amounts of data from any cloud services or on-premises storage to Backblaze’s B2 Cloud Storage, managed by Backblaze and free of charge.

Migrating From a Hyperscaler

Though it may seem daunting to transition from a hyperscaler to a specialized cloud provider, it doesn’t have to be. Many specialized providers offer tools and services to make the transition as smooth as possible. 

  • S3-compatible APIs, SDKs, CLI: Interface with storage as you would with Amazon S3—switching can be as easy as dropping in a new storage target.
  • Universal Data Migration: Free and fully managed migrations to make switching as seamless as possible.
  • Free egress: Move data freely with the Bandwidth Alliance and other partnerships between specialized cloud storage providers.
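To make the first bullet concrete: with an S3-compatible provider, “dropping in a new storage target” usually amounts to changing the endpoint URL and credentials your SDK client is built with. The endpoint below follows Backblaze B2’s region-specific form, but treat the exact value and key names as placeholders; use the endpoint and keys shown in your own account:

```python
def s3_client_config(endpoint, key_id, secret):
    """Build the keyword arguments an S3 SDK client takes; switching
    providers changes the endpoint, not your application code."""
    return {
        "service_name": "s3",
        "endpoint_url": endpoint,
        "aws_access_key_id": key_id,
        "aws_secret_access_key": secret,
    }

cfg = s3_client_config(
    "https://s3.us-west-004.backblazeb2.com",  # example B2 endpoint form
    "YOUR_KEY_ID",                             # placeholder credentials
    "YOUR_APPLICATION_KEY",
)

# With boto3 installed, the same call shape works against AWS or B2:
#   client = boto3.client(**cfg)
#   client.upload_file("video.mp4", "my-bucket", "video.mp4")
```

Because the rest of the application only talks to the client object, the storage backend becomes a configuration detail rather than an architectural commitment.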

As the decision maker at your growing SaaS company, it’s worth considering whether a specialized cloud stack could be a better fit for your business. By doing so, you could potentially unlock cost savings, improve performance, and gain flexibility to adapt your services to your unique needs. The one-size-fits-all approach is no longer the only option out there.

Want to Test It Out Yourself?

Take a proactive approach to cloud cost management: Get 10GB free to test and validate your proof of concept (POC) with Backblaze B2. All it takes is an email to get started.


The Free Credit Trap: Building SaaS Infrastructure for Long-Term Sustainability
Tue, 23 May 2023

Many businesses use AWS's free cloud credits to create their cloud-based infrastructure. But those AWS bills are no fun. Here are some ways you can leverage free credits without getting locked into AWS long term.


In today’s economic climate, cost cutting is on everyone’s mind, and businesses are doing everything they can to save money. But, equally important, they can’t afford to compromise the integrity of their infrastructure or the quality of the customer experience. As a startup, taking advantage of free cloud credits from cloud providers like Amazon AWS, especially at a time like this, seems enticing.

Using those credits can make sense, but it takes more planning than you might think to use them in a way that allows you to continue managing cloud costs once the credits run out. 

In this blog post, I’ll walk through common use cases for credit programs, the risks of using credits, and alternatives that help you balance growth and cloud costs.

The True Cost of “Free”

This post is part of a series exploring free cloud credits and the hidden complexities and limitations that come with these offers.

The Shift to Cloud 3.0

As we see it, there have been three stages of “The Cloud” in its history:

Phase 1: What is the Cloud?

Starting around when Backblaze was founded in 2007, the public cloud was in its infancy. Most people weren’t clear on what cloud computing was or if it was going to take root. Businesses were asking themselves, “What is the cloud and how will it work with my business?”

Phase 2: Cloud = Amazon Web Services

Fast forward to 10 years later, and AWS and “The Cloud” started to become synonymous. Amazon had nearly 50% of market share of public cloud services, more than Microsoft, Google, and IBM combined. “The Cloud” was well-established, and for most folks, the cloud was AWS.

Phase 3: Multi-Cloud

Today, we’re in Phase 3 of the cloud. “The Cloud” of today is defined by the open, multi-cloud internet. Traditional cloud vendors are expensive, complicated, and seek to lock customers into their walled gardens. Customers have come to realize that (see below) and to value the benefits they can get from moving away from a model that demands exclusivity in cloud infrastructure.

Philo Hermans (@Philo01) put it this way on Twitter:

“I migrated most infrastructure away from AWS. Now that I think about it, those AWS credits are a well-designed trap to create a vendor lock in, and once your credits expire and you notice the actual cost, chances are you are in shock and stuck at the same time [laughing emoji].”

In Cloud Phase 3.0, companies are looking to rein in spending, and are increasingly seeking specialized cloud providers offering affordable, best-of-breed services without sacrificing speed and performance. How do you balance that with the draw of free credits? I’ll get into that next, and the two are far from mutually exclusive.

Getting Hooked on Credits: Common Use Cases

So, you have $100k in free cloud credits from AWS. What do you do with them? Well, in our experience, there are a wide range of use cases for credits, including:

  • App development and testing: Teams may leverage credits to run an app development proof of concept (PoC) utilizing Amazon EC2, RDS, and S3 for compute, database, and storage needs, for example, but without understanding how these will scale in the longer term, there may be risks involved. Spinning up EC2 instances can quickly lead to burning through your credits and getting hit with an unexpected bill.
  • Machine learning (ML): Machine learning models require huge amounts of computing power and storage. Free cloud credits might be a good way to start, but you can expect them to quickly run out if you’re using them for this use case. 
  • Data analytics: While free cloud credits may cover storage and computing resources, data transfer costs might still apply. Analyzing large volumes of data or frequently transferring data in and out of the cloud can lead to unexpected expenses.
  • Website hosting: Hosting your website with free cloud credits can eliminate the up front infrastructure spend and provide an entry point into the cloud, but remember that when the credits expire, traffic spikes you should be celebrating can crater your bottom line.
  • Backup and disaster recovery: Free cloud credits may have restrictions on data retention, limiting the duration for which backups can be stored. This can pose challenges for organizations requiring long-term data retention for compliance or disaster recovery purposes.

All of this is to say: proper configuration, long-term management and upkeep, and cost optimization all play a role in how you scale on monolith platforms. It is important to note that the risks and benefits mentioned above are general considerations, and specific terms and conditions may vary depending on the cloud service provider and the details of their free credit offerings. It’s crucial to thoroughly review the terms and plan accordingly to maximize the benefits and mitigate the risks associated with free cloud credits for each specific use case. (And, given the complicated pricing structures we mentioned before, that might take some effort.)

Monument Uses Free Credits Wisely

Monument, a photo management service with a strong focus on security and privacy, utilized free startup credits from AWS. But, they knew free credits wouldn’t last forever. Monument’s co-founder, Ercan Erciyes, realized they’d ultimately lose money if they built the infrastructure for Monument Cloud on AWS.

He also didn’t want to accumulate tech debt and become locked in to AWS. Rather than using the credits to build a minimum viable product as fast as humanly possible, he used the credits to develop the AI model, and built infrastructure that could scale as they grew.


The Risks of AWS Credits: Lessons from Founders

If you’re handed $100,000 in credits, it’s crucial to be aware of the risks and implications that come along with it. While it may seem like an exciting opportunity to explore the capabilities of the cloud without immediate financial constraints, there are several factors to consider:

  1. The temptation to overspend: With a credit balance at your disposal just waiting to be spent, there is a possibility of underestimating the actual costs of your cloud usage. This can lead to a scenario where you inadvertently exhaust the credits sooner than anticipated, leaving you with unexpected expenses that may strain your budget.
  2. The shock of high bills once credits expire: Without proper planning and monitoring of your cloud usage, the transition from “free” to paying for services can result in high bills that catch you off guard. It is essential to closely track your cloud usage throughout the credit period and have a clear understanding of the costs associated with the services you’re utilizing. Or better yet, use those credits for a discrete project to test your PoC or develop your minimum viable product, and plan to build your long-term infrastructure elsewhere.
  3. The risk of vendor lock-in: As you build and deploy your infrastructure within a specific cloud provider’s ecosystem, the process of migrating to an alternative provider can seem complex and can definitely be costly (shameless plug: at Backblaze, we’ll cover your migration over 50TB). Vendor lock-in can limit your flexibility, making it challenging to adapt to changing business needs or take advantage of cost-saving opportunities in the future.

The problems are nothing new for founders, as the online conversation bears out.

First, there’s the old surprise bill:

Ajul Sahul (@anjuls) shared:

“Similar story, AWS provided us free credits so we though we will use it for some data processing tasks. The credit expired after one year and team forgot about the abandoned resources to give a surprise bill. Cloud governance is super importance right from the start.”

Even with some optimization, AWS cloud spend can still be pretty “obscene” as this user vividly shows:

DHH (@dhh) wrote:

“We spent $3,201,564.24 on cloud in 2022 at @37signals, mostly AWS. $907,837.83 on S3. $473,196.30 on RDS. $519,959.60 on OpenSearch. $123,852.30 on Elasticache. This is with long commits (S3 for 4 years!!), reserved instances, etc. Just obscene. Will publish full accounting soon.”

There’s the founder raising rounds just to pay AWS bills:

Guille Ojeda (@itsguilleojeda) noted:

“Tech first startups raise their first rounds to pay AWS bills. By the way, there's free credits, in case you didn't know. Up to $100k. And you'll still need funding.”

Some use the surprise bill as motivation to get paying customers.

Lastly, there’s the comic relief:

And Mrinal Wahal (@MrinalWahal) asked:

“Yeah high credit card bills are scary but have you forgotten turning off your AWS instances?”

Strategies for Balancing Growth and Cloud Costs

Where does that leave you today? Here are some best practices startups and early founders can implement to balance growth and cloud costs:

  1. Establishing a cloud cost management plan early on.
  2. Monitoring and optimizing cloud usage to avoid wasted resources.
  3. Leveraging multiple cloud providers.
  4. Moving to a new cloud provider altogether.
  5. Setting aside some of your credits for the migration.

1. Establishing a Cloud Cost Management Plan

Put some time into creating a well-thought-out cloud cost management strategy from the beginning. This includes closely monitoring your usage, optimizing resource allocation, and planning for the expiration of credits to ensure a smooth transition. By understanding the risks involved and proactively managing your cloud usage, you can maximize the benefits of the credits while minimizing potential financial setbacks and vendor lock-in concerns.

2. Monitoring and Optimizing Cloud Usage

Monitoring and optimizing cloud usage plays a vital role in avoiding wasted resources and controlling costs. By regularly analyzing usage patterns, organizations can identify opportunities to right-size resources, adopt automation to reduce idle time, and leverage cost-effective pricing options. Effective monitoring and optimization ensure that businesses are only paying for the resources they truly need, maximizing cost efficiency while maintaining the necessary levels of performance and scalability.
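As a concrete example of right-sizing, a useful first pass is simply scanning utilization samples for resources that sit nearly idle. The record shape below is hypothetical; adapt it to whatever your monitoring or billing export actually produces:

```python
def flag_idle_resources(usage, cpu_threshold=5.0):
    """Return IDs of resources whose average CPU utilization falls below
    a threshold -- likely candidates for right-sizing or shutdown."""
    idle = []
    for record in usage:
        samples = record["cpu_percent_samples"]
        if sum(samples) / len(samples) < cpu_threshold:
            idle.append(record["resource_id"])
    return idle

usage = [
    {"resource_id": "web-1", "cpu_percent_samples": [42.0, 55.1, 38.9]},
    {"resource_id": "batch-old", "cpu_percent_samples": [1.2, 0.8, 2.1]},
]
print(flag_idle_resources(usage))  # → ['batch-old']
```

Even a simple report like this, run weekly, catches the abandoned resources that otherwise show up only as a surprise line item on the bill.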

3. Leveraging Multiple Cloud Providers

By adopting a multi-cloud strategy, businesses can diversify their cloud infrastructure and services across different providers. This allows them to benefit from each provider’s unique offerings, such as specialized services, geographical coverage, or pricing models. Additionally, it provides a layer of protection against potential service disruptions or price increases from a single provider. Adopting a multi-cloud approach requires careful planning and management to ensure compatibility, data integration, and consistent security measures across multiple platforms. However, it offers the flexibility to choose the best-fit cloud services from different providers, reducing dependency on a single vendor and enabling businesses to optimize costs while harnessing the capabilities of various cloud platforms.

4. Moving to a New Cloud Provider Altogether

If you’re already deeply invested in a major cloud platform, shifting away can seem cumbersome, but there may be long-term benefits that outweigh the short-term “pains” (this leads into the shift to Cloud 3.0). The process could involve re-architecting applications, migrating data, and retraining personnel on the new platform. However, factors such as pricing models, performance, scalability, or access to specialized services may win out in the end. It’s worth noting that many specialized providers have taken measures to “ease the pain” and make the transition away from AWS more seamless without overhauling code. For example, at Backblaze, we developed an S3-compatible API so switching providers is as simple as dropping in a new storage target.

5. Setting Aside Credits for the Migration

By setting aside credits for future migration, businesses can ensure they have the necessary resources to transition to a different provider without incurring significant up front expenses like egress fees to transfer large data sets. This strategic allocation of credits allows organizations to explore alternative cloud platforms, evaluate their pricing models, and assess the cost-effectiveness of migrating their infrastructure and services without worrying about being able to afford the migration.
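Rough numbers help with this planning. The sketch below estimates a one-time egress bill for a migration; the $0.09/GB default is an illustrative hyperscaler-style rate, not a quote, so substitute your provider’s current pricing:

```python
def migration_egress_cost(data_tb, per_gb_rate=0.09):
    """Estimate the one-time egress bill for moving data out of a cloud.
    The default rate is illustrative only -- check current pricing."""
    return data_tb * 1000 * per_gb_rate  # treating 1TB as 1,000GB for billing math

# Moving 50TB out at $0.09/GB runs roughly $4,500 -- the kind of bill
# worth reserving credits for, or having the destination provider cover.
print(round(migration_egress_cost(50)))  # → 4500
```

Comparing this figure against projected monthly savings at the new provider gives a simple payback period for the move.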

Welcome to Cloud 3.0: Alternatives to AWS

In 2022, David Heinemeier Hansson, the creator of Basecamp and Hey, announced that he was moving Hey’s infrastructure from AWS to on-premises. Hansson cited the high cost of AWS as one of the reasons for the move. His estimate? “We stand to save $7m over five years from our cloud exit,” he said.

Going back to on-premises solutions is certainly one answer to the problem of AWS bills. In fact, when we started designing Backblaze’s Personal Backup solution, we were faced with the same problem. Hosting data storage for our computer backup product on AWS was a non-starter—it was going to be too expensive, and our business wouldn’t be able to deliver a reasonable consumer price point and be solvent. So, we didn’t just invest in on-premises resources: We built our own Storage Pods, the first evolution of the Backblaze Storage Cloud. 

But, moving back to on-premises solutions isn’t the only answer—it’s just the only answer if it’s 2007 and your two options are AWS and on-premises solutions. The cloud environment as it exists today has better choices. We’ve now grown that collection of Storage Pods into the Backblaze B2 Storage Cloud, which delivers performant, interoperable storage at one-fifth the cost of AWS. And, we offer free egress to our content delivery network (CDN) and compute partners. Backblaze may provide an even more cost-effective solution for mid-sized SaaS startups looking to save on cloud costs while maintaining speed and performance.

As we transition to Cloud 3.0 in 2023 and beyond, companies are expected to undergo a shift, reevaluating their cloud spending to ensure long-term sustainability and directing saved funds into other critical areas of their businesses. The age of limited choices is over. The age of customizable cloud integration is here. 

So, shout out to David Heinemeier Hansson: We’d love to chat about your storage bills some time.

Want to Test It Yourself?

Take a proactive approach to cloud cost management: If you’ve got more than 50TB of data storage or want to check out our capacity-based pricing model, B2 Reserve, contact our Sales Team to test a proof of concept (PoC) for free with Backblaze B2. And, for the streamlined, self-serve option, all you need is an email to get started today.

FAQs About Cloud Spend

If you’re thinking about moving to Backblaze B2 after taking AWS credits, but you’re not sure if it’s right for you, we’ve put together some frequently asked questions that folks have shared with us before their migrations:

My cloud credits are running out. What should I do?

Backblaze’s Universal Data Migration service can help you off-load some of your data to Backblaze B2 for free. Speak with a migration expert today.

AWS has all of the services I need, and Backblaze only offers storage. What about the other services I need?

Shifting away from AWS doesn’t mean ditching the workflows you have already set up. You can migrate some of your data storage while keeping some on AWS or continuing to use other AWS services. Moreover, AWS may be overkill for small to midsize SaaS businesses with limited resources.

How should I approach a migration?

Identify the specific services and functionalities that your applications and systems require, such as CDN for content delivery or compute resources for processing tasks. Check out our partner ecosystem to identify other independent cloud providers that offer the services you need at a lower cost than AWS.

What CDN partners does Backblaze have?

Our CDN partners include Fastly, bunny.net, and Cloudflare, and we extend free egress to joint customers. With ease of use, predictable pricing, and zero egress, our joint solutions are perfect for businesses looking to reduce their IT costs, improve their operational efficiency, and increase their competitive advantage in the market.

What compute partners does Backblaze have?

Our compute partners include Vultr and Equinix Metal. You can connect Backblaze B2 Cloud Storage with Vultr’s global compute network to access, store, and scale application data on-demand, at a fraction of the cost of the hyperscalers.

The post The Free Credit Trap: Building SaaS Infrastructure for Long-Term Sustainability appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
https://www.backblaze.com/blog/the-free-credit-trap-building-saas-infrastructure-for-long-term-sustainability/feed/ 0
Announcing Tech Day ‘22: Live Tech Talks, Demos, and Dialogues https://www.backblaze.com/blog/announcing-tech-day-22-live-tech-talks-demos-and-dialogues/ https://www.backblaze.com/blog/announcing-tech-day-22-live-tech-talks-demos-and-dialogues/#respond Thu, 06 Oct 2022 16:00:20 +0000 https://www.backblaze.com/blog/?p=106961 Have you heard? Tech Day '22 takes place on October 31. Come connect with other tech pros and learn more about Backblaze B2 Cloud Storage solutions.

The post Announcing Tech Day ‘22: Live Tech Talks, Demos, and Dialogues appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>

For those looking to build and grow blazing applications and do more with their data, we’d like to welcome you to this year’s Tech Day ‘22. We have a great community that works with Backblaze B2 Cloud Storage, including our internal team, IT professionals, developers, tech decision makers, cloud partners and more—and we felt it was high time to bring you all together again to share ideas, discuss upcoming changes, win some swag, and network.

Join our Technical Evangelists in live interactive sessions, demos, and tech talks that help you unlock your cloud potential and put B2 Cloud Storage to work for you. Whatever your role in the tech world—or if you’re simply curious about leveraging the Backblaze B2 platform—we invite you to join us!

➔ Register Now

Here’s What to Expect at Tech Day ’22

Tech Day ’22 is happening October 31, 10 a.m. PT. Can’t make it? Sign up anyway and we’ll share the event recording straight to your inbox.

IaaS Unboxed

A live showcase of how organizations are using the Backblaze ecosystem to support their needs for backup, active archive, delivery from origin store, compute solutions, and more. You won’t want to miss:

  • How SaaS platforms with millions of daily active users scale while keeping infrastructure costs low.
  • How backup pros tier data from Veeam for virtual air gapping, as well as protection from disasters and ransomware.
  • How one global ad agency increased team access to content while achieving new sustainability goals.

Sneak Peek

An early look at the Q3 2022 Drive Stats data with Andy Klein as he walks through the latest learnings to inform your thinking and purchase decisions.

Hands-On Demos

Pat Patterson (Chief Technical Evangelist) and Greg Hamer (Senior Developer Evangelist) team up to facilitate an action-packed set of interactive sessions aimed at helping you do more in the cloud. If you don’t have an account already, you’ll definitely want to create a free Backblaze B2 account so you can follow along. All you need to do is sign up with your email and create a password—it’s really that easy.

  • Scaling a Social App with Appwrite: Appwrite is a self-hosted backend-as-a-service platform that provides developers with all the core APIs required to build any application. Appwrite’s storage abstraction allows developers to store project files in a range of devices, including Backblaze B2. In this session, you’ll learn how to get started with Appwrite, and quickly build a social app that stores user-generated content in a Backblaze B2 Bucket.
  • Go Serverless with Fastly Compute@Edge: Fastly has long been a Backblaze partner—mutual customers are able to serve assets stored in Backblaze B2 Buckets via Fastly’s global content delivery network with zero download charges from Backblaze B2. Compute@Edge leverages Fastly’s network to enable developers to create high-scale, globally-distributed applications, and execute code at the edge. Discover how to build a simple serverless application in JavaScript and deploy it globally with a single command.
  • Provisioning Resources with the Backblaze B2 Terraform Provider: Hashicorp Terraform is an open-source infrastructure-as-code software tool that enables you to safely and predictably create, change, and improve infrastructure. Learn how our Terraform Provider unlocks Backblaze B2’s capabilities for DevOps engineers, allowing you to create, list, and delete Buckets and application keys, as well as upload and download objects.
  • Storing and Querying Analytical Data with Trino: Trino is a SQL-compliant query engine that supports a wide range of business intelligence and analytical tools, allowing you to write queries against structured and semi-structured data in a variety of formats and storage locations. We’ll share how we optimized Backblaze’s Drive Stats data for queries and used Trino to gain new insights into nine years of real-world data.

And So Much More

Join the live Q&A and our user community of tech leaders, IT pros, and developers like you. Register for free to grab your spot (and swag) and we’ll see you on October 31.

➔ Register Now


]]>
https://www.backblaze.com/blog/announcing-tech-day-22-live-tech-talks-demos-and-dialogues/feed/ 0
“An Ideal Solution”: Daltix’s Automated Data Lake Archive Saves $100K https://www.backblaze.com/blog/an-ideal-solution-daltixs-automated-data-lake-archive-saves-100k/ https://www.backblaze.com/blog/an-ideal-solution-daltixs-automated-data-lake-archive-saves-100k/#respond Tue, 04 Oct 2022 16:20:37 +0000 https://www.backblaze.com/blog/?p=106911 Meet Daltix: A Backblaze customer in the consumer goods space, they are a pioneer in providing complete, transparent, and high-quality retail data. Read more to see how they implemented Backblaze B2 Cloud Storage to easily and efficiently manage their growing data lake.

The post “An Ideal Solution”: Daltix’s Automated Data Lake Archive Saves $100K appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>

In the fast-moving consumer goods space, Daltix is a pioneer in providing complete, transparent, and high-quality retail data. With global industry leaders like GFK and Unilever depending on their pricing, product, promotion, and location data to build go-to market strategies and make critical decisions, maintaining a reliable data ecosystem is an imperative for Daltix.

As the company has grown since its founding in 2016, the amount of data Daltix is processing has increased exponentially. They’re currently managing around 250TB, but that amount is spread across billions of files, which soon created a massive drag on time and resources. With an infrastructure built almost entirely around AWS and billions of miniscule files to manage, Daltix started to outgrow AWS’ storage options in both scalability and cost efficiency.

The Daltix team in Belgium.

We got to chat with Charlie Orford, Principal Software Engineer for Daltix, about how Daltix switched to Backblaze B2 Cloud Storage and their takeaways from that process. Here are some highlights:

  • They used a custom engine to migrate billions of files from AWS S3 to Backblaze B2.
  • Monthly costs reduced by $2,500 while increasing data portability and reliability.
  • Daltix established the infrastructure to automatically back up 8.4 million data objects every day.

Read on to learn how they did it.

A Complex Data Pipeline Built Around AWS

Most of the S3-based infrastructure Daltix built in the company’s early days is still intact. Historically, the data pipeline started with web-scraped resources written directly to Amazon S3, which were then standardized by Lambda-based extractors before being sent back to S3. Then AWS Batch picked up the resources to be augmented and enriched using other data sources.

All those steps took place before the data was ready for Daltix’s team of analysts. In order to optimize the pipeline and increase efficiency, Orford started absorbing pieces of that process into Kubernetes. But there was still a data storage problem; Daltix generates about 300GB of compressed data per day, and that figure was growing rapidly. “As we’d scaled up our data collection, we’d had to sharpen our focus on cost control, data portability, and reliability,” said Orford. “They’re obvious, but at scale, they’re extremely important.”

Cost Concerns Inspire The Search For Warm Archival Storage

By 2020, Daltix had started to realize the limitations of building so much of their infrastructure in AWS. For example, heavy customization around S3 metadata made the ability to move objects entirely dependent on the target system’s compatibility with S3. Orford was also concerned about the costs of permanently storing such a huge data lake in S3. As he puts it, “It was clear that there was no need to have everything in S3 forever. If we didn’t do anything about it, our S3 costs were going to continue to rise and eventually dwarf virtually all of our other AWS costs.”

Side-by-side comparison of server costs.

Because Daltix works with billions of tiny files, using Glacier was out of the question as its pricing model is based around retrieval fees. Even using Glacier Instant Retrieval, the sheer number of files Daltix works with would have forced them to rack up an additional $200,000 in fees per year. So Daltix’s data collection team—which produces more than 85% of the company’s overall data—pushed for an alternative solution that could address a number of competing concerns:

  • The sheer size of the data lake.
  • The need to store raw resources as discrete files (which means that batching is not an option).
  • Limitations on the team’s ability to invest time and effort.
  • A desire for simplicity to guarantee the solution’s reliability.
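To see why retrieval-fee pricing punishes a lake of tiny files, note that per-request fees scale with how many objects you touch, not how much data they hold. A back-of-the-envelope sketch (all prices are hypothetical placeholders, not actual Glacier rates):

```python
def annual_retrieval_cost(object_count, retrievals_per_object,
                          fee_per_1000_requests, avg_object_gb, fee_per_gb):
    """Yearly retrieval spend under request-fee pricing.

    The request-fee term scales with how *many* objects you touch, not how
    big they are -- which is what punishes billions of tiny files. All fee
    inputs are hypothetical placeholders, not actual Glacier prices."""
    requests = object_count * retrievals_per_object
    request_fees = requests / 1000 * fee_per_1000_requests
    transfer_fees = requests * avg_object_gb * fee_per_gb
    return request_fees + transfer_fees
```

Even a tiny per-1,000-requests fee dominates once object counts run into the billions, while the per-gigabyte term stays negligible for small files.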

Daltix settled on using Amazon S3 for hot storage and moving warm storage into a new archival solution, which would reduce costs while keeping priority data accessible—even if the intention is to keep files stored away. “It was important to find something that would be very easy to integrate, have a low development risk, and start meaningfully eating into our costs,” said Orford. “For us, Backblaze really ticked all the boxes.”

Initial Migration Unlocks Immediate Savings of $2,000 Per Month

Before launching into a full migration, Orford and his team tested a proof of concept (POC) to make sure the solution addressed his key priorities:

  • Making sure the huge volume of data was migrated successfully.
  • Avoiding data corruption and checking for errors with audit logs.
  • Preserving custom metadata on each individual object.

“Early on, Backblaze worked with us hand-in-hand to come up with a custom migration tool that fit all our requirements,” said Orford. “That’s what gave us the confidence to proceed.” Backblaze delivered a tailor-made engine to ensure that the migration process would transfer the entire data lake reliably and with object-level metadata intact. After the initial POC bucket was migrated successfully, Daltix had everything they needed to start modeling and forecasting future costs. “As soon as we started interacting with Backblaze, we stopped looking at other options,” Orford said.

In August 2021, Daltix moved a 120TB bucket of 2.2 billion objects from standard storage in S3 to Backblaze B2 cloud storage. That initial migration alone unlocked an immediate cost savings of $2,000 per month, or $24,000 per year.

A peaceful data lake.

Quadruple the Data, Direct S3 Compatibility, and $100,000 Cumulative Savings

Today, Daltix is migrating about 3.2 million data objects (approximately 160GB of data) from Amazon S3 into Backblaze B2 every day. They keep 18 months of hot data in S3, and as soon as an object reaches 18 months and one day, it becomes eligible for archiving in B2. On the rare occasions that Daltix receives requests for data outside that 18-month window, they can pull data directly from Backblaze B2 into Amazon S3 thanks to Backblaze’s S3-compatible API and ever-available data.
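The aging-out rule itself is simple date arithmetic: anything uploaded more than 18 months ago becomes a candidate for the daily archive run. A toy version (approximating “18 months and one day” as 541 days; the object keys and dates are invented):

```python
from datetime import date, timedelta

# Assumption: "18 months and one day" approximated as 18 * 30 + 1 = 541 days.
ARCHIVE_CUTOFF = timedelta(days=18 * 30 + 1)

def eligible_for_archive(objects, today):
    """Return the keys of objects old enough to leave hot storage.

    `objects` maps an object key to its upload date; in a real pipeline
    the keys and dates would come from the storage API's listing calls."""
    return [key for key, uploaded in objects.items()
            if today - uploaded >= ARCHIVE_CUTOFF]
```

A daily job over such a listing, plus an audit log of what moved, is essentially the shape of the automated process described above.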

Daily audit logs summarize how much data has been transferred, and the entire migration process happens automatically every day. “It runs in the background, there’s nothing to manage, we have full visibility, and it’s cost effective,” Orford said. “Backblaze B2 is an ideal solution for us.”

As daily data collection increases and more data ages out of the hot storage window, Orford expects further cost reductions. He estimates it will take about a year and a half for daily migrations to nearly triple their current levels, meaning Daltix will be backing up 9 million objects (about 450GB of data) to Backblaze B2 every day. Taking that long-term view, we see incredible cost savings for Daltix by switching from Amazon S3 to Backblaze B2. “By 2023, we forecast we will have realized a cumulative saving in the region of $75,000-$100,000 on our storage spend thanks to leveraging Backblaze B2, with expected ongoing savings of at least $30,000 per year,” said Orford.

“It runs in the background, there’s nothing to manage, we have full visibility, and it’s cost effective. Backblaze B2 is an ideal solution for us.” —Charlie Orford, Principal Software Engineer, Daltix

Crunch the Numbers and See for Yourself

Want to find out what your business could do with an extra $30,000 a year? Check out our Cloud Storage Pricing Calculator to see what you could save switching to Backblaze B2.


]]>
https://www.backblaze.com/blog/an-ideal-solution-daltixs-automated-data-lake-archive-saves-100k/feed/ 0
Free Isn’t Always Free: A Guide to Free Cloud Tiers https://www.backblaze.com/blog/free-isnt-always-free-a-guide-to-free-cloud-tiers/ https://www.backblaze.com/blog/free-isnt-always-free-a-guide-to-free-cloud-tiers/#comments Thu, 14 Jul 2022 15:16:24 +0000 https://www.backblaze.com/blog/?p=106193 Learn how to navigate free cloud tiers and avoid surprise bills with this guide.

The post Free Isn’t Always Free: A Guide to Free Cloud Tiers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>

They say “the best things in life are free.” But when most cloud storage companies offer a free tier, what they really want is money. While free tiers do offer some early-stage technical founders the opportunity to test out a proof of concept or allow students to experiment without breaking the bank, their ultimate goal is to turn you into a paying customer. This isn’t always nefarious (we offer 10GB for free, so we’re playing the same game!), but some cloud vendors’ free tiers come with hidden surprises that can lead to scary bills with little warning.

The truth is that free isn’t always free. Today, we’re digging into a cautionary tale for developers and technical founders exploring cloud services to support their applications or SaaS products. Naturally, you want to know if a cloud vendor’s free tier will work for you. Understanding what to expect and how to navigate free tiers accordingly can help you avoid huge surprise bills later.

Free Tiers: A Quick Reference

Most large, diversified cloud providers offer a free tier—AWS, Google Cloud Platform, and Azure, to name a few—and each one structures theirs a bit differently:

  • AWS: AWS has 100+ products and services with free options ranging from “always free” to 12 months free, and each has different use limitations. For example, you get 5GB of object storage free with AWS S3 for the first 12 months, then you are billed at the respective rate.
  • Google Cloud Platform: Google offers a $300 credit good for 90 days so you can explore services “for free.” They also offer an “always free tier” for specific services like Cloud Storage, Compute Engine, and several others that are free to a certain limit. For example, you get 5GB of storage for free and 1GB of network egress for their Cloud Storage service.
  • Azure: Azure offers a free trial similar to Google’s but with a shorter time frame (30 days) and lower credit amount ($200). It gives you the option to move up to paid when you’ve used up your credits or your time expires. Azure also offers a range of services that are free for 12 months and have varying limits and thresholds as well as an “always free tier” option.

After even a quick review of the free tier offers from major cloud providers, you can glean some immediate takeaways:

  1. You can’t rely on free tiers or promotional credits as a long-term solution. They work well for testing a proof of concept or a minimum viable product without making a big commitment, but they’re not going to serve you past the time or usage limits.
  2. “Free” has different mileage depending on the platform and service. Keep that in mind before you spin up servers and resources, and read the fine print as it relates to limitations.
  3. The end goal is to move you to paid. Obviously, the cloud providers want to move you from testing a proof of concept to paid, with your full production hosted and running on their platforms.

With Google Cloud Platform and Azure, you’re at least somewhat protected from being billed beyond the credits you receive since they require you to upgrade to the paid tier to continue. Thus, most of the horror stories you’ll see involve AWS. With AWS, once your trial expires or you exceed your allotted limits, you are billed the standard rate. For the purposes of this guide, we’ll look specifically at AWS.

The Problem With the AWS Free Tier

The internet is littered with cautionary tales of AWS bills run amok. A quick search for “AWS free tier bill” on Twitter or Reddit shows that it’s possible and pretty common to run up a bill on AWS’s so-called free tier…
(Embedded posts: two tweets recounting surprise AWS free tier bills, and a Reddit thread titled “I just received a bill for $19,000 for Free tier.”)

The problem with the AWS free tier is threefold:

  1. There are a number of ways a “free tier” instance can turn into a bill.
  2. Safeguards against surprise bills are mediocre at best.
  3. Surprise bills are scary, and next steps aren’t the most comforting.

Problem 1: It’s Really Easy to Go From Free to Not Free

There are a number of ways an unattended “free tier” instance turns into a bill, sometimes a catastrophically huge bill. Here are just a few:

  1. You spin up Elastic Compute Cloud (EC2) instances for a project and forget about them until they exceed the free tier limits.
  2. You sign up for several AWS accounts, and you can’t figure out which one is running up charges.
  3. Your account gets hacked and used for mining crypto (yes, this definitely happens, and it results in some of the biggest surprise bills of them all).

Problem 2: Safeguards Against Surprise Bills Are Mediocre at Best

Confounding the problem is the fact that AWS keeps safeguards against surprise billing to a minimum. The free tier has limits and defined constraints, and the only way to keep your account in the free zone is to keep usage below those limits (and this is key) for each service you use.

AWS has hundreds of services, and each service comes with its own pricing structure and limits. While one AWS service might be free, it can be paired with another AWS service that’s not free or doesn’t have the same free threshold, for example, egress between services. Thus, managing your usage to keep it within the free tier can be somewhat straightforward or prohibitively complex depending on which services you use.
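Because the limits apply per service, any usage check has to be per service too. A simplified sketch (the service names and caps below are illustrative, not actual AWS free tier figures):

```python
# Illustrative monthly free allotments -- NOT actual AWS free tier numbers.
FREE_LIMITS = {"object_storage_gb": 5, "egress_gb": 1, "compute_hours": 750}

def over_free_tier(usage, limits=FREE_LIMITS):
    """Return {service: overage} for every service past its free allotment.

    Each service is checked independently: staying "free" means staying
    under the cap for every single service, not in aggregate."""
    return {svc: usage[svc] - cap
            for svc, cap in limits.items()
            if usage.get(svc, 0) > cap}
```

The independent-check design mirrors how the billing actually works: being comfortably under one limit buys you nothing on another.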

Wait, Shouldn’t I Get Alerts?

Yes, you can get alerted if you’re approaching the free limit, but that’s not foolproof either. First, billing alarms are not instantaneous. The notification might come after you’ve already exceeded the limit. And second, not every service has alerts or alerts that work in the same way.

You can also configure services so that they automatically shut down when they exceed a certain billing threshold, but this may pose more problems than it solves. First, navigating the AWS UI to set this up is complex. Your average free tier user may not be aware of or even interested in how to set that up. Second, you may not want to shut down services depending on how you’re using AWS.

Problem 3: Knowing What to Do Next

If it’s not your first rodeo, you might not default to panic mode when you get that surprise bill. You tracked your usage. You know you’re in the right. All you have to do is contact AWS support and dispute the charge. But imagine how a college student might react to a bill the size of their yearly tuition. While large five- to six-figure bills might be negotiable and completely waived, there are untold numbers of two- to three-figure bills that just end up getting paid because people weren’t aware of how to dispute the charges.

Even experienced developers can fall victim to unexpected charges in the thousands.

Avoiding Unexpected AWS Bills in the First Place

The first thing to recognize is that free isn’t always free. If you’re new to the platform, there are a few steps you can take to put yourself in a better position to avoid unexpected charges:

  1. Read the fine print before spinning up servers or uploading test data.
  2. Look for sandboxed environments that don’t let you exceed charges beyond a certain amount or that allow you to set limits that shut off services once limits are exceeded.
  3. Proceed with caution and understand how alerts work before spinning up services.
  4. Steer clear of free tiers completely, because the short-term savings aren’t huge and aren’t worth the added risk.

Final Thought: It Ain’t Free If They Have Your CC

AWS requires credit card information before you can do anything on the free tier—all the more reason to be extremely cautious.

Shameless plug here: Backblaze B2 Cloud Storage offers the first 10GB of storage free, and you don’t need to give us a credit card to create an account. You can also set billing alerts and caps easily in your dashboard. So, you’re unlikely to run up a surprise bill.

Ready to get started with Backblaze B2 Cloud Storage? Sign up here today to get started with 10GB and no CC.


]]>
https://www.backblaze.com/blog/free-isnt-always-free-a-guide-to-free-cloud-tiers/feed/ 1
Cloud Performance and When It Matters https://www.backblaze.com/blog/cloud-performance-and-when-it-matters/ https://www.backblaze.com/blog/cloud-performance-and-when-it-matters/#respond Fri, 04 Feb 2022 17:46:53 +0000 https://www.backblaze.com/blog/?p=104727 Learn about cloud performance metrics and factors to consider when assessing a cloud solution for your development needs.

The post Cloud Performance and When It Matters appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>

If you run an application that’s bandwidth intensive like media streaming, game hosting, or an e-commerce platform, performance is probably top of mind. You need to be able to deliver content to your users fast and without errors in order to keep them happy. But, what specific performance metrics matter for your use case?

As it turns out, you might think you need a Porsche when a trusty, reliable (and still speedy!) Volvo is what you really need to transport your data.

In this post, we’re taking a closer look at performance metrics and when they matter as well as some strategies that can impact performance, including range requests, prefetching, and others. When you’re assessing a cloud solution for application development, taking these factors into consideration can help you make the best decision for your business.

Performance Metrics: Time to First Byte

Time to first byte (TTFB) is the time between a page request and when the page receives the first byte of information from the server. In other words, TTFB is measured by how long it takes between the start of the request and the start of the response, including DNS lookup and establishing the connection using a TCP handshake and SSL handshake if you’ve made the request over HTTPS.

TTFB identifies pages that load slowly due to server-side calculations that could instead benefit from client-side scripting. Search engines also use it as a ranking signal, surfacing websites that respond to requests faster and appear more usable ahead of slower ones.
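You can measure TTFB yourself with nothing but Python’s standard library by timing from the start of the connection to the arrival of the first response byte. A bare-bones sketch for plain HTTP (no TLS, so HTTPS numbers will run higher):

```python
import socket
import time

def time_to_first_byte(host, port, path="/"):
    """Seconds from starting the connection (DNS lookup and TCP handshake
    included) until the first byte of the HTTP response arrives. Plain
    HTTP only -- an HTTPS measurement would also include the TLS handshake."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=10) as sock:
        request = (f"GET {path} HTTP/1.1\r\n"
                   f"Host: {host}\r\n"
                   "Connection: close\r\n\r\n")
        sock.sendall(request.encode("ascii"))
        sock.recv(1)  # blocks until the first response byte
    return time.monotonic() - start
```

Run it a few times from the locations your users are in; a single reading from one machine says little about real-world experience.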

TTFB is a useful metric, but it doesn’t tell the whole story every time and shouldn’t be the only metric used to make decisions when it comes to choosing a cloud storage solution. For example, when David Liu, Founder and CEO of Musify, a music streaming app, approached his search for a new cloud storage provider, he had a specific TTFB benchmark in mind. He thought he absolutely needed to meet this benchmark in order for his new storage solution to work for his use case. Upon further testing, however, he found that his initial benchmark was more aggressive than he actually needed. The performance he got by utilizing Cloudflare in front of his origin store in Backblaze B2 Cloud Storage more than met his needs and served his users well.

Optimizing Cloud Storage Performance

TTFB is the dominant method of measuring performance, but TTFB can be impacted by any number of factors—your location, your connection, the data being sent, etc. Fortunately, there are ways to improve TTFB, including using a content delivery network (CDN) on top of origin storage, range requests, and prefetching.

Performance and Content Delivery Networks

A CDN helps speed content delivery by storing content at the edge, meaning faster load times and reduced latency. For high-bandwidth use cases, a CDN can optimize media delivery.

Companies like Kanopy, a media streaming service; Big Cartel, an e-commerce platform; and CloudSpot, a professional photo gallery platform, use a CDN between their origin storage in Backblaze B2 and their end users to great effect. Kanopy offers a library of 25,000+ titles to 45 million patrons worldwide. High latency and poor performance are not an option. “Video needs to have a quick startup time,” Kanopy’s Lead Video Software Engineer, Pierre-Antoine Tible said. “With Backblaze over [our CDN] Cloudflare, we didn’t have any issues.”

For Big Cartel, hosting one million customer sites likewise demands high-speed performance. Big Cartel’s Technical Director, Lee Jensen, noted, “We had no problems with the content served from Backblaze B2. The time to serve files in our 99th percentile, including fully rendering content, was under one second, and that’s our worst case scenario.” The time to serve files in their 75th percentile was just 200 to 300 milliseconds, and that includes requests where content wasn’t already cached in their CDN Fastly’s edge servers and had to be pulled from origin storage in Backblaze B2.

“We had no problems with the content served from Backblaze B2. The time to serve files in our 99th percentile, including fully rendering content, was under one second, and that’s our worst case scenario.”
—Lee Jensen, Technical Director, Big Cartel

Range Requests and Performance

HTTP range requests allow sending only a portion of an HTTP message from a server to a client. Partial requests are useful for large media or downloading files with pause and resume functions, and they’re common for developers who like to concatenate files and store them as big files. For example, if a user wants to skip to a clip of a full video or a specific frame in a video, using range requests means the application doesn’t have to serve the whole file.

Because the Backblaze B2 vault architecture separates files into shards, you get the same performance whether you call the whole file or just part of the file in a range request. Rather than wasting time learning how to optimize performance on a new platform or adjusting your code to comply with frustrating limitations, developers moving over to Backblaze B2 can utilize existing code they’re already invested in.
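Mechanically, a range request is just a `Range` header on the GET, answered with a `206 Partial Content` status and a `Content-Range` header describing the slice. A small sketch of building the request header and parsing the response header (pure string handling, no network):

```python
def range_header(first, last=None):
    """Value for an HTTP Range request header, e.g. 'bytes=0-1023'.
    Omitting `last` asks for everything from `first` to the end."""
    return f"bytes={first}-{last if last is not None else ''}"

def parse_content_range(value):
    """Parse a 206 response's Content-Range header, e.g.
    'bytes 0-1023/146515' -> (first_byte, last_byte, total_length)."""
    unit, _, rest = value.partition(" ")
    assert unit == "bytes", f"unexpected range unit: {unit}"
    span, _, total = rest.partition("/")
    first, _, last = span.partition("-")
    return int(first), int(last), int(total)
```

Any HTTP client library will let you attach the header to a GET; the total length reported in `Content-Range` is how a player knows where the end of the full file is.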

Prefetching and Performance

Prefetching is a way to “queue up” data before it’s actually required, which reduces latency if that data is subsequently requested. When you’re using a CDN in front of your origin storage, this means loading data/files/content into the CDN before end users ask for it.

Video streaming service, Kanopy, uses prefetching with popular videos they expect will see high demand in certain regions. This would violate some cloud storage providers’ terms of service because they egress out more than they store. Because Kanopy gets free egress between their origin store in Backblaze B2 and their CDN Cloudflare, the initial download cost for prefetching is $0. (Backblaze also has partnerships with other CDN providers like Fastly and bunny.net to offer zero egress.) The partnership means Kanopy doesn’t have to worry about running up egress charges, and they’re empowered to use prefetching to optimize their infrastructure.
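A toy model makes the payoff concrete: keys warmed into the edge cache ahead of demand never cost a user-facing origin round trip. (The `EdgeCache` class and its in-memory “origin” below are illustrative stand-ins, not a real CDN or Backblaze API.)

```python
class EdgeCache:
    """Toy model of a CDN edge sitting in front of an origin object store."""

    def __init__(self, origin):
        self.origin = origin        # key -> content, standing in for origin storage
        self.cache = {}
        self.origin_fetches = 0

    def get(self, key):
        if key not in self.cache:   # miss: one (potentially billable) origin fetch
            self.cache[key] = self.origin[key]
            self.origin_fetches += 1
        return self.cache[key]

    def prefetch(self, keys):
        """Warm the cache with keys expected to be popular."""
        for key in keys:
            self.get(key)
```

With free egress between origin and CDN, those warm-up fetches cost $0, which is what makes prefetching at scale economically viable.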

Other Metrics to Consider When Assessing Cloud Performance

In addition to TTFB, there are a number of other metrics to consider when it comes to assessing cloud performance, including availability, the provider’s service level agreements (SLAs), and durability.

Availability measures the percentage of time your data can be accessed. All data occasionally becomes unavailable due to regular operating procedures like system maintenance, but availability is obviously critical when you’re serving content around the globe 24/7. Backblaze B2, for example, commits to 99.9% uptime with no cold delays. Commitments like uptime are usually outlined in a cloud provider’s SLA—an agreement that lists the performance metrics the cloud provider agrees to provide.

Durability measures the likelihood that your data remains intact over time. Object storage providers express durability as an annual percentage in nines—two nines before the decimal point and as many nines as warranted after it. For example, 11 nines of durability is expressed as 99.999999999%. This means the storage vendor is promising that your data will remain intact while it is under their care, losing no more than 0.000000001% of your data in a year (in the case of 11 nines of annual durability).
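The nines arithmetic is easy to make concrete (a back-of-the-envelope calculation, not a vendor tool):

```python
def annual_loss_fraction(nines: int) -> float:
    """Fraction of data expected to be lost per year at N nines of durability."""
    return 10.0 ** -nines

def expected_objects_lost(object_count: int, nines: int) -> float:
    """Expected number of objects lost per year from a collection."""
    return object_count * annual_loss_fraction(nines)

# At 11 nines (99.999999999%), storing one million objects implies an
# expectation of 0.00001 objects lost per year -- roughly one object
# every 100,000 years.
```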

Ready to Get Started?

Understanding the different performance metrics that might impact your data can help when you’re evaluating cloud storage providers. Ready to get started with Backblaze B2? We offer the first 10GB free.

The post Cloud Performance and When It Matters appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
https://www.backblaze.com/blog/cloud-performance-and-when-it-matters/feed/ 0
Developers Get EC2 Alternative With Vultr Cloud Compute and Bare Metal https://www.backblaze.com/blog/developers-get-ec2-alternative-with-vultr-cloud-compute-and-bare-metal/ https://www.backblaze.com/blog/developers-get-ec2-alternative-with-vultr-cloud-compute-and-bare-metal/#comments Wed, 01 Sep 2021 15:00:52 +0000 https://www.backblaze.com/blog/?p=102899 Read about our newest partnership with Vultr—the largest privately-owned, global hyperscale cloud—that provides developers and businesses with infrastructure as a service that’s easier to use and lower cost than perhaps better known alternatives.

The post Developers Get EC2 Alternative With Vultr Cloud Compute and Bare Metal appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>

The old saying, “birds of a feather flock together,” couldn’t be more true of the latest addition to the Backblaze partner network. Today, we announce a new partnership with Vultr—the largest privately-owned, global hyperscale cloud—to serve developers and businesses with infrastructure as a service that’s easier to use and lower cost than perhaps better known alternatives.

With the Backblaze + Vultr combination, developers now have the ability to connect data stored in Backblaze B2 with virtualized cloud compute and bare metal resources in Vultr—providing a compelling alternative to Amazon S3 and EC2. Each Vultr compute instance includes a fixed amount of bandwidth, meaning that developers can easily transfer data between Vultr’s 17 global locations and Backblaze with no egress fees or other data transfer costs.

In addition to replacing AWS EC2, Vultr’s complete product line also offers load balancers and block storage, which can seamlessly replace Amazon Elastic Load Balancing (ELB) and Elastic Block Store (EBS).

With this partnership, developers of any size can avoid vendor lock-in, access best of breed services, and do more with the data they have stored in the cloud with ease, including:

  • Running analysis on stored data.
  • Deploying applications and storing application data.
  • Transcoding media and provisioning origin storage for streaming and video on-demand applications.

Backblaze + Vultr: Better Together

Vultr’s ease of use and comparatively low costs have motivated more than 1.3 million developers around the world to use its service. We recognized a shared culture in Vultr, which is why we’re looking forward to seeing what our joint customers can do with this partnership. Like Backblaze, Vultr was founded with minimal outside investment. Both services are transparent, affordable, simple to start without having to talk to sales (although sales support is only a call or email away), and, above all, easy. Vultr is on a mission to simplify deployment of cloud infrastructure, and Backblaze is on a mission to simplify cloud storage.

Rather than trying to be everything for everyone, both businesses play to their strengths, and customers get the benefit of working with unconflicted partners.

Vultr’s pricing often comes in at half the cost of the big three—Amazon, Google, and Microsoft—and with Vultr’s bundled egress, we’re working together to alleviate the burden of bandwidth costs, which can be disproportionately huge for small and medium-sized businesses.

“The Backblaze-Vultr partnership means more developers can build the flexible tech stacks they want to build, without having to make needlessly tough choices between access and affordability,” said Shane Zide, VP of Global Partnerships at Vultr. “When two companies who focus on ease of use and price performance work together, the whole is greater than the sum of the parts.”

Fly Higher With Backblaze B2 + Vultr

Existing Backblaze B2 customers now have unfettered access to compute resources with Vultr, and Vultr customers can connect to astonishingly easy cloud storage with Backblaze B2. If you’re not yet a B2 Cloud Storage customer, create an account to get started in minutes. If you’re already a B2 Cloud Storage customer, click here to activate an account with Vultr. For more information on putting the solution to work, this knowledge base article explains how to set up Vultr Compute with Backblaze B2.

For developers looking to do more with their data, we welcome you to join the flock. Get started with Backblaze B2 and Vultr today.


]]>
https://www.backblaze.com/blog/developers-get-ec2-alternative-with-vultr-cloud-compute-and-bare-metal/feed/ 15
Easy Storage + Easy Provisioning: Backblaze Is Now a Terraform Provider https://www.backblaze.com/blog/easy-storage-easy-provisioning-backblaze-is-now-a-terraform-provider/ https://www.backblaze.com/blog/easy-storage-easy-provisioning-backblaze-is-now-a-terraform-provider/#comments Thu, 11 Mar 2021 16:46:36 +0000 https://www.backblaze.com/blog/?p=97617 Learn how to start managing infrastructure for DevOps projects using Backblaze B2 Cloud Storage and Terraform.

The post Easy Storage + Easy Provisioning: Backblaze Is Now a Terraform Provider appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
Backblaze + HashiCorp Terraform logos

Isn’t it nice when you tell someone to do something, and they just…do it? No step-by-step instructions necessary. For developers, that someone is Terraform, and that something is provisioning infrastructure for large-scale DevOps projects.

Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp. Instead of using imperative commands (step-by-step instructions), Terraform uses declarative code (“here’s what I want”). Developers simply describe a desired end state—for example, the number of virtual machines or Kubernetes clusters they need—and Terraform generates and executes a plan for reaching that end state.

Backblaze B2 Cloud Storage is now a provider in the Terraform registry, which will support developers in their IaC efforts. This means you can provision and manage B2 Cloud Storage resources directly from your Terraform configuration file, alongside any other providers that you may already be using in your workflow.

What is infrastructure as code?
Infrastructure as code (IaC) replaces the user interface (UI) of infrastructure providers like Backblaze B2 with code. Developers can “provision,” or set up, the infrastructure they need using that code rather than setting it up manually in the UI or command line interface. Since the setup is codified, it can be automated.

How Does Terraform Work?

Terraform helps developers manage infrastructure at scale for sophisticated software projects. It automates the work of defining, deploying, and updating infrastructure, thus improving speed and reliability and avoiding human error in complex development environments. Because it uses declarative language, you don’t have to worry about the API-level calls between your application and your providers—Terraform handles that for you.

Terraform takes a DevOps-first approach, incorporating best practices in their platform so that you can focus on other needs. For example, it allows you to use the same configuration to provision testing, staging, and production environments rather than rewriting the code for those environments each time, which can introduce mistakes and configuration drift (i.e. when the configuration that you used to provision your environment doesn’t match the actual environment).

Since Terraform is pluggable by design, users have access to the best features of 500+ providers in one platform. Providers can include infrastructure as a service offerings like Backblaze B2, platform as a service companies like Heroku, or software as a service solutions like Cloudflare. For DevOps outfits that need to manage a diverse toolset with a legion of configurations and various kinds of infrastructure for different purposes, Terraform’s automation enables speed and efficiency, reducing operating cost over time.

HashiCorp Terraform logo

How Terraform + Backblaze B2 Delivers Benefits

Developers working in Terraform can now easily tap into Backblaze B2. Using Terraform’s declarative code, you can spin up and maintain B2 Cloud Storage resources including storage buckets and access keys.

Bucket settings, lifecycle settings, and CORS rules can all be controlled via the Terraform configuration file. These settings appear as flags, which allows for automated configuration across all your environments. Within that Terraform configuration file, you can also reference a bucket, a single file, or multiple files within a bucket.
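For illustration, a bucket with a lifecycle rule and a CORS rule might be declared along these lines (the bucket name is hypothetical, and the attribute names follow the B2 provider’s schema as we understand it—double-check the provider documentation in the Terraform registry before relying on them):

```hcl
resource "b2_bucket" "assets" {
  bucket_name = "example-assets"
  bucket_type = "allPrivate"

  # Hide log files 30 days after upload, then delete them a day later.
  lifecycle_rules {
    file_name_prefix              = "logs/"
    days_from_uploading_to_hiding = 30
    days_from_hiding_to_deleting  = 1
  }

  # Allow browser downloads from a single origin.
  cors_rules {
    cors_rule_name     = "downloadFromExampleCom"
    allowed_origins    = ["https://example.com"]
    allowed_operations = ["b2_download_file_by_name"]
    max_age_seconds    = 3600
  }
}
```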

This gives developers greater freedom of choice when it comes to cloud storage providers. If you’re using AWS S3 or Google Cloud Storage in Terraform (or any other storage provider) and have considered implementing a multi-cloud approach or switching your cloud provider altogether, this integration enables you to easily set up, test, and start using B2 Cloud Storage—unlocking the astonishing ease, transparent pricing, and affordability the platform is known for.

How to Get Started Using Backblaze B2 in Terraform

After creating your Backblaze B2 account, using B2 Cloud Storage through Terraform is as easy as defining the various resources you want in your Terraform configuration file. For instance, let’s say you want the same Backblaze B2 configuration across your test, staging, and production environments. Instead of manually configuring Backblaze B2 for each environment, you can create a Terraform configuration file to:

  • Provision N public Backblaze B2 buckets.
  • Assign each bucket one access key.
  • Create a secret key for each access key.

You can then apply that configuration file to each of your environments.
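Sketched in Terraform’s HCL (bucket and key names here are hypothetical, and the resource schema should be verified against the B2 provider’s registry documentation), the three steps above might look like:

```hcl
terraform {
  required_providers {
    b2 = {
      source = "Backblaze/b2"
    }
  }
}

variable "environment" {
  default = "test"   # reuse the same file for "staging" and "production"
}

variable "bucket_count" {
  default = 2        # the "N" public buckets
}

# 1. Provision N public buckets.
resource "b2_bucket" "media" {
  count       = var.bucket_count
  bucket_name = "example-${var.environment}-media-${count.index}"
  bucket_type = "allPublic"
}

# 2 & 3. One application key per bucket; creating the key also
# generates its secret, exposed as an attribute of the resource.
resource "b2_application_key" "media" {
  count        = var.bucket_count
  key_name     = "example-${var.environment}-key-${count.index}"
  bucket_id    = b2_bucket.media[count.index].bucket_id
  capabilities = ["listFiles", "readFiles", "writeFiles"]
}
```

Running `terraform apply` with a different `environment` value reproduces the identical setup in each environment, which is the protection against configuration drift described earlier.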

Whether you are new to Terraform or are ready to use B2 Cloud Storage as a provider, here’s a step-by-step quickstart guide.

illustration of sending data to the Backblaze cloud

Why Being a Terraform Provider Is Exciting

As software projects become more multifaceted, developers are increasingly relying on IaC and automating code. Adding Backblaze B2 to the IaC “movement” will help developers scale and manage storage resources alongside everything else they control with Terraform.

Providing astonishingly easy cloud storage is our mission. Making provisioning of that storage easier for developers is part of Terraform’s DNA. Joining the Terraform registry combines easy storage with easy storage automation.

We look forward to continuing to serve the Terraform community and growing our provider functionality.

This open-source release was made possible thanks to our community contributors, Paweł Polewicz (ppolewicz) and Maciej Lech (mlech-reef).


]]>
https://www.backblaze.com/blog/easy-storage-easy-provisioning-backblaze-is-now-a-terraform-provider/feed/ 1