CDN Archives | Backblaze Blog | Cloud Storage & Cloud Backup
https://www.backblaze.com/blog/category/cloud-storage/cdn/

Navigating Cloud Storage: What is Latency and Why Does It Matter?
https://www.backblaze.com/blog/navigating-cloud-storage-what-is-latency-and-why-does-it-matter/
Tue, 27 Feb 2024

Latency is an important factor impacting performance and user experience. Let's talk about what it is and some ways to get better performance.

The post Navigating Cloud Storage: What is Latency and Why Does It Matter? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A decorative image showing a computer and a server with arrows moving between them, and a stopwatch indicating time.

In today’s bandwidth-intensive world, latency is an important factor that can impact performance and the end-user experience for modern cloud-based applications. For many CTOs, architects, and decision-makers at growing small and medium-sized businesses (SMBs), understanding and reducing latency is not just a technical need but also a strategic play.

Latency, or the time it takes for data to travel from one point to another, affects everything from how snappy or responsive your application may feel to content delivery speeds to media streaming. As infrastructure increasingly relies on cloud object storage to manage terabytes or even petabytes of data, optimizing latency can be the difference between success and failure. 

Let’s get into the nuances of latency and its impact on cloud storage performance.

Upload vs. Download Latency: What’s the Difference?

In the world of cloud storage, you’ll typically encounter two forms of latency: upload latency and download latency. Each can impact the responsiveness and efficiency of your cloud-based application.

Upload Latency

Upload latency refers to the delay when data is sent from a client or user’s device to the cloud. Live streaming applications, backup solutions, or any application that relies heavily on real-time data uploading will experience hiccups if upload latency is high, leading to buffering delays or momentary stream interruptions.

Download Latency

Download latency, on the other hand, is the delay when retrieving data from the cloud to the client or end user’s device. Download latency is particularly relevant for content delivery applications, such as on-demand video streaming platforms, e-commerce, or other web-based applications. Reducing download latency creates a snappy web experience and ensures content is swiftly delivered to the end user.

Ideally, you’ll want to optimize for latency in both directions, but, depending on your use case and the type of application you are building, it’s important to understand the nuances of upload and download latency and their impact on your end users.

Decoding Cloud Latency: Key Factors and Their Impact

When it comes to cloud storage, latency is influenced by a number of factors, each having an impact on the overall performance of your application. Let’s explore a few of these key factors.

Network Congestion

Like traffic on a freeway, packets of data can experience congestion on the internet. This can lead to slower data transmission speeds, especially during peak hours, leading to a laggy experience. Internet connection quality and the capacity of networks can also contribute to this congestion.

Geographical Distance

Often overlooked, the physical distance from the client or end user’s device to the cloud origin store has an impact on latency. The farther the client is from the server, the farther the data has to travel and the longer transmission takes, leading to higher latency.
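
To put numbers on it: light in optical fiber propagates at roughly two-thirds the speed of light in a vacuum, so distance alone sets a floor on round-trip time. A back-of-the-envelope sketch (the straight-line path and the city pairing are illustrative assumptions):

```python
# Light in optical fiber travels at roughly 200,000 km/s (about 2/3 the speed
# of light in a vacuum), so physical distance sets a hard floor on round-trip
# latency before routing or congestion add anything on top.
FIBER_SPEED_KM_PER_S = 200_000

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over a straight fiber path."""
    return 2 * distance_km * 1000 / FIBER_SPEED_KM_PER_S

# A client ~4,000 km from the server (roughly New York to Los Angeles) can
# never see a round trip faster than about 40ms, no matter how good the rest
# of the network is.
print(min_round_trip_ms(4_000))  # → 40.0
```

Real paths are never straight lines, so measured latency is always higher than this floor; the point is that no amount of tuning can beat geography.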

Infrastructure Components

The quality of infrastructure, including routers, switches, and cables, may affect network performance and latency numbers. Modern hardware, such as fiber-optic cables, can reduce latency, unlike outdated systems that don’t meet current demands. Often, you don’t have full control over all of these infrastructure elements, but awareness of potential bottlenecks may be helpful, guiding upgrades wherever possible.

Technical Processes

  • TCP/IP Handshake: Connecting a client and a server involves a handshake process, which may introduce a delay, especially for a new connection.
  • DNS Resolution: The time it takes to resolve a domain name to its IP address adds to latency; faster DNS resolution shaves a small amount off the total.
  • Data Routing: Data does not necessarily travel a straight line from its source to its destination. Latency can be influenced by the effectiveness of routing algorithms and the number of hops the data must make.
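
These delays are straightforward to observe. Here's a minimal sketch using only Python's standard library; any real hostnames you pass in are up to you, and results will vary with resolver caching and network conditions:

```python
import socket
import time

def time_dns(hostname: str) -> float:
    """Seconds spent resolving a hostname to an IP address."""
    start = time.perf_counter()
    socket.gethostbyname(hostname)
    return time.perf_counter() - start

def time_tcp_connect(host: str, port: int, timeout: float = 5.0) -> float:
    """Seconds spent completing the TCP three-way handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return time.perf_counter() - start

# Example (network-dependent, hostnames are placeholders): a cached DNS answer
# returns in microseconds, a cold lookup can take tens of milliseconds, and
# the TCP handshake costs roughly one round trip to the server.
# print(time_dns("example.com"), time_tcp_connect("example.com", 443))
```

Running these repeatedly against the same host shows the caching effect clearly: the first DNS lookup is the expensive one.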

For businesses that rely on frequently accessing data stored in the cloud, reducing latency and improving application performance may involve selecting providers with strategically positioned data centers, fine-tuning network configurations, and understanding how internet infrastructure affects the latency of their applications.

Minimizing Latency With Content Delivery Networks (CDNs)

Further reducing latency in your application may be achieved by layering a content delivery network (CDN) in front of your origin storage. CDNs help reduce the time it takes for content to reach the end user by caching data in distributed servers that store content across multiple geographic locations. When your end-user requests or downloads content, the CDN delivers it from the nearest server, minimizing the distance the data has to travel, which significantly reduces latency.

Backblaze B2 Cloud Storage integrates with multiple CDN solutions, including Fastly, bunny.net, and Cloudflare, providing a performance advantage. And, Backblaze offers the additional benefit of free egress between where the data is stored and the CDN’s edge servers. This not only reduces latency, but also optimizes bandwidth usage, making it cost-effective for businesses building bandwidth-intensive applications such as on-demand media streaming.

To get slightly into the technical weeds, CDNs essentially cache content at the edge of the network, meaning that once content is stored on a CDN server, subsequent requests do not need to go back to the origin server to request data. 

This reduces the load on the origin server and reduces the time needed to deliver the content to the user. For companies using cloud storage, integrating CDNs into their infrastructure is an effective configuration to improve the global availability of content, making it an important aspect of cloud storage and application performance optimization.

Case Study: Musify Improves Latency and Reduces Cloud Bill by 70%

To illustrate the impact of reduced latency on performance, consider the example of music streaming platform Musify. By moving from Amazon S3 to Backblaze B2 and leveraging the partnership with Cloudflare, Musify significantly improved its service offering. Musify egresses about 1PB of data per month, which, under traditional cloud storage pricing models, can lead to significant costs. Because Backblaze and Cloudflare are both members of the Bandwidth Alliance, Musify now has no data transfer costs, contributing to an estimated 70% reduction in cloud spend. And, thanks to the high cache hit ratio, 90% of the transfer takes place in the CDN layer, which helps maintain high performance, regardless of the location of the file or the user.

Latency Wrap Up

As we wrap up our look at the role latency plays in cloud-based applications, it’s clear that understanding and strategically reducing latency is a necessary approach for CTOs, architects, and decision-makers building many of the modern applications we all use today. There are several factors that impact upload and download latency, and it’s important to understand the nuances to effectively improve performance.

Additionally, Backblaze B2’s integrations with CDNs like Fastly, bunny.net, and Cloudflare offer a cost-effective way to improve performance and reduce latency. The strategic decisions Musify made demonstrate how reducing latency with a CDN can significantly improve content delivery while saving on egress costs, and reducing overall business OpEx.

For additional information and guidance on reducing latency, improving time to first byte (TTFB), and overall performance, the insights shared in “Cloud Performance and When It Matters” offer a deeper, technical look.

If you’re keen to explore further into how an object storage platform may support your needs and help scale your bandwidth-intensive applications, read more about Backblaze B2 Cloud Storage.

Cloud 101: Data Egress Fees Explained
https://www.backblaze.com/blog/cloud-101-data-egress-fees-explained/
Thu, 30 Nov 2023

Traditional cloud storage pricing models come with complexity—by which we mean fees. Today, let's talk about what egress fees are and where they fit into your cloud storage strategy.

The post Cloud 101: Data Egress Fees Explained appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A decorative image showing a server, a cloud, and arrows pointing up and down with a dollar sign.

You can imagine data egress fees like tolls on a highway—your data is cruising along trying to get to its destination, but it has to pay a fee for the privilege of continuing its journey. If you have a lot of data to move, or a lot of toll booths (different cloud services) to move it through, those fees can add up quickly. 

Data egress fees are charges you incur for moving data out of a cloud service. They can be a big part of your cloud bill depending on how you use the cloud. And, they’re frequently a reason behind surprise AWS bills. So, let’s take a closer look at egress, egress fees, and ways you can reduce or eliminate them, so that your data can travel the cloud superhighways at will. 

What Is Data Egress?

In computing generally, data egress refers to the transfer or movement of data out of a given location, typically from within a network or system to an external destination. When it comes to cloud computing, egress generally means whenever data leaves the boundaries of a cloud provider’s network. 

In the simplest terms, data egress is the outbound flow of data.

A photo of a stair case with a sign that says "out" and an arrow pointing up.
The fees, like these stairs, climb higher. Source.

Egress vs. Ingress

While egress pertains to data exiting a system, ingress refers to data entering a system. When you download something, you’re egressing data from a service. When you upload something, you’re ingressing data to that service. 

Unsurprisingly, most cloud storage providers do not charge you to ingress data—they want you to store your data on their platform, so why would they? 

Egress vs. Download

You might hear egress referred to as download, and that’s not wrong, but there are some nuances. Egress applies not only to downloads, but also when you migrate data between cloud services, for example. So, egress includes downloads, but it’s not limited to them. 

In the context of cloud service providers, the distinction between egress and download may not always be explicitly stated, and the terminology used can vary between providers. It’s essential to refer to the specific terms and pricing details provided by the service or platform you are using to understand how they classify and charge for data transfers.

How Do Egress Fees Work?

Data egress fees are charges incurred when data is transferred out of a cloud provider’s environment. These fees are often associated with cloud computing services, where users pay not only for the resources they consume within the cloud (such as storage and compute) but also for the data that is transferred from the cloud to external destinations.

There are a number of scenarios where a cloud provider typically charges egress: 

  • When you’re migrating data from one cloud to another.
  • When you’re downloading data from a cloud to a local repository.
  • When you move data between regions or zones with certain cloud providers. 
  • When an application, end user, or content delivery network (CDN) requests data from your cloud storage bucket. 

The fees can vary depending on the amount of data transferred and the destination of the data. For example, transferring data between regions within the same cloud provider’s network might incur lower fees than transferring data to the internet or to a different cloud provider.

Data egress fees are an important consideration for organizations using cloud services, and they can impact the overall cost of hosting and managing data in the cloud. It’s important to be aware of the pricing details related to data egress in the cloud provider’s pricing documentation, as these fees can contribute significantly to the total cost of using cloud services.

Why Do Cloud Providers Charge Egress Fees?

Both ingressing and egressing data costs cloud providers money. They have to build the physical infrastructure to allow users to do that, including switches, routers, fiber cables, etc. They also have to have enough of that infrastructure on hand to meet customer demand, not to mention staff to deploy and maintain it. 

However, it’s telling that most cloud providers don’t charge ingress fees, only egress fees. It would be hard to entice people to use your service if you charged them extra for uploading their data. But, once cloud providers have your data, they want you to keep it there. Charging you to remove it is one way cloud providers like AWS, Google Cloud, and Microsoft Azure do that. 

What Are AWS’s Egress Fees?

AWS S3 gives customers 100GB of data transfer out to the internet free each month, with some caveats—that 100GB excludes data stored in China and GovCloud. After that, the published rates for U.S. regions for data transferred over the public internet are as follows as of the date of publication:

  • The first 10TB per month is $0.09 per GB.
  • The next 40TB per month is $0.085 per GB.
  • The next 100TB per month is $0.07 per GB.
  • Anything greater than 150TB per month is $0.05 per GB. 
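
To see how those tiers compound, here's an illustrative calculator that applies the rates quoted above, folding the 100GB free allowance in as a zero-cost first tier. It's a sketch only; real bills depend on region, service, and current pricing:

```python
def s3_internet_egress_usd(gb_out: float) -> float:
    """Estimate monthly cost of S3 data transfer out to the internet, using
    the published U.S.-region rates quoted above. Illustrative only."""
    tiers = [                  # (tier size in GB, price per GB)
        (100, 0.00),           # free allowance, first 100GB each month
        (10_140, 0.09),        # remainder of the first 10TB (10,240GB total)
        (40_960, 0.085),       # next 40TB
        (102_400, 0.07),       # next 100TB
        (float("inf"), 0.05),  # everything above 150TB
    ]
    cost, remaining = 0.0, gb_out
    for size, rate in tiers:
        taken = min(remaining, size)
        cost += taken * rate
        remaining -= taken
        if remaining <= 0:
            break
    return cost

print(round(s3_internet_egress_usd(1_024), 2))      # 1TB out per month
print(round(s3_internet_egress_usd(1_048_576), 2))  # 1PB out per month
```

Run it at different scales and the nonlinearity is obvious: the per-GB rate falls as volume grows, but the absolute bill still climbs into five figures at petabyte scale.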

But AWS also charges customers egress between certain services and regions, and it can get complicated quickly as the following diagram shows…

illustration of AWS Data Transfer Costs
Source.

How Can I Reduce Egress Fees?

If you’re using cloud services, minimizing your egress fees is probably a high priority. Companies like the Duckbill Group (the creators of the diagram above) exist to help businesses manage their AWS bills. In fact, there’s a whole industry of consultants that focuses solely on reducing your AWS bills. 

Aside from hiring a consultant to help you spend less, there are a few simple ways to lower your egress fees:

  1. Use a content delivery network (CDN): If you’re hosting an application, using a CDN can lower your egress fees since a CDN will cache data on edge servers. That way, when a user sends a request for your data, it can pull it from the CDN server rather than your cloud storage provider where you would be charged egress. 
  2. Optimize data transfer protocols: Choose efficient data transfer protocols that minimize the amount of data transmitted. For example, consider using compression or delta encoding techniques to reduce the size of transferred files. Compressing data before transfer can reduce the volume of data sent over the network, leading to lower egress costs. However, the effectiveness of compression depends on the nature of the data.
  3. Utilize integrated cloud providers: Some cloud providers offer free data transfer with a range of other cloud partners. (Hint: that’s what we do here at Backblaze!)
  4. Be aware of tiering: It may sound enticing to opt for a cold(er) storage tier to save on storage, but some of those tiers come with much higher egress fees. 
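
Point two is easy to demonstrate with Python's built-in gzip module. The ratio below reflects highly repetitive log-style text; already-compressed media such as JPEG or MP4 would barely shrink:

```python
import gzip

# Repetitive, text-heavy data (logs, CSV exports, JSON) compresses well, so
# compressing before transfer can substantially cut the bytes you egress.
log_lines = b"2024-01-01T00:00:00Z GET /api/v1/files 200 42ms\n" * 10_000
compressed = gzip.compress(log_lines)

ratio = len(compressed) / len(log_lines)
print(f"{len(log_lines):,} bytes -> {len(compressed):,} bytes "
      f"({ratio:.1%} of original)")
```

The trade-off is CPU time on both ends, which is usually a bargain compared to per-GB egress rates.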

How Does Backblaze Reduce Egress Fees?

There’s one more way you can drastically reduce egress, and we’ll just come right out and say it: Backblaze gives you free egress up to 3x the average monthly storage and unlimited free egress through a number of CDN and compute partners, including Fastly, Cloudflare, Bunny.net, and Vultr. 

Why do we offer free egress? Supporting an open cloud environment is central to our mission, so we expanded free egress to all customers so they can move data when and where they prefer. Cloud providers like AWS charge high egress fees that make it expensive for customers to use multi-cloud infrastructures, effectively locking customers into their services. These walled gardens hamper innovation and long-term growth.

Free Egress = A Better, Multi-Cloud World

The bottom line: the high egress fees charged by hyperscalers like AWS, Google, and Microsoft are a direct impediment to a multi-cloud future driven by customer choice and industry need. And, a multi-cloud future is something we believe in. So go forth and build the multi-cloud future of your dreams, and leave worries about high egress fees in the past. 

Cloud Storage Performance: The Metrics That Matter
https://www.backblaze.com/blog/cloud-storage-performance-the-metrics-that-matter/
Tue, 24 Oct 2023

Latency, throughput, availability, and durability are the main factors that affect cloud performance. Let's talk about how.

The post Cloud Storage Performance: The Metrics That Matter appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A decorative image showing a cloud in the foreground and various mocked up graphs in the background.

Availability, time to first byte, throughput, durability—there are plenty of ways to measure “performance” when it comes to cloud storage. But, which measure is best, and how should performance factor in when you’re choosing a cloud storage provider? Other than security and cost, performance is arguably the most important decision criterion, but it’s also the hardest dimension to pin down. It can be highly variable and depends on your own infrastructure, your workload, and all the network connections between your infrastructure and the cloud provider as well.

Today, I’m walking through how to think strategically about cloud storage performance, including which metrics matter and which may not be as important for you.

First, What’s Your Use Case?

The first thing to keep in mind is how you’re going to be using cloud storage. After all, performance requirements will vary from one use case to another. For instance, you may need greater performance in terms of latency if you’re using cloud storage to serve up software as a service (SaaS) content; however, if you’re using cloud storage to back up and archive data, throughput is probably more important for your purposes.

For something like application storage, you should also have other tools in your toolbox even when you are using hot, fast, public cloud storage, like the ability to cache content on edge servers, closer to end users, with a content delivery network (CDN).

Ultimately, you need to decide which cloud storage metrics are the most important to your organization. Performance is important, certainly, but security or cost may be weighted more heavily in your decision matrix.

A decorative image showing several icons representing different types of files on a grid over a cloud.

What Is Performant Cloud Storage?

Performance can be described using a number of different criteria, including:

  • Latency
  • Throughput
  • Availability
  • Durability

I’ll define each of these and talk a bit about what each means when you’re evaluating a given cloud storage provider and how they may affect upload and download speeds.

Latency

  • Latency is defined as the time between a client request and a server response. It quantifies the time it takes data to transfer across a network.  
  • Latency is primarily influenced by physical distance—the farther away the client is from the server, the longer it takes to complete the request. 
  • If you’re serving content to many geographically dispersed clients, you can use a CDN to reduce the latency they experience. 

Latency can be influenced by network congestion, security protocols on a network, or network infrastructure, but the primary cause is generally distance, as we noted above. 

Downstream latency is typically measured using time to first byte (TTFB). In the context of surfing the web, TTFB is the time between a page request and when the browser receives the first byte of information from the server. In other words, TTFB is measured by how long it takes between the start of the request and the start of the response, including DNS lookup and establishing the connection using a TCP handshake and TLS handshake if you’ve made the request over HTTPS.
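
You can measure TTFB with nothing but the standard library. A minimal sketch over plain HTTP (for HTTPS you would use HTTPSConnection instead, which adds a TLS handshake to the total; any hostname you pass is a placeholder):

```python
import time
from http.client import HTTPConnection

def measure_ttfb(host: str, port: int = 80, path: str = "/") -> float:
    """Seconds from starting a GET request until the first response bytes
    arrive. The connection (DNS lookup + TCP handshake) is established
    lazily inside request(), so it is included in the measurement."""
    conn = HTTPConnection(host, port, timeout=10)
    try:
        start = time.perf_counter()
        conn.request("GET", path)
        response = conn.getresponse()
        response.read(1)  # wait for the first byte of the body
        return time.perf_counter() - start
    finally:
        conn.close()

# Example (placeholder host; results vary with distance and congestion):
# print(f"TTFB: {measure_ttfb('example.com') * 1000:.1f}ms")
```

Browser developer tools report the same metric in their network panels, which is handy for spot-checking from an end user's vantage point.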

Let’s say you’re uploading data from California to a cloud storage data center in Sacramento. In that case, you’ll experience lower latency than if your business data is stored in, say, Ohio and has to make the cross-country trip. However, making the “right” decision about where to store your data isn’t quite as simple as that, and the complexity goes back to your use case. If you’re using cloud storage for off-site backup, you may want your data to be stored farther away from your organization to protect against natural disasters. In this case, performance is likely secondary to location—you only need fast enough performance to meet your backup schedule. 

Using a CDN to Improve Latency

If you’re using cloud storage to store active data, you can speed up performance by using a CDN. A CDN helps speed content delivery by caching content at the edge, meaning faster load times and reduced latency. 

Edge networks create “satellite servers” that are separate from your central data server, and CDNs leverage these to chart the fastest data delivery path to end users. 

Throughput

  • Throughput is a measure of the amount of data passing through a system at a given time.
  • If you have spare bandwidth, you can use multi-threading to improve throughput. 
  • Cloud storage providers’ architecture influences throughput, as do their policies around slowdowns (i.e. throttling).

Throughput is often confused with bandwidth. The two concepts are closely related, but different. 

To explain them, it’s helpful to use a metaphor: Imagine a swimming pool. The amount of water in it is your file size. When you want to drain the pool, you need a pipe. Bandwidth is the size of the pipe, and throughput is the rate at which water moves through the pipe successfully. So, bandwidth affects your ultimate throughput. Throughput is also influenced by processing power, packet loss, and network topology, but bandwidth is the main factor. 

Using Multi-Threading to Improve Throughput

Assuming you have some bandwidth to spare, one of the best ways to improve throughput is to enable multi-threading. Threads are units of execution within processes. When you transmit files using a program across a network, they are being communicated by threads. Using more than one thread (multi-threading) to transmit files is, not surprisingly, better and faster than using just one (although a greater number of threads will require more processing power and memory). To return to our water pipe analogy, multi-threading is like having multiple water pumps (threads) running to that same pipe. Maybe with one pump, you can only fill 10% of your pipe. But you can keep adding pumps until you reach pipe capacity.
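
Conceptually, a multi-threaded transfer looks like the sketch below. Note that `upload_part` here is a placeholder, not a real storage API call; an actual integration would invoke its provider's per-part upload request:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_part(part_number: int, data: bytes) -> int:
    """Placeholder for a real per-part upload call; a real implementation
    would send `data` over the network and return the part's ID/checksum."""
    return len(data)  # pretend we uploaded it, report bytes sent

def upload_in_parts(payload: bytes, part_size: int, threads: int = 4) -> int:
    """Split payload into fixed-size parts and upload them concurrently.
    Returns total bytes 'uploaded'. More threads help only while spare
    bandwidth, CPU, and memory remain."""
    parts = [payload[i:i + part_size] for i in range(0, len(payload), part_size)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        sent = pool.map(upload_part, range(1, len(parts) + 1), parts)
        return sum(sent)

print(upload_in_parts(b"x" * 1_000_000, part_size=250_000, threads=4))
```

In our water-pipe metaphor, `threads` is the number of pumps; `part_size` is how much water each pump moves per stroke.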

When you’re using cloud storage with an integration like backup software or a network attached storage (NAS) device, the multi-threading setting is typically found in the integration’s settings. Many backup tools, like Veeam, are already set to multi-thread by default. Veeam automatically makes adjustments based on details like the number of individual backup jobs, or you can configure the number of threads manually. Other integrations, like Synology’s Cloud Sync, also give you granular control over threading so you can dial in your performance.  

A diagram showing single vs. multi-threaded processes.
Still confused about threads? Learn more in our deep dive, including what’s going on in this diagram.

That said, the gains from increasing the number of threads are limited by the available bandwidth, processing power, and memory. Finding the right setting can involve some trial and error, but the improvements can be substantial (as we discovered when we compared download speeds on different Python versions using single vs. multi-threading).

What About Throttling?

One question you’ll absolutely want to ask when you’re choosing a cloud storage provider is whether they throttle traffic. That means they deliberately slow down your connection for various reasons. Shameless plug here: Backblaze does not throttle, so customers are able to take advantage of all their bandwidth while uploading to B2 Cloud Storage. Many other public cloud services do throttle, although they certainly may not make it widely known, so be sure to ask the question upfront when engaging with a storage provider.

Upload Speed and Download Speed

Your ultimate upload and download speeds will be affected by throughput and latency. Again, it’s important to consider your use case when determining which performance measure is most important for you. Latency is important to application storage use cases where things like how fast a website loads can make or break a potential SaaS customer. With latency being primarily influenced by distance, it can be further optimized with the help of a CDN. Throughput is often the measurement that’s more important to backup and archive customers because it is indicative of the upload and download speeds an end user will experience, and it can be influenced by cloud storage provider practices, like throttling.   

Availability

  • Availability is the percentage of time a cloud service or a resource is functioning correctly.
  • Make sure the availability listed in the cloud provider’s service level agreement (SLA) matches your needs. 
  • Keep in mind the difference between hot and cold storage—cold storage services like Amazon Glacier offer slower retrieval and response times.

Also called uptime, this metric measures the percentage of time that a cloud service or resource is available and functioning correctly. It’s usually expressed as a percentage, with 99.9% (three nines) or 99.99% (four nines) availability being common targets for critical services. Availability is often backed by SLAs that define the uptime customers can expect and what happens if availability falls below that metric. 
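
The gap between those targets is easier to feel when converted into allowed downtime per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Maximum downtime per year implied by an availability percentage."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

# "Three nines" permits roughly 8.8 hours of downtime a year; "four nines"
# squeezes that to under an hour.
for pct in (99.9, 99.99):
    print(f"{pct}% -> {downtime_minutes_per_year(pct):.1f} minutes/year")
```

When reading an SLA, it's worth doing this arithmetic yourself: each extra nine cuts the permitted downtime by a factor of ten.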

You’ll also want to consider availability if you’re considering whether you want to store in cold storage versus hot storage. Cold storage is lower performing by design. It prioritizes durability and cost-effectiveness over availability. Services like Amazon Glacier and Google Coldline take this approach, offering slower retrieval and response times than their hot storage counterparts. While cost savings is typically a big factor when it comes to considering cold storage, keep in mind that if you do need to retrieve your data, it will take much longer (potentially days instead of seconds), and speeding that up at all is still going to cost you. You may end up paying more to get your data back faster, and you should also be aware of the exorbitant egress fees and minimum storage duration requirements for cold storage—unexpected costs that can easily add up. 

                   Cold                          Hot
Access Speed       Slow                          Fast
Access Frequency   Seldom or Never               Frequent
Data Volume        Low                           High
Storage Media      Slower drives, LTO, offline   Faster drives, durable drives, SSDs
Cost               Lower                         Higher

Durability

  • Durability is the ability of a storage system to consistently preserve data.
  • Durability is measured in “nines” or the probability that your data is retrievable after one year of storage. 
  • We designed the Backblaze B2 Storage Cloud for 11 nines of durability using erasure coding.

Data durability refers to the ability of a data storage system to reliably and consistently preserve data over time, even in the face of hardware failures, errors, or unforeseen issues. It is a measure of data’s long-term resilience and permanence. Highly durable data storage systems ensure that data remains intact and accessible, meeting reliability and availability expectations, making it a fundamental consideration for critical applications and data management.

We usually measure durability (more precisely, annual durability) in “nines,” referring to the number of nines in the probability (expressed as a percentage) that your data is retrievable after one year of storage. We know from our work on Drive Stats that an annual failure rate of 1% is typical for a hard drive. So, if you were to store your data on a single drive, its durability, the probability that it would not fail, would be 99%, or two nines.

The very simplest way of improving durability is to simply replicate data across multiple drives. If a file is lost, you still have the remaining copies. It’s also simple to calculate the durability with this approach. If you write each file to two drives, you lose data only if both drives fail. We calculate the probability of both drives failing by multiplying the probabilities of either drive failing, 0.01 x 0.01 = 0.0001, giving a durability of 99.99%, or four nines. While simple, this approach is costly—it incurs a 100% overhead in the amount of storage required to deliver four nines of durability.
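
That replication arithmetic generalizes to any number of copies. A sketch, keeping the same assumption of independent 1% annual failure rates (real failures can be correlated, e.g. by a shared power event, which would lower these figures):

```python
import math

def replication_durability(annual_failure_rate: float, copies: int) -> float:
    """Probability that at least one copy survives a year, assuming
    independent drive failures (a simplifying assumption)."""
    return 1 - annual_failure_rate ** copies

def nines(durability: float) -> float:
    """Express durability as a count of nines, e.g. 0.9999 -> 4.0."""
    return -math.log10(1 - durability)

for copies in (1, 2, 3):
    d = replication_durability(0.01, copies)
    overhead = (copies - 1) * 100
    print(f"{copies} cop{'y' if copies == 1 else 'ies'}: "
          f"{nines(d):.0f} nines at {overhead}% storage overhead")
```

Each extra copy buys two more nines but costs another 100% of storage, which is exactly why erasure coding is worth the added complexity.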

Erasure coding is a more sophisticated technique, improving durability with much less overhead than simple replication. An erasure code takes a “message,” such as a data file, and makes a longer message in a way that the original can be reconstructed from the longer message even if parts of the longer message have been lost. 

A decorative image showing the matrices that get multiplied to allow Reed-Solomon code to re-create files.
A representation of Reed-Solomon erasure coding, with some very cool Storage Pods in the background.

The durability calculation for this approach is much more complex than for replication, as it involves the time required to replace and rebuild failed drives as well as the probability that a drive will fail, but we calculated that we could take advantage of erasure coding in designing the Backblaze B2 Storage Cloud for 11 nines of durability with just 25% overhead in the amount of storage required. 

How does this work? Briefly, when we store a file, we split it into 16 equal-sized pieces, or shards. We then calculate four more shards, called parity shards, in such a way that the original file can be reconstructed from any 16 of the 20 shards. We then store the resulting 20 shards on 20 different drives, each in a separate Storage Pod (storage server).

Note: As hard disk drive capacity increases, so does the time required to recover after a drive failure, so we periodically adjust the ratio between data shards and parity shards to maintain our 11 nines durability target. Consequently, our very newest vaults use a 15+5 scheme.

If a drive does fail, it can be replaced with a new drive, and its data rebuilt from the remaining good drives. We open sourced our implementation of Reed-Solomon erasure coding, so you can dive into the source code for more details.
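
The intuition behind those numbers can be sketched with a simple binomial model: with 16 data shards and four parity shards, a file survives unless five or more of its 20 drives fail. The toy calculation below deliberately ignores rebuilds, which is why it falls well short of 11 nines; the real figure depends on failed drives being rebuilt within days rather than left exposed for a full year.

```python
from math import comb

def annual_loss_probability(n: int, k: int, p: float) -> float:
    """Probability that more than n - k of n shards fail within a year,
    given independent per-drive failure probability p and no rebuilds."""
    return sum(
        comb(n, f) * p**f * (1 - p) ** (n - f)
        for f in range(n - k + 1, n + 1)
    )

# 16 data + 4 parity shards, 1% annual failure rate per drive:
# data is lost only if 5 or more of the 20 drives fail.
print(annual_loss_probability(20, 16, 0.01))  # ~1.4e-06, vs. 1e-04 for two-way replication
```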

Additional Factors Impacting Cloud Storage Performance

In addition to bandwidth and latency, there are a few additional factors that impact cloud storage performance, including:

  • The size of your files.
  • The number of files you upload or download.
  • Block (part) size.
  • The amount of available memory on your machine. 

Small files—that is, those less than 5GB—can be uploaded in a single API call. Larger files, from 5MB to 10TB, can be uploaded as “parts”, in multiple API calls. You’ll notice that there is quite an overlap here! For uploading files between 5MB and 5GB, is it better to upload them in a single API call, or split them into parts? What is the optimum part size? For backup applications, which typically split all data into equal-sized blocks, storing each block as a file, what is the optimum block size? As with many questions, the answer is that it depends.

Remember latency? Each API call incurs a more-or-less fixed overhead due to latency. For a 1GB file, assuming a single thread of execution, uploading all 1GB in a single API call will be faster than ten API calls each uploading a 100MB part, since those additional nine API calls each incur some latency overhead. So, bigger is better, right?

Not necessarily. Multi-threading, as mentioned above, affords us the opportunity to upload multiple parts simultaneously, which improves performance—but there are trade-offs. Typically, each part must be stored in memory as it is uploaded, so more threads means more memory consumption. If the number of threads multiplied by the part size exceeds available memory, then either the application will fail with an out of memory error, or data will be swapped to disk, reducing performance.

Downloading data offers even more flexibility, since applications can specify any portion of the file to download in each API call. Whether uploading or downloading, there is a maximum number of threads that will drive throughput to consume all of the available bandwidth. Exceeding this maximum will consume more memory, but provide no performance benefit. If you go back to our pipe analogy, you’ll have reached the maximum capacity of the pipe, so adding more pumps won’t make things move faster. 
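
For example, a parallel download can be expressed as a list of byte ranges, one per thread. A minimal sketch of the range arithmetic (each range would be passed as an HTTP Range header to an S3-compatible GetObject-style call; the names here are illustrative):

```python
def byte_ranges(content_length: int, part_size: int) -> list[str]:
    """HTTP Range header values covering an object in part_size chunks."""
    return [
        f"bytes={start}-{min(start + part_size, content_length) - 1}"
        for start in range(0, content_length, part_size)
    ]

# Each range could then be fetched on its own thread via an
# S3-compatible GetObject call carrying the corresponding Range header.
print(byte_ranges(10, 4))  # ['bytes=0-3', 'bytes=4-7', 'bytes=8-9']
```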

So, what to do to get the best performance possible for your use case? Simple: customize your settings. 

Most backup and file transfer tools allow you to configure the number of threads and the amount of data to be transferred per API call, whether that’s block size or part size. If you are writing your own application, you should allow for these parameters to be configured. When it comes to deployment, some experimentation may be required to achieve maximum throughput given available memory.
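
As an illustration of that experimentation, here is a hypothetical heuristic (not a Backblaze recommendation) for picking a thread count and part size that keep in-flight parts within a memory budget; the 5MB floor mirrors the minimum part size mentioned above:

```python
MIN_PART = 5 * 2**20    # 5MB minimum part size, as discussed above
MAX_PART = 5 * 2**30    # 5GB single-call upload limit

def plan_upload(file_size: int, memory_budget: int, max_threads: int = 16):
    """Pick (threads, part_size) so threads * part_size <= memory_budget.
    A hypothetical heuristic for illustration only."""
    if file_size <= MIN_PART:
        return 1, file_size                 # small file: one API call
    threads = max(1, min(max_threads, memory_budget // MIN_PART))
    part_size = min(MAX_PART, max(MIN_PART, memory_budget // threads))
    return threads, part_size

# A 10GB file with 256MB of memory to spare: 16 threads of 16MB parts.
print(plan_upload(10 * 2**30, 256 * 2**20))  # (16, 16777216)
```

Off-the-shelf SDKs expose the same knobs; boto3, for example, accepts them via TransferConfig (max_concurrency, multipart_chunksize).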

How to Evaluate Cloud Performance

To sum up, the cloud is increasingly becoming a cornerstone of every company’s tech stack. Gartner predicts that by 2026, 75% of organizations will adopt a digital transformation model predicated on cloud as the fundamental underlying platform. So, cloud storage performance will likely be a consideration for your company in the next few years if it isn’t already.

It’s important to consider that cloud storage performance can be highly subjective and heavily influenced by things like use case considerations (i.e. backup and archive versus application storage, media workflow, or another), end user bandwidth and throughput, file size, block size, etc. Any evaluation of cloud performance should take these factors into account rather than simply relying on metrics in isolation. And, a holistic cloud strategy will likely have multiple operational schemas to optimize resources for different use cases.

Wait, Aren’t You, Backblaze, a Cloud Storage Company?

Why, yes. Thank you for noticing. We ARE a cloud storage company, and we OFTEN get questions about all of the topics above. In fact, that’s why we put this guide together—our customers and prospects are the best sources of content ideas we can think of. Circling back to the beginning, it bears repeating that performance is one factor to consider in addition to security and cost. (And, hey, we would be remiss not to mention that we’re also one-fifth the cost of AWS S3.) Ultimately, whether you choose Backblaze B2 Cloud Storage or not though, we hope the information is useful to you. Let us know if there’s anything we missed.

The post Cloud Storage Performance: The Metrics That Matter appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
https://www.backblaze.com/blog/cloud-storage-performance-the-metrics-that-matter/feed/ 2
The Power of Specialized Cloud Providers: A Game Changer for SaaS Companies https://www.backblaze.com/blog/the-power-of-specialized-cloud-providers-a-game-changer-for-saas-companies/ https://www.backblaze.com/blog/the-power-of-specialized-cloud-providers-a-game-changer-for-saas-companies/#respond Tue, 13 Jun 2023 16:40:34 +0000 https://www.backblaze.com/blog/?p=108971 Cloud-based tech stacks have moved beyond a one-size-fits all approach. Here's how specialized cloud providers can help your SaaS company customize its tech stack.

The post The Power of Specialized Cloud Providers: A Game Changer for SaaS Companies appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
A decorative image showing a cloud with the Backblaze logo, then logos hanging off it for Vultr, Fastly, Equinix Metal, Terraform, and rclone.

“Nobody ever got fired for buying AWS.” It’s true: AWS’s one-size-fits-all solution worked great for most businesses, and those businesses made the shift away from the traditional model of on-prem and self-hosted servers—what we think of as Cloud 1.0—to an era where AWS was the cloud, the one and only, which is what we call Cloud 2.0. However, as the cloud landscape evolves, it’s time to question the old ways. Maybe nobody ever got fired for buying AWS, but these days, you can certainly get a lot of value (and kudos) for exploring other options. 

Developers and IT teams might hesitate when it comes to moving away from AWS, but AWS comes with risks, too. If you don’t have the resources to manage and maintain your infrastructure, costs can get out of control, for one. As we enter Cloud 3.0 where the landscape is defined by the open, multi-cloud internet, there is an emerging trend that is worth considering: the rise of specialized cloud providers.

Today, I’m sharing how software as a service (SaaS) startups and modern businesses can take advantage of these highly focused, tailored services, each specializing and excelling in specific areas like cloud storage, content delivery, cloud compute, and more. Building on a specialized stack offers more control, return on investment, and flexibility, while achieving the same performance you expect from hyperscaler infrastructure.

From a cost of goods sold perspective, AWS pricing wasn’t a great fit. From an engineering perspective, we didn’t want a net-new platform. So the fact that we got both with Backblaze—a drop-in API replacement with a much better cost structure—it was just a no-brainer.

—Rory Petty, Co-Founder & CTO, Tribute

The Rise of Specialized Cloud Providers

Specialized providers—including content delivery networks (CDNs) like Fastly, bunny.net, and Cloudflare, as well as cloud compute providers like Vultr—offer services that focus on a particular area of the infrastructure stack. Rather than trying to be everything to everyone, like the hyperscalers of Cloud 2.0, they do one thing and do it really well. Customers get best-of-breed services that allow them to build a tech stack tailored to their needs. 

Use Cases for Specialized Cloud Providers

A number of businesses might benefit from switching from hyperscalers to specialized cloud providers. In order for businesses to take advantage of the benefits (since most applications rely on more than just one service), these services must work together seamlessly. 

Let’s Take a Closer Look at How Specialized Stacks Can Work For You

If you’re wondering how exactly specialized clouds can “play well with each other,” we ran a whole series of application storage webinars that talk through specific examples and use cases. I’ll share what’s in it for you below.

1. Low Latency Multi-Region Content Delivery with Fastly and Backblaze

Did you know a 100-millisecond delay in website load time can hurt conversion rates by 7%? In this session, Pat Patterson from Backblaze and Jim Bartos from Fastly discuss the importance of speed and latency in user experience. They highlight how Backblaze’s B2 Cloud Storage and Fastly’s content delivery network work together to deliver content quickly and efficiently across multiple regions. Businesses can ensure that their content is delivered with low latency, reducing delays and optimizing user experience regardless of the user’s location.

2. Scaling Media Delivery Workflows with bunny.net and Backblaze

Delivering content to your end users at scale can be challenging and costly. Users expect exceptional web and mobile experiences with snappy load times and zero buffering. Anything less than an instantaneous response may cause them to bounce. 

In this webinar, Pat Patterson demonstrates how to efficiently scale your content delivery workflows from content ingestion, transcoding, and storage to last-mile acceleration via bunny.net’s CDN. Pat builds a video hosting platform called “Cat Tube” and shows how to upload a video and play it using the HTML5 video element with controls. Watch below and download the demo code to try it yourself.

3. Balancing Cloud Cost and Performance with Fastly and Backblaze

With a global economic slowdown, IT and development teams are looking for ways to slash cloud budgets without compromising performance. E-commerce, SaaS platforms, and streaming applications all rely on high-performance infrastructure, but balancing bandwidth and storage costs can be challenging. In this 45-minute session, we explored how to recession-proof your growing business with key cloud optimization strategies, including ways to leverage Fastly’s CDN to balance bandwidth costs while avoiding performance tradeoffs.

4. Reducing Cloud OpEx Without Sacrificing Performance and Speed

Greg Hamer from Backblaze and DJ Johnson from Vultr explore the benefits of building on best-of-breed, specialized cloud stacks tailored to your business model, rather than being locked into traditional hyperscaler infrastructure. They cover real-world use cases, including:

  • How Can Stock Photo broke free from AWS and reduced their cloud bill by 55% while achieving 4x faster generation.
  • How Monument Labs launched a new cloud-based photo management service to 25,000+ users.
  • How Black.ai processes thousands of files simultaneously, with a significant reduction in infrastructure costs.

5. Leveling Up a Global Gaming Platform while Slashing Cloud Spend by 85%

James Ross of Nodecraft, an online gaming platform that aims to make gaming online easy, shares how he moved his global game server platform from Amazon S3 to Backblaze B2 for greater flexibility and 85% savings on storage and egress. He discusses the challenges of managing large files over the public internet, which can result in expensive bandwidth costs. By storing game titles on Backblaze B2 and delivering them through Cloudflare’s CDN, they achieve reduced latency since games are cached at the edge, and pay zero egress fees thanks to the Bandwidth Alliance. Nodecraft also benefited from Universal Data Migration, which allows customers to move large amounts of data from any cloud services or on-premises storage to Backblaze’s B2 Cloud Storage, managed by Backblaze and free of charge.

Migrating From a Hyperscaler

Though it may seem daunting to transition from a hyperscaler to a specialized cloud provider, it doesn’t have to be. Many specialized providers offer tools and services to make the transition as smooth as possible. 

  • S3-compatible APIs, SDKs, CLI: Interface with storage as you would with Amazon S3—switching can be as easy as dropping in a new storage target.
  • Universal Data Migration: Free and fully managed migrations to make switching as seamless as possible.
  • Free egress: Move data freely with the Bandwidth Alliance and other partnerships between specialized cloud storage providers.
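
To make the first bullet concrete, here is what the drop-in swap can look like from Python with an S3 SDK such as boto3. The endpoint URL and credential placeholders are illustrative; you would substitute your bucket’s actual region endpoint and keys:

```python
# Switching storage targets is a matter of pointing the client at a new
# endpoint; subsequent put_object/get_object calls are unchanged.
aws_target = {"service_name": "s3"}  # credentials resolved from the environment

b2_target = {
    "service_name": "s3",
    "endpoint_url": "https://s3.us-west-004.backblazeb2.com",  # your bucket's region endpoint
    "aws_access_key_id": "<keyID>",                # placeholder
    "aws_secret_access_key": "<applicationKey>",   # placeholder
}

# client = boto3.client(**b2_target)  # drop-in replacement for the AWS client
```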

As the decision maker at your growing SaaS company, it’s worth considering whether a specialized cloud stack could be a better fit for your business. By doing so, you could potentially unlock cost savings, improve performance, and gain the flexibility to adapt your services to your unique needs. The one-size-fits-all approach is no longer the only option out there. 

Want to Test It Out Yourself?

Take a proactive approach to cloud cost management: Get 10GB free to test and validate your proof of concept (POC) with Backblaze B2. All it takes is an email to get started.


The post The Power of Specialized Cloud Providers: A Game Changer for SaaS Companies appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
https://www.backblaze.com/blog/the-power-of-specialized-cloud-providers-a-game-changer-for-saas-companies/feed/ 0
The Free Credit Trap: Building SaaS Infrastructure for Long-Term Sustainability https://www.backblaze.com/blog/the-free-credit-trap-building-saas-infrastructure-for-long-term-sustainability/ https://www.backblaze.com/blog/the-free-credit-trap-building-saas-infrastructure-for-long-term-sustainability/#respond Tue, 23 May 2023 16:29:10 +0000 https://www.backblaze.com/blog/?p=108768 Many businesses use AWS's free cloud credits to create their cloud-based infrastructure. But, those AWS bills are no fun. Here are some ways you can leverage free credits without getting locked into AWS long term.

The post The Free Credit Trap: Building SaaS Infrastructure for Long-Term Sustainability appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>

In today’s economic climate, cost cutting is on everyone’s mind, and businesses are doing everything they can to save money. But they can’t afford to compromise the integrity of their infrastructure or the quality of the customer experience. As a startup, taking advantage of free cloud credits from cloud providers like AWS, especially at a time like this, seems enticing. 

Using those credits can make sense, but it takes more planning than you might think to use them in a way that allows you to continue managing cloud costs once the credits run out. 

In this blog post, I’ll walk through common use cases for credit programs, the risks of using credits, and alternatives that help you balance growth and cloud costs.

The True Cost of “Free”

This post is part of a series exploring free cloud credits and the hidden complexities and limitations that come with these offers. Check out our previous installments:

The Shift to Cloud 3.0

As we see it, there have been three stages of “The Cloud” in its history:

Phase 1: What is the Cloud?

Starting around when Backblaze was founded in 2007, the public cloud was in its infancy. Most people weren’t clear on what cloud computing was or if it was going to take root. Businesses were asking themselves, “What is the cloud and how will it work with my business?”

Phase 2: Cloud = Amazon Web Services

Fast forward to 10 years later, and AWS and “The Cloud” started to become synonymous. Amazon had nearly 50% of market share of public cloud services, more than Microsoft, Google, and IBM combined. “The Cloud” was well-established, and for most folks, the cloud was AWS.

Phase 3: Multi-Cloud

Today, we’re in Phase 3 of the cloud. “The Cloud” of today is defined by the open, multi-cloud internet. Traditional cloud vendors are expensive, complicated, and seek to lock customers into their walled gardens. Customers have come to realize that (see below) and to value the benefits they can get from moving away from a model that demands exclusivity in cloud infrastructure.

An image displaying a Tweet from user Philo Hermans @Philo01 that says 

I migrated most infrastructure away from AWS. Now that I think about it, those AWS credits are a well-designed trap to create a vendor lock in, and once your credits expire and you notice the actual cost, chances are you are in shock and stuck at the same time (laughing emoji).
Source.

In Cloud Phase 3.0, companies are looking to rein in spending, and are increasingly seeking specialized cloud providers offering affordable, best-of-breed services without sacrificing speed and performance. How do you balance that with the draw of free credits? I’ll get into that next, and the two are far from mutually exclusive.

Getting Hooked on Credits: Common Use Cases

So, you have $100k in free cloud credits from AWS. What do you do with them? Well, in our experience, there are a wide range of use cases for credits, including:

  • App development and testing: Teams may leverage credits to run an app development proof of concept (PoC) utilizing Amazon EC2, RDS, and S3 for compute, database, and storage needs, for example, but without understanding how these will scale in the longer term, there may be risks involved. Spinning up EC2 instances can quickly lead to burning through your credits and getting hit with an unexpected bill.
  • Machine learning (ML): Machine learning models require huge amounts of computing power and storage. Free cloud credits might be a good way to start, but you can expect them to quickly run out if you’re using them for this use case. 
  • Data analytics: While free cloud credits may cover storage and computing resources, data transfer costs might still apply. Analyzing large volumes of data or frequently transferring data in and out of the cloud can lead to unexpected expenses.
  • Website hosting: Hosting your website with free cloud credits can eliminate the up front infrastructure spend and provide an entry point into the cloud, but remember that when the credits expire, traffic spikes you should be celebrating can crater your bottom line.
  • Backup and disaster recovery: Free cloud credits may have restrictions on data retention, limiting the duration for which backups can be stored. This can pose challenges for organizations requiring long-term data retention for compliance or disaster recovery purposes.

All of this is to say: Proper configuration, long-term management and upkeep, and cost optimization all play a role in how you scale on monolith platforms. It is important to note that the risks and benefits mentioned above are general considerations, and specific terms and conditions may vary depending on the cloud service provider and the details of their free credit offerings. It’s crucial to thoroughly review the terms and plan accordingly to maximize the benefits and mitigate the risks associated with free cloud credits for each specific use case. (And, given the complicated pricing structures we mentioned before, that might take some effort.)

Monument Uses Free Credits Wisely

Monument, a photo management service with a strong focus on security and privacy, utilized free startup credits from AWS. But, they knew free credits wouldn’t last forever. Monument’s co-founder, Ercan Erciyes, realized they’d ultimately lose money if they built the infrastructure for Monument Cloud on AWS.

He also didn’t want to accumulate tech debt and become locked in to AWS. Rather than using the credits to build a minimum viable product as fast as humanly possible, he used the credits to develop the AI model, and built infrastructure that could scale as they grew.

➔ Read More

The Risks of AWS Credits: Lessons from Founders

If you’re handed $100,000 in credits, it’s crucial to be aware of the risks and implications that come along with it. While it may seem like an exciting opportunity to explore the capabilities of the cloud without immediate financial constraints, there are several factors to consider:

  1. The temptation to overspend: With a credit balance at your disposal just waiting to be spent, there is a possibility of underestimating the actual costs of your cloud usage. This can lead to a scenario where you inadvertently exhaust the credits sooner than anticipated, leaving you with unexpected expenses that may strain your budget.
  2. The shock of high bills once credits expire: Without proper planning and monitoring of your cloud usage, the transition from “free” to paying for services can result in high bills that catch you off guard. It is essential to closely track your cloud usage throughout the credit period and have a clear understanding of the costs associated with the services you’re utilizing. Or better yet, use those credits for a discrete project to test your PoC or develop your minimum viable product, and plan to build your long-term infrastructure elsewhere.
  3. The risk of vendor lock-in: As you build and deploy your infrastructure within a specific cloud provider’s ecosystem, the process of migrating to an alternative provider can seem complex and can definitely be costly (shameless plug: at Backblaze, we’ll cover your migration over 50TB). Vendor lock-in can limit your flexibility, making it challenging to adapt to changing business needs or take advantage of cost-saving opportunities in the future.

The problems are nothing new for founders, as the online conversation bears out.

First, there’s the old surprise bill:

A Tweet from user Ajul Sahul @anjuls that says 

Similar story, AWS provided us free credits so we though we will use it for some data processing tasks. The credit expired after one year and team forgot about the abandoned resources to give a surprise bill. Cloud governance is super importance right from the start.
Source.

Even with some optimization, AWS cloud spend can still be pretty “obscene” as this user vividly shows:

A Tweet from user DHH @dhh that says 

We spent $3,201,564.24 on cloud in 2022 at @37signals, mostly AWS. $907,837.83 on S3. $473,196.30 on RDS. $519,959.60 on OpenSearch. $123,852.30 on Elasticache. This is with long commits (S3 for 4 years!!), reserved instances, etc. Just obscene. Will publish full accounting soon.
Source.

There’s the founder raising rounds just to pay AWS bills:

A Tweet from user Guille Ojeda @itsguilleojeda that says 

Tech first startups raise their first rounds to pay AWS bills. By the way, there's free credits, in case you didn't know. Up to $100k. And you'll still need funding.
Source.

Some use the surprise bill as motivation to get paying customers.

Lastly, there’s the comic relief:

A tweet from user Mrinal Wahal @MrinalWahal that reads 

Yeah high credit card bills are scary but have you forgotten turning off your AWS instances?
Source.

Strategies for Balancing Growth and Cloud Costs

Where does that leave you today? Here are some best practices startups and early founders can implement to balance growth and cloud costs:

  1. Establishing a cloud cost management plan early on.
  2. Monitoring and optimizing cloud usage to avoid wasted resources.
  3. Leveraging multiple cloud providers.
  4. Moving to a new cloud provider altogether.
  5. Setting aside some of your credits for the migration.

1. Establishing a Cloud Cost Management Plan

Put some time into creating a well-thought-out cloud cost management strategy from the beginning. This includes closely monitoring your usage, optimizing resource allocation, and planning for the expiration of credits to ensure a smooth transition. By understanding the risks involved and proactively managing your cloud usage, you can maximize the benefits of the credits while minimizing potential financial setbacks and vendor lock-in concerns.

2. Monitoring and Optimizing Cloud Usage

Monitoring and optimizing cloud usage plays a vital role in avoiding wasted resources and controlling costs. By regularly analyzing usage patterns, organizations can identify opportunities to right-size resources, adopt automation to reduce idle time, and leverage cost-effective pricing options. Effective monitoring and optimization ensure that businesses are only paying for the resources they truly need, maximizing cost efficiency while maintaining the necessary levels of performance and scalability.

3. Leveraging Multiple Cloud Providers

By adopting a multi-cloud strategy, businesses can diversify their cloud infrastructure and services across different providers. This allows them to benefit from each provider’s unique offerings, such as specialized services, geographical coverage, or pricing models. Additionally, it provides a layer of protection against potential service disruptions or price increases from a single provider. Adopting a multi-cloud approach requires careful planning and management to ensure compatibility, data integration, and consistent security measures across multiple platforms. However, it offers the flexibility to choose the best-fit cloud services from different providers, reducing dependency on a single vendor and enabling businesses to optimize costs while harnessing the capabilities of various cloud platforms.

4. Moving to a New Cloud Provider Altogether

If you’re already deeply invested in a major cloud platform, shifting away can seem cumbersome, but there may be long-term benefits that outweigh the short-term “pains” (this leads into the shift to Cloud 3.0). The process could involve re-architecting applications, migrating data, and retraining personnel on the new platform. However, factors such as pricing models, performance, scalability, or access to specialized services may win out in the end. It’s worth noting that many specialized providers have taken measures to “ease the pain” and make the transition away from AWS more seamless without overhauling code. For example, at Backblaze, we developed an S3-compatible API so switching providers is as simple as dropping in a new storage target.

5. Setting Aside Credits for the Migration

By setting aside credits for future migration, businesses can ensure they have the necessary resources to transition to a different provider without incurring significant up front expenses like egress fees to transfer large data sets. This strategic allocation of credits allows organizations to explore alternative cloud platforms, evaluate their pricing models, and assess the cost-effectiveness of migrating their infrastructure and services without worrying about being able to afford the migration.

Welcome to Cloud 3.0: Alternatives to AWS

In 2022, David Heinemeier Hansson, the creator of Basecamp and Hey, announced that he was moving Hey’s infrastructure from AWS to on-premises. Hansson cited the high cost of AWS as one of the reasons for the move. His estimate? “We stand to save $7m over five years from our cloud exit,” he said.  

Going back to on-premises solutions is certainly one answer to the problem of AWS bills. In fact, when we started designing Backblaze’s Personal Backup solution, we were faced with the same problem. Hosting data storage for our computer backup product on AWS was a non-starter—it was going to be too expensive, and our business wouldn’t be able to deliver a reasonable consumer price point and be solvent. So, we didn’t just invest in on-premises resources: We built our own Storage Pods, the first evolution of the Backblaze Storage Cloud. 

But, moving back to on-premises solutions isn’t the only answer—it’s just the only answer if it’s 2007 and your two options are AWS and on-premises solutions. The cloud environment as it exists today has better choices. We’ve now grown that collection of Storage Pods into the Backblaze B2 Storage Cloud, which delivers performant, interoperable storage at one-fifth the cost of AWS. And, we offer free egress to our content delivery network (CDN) and compute partners. Backblaze may provide an even more cost-effective solution for mid-sized SaaS startups looking to save on cloud costs while maintaining speed and performance.

As we transition to Cloud 3.0 in 2023 and beyond, companies are expected to undergo a shift, reevaluating their cloud spending to ensure long-term sustainability and directing saved funds into other critical areas of their businesses. The age of limited choices is over. The age of customizable cloud integration is here. 

So, shout out to David Heinemeier Hansson: We’d love to chat about your storage bills some time.

Want to Test It Yourself?

Take a proactive approach to cloud cost management: If you’ve got more than 50TB of data storage or want to check out our capacity-based pricing model, B2 Reserve, contact our Sales Team to test a PoC for free with Backblaze B2. And, for the streamlined, self–serve option, all you need is an email to get started today.

FAQs About Cloud Spend

If you’re thinking about moving to Backblaze B2 after taking AWS credits, but you’re not sure if it’s right for you, we’ve put together some frequently asked questions that folks have shared with us before their migrations:

My cloud credits are running out. What should I do?

Backblaze’s Universal Data Migration service can help you off-load some of your data to Backblaze B2 for free. Speak with a migration expert today.

AWS has all of the services I need, and Backblaze only offers storage. What about the other services I need?

Shifting away from AWS doesn’t mean ditching the workflows you have already set up. You can migrate some of your data storage while keeping some on AWS or continuing to use other AWS services. Moreover, AWS may be overkill for small to midsize SaaS businesses with limited resources.

How should I approach a migration?

Identify the specific services and functionalities that your applications and systems require, such as CDN for content delivery or compute resources for processing tasks. Check out our partner ecosystem to identify other independent cloud providers that offer the services you need at a lower cost than AWS.

What CDN partners does Backblaze have?

With ease of use, predictable pricing, and zero egress, our joint solutions are perfect for businesses looking to reduce their IT costs, improve their operational efficiency, and increase their competitive advantage in the market. Our CDN partners include Fastly, bunny.net, and Cloudflare. And, we extend free egress to joint customers.

What compute partners does Backblaze have?

Our compute partners include Vultr and Equinix Metal. You can connect Backblaze B2 Cloud Storage with Vultr’s global compute network to access, store, and scale application data on-demand, at a fraction of the cost of the hyperscalers.

The post The Free Credit Trap: Building SaaS Infrastructure for Long-Term Sustainability appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
https://www.backblaze.com/blog/the-free-credit-trap-building-saas-infrastructure-for-long-term-sustainability/feed/ 0
CDN Bandwidth Fees: What You Need to Know https://www.backblaze.com/blog/cdn-bandwidth-fees-what-you-need-to-know/ https://www.backblaze.com/blog/cdn-bandwidth-fees-what-you-need-to-know/#respond Thu, 16 Mar 2023 16:08:18 +0000 https://www.backblaze.com/blog/?p=108274 If you're delivering seamless, media-rich experiences, then you're likely using a CDN. Let's talk about some of the fees you may run into.

The post CDN Bandwidth Fees: What You Need to Know appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

]]>
A decorative image showing a cloud with three dollar signs and the word "Egress", three CDN nodes, and a series of 0s and 1s representing data.

You know that sinking feeling you get in your stomach when you receive a hefty bill you weren’t expecting? That is what some content delivery network (CDN) customers experience when they get slammed with bandwidth fees without warning. To avoid that sinking feeling, it’s important to understand how bandwidth fees work. It’s critical to know precisely what you are paying for and how you use the cloud service before you get hit with an eye-popping bill you can’t pay.

A content delivery network is an excellent way to speed up your website and improve performance and SEO, but not all vendors are created equal. Some charge more for data transfer than others. As the leading specialized cloud storage provider, we have developed partnerships with many top CDN providers, giving us the advantage of fully understanding how their services work and what they charge.

So, let’s talk about bandwidth fees and how they work to help you decide which CDN provider is right for you.

What Are CDN Bandwidth Fees?

Most CDN cloud services work like this: You can configure the CDN to pull data from one or more origins (such as a Backblaze B2 Cloud Storage Bucket) for free or for a flat fee, and then you’re charged fees for usage, namely when data is transferred in response to user requests. These are known as bandwidth, download, or data transfer fees. (We’ll use these terms somewhat interchangeably.) Typically, storage providers also charge egress fees when data is called up by a CDN.

The fees aren’t a problem in and of themselves, but if you don’t have a good understanding of them, successes you should be celebrating can be counterbalanced by overhead. For example, let’s say you’re a small game-sharing platform, and one of your games goes viral. Bandwidth and egress fees can add up quickly in a case like this. CDN providers charge in arrears, meaning they wait to see how much of the data was accessed each month, and then they apply their fees.

Thus, monitoring and managing data transfer fees can be incredibly challenging. Although some services offer a calculation tool, you could still receive a shock bill at the end of the month. It’s important to know exactly how these fees work so you can plan your workflows better and strategically position your content where it will be the most efficient.
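To make that concrete, here is a minimal sketch of how a month's charges accumulate. The per-GB rates and cache hit ratio below are hypothetical; real rates vary by provider, region, and tier.

```python
def estimate_monthly_bill(gb_delivered, cdn_rate_per_gb, egress_rate_per_gb, cache_hit_ratio):
    """Estimate a month's CDN bandwidth plus origin egress charges.

    Only cache misses (1 - hit ratio) travel from the origin store, so
    origin egress fees scale with the miss rate, while CDN bandwidth
    fees apply to everything delivered to end users.
    """
    cdn_fee = gb_delivered * cdn_rate_per_gb
    origin_egress_fee = gb_delivered * (1 - cache_hit_ratio) * egress_rate_per_gb
    return round(cdn_fee + origin_egress_fee, 2)

# A viral month: 50TB delivered at a hypothetical $0.01/GB CDN rate,
# $0.01/GB origin egress, and a 95% cache hit ratio.
print(estimate_monthly_bill(50_000, 0.01, 0.01, 0.95))  # → 525.0
```

Because billing is in arrears, running an estimate like this against your traffic projections before the month starts is the easiest way to avoid the shock bill.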

How Do CDN Bandwidth Fees Work?

Data transfer occurs when data leaves the network. An example might be when your application server serves an HTML page to the browser or your cloud object store serves an image, in each case via the CDN. Another example is when your data is moved to a different regional server within the CDN to be more efficiently accessed by users close to it.

A decorative photo of a sign that says "$5 fee per usage for non-members."

There are dozens of instances where your data may be accessed or moved, and every bit adds up. Typically, CDN vendors charge a fee per GB or TB up to a specific limit. Once you hit these thresholds, you may advance to another pricing tier. A busy month could cost you a mint, and traffic spikes for different reasons in different industries—like a Black Friday rush for an e-commerce site or around events like the Super Bowl for a sports betting site, for example.

To give you some perspective, Apple spent more than $50 million in data transfer fees in a single year, Netflix $15 million, and Adobe and Salesforce more than $7 million each, according to The Information. You can see how quickly things can add up and break the bank.

Price Comparison of Bandwidth Fees Across CDN Services

To get a better sense of how each CDN service charges for bandwidth, let’s explore the top providers and what they offer and charge.

As part of the Bandwidth Alliance, some of these vendors have agreed to discount customer data transfer fees when transferring one or both ways between member companies. What’s more, Backblaze offers customers free egress or discounts above and beyond the standard Bandwidth Alliance rates.

Note: Prices are as published by vendors as of 3/16/2023.

Fastly

Fastly offers edge caches to deliver content instantly around the globe. The company also offers SSL services for $20 per domain per month. They have various additional add-ons for things like web application firewalls (WAFs), managed rules, DDoS protection, and their Gold support.

Fastly bases its pricing structure on usage. They have three tiered plans:

  1. Essential: up to 3TB of global delivery per month.
  2. Professional: up to 10TB of global delivery per month.
  3. Enterprise: unlimited global delivery.

They bill customers a minimum of $50/month for bandwidth and request usage.

bunny.net

bunny.net labels itself as the world’s lightning-fast CDN service. They price their CDN services based on region. For North America and Europe, prices begin at $0.01/GB per month. For companies with more than 100TB per month, you must call for pricing. If you have high bandwidth needs, bunny.net offers fewer PoPs (Points of Presence) for $0.005/GB per month.

Cloudflare

Cloudflare offers a limited free plan for hobbyists and individuals. They also have tiered pricing plans for businesses called Pro, Business, and Enterprise. Instead of charging bandwidth fees, Cloudflare opts for the monthly subscription model, which includes everything.

The Pro plan costs $20/month (for 100MB of upload). The Business plan is $200/month (for 200MB of upload). You must call to get pricing for the enterprise plan (for 500MB of upload).

Cloudflare also offers dozens of add-ons for load balancing, smart routing, security, serverless functions, etc. Each one costs extra per month.

AWS Cloudfront

AWS Cloudfront is Amazon’s CDN and is tightly integrated with its AWS services. The company offers tiered pricing based on bandwidth usage. The specifics are as follows for North America:

  • $0.085/GB up to the first 10TB per month.
  • $0.080/GB for the next 40TB per month.
  • $0.060/GB for the next 100TB per month.
  • $0.040/GB for the next 350TB per month.
  • $0.030/GB for the next 524TB per month.

Their pricing extends up to 5PB per month, and there are different pricing breakdowns for different regions.

Amazon offers special discounts for high-data users and those customers who use AWS as their application storage container. You can also purchase add-on products that work with the CDN for media streaming and security.
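The graduated tier math above can be sketched in a few lines. The tier sizes and rates mirror the North America list; 1TB is approximated as 1,000GB for readability, and actual AWS metering and rounding differ, so treat this as an estimate only.

```python
# AWS CloudFront-style graduated tiers for North America, as listed above.
# Each tuple: (tier size in GB, price per GB).
TIERS = [
    (10_000, 0.085),   # first 10TB
    (40_000, 0.080),   # next 40TB
    (100_000, 0.060),  # next 100TB
    (350_000, 0.040),  # next 350TB
    (524_000, 0.030),  # next 524TB
]

def tiered_cost(gb):
    """Walk down the graduated tiers, charging each slice at its own rate."""
    total, remaining = 0.0, gb
    for size, rate in TIERS:
        slice_gb = min(remaining, size)
        total += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return round(total, 2)

# 50TB/month: 10TB at $0.085 + 40TB at $0.080 = $850 + $3,200 = $4,050
print(tiered_cost(50_000))  # → 4050.0
```

Note that each marginal tier is cheaper, so your average per-GB cost falls as volume grows, but the absolute bill still climbs quickly.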

A decorative image showing a portion of the earth viewed from space with lights clustered around city centers.
Sure it’s pretty. Until you know all those lights represent possible fees.

Google Cloud CDN

Google Cloud CDN offers fast and reliable content delivery services. However, Google charges bandwidth, cache egress fees, and for cache misses. Their pricing structure is as follows:

  • Cache Egress: $0.02–$0.20 per GB.
  • Cache Fill: $0.01–$0.04 per GB.
  • Cache Lookup Requests: $0.0075 per 10,000 requests.

Cache egress fees are priced per region; in the U.S., they start at $0.08/GB for the first 10TB, drop to $0.055/GB from 10–150TB, and beyond 500TB you have to call for pricing. Cache fill starts at $0.01 per GB.
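Because there are three separate meters, estimating a Google Cloud CDN bill means summing them. Here is a sketch with illustrative rates picked from the low end of the ranges above; your actual region and tier will differ.

```python
def gcloud_cdn_estimate(egress_gb, fill_gb, lookups,
                        egress_rate=0.08, fill_rate=0.01, lookup_rate=0.0075):
    """Sum the three meters: cache egress, cache fill, and lookup requests.

    Illustrative rates: $0.08/GB egress (U.S., first 10TB),
    $0.01/GB cache fill, $0.0075 per 10,000 lookup requests.
    """
    return round(egress_gb * egress_rate
                 + fill_gb * fill_rate
                 + (lookups / 10_000) * lookup_rate, 2)

# 5TB served, 250GB of cache fill, 20 million lookup requests:
print(gcloud_cdn_estimate(5_000, 250, 20_000_000))  # → 417.5
```

In practice the egress line dominates for most workloads; fill and lookup charges only become noticeable at very high request volumes or poor hit rates.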

Microsoft Azure

The Azure content delivery network is Microsoft’s offering that promises speed, reliability, and a high level of security.

Azure offers a limited free account for individuals to play around with. For business customers, they offer the following price structure:

Depending on the zone, the price will vary for data transfer. For Zone One, which includes North America, Europe, Middle East, and Africa, pricing is as follows:

  • First 10TB: $0.158/GB per month.
  • Next 40TB: $0.14/GB per month.
  • Next 100TB: $0.121/GB per month.
  • Next 350TB: $0.102/GB per month.
  • Next 500TB: $0.093/GB per month.
  • Next 4,000TB: $0.084/GB per month.

Azure charges $0.60 per 1,000,000,000 requests per month and $1 per rule per month. You can also purchase WAF services and other products for an additional monthly fee.

How to Save on Bandwidth Fees

A CDN can significantly enhance the performance of your website or web application and is well worth the investment. However, finding ways to save is helpful. Many of the CDN providers listed above are members of the Bandwidth Alliance and have agreed to offer discounted rates for bandwidth and egress fees. Another way to save money each month is to find affordable origin storage that works seamlessly with your chosen CDN provider. Here at Backblaze, we think the world needs lower egress fees, and we offer free egress between Backblaze B2 and many CDN partners like Fastly, bunny.net, and Cloudflare.

Backblaze Joins the CDN Alliance
https://www.backblaze.com/blog/backblaze-joins-the-cdn-alliance/
Mon, 06 Mar 2023 17:28:11 +0000
Backblaze joins the CDN Alliance, a community of industry leaders focused on ensuring that the content delivery network (CDN) industry is evolving in a way that best serves businesses distributing content of every type around the world.

The post Backblaze Joins the CDN Alliance appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A decorative image that features the Backblaze logo and the CDN Alliance logo.

As the leading specialized storage cloud platform, Backblaze is a big proponent of the open, collaborative nature of independent cloud service providers. From our participation in the Bandwidth Alliance to our large ecosystem of partners, we’re focused on what we call “Phase Three” of the cloud. What’s happening in Phase Three? The age of walled gardens, hidden fees, and out of control egress fees driven by the hyperscalers is in the past. Today’s specialized cloud solutions are oriented toward what’s best for users—an open, multi-cloud internet.

Which is why I’m particularly happy to announce today that we’ve joined the CDN Alliance, a nonprofit organization and community of industry leaders focused on ensuring that the content delivery network (CDN) industry is evolving in a way that best serves businesses distributing content of every type around the world—from streaming media to stock image resources to e-commerce and more.

The majority of the content we consume today on our many devices and platforms is being delivered through a CDN. Being part of the CDN Alliance allows Backblaze to collaborate and drive innovation with our peers to ensure that everyone’s content experience only gets better.

Through participation in and sponsorships of joint events, panels, and gatherings, we look forward to working with the CDN Alliance on the key challenges facing the industry, including availability, scalability, reliability, privacy, security, sustainability, interoperability, education, certification, regulations, and numerous others. Check out the CDN Alliance and its CDN Community for more info.

For more resources on CDN integrations with Backblaze B2 Cloud Storage you can read more about our top partners here.

AWS CloudFront vs. bunny.net: How Do the CDNs Compare?
https://www.backblaze.com/blog/aws-cloudfront-vs-bunny-net-how-do-the-cdns-compare/
Thu, 23 Feb 2023 17:27:22 +0000
Next in our series comparing content delivery network (CDN) providers: AWS Cloudfront and bunny.net.

The post AWS CloudFront vs. bunny.net: How Do the CDNs Compare? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

CDN Comparison: Bunny.net vs. Cloudfront

Remember the story about the hare and the tortoise? Well, this is not that story, but we are comparing bunny.net with another global content delivery network (CDN) provider, AWS CloudFront, to see how the two stack up. When you think of rabbits, you automatically think of speed, but a CDN is not just about speed; sometimes, other factors “win the race.”

As a leading specialized cloud storage provider, we provide application storage that folks use with many of the top CDNs. Working with these vendors allows us deep insight into the features of each platform so we can share the information with you. Read on to get our take on these two leading CDNs.

Editor’s Note

We give more ink to bunny.net than AWS CloudFront in this comparison because we’re in favor of supporting independent cloud providers that challenge the hyperscalers. So, full transparency: yes, we partner with bunny.net, but no, this post is not paid or sponsored in any way. That being said, there are use cases where AWS CloudFront is the better choice. Do you have a preference? Let us know in the comments.

What Is a CDN?

A CDN is a network of servers dispersed around the globe that host content closer to end users to speed up website performance. Let’s say you keep your website content on a server in New York City. If you use a CDN, when a user in Las Vegas calls up your website, the request can pull your content from a server in, say, Phoenix instead of going all the way to New York. This is known as caching. A CDN’s job is to reduce latency and improve the responsiveness of online content.
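The round trip just described can be sketched as a toy model: route each request to the nearest edge server, serve from its cache on a hit, and fetch from (and populate) the origin on a miss. All locations, paths, and data here are made up for illustration.

```python
# Toy CDN: an origin store plus two edge servers with their own caches.
ORIGIN = {"/logo.png": b"<image bytes>"}
POPS = {
    "phoenix": {"location": (33.4, -112.1), "cache": {}},
    "new_york": {"location": (40.7, -74.0), "cache": {}},
}

def nearest_pop(user_location):
    """Pick the edge server closest to the user (squared-distance heuristic)."""
    def dist(name):
        lat, lon = POPS[name]["location"]
        return (lat - user_location[0]) ** 2 + (lon - user_location[1]) ** 2
    return min(POPS, key=dist)

def serve(user_location, path):
    pop = POPS[nearest_pop(user_location)]
    if path in pop["cache"]:
        return pop["cache"][path], "HIT"
    body = ORIGIN[path]        # cache miss: fetch from the origin...
    pop["cache"][path] = body  # ...and keep a copy at the edge
    return body, "MISS"

las_vegas = (36.2, -115.1)
print(serve(las_vegas, "/logo.png")[1])  # first request is a MISS
print(serve(las_vegas, "/logo.png")[1])  # repeat request is a HIT
```

The Las Vegas user lands on the Phoenix node rather than New York, which is exactly the latency win a CDN buys you: after the first miss, every nearby user is served from the edge copy.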

Scale Media Delivery Workflows With bunny.net + Backblaze B2 

In this webinar, Pat Patterson demonstrates how to efficiently scale your content delivery workflows, from content ingestion, transcoding, and storage to last-mile acceleration via the bunny.net CDN. Pat demonstrates how to build a video hosting platform called “Cat Tube” and shows how to upload a video and play it using HTML5 video element with controls. Watch below and download the demo code to try it yourself.

CDN Use Cases

Before we compare these two CDNs, it’s important to understand how they might fit into your overall tech stack. Some common use cases for a CDN include:

  • Website Reliability: If your website server goes down and you have a CDN in place, the CDN can continue to serve up static content to your customers. Not only can a CDN speed up your website performance tremendously, but it can also keep your online presence up and running, keeping your customers happy.
  • App Optimization: Internet apps use a lot of dynamic content. A CDN can optimize that content and keep your apps running smoothly without any glitches, regardless of where in the world your users access them.
  • Streaming Video and Media: Streaming media is essential to keep customers engaged these days. Companies that offer high-resolution video services need to know that their customers won’t be bothered by buffering or slow speeds. A CDN can quickly solve this problem by hosting 8K videos and delivering reliable streams across the globe.
  • Scalability: Various times of the year are busier than others—think Black Friday. If you want the ultimate scalability, a CDN can help buffer the traffic coming into your website and ease the burden on the origin server.
  • Gaming: Video game fans know nothing is worse than having your favorite online duel lock up during gameplay. Video game providers use CDNs to host high-resolution content, so all their games run flawlessly to keep players engaged. They also use CDN platforms to roll out new updates and security patches without any limits.
  • Images/E-Commerce: Online retailers typically host thousands of images for their products so you can see every color, angle, and option available. A CDN is an excellent way to instantly deliver crystal clear, high-quality images without any speed issues or quality degradation.
  • Improved Security: CDN services often come with beefed-up security protocols, including distributed denial-of-service (DDoS) prevention across the platform and detection of suspicious behavior on the network.

Speed Tests: How Fast Can You Go?

Speed tests are a valuable tool that businesses can use to gauge site performance, page load times, and customer experience. You can use dozens of free online speed tests to evaluate time to first byte (TTFB) and the number of requests (how many separate resources the browser must fetch to render the page). Some speed tests show other more advanced metrics.

A CDN is one aspect that can affect speed and performance, but there are other factors at play as well. A speed test can help you identify bottlenecks and other issues.
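If you want a quick, scriptable TTFB check alongside the hosted tools, you can measure it yourself with nothing but the standard library. The snippet below spins up a deliberately slow local test server so the numbers are reproducible; the same function can be pointed at any real host and port. Treat any single measurement as a sketch, not a benchmark, since network numbers vary run to run.

```python
import threading
import time
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

def time_to_first_byte(host, port, path="/"):
    """Return (status, seconds elapsed until the first response byte arrives)."""
    conn = HTTPConnection(host, port, timeout=10)
    start = time.perf_counter()
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read(1)  # the first byte of the body arrives here
    ttfb = time.perf_counter() - start
    conn.close()
    return resp.status, ttfb

# Stand-in origin that sleeps 50ms before answering, mimicking server think time.
class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.05)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

status, ttfb = time_to_first_byte("127.0.0.1", server.server_address[1])
print(f"HTTP {status}, TTFB {ttfb * 1000:.0f}ms")  # at least ~50ms here
server.shutdown()
```

Running this against your site with and without the CDN in front of it is a simple way to see how much of your latency is origin think time versus distance.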

Some of the most popular tools are:

Comparing bunny.net vs. AWS CloudFront

Although bunny.net and AWS CloudFront provide CDN services, their features and technology work differently. You will want all of the details when deciding which CDN is right for your application.

bunny.net is a powerfully simple CDN that delivers content at lightning speeds across the globe. The service is scalable, affordable, and secure. They offer edge storage, optimization services, and DNS resources for small to large companies.

AWS CloudFront is a global CDN designed to work primarily with other AWS services. The service offers robust cloud-based resources for enterprise businesses.

Let’s compare all the features to get a good sense of how each CDN option stacks up. To best understand how the two CDNs compare, we’ll look at different aspects of each one so you can decide which option works best for you, including:

  • Network
  • Cache
  • Compression
  • DDoS Protection
  • Integrations
  • TLS Protocols
  • CORS Support
  • Signed Exchange Support
  • Pricing

Network

Distribution points are the number of servers within a CDN network. These points are distributed throughout the globe to reach users anywhere. When users request content through a website or app, the CDN connects them to the closest distribution point server to deliver the video, image, script, etc., as quickly as possible.

bunny.net

Bunny CDN has 114 global distribution points (also called points of presence or PoPs) in 113 cities and 77 countries. For high-bandwidth users, they also offer a separate, cost-optimized network of 10 PoPs. They don’t charge any request fees and offer multiple payment options.

AWS CloudFront

Currently, AWS CloudFront advertises that they have roughly 450 distribution points in 90 cities in 48 countries.

Our Take

While AWS CloudFront has many points in some major cities, bunny.net has a wider global distribution—AWS CloudFront covers 90 cities, and bunny.net covers 114. And Bunny CDN ranks first on CDNPerf, a third-party CDN performance analytics and comparison tool.

Cache

Caching files allows a CDN to serve up copies of your digital content from distribution points closer to end users, thus improving performance and reliability.

bunny.net

With their Origin Shield feature, when CDN nodes have a cache miss (meaning the content an end user wants isn’t at the node closest to them), the network directs the request to another node instead of the origin. They offer Perma-Cache, which lets you permanently store your files at the edge for a 100% cache hit rate. They also recently introduced request coalescing, where requests by different users for the same file are combined into one request. Request coalescing works well for streaming content or large objects.

AWS CloudFront

AWS CloudFront uses caching to reduce the load of requests to your origin store. When a user visits your website, AWS CloudFront directs them to the closest edge cache so they can view content without any wait. You can configure AWS CloudFront’s cache settings using the backend interface.

Our Take

Caching is one of bunny.net’s strongest points of differentiation, primarily around static content. They also offer dynamic caching with one-click configuration by query string, cookie, and state cache as well as cache chunking for video delivery. With their Perma-Cache and request coalescing, their capabilities for dynamic caching are improving.
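Request coalescing is easy to picture in code. This is an illustrative sketch, not bunny.net's actual implementation: ten concurrent requests for the same uncached file trigger exactly one origin fetch, with the other nine waiting on the first requester's result.

```python
import threading
import time

origin_fetches = 0
cache = {}
inflight = {}  # path -> Event that is set when the fetch completes
lock = threading.Lock()

def fetch_from_origin(path):
    global origin_fetches
    origin_fetches += 1
    time.sleep(0.05)  # simulate a slow origin round trip
    return f"<contents of {path}>"

def get(path):
    leader = False
    with lock:
        if path in cache:
            return cache[path]
        event = inflight.get(path)
        if event is None:           # first requester becomes the leader...
            event = threading.Event()
            inflight[path] = event
            leader = True
    if leader:
        body = fetch_from_origin(path)
        with lock:
            cache[path] = body
            del inflight[path]
        event.set()
    else:
        event.wait()                # ...everyone else waits for its result
    return cache[path]

threads = [threading.Thread(target=get, args=("/video.mp4",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(origin_fetches)  # → 1: one origin fetch served all ten requests
```

This is why coalescing matters most for streaming and large objects: when a popular video goes live, thousands of viewers hit the same uncached chunks at the same moment, and without coalescing each of them becomes a separate origin fetch and a separate egress charge.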

Compression

Compressing files makes them smaller, which saves space and makes them load faster. Many CDNs allow compression to maximize your server space and decrease page load times. The two services are on par with each other when it comes to compression.

bunny.net

The Bunny CDN system automatically optimizes/compresses images and minifies CSS and JavaScript files to improve performance. Images are compressed by roughly 80%, improving load times by up to 90%. bunny.net supports both .gzip and .br (Brotli) compression formats. The bunny.net optimizer can compress images and optimize files on the fly.

AWS CloudFront

AWS CloudFront allows you to compress certain file types automatically and use them as compressed objects. The service supports both .gzip and .br compression formats.
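In miniature, the negotiation both services perform looks like this: check the browser's Accept-Encoding header and compress the body only when the client can decode it. Brotli (the .br format) requires a third-party package in Python, so this sketch sticks to stdlib gzip.

```python
import gzip

def compress_response(body: bytes, accept_encoding: str):
    """Return (possibly compressed body, Content-Encoding value)."""
    if "gzip" in accept_encoding.lower():
        return gzip.compress(body), "gzip"
    return body, "identity"  # client can't decode gzip: send it as-is

css = b"body { margin: 0; }\n" * 500  # repetitive text compresses very well
compressed, encoding = compress_response(css, "gzip, deflate, br")
print(encoding, f"{len(compressed) / len(css):.0%} of original size")
```

The size win is largest for text assets like CSS, JavaScript, and HTML; images and video are usually already compressed, which is why the CDNs treat them with separate image-optimization pipelines instead.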

DDoS Protection

Distributed denial of service (DDoS) attacks can overwhelm a website or app with too much traffic causing it to crash and interrupting actual website traffic. CDNs can help prevent DDoS attacks.

bunny.net

bunny.net stops DDoS attacks via a layered DDoS protection system that stops both network and HTTP layer attacks. Additionally, a number of checks and balances—like download speed limits, connection counts for IP addresses, burst requests, and geoblocking—can be configured. You can hide IP addresses and use edge rules to block requests.
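The per-IP connection and burst limits mentioned above boil down to rate limiting. Here is a toy fixed-window limiter with made-up thresholds; a timestamp is passed explicitly so the example is deterministic.

```python
import time
from collections import defaultdict

LIMIT_PER_WINDOW = 5   # max requests per IP per window (illustrative)
WINDOW_SECONDS = 1.0

windows = defaultdict(lambda: [0.0, 0])  # ip -> [window_start, request_count]

def allow(ip, now=None):
    """Admit the request unless this IP has exhausted its window budget."""
    now = time.monotonic() if now is None else now
    start, count = windows[ip]
    if now - start >= WINDOW_SECONDS:  # window expired: start a fresh one
        windows[ip] = [now, 1]
        return True
    if count < LIMIT_PER_WINDOW:
        windows[ip][1] = count + 1
        return True
    return False  # over budget: reject (or challenge) this request

results = [allow("203.0.113.9", now=100.0) for _ in range(7)]
print(results)  # first five allowed, then rejected
```

Real DDoS protection layers several such checks (per-IP, per-ASN, geographic, and behavioral), but the core idea is the same: cheap bookkeeping at the edge so attack traffic never reaches the origin.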

AWS CloudFront

AWS CloudFront uses security technology called AWS Shield designed to prevent DDoS and other types of attacks.

Our Take

As an independent, specialized CDN service, bunny.net has put most of their focus on being a standout when it comes to core CDN tasks like caching static content. That’s not to say that their security services are lacking, but just that their security capabilities are sufficient to meet most users’ needs. AWS Shield is a specialized DDoS protection software, so it is more robust. However, that robustness comes at an added cost.

Integrations

Integrations allow you to customize a product or service using add-ons or APIs to extend the original functionality. One popular tool we’ll highlight here is Terraform, a tool that allows you to provision infrastructure as code (IaC).

Terraform

HashiCorp’s Terraform is a third-party program that allows you to manage your CDN, store source code in repositories like GitHub, track each version, and even roll back to an older version if needed. You can use Terraform to configure Bunny CDN pull zones only. You can use Terraform with AWS CloudFront by editing configuration files and installing Terraform on your local machine.

TLS Protocols

Transport Layer Security (TLS), the successor to secure sockets layer (SSL), is an encryption protocol used to protect website data. Whenever you see the lock icon in your internet browser, you are using a website protected by TLS (HTTPS). Both services conform adequately to TLS standards.

bunny.net offers customers free TLS with its CDN service. They make setting it up a breeze (two clicks) in the backend of your account. You also have the option of installing your own SSL. They provide helpful step-by-step instructions on how to install it.

Because AWS CloudFront assigns a unique URL for your CDN content, you can use the default TLS certificate installed on the server or your own TLS. If you use your own, you should consult the explicit instructions for key length and install it correctly. You also have the option of using an Amazon TLS certificate.

CORS Support

Cross-origin resource sharing (CORS) is a mechanism that allows your internet browser to deliver content from different sources seamlessly on a single webpage or app. Default browser security settings normally block certain content if it comes from a different origin. CORS is a security exception that allows you to host various types of content on other servers and deliver them to your users without any errors.

bunny.net and AWS CloudFront both offer customers CORS support through configurable CORS headers. Using CORS, you can host images, scripts, style sheets, and other content in different locations without any issues.
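The mechanics are just HTTP headers. Here is a sketch of the server side with a made-up allow list: echo the request's Origin back in Access-Control-Allow-Origin when it is permitted, and send nothing otherwise so the browser blocks the cross-origin read.

```python
# Hypothetical allow list: the origins permitted to read our assets.
ALLOWED_ORIGINS = {"https://www.example.com", "https://app.example.com"}

def cors_headers(request_origin):
    """Build the CORS response headers for a given Origin request header."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Vary": "Origin",  # caches must key on Origin, or one origin's headers leak to another
        }
    return {}  # no CORS headers: the browser refuses the cross-origin read

print(cors_headers("https://www.example.com"))
print(cors_headers("https://evil.example.net"))  # → {} (blocked by the browser)
```

The Vary: Origin header matters especially behind a CDN: without it, an edge cache could serve a response carrying one customer's Access-Control-Allow-Origin value to requests from a different origin.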

Signed Exchange Support

Signed exchange (SXG) is a service that allows search engines to find and serve cached pages to users in place of the original content. SXG speeds up performance and improves SEO in the process. The service uses cryptography to authenticate the origin of digital assets.

Both bunny.net and AWS CloudFront support SXG. bunny.net supports signed exchange through its token authentication system. The service allows you to enable, configure, and generate tokens and assign them an expiration date to stop working when you want.

AWS CloudFront supports SXG through its security settings. When configuring your settings, you can choose which cipher to use to verify the origin of the content.

Pricing

bunny.net

Bunny CDN offers simple, affordable, region-based pricing starting at $0.01/GB in the U.S. For high-bandwidth projects, their volume pricing starts at $0.005/GB for the first 500TB.

AWS CloudFront

AWS CloudFront offers a free plan, including 1TB of data transfer out, 10,000,000 HTTP or HTTPS requests, and 2,000,000 function invocations each month.

AWS CloudFront’s paid service is tiered based on bandwidth usage. Its pricing starts at $0.085 per GB up to 10TB in North America. All told, there are seven pricing tiers from 10TB to >5PB. If you stay within the AWS ecosystem, data transfer is free from Amazon S3, their object storage service; however, you’ll be charged to transfer data outside of AWS. Each tier is priced by location/country.

Our Take

bunny.net is probably one of the most cost-effective CDNs on the market. For example, their traffic pricing for 5TB in Europe or North America is $50 compared to $425 with CloudFront. There are no request fees; you only pay for the bandwidth you actually use. All of their features are included without extra charges. And finally, egress is free between bunny.net and Backblaze B2, if you choose to pair the two services.

Our Final Take

bunny.net’s key advantages are its simplicity, pricing, and customer support. Many of the above features are configured in one-click, giving you advanced capabilities without the headache of trying to figure out complicated provisioning. Their pricing is straightforward and affordable. And, not for nothing, they also offer one-to-one, round-the-clock customer support. If it’s important to you to be able to speak with an expert when you need to, bunny.net is the better choice.

AWS CloudFront offers more robust features, like advanced security services, but those services come with a price tag and you’re on your own when it comes to setting them up properly. AWS also prefers customers to stay within the AWS ecosystem, so using any third-party services outside of AWS can be costly.

If you’re looking for an agnostic, specialized, affordable CDN, bunny.net would be a great fit. If you need more advanced features and have the time, know-how, and money to make them work for you, AWS CloudFront offers those.

CDNs and Cloud Storage

A CDN can boost the speed of your website pages and apps. However, you still need reliable, affordable application storage for the cache to pull from. Pairing robust application storage with a speedy CDN is the perfect solution for improved performance, security, and scalability.

Fastly vs. AWS CloudFront: How Do the CDNs Stack Up?
https://www.backblaze.com/blog/fastly-vs-aws-cloudfront-how-do-the-cdns-stack-up/
Thu, 16 Feb 2023 17:50:09 +0000
Read the first in our series comparing content delivery network (CDN) providers. Today, we compare AWS Cloudfront and Fastly.

The post Fastly vs. AWS CloudFront: How Do the CDNs Stack Up? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

CDN Comparison: AWS CloudFront vs. Fastly

As a leading specialized cloud platform for application storage, we work with a variety of content delivery network (CDN) providers. From this perch, we get to see the specifics on how each operates. Today, we’re sharing those learnings with you by comparing Fastly and AWS CloudFront to help you understand your options when it comes to choosing a CDN.

Editor’s Note

We give more ink to Fastly than AWS CloudFront in this comparison because we’re in favor of supporting independent cloud providers that challenge the hyperscalers. So, full transparency: yes, we partner with Fastly, but no, this post is not paid or sponsored in any way. That being said, there are use cases where AWS CloudFront is the better choice. Do you have a preference? Let us know in the comments.

What Is a CDN?

If you run a website or a digital app, you need to ensure that you are delivering your content to your audience as quickly and efficiently as possible to beat out the competition. One way to do this is by using a CDN. A CDN caches all your digital assets like videos, images, scripts, style sheets, apps, etc. Then, whenever a user accesses your content, the CDN connects them with the closest server so that your items load quickly and without any issues. Many CDNs have servers around the globe to offer low-latency data access and drastically improve the responsiveness of your app through caching.

Before you choose a CDN, you need to consider your options. There are dozens of CDNs to choose from, and they all have benefits and drawbacks. Let’s compare Fastly with AWS CloudFront to see which works best for you.

CDN Use Cases

Before we compare these two CDNs, it’s important to understand how they might fit into your overall tech stack. Here are some everyday use cases for a CDN:

  • Websites: If you have a video- or image-heavy website, you will want to use a CDN to deliver all your content without any delays for your visitors.
  • Web Applications: A CDN can help optimize your dynamic content and allow your web apps to run flawlessly, regardless of where your users access them.
  • Streaming Video: Customers expect more from companies these days and will not put up with buffering or intermittent video streaming issues. If you host a video streaming service like Hulu, Netflix, Kanopy, or Amazon, a CDN can solve these problems. You can host high-resolution (8K) video on your CDN and then stream it to your users, offering them a smooth, gapless streaming experience.
  • Gaming: If you are a “Call of Duty” or “Halo” fan, you know that most video games use high-resolution images and video to provide the most immersive gaming experience possible. Video game providers use CDNs to ensure responsive gameplay without any blips. You can also use a CDN to streamline rolling out critical patches or updates to all your customers without any limits.
  • E-Commerce Applications: Online retailers typically use dozens of images to showcase their products. If you want to use high-quality images, your website could suffer slow page loads unless you use a CDN to deliver all your photos instantly without any wait.

Need for Speed (Test)

Website developers and owners use speed tests to gauge page load speeds and other aspects affecting the user experience. A CDN is one way to improve your website metrics. You can use various online speed tests that show details like load time, time to first byte (TTFB), and the number of requests (how many separate resources the browser must fetch to render the page).

A CDN can help improve performance quite a bit, but speed tests are dependent on many factors outside of a CDN. To find out exactly how well your site performs, there are dozens of reputable speed test tools online that you can use to evaluate your site, and then you can make improvements from there. Some of the most popular tools are:

  • Google PageSpeed Insights
  • GTmetrix
  • WebPageTest
  • Pingdom Website Speed Test

Comparing Fastly vs. AWS CloudFront

Fastly, founded in 2011, has rapidly grown to be a competitive global edge cloud platform and CDN offering international customers a wide variety of products and services. The company’s flagship product is its CDN, which offers nearly instant content delivery for companies like The New York Times, Reddit, and Pinterest.

AWS CloudFront is Amazon Web Service’s (AWS) CDN offering. It’s tightly integrated with other AWS products.

To best understand how the two CDNs compare, we’ll look at different aspects of each one so you can decide which option works best for you, including:

  • Network
  • Caching
  • DDoS Protection
  • Log streaming
  • Integrations
  • TLS Protocols
  • Pricing

Network

CDN networks are made up of distribution points: servers, known as points of presence (PoPs), that allow a CDN to deliver content quickly to users anywhere.

Fastly

Fastly’s network is built fundamentally differently from a legacy CDN’s. Rather than a wide-ranging network populated with many points of presence (PoPs), Fastly built its network around fewer, more powerful, strategically placed PoPs. Fastly promises 233Tbps of connected global capacity across its system of PoPs (as of 9/30/2022).

AWS CloudFront

AWS CloudFront doesn’t share specific capacity figures in terms of terabits per second (Tbps). They keep that claim somewhat vague, advertising “hundreds of terabits of deployed capacity.” But they do advertise that they have roughly 450 distribution points in 90 cities in 48 countries.

Our Take

At first glance, it might seem like more PoPs means a faster, more robust network. Fastly uses a helpful metaphor to explain why that’s not true. They compare legacy PoPs to convenience stores—they’re everywhere, but they’re small, meaning the content your users request may not be there when they need it. Fastly’s PoPs are more like supermarkets—you have a better chance of getting everything you need (your cached content) in one place. Reaching one of Fastly’s PoPs now takes only a few milliseconds (network conditions have improved since legacy providers like AWS CloudFront built out their networks), and the content you need is far more likely to already be housed in that PoP, rather than needing to be fetched from origin storage.

Caching

Caching reduces the number of direct requests to your origin server. A CDN acts as a middleman responding to requests for content on your behalf and directing users to edge caches nearest to the user. When a user calls up your website, the CDN serves up a cached version located on the server closest to them. This feature drastically improves the speed and performance of your website.
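The mechanics above can be sketched in a few lines of Python. This is a simplified illustration of TTL-based edge caching, not how any particular CDN is implemented; the `fetch_from_origin` callable is a stand-in for a real request back to your origin server.

```python
import time

class EdgeCache:
    """Minimal illustration of a CDN edge cache with TTL-based expiry."""

    def __init__(self, ttl_seconds, fetch_from_origin):
        self.ttl = ttl_seconds
        self.fetch_from_origin = fetch_from_origin  # stand-in for an origin request
        self.store = {}   # path -> (content, time it was cached)
        self.hits = 0
        self.misses = 0

    def get(self, path, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(path)
        if entry is not None and now - entry[1] < self.ttl:
            self.hits += 1    # cache hit: serve from the edge, origin untouched
            return entry[0]
        self.misses += 1      # miss or expired TTL: go back to origin and re-cache
        content = self.fetch_from_origin(path)
        self.store[path] = (content, now)
        return content

    def hit_ratio(self):
        """Fraction of requests served from cache (the 'cache hit ratio')."""
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# Usage: the first request misses; repeats within the TTL hit the cache.
cache = EdgeCache(ttl_seconds=60, fetch_from_origin=lambda p: f"<html>{p}</html>")
cache.get("/index.html", now=0)    # miss -> origin
cache.get("/index.html", now=30)   # hit, still within the 60s TTL
cache.get("/index.html", now=90)   # TTL expired -> origin again
print(cache.hit_ratio())
```

The higher the hit ratio, the fewer requests ever reach your origin, which is exactly the metric Fastly publishes on its website.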

Fastly

Fastly’s caching is governed by Time to Live (TTL): the maximum time Fastly will use cached content to answer requests before going back to your origin server. Through Fastly’s API, you can purge objects, set up conditional caching, and assign different TTLs to different cached content.

Fastly shows its average cache hit ratio live on its website, which is over 91% at the time of publication. This is the ratio of how many content requests the CDN is able to fill from the cache versus the total number of requests.

Fastly also allows you to automatically compress some file types with gzip before caching them. You can modify these settings from inside Fastly’s web interface. Brotli compression is also supported, and became generally available on February 7, 2023.
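To see why CDNs compress text assets before caching them, here is a quick illustration using only Python’s standard library. The repetitive HTML payload is hypothetical; real savings depend on the content, but markup-heavy text typically compresses dramatically.

```python
import gzip

# A repetitive HTML-like payload compresses well, which is why CDNs
# gzip (or Brotli-compress) text assets before caching and delivering them.
html = ("<div class='product'><img src='photo.jpg'/></div>" * 200).encode("utf-8")
compressed = gzip.compress(html)

print(len(html), "bytes raw")
print(len(compressed), "bytes gzipped")
assert len(compressed) < len(html) // 10  # order-of-magnitude saving here
```

Smaller payloads mean fewer bytes on the wire between the edge PoP and the browser, which translates directly into faster page loads.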

AWS CloudFront

AWS CloudFront routes requests for your content to servers holding a cached version, lessening the burden on your origin server. When users visit your site, the CDN directs them to the closest edge cache for fast page loads. You can change your cache settings in AWS CloudFront’s backend. AWS CloudFront supports compressed files and allows you to store and access gzip and Brotli compressed objects.

Our Take

Fastly does not charge a fee no matter how many times content is purged from the cache, while AWS CloudFront does. And, Fastly can invalidate content in 150 milliseconds, while AWS CloudFront can be 60–120 times slower. Both of these aspects make Fastly better with dynamic content that changes quickly for customers, such as news outlets, social media sites, and e-commerce sites.

DDoS Protection

Distributed denial of service (DDoS) attacks are a serious concern for website and web app owners. A typical attack can interrupt website traffic or crash it completely, making it impossible for your customers to reach you.

Fastly

Fastly relies on its 233Tbps+ (as of 9/30/2022) of globally-distributed network capacity to absorb any DDoS attacks, so they don’t affect customers’ origin content. They also use sophisticated filtering technology to remove malicious requests at the edge before they get close to your origin.

AWS CloudFront

AWS CloudFront is backed by comprehensive security technology designed to prevent DDoS and other types of attacks. Amazon calls its DDoS protection service AWS Shield.

Our Take

Fastly’s next gen web application firewall (WAF) filters traffic accurately out of the box. More than 90% of Fastly’s customers use the WAF in full blocking mode, whereas across the industry only 57% of customers run their WAF in full blocking mode. In other words, the Fastly WAF works as it should out of the box; other WAFs require more fine-tuning and advanced rule setting to be as effective. Fastly’s WAF can also be deployed anywhere—at the edge, on-premises, or both—whereas AWS WAF deployments are hosted in the AWS cloud.

Log Streaming

Log streaming enables you to collect logs from your CDN and forward them to specific destinations. Logs keep customers up to date on what’s happening within the CDN, including detecting security anomalies.

Fastly

Fastly allows for near real-time visibility into delivery performance with real-time logs. Logs can be sent to 29 endpoints, including popular third-party services like Datadog, Sumo Logic, Splunk, and others where they can be monitored.

AWS CloudFront

AWS CloudFront real-time logs are integrated with Amazon Kinesis Data Streams to enable delivery using Amazon Kinesis Data Firehose. Kinesis Data Firehose can then deliver logs to Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, as well as service providers like Datadog, New Relic, and Splunk. AWS charges for real-time logs in addition to charging for Kinesis Data Streams.

Our Take

More visibility into your data is always better, and Fastly’s free real-time log streaming is the clear winner here with more choice of endpoints, allowing customers to use the specialized third-party services they prefer. AWS encourages staying within the AWS ecosystem and penalizes customers for not using AWS services, namely their S3 object storage.

Integrations

Integrations allow you to extend a product or service’s functionality through add-ons. With your CDN, you might want to enhance it with a different interface or add on new features the original doesn’t include. One popular tool we’ll highlight here is Terraform, a tool that allows you to provision infrastructure as code (IaC).

Terraform

Both Fastly and AWS CloudFront support Terraform. Fastly has detailed instructions on its website about how to set this up and configure it to work seamlessly with the service.

Amazon’s AWS CloudFront allows you to integrate with Terraform by installing the program on your local machine and configuring it within AWS CloudFront’s configuration files.
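As a rough sketch of what infrastructure as code looks like in practice, here is a minimal Fastly service definition in Terraform. The resource and attribute names follow the Fastly Terraform provider’s documented `fastly_service_vcl` resource, but check your provider version; the service name, domain, and backend address are placeholders.

```hcl
terraform {
  required_providers {
    fastly = {
      source = "fastly/fastly"
    }
  }
}

# Placeholder names and addresses; substitute your own domain and origin.
resource "fastly_service_vcl" "example" {
  name = "example-service"

  domain {
    name = "cdn.example.com"
  }

  backend {
    name    = "origin"
    address = "origin.example.com"
    port    = 443
    use_ssl = true
  }

  force_destroy = true
}
```

The CloudFront equivalent uses the `aws_cloudfront_distribution` resource, which is considerably more verbose, requiring origin, cache behavior, certificate, and geographic restriction blocks even for a basic distribution.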

The Drawbacks of a Closed Ecosystem

It’s important to note that AWS CloudFront, as an AWS product, works best with other AWS products, and doesn’t exactly play nice with competitor products. As an independent cloud services provider, Fastly is vendor agnostic and works with many other cloud providers, including AWS’s other products and Backblaze.

TLS (Transport Layer Security) Protocols

TLS, or transport layer security (formerly known as secure sockets layer, or SSL), is an encryption protocol used to protect website data in transit. Whenever you see the lock icon in your browser, you are visiting a website protected by TLS (HTTPS).

Fastly assigns a shared domain name to your CDN content. You can use the associated TLS certificate for free or bring your own TLS certificate and install it. Fastly offers detailed instructions and help guides so you can securely configure your content.

Amazon’s AWS CloudFront also assigns a unique URL for your CDN content. You can use an Amazon-issued certificate, the default TLS certificate installed on the server, or bring your own. If you use your own TLS certificate, you must follow the explicit instructions for key length and install it correctly on the server.

Pricing

Fastly

Fastly offers a free trial which includes $50 of traffic with pay-as-you-go bandwidth pricing after that. Bandwidth pricing is based on geographic location and starts at, for example, $0.12 per GB for the first 10TB for North America. The next 10TB is $0.08 per GB, and they charge $0.0075 per 10,000 requests. Fastly also offers tiered capacity-based pricing for edge cloud services, starting with its Essential product for small businesses, which includes 3TB of global delivery per month. Their Professional tier includes 10TB of global delivery per month, and their Enterprise tier is unlimited. They also offer add-on products for security and distributed applications.

AWS CloudFront

AWS CloudFront offers a free plan including 1TB of data transfer out, 10,000,000 HTTP or HTTPS requests, and 2,000,000 function invocations each month. However, customers needing more than the basic plan will have to consider the tiered pricing based on bandwidth usage. AWS CloudFront’s pricing starts at $0.085 per GB up to 10TB in North America. All told, there are seven pricing tiers spanning 10TB to more than 5PB.
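To make the tiered pricing concrete, here is a back-of-the-envelope calculation using only the North America Fastly rates quoted above ($0.12/GB for the first 10TB, $0.08/GB for the next 10TB, treating 1TB as 1,000 GB). This is a simplified sketch: real bills also include per-request fees, regional rate differences, and any free-tier credits.

```python
def tiered_bandwidth_cost(gb, tiers):
    """Walk down a price ladder: each tier is (size_in_gb, price_per_gb)."""
    cost = 0.0
    for size_gb, price in tiers:
        if gb <= 0:
            break
        chunk = min(gb, size_gb)  # bill only the portion that falls in this tier
        cost += chunk * price
        gb -= chunk
    return cost

# Fastly's North America pay-as-you-go rates quoted above.
fastly_na = [(10_000, 0.12), (10_000, 0.08)]

# 15TB of delivery: 10TB at $0.12 plus 5TB at $0.08.
print(f"${tiered_bandwidth_cost(15_000, fastly_na):,.2f}")  # -> $1,600.00
```

The same function works for CloudFront’s seven-tier ladder once you fill in its published rates, which is precisely why a two-tier scheme is easier to budget against than a seven-tier one.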

Our Take

When it comes to content delivery, AWS CloudFront can’t compete on total cost of ownership. Not only that, but Fastly’s pay-as-you-go pricing model with only two tiers is simpler than AWS CloudFront’s pricing with seven tiers. As with many AWS products, complexity demands configuration and management time. Customers tend to spend less time getting Fastly to work the way they want it to. With AWS CloudFront, customers also run the risk of getting locked in to the AWS ecosystem.

Our Final Take

Between the two CDNs, Fastly is the better choice for customers that manage and serve dynamic content and want to create personalized experiences for their end users without paying high fees. Fastly wins over AWS CloudFront on a few key points:

  • More price competitive for content delivery
  • Simpler pricing tiers
  • Vendor agnostic
  • Better caching
  • Easier image optimization
  • Real-time log streaming
  • More expensive, but better performing out-of-the-box WAF

Using a CDN with Cloud Storage

A CDN can greatly speed up your website load times, but there will still be times when a request will call the origin store. Having reliable and affordable origin storage is key when the cache doesn’t have the content stored. When you pair a CDN with origin storage in the cloud, you get the benefit of both scalability and speed.

The post Fastly vs. AWS CloudFront: How Do the CDNs Stack Up? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.
