If you run a bandwidth-intensive application like media streaming, game hosting, or an e-commerce platform, performance is probably top of mind. You need to deliver content to your users quickly and without errors to keep them happy. But which specific performance metrics matter for your use case?
As it turns out, you might think you need a Porsche to transport your data when what you really need and want is a trusty, reliable (still speedy!) Volvo.
In this post, we’re taking a closer look at performance metrics, when they matter, and some strategies that can improve performance, including content delivery networks (CDNs), range requests, and prefetching. When you’re assessing a cloud solution for application development, taking these factors into consideration can help you make the best decision for your business.
Performance Metrics: Time to First Byte
Time to first byte (TTFB) is the time between when a client sends a request for a page and when it receives the first byte of the response from the server. In other words, TTFB covers everything from the start of the request to the start of the response, including the DNS lookup, establishing the connection with a TCP handshake, and the SSL/TLS handshake if you’ve made the request over HTTPS.
TTFB helps identify pages that load slowly because of server-side processing that might be better handled with client-side scripting. Search engines also factor response speed into rankings, so websites that respond to requests faster and feel more usable tend to appear ahead of slower ones.
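If you want a rough feel for TTFB from where you sit, you can approximate it with a few lines of Python’s standard library. This is a minimal sketch; the URL is a placeholder, and the number you get reflects your own network conditions as much as the server’s.

```python
import time
import urllib.request

# Placeholder URL; point this at any page or object you want to measure.
URL = "https://example.com/index.html"

start = time.monotonic()
with urllib.request.urlopen(URL) as response:
    response.read(1)  # wait for the first byte of the response body
    ttfb = time.monotonic() - start

# The elapsed time includes DNS lookup, the TCP/TLS handshakes,
# and the server's own processing time.
print(f"Approximate TTFB: {ttfb * 1000:.1f} ms")
```

Browser developer tools report the same measurement per request, which makes it easy to compare two endpoints side by side.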
TTFB is a useful metric, but it doesn’t always tell the whole story, and it shouldn’t be the only metric you use when choosing a cloud storage solution. For example, when David Liu, Founder and CEO of Musify, a music streaming app, started his search for a new cloud storage provider, he had a specific TTFB benchmark in mind. He thought he absolutely had to meet that benchmark for a new storage solution to work for his use case. Upon further testing, however, he found that his initial benchmark was more aggressive than he actually needed. The performance he got by putting Cloudflare in front of his origin store in Backblaze B2 Cloud Storage more than met his needs and served his users well.
Optimizing Cloud Storage Performance
TTFB is the dominant method of measuring performance, but it can be affected by any number of factors: your location, your connection, the size of the data being sent, and so on. Fortunately, there are ways to improve it, including putting a content delivery network (CDN) in front of origin storage, using range requests, and prefetching.
Performance and Content Delivery Networks
A CDN helps speed up content delivery by caching content on edge servers close to your users, which means faster load times and reduced latency. For high-bandwidth use cases, a CDN can optimize media delivery.
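One quick way to confirm the CDN is doing its job is to check its cache status header. Here’s a minimal sketch against a hypothetical Cloudflare-fronted URL; Cloudflare reports cache status in a cf-cache-status header, and other CDNs expose similar headers.

```python
import urllib.request

# Hypothetical URL for a file served through a CDN sitting in front of origin storage.
URL = "https://cdn.example.com/media/trailer.mp4"

request = urllib.request.Request(URL, method="HEAD")
with urllib.request.urlopen(request) as response:
    # HIT means the edge cache served the request; MISS means the CDN
    # had to fetch the file from origin storage first.
    print("cf-cache-status:", response.headers.get("cf-cache-status", "not present"))
```

A MISS isn’t necessarily a problem; it usually just means the next request for the same file will be served from the edge.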
Companies like Kanopy, a media streaming service; Big Cartel, an e-commerce platform; and CloudSpot, a professional photo gallery platform, use a CDN between their origin storage in Backblaze B2 and their end users to great effect. Kanopy offers a library of 25,000+ titles to 45 million patrons worldwide, so latency and poor performance are not an option. “Video needs to have a quick startup time,” Kanopy’s Lead Video Software Engineer, Pierre-Antoine Tible, said. “With Backblaze over [our CDN] Cloudflare, we didn’t have any issues.”
For Big Cartel, hosting one million customer sites likewise demands high-speed performance. Big Cartel’s Technical Director, Lee Jensen, noted, “We had no problems with the content served from Backblaze B2. The time to serve files in our 99th percentile, including fully rendering content, was under one second, and that’s our worst case scenario.” The time to serve files in their 75th percentile was just 200 to 300 milliseconds, and that’s when content had to be pulled from origin storage in Backblaze B2 because it wasn’t already cached on the edge servers of their CDN, Fastly.
Range Requests and Performance
HTTP range requests let a server send only a portion of an HTTP message to a client. Partial requests are useful for large media files and for downloads with pause and resume functions, and they’re common among developers who concatenate many small files and store them as one big file. For example, if a user wants to skip to a specific clip or frame in a video, range requests mean the application doesn’t have to serve the whole file.
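To make that concrete, here’s a minimal Python sketch that asks for only the first megabyte of a (hypothetical) large video file using the standard Range header:

```python
import urllib.request

# Hypothetical URL for a large media file stored behind an HTTP endpoint.
URL = "https://cdn.example.com/media/feature-film.mp4"

# Request only the first 1 MiB instead of the entire object.
request = urllib.request.Request(URL, headers={"Range": "bytes=0-1048575"})

with urllib.request.urlopen(request) as response:
    chunk = response.read()
    # A server that honors range requests responds with 206 Partial Content
    # and reports which slice of the file it returned.
    print("Status:", response.status)
    print("Content-Range:", response.headers.get("Content-Range"))
    print(f"Received {len(chunk):,} bytes")
```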
Because the Backblaze B2 vault architecture separates files into shards, you get the same performance whether you request the whole file or just part of it in a range request. Rather than spending time learning how to optimize performance on a new platform or reworking code to fit frustrating limitations, developers moving to Backblaze B2 can keep using the code they’re already invested in.
Prefetching and Performance
Prefetching is a way to “queue up” data before it’s actually required, which reduces latency if that data is subsequently requested. When you’re using a CDN in front of your origin storage, prefetching means loading data, files, or content into the CDN’s cache before anyone asks for it.
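In practice, prefetching through a CDN can be as simple as requesting the content you expect to be popular before your users do. Here’s a minimal sketch; the URLs are hypothetical, and how long (and where) a warmed object stays cached depends on your CDN’s caching rules.

```python
import urllib.request

# Hypothetical objects expected to be in high demand soon.
POPULAR_OBJECTS = [
    "https://cdn.example.com/media/new-release-1080p.mp4",
    "https://cdn.example.com/media/trending-series-e01.mp4",
]

for url in POPULAR_OBJECTS:
    # Fetching through the CDN pulls each file from origin storage once and
    # leaves a cached copy at the edge for subsequent viewers.
    with urllib.request.urlopen(url) as response:
        response.read()  # drain the body so the full object gets cached
        print(url, "->", response.status)
```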
Video streaming service Kanopy uses prefetching for popular videos they expect will see high demand in certain regions. With some cloud storage providers, this would violate the terms of service because they would egress out more data than they store. But because Kanopy gets free egress between their origin store in Backblaze B2 and their CDN, Cloudflare, the initial download cost for prefetching is $0. (Backblaze also has partnerships with other CDN providers like Fastly and bunny.net to offer zero-cost egress.) The partnership means Kanopy doesn’t have to worry about running up egress charges, and they’re empowered to use prefetching to optimize their infrastructure.
Other Metrics to Consider When Assessing Cloud Performance
In addition to TTFB, there are a number of other factors to consider when assessing cloud performance, including availability, the provider’s service level agreements (SLAs), and durability.
Availability measures the percentage of time your data can be accessed. All data occasionally becomes unavailable due to regular operating procedures like system maintenance, but availability obviously matters when you’re serving content around the globe 24/7. Backblaze B2, for example, commits to 99.9% uptime with no cold delays; 99.9% uptime allows for no more than about 8.8 hours of downtime over a year. Commitments like uptime are usually outlined in a cloud provider’s SLA, an agreement that lists the performance metrics the cloud provider agrees to deliver.
Durability measures the likelihood that your data remains intact and uncorrupted over time. Object storage providers express durability as an annual percentage in nines: two nines before the decimal point and as many nines as warranted after it. For example, 11 nines of durability is expressed as 99.999999999%. This means the storage vendor is promising that your data will remain intact in their care, losing no more than 0.000000001% of your data in a year (in the case of 11 nines of annual durability).
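To put those nines in perspective, here’s a quick back-of-the-envelope calculation (the object count is hypothetical):

```python
# Rough expected annual loss for a given durability figure.
durability = 0.99999999999      # 11 nines
objects_stored = 1_000_000      # hypothetical: one million objects

expected_losses_per_year = objects_stored * (1 - durability)
print(f"Expected objects lost per year: {expected_losses_per_year:.0e}")
# ~1e-05, i.e., on average you would expect to wait on the order of
# 100,000 years before losing a single object out of a million.
```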
Ready to Get Started?
Understanding the different performance metrics that might impact your data can help when you’re evaluating cloud storage providers. Ready to get started with Backblaze B2? We offer the first 10GB free.