A little over a year ago, we announced general availability of Backblaze Cloud Replication, the ability to automatically copy data across buckets, accounts, or regions. There are several ways to use this service, but today we’re focusing on how to use Cloud Replication to replicate data between environments like testing, staging, and production when developing applications.
First we’ll talk about why you might want to replicate environments and how to go about it. Then, we’ll get into the details: there are some nuances that might not be obvious when you set out to use Cloud Replication in this way, and we’ll talk about those so that you can replicate successfully.
Other Ways to Use Cloud Replication
In addition to replicating between environments, there are two main reasons you might want to use Cloud Replication:
- Data Redundancy: Replicating data for security, compliance, and continuity purposes.
- Data Proximity: Bringing data closer to distant teams or customers for faster access.
Maintaining a redundant copy of your data sounds, well, redundant, but it is the most common use case for cloud replication. It supports disaster recovery as part of a broad cyber resilience framework, reduces the risk of downtime, and helps you comply with regulations.
The second reason (replicating data to bring it geographically closer to end users) has the goal of improving performance and user experience. We looked at this use case in detail in the webinar Low Latency Multi-Region Content Delivery with Fastly and Backblaze.
Four Levels of Testing: Unit, Integration, System, and Acceptance
The Most Interesting Man in the World may test his code in production, but most of us prefer to lead a somewhat less “interesting” life. If you work in software development, you are likely well aware of the various types of testing, but it’s useful to review them to see how different tests might interact with data in cloud object storage.
Let’s consider a photo storage service that stores images in a Backblaze B2 Bucket. There are several real-world Backblaze customers that do exactly this, including Can Stock Photo and CloudSpot, but we’ll just imagine some of the features that any photo storage service might provide that its developers would need to write tests for.
Unit Tests
Unit tests exercise the smallest components of a system. For example, our photo storage service will contain code to manipulate images in a B2 Bucket, so its developers will write unit tests to verify that each low-level operation completes successfully. A test for thumbnail creation might do the following (a code sketch follows the list):
- Directly upload a test image to the bucket.
- Run the “Create Thumbnail” function against the test image.
- Verify that the resulting thumbnail image has indeed been created in the expected location in the bucket with the expected dimensions.
- Delete both the test and thumbnail images.
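As a concrete illustration, here’s a minimal pytest sketch of such a test. The photo_service.images module, its create_thumbnail function, the thumbnail dimensions, and the test image path are all hypothetical stand-ins for whatever the real service provides; the bucket fixture is sketched in the next code example, the bucket operations come from the B2 Python SDK (b2sdk), and Pillow is used to check the thumbnail dimensions.

```python
import io

from PIL import Image

from photo_service.images import create_thumbnail  # hypothetical module under test

THUMBNAIL_SIZE = (128, 128)  # assumed thumbnail dimensions


def test_create_thumbnail(bucket):
    # "bucket" is a b2sdk Bucket object supplied by a pytest fixture (see below)
    with open("tests/data/test-image.jpg", "rb") as f:
        data = f.read()

    # 1. Upload a test image directly to the bucket
    bucket.upload_bytes(data, "test-image.jpg")

    # 2. Run the "Create Thumbnail" function against the test image
    create_thumbnail(bucket, "test-image.jpg", "thumbnails/test-image.jpg")

    # 3. Verify the thumbnail was created in the expected location
    #    and has the expected dimensions
    downloaded = bucket.download_file_by_name("thumbnails/test-image.jpg")
    buffer = io.BytesIO()
    downloaded.save(buffer)
    buffer.seek(0)
    assert Image.open(buffer).size == THUMBNAIL_SIZE

    # 4. Delete both the test and thumbnail images
    for file_name in ("test-image.jpg", "thumbnails/test-image.jpg"):
        file_version = bucket.get_file_info_by_name(file_name)
        bucket.delete_file_version(file_version.id_, file_name)
```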
A large application might have hundreds, or even thousands, of unit tests, and it’s not unusual for development teams to set up automation to run the entire test suite against every change to the system to help guard against bugs being introduced during the development process.
Typically, unit tests require a blank slate to work against, with test code creating and deleting files as illustrated above. In this scenario, the test automation might create a bucket, run the test suite, then delete the bucket, ensuring a consistent environment for each test run.
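One way to provide that blank slate, sketched below, is a pytest fixture that creates a uniquely named bucket before the tests run and deletes it afterwards; it also supplies the bucket object assumed by the unit test above. The session scope, naming scheme, and .env-based credentials are illustrative choices rather than requirements.

```python
import os
import uuid

import pytest
from dotenv import load_dotenv
from b2sdk.v2 import B2Api, InMemoryAccountInfo


@pytest.fixture(scope="session")
def bucket():
    # Authenticate with the application key ID and key from the environment
    load_dotenv()
    b2_api = B2Api(InMemoryAccountInfo())
    b2_api.authorize_account("production",
                             os.environ["B2_APPLICATION_KEY_ID"],
                             os.environ["B2_APPLICATION_KEY"])

    # Create a uniquely named, private bucket for this test run
    test_bucket = b2_api.create_bucket(f"test-{uuid.uuid4().hex}", "allPrivate")

    yield test_bucket

    # Tear down: delete every file version left behind, then the bucket itself
    for file_version, _ in test_bucket.ls(recursive=True, latest_only=False):
        test_bucket.delete_file_version(file_version.id_, file_version.file_name)
    b2_api.delete_bucket(test_bucket)
```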
Integration Tests
Integration tests bring together multiple components to test that they interact correctly. In our photo storage example, an integration test might combine image upload, thumbnail creation, and artificial intelligence (AI) object detection: all of the functions executed when a user adds an image to the photo storage service. In this case, the test code would do the following (sketched in code after the list):
- Run the “Add Image” procedure against a test image of a specific subject, such as a cat.
- Verify that the test and thumbnail images are present in the expected location in the bucket, the thumbnail image has the expected dimensions, and an entry has been created in the image index with the “cat” tag.
- Delete the test and thumbnail images, and remove the image’s entry from the index.
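A sketch of that integration test might look like the following. The add_image procedure and the index helpers are hypothetical placeholders for the service’s real code, the thumbnail dimension check is omitted for brevity (the unit test above shows how), and the bucket fixture is the same one sketched earlier.

```python
from photo_service.pipeline import add_image                         # hypothetical "Add Image" procedure
from photo_service.index import get_index_entry, delete_index_entry  # hypothetical index helpers


def test_add_image_tags_a_cat(bucket):
    with open("tests/data/cat.jpg", "rb") as f:
        data = f.read()

    # 1. Run the whole "Add Image" procedure: upload, thumbnail creation, AI tagging
    add_image(bucket, "cat.jpg", data)

    # 2. Verify the image and thumbnail are in the expected locations
    #    (get_file_info_by_name raises an exception, failing the test, if either is missing)
    bucket.get_file_info_by_name("cat.jpg")
    bucket.get_file_info_by_name("thumbnails/cat.jpg")

    #    ...and that the index entry carries the "cat" tag
    assert "cat" in get_index_entry("cat.jpg").tags

    # 3. Delete the test and thumbnail images and remove the index entry
    for file_name in ("cat.jpg", "thumbnails/cat.jpg"):
        file_version = bucket.get_file_info_by_name(file_name)
        bucket.delete_file_version(file_version.id_, file_name)
    delete_index_entry("cat.jpg")
```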
Again, integration tests operate against an empty bucket, since they test particular groups of functions in isolation, and require a consistent, known environment.
System Tests
The next level of testing, system testing, verifies that the system as a whole operates as expected. System testing can be performed manually by a QA engineer following a test script, but is more likely to be automated, with test software taking the place of the user. For example, the Selenium suite of open source test tools can simulate a user interacting with a web browser. A system test for our photo storage service might operate as follows (a Selenium sketch appears after the list):
- Open the photo storage service web page.
- Click the upload button.
- In the resulting file selection dialog, provide a name for the image, navigate to the location of the test image, select it, and click the submit button.
- Wait as the image is uploaded and processed.
- When the page is updated, verify that it shows that the image was uploaded with the provided name.
- Click the image to go to its details.
- Verify that the image metadata is as expected. For example, the file size and object tag match the test image and its subject.
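Here’s a rough Selenium sketch of that flow in Python (pytest style). The URL, element IDs, waits, and expected text are hypothetical; they depend entirely on how the service’s web UI is built. Note that Selenium can’t drive the browser’s native file dialog, so the test sends the file path directly to the file input element instead.

```python
import os

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def test_upload_image_via_web_ui():
    driver = webdriver.Chrome()
    try:
        # 1. Open the photo storage service web page (hypothetical URL)
        driver.get("https://photos.example.com")

        # 2. Click the upload button
        driver.find_element(By.ID, "upload-button").click()

        # 3. Provide a name, select the test image, and submit
        driver.find_element(By.ID, "image-name").send_keys("Test Image")
        driver.find_element(By.ID, "file-input").send_keys(
            os.path.abspath("tests/data/test-image.jpg"))
        driver.find_element(By.ID, "submit-button").click()

        # 4./5. Wait for upload and processing, then check the image appears by name
        image_link = WebDriverWait(driver, 30).until(
            EC.presence_of_element_located((By.LINK_TEXT, "Test Image"))
        )

        # 6./7. Open the image's details and check its metadata
        image_link.click()
        details = driver.find_element(By.ID, "image-details").text
        assert "Test Image" in details
    finally:
        driver.quit()
```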
When we test the system at this level, we usually want to verify that it operates correctly against real-world data, rather than a synthetic test environment. Although we can generate “dummy data” to simulate the scale of a real-world system, real-world data is where we find the wrinkles and edge cases that tend to result in unexpected system behavior. For example, a German-speaking user might name an image “Schloss Schönburg.” Does the system behave correctly with non-ASCII characters such as ö in image names? Would the developers think to add such names to their dummy data?
Acceptance Tests
The final testing level, acceptance testing, again involves the system as a whole. But, where system testing verifies that the software produces correct results without crashing, acceptance testing focuses on whether the software works for the user. Beta testing, where end-users attempt to work with the system, is a form of acceptance testing. Here, real-world data is essential to verify that the system is ready for release.
How Does Cloud Replication Fit Into Testing Environments?
Of course, we can’t just use the actual production environment for system and acceptance testing, since there may be bugs that destroy data. This is where Cloud Replication comes in: we can create a replica of the production environment, complete with its quirks and edge cases, against which we can run tests with no risk of destroying real production data. The term staging environment is often used in connection with acceptance testing, while test (or testing) environments are associated with unit, integration, and system testing.
Caution: Be Aware of PII!
Before we move on to look at how you can put replication into practice, it’s essential to determine whether you should be replicating the data at all, and what safeguards you should place on replicated data. To do that, you’ll need to consider whether it is, or contains, personally identifiable information (PII).
The National Institute of Standards and Technology (NIST) document SP 800-122 provides guidelines for identifying and protecting PII. In our example photo storage site, if the images include photographs of people that may be used to identify them, then that data may be considered PII.
In most cases, you can still replicate the data to a test or staging environment as necessary for business purposes, but you must protect it at the same level that it is protected in the production environment. Keep in mind that there are different requirements for data protection in different industries and different countries or regions, so make sure to check in with your legal or compliance team to ensure everything is up to standard.
In some circumstances, it may be preferable to use dummy data, rather than replicating real-world data. For example, if the photo storage site was used to store classified images related to national security, we would likely assemble a dummy set of images rather than replicating production data.
How Does Backblaze Cloud Replication Work?
To replicate data in Backblaze B2, you must create a replication rule via either the web console or the B2 Native API. The replication rule specifies the source and destination buckets for replication and, optionally, advanced replication configuration. The source and destination buckets can be located in the same account, different accounts in the same region, or even different accounts in different regions; replication works just the same in all cases. While standard Backblaze B2 Cloud Storage rates apply to replicated data storage, note that Backblaze does not charge service or egress fees for replication, even between regions.
It’s easier to create replication rules in the web console, but the API gives you access to two advanced features that the web console doesn’t currently expose (both appear in the sketch after this list):
- Setting a prefix to constrain the set of files to be replicated.
- Excluding existing files from the replication rule.
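To give a sense of what the API route involves, here’s a rough Python sketch that adds a source-side replication rule with b2_update_bucket, using both of the advanced options above. The IDs are placeholders, the destination bucket needs a corresponding asReplicationDestination configuration, and the replicationConfiguration field names shown here should be double-checked against the current B2 Native API documentation before you rely on them.

```python
import requests

# Values returned by b2_authorize_account (placeholders)
api_url = "https://apiNNN.backblazeb2.com"
auth_token = "ACCOUNT_AUTH_TOKEN"

replication_configuration = {
    "asReplicationSource": {
        "sourceApplicationKeyId": "SOURCE_APPLICATION_KEY_ID",
        "replicationRules": [
            {
                "replicationRuleName": "replicate-to-staging",
                "destinationBucketId": "DESTINATION_BUCKET_ID",
                "fileNamePrefix": "photos/",    # only replicate files under this prefix
                "includeExistingFiles": False,  # exclude files uploaded before the rule existed
                "isEnabled": True,
                "priority": 1,
            }
        ],
    }
}

# Apply the replication configuration to the source bucket
response = requests.post(
    f"{api_url}/b2api/v2/b2_update_bucket",
    headers={"Authorization": auth_token},
    json={
        "accountId": "ACCOUNT_ID",
        "bucketId": "SOURCE_BUCKET_ID",
        "replicationConfiguration": replication_configuration,
    },
)
response.raise_for_status()
print(response.json()["replicationConfiguration"])
```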
Don’t worry: this blog post provides a detailed explanation of how to create replication rules via both methods.
Once you’ve created the replication rule, files will begin to replicate at midnight UTC, and the initial replication can take several hours if you have a large quantity of data. Files uploaded once the rule is active are automatically replicated within a few seconds, depending on file size. You can check whether a given file has been replicated either in the web console or via the `b2_get_file_info` API call. Here’s an example using curl at the command line:
```
% curl -s -H "Authorization: ${authorizationToken}" \
    -d "{\"fileId\": \"${fileId}\"}" \
    "${apiUrl}/b2api/v2/b2_get_file_info" | jq .
{
  "accountId": "15f935cf4dcb",
  "action": "upload",
  "bucketId": "11d5cf096385dc5f841d0c1b",
  ...
  "replicationStatus": "pending",
  ...
}
```
In the example response, the `replicationStatus` field is `pending`; once the file has been replicated, it will change to `completed`.
Here’s a short Python script that uses the B2 Python SDK to retrieve replication status for all files in a bucket, printing the names of any files with pending status:
```python
import argparse
import os

from dotenv import load_dotenv
from b2sdk.v2 import B2Api, InMemoryAccountInfo
from b2sdk.replication.types import ReplicationStatus

# Load credentials from .env file into environment
load_dotenv()

# Read bucket name from the command line
parser = argparse.ArgumentParser(description='Show files with "pending" replication status')
parser.add_argument('bucket', type=str, help='a bucket name')
args = parser.parse_args()

# Create B2 API client and authenticate with key and ID from environment
b2_api = B2Api(InMemoryAccountInfo())
b2_api.authorize_account("production",
                         os.environ["B2_APPLICATION_KEY_ID"],
                         os.environ["B2_APPLICATION_KEY"])

# Get the bucket object
bucket = b2_api.get_bucket_by_name(args.bucket)

# List all files in the bucket, printing names of files that are pending replication
for file_version, folder_name in bucket.ls(recursive=True):
    if file_version.replication_status == ReplicationStatus.PENDING:
        print(file_version.file_name)
```
Note: Backblaze B2’s S3-compatible API (just like Amazon S3 itself) does not include replication status when listing bucket contents, so for this purpose it’s much more efficient to use the B2 Native API, which is what the B2 Python SDK uses under the hood.
You can pause and resume replication rules, again via the web console or the API. No files are replicated while a rule is paused. After you resume replication, newly uploaded files are replicated as before. Assuming that the replication rule does not exclude existing files, any files that were uploaded while the rule was paused will be replicated in the next midnight-UTC replication job.
How to Replicate Production Data for Testing
The first question is: does your system and acceptance testing strategy require read-write access to the replicated data, or is read-only access sufficient?
Read-Only Access Testing
If read-only access suffices, it might be tempting to create a read-only application key to test against the production environment, but be aware that testing and production make different demands on data. When we run a set of tests against a dataset, we usually don’t want the data to change during the test. That is: the production environment is a moving target, and we don’t want the changes that are normal in production to interfere with our tests. Creating a replica gives you a snapshot of real-world data against which you can run a series of tests and get consistent results.
It’s straightforward to create a read-only replica of a bucket: you just create a replication rule to replicate the data to a destination bucket, allow replication to complete, then pause replication. Now you can run system or acceptance tests against a static replica of your production data.
To later bring the replica up to date, simply resume replication and wait for the nightly replication job to complete. You can run the script shown in the previous section to verify that all files in the source bucket have been replicated.
Read-Write Access Testing
Alternatively, if, as is usually the case, your tests will create, update, and/or delete files in the replica bucket, there is a bit more work to do. Since testing changes the dataset you’ve replicated, there is no easy way to bring the source and destination buckets back into sync: changes may have happened in both buckets while your replication rule was paused.
In this case, you must delete the replication rule, replicated files, and the replica bucket, then create a new destination bucket and rule. You can reuse the destination bucket name if you wish since, internally, replication status is tracked via the bucket ID.
Always Test Your Code in an Environment Other Than Production
In short, we all want to lead interesting lives—but let’s introduce risk in a controlled way, by testing code in the proper environments. Cloud Replication lets you achieve that end while remaining nimble, which means you get to spend more time creating interesting tests to improve your product and less time trying to figure out why your data transformed in unexpected ways.
Now you have everything you need to create test and staging environments for applications that use Backblaze B2 Cloud Object Storage. If you don’t already have a Backblaze B2 account, sign up here to receive 10GB of storage, free, to try it out.