Since its launch in 2014 as a cloud-based data warehouse, Snowflake has evolved into a broad data-as-a-service platform addressing a wide variety of use cases, including artificial intelligence (AI), machine learning (ML), collaboration across organizations, and data lakes. Last year, Snowflake introduced support for S3-compatible cloud object stores, such as Backblaze B2 Cloud Storage. Now, Snowflake customers can access unstructured data such as images and videos, as well as structured and semi-structured data such as CSV, JSON, Parquet, and XML files, directly in the Snowflake platform, served up from Backblaze B2.
Why access external data from Snowflake, when Snowflake is itself a data as a service (DaaS) platform with a cloud-based relational database at its core? To put it simply, not all data belongs in Snowflake. Organizations use cloud object storage solutions such as Backblaze B2 as a cost-effective way to maintain both master and archive data, with multiple applications reading and writing that data. In this situation, Snowflake is just another consumer of the data. Besides, data storage in Snowflake is much more expensive than in Backblaze B2, raising the possibility of significant cost savings as a result of optimizing your data’s storage location.
Snowflake Basics
At Snowflake’s core is a cloud-based relational database. You can create tables, load data into them, and run SQL queries just as you can with a traditional on-premises database. Given Snowflake’s origin as a data warehouse, it is currently better suited to running analytical queries against large datasets than to serving a high volume of transactions as an operational database, but Snowflake Unistore’s hybrid tables feature (currently in private preview) aims to bridge the gap between transactional and analytical workloads.
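For example, the basics look much as they would in any SQL database. This is just a minimal sketch; the orders table and its columns are hypothetical names for illustration:

CREATE TABLE orders (
    orderkey   NUMBER,
    custkey    NUMBER,
    status     VARCHAR,
    totalprice NUMBER(12,2),
    orderdate  DATE
);

-- Load a row, then run a simple analytical query
INSERT INTO orders VALUES (1, 101, 'SHIPPED', 1250.00, '2024-03-01');

SELECT status, SUM(totalprice) AS total_sales
FROM orders
GROUP BY status;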
As a DaaS platform, Snowflake runs on your choice of public cloud—currently Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform—but insulates you from the details of managing storage, compute, and networking infrastructure. Having said that, sometimes you need to step outside the Snowflake box to access data that you are managing in your own cloud object storage account. I’ll explain exactly how that works in this blog post, but, first, let’s take a quick look at how we classify data according to its degree of structure, as this can have a big impact on your decision of where to store it.
Structured and Semi-Structured Data
Structured data conforms to a rigid data model. Relational database tables are the most familiar example—a table’s schema describes required and optional fields and their data types, and it is not possible to insert rows into the table that contain additional fields not listed in the schema. Aside from relational databases, file formats such as Apache Parquet, Optimized Row Columnar (ORC), and Avro can all store structured data; each file format specifies a schema that fully describes the data stored within a file. Here’s an example of a schema for a Parquet file:
% parquet meta customer.parquet

File path:  /data/customer.parquet
...
Schema:
message hive_schema {
  required int64 custkey;
  required binary name (STRING);
  required binary address (STRING);
  required int64 nationkey;
  required binary phone (STRING);
  required int64 acctbal;
  optional binary mktsegment (STRING);
  optional binary comment (STRING);
}
Semi-structured data, as its name suggests, is more flexible. File formats such as CSV, XML and JSON need not use a formal schema, since they can be self-describing. That is, an application can infer the structure of the data as it reads the file, a mechanism often termed “schema-on-read.”
This simple JSON example illustrates the principle. You can see how it’s possible for an application to build the schema of a product record as it reads the file:
{ "products" : [ { "name" : "Paper Shredder", "description" : "Crosscut shredder with auto-feed" }, { "name" : "Stapler", "color" : "Red" }, { "name" : "Sneakers", "size" : "11" } ] }
Accessing Structured and Semi-Structured Data Stored in Backblaze B2 from Snowflake
You can access data located in cloud object storage external to Snowflake, such as Backblaze B2, by creating an external stage. The external stage is a Snowflake database object that holds a URL for the external location, as well as configuration (e.g., credentials) required to access the data. For example:
CREATE STAGE b2_stage
  URL = 's3compat://your-b2-bucket-name/'
  ENDPOINT = 's3.your-region.backblazeb2.com'
  REGION = 'your-region'
  CREDENTIALS = (
    AWS_KEY_ID = 'your-application-key-id'
    AWS_SECRET_KEY = 'your-application-key'
  );
You can create an external table to query data stored in an external stage as if the data were inside a table in Snowflake, specifying the table’s columns as well as filenames, file formats, and data partitioning. Just like the external stage, the external table is a database object, located in a Snowflake schema, that stores the metadata required to access data stored externally to Snowflake, rather than the data itself.
Every external table automatically contains a single VARIANT type column, named value, that can hold arbitrary collections of fields. An external table definition for semi-structured data needs no further column definitions, only metadata such as the location of the data. For example:
CREATE EXTERNAL TABLE product
  LOCATION = @b2_stage/data/
  FILE_FORMAT = (TYPE = JSON)
  AUTO_REFRESH = false;
When you query the external table, you can reference elements within the value column, like this:
SELECT value:name FROM product WHERE value:color = 'Red';

+------------+
| VALUE:NAME |
|------------|
| "Stapler"  |
+------------+
Since structured data has a more rigid layout, you must define table columns (technically, Snowflake refers to these as “pseudocolumns”) corresponding to the fields in the data files, with each column expressed in terms of the value column. For example:
CREATE EXTERNAL TABLE customer (
  custkey    number  AS (value:custkey::number),
  name       varchar AS (value:name::varchar),
  address    varchar AS (value:address::varchar),
  nationkey  number  AS (value:nationkey::number),
  phone      varchar AS (value:phone::varchar),
  acctbal    number  AS (value:acctbal::number),
  mktsegment varchar AS (value:mktsegment::varchar),
  comment    varchar AS (value:comment::varchar)
)
LOCATION = @b2_stage/data/
FILE_FORMAT = (TYPE = PARQUET)
AUTO_REFRESH = false;
Once you’ve created the external table, you can write SQL statements to query the data stored externally, just as if it were inside a table in Snowflake:
SELECT phone FROM customer WHERE name = 'Acme, Inc.';

+----------------+
| PHONE          |
|----------------|
| "111-222-3333" |
+----------------+
The Backblaze B2 documentation includes a pair of technical articles that go further into the details, describing how to export data from Snowflake to an external table stored in Backblaze B2, and how to create an external table definition for existing structured data stored in Backblaze B2.
Accessing Unstructured Data Stored in Backblaze B2 from Snowflake
The term “unstructured,” in this context, refers to data, such as images, audio, and video, that cannot be defined in terms of a data model. You still need to create an external stage to access unstructured data located outside of Snowflake, but, rather than creating external tables and writing SQL queries, you typically access unstructured data from custom code running in Snowflake’s Snowpark environment.
Here’s an excerpt from a Snowflake user-defined function, written in Python, that loads an image file from an external stage:
from snowflake.snowpark.files import SnowflakeFile

# The file_path argument is a scoped Snowflake file URL to a file in the
# external stage, created with the BUILD_SCOPED_FILE_URL function.
# It has the form
# https://abc12345.snowflakecomputing.com/api/files/01b1690e-0001-f66c-...
def generate_image_label(file_path):

    # Read the image file
    with SnowflakeFile.open(file_path, 'rb') as f:
        image_bytes = f.readall()
    ...
In this example, the user-defined function reads an image file from an external stage, then runs an ML model on the image data to generate a label for the image according to its content. A Snowflake task using this user-defined function can insert rows into a table of image names and labels as image files are uploaded into a Backblaze B2 Bucket. You can learn more about this use case in particular, and loading unstructured data from Backblaze B2 into Snowflake in general, from the Backblaze Tech Day ‘23 session that I co-presented with Snowflake Product Manager Saurin Shah.
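To make the task idea a little more concrete, here is a rough sketch of my own, not code from the session. It assumes that generate_image_label has been registered as a UDF, that a directory table is enabled on the external stage, and that image_labels and my_warehouse are hypothetical names; a production task would also need to track which files it had already labeled:

-- Sketch only: image_labels and my_warehouse are hypothetical names
CREATE TASK label_new_images
  WAREHOUSE = my_warehouse
  SCHEDULE = '5 MINUTE'
AS
  -- Label each image file currently listed in the stage's directory table
  INSERT INTO image_labels (file_name, label)
  SELECT relative_path,
         generate_image_label(BUILD_SCOPED_FILE_URL(@b2_stage, relative_path))
  FROM DIRECTORY(@b2_stage);

Newly created tasks start out suspended, so you would run ALTER TASK label_new_images RESUME before the task begins executing on its schedule.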
Choices, Choices: Where Should I Store My Data?
Given that, currently, Snowflake charges at least $23/TB/month for data storage on its platform compared to Backblaze B2 at $6/TB/month, it might seem tempting to move your data wholesale from Snowflake to Backblaze B2 and create external tables to replace tables currently residing in Snowflake. There are, however, a couple of caveats to mention: performance and egress costs.
The same query on the same dataset will run much more quickly against tables inside Snowflake than the corresponding external tables. A comprehensive analysis of performance and best practices for Snowflake external tables is a whole other blog post, but, as an example, one of my queries that completes in 30 seconds against a table in Snowflake takes three minutes to run against the same data in an external table.
Similarly, when you query an external table located in Backblaze B2, Snowflake must download data across the internet. Data formats such as Parquet can make this very efficient, organizing data column-wise and compressing it to minimize the amount of data that must be transferred. But, some amount of data still has to be moved from Backblaze B2 to Snowflake. Downloading data from Backblaze B2 is free of charge for up to 3x your average monthly data footprint, then $0.01/GB for additional egress, so there is a trade-off between data storage cost and data transfer costs for frequently-accessed data.
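To put rough numbers on that trade-off using the pricing above: keeping 10TB of data in Backblaze B2 costs about $60/month and includes up to 30TB of downloads per month, whereas the same 10TB stored inside Snowflake costs at least $230/month. As long as your queries pull well under that 30TB across the wire each month, the external table approach is the cheaper option.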
Some data naturally lives on one platform or the other. Frequently accessed tables should probably be located in Snowflake. Media files, which might only ever need to be downloaded once to be processed by code running in Snowpark, belong in Backblaze B2. The gray area is large datasets that will only be accessed a few times a month, where the performance disparity is not an issue, and the amount of data transferred might fit into Backblaze B2’s free egress allowance. By understanding how you access your data, and doing some math, you’re better able to choose the right cloud storage tool for your specific tasks.