Use Python to Create an Application
The following procedures demonstrate how to create an application using the AWS SDK for Python.
Install the AWS SDK
You must have Python 3.7 or later. Use the following command to install the current version of the AWS SDK for Python (Boto3) using pip:
pip install boto3
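To confirm that the installation succeeded, you can optionally print the installed version from Python; this quick check is not part of the tutorial's application:
# Optional sanity check: print the installed Boto3 version
import boto3
print(boto3.__version__)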
Configure AWS (Python)
Follow this procedure to create a named profile for accessing Backblaze B2 Cloud Storage. A named profile lets you easily work with Backblaze B2 alongside other Amazon Simple Storage Service (Amazon S3)-compatible cloud object stores. You can also configure your default profile, set environment variables, or use any other configuration mechanism that Boto3 supports; a sketch of one alternative follows the configuration steps below.
If you don't have the AWS CLI installed, you can create a new AWS profile by creating or editing the AWS configuration files directly.
You can find the AWS credentials file at the following locations:
- ~/.aws/credentials on Linux, macOS, or Unix
- C:\Users\USERNAME\.aws\credentials on Windows
You can find the AWS configuration file at the following locations:
- ~/.aws/config on Linux, macOS, or Unix
- C:\Users\USERNAME\.aws\config on Windows
- Create the .aws directory and credentials file, if they do not already exist, and add the following section to the file, substituting your credentials.
[b2tutorial]
aws_access_key_id = <your_key_id>
aws_secret_access_key = <your_application_key>
- Create the configuration file if it does not already exist and add the following section.
[b2tutorial]
output = json
s3 =
    signature_version = s3v4
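As an alternative to a named profile, Boto3 can also read credentials from environment variables or accept them directly when you create a session. The following is a minimal sketch of passing the key ID and application key straight to boto3.session.Session; the placeholder values are illustrative and not part of this tutorial's app.py.
# Sketch: pass Backblaze B2 credentials directly to a Boto3 session instead of
# using a named profile. The values shown here are placeholders.
import boto3.session

b2session = boto3.session.Session(
    aws_access_key_id='<your_key_id>',
    aws_secret_access_key='<your_application_key>',
)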
List Existing Buckets (Python)
The simplest Amazon S3 action is 'List Buckets'. It requires no parameters and returns a list of all of the buckets within the account.
- Create a file called app.py with the following content:
import boto3.session
import os

# Change this to the endpoint from your bucket details, prefixed with "https://"
ENDPOINT_URL = 'https://<your endpoint>'

# Create a Boto3 Session with the tutorial profile
b2session = boto3.session.Session(profile_name='b2tutorial')

# Create a Boto3 Resource from the session, specifying Amazon S3 as the service, and our B2 endpoint
b2 = b2session.resource(service_name='s3', endpoint_url=ENDPOINT_URL)

# Get the list of buckets
buckets = b2.buckets.all()

# Iterate through the list, printing each bucket's name
print('Buckets in account:')
for bucket in buckets:
    print(bucket.name)
- Edit the value of the ENDPOINT_URL constant to match your endpoint. For example:
ENDPOINT_URL = 'https://s3.us-west-004.backblazeb2.com'
The app creates a Boto3 session with the profile and an Amazon S3 resource client from the session that specifies the endpoint. The app then calls the all() method on the resource's buckets collection to retrieve a list of Bucket objects. Finally, the app iterates through the list, printing each bucket's name.
- Run the application using the following command:
python app.py
An output similar to the following example is returned:
Buckets in account:
my-unique-bucket-name
Create a Private Bucket (Python)
You already created a public bucket in the Backblaze web UI. Follow this procedure to use the Amazon S3 'Create Bucket' action to create a private bucket programmatically.
- Add the following code at the bottom of app.py, and replace the bucket name with a unique name.
# Create a new private bucket. Replace the bucket name with your own.
bucket_name = 'another-unique-bucket-name'

try:
    print(f'\nTrying to create bucket: {bucket_name}')
    bucket = b2.create_bucket(Bucket=bucket_name, ACL='private')
    print(f'Success! Response is: {bucket}')
except b2.meta.client.exceptions.BucketAlreadyOwnedByYou:
    print(f'You already created {bucket_name}. \nCarrying on...')
    bucket = b2.Bucket(bucket_name)
except b2.meta.client.exceptions.BucketAlreadyExists:
    print(f'{bucket_name} already exists in another account.\nExiting.')
    exit(1)
The app calls the Backblaze B2 resource's create_bucket method with the bucket name and the canned ACL value private, and displays the resulting bucket object.
The bucket may already exist, in which case create_bucket raises an exception. The exception's class indicates whether the bucket is owned by your account. If so, the app creates a bucket object from the bucket name and continues to the next step; otherwise, the app exits with an error.
- Run the app again using the following command:
python app.py
An output similar to the following example is returned:
Buckets in account:
my-unique-bucket-name

Trying to create bucket: another-unique-bucket-name
Success! Response is: s3.Bucket(name='another-unique-bucket-name')
If the bucket already exists in another account, the following message is returned:
Buckets in account:
my-unique-bucket-name

Trying to create bucket: tester
tester already exists in another account.
Exiting.
- After the bucket is created, run the following command again:
python app.py
The following output is returned, indicating that the exception was handled:
Buckets in account:
another-unique-bucket-name
my-unique-bucket-name

Trying to create bucket: another-unique-bucket-name
You already created another-unique-bucket-name.
Carrying on...
- Return to the bucket listing in the Backblaze web UI and refresh the page. The new private bucket is listed.
Upload a File to a Bucket (Python)
In this section of the tutorial, you will upload a file to the private bucket using the Amazon S3 'Put Object' action.
- To upload a single file to your private bucket in Backblaze B2, add the following code to the bottom of app.py, and replace the path with the path of your file to upload.
# The key in B2 is set to the file name.
path_to_file = './myimage.png'

print(f'Uploading: {path_to_file}')
obj = bucket.put_object(Body=open(path_to_file, mode='rb'), Key=os.path.basename(path_to_file))

# Create a response dict with the values returned from B2
response = {attr: getattr(obj, attr) for attr in ['e_tag', 'version_id']}
print(f'Success! Response is: {response}')
This section of code calls the put_object method on the bucket that the app just created, with the file content and a key set to the file name from the specified path. Since the put_object method returns a Boto3 Object representing the file in Backblaze B2, rather than the Backblaze B2 response itself, the code extracts the ETag and VersionId values returned by Backblaze B2 and displays them.
- Run the app again using the following command:
python app.py
An output similar to the following example is returned:
Buckets in account:
another-unique-bucket-name
my-unique-bucket-name

Trying to create bucket: another-unique-bucket-name
You already created another-unique-bucket-name.
Carrying on...
Uploading: ./myimage.png
Success! Response is: {'e_tag': '"3de71fbae1459a1e084b091fedff7b52"', 'version_id': '4_zc34d68b13f96d7c87bf80413_f112be370c4da1c29_d20220725_m214852_c004_v0402009_t0031_u01658785732321'}
ETag and VersionId Output (Python)
The ETag value (represented in Boto3 as e_tag) identifies a specific version of the file's content. ETag is a standard HTTP header that is included when clients download files from Backblaze B2. ETag enables caches to be more efficient and save bandwidth because a web server does not need to resend a full response if the content was not changed.
VersionId (version_id) identifies a specific version of the file within Backblaze B2. If a file is uploaded to an existing key in a bucket, a new version of the file is stored even if the file content is the same.
To see the difference between ETag and VersionId, run the 'upload file' commands a second time and upload the same file content to the same bucket and key. The ETag is the same since the content hasn't changed, but a new VersionId is returned.
An output similar to the following example is returned:
Buckets in account:
another-unique-bucket-name
my-unique-bucket-name

Trying to create bucket: another-unique-bucket-name
You already created another-unique-bucket-name.
Carrying on...
Uploading: ./myimage.png
Success! Response is: {'e_tag': '"3de71fbae1459a1e084b091fedff7b52"', 'version_id': '4_zc34d68b13f96d7c87bf80413_f102dc31873d979a2_d20220725_m214855_c004_v0402009_t0015_u01658785735087'}
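If you want to confirm the two versions from code rather than the web UI, a sketch like the following lists the versions stored for the key. It assumes the bucket and path_to_file variables from app.py are still in scope; it is not part of the tutorial's app.
# Sketch: list the stored versions of the uploaded key. Each upload of the same
# key adds a version, so uploading the file twice produces two entries.
for version in bucket.object_versions.filter(Prefix=os.path.basename(path_to_file)):
    print(f'Key: {version.key}, VersionId: {version.id}')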
Use the put_object method to upload a single file. To upload multiple files, your application must build a list of files to upload and iterate through that list.
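For example, a minimal sketch of a multi-file upload might iterate over a local directory with pathlib; the './images' directory name is hypothetical.
# Sketch: upload every file in a local directory to the bucket, using each
# file's name as its key. The './images' directory is a placeholder.
from pathlib import Path

for path in Path('./images').iterdir():
    if path.is_file():
        print(f'Uploading: {path}')
        with path.open(mode='rb') as data:
            bucket.put_object(Body=data, Key=path.name)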
Browse Files (Python)
In the Backblaze web UI, navigate to your private bucket on the Browse Files page. Your file is displayed with a (2) next to the filename.
If you click the (2) and then click one of the file versions, you can see that the Fguid matches the VersionId that was returned when the file was created.
There is also no File Info for this file. The Backblaze web UI set the src_last_modified_millis attribute for the file that you uploaded earlier, but the app did not specify one when it uploaded this file.
Click one of the URLs to open it in the browser. You cannot access the file because it is in a private bucket. The S3-Compatible API returns the following XML-formatted error for the Amazon S3 URL.
<Error>
  <Code>UnauthorizedAccess</Code>
  <Message>bucket is not authorized: another-unique-bucket-name</Message>
</Error>
The Native API returns a similar, JSON-formatted error for the Native and Friendly URLs:
{ "code": "unauthorized", "message": "", "status": 401 }