AWS CLI
Learn how to use AWS CLI to interact with Filebase's S3-Compatible API.

What is AWS CLI?

AWS CLI, or Amazon Web Services Command Line Interface, is a Python-based command line tool developed by Amazon for transferring data to and from object storage services. It is one of the most commonly used CLI tools among IT system administrators, developers, and programmers. Even though the tool is developed by Amazon, you can use it with any S3-compatible object storage service, including Filebase, to manage your buckets and objects.
Because it is driven from the command line, it is also popular for automation: backup jobs, custom utilities, and scheduled tasks such as cron jobs can invoke it directly.
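As an illustration of that kind of automation, a crontab entry could sync a local folder to a Filebase bucket every night. The schedule, directory, and bucket name below are assumptions for the example, not part of Filebase's documentation:

```shell
# Hypothetical crontab entry: sync /var/backups to a Filebase bucket
# at 2:00 AM every day. Paths and bucket name are placeholders.
0 2 * * * aws --endpoint https://s3.filebase.com s3 sync /var/backups s3://filebase-bucket/backups
```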

Prerequisites:

You will need the AWS CLI installed, along with the Access Key ID and Secret Access Key from your Filebase account. The Access Key ID and Secret Access Key will be stored in the AWS CLI configuration file, but the API endpoint will need to be referenced with each command.

Configuration

1. First, configure AWS CLI to work with Filebase and your Filebase account. To do this, open a new terminal window. From there, run the command:

aws configure
This command will generate a series of prompts, asking for the Access Key ID and Secret Access Key. It will also ask for a region and an output format, though you can leave both of these fields blank by pressing enter when prompted for them.

2. After completing the prompts, you can begin interacting with the Filebase S3 API using the AWS CLI tool. You will not need to configure AWS CLI again as long as your Access Key ID and Secret Access Key do not change.

All AWS CLI commands will begin with aws --endpoint https://s3.filebase.com. The portion that follows this initial command determines what action is performed and on which bucket.
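Because the endpoint must accompany every command, a small shell function can prepend it for you. This is just a convenience sketch; the name fb is an assumption, not part of the AWS CLI:

```shell
# Wrapper that injects the Filebase endpoint into every AWS CLI call.
# The function name "fb" is arbitrary.
fb() {
  aws --endpoint https://s3.filebase.com "$@"
}
```

With this in your shell profile, fb s3 ls behaves like aws --endpoint https://s3.filebase.com s3 ls.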

Creating a New Bucket

To create a new bucket on Filebase using the AWS CLI, use the s3 mb s3://[bucket-name] command. For example, to create a new bucket called 'filebase-bucket':
aws --endpoint https://s3.filebase.com s3 mb s3://filebase-bucket
Bucket names must be unique across all Filebase users, be between 3 and 63 characters long, and can contain only lowercase characters, numbers, and dashes.
The terminal should return the line: make_bucket: filebase-bucket
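Those naming rules can be sanity-checked locally before calling the API. A minimal sketch; the function name is an assumption, and the server remains the authority on whether a name is valid and unique:

```shell
# Rough pre-check of the bucket naming rules described above:
# 3-63 characters, lowercase letters, numbers, and dashes only.
is_valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9-]{3,63}$'
}
```

For example, is_valid_bucket_name filebase-bucket succeeds, while names with uppercase letters, underscores, or fewer than 3 characters fail.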

Listing Buckets

To verify that the new bucket has been created, use the s3 ls command. The following command will list all buckets in your Filebase account:
aws --endpoint https://s3.filebase.com s3 ls

Listing the Content of a Bucket

To list the contents of a bucket, use the command s3 ls s3://[bucket-name]. For example, to list the contents of 'filebase-bucket':
aws --endpoint https://s3.filebase.com s3 ls s3://filebase-bucket
Since 'filebase-bucket' is a brand-new bucket with no files in it yet, nothing should be returned.

Uploading Files to a Bucket

To upload a single file, we can use the s3 cp [filename] s3://[bucket-name] command. For example, to upload a file called '1200.jpeg':
aws --endpoint https://s3.filebase.com s3 cp 1200.jpeg s3://filebase-bucket
Verify that the file has been uploaded by listing the contents of the bucket with the s3 ls command used previously:
aws --endpoint https://s3.filebase.com s3 ls s3://filebase-bucket
To verify that the file is available from the browser-based console as well, go to https://console.filebase.com.
To upload multiple files, use the s3 sync [folder name] s3://[bucket-name] command. For example, to upload the contents of a folder called 'test_folder', use the command:
aws --endpoint https://s3.filebase.com s3 sync test_folder s3://filebase-bucket
To verify that these files have been uploaded, use the command:
aws --endpoint https://s3.filebase.com s3 ls s3://filebase-bucket
Or look in the browser-based console interface.

Multipart Uploads

S3-compatible object storage services support splitting large files into separate chunks of data and uploading the chunks in parallel when the file size is above a certain threshold, called the multipart threshold. This is important because, in the event of a network outage or error, the file transfer can be resumed, and it improves network performance when transferring large files.
By default, the multipart threshold for AWS CLI is 8MB. This means that any file larger than 8MB will automatically be broken into chunks that are uploaded in parallel. To use this feature, simply upload a file that is larger than 8MB and AWS CLI takes care of the rest automatically.
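The threshold itself is adjustable through the AWS CLI's S3 configuration settings; for example, raising it to 16MB (the value here is just an illustration):

```shell
# Raise the multipart threshold for the default profile from 8MB to
# 16MB. This writes the setting to ~/.aws/config.
aws configure set default.s3.multipart_threshold 16MB
```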
Read more about multipart uploads in our dedicated guide.

Verifying Uploaded Files

To confirm that a file has been uploaded correctly, use the s3api head-object command to fetch the metadata of an object in a bucket. Included in this metadata is an 'entity tag', also known as an ETag. In Filebase, for files that were not uploaded as a multipart upload, the ETag is the same as the object's MD5 checksum value, which is common practice among S3-compatible object storage services.
By fetching the object's metadata through the Filebase S3 API, we can compare the ETag value to the MD5 value calculated on our local machine. If the two values match, the upload was successful and the Filebase service received our data properly.
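That comparison can be scripted. The sketch below assumes a Linux host with md5sum, the example bucket and file from this guide, and configured credentials; the helper name is an assumption:

```shell
# Compare an object's ETag (returned wrapped in quotes) with a local
# MD5 digest. The helper name "etags_match" is illustrative.
etags_match() {
  [ "$(printf '%s' "$1" | tr -d '"')" = "$2" ]
}

# Fetch the ETag from Filebase and the checksum of the local copy.
if command -v aws >/dev/null 2>&1 && [ -f 1201.jpg ]; then
  etag=$(aws --endpoint https://s3.filebase.com s3api head-object \
    --bucket filebase-bucket --key 1201.jpg --query ETag --output text)
  local_md5=$(md5sum 1201.jpg | awk '{print $1}')
  etags_match "$etag" "$local_md5" && echo "upload verified" || echo "checksum mismatch"
fi
```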
To view the metadata information for the file 1201.jpg, use the command:
aws --endpoint https://s3.filebase.com s3api head-object --bucket filebase-bucket --key 1201.jpg
To calculate the MD5 checksum of the file on your local machine, the command varies based on your operating system:
  • For macOS, the terminal command is: md5 1201.jpg
  • For Linux-based systems, the terminal command is: md5sum 1201.jpg
  • For Windows, the PowerShell command is: Get-FileHash -Algorithm MD5 1201.jpg
If the MD5 value matches the ETag value returned by the AWS CLI command, the data was received properly by Filebase.
This method of verification only works for files that were not uploaded in multiple parts. If the file is larger than the 8MB multipart threshold, it was uploaded as a multipart upload, and the ETag will be the UUID, not the MD5 checksum.

Deleting Single Files

To delete a file, use the s3 rm s3://[bucket_name]/[file_name] command. For example, to delete the file '1200.jpeg' from the bucket 'filebase-bucket':
aws --endpoint https://s3.filebase.com s3 rm s3://filebase-bucket/1200.jpeg
You will not be prompted to confirm the deletion, so there is no opportunity to think twice about the file you are removing. Be careful with similar file names when deleting files from buckets.
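If you want a safety net, the AWS CLI's --dryrun flag makes s3 rm print the operation it would perform without executing it. A sketch using the example names from this guide:

```shell
# Preview the delete: with --dryrun, the CLI prints the operation it
# would perform but nothing is actually removed.
aws --endpoint https://s3.filebase.com s3 rm --dryrun s3://filebase-bucket/1200.jpeg
```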

Deleting All Files In A Bucket

To delete all files in a bucket, use the s3 rm --recursive s3://[bucket_name]/ command. For example, to delete all files from the bucket 'filebase-bucket':
aws --endpoint https://s3.filebase.com s3 rm --recursive s3://filebase-bucket/
For more detailed information about deleting files using AWS CLI, see our dedicated guide.

Using AWS CLI to Generate a Pre-Signed S3 URL

To create a pre-signed URL with AWS CLI:
aws s3 --endpoint https://s3.filebase.com presign s3://filebase-bucket-name/file.name
This command should return a pre-signed URL. By default, the expiration time is one hour (3600 seconds).
You can specify a different expiration time by adding the --expires-in flag followed by the number of seconds.
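For example, to generate a URL that expires after ten minutes (600 seconds), reusing the placeholder bucket and file names from above:

```shell
# Pre-signed URL valid for 600 seconds instead of the default 3600.
aws s3 --endpoint https://s3.filebase.com presign s3://filebase-bucket-name/file.name --expires-in 600
```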
If you have any questions, please email [email protected].