Learn how to use AWS CLI to interact with Filebase's S3-compatible API.

What is AWS CLI?

AWS CLI, the Amazon Web Services Command Line Interface, is a command line tool developed by Amazon in Python for managing AWS services, including object storage. It is one of the most commonly used CLI tools among IT system administrators, developers, and programmers. Although the tool is developed by Amazon, you can use it with any object storage service that offers an S3-compatible API, including Filebase, to manage your buckets and objects.

Since this tool is driven from the command line, it is popular because it can easily be invoked from automation scripts, backup jobs, and scheduled tasks such as cron jobs.
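As a sketch of that kind of automation, a crontab entry could run a nightly sync to a Filebase bucket. The local path and bucket name below are placeholder examples:

```shell
# Runs every night at 02:00; /var/backups and filebase-bucket are
# placeholder values -- substitute your own path and bucket name.
0 2 * * * aws --endpoint https://s3.filebase.com s3 sync /var/backups s3://filebase-bucket
```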


The Access Key ID and Secret Access Key will be stored in the AWS CLI configuration file, but the API endpoint (https://s3.filebase.com) will need to be passed with each command.


1. First, configure AWS CLI to work with Filebase and your Filebase account. To do this, open a new terminal window. From there, run the command:

aws configure

This command will generate a series of prompts, which should be filled out as such:

  • Access Key ID: Filebase Access Key

  • Secret Access Key: Filebase Secret Key

  • Region: us-east-1

  • Output Format: Optional

2. After completing the prompts, you can begin interacting with the Filebase S3 API using the AWS CLI tool. You will not need to configure AWS CLI again as long as your Access Key ID and Secret Access Key do not change.
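After the prompts are completed, aws configure writes two small files under ~/.aws. As a sketch (with placeholder key values), they look like this:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR-FILEBASE-ACCESS-KEY
aws_secret_access_key = YOUR-FILEBASE-SECRET-KEY

# ~/.aws/config
[default]
region = us-east-1
```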

All AWS CLI commands will begin with aws --endpoint https://s3.filebase.com. The portion that follows this initial command determines what action is to be performed and on which bucket.

Creating a New Bucket

To create a new bucket on Filebase using the AWS CLI, use the command:

aws --endpoint https://s3.filebase.com s3 mb s3://[bucket-name]

For example, to create a new bucket called 'filebase-bucket':

aws --endpoint https://s3.filebase.com s3 mb s3://filebase-bucket

Bucket names must be unique across all Filebase users, be between 3 and 63 characters long, and can contain only lowercase characters, numbers, and dashes.

The terminal should return the line:

make_bucket: filebase-bucket
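As an illustration of the naming rules above, a small shell check could look like the following. This is only a sketch; valid_bucket_name is a hypothetical helper, not part of AWS CLI:

```shell
# Succeeds only if the name is 3-63 characters long and contains
# only lowercase letters, numbers, and dashes, per the rules above.
valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9-]{3,63}$'
}

valid_bucket_name "filebase-bucket" && echo "valid"    # prints "valid"
valid_bucket_name "Bad_Name" || echo "rejected"        # prints "rejected"
```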

Listing Buckets

The following command will list all buckets in your Filebase account:

aws --endpoint https://s3.filebase.com s3 ls

Listing the Content of a Bucket

To list the contents of a bucket, use the command:

aws --endpoint https://s3.filebase.com s3 ls s3://[bucket-name]

For example, to list the contents of 'filebase-bucket':

aws --endpoint https://s3.filebase.com s3 ls s3://filebase-bucket

Uploading A Single File

To upload a single file, use the command:

aws --endpoint https://s3.filebase.com s3 cp [filename] s3://[bucket-name]

For example, to upload a file called '1200.jpeg' to the bucket 'filebase-bucket':

aws --endpoint https://s3.filebase.com s3 cp 1200.jpeg s3://filebase-bucket

Verify that the file has been uploaded by listing the contents of the bucket with the s3 ls command used previously:

aws --endpoint https://s3.filebase.com s3 ls s3://filebase-bucket

You can also verify that the file is available by opening the bucket in the Filebase web console.

Uploading Multiple Files

To upload multiple files, use the command:

aws --endpoint https://s3.filebase.com s3 sync [folder name] s3://[bucket-name]

For example, to upload the contents of a folder called 'test_folder', use the command:

aws --endpoint https://s3.filebase.com s3 sync test_folder s3://filebase-bucket

To verify that these files have been uploaded, use the command:

aws --endpoint https://s3.filebase.com s3 ls s3://filebase-bucket

Or navigate to the bucket through the web console dashboard.

Multipart Uploads

S3-compatible object storage services support splitting a large file into separate chunks of data and uploading the chunks in parallel when the file size is above a certain threshold, called the multipart threshold. This is important because, in the event of a network outage or error, the file transfer can be resumed, and uploading chunks in parallel improves transfer performance.

By default, the multipart threshold for AWS CLI is 8MB. This means that any file larger than 8MB will automatically be broken into chunks that are uploaded in parallel. To use this feature, simply upload a file larger than 8MB in size and AWS CLI takes care of the rest automatically.
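The 8MB default can be tuned through AWS CLI's S3 settings in ~/.aws/config. As a sketch, the values below (64MB and 16MB are arbitrary examples) raise the threshold and set the chunk size:

```ini
[default]
s3 =
  multipart_threshold = 64MB
  multipart_chunksize = 16MB
```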

Read more in-depth about multipart uploads here:

What is Multipart Upload?

Verifying Uploaded Files

To verify a file's metadata and confirm it has been uploaded, AWS CLI provides the s3api head-object command, which fetches metadata about an object stored in a bucket. Included in this metadata is an 'entity tag', also known as an ETag. In Filebase, for files that were not uploaded with a multipart upload, the ETag is the same as the object's MD5 checksum value, which is common practice among S3-compatible object storage services.

By fetching the file object’s metadata using the Filebase S3 API, we can compare the ETag value, which is the same as the MD5 value, to the MD5 value calculated on our local machine. Ideally, these two values will match and we can confirm that our upload was successful and that the Filebase service received our uploaded data properly.

To view the metadata information for the file 1201.jpg, use the command:

aws --endpoint https://s3.filebase.com s3api head-object --bucket filebase-bucket --key 1201.jpg

Take note of the ETag value.

To calculate the MD5 checksum on your local machine, the command varies based on what operating system your local host is running:

  • For macOS, the terminal command is: md5 1201.jpg

  • For Linux-based systems, the terminal command is: md5sum 1201.jpg

  • For Windows, the PowerShell command is: Get-FileHash -Algorithm MD5 1201.jpg

If the MD5 checksum value matches the ETag value from the AWS CLI command, the data was received properly by Filebase.

This method of verification only works for files that were not uploaded in multiple parts. If the file is larger than 8MB, it was uploaded as a multipart upload, and the ETag will not be the file's MD5 checksum.
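The comparison above can be scripted. The sketch below uses a hypothetical helper, compare_etag, which takes a filename and the ETag string copied from the head-object output and reports whether they match:

```shell
# Compare a local file's MD5 checksum to an ETag copied from
# `aws s3api head-object` output. Works only for non-multipart uploads.
compare_etag() {
  file="$1"
  # ETags are returned wrapped in double quotes; strip them first.
  etag=$(printf '%s' "$2" | tr -d '"')
  local_md5=$(md5sum "$file" | awk '{print $1}')
  if [ "$local_md5" = "$etag" ]; then
    echo "match"
  else
    echo "mismatch"
  fi
}
```

On macOS, replace md5sum with md5 -q, which prints only the hash value.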

Downloading A Single File

To download a single file, use the command:

aws --endpoint https://s3.filebase.com s3 cp s3://[bucket-name]/[file-name] /path/to/download/filename

For example, to download a file called '1200.jpeg' from the bucket 'filebase-bucket':

aws --endpoint https://s3.filebase.com s3 cp s3://filebase-bucket/1200.jpeg /Users/Filebase/Downloads/1200.jpeg

Downloading Folders

To download a folder, use the command:

aws --endpoint https://s3.filebase.com s3 cp --recursive s3://[bucket-name]/[folder name] /path/to/download/folder

For example, to download the contents of a folder called 'test_folder', use the command:

aws --endpoint https://s3.filebase.com s3 cp --recursive s3://filebase-bucket/test_folder /Users/Filebase/Downloads/new-folder

Deleting Single Files

To delete a file, use the command:

aws --endpoint https://s3.filebase.com s3 rm s3://[bucket_name]/[file_name]

Deleting All Files In A Bucket

To delete all files in a bucket, use the command:

aws --endpoint https://s3.filebase.com s3 rm --recursive s3://[bucket_name]/

For example, to delete all files from the bucket 'filebase-bucket':

aws --endpoint https://s3.filebase.com s3 rm --recursive s3://filebase-bucket/

For more detailed information about deleting files using AWS CLI, see our dedicated guide below:

How To Delete Data with AWS CLI

Using AWS CLI to generate a pre-signed S3 URL

To create a pre-signed URL with AWS CLI, use the following command syntax:

aws --endpoint https://s3.filebase.com s3 presign s3://[bucket-name]/[object-name]

This command should return a pre-signed URL. By default, the expiration time is one hour.

You can specify a different expiration time by adding the flag --expires-in followed by the number of seconds.
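As a sketch of computing a custom expiry window (the bucket and object name reuse the examples from earlier in this guide), a 24-hour pre-signed URL could be generated with:

```shell
# --expires-in is specified in seconds; compute 24 hours' worth.
EXPIRY=$((24 * 60 * 60))   # 86400 seconds

aws --endpoint https://s3.filebase.com s3 presign \
    s3://filebase-bucket/1200.jpeg --expires-in "$EXPIRY"
```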

If you have any questions, please join our Discord server or send us an email.
